Article

CNN-Based Vehicle Target Recognition with Residual Compensation for Circular SAR Imaging

Rongchun Hu, Zhenming Peng, Juan Ma and Wei Li
1 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 610054, China
3 School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
4 School of Science, Southwest University of Science and Technology, Mianyang 621010, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(4), 555; https://doi.org/10.3390/electronics9040555
Submission received: 11 February 2020 / Revised: 13 March 2020 / Accepted: 24 March 2020 / Published: 26 March 2020
(This article belongs to the Section Microwave and Wireless Communications)

Abstract:
The contour thinning algorithm is an imaging algorithm for circular synthetic aperture radar (SAR) that can obtain clear target contours and has been successfully used for circular SAR (CSAR) target recognition. However, the contour thinning imaging algorithm loses some details when thinning the contour, which needs to be improved. This paper presents an improved contour thinning imaging algorithm based on residual compensation. In this algorithm, the residual image is obtained by subtracting the contour thinning image from the traditional backprojection image. Then, the compensation information is extracted from the residual image by repeatedly using the gravitation-based speckle reduction algorithm. Finally, the extracted compensation image is superimposed on the contour thinning image to obtain a compensated contour thinning image. The proposed algorithm is demonstrated on the Gotcha dataset. The convolutional neural network (CNN) is used to recognize the target image. The experimental results show that the image after compensation has a higher target recognition accuracy than the image before compensation.

1. Introduction

Because of its all-day, all-weather imaging capability, synthetic aperture radar (SAR) has been widely used in military and civilian applications in recent years. SAR systems can be classified by their detection mode. One mode spans a large azimuth angle during data acquisition and is called wide-angle SAR (WSAR). If the radar continuously illuminates the same ground area during acquisition and the azimuth span is large enough that the flight track forms a circle, the system is called circular SAR (CSAR); CSAR is thus a special case of WSAR. Research based on SAR and CSAR includes time-frequency analysis, 2D/3D imaging, digital elevation models (DEM), target detection and recognition, etc. [1,2,3,4,5,6,7,8,9,10,11].
Research into CSAR began in the early 1990s [12,13,14]. Soumekh first proposed the imaging mode and echo signal time-domain model of CSAR in 1996, together with a CSAR imaging algorithm based on wavefront reconstruction [15]. Subsequently, more and more research has been carried out on CSAR. Many of the datasets used in these studies are from the Air Force Research Laboratory (AFRL). AFRL has released several experimental and simulation datasets for WSAR and CSAR, together with the associated challenge problems [16,17,18,19]. In addition to the datasets released by AFRL, some researchers and organizations have conducted their own CSAR experiments [20,21]. Within CSAR research, imaging is an important area. Imaging algorithms for CSAR include algorithms based on backprojection, wavefront reconstruction, compressed sensing, and others, and many researchers have proposed different imaging algorithms. Hong and Lin et al. carried out a series of studies on CSAR imaging and achieved good results [22,23,24,25,26,27,28]. Kou et al. carried out a series of studies on geosynchronous CSAR imaging and analyzed the effects of orbit errors, tropospheric effects at L-band, and high sidelobes on imaging [29,30,31]. Yuan et al. proposed a method for reconstructing SAR images built upon the backprojection algorithm using multiple sub-aperture images to attain both greater processing efficiency and improved image quality [32]. Chen et al. proposed a processing strategy for the 3D reconstruction of vehicles that only needs single-pass single-polarization CSAR data [33]. Among these imaging algorithms, backprojection is the most widely used. Much of the related work builds on backprojection imaging, and many imaging algorithms are modifications of it. Ref. [34] proposed a contour thinning imaging algorithm based on modulus stretch; this algorithm is an improvement on the backprojection algorithm and obtains a thinner contour than traditional backprojection imaging. In recent years, research on CSAR imaging has developed further, with many innovative results [35,36,37,38].
In addition to imaging, target recognition is also an important aspect of CSAR research. In the early 1990s, researchers began to study automatic target recognition (ATR) with SAR [39,40,41], and SAR ATR remains an active research area. Due to the lack of experimental data, early studies on target recognition for CSAR and WSAR were few. The Air Force Research Laboratory (AFRL) released a dataset in 2007. The dataset, known as the Gotcha dataset, was collected by airborne SAR during circular flight and was the first CSAR dataset; AFRL also released challenge problems corresponding to the dataset [16]. The following year, Dungan et al. proposed a target recognition algorithm based on point pattern matching, and in the following years they successively proposed several further target recognition algorithms [42,43,44], including point pattern matching-based and pyramid matching-based algorithms. On relatively simple data subsets, these algorithms achieved good recognition results. In 2012, AFRL released the Target Discrimination Research Challenge and the corresponding dataset. This dataset is a subset of the Gotcha dataset, trimmed from a large scene and including 33 civilian vehicles and some auxiliary reflectors [19]. Since then, more researchers have begun to study target recognition with WSAR [45,46,47,48,49,50,51,52,53,54]. These target recognition algorithms, including feature set and template matching-based, manifold learning-based, and deep learning-based algorithms, have achieved good results.
Although there has been a great deal of research into SAR ATR, there is still little research on vehicle target recognition for wide-angle SAR, and the existing recognition algorithms can be improved in terms of both efficiency and accuracy. Therefore, the study of wide-angle SAR target recognition is still meaningful. Combining imaging and target recognition is also a research direction in SAR and other radar detection fields, and the combination of target detection and imaging has also appeared in infrared target detection research [55,56,57,58,59,60,61,62]. In previous work, the authors proposed a contour thinning imaging algorithm to improve target recognizability [34]; the algorithm combines imaging and target recognition. Based on contour thinning imaging, the authors also proposed a target recognition algorithm using feature set matching [53]. The contour thinning imaging algorithm significantly improved the target's recognizability and achieved good results in vehicle recognition. This paper improves the contour thinning imaging algorithm. Although the contour thinning algorithm can obtain a clearer target contour, it still has disadvantages: after the contour is thinned, the target in the image loses some detailed information; in particular, block areas are obviously suppressed. This paper is mainly aimed at improving the contour thinning imaging algorithm so that it loses less detailed information while maintaining a thin contour, thereby increasing the target's recognizability. In addition to imaging, this paper uses a convolutional neural network (CNN) to perform target recognition on the improved images. The recognition results are used to verify whether the improved imaging algorithm is able to improve the accuracy of target recognition.

2. Imaging Algorithm

2.1. Analysis of Contour Thinning Imaging

The contour thinning algorithm was proposed in [34] and has been used for vehicle target recognition with CSAR [53]. The algorithm is based on the backprojection algorithm. Its core idea is to stretch the modulus during the projection superposition process, highlighting areas with a high modulus and suppressing areas with a low modulus, so as to obtain a contour thinning image. The main steps of the algorithm are given below [34].
The echo data received by the radar is a function of slow time τ_n and receiving frequency f_k, denoted by S(f_k, τ_n), where τ_n is the n-th slow-time sample and f_k is the k-th frequency sample.
The IFFT is performed on S(f_k, τ_n) received at each τ_n to obtain an inverse transform sequence. The FFTshift(·) function is then used to shift the zero-frequency point to the middle of the sequence:
s_0(m, \tau_n) = \mathrm{FFTshift}\{\mathrm{IFFT}[S(f_k, \tau_n)]\}   (1)
Let ΔR(r, τ) denote the difference between the distance from the origin to the radar and the distance from the target at location r to the radar at time τ. Linear interpolation is performed on s_0(m, τ_n) to obtain the estimated values of the corresponding IFFT sequence at all ΔR(r, τ_n) points, and the sub-image data at each time τ_n is obtained by multiplying the interpolated data by the compensation term:
s_{int}(r, \tau_n) = \mathrm{Interp}[s_0(m, \tau_n)] \cdot \exp\!\left(+j\,\frac{4\pi f_0 \Delta R(r, \tau_n)}{c}\right),   (2)
S_{int}(\tau_n) = \{ s_{int}(r_{x,y}, \tau_n) \}\big|_{x,y \in [1, L]},   (3)
where L represents the side length of the imaging scene.
Let θ denote the size of the synthetic aperture of the SAR. All sub-images within the range of θ are superimposed, and the function ψ(·) is then used to stretch the modulus of each sub-aperture image. The final image I is obtained by superimposing all the sub-aperture images:
I = \sum_{\theta_i} \psi\!\left( \sum_{\tau_n \in \theta_i} S_{int}(\tau_n) \right)   (4)
\psi(x) = \begin{cases} k_1 x, & |x| \ge T \\ k_2 x, & |x| < T \end{cases}   (5)
where k_1 and k_2 denote the enhancement coefficient and the inhibition coefficient, respectively, and T denotes the threshold value. Ref. [34] shows that the empirical values of k_1, k_2 and T are 1.2, 0.1 and 0.9, respectively.
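As an illustration of the stretch step, the following Python/NumPy sketch applies ψ(·) to sub-aperture sums in the manner of Equations (4) and (5). It is a minimal sketch rather than the authors' implementation: the array layout, the helper names, and the assumption that the sub-aperture modulus has been normalized to [0, 1] (so that T = 0.9 is meaningful) are illustrative choices.

```python
import numpy as np

def modulus_stretch(x, k1=1.2, k2=0.1, T=0.9):
    """Sketch of Eq. (5): amplify pixels whose modulus is at least T and
    suppress the rest. x is a complex sub-aperture image whose modulus is
    assumed to have been normalized to [0, 1]."""
    gain = np.where(np.abs(x) >= T, k1, k2)
    return gain * x

def contour_thinning_image(sub_images, aperture_ids):
    """Sketch of Eq. (4): sum the per-pulse sub-images inside each
    sub-aperture, stretch the modulus, then sum over all sub-apertures.
    sub_images   : (N, L, L) complex array, one sub-image per slow time
    aperture_ids : length-N array giving the sub-aperture index of each pulse
    """
    I = 0.0
    for a in np.unique(aperture_ids):
        sub_sum = sub_images[aperture_ids == a].sum(axis=0)
        I = I + modulus_stretch(sub_sum)
    return I
```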
Figure 1 shows example imaging results for three vehicle models. Figure 1a–c shows the traditional backprojection results for the Chevrolet Impala LT, Mitsubishi Galant ES, and Toyota Highlander, and Figure 1d–f shows the corresponding contour thinning results. The value of θ during imaging is 10°. As can be seen from the figure, the contour thinning results are clearer than the traditional backprojection results, but detailed information is also clearly lost, as indicated by the red circle annotations.
Figure 2 is a schematic of CSAR imaging. The missing details in the contour thinning image correspond to the highly reflective parts at the front or rear of the vehicle, as shown in the left image in Figure 2. Obviously, not all vehicles have the same highly reflective contour at the front or rear. Therefore, the loss of this part is likely to reduce the recognizability of vehicle targets. The first task of this paper is to recover this missing part and compensate for it in the contour thinning image.

2.2. Residual Compensation

To retrieve the information lost during contour thinning imaging, compensation is considered. Because the image obtained by the backprojection algorithm can well restore the scattering characteristics of the target, in this paper we refer to the image obtained by the traditional backprojection algorithm as the original image. The goal of compensation is to make the thinning contours correspond to all possible contours on the original image, and add the missing details to the thinning image. Let Iorg denote the original image. Let Ithin denote the contour thinning image. The difference between Iorg and Ithin is denoted by Ires, and is given by
I_{res} = \mathrm{abs}(I_{org} - I_{thin}) =
\begin{bmatrix}
|I_{org}(1,1) - I_{thin}(1,1)| & |I_{org}(1,2) - I_{thin}(1,2)| & \cdots & |I_{org}(1,L) - I_{thin}(1,L)| \\
|I_{org}(2,1) - I_{thin}(2,1)| & |I_{org}(2,2) - I_{thin}(2,2)| & \cdots & |I_{org}(2,L) - I_{thin}(2,L)| \\
\vdots & \vdots & \ddots & \vdots \\
|I_{org}(L,1) - I_{thin}(L,1)| & |I_{org}(L,2) - I_{thin}(L,2)| & \cdots & |I_{org}(L,L) - I_{thin}(L,L)|
\end{bmatrix}   (6)
where L denotes the pixel side length of the image.
Figure 3 shows examples of residual images for different vehicle models, corresponding to the differences between Figure 1a,d, Figure 1b,e, and Figure 1c,f, respectively.
Obviously, the missing details are contained in the residual image Ires. How to extract the effective details from Ires while eliminating noise is the problem to be studied. Suppose the function that achieves this is represented by Φ(·); the processed image can then be expressed as
I_{cps} = \Phi(I_{res})   (7)
When the compensation image Icps is obtained, it is superimposed on the contour thinning image Ithin to obtain the final compensated contour thinning image Ifin, which is given by
I_{fin} = I_{thin} + I_{cps}   (8)
In the compensation process, the most important step is the selection of Φ(·). There are many ways to obtain Φ(·); for example, a method based on compressed sensing can be used to obtain the best signal under certain conditions. However, in this paper the main contour of the target has already been obtained, and the detailed information we want to extract is only auxiliary information in the residual images. Therefore, after weighing the computational complexity against the benefit, we prefer a simple image enhancement algorithm as the Φ(·) function. Returning to the goal of Φ(·): it should preserve and enhance larger, brighter areas in the residual image while suppressing smaller, darker areas. This is a common speckle reduction problem in SAR image processing, and several speckle reduction algorithms can be used. In this paper, an algorithm with good performance in different scenarios, the gravitation-based speckle reduction algorithm [53,63,64], is used as the Φ(·) function. The gravitation-based speckle reduction algorithm is calculated as follows [53]:
\Phi(I(i,j)) = F(i,j) + G(i,j), \quad F(i,j) = m\,I(i,j)^2, \quad G(i,j) = \sum_{r \le R} \frac{m\,I(i,j)\,I(k,l)}{r^2},   (9)
where I(i, j) denotes the brightness of the pixel in the i-th row and j-th column of image I, and r = \sqrt{(i-k)^2 + (j-l)^2} is the distance between (k, l) and (i, j). The radius of gravitational action is denoted by R, and m is the gravitation coefficient. G(i, j) denotes the gravitational force exerted by the surrounding points on (i, j), and F(i, j) denotes the internal stress of point (i, j) itself. Ref. [53] shows that the empirical values of R and m are 10 and 1, respectively.
In practice, to achieve different degrees of speckle reduction, Φ(·) can be applied iteratively several times, denoted by Φ^(q)(·), where q is the number of iterations. For example:
\Phi^{(1)}(I(i,j)) = \Phi(I(i,j)), \quad \Phi^{(2)}(I(i,j)) = \Phi(\Phi(I(i,j))), \quad \ldots, \quad \Phi^{(n)}(I(i,j)) = \underbrace{\Phi(\Phi(\cdots \Phi(I(i,j)) \cdots))}_{\Phi(\cdot)\ \text{iterated}\ n\ \text{times}}   (10)
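A minimal Python sketch of the gravitation-based operator of Equation (9) and its iterated form Φ^(q) of Equation (10) is given below. The direct double loop, the zero-padding at the image border, and the function names are illustrative assumptions rather than the implementation used in [53].

```python
import numpy as np

def gravitation_speckle_reduction(I, R=10, m=1.0):
    """Sketch of Eq. (9): each pixel keeps an 'internal stress' m*I(i,j)^2
    and gains the 'gravitation' of its neighbours within radius R.
    Unoptimized double loop; zero-padding at the border is an assumption."""
    out = np.zeros(I.shape, dtype=float)
    # 1/r^2 weights for the (2R+1)x(2R+1) neighbourhood, 0 < r <= R
    dy, dx = np.mgrid[-R:R + 1, -R:R + 1]
    r2 = dy**2 + dx**2
    w = np.where((r2 > 0) & (r2 <= R**2), 1.0 / np.maximum(r2, 1), 0.0)
    Ipad = np.pad(I, R, mode="constant")
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            patch = Ipad[i:i + 2 * R + 1, j:j + 2 * R + 1]
            G = m * I[i, j] * np.sum(w * patch)   # gravitation of neighbours
            F = m * I[i, j] ** 2                  # internal stress
            out[i, j] = F + G
    return out

def phi_iter(I, q):
    """Φ^(q) of Eq. (10): apply the operator q times."""
    for _ in range(q):
        I = gravitation_speckle_reduction(I)
    return I
```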
In addition, the gravitation-based speckle reduction algorithm can also be used to denoise the contour thinning image Ithin. However, this increases the computation time, so it can optionally be performed depending on the actual situation. The processing procedure of the residual compensation imaging algorithm is shown in Algorithm 1.
Algorithm 1: Residual compensation imaging
Input: Echo data S(f_k, τ_n), receiving frequency f_k, slow time τ_n, FFT points N_fft, slant range ΔR(r, τ), synthetic aperture θ, image size L;
Output: The compensated contour thinning image Ifin.
1:  BEGIN
2:  Initialize: T = 0.9, k_1 = 1.2, k_2 = 0.1, R = 10, m = 1, q.
3:  Compute Iorg and Ithin using Equations (1)–(5);
4:  Denoise Ithin with Equation (9); (this step is optional)
5:  I_res = abs(I_org − I_thin);
6:  while q > 0
7:    for i = 1 to L and j = 1 to L
8:      I_cps(i,j) = m \sum_{r \le R} \frac{I_{res}(i,j) I_{res}(k,l)}{r^2} + m\,I_{res}(i,j)^2;
9:    end for
10:   I_res = I_cps;
11:   q = q − 1;
12:  end while
13:  I_fin = I_thin + I_cps;
14:  return I_fin.
15:  END
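For reference, the core of Algorithm 1 (steps 5–13) can be sketched in a few lines of Python, reusing the phi_iter helper from the speckle-reduction sketch above. It assumes Iorg and Ithin are real-valued amplitude images of equal size and is illustrative only, not the authors' code.

```python
import numpy as np

def residual_compensation(I_org, I_thin, q=3):
    """Sketch of Algorithm 1, steps 5-13.
    phi_iter is the q-fold gravitation-based operator defined in the
    earlier sketch; q = 3 is the value selected in Section 4.1."""
    I_res = np.abs(I_org - I_thin)   # residual image, Eq. (6)
    I_cps = phi_iter(I_res, q)       # extracted compensation, Eqs. (7), (10)
    return I_thin + I_cps            # compensated contour thinning image, Eq. (8)
```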

3. Vehicle Target Recognition

The image after compensation contains more information than the image before compensation. However, it is not clear whether the added information is helpful for recognizing the target. In this section, a convolutional neural network (CNN) is used for vehicle target recognition. Detailed descriptions of CNNs can be found in many papers and books [65,66] and are not repeated here. The steps of the algorithm used in this paper are briefly described as follows:
(1)
Linear encoding and decoding are used on the images in the library to obtain patch features through training. Each image is cropped into patches of size 8 × 8, which are then expanded row by row into 64 × 1 vectors. The vectors are input into a three-layer network whose input and output layers each have 64 nodes, while the number of hidden-layer nodes is variable. The network is trained on a large number of images so that the output reproduces the input as closely as possible. After the weight matrix converges, it is used as the set of extracted features for the subsequent convolution. When the number of hidden nodes is much smaller than the number of input nodes, this is also called sparse coding.
(2)
Scale all images to standard size. All images are divided into training images and test images, and label data is generated at the same time. The training data set and the test data set are obtained respectively.
(3)
The CNN is trained on the training set. The network mainly includes two convolution-and-pooling stages and softmax regression at the output layer. The multiple 8 × 8 weight matrices obtained by linear coding in step (1) are convolved with images of size L × L, giving feature maps of size (L − 7) × (L − 7). The error is calculated for each training pass, and the network parameters are adjusted according to a given criterion. Training is repeated until the error is small enough or the number of training iterations exceeds a threshold.
(4)
The network parameters obtained after training are taken as the optimal parameters. These parameters are used to recognize the images in the test set and to calculate the accuracy.
The architecture of CNN used in this paper is shown in Figure 4.
The output layer of the CNN uses softmax regression. The number of outputs is 6, the number of vehicle models. The loss function is given by [66]:
J(y_i) = -\log \frac{e^{y_i}}{\sum_{k=1}^{6} e^{y_k}},   (11)
where y_i denotes the output of the i-th neuron. There are many optimization algorithms for CNNs; in this paper the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm is used [67,68].
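The original pipeline learns its 8 × 8 filters with a linear auto-encoder and optimizes with L-BFGS; as a rough analogue rather than the authors' network, the PyTorch sketch below wires up the two convolution-and-pooling stages and six-way output described above (cross-entropy corresponds to the negative log-softmax loss of Equation (11)). The channel counts, activation, and pooling choices are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class VehicleCNN(nn.Module):
    """Two convolution + pooling stages followed by a 6-way output
    (softmax is applied implicitly by CrossEntropyLoss), for single-channel
    L x L SAR images."""
    def __init__(self, L=100, n_filters=100, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=8),   # L -> L-7, cf. step (3)
            nn.ReLU(),
            nn.AvgPool2d(2),
            nn.Conv2d(n_filters, n_filters, kernel_size=8),
            nn.ReLU(),
            nn.AvgPool2d(2),
        )
        with torch.no_grad():                         # infer flattened size
            n_flat = self.features(torch.zeros(1, 1, L, L)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Training with L-BFGS (as in the paper) is possible via torch.optim.LBFGS:
# model = VehicleCNN(); opt = torch.optim.LBFGS(model.parameters())
# loss_fn = nn.CrossEntropyLoss()      # negative log-softmax, cf. Eq. (11)
# def closure():
#     opt.zero_grad()
#     loss = loss_fn(model(x_batch), y_batch)
#     loss.backward()
#     return loss
# opt.step(closure)
```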
In binary decision problems, the target is labeled as positive or negative, and the decisions made by the classifier can be represented as a confusion matrix. The confusion matrix has four categories: true positives (TP) are positive examples correctly labeled as positive; false positives (FP) are negative examples incorrectly labeled as positive; true negatives (TN) are negative examples correctly labeled as negative; and false negatives (FN) are positive examples incorrectly labeled as negative. Four indicators are usually used to evaluate the recognition results, namely Accuracy, Precision, Sensitivity (Recall), and Specificity, defined as follows [69]:
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Sensitivity} = \mathrm{Recall} = \frac{TP}{TP + FN}, \quad \mathrm{Specificity} = \frac{TN}{TN + FP}.   (12)
In multi-class decision problems, the above equations cannot be used directly. However, the overall accuracy can still be obtained directly as the number of correctly identified samples divided by the total number of samples.
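The indicators of Equation (12) and the multi-class overall accuracy can be computed directly from the confusion matrices; the short Python sketch below does this (the function names are illustrative).

```python
import numpy as np

def binary_metrics(tp, fp, tn, fn):
    """Eq. (12): the four indicators from a binary confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),    # = recall
        "specificity": tn / (tn + fp),
    }

def overall_accuracy(confusion):
    """Multi-class accuracy: correctly identified samples (the diagonal of
    the total confusion matrix) divided by the total number of samples."""
    confusion = np.asarray(confusion)
    return np.trace(confusion) / confusion.sum()

# For example, binary_metrics(20, 2, 172, 4) reproduces the Fcara row of Table 10.
```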

4. Experimental Results

The data used in the experiments in this paper comprise a subset of the Gotcha dataset released by AFRL, namely the Target Discrimination Research subset [19]. The airborne SAR observes ground vehicle targets from 31 altitude orbits over a ground area approximately 5 km in diameter. In each altitude orbit, the airborne SAR flies a circular path around the ground area. In total, 56 individual targets are extracted from the large dataset.

4.1. Residual Compensation Imaging

To obtain a better region of interest, the speckle reduction effect under different numbers of iterations is compared, as shown in Figure 5. As can be seen from the figure, as the number of iterations increases, the bright areas become more concentrated and the noise is reduced. With fewer than two iterations, the noise is relatively strong and the processed image is not suitable for direct compensation. With more than three iterations, the noise is well suppressed. However, a large number of iterations also erodes the target, so three or four iterations are appropriate. In the following experiments, three iterations were used, that is, Φ^(3)(·) is selected as the extraction function.
Figure 6 shows the comparison of the Iorg, Ithin and Ifin images of the three vehicle models in Figure 1a–c. Figure 6g–i shows the superposition of the contour thinning images and compensation images of the three vehicles. As can be seen from the figure, the compensated images add detailed information and their contours are closer to the original images.
Ref. [34] defines an index of the contour thinning degree, denoted D(I): the perimeter in pixels of all target areas in a binary image divided by the area in pixels of all target areas.
D(I) = \frac{\text{pixel perimeter of target}}{\text{pixel area of target}}   (13)
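A simple Python sketch of Equation (13) is shown below. The boundary-pixel counting convention used to approximate the pixel perimeter is an assumption, since the exact perimeter definition of [34] is not restated here.

```python
import numpy as np

def contour_thinning_degree(binary_img):
    """Sketch of Eq. (13): pixel perimeter of the target regions divided by
    their pixel area. The perimeter is approximated as the number of target
    pixels with at least one 4-connected background neighbour (an assumed
    counting convention)."""
    img = np.asarray(binary_img, dtype=bool)
    padded = np.pad(img, 1, mode="constant")
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = img & ~interior          # target pixels touching background
    area = img.sum()
    return boundary.sum() / area if area else 0.0
```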
Similarly, the data of 150 vehicles randomly selected from the Gotcha dataset are imaged with synthetic apertures of 5°, 10°, and full aperture. The imaging methods are traditional backprojection, contour thinning, and residual compensation, denoted Iorg, Ithin, and Ifin, respectively. For each imaging method, the contour thinning degree D(I) is calculated according to Equation (13), and the results are shown in Figure 7. As can be seen from the figure, the contour thinning degree after residual compensation is essentially the same as before compensation; the area added by compensation does not lower the contour thinning degree.

4.2. Recognition Analysis

The dataset used for recognition is still the Target Discrimination Research subset of the Gotcha dataset [19]. During radar data collection, some vehicles changed position, and some had doors or trunks opened. To reduce these interferences, only data from stationary vehicles are selected for the recognition experiments. In total, 660 images of six models are used. The six models are the Chevrolet Impala LT, Mitsubishi Galant ES, Toyota Highlander, Chevrolet HHR LT, Pontiac Torrent, and Chrysler Town & Country, labeled in the dataset as Fcara, Fcarb, Fsuv, Mcar, Msuv, and Van, respectively. The six vehicle models are represented by 80, 81, 143, 111, 103, and 142 images, respectively.
All images were randomly divided into a training set and a test set. Let β denote the ratio of the number of images in the training set to the total number of images; the training/testing ratio is then β/(1 − β).
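A minimal sketch of this random split, with β as the training fraction, might look as follows; the seeding and rounding choices are illustrative assumptions, as the paper does not specify the exact randomization procedure.

```python
import numpy as np

def split_by_ratio(images, labels, beta=0.7, seed=0):
    """Randomly split the image set so that a fraction beta goes to the
    training set and 1 - beta to the test set (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_train = int(round(beta * len(images)))
    tr, te = idx[:n_train], idx[n_train:]
    return (images[tr], labels[tr]), (images[te], labels[te])
```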
Table 1, Table 2 and Table 3 show the total confusion matrices of all models when β is 0.7 (a training/testing ratio of about 2.3) in one experiment. The confusion matrices clearly show which vehicle models are more likely to be misidentified.
Vehicle target recognition in this paper is a multi-class decision problem. For each model, it can also be regarded as a binary decision problem, that is, the model itself is treated as positive and all others as negative. The total confusion matrix can thus be converted into six separate confusion matrices. The confusion matrices for the test set are shown in Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9.
According to Equation (12), the Accuracy, Precision, Sensitivity and Specificity of each model can also be calculated as shown in Table 10.
From the data in Table 10, the recognition accuracy appears very high. In fact, this is caused by simplifying the multi-class decision problem into binary decision problems: many samples counted as true negatives in the binary view are actually misclassified among the other models in the multi-class decision. Therefore, the accuracy in Table 10 can be considered artificially high. Let P denote the total accuracy over all models, defined as the ratio of the number of images recognized as the correct model to the total number of images. Let P_all, P_test, and P_train denote the accuracy over all images, over the test set, and over the training set, respectively. According to the data of the above experiment, P_all, P_test and P_train are 96.2%, 88.4% and 99.6%, respectively.
The above data are the result of a single experiment and are not representative. To analyze the recognition accuracy, subsequent experiments are repeated multiple times and averaged. In the following experiments, each point in the figures is the average of ten randomized trials.
Figure 8 shows how the accuracy changes with the training set ratio β for different numbers of hidden nodes in the CNN. As can be seen from Figure 8a,c, the accuracy increases as β increases. However, when β is greater than 0.9, the test-set accuracy shows large deviations, since the test set becomes very small. When the number of hidden nodes is sufficient and β is between 0.7 and 0.8, the test-set accuracy is highest, at almost 90%. In Figure 8b, when the number of hidden nodes is large, the training accuracy is almost 100%. However, when the number of hidden nodes is small, the training-set accuracy decreases as β increases. This is because too few hidden nodes cannot extract accurate features; as the number of training samples increases, the extracted features become worse and the accuracy decreases.
Figure 9 shows how the accuracy changes with the number of hidden nodes in the CNN for different β. As can be seen from the figure, when the number of hidden nodes exceeds 100, the accuracy is generally stable and does not increase significantly with more nodes. When the number of hidden nodes is greater than 100, the test accuracy P_test is generally above 80%, the training accuracy P_train is close to 100%, and the total accuracy P_all is greater than 90%. The highest test accuracy P_test, 89%, is obtained with 100 hidden nodes and β = 0.7.
Figure 10 and Figure 11 show the recognition accuracy curves for each vehicle model. Figure 10 shows the accuracy of each model as a function of the training set ratio β, and Figure 11 as a function of the number of hidden nodes in the CNN. The figures show which vehicle models are more easily recognized and which are more easily confused: model Mcar has higher recognition accuracy, while model Fcarb has lower recognition accuracy.
Figure 12 shows examples of residual compensation imaging for the six vehicle models. As can be seen intuitively from the figure, the differences between some models are small, while others are obvious.
In the above experiments, the image size was 100 × 100 pixels. Figure 13 compares the test accuracy of the compensated image Ifin and the contour thinning image Ithin as the image size L changes.
As can be seen from Figure 13, the accuracy increases as the image size increases. When the image size is smaller than 40 × 40, the accuracy is very low, close to random guessing. When the image size is larger than 70 × 70, the accuracy tends to be stable, and increasing the image size further yields only limited improvement. In almost all cases, the recognition accuracy of Ifin is higher than that of Ithin. The experimental results show that the compensated image improves the recognition accuracy by about 3% on average.

5. Discussion

There have been many studies on CSAR imaging algorithms, most of which focus on accurately restoring the scattering characteristics of targets. The contour thinning imaging algorithm in this paper is not aimed at accurately restoring the scattering characteristics of the target, but at enhancing the recognizability of the target. The residual compensation imaging algorithm proposed in this paper further improves and highlights the contour characteristics of the target on the basis of contour thinning. Therefore, this paper does not evaluate the imaging results with the signal-to-noise ratio, peak sidelobe ratio, and other indicators commonly used in other papers. To compare with the previous work, the contour thinning degree is used to quantify the contour characteristics of the imaging results. However, the contour thinning degree is only a secondary indicator; more important is the impact of the imaging results on target recognition. Therefore, a CNN was used to test the different imaging results, and the experimental results show that the residual compensation imaging algorithm can indeed improve the accuracy of target recognition.
At present, little research has been done on vehicle model recognition using the Gotcha dataset, and there are only a few papers on vehicle target recognition with WSAR. The earliest studies of CSAR vehicle target recognition on the Gotcha dataset were conducted by Dungan et al. They represented a vehicle image as a point set, used the Mahalanobis distance as the measure between point sets, applied algorithms such as point pattern matching and pyramid match hashing to recognize the vehicle model, and achieved a recognition accuracy of more than 95% [42,44]. However, the data used by Dungan et al. and the data in this paper are two different subsets of the Gotcha dataset. The data used by Dungan et al. are eight groups of altitude-orbit data of the same vehicle at the same location, whereas the data used in this paper include different vehicles, different locations, and 31 altitude orbits. Gianelli et al. used the same subset of the Gotcha dataset for vehicle target recognition research, and their recognition algorithm achieved an accuracy of 90% [45]. However, in their experiment they removed many flawed images and kept only 540 vehicle images for recognition. In this paper, only images of vehicles that moved or changed are excluded, and the number of images actually used in the recognition experiments is 660. Given these differences, this paper mainly analyzes the impact of the imaging results on the recognition accuracy and does not compare against the recognition accuracy of other recognition algorithms.

6. Conclusions

This paper presents an improved contour thinning imaging algorithm based on residual compensation for CSAR. The algorithm adds a compensation module to the contour thinning imaging algorithm, which better restores the original scattering characteristics of the target. The imaging results show that the image after compensation does not reduce the contour thinning degree and contains more information than the image before compensation. To verify the influence of the residual compensation imaging algorithm on target recognition, a convolutional neural network is used to recognize vehicle targets. The experimental results show that the image after compensation has a higher target recognition accuracy than the image before compensation, with an improvement of about 3% on average. The proposed algorithm is demonstrated on the Gotcha dataset. The residual compensation imaging algorithm proposed in this paper is simple and easy to understand; it can effectively obtain a clear and complete vehicle contour image and improve the recognizability of the target. Although the algorithm proposed in this paper is derived from CSAR data, it can be extended to common WSAR data with detection angles of less than 360°. In future work, we will focus on the integration of imaging, focusing, and target recognition to further improve the accuracy of target recognition.

Author Contributions

Conceptualization, R.H.; Formal analysis, R.H. and J.M.; Funding acquisition, Z.P.; Investigation, R.H., J.M. and W.L.; Methodology, R.H.; Project administration, Z.P.; Software, R.H.; Supervision, Z.P.; Visualization, R.H.; Writing—original draft, R.H.; Writing—review and editing, Z.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by National Natural Science Foundation of China (61775030, 61571096), Sichuan Science and Technology Program (2019YJ0167) and Open Research Fund of Key Laboratory of Optical Engineering, Chinese Academy of Sciences (2017LBC003).

Acknowledgments

The authors thank the Air Force Research Laboratory (AFRL) for the experimental dataset. The authors would like to thank the Laboratory of Imaging Detection and Intelligent Perception, School of Information and Communication Engineering, University of Electronic Science and Technology of China for providing the experiment condition. The authors thank Xingguo Liu (University of Electronic Science and Technology of China) and Kelong Zheng (Southwest University of Science and Technology) for their useful advice during this work. The authors thank reviewers for the valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peng, Z.; Zhang, J.; Meng, F.; Dai, J. Time-frequency analysis of SAR image based on generalized S-Transform. In Proceedings of the 2009 International Conference on Measuring Technology and Mechatronics Automation (ICMTMA2009), Zhangjiajie, China, 11–12 April 2009; Volume 1, pp. 556–559. [Google Scholar]
  2. Peng, Z.; Wang, H.; Zhang, G.; Yang, S. Spotlight SAR images restoration based on tomography model. In Proceedings of the 2009 Asia-Pacific Conference on Synthetic Aperture Radar, APSAR 2009, Xian, China, 26–30 October 2009; pp. 1060–1063. [Google Scholar]
  3. Peng, Z.; Liu, S.; Tian, G.; Chen, Z.; Tao, T. Bridge detection and recognition in remote sensing SAR images using pulse coupled neural networks. In Proceedings of the 7th International Symposium on Neural Networks, ISNN 2010, Shanghai, China, 6–9 June 2010; Volume 67, pp. 311–320. [Google Scholar]
  4. Tao, T.; Peng, Z.; Yang, C.; Wei, F.; Liu, L. Targets detection in SAR image used coherence analysis based on S-transform. In Electrical Engineering and Control; Springer: Berlin/Heidelberg, Germany, 2011; Volume 98, pp. 1–9. [Google Scholar]
  5. Yang, J.; Peng, Z. SAR target recognition based on spectrum feature of optimal gabor transform. In Proceedings of the International Conference on Communications, Circuits and Systems (ICCCAS2013), Chengdu, China, 15–17 November 2013; Volume 2, pp. 230–234. [Google Scholar]
  6. Liu, Y.; Peng, L.; Huang, S.; Wang, X.; Wang, Y.; Peng, Z. River detection in high-resolution SAR data using the Frangi filter and Shearlet feature. Remote Sens. Lett. 2019, 10, 949–958. [Google Scholar] [CrossRef]
  7. Huang, P.; Li, K.; Xu, W.; Tan, W.; Gao, Z.; Li, Y. Focusing arc-array bistatic synthetic aperture radar data based on keystone transform. Electronics 2019, 8, 1389. [Google Scholar] [CrossRef] [Green Version]
  8. Li, G.; Lu, Q.; Lao, G.; Ye, W. Wideband noise interference suppression for sparsity-based SAR imaging based on dechirping and double subspace extraction. Electronics 2019, 8, 1019. [Google Scholar] [CrossRef] [Green Version]
  9. Wang, Z.; Liu, M.; Lv, K. Retrieval of three-dimensional surface deformation using an improved differential SAR tomography system. Electronics 2019, 8, 174. [Google Scholar] [CrossRef] [Green Version]
  10. Gao, B.; Li, X.; Sun, J.; Wu, J. Modeling of high-resolution data converter: Two-step pipelined-SAR ADC based on ISDM. Electronics 2020, 9, 137. [Google Scholar] [CrossRef] [Green Version]
  11. Liu, Y.; Zhang, P.; He, Y.; Peng, Z. River detection based on feature fusion for synthetic aperture radar images. J. Appl. Remote Sens. 2020, 14, 016505. [Google Scholar] [CrossRef]
  12. Jin, M.; Chen, M. Analysis and simulation for a spotlight-mode aircraft SAR in circular flight path. In Proceedings of the 13th Annual International Geoscience and Remote Sensing Symposium, Tokyo, Japan, 18–21 August 1993; Volume 4, pp. 1777–1780. [Google Scholar]
  13. Wang, Z.; Wang, Z.; Xu, J. The effect of the attitude of the circular orbiting SAR on doppler properties. Int. J. Remote Sens. 1994, 15, 2313–2322. [Google Scholar] [CrossRef]
  14. Broquetas, A.; Deporrata, R.; Sagues, L.; Fabregas, X.; Jofre, L. Circular Synthetic Aperture Radar (C-SAR) system for ground-based applications. Electron. Lett. 1997, 33, 988–989. [Google Scholar] [CrossRef]
  15. Soumekh, M. Reconnaissance with slant plane circular SAR imaging. IEEE Trans. Image Process. 1996, 5, 1252–1265. [Google Scholar] [CrossRef] [Green Version]
  16. Casteel, J.C.H.; Gorham, L.A.; Minardi, M.J.; Scarborough, S.M.; Naidu, K.D.; Majumder, U.K. A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIV; International Society for Optics and Photonics, Orlando, FL, USA, 10–11 April 2007; Volume 6568. [Google Scholar]
  17. Ertin, E.; Austin, C.D.; Sharma, S.; Moses, R.L.; Potter, L.C. GOTCHA experience report: Three-dimensional SAR imaging with complete circular apertures. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIV, Orlando, FL, USA, 10–11 April 2007; Volume 6568. [Google Scholar]
  18. Gorham, L.A.; Moore, L.J. SAR image formation toolbox for MATLAB. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XVII, Orlando, FL, USA, 8–9 April 2010; Volume 7699. [Google Scholar]
  19. Dungan, K.E.; Ash, J.N.; Nehrbass, J.W.; Parker, J.T.; Gorham, L.A.; Scarborough, S.M. Wide angle SAR data for target discrimination research. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIX, Baltimore, MD, USA, 25–26 April 2012; Volume 8394. [Google Scholar]
  20. Cantalloube, H.M.J.; Colin-Koeniguer, E.; Oriot, H. High resolution SAR imaging along circular trajectories. Int. Geosci. Remote Sens. Symp. 2007, 850–853. [Google Scholar] [CrossRef]
  21. Tan, W.X.; Wang, Y.P.; Wen, H.; Wu, Y.R.; Li, N.J.; Hu, C.F.; Zhang, L.X. Circular SAR experiment for human body imaging. In Proceedings of the 1st Asian And Pacific Conference on Synthetic Aperture Radar, Huangshan, China, 5–9 November 2007; pp. 90–93. [Google Scholar]
  22. Lin, Y.; Hong, W.; Tan, W.X.; Wang, Y.P.; Wu, Y.R. Interferometric circular SAR method for three-dimensional imaging. IEEE Geosci. Remote Sens. Lett. 2011, 8, 1026–1030. [Google Scholar] [CrossRef]
  23. Lin, Y.; Hong, W.; Tan, W.X.; Wu, Y.R. Extension of range migration algorithm to squint circular SAR imaging. IEEE Geosci. Remote Sens. Lett. 2011, 8, 651–655. [Google Scholar] [CrossRef]
  24. Lin, Y.; Hong, W.; Tan, W.X.; Wang, Y.P.; Xiang, M.S. Airborne Circular Sar Imaging: Results at P-Band. In Proceedings of the International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 5594–5597. [Google Scholar]
  25. Zhao, Y.; Lin, Y.; Hong, W.; Yu, L.J. Adaptive imaging of anisotropic target based on circular-SAR. Electron. Lett. 2016, 52, 1406–1407. [Google Scholar] [CrossRef]
  26. Xue, F.; Lin, Y.; Hong, W.; Yin, Q.; Zhang, B.; Shen, W.; Zhao, Y. Analysis of azimuthal variations using multi-aperture polarimetric entropy with circular SAR images. Remote Sens. 2018, 10, 123. [Google Scholar] [CrossRef] [Green Version]
  27. Shen, W.; Lin, Y.; Yu, L.; Xue, F.; Hong, W. Single channel circular SAR moving target detection based on logarithm background subtraction algorithm. Remote Sens. 2018, 10, 742. [Google Scholar] [CrossRef] [Green Version]
  28. Teng, F.; Hong, W.; Lin, Y. Aspect entropy extraction using circular SAR data and scattering anisotropy analysis. Sensors 2019, 19, 346. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Kou, L.; Wang, X.; Xiang, M.; Zhu, M. High sidelobe effects on interferometric coherence for circular SAR imaging geometry. J. Syst. Eng. Electron. 2013, 24, 76–83. [Google Scholar] [CrossRef]
  30. Kou, L.; Xiang, M.; Wang, X.; Zhu, M. Tropospheric effects on L-band geosynchronous circular SAR imaging. IET Radar Sonar Navig. 2013, 7, 693–701. [Google Scholar] [CrossRef]
  31. Kou, L.; Wang, X.; Xiang, M. Effects on three-dimensional geosynchronous circular SAR imaging by orbit errors. J. Indian Soc. Remote Sens. 2014, 42, 1–12. [Google Scholar] [CrossRef]
  32. Yuan, X.; Ternovskiy, I. Improved image reconstruction from sub-apertures of circular spotlight SAR. In Cyber Sensing 2015; International Society for Optics and Photonics: Santa Clara, CA, USA, 2015; Volume 9458, p. 945802. [Google Scholar]
  33. Chen, L.; An, D.; Huang, X.; Zhou, Z. A 3D reconstruction strategy of vehicle outline based on single-pass single-polarization CSAR data. IEEE Trans. Image Process. 2017, 26, 5545–5554. [Google Scholar] [CrossRef]
  34. Hu, R.; Peng, Z.; Zheng, K. Modulus stretch-based circular SAR imaging with contour thinning. Appl. Sci. 2019, 9, 2728. [Google Scholar] [CrossRef] [Green Version]
  35. Xie, H.; Shi, S.; An, D.; Wang, G.; Wang, G.; Xiao, H.; Huang, X.; Zhou, Z.; Xie, C.; Wang, F.; et al. Fast factorized backprojection algorithm for one-stationary bistatic spotlight circular SAR image formation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1494–1510. [Google Scholar] [CrossRef]
  36. Liu, T.; Pi, Y.; Yang, X. Wide-angle CSAR imaging based on the adaptive subaperture partition method in the terahertz band. IEEE Trans. Terahertz Sci. Technol. 2018, 8, 165–173. [Google Scholar] [CrossRef]
  37. Wu, B.; Gao, Y.; Ghasr, M.T.; Zoughi, R. Resolution-based analysis for optimizing subaperture measurements in circular SAR imaging. IEEE Trans. Instrum. Meas. 2018, 67, 2804–2811. [Google Scholar] [CrossRef]
  38. Du, B.; Qiu, X.; Huang, L.; Lei, S.; Lei, B.; Ding, C. Analysis of the azimuth ambiguity and imaging area restriction for circular SAR based on the back-projection algorithm. Sensors 2019, 19, 4920. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Burl, M.C.; Owirka, G.J.; Novak, L.M. Texture discrimination in synthetic aperture radar imagery. In Proceedings of the Asilomar Conference on Circuits, Systems & Computers, Pacific Grove, CA, USA, 30 October–1 November 1989; Volume 1, pp. 399–404. [Google Scholar]
  40. Mahalanobis, A.; Forman, A.V.; Bower, M.R.; Day, N.; Cherry, R.F. Quadratic distance classifier for multiclass SAR ATR using correlation filters. Ultrah. Resolut. Radar 1993, 1875, 84–95. [Google Scholar]
  41. Novak, L.M.; Owirka, G.J.; Netishen, C.M. Performance of a high-resolution polarimetric SAR automatic target recognition system. Linc. Labor. J. 1993, 6, 11–24. [Google Scholar]
  42. Dungan, K.E.; Potter, L.C.; Blackaby, J.; Nehrbass, J. Discrimination of civilian vehicles using wide-angle SAR. In Proceedings of the SPIE Defense and Security Symposium, Orlando, FL, USA, 17–18 March 2008; Volume 6970. [Google Scholar]
  43. Dungan, K.E.; Potter, L.C. Effects of polarization on wide-angle SAR classification performance. In Proceedings of the 2010 IEEE National Aerospace Electronics Conference, Fairborn, OH, USA, 14–16 July 2010; pp. 50–53. [Google Scholar]
  44. Dungan, K.E.; Potter, L.C. Classifying vehicles in wide-angle radar using pyramid match hashing. IEEE J. Sel. Top. Signal Process. 2011, 5, 577–591. [Google Scholar] [CrossRef]
  45. Gianelli, C.D.; Xu, L. Focusing, imaging, and atr for the gotcha 2008 wide angle sar collection. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XX, Baltimore, MD, USA, 1–2 May 2013; Volume 8746. [Google Scholar]
  46. Ertin, E. Manifold learning methods for wide-angle SAR ATR. In Proceedings of the 2013 International Conference on Radar—Beyond Orthodoxy: New Paradigms in Radar, Adelaide, Australia, 9–12 September 2013; pp. 500–504. [Google Scholar]
  47. Saville, M.A.; Jackson, J.A.; Fuller, D.F. Rethinking vehicle classification with wide-angle polarimetric SAR. Aerosp. Electron. Syst. Mag. 2014, 29, 41–49. [Google Scholar] [CrossRef]
  48. Wagner, S.A. SAR ATR by a combination of convolutional neural network and support vector machines. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2861–2872. [Google Scholar] [CrossRef]
  49. Chen, S.; Wang, H.; Xu, F.; Jin, Y. Target classification using the deep convolutional networks for SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817. [Google Scholar] [CrossRef]
  50. Kechagias-Stamatis, O.; Aouf, N. Fusing deep learning and sparse coding for SAR ATR. IEEE Trans. Aerosp. Electron. Syst. 2018, 55, 785–797. [Google Scholar] [CrossRef] [Green Version]
  51. Kechagias-Stamatis, O. Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion. J. Appl. Remote Sens. 2018, 12, 046025. [Google Scholar] [CrossRef]
  52. Li, Y.; Jin, Y. Target decomposition and recognition from wide-angle SAR imaging based on a Gaussian amplitude-phase model. Sci. China Inf. Sci. 2017, 60, 062305. [Google Scholar] [CrossRef]
  53. Hu, R.; Peng, Z.; Ma, J. A vehicle target recognition algorithm for wide-angle SAR based on joint feature set matching. Electronics 2019, 8, 1252. [Google Scholar] [CrossRef] [Green Version]
  54. Hu, R.; Peng, Z.; Ma, J. Vehicle target discrimination algorithm for wide-angle SAR based on loose iterative MDS. In Proceedings of the 8th Applied Optics and Photonics China (AOPC 2019), Beijing, China, 7–9 July 2019; Volume 11338. [Google Scholar]
  55. Liu, X.; Chen, Y.; Peng, Z.; Wu, J. Infrared image super-resolution reconstruction based on quaternion and high-order overlapping group sparse total variation. Sensors 2019, 19, 5139. [Google Scholar] [CrossRef] [Green Version]
  56. Zhang, L.; Peng, Z. Infrared small target detection based on partial sum of tensor nuclear norm. Remote Sens. 2019, 11, 382. [Google Scholar] [CrossRef] [Green Version]
  57. Zhang, T.; Wu, H.; Liu, Y.; Peng, L.; Yang, C.; Peng, Z. Infrared small target detection based on non-convex optimization with Lp-norm constraint. Remote Sens. 2019, 11, 559. [Google Scholar] [CrossRef] [Green Version]
  58. Peng, L.; Zhang, T.; Liu, Y.; Li, M.; Peng, Z. Infrared dim target detection using shearlet’s kurtosis maximization under non-uniform background. Symmetry 2019, 11, 723. [Google Scholar] [CrossRef] [Green Version]
  59. Peng, L.; Zhang, T.; Huang, S.; Pu, T.; Liu, Y.; Lyv, Y.; Zheng, Y.; Peng, Z. Infrared small target detection based on multi-directional multi-scale high boost response. Opt. Rev. 2019, 26, 568–582. [Google Scholar] [CrossRef]
  60. Huang, S.; Liu, Y.; He, Y.; Zhang, T.; Peng, Z. Structure adaptive clutter suppression for infrared small target detection: Chain-growth filtering. Remote Sens. 2020, 12, 47. [Google Scholar] [CrossRef] [Green Version]
  61. Lyv, Y.; Peng, L.; Pu, T.; Yang, C.; Peng, Z. Cirrus detection based on RPCA and fractal dictionary learning in infrared imagery. Remote Sens. 2020, 12, 142. [Google Scholar]
  62. Guan, X.; Peng, Z.; Huang, S.; Chen, Y. Gaussian scale-space enhanced local contrast measure for small infrared target detection. IEEE Geosci. Remote Sens. Lett. 2020, 17, 327–331. [Google Scholar] [CrossRef]
  63. Tian, S.; Sun, G.; Wang, C.; Zhang, H. A ship detection method in SAR image based on Gravity Enhancement. J. Remote Sens. 2007, 11, 452–459. (In Chinese) [Google Scholar]
  64. Kong, F. Maritime Traffic Monitoring and Analysis System Based on Satellite Remote Sensing. Master’s Thesis, Dalian Maritime University, Dalian, China, 2009. (In Chinese). [Google Scholar]
  65. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  66. Ian, G.; Yoshua, B.; Aaron, C. Deep Learning; The MIT Press: Boston, MA, USA, 2016; pp. 326–366. [Google Scholar]
  67. Nocedal, J. Updating quasi-newton matrices with limited storage. Math. Comput. 1980, 35, 773–782. [Google Scholar] [CrossRef]
  68. Liu, D.C.; Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. B 1989, 45, 503–528. [Google Scholar] [CrossRef] [Green Version]
  69. Davis, J.; Goadrich, M. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 233–240. [Google Scholar]
Figure 1. Example of imaging results of different models of vehicle targets in the Gotcha dataset by contour thinning imaging. (ac) are images obtained by backprojection algorithm. (df) are images obtained by contour thinning algorithm. Three models of vehicle targets are used for imaging comparison: (a,d) Model Chevrolet Impala LT; (b,e) Model Mitsubishi Galant ES; (c,f) Model Toyota Highlander.
Figure 2. A diagram of circular SAR imaging. The picture on the left illustrates the process of airborne SAR flying around a ground target in a circle. The image on the right is the result of imaging the echo data using a backprojection algorithm. The part marked by the red circle in the right image corresponds to the high reflection part in the front or rear of the vehicle in the left image.
Figure 3. Examples of residual image of different models of vehicles: (a) Model Chevrolet Impala LT, corresponding to the difference between Figure 1a,d; (b) Model Mitsubishi Galant ES, corresponding to the difference between Figure 1b,e; (c) Model Toyota Highlander, corresponding to the difference between Figure 1c,f.
Figure 4. The architecture of convolutional neural networks (CNN) used in this paper. The input data is the residual compensation image of the vehicle target. The output of the network is the classification of vehicle targets. The network mainly includes feature training, convolution, pooling, and softmax regression.
Figure 5. Examples of the speckle reduction effect of three vehicle models in different iteration times: (ad) Model Chevrolet Impala LT, Corresponding to Figure 3a, iterated one to four times, respectively; (eh) Model Mitsubishi Galant ES, Corresponding to Figure 3b, iterated one to four times, respectively; (il) Model Toyota Highlander, Corresponding to Figure 3c, iterated one to four times, respectively.
Figure 6. Comparison of Iorg, Ithin and Ifin images of the three vehicle models of Chevrolet Impala LT, Mitsubishi Galant ES, and Toyota Highlander in Figure 1a–c: (ac) Iorg images of three vehicle models; (df) Ithin images of three vehicle models; (gi) Ifin images of three vehicle models.
Figure 7. Comparison of contour thinning degree of backprojection imaging (Iorg), contour thinning imaging (Ithin), and residual compensation imaging (Ifin): (a) 5° aperture imaging; (b) 10° aperture imaging; (c) Full aperture imaging.
Figure 8. The curve of accuracy changing with training set ratio β when CNN takes different number of hidden nodes. The horizontal axis in the figure represents the proportion of the number of training images to the total number of images. The vertical axis represents the recognition accuracy: (a) The accuracy of the test set; (b) The accuracy of the training set; (c) The accuracy of all images.
Figure 9. The curve of accuracy changing with the number of hidden nodes in CNN at different β. The horizontal axis in the figure represents the hidden nodes in CNN. The vertical axis represents the recognition accuracy: (a) The accuracy of the test set; (b) The accuracy of the training set; (c) The accuracy of all images.
Figure 10. The accuracy of each model changing with training set ratio β. The horizontal axis in the figure represents the proportion of the number of training images to the total number of images. The vertical axis represents the recognition accuracy: (a) The accuracy of the test set; (b) The accuracy of the training set; (c) The accuracy of all images.
Figure 11. The accuracy of each model changing with the number of hidden nodes in CNN. The horizontal axis in the figure represents the hidden nodes in CNN. The vertical axis represents the recognition accuracy: (a) The accuracy of the test set; (b) The accuracy of the training set; (c) The accuracy of all images.
Figure 12. Examples of residual compensation imaging for six models of vehicles: (a) Model Chevrolet Impala LT, the label in the dataset is Fcara; (b) Model Mitsubishi Galant ES, the label in the dataset is Fcarb; (c) Model Toyota Highlander, the label in the dataset is Fsuv; (d) Model Chevrolet HHR LT, the label in the dataset is Mcar; (e) Model Pontiac Torrent, the label in the dataset is Msuv; (f) Model Chrysler Town & Country, the label in the dataset is Van.
Figure 13. The comparison of the test accuracy of the original image and the contour thinning image changing with the image size. The horizontal axis in the figure represents the image size. The unit of the abscissa is pixel. The vertical axis represents the recognition accuracy: (a) The training images ratio β is 0.5; (b) The training images ratio β is 0.6; (c) The training images ratio β is 0.7.
Table 1. The total confusion matrix for the recognition results of the test set in an experiment.

Input Model    Output Model
               Fcara   Fcarb   Fsuv   Mcar   Msuv   Van
Fcara            20       0      2      0      2      0
Fcarb             1      21      2      0      0      0
Fsuv              0       1     34      2      1      5
Mcar              1       0      0     32      0      0
Msuv              0       0      0      1     30      0
Van               0       0      3      0      2     38
Table 2. The total confusion matrix for the recognition results of the training set in an experiment.

Input Model    Output Model
               Fcara   Fcarb   Fsuv   Mcar   Msuv   Van
Fcara            56       0      0      0      0      0
Fcarb             0      57      0      0      0      0
Fsuv              0       0     99      1      0      0
Mcar              0       0      0     78      0      0
Msuv              0       1      0      0     71      0
Van               0       0      0      0      0     99
Table 3. The total confusion matrix for the recognition results of all images in an experiment.

Input Model    Output Model
               Fcara   Fcarb   Fsuv   Mcar   Msuv   Van
Fcara            76       0      2      0      2      0
Fcarb             1      78      2      0      0      0
Fsuv              0       1    133      3      1      5
Mcar              1       0      0    110      0      0
Msuv              0       1      0      1    101      0
Van               0       0      3      0      2    137
Table 4. Confusion matrix for model Fcara in an experiment.

Predicted    Actual
             Positive   Negative
Positive        20          2
Negative         4        172
Table 5. Confusion matrix for model Fcarb in an experiment.

Predicted    Actual
             Positive   Negative
Positive        21          1
Negative         3        173
Table 6. Confusion matrix for model Fsuv in an experiment.

Predicted    Actual
             Positive   Negative
Positive        34          7
Negative         9        148
Table 7. Confusion matrix for model Mcar in an experiment.

Predicted    Actual
             Positive   Negative
Positive        32          1
Negative         3        162
Table 8. Confusion matrix for model Msuv in an experiment.

Predicted    Actual
             Positive   Negative
Positive        30          5
Negative         1        162
Table 9. Confusion matrix for model Van in an experiment.

Predicted    Actual
             Positive   Negative
Positive        38          5
Negative         5        150
Table 10. The recognition indicators of each model were calculated based on the data in Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9.

Model    Accuracy (%)   Precision (%)   Sensitivity (%)   Specificity (%)
Fcara        97.0           90.9             83.3              98.9
Fcarb        98.0           95.5             87.5              99.4
Fsuv         91.9           82.9             79.1              95.5
Mcar         98.0           97.0             91.4              99.4
Msuv         97.0           85.7             96.8              97.0
Van          94.9           88.4             88.4              96.8
