Article

PolSAR Land Cover Classification Based on Roll-Invariant and Selected Hidden Polarimetric Features in the Rotation Domain

The State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, School of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(7), 660; https://doi.org/10.3390/rs9070660
Submission received: 8 May 2017 / Revised: 9 June 2017 / Accepted: 15 June 2017 / Published: 1 July 2017
(This article belongs to the Special Issue Advances in SAR: Sensors, Methodologies, and Applications)

Abstract
Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR). A target's polarimetric response is strongly dependent on its orientation, and the backscattering responses of the same target at different orientations to the SAR flight path may be quite different. This target orientation diversity effect hinders PolSAR image understanding and interpretation. Roll-invariant polarimetric features such as entropy, anisotropy, mean alpha angle, and total scattering power are independent of the target orientation and are commonly adopted for PolSAR image classification. On the other hand, target orientation diversity also contains rich information which may not be sensed by roll-invariant polarimetric features. Thus, using only roll-invariant polarimetric features may limit the final classification accuracy. To address this problem, this work uses the recently reported uniform polarimetric matrix rotation theory and a visualization and characterization tool of polarimetric coherence pattern to investigate hidden polarimetric features in the rotation domain along the radar line of sight. A feature selection scheme is then established and a set of hidden polarimetric features is selected in the rotation domain. Finally, a classification method is developed using the complementary information between roll-invariant and selected hidden polarimetric features with a support vector machine (SVM)/decision tree (DT) classifier. Comparison experiments are carried out with NASA/JPL AIRSAR and multi-temporal UAVSAR data. For AIRSAR data, the overall classification accuracy of the proposed classification method is 95.37% (with SVM)/96.38% (with DT), while that of the conventional classification method is 93.87% (with SVM)/94.12% (with DT).
Meanwhile, for multi-temporal UAVSAR data, the mean overall classification accuracy of the proposed method is up to 97.47% (with SVM)/99.39% (with DT), which is also higher than the mean accuracy of 89.59% (with SVM)/97.55% (with DT) from the conventional method. The comparison studies clearly demonstrate the efficiency and advantage of the proposed classification methodology. In addition, the proposed classification method achieves better robustness for multi-temporal PolSAR data. This work further validates that added benefits can be gained for PolSAR data investigation by mining and utilizing hidden polarimetric information in the rotation domain.

1. Introduction

With the ability to work day and night under almost all weather conditions and to acquire full polarization information, polarimetric synthetic aperture radar (PolSAR) has become one of the most important microwave remote sensors [1], and many successful applications have been developed [1,2,3,4,5]. Among them, land cover classification is an important application of PolSAR data. It can provide information support to many fields, such as crop surveys, appraisal of cultivated and urban land occupation, environmental monitoring, etc.
Many approaches have been proposed to enhance classification performance from the aspects of polarimetric features, classifiers, or both. On the one hand, some approaches focus on polarimetric features with better discriminative performance among different land covers through target scattering mechanism understanding and interpretation. The commonly used polarimetric target decomposition techniques can be divided into two categories: eigenvalue-eigenvector-based decomposition and model-based decomposition [5,6]. For eigenvalue-eigenvector-based decomposition, the entropy H, anisotropy Ani, and mean alpha angle ᾱ derived from Cloude-Pottier decomposition are frequently used, and an entropy-based PolSAR land classification scheme was established thereafter [7]. Lee et al. also applied Cloude-Pottier decomposition with the Wishart classifier to PolSAR image classification [8]. For model-based decomposition, the derived polarimetric features are the energy contributions of typical scattering mechanisms from the Freeman-Durden three-component decomposition [9], the Yamaguchi four-component decomposition [10], and the recently reported generalized model-based decomposition techniques [11,12]. Lee et al. applied the Freeman-Durden three-component decomposition with the Wishart classifier to classify PolSAR data [13]. Wang et al. adopted the non-negative eigenvalue decomposition for terrain and land-use classification [14]. Hong et al. proposed a four-component decomposition and applied it to classify wetland vegetation types [15].
On the other hand, other approaches to improve PolSAR classification accuracy aim at designing or choosing a classifier with better classification performance to take full advantage of the available polarimetric features. Specifically, Pajares et al. proposed an optimization relaxation approach based on the analogue Hopfield Neural Network for cluster refinement of pre-classified results from Wishart classification [16]. Attarchi and Gloaguen compared the performance of the support vector machine (SVM), neural network, and random forest classifiers for classifying complex mountainous forests with SAR and other source data [17]. Zhou et al. applied a deep convolutional neural network (CNN) classifier to PolSAR classification and obtained improved results [18]. In addition, considering both the polarimetric features and the classifier at the same time is also an effective way to improve classification accuracy. Deng et al. used both polarimetric decomposition and time-frequency decomposition to mine the hidden information of objects in PolSAR images and applied a C5.0 decision tree (DT) algorithm for optimal feature selection and final classification [19]. They also proposed an approach to classify PolSAR data by integrating polarimetric decomposition, sub-aperture decomposition, and a DT algorithm [20]. Cheng et al. designed and implemented a segmentation-based PolSAR image classification method incorporating texture features, color features, and an SVM classifier [21]. Wang et al. proposed a PolSAR classification method based on the generalized polarimetric decomposition of the Mueller matrix and an SVM classifier [22].
Among the aforementioned PolSAR classification methods based on polarimetric features, roll-invariant polarimetric features are commonly adopted. An important reason is that the polarimetric response of a target is strongly dependent on its orientation [23]. On the one hand, the backscattering responses of the same target at different orientations to the PolSAR flight path can differ significantly. On the other hand, the backscattering responses of different targets at certain specific orientations to the flight path may be similar to each other. This target orientation diversity effect frequently induces scattering mechanism ambiguity and hinders the correct understanding and interpretation of PolSAR data [23]. As a result, roll-invariant polarimetric features, which are independent of the target orientation diversity, are preferred in many applications. However, roll-invariant polarimetric features may not completely represent target scattering properties. Target orientation diversity also contains rich information which is not sensed by roll-invariant polarimetric features [23]. To further improve PolSAR classification accuracy, proper exploration of the target orientation diversity is an effective way forward and can provide valuable hidden information for physical parameter retrieval. In this vein, the uniform polarimetric matrix rotation theory [23] and a visualization and characterization tool of polarimetric coherence pattern [24] were proposed to extract hidden polarimetric features in the rotation domain along the radar line of sight for hidden scattering information mining. Several of these new features have been successfully applied to crop discrimination [25], target enhancement [23], and manmade target extraction [26], among others.
Since these hidden polarimetric features contain rich hidden scattering information of targets in the rotation domain, this work aims to utilize them to enhance PolSAR classification accuracy. Specifically, we first propose a polarimetric feature selection scheme to select suitable hidden polarimetric features derived in the rotation domain. Then, the selected hidden polarimetric features and the commonly used roll-invariant polarimetric features H/Ani/ᾱ/Span are combined as the discriminant feature set. Finally, a classification method using the combined feature set and the SVM/DT classifier [27,28] is developed.
This work is organized as follows. In Section 2, the two novel schemes for hidden polarimetric information mining in the rotation domain and their corresponding hidden polarimetric features are reviewed. The proposed polarimetric feature selection scheme and classification method are described in Section 3. Comparison experiments with NASA/JPL AIRSAR and multi-temporal UAVSAR datasets are carried out and investigated in Section 4. Section 5 provides the final conclusions and outlook for future work.

2. Hidden Polarimetric Feature Extraction in the Rotation Domain

2.1. Polarimetric Matrices and Their Rotation

For PolSAR, in the horizontal and vertical polarization basis (H, V), the acquired full polarization information forms a scattering matrix S:
$$\mathbf{S} = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}$$
where S_HV is the backscattering coefficient for vertical transmitting and horizontal receiving polarization. The other terms in S are defined similarly.
Subject to the reciprocity condition (S_HV = S_VH), the coherency matrix T is
$$\mathbf{T} = \left\langle \mathbf{k}_P \mathbf{k}_P^{H} \right\rangle = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix}$$
where $\mathbf{k}_P = \frac{1}{\sqrt{2}}\left[ S_{HH} + S_{VV} \;\; S_{HH} - S_{VV} \;\; 2S_{HV} \right]^{T}$ is the Pauli scattering vector, ⟨·⟩ denotes the sample average, the superscripts T and H denote the transpose and conjugate transpose respectively, and T_ij is the (i, j) entry of the coherency matrix T.
With a rotation angle θ along the radar line of sight, the rotated scattering matrix S(θ) and coherency matrix T(θ) respectively become
$$\mathbf{S}(\theta) = \mathbf{R}_2(\theta)\, \mathbf{S}\, \mathbf{R}_2^{H}(\theta)$$
$$\mathbf{T}(\theta) = \left\langle \mathbf{k}_P(\theta) \mathbf{k}_P^{H}(\theta) \right\rangle = \mathbf{R}_3(\theta)\, \mathbf{T}\, \mathbf{R}_3^{H}(\theta)$$
where the rotation matrices are
$$\mathbf{R}_2(\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}, \quad \mathbf{R}_3(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos 2\theta & \sin 2\theta \\ 0 & -\sin 2\theta & \cos 2\theta \end{bmatrix}$$
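To make the rotation concrete, the construction of T from the Pauli vector and its rotation along the radar line of sight can be sketched in NumPy as follows; the helper names (`pauli_vector`, `coherency_matrix`, `rotate_coherency`) are illustrative, not from the original paper:

```python
import numpy as np

def pauli_vector(S):
    """Pauli scattering vector k_P from a 2x2 scattering matrix S,
    assuming reciprocity (S_HV = S_VH)."""
    return np.array([S[0, 0] + S[1, 1],
                     S[0, 0] - S[1, 1],
                     2 * S[0, 1]]) / np.sqrt(2)

def coherency_matrix(k_samples):
    """Sample-averaged coherency matrix T = <k_P k_P^H>."""
    return np.mean([np.outer(k, k.conj()) for k in k_samples], axis=0)

def rotate_coherency(T, theta):
    """Rotated coherency matrix T(theta) = R3(theta) T R3(theta)^H."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R3 = np.array([[1.0, 0.0, 0.0],
                   [0.0, c, s],
                   [0.0, -s, c]])
    # R3 is real, so its conjugate transpose equals its transpose
    return R3 @ T @ R3.T
```

Since R3(θ) is unitary, this rotation preserves the trace of T, i.e., the total scattering power Span = T11 + T22 + T33, consistent with Span being roll-invariant.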

2.2. Polarimetric Features Derived from Uniform Polarimetric Matrix Rotation Theory

In order to explore the target orientation diversity and mine embedded hidden information, the uniform polarimetric matrix rotation theory was proposed [23]. It rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Taking the coherency matrix as an example, with mathematical transformations, all the elements and the powers of the off-diagonal terms of a rotated coherency matrix T(θ) can be represented as a uniform sinusoidal function [23]
$$f(\theta) = A \sin\left[\omega(\theta + \theta_0)\right] + B$$
where A is the oscillation amplitude, B is the oscillation center, ω is the angular frequency, and θ0 is the initial angle. Therefore, the new polarimetric feature parameter set {A, B, ω, θ0}, named the oscillation parameter set, completely characterizes the rotation properties of all the elements and powers of the off-diagonal terms of T(θ).
A series of new polarimetric features is derived in [23] to describe the hidden scattering information of a target in the rotation domain. Among them, there are eleven independent hidden features: θ0_Re[T12(θ)], θ0_Im[T12(θ)], θ0_Re[T23(θ)], θ0_|T12(θ)|², θ0_|T23(θ)|², A_Re[T12(θ)], A_Im[T12(θ)], A_|T12(θ)|², A_|T23(θ)|², B_T22(θ), and B_|T23(θ)|², where Re[Tij] and Im[Tij] are the real and imaginary parts of Tij respectively, and θ0_Tij(θ) denotes the initial angle θ0 of Tij(θ). The terms A_Tij(θ), B_Tij(θ), and ω_Tij(θ) are defined similarly.
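As a numerical sketch (an assumption for illustration, not the closed-form derivation of [23]), the oscillation parameter set {A, B, ω, θ0} of a chosen element of T(θ) can be estimated by sampling the rotation domain:

```python
import numpy as np

def rotate_coherency(T, theta):
    """T(theta) = R3(theta) T R3(theta)^H; R3 is real, so R3^H = R3^T."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R3 = np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])
    return R3 @ T @ R3.T

def oscillation_parameters(T, element_fn, omega=2, n=720):
    """Numerically estimate {A, B, omega, theta_0} of
    f(theta) = A sin(omega (theta + theta_0)) + B by sampling one element
    of T(theta) over the rotation domain. element_fn picks the scalar of
    interest, e.g. lambda M: M[0, 1].real for Re[T12(theta)]."""
    thetas = np.linspace(-np.pi, np.pi, n, endpoint=False)
    f = np.array([element_fn(rotate_coherency(T, t)) for t in thetas])
    B = f.mean()                    # oscillation center
    A = (f.max() - f.min()) / 2.0   # oscillation amplitude
    # initial angle: at the maximum, omega * (theta + theta_0) = pi / 2
    theta0 = np.pi / (2 * omega) - thetas[np.argmax(f)]
    return A, B, omega, theta0
```

For Re[T12(θ)] one can verify analytically that Re[T12(θ)] = Re(T12) cos 2θ + Re(T13) sin 2θ, a sinusoid with ω = 2, which the numerical estimate recovers up to the sampling resolution.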

2.3. Polarimetric Features Derived from Polarimetric Coherence Pattern

The polarimetric coherence between two polarization channels s1 and s2 is also used for target detection and classification. In practice, it can be estimated with the sample average of sufficient samples with similar properties [29] as
$$\left|\gamma_{12}\right| = \frac{\left|\left\langle s_1 s_2^{*} \right\rangle\right|}{\sqrt{\left\langle |s_1|^2 \right\rangle \left\langle |s_2|^2 \right\rangle}}$$
where the superscript * denotes the conjugate, and the value of |γ12| lies within the range [0, 1].
Due to the sensitivity of polarimetric coherence to the target's orientation relative to the PolSAR flight path, a visualization and characterization tool of polarimetric coherence pattern [24] was proposed to extend the original polarimetric coherence at a given rotation angle (θ = 0) to the whole rotation domain. It covers all rotation angles (θ ∈ [−π, π)) along the radar line of sight for a complete interpretation of the target's polarimetric coherence as
$$\left|\gamma_{12}(\theta)\right| = \frac{\left|\left\langle s_1(\theta) s_2^{*}(\theta) \right\rangle\right|}{\sqrt{\left\langle |s_1(\theta)|^2 \right\rangle \left\langle |s_2(\theta)|^2 \right\rangle}}$$
With this approach, a set of new polarimetric features were proposed to quantitatively characterize a polarimetric coherence pattern’s variation along the radar line of sight [24]. These derived polarimetric features include: original coherence γ org , coherence degree γ mean , coherence fluctuation γ std , maximum and minimum coherences γ max and γ min , coherence contrast γ contrast , coherence beamwidth γ bw , maximum and minimum rotation angles θ γ max and θ γ min . The detailed definitions are given in [24].
With the (H, V) polarization basis, four independent polarimetric coherence patterns can be obtained [24]: |γ_HH-VV(θ)|, |γ_HH-HV(θ)|, |γ_(HH+VV)-HV(θ)|, and |γ_(HH−VV)-HV(θ)|. For each of them, the aforementioned nine hidden polarimetric features can be extracted. Therefore, a total of thirty-six hidden features are derived from the polarimetric coherence patterns.
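A minimal sketch of how one such pattern and some of its descriptors could be computed, assuming a set of 2 × 2 scattering matrices with similar properties; the function and feature-key names are illustrative, not from [24]:

```python
import numpy as np

def rotate_scattering(S, theta):
    """Rotated scattering matrix S(theta) = R2(theta) S R2(theta)^H (R2 is real)."""
    c, s = np.cos(theta), np.sin(theta)
    R2 = np.array([[c, s], [-s, c]])
    return R2 @ S @ R2.T

def coherence_pattern_features(samples, n=180):
    """|gamma_HH-VV(theta)| pattern over the rotation domain and a few of its
    descriptors. `samples` is a list of 2x2 scattering matrices with similar
    properties, over which the sample averages are taken."""
    thetas = np.linspace(-np.pi, np.pi, n, endpoint=False)
    gamma = np.empty(n)
    for i, t in enumerate(thetas):
        rotated = [rotate_scattering(S, t) for S in samples]
        s1 = np.array([R[0, 0] for R in rotated])   # HH(theta)
        s2 = np.array([R[1, 1] for R in rotated])   # VV(theta)
        num = abs(np.mean(s1 * s2.conj()))
        den = np.sqrt(np.mean(abs(s1) ** 2) * np.mean(abs(s2) ** 2))
        gamma[i] = num / den
    return {
        "gamma_org": gamma[np.argmin(abs(thetas))],   # coherence at theta = 0
        "gamma_mean": gamma.mean(),                   # coherence degree
        "gamma_std": gamma.std(),                     # coherence fluctuation
        "gamma_max": gamma.max(),
        "gamma_min": gamma.min(),
        "gamma_contrast": gamma.max() - gamma.min(),
    }
```

The remaining descriptors (γbw, θγmax, θγmin) would follow the same sampling scheme; their exact definitions are given in [24].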

3. Proposed Polarimetric Feature Selection Scheme and Classification Method

3.1. Proposed Polarimetric Feature Selection Scheme

Among the aforementioned hidden polarimetric features derived in the rotation domain, eleven come from the uniform polarimetric matrix rotation theory and thirty-six from the polarimetric coherence pattern. Suitable features must therefore be selected among them to avoid information redundancy, which may decrease the accuracy of the final land cover classification. Since γbw of |γ_(HH−VV)-HV(θ)| is almost invariant across different land covers [24], it is not considered. A polarimetric feature selection scheme is then proposed for the other forty-six hidden polarimetric features; its flowchart is shown in Figure 1.
The steps of the proposed polarimetric feature selection scheme are as follows:
(1)
The first step is polarimetric feature extraction and normalization. Based on the filtered PolSAR data, the independent hidden polarimetric features are extracted and normalized to the range [0, 1]. The total normalized feature set is F_all = {f_i, i = 1, ..., I}, where I is the number of hidden polarimetric features.
(2)
Pre-removal is applied to F_all based on the within-class distance, which measures the degree of dispersion of samples within the same class. From the ground-truth map, there are X known land covers C_x, x = 1, ..., X. For feature f_i, the samples from land cover C_x can be represented as C_x^i = {f_i(x, k), k = 1, 2, ..., N_x}, where f_i(x, k) is the feature value of the k-th sample in land cover C_x and N_x is the total number of samples of land cover C_x. The within-class distance of land cover C_x with feature f_i is
$$d_{\text{within}}(C_x^i) = \sqrt{\frac{1}{N_x} \sum_{k=1}^{N_x} \left( f_i(x,k) - center_x^i \right)^2}$$
where $center_x^i = \frac{1}{N_x} \sum_{k=1}^{N_x} f_i(x,k)$ is the center of land cover C_x with feature f_i. Over the different features, the within-class distances of land cover C_x are d_within(C_x^i), i = 1, ..., I. For each land cover, the three largest within-class distances are identified, and the features producing them form the removal feature set F_removal, which is removed from F_all: F_preremoval = F_all − F_removal = {f̃_j, j = 1, ..., J}, where J ≤ I is the number of remaining features.
(3)
The preliminary selection is carried out based on the class separation distance, defined as the distance between two classes plus the distance between the two class centers. Land cover pairs are constructed by combining every two land covers, giving Y pairs P_y, y = 1, ..., Y. For each pair P_y, the two land covers are C_y1 and C_y2, where y1 and y2 are the land cover labels. With feature f̃_j, the samples of C_y1 and C_y2 are C_y1^j = {f̃_j(y1, l), l = 1, 2, ..., N_y1} and C_y2^j = {f̃_j(y2, m), m = 1, 2, ..., N_y2}, respectively. The distance between the two land covers, d_class(C_y1^j, C_y2^j), and the distance between their centers, d_center(center_y1^j, center_y2^j), are respectively
$$d_{\text{class}}(C_{y1}^j, C_{y2}^j) = \sqrt{\frac{1}{N_{y1} N_{y2}} \sum_{l=1}^{N_{y1}} \sum_{m=1}^{N_{y2}} \left( \tilde{f}_j(y1, l) - \tilde{f}_j(y2, m) \right)^2}$$
$$d_{\text{center}}(center_{y1}^j, center_{y2}^j) = \left| \frac{1}{N_{y1}} \sum_{l=1}^{N_{y1}} \tilde{f}_j(y1, l) - \frac{1}{N_{y2}} \sum_{m=1}^{N_{y2}} \tilde{f}_j(y2, m) \right|$$
The class separation distance of pair P_y with feature f̃_j is then proposed as d_separation(P_y^j) = d_class(C_y1^j, C_y2^j) + d_center(center_y1^j, center_y2^j), which measures the separability of the pair. Over the features of F_preremoval, the class separation distances of pair P_y are d_separation(P_y^j), j = 1, ..., J. The selected hidden polarimetric feature for pair P_y is $f_y^{ss} = \arg\max_{\tilde{f}_j \in F_{preremoval}} d_{\text{separation}}(P_y^j)$. The preliminary selected feature set over all land cover pairs is F_preselection = f_1^ss ∪ f_2^ss ∪ ... ∪ f_Y^ss.
(4)
After the preliminary selection, post-refinement is implemented. The idea of post-refinement is to retain the features with relatively higher discriminant performance in F_preselection. For each land cover pair, the feature which leads to the maximum class separation distance is recorded, and the occurrences are counted. Features whose number of occurrences is no less than a predefined threshold r are determined to be the optimal feature candidates. The final selected feature set F_selection is then formed from these optimal features of F_preselection.
(5)
Finally, if the PolSAR data are single-temporal/single-band, F_selection is directly the final selection result. Otherwise, for multi-temporal/multi-band PolSAR data, the union of the F_selection sets obtained from the different temporal/band data forms the final selection result.
This work uses the basic and commonly adopted criteria of within-class distance, distance between two classes, and distance between two class centers to select suitable hidden polarimetric features derived in the rotation domain. Certainly, other feature selection schemes could also be suitable.
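The steps above can be sketched as follows, assuming the features are held as nested dictionaries mapping feature name → land cover label → normalized sample values; the distances are implemented as root-mean-square distances, and all names are illustrative:

```python
import numpy as np
from itertools import combinations

def within_class_distance(values):
    """d_within: RMS deviation of a feature's samples from their class center."""
    return np.sqrt(np.mean((values - values.mean()) ** 2))

def class_separation(a, b):
    """d_separation = d_class + d_center for one feature and one class pair."""
    d_class = np.sqrt(np.mean((a[:, None] - b[None, :]) ** 2))
    d_center = abs(a.mean() - b.mean())
    return d_class + d_center

def select_features(features, r=3, n_remove=3):
    """features: dict name -> dict class_label -> 1-D array of normalized values.
    Sketch of the scheme: pre-removal, preliminary selection per class pair,
    and post-refinement with appearance threshold r."""
    names = list(features)
    classes = list(next(iter(features.values())))
    # (2) pre-removal: drop features giving the n_remove largest
    #     within-class distances for any class
    removal = set()
    for c in classes:
        ranked = sorted(names, reverse=True,
                        key=lambda f: within_class_distance(features[f][c]))
        removal.update(ranked[:n_remove])
    kept = [f for f in names if f not in removal]
    # (3) preliminary selection: per class pair, the feature maximizing d_separation
    counts = {}
    for c1, c2 in combinations(classes, 2):
        best = max(kept, key=lambda f: class_separation(features[f][c1],
                                                        features[f][c2]))
        counts[best] = counts.get(best, 0) + 1
    # (4) post-refinement: keep features selected at least r times
    return sorted(f for f, n in counts.items() if n >= r)
```

With the paper's settings (forty-six candidate features, three removals per class, r = 3), the same routine would run unchanged; only the input dictionaries grow.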

3.2. Proposed Classification Method

The main idea of the proposed classification method is to utilize the complementary information between the roll-invariant polarimetric features and the selected hidden polarimetric features in the rotation domain. Their combination is used as the classifier input. In order to validate the performance of the proposed classification method, both the SVM and DT classifiers [27,28] are used in this work. The flowchart of the proposed classification method is illustrated in Figure 2 and the corresponding steps are as follows.
(1)
In order to extract the roll-invariant polarimetric features of H / A n i / α ¯ , the original PolSAR data is speckle filtered. The recently reported adaptive SimiTest speckle filter [29] is adopted.
(2)
Based on the filtered coherency matrix, the total scattering power Span = T11 + T22 + T33 can be calculated.
(3)
Roll-invariant polarimetric features of entropy H , mean alpha angle α ¯ and anisotropy A n i can be extracted by Cloude-Pottier decomposition [6].
(4)
The selected hidden polarimetric features are extracted using the uniform polarimetric matrix rotation theory [23] and the visualization and characterization tool of polarimetric coherence pattern [24].
(5)
Each of these polarimetric features is normalized to the range [0, 1], and the combination of the normalized features forms the classifier input.
(6)
Through the training and validation processing of the SVM/DT classifier, the final classification results, the corresponding accuracy of each land cover, and the overall accuracy are obtained.
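Steps (5) and (6) can be sketched with scikit-learn (an assumed implementation; the paper does not specify its software stack). `classify` is a hypothetical helper that normalizes a raw feature stack, splits the labeled samples in half for training and validation as in Section 4, and returns the overall accuracy:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def classify(feature_stack, labels, use_dt=False):
    """feature_stack: (n_samples, n_features) raw polarimetric features
    (roll-invariant plus selected hidden features); labels: land cover labels.
    Returns the overall validation accuracy."""
    X = np.asarray(feature_stack, dtype=float)
    # step (5): per-feature min-max normalization to [0, 1]
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    # half of the known samples for training, the other half for validation
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, labels, test_size=0.5, random_state=0, stratify=labels)
    # step (6): train and validate the SVM or DT classifier
    clf = DecisionTreeClassifier(random_state=0) if use_dt else SVC(kernel="rbf")
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_va, clf.predict(X_va))
```

The per-class and overall accuracies reported in Section 4 would come from the same validation split, e.g. via a confusion matrix over `y_va`.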

4. Comparison Experiments

4.1. Data Description

First, NASA/JPL AIRSAR L-band PolSAR data collected over Flevoland, the Netherlands, are adopted. The range and azimuth pixel resolutions are 6.6 m and 12.1 m, respectively. The data are speckle filtered by the adaptive SimiTest approach with a 15 × 15 sliding window [29] and are shown in Figure 3a. The 15 × 15 filter sliding window size is recommended in [29] as a tradeoff between filtering performance and computational cost; the effect of the window size is investigated further in Section 4.3. The study area contains various land covers, and a ground-truth map of eleven known land covers (water, rapeseed, grasses, bare soil, potatoes, beet, wheat, lucerne, forest, peas, and stembeans) is shown in Figure 3b.
Second, NASA/JPL UAVSAR L-band multi-temporal PolSAR data collected over Manitoba, Canada, are also adopted. The range and azimuth pixel resolutions are 5 m and 7 m, respectively. Data from four acquisition dates in 2012 are used in the comparison: 17 June, 22 June, 5 July, and 17 July. With the Pauli basis, the RGB composite images of the filtered multi-temporal UAVSAR data are shown in Figure 4a–d. The adaptive SimiTest speckle filter with the recommended 15 × 15 sliding window [29] is again adopted. This study area also contains various land covers, and a ground-truth map of seven known land covers (oats, rapeseed, wheat, corn, soybeans, forage crops, and broadleaf) is shown in Figure 4e.

4.2. Selected Hidden Polarimetric Features of Different PolSAR Data

For each land cover class, 1000 random samples are used to represent the class in the feature selection processing. For the AIRSAR data, X = 11 denotes the eleven known land covers and the corresponding number of land cover pairs is Y = 55, while for the multi-temporal UAVSAR data, X = 7 and Y = 21. The preliminary selected feature sets F_preselection for the different PolSAR data are shown in Table 1. The numbers in brackets indicate for how many land cover pairs the feature yields the maximum class separation distance. For example, the selected hidden polarimetric feature γmin of |γ_HH-VV(θ)| maximizes the class separation distance for fourteen of the fifty-five land cover pairs of the filtered AIRSAR data.
Based on the preliminary selected feature sets in Table 1, we set r = 3 in the subsequent refinement processing. In this vein, features which correspond to only one or two land cover pairs are not taken into consideration. As a result, for the AIRSAR data, the final selected features are θ0_Re[T12(θ)], θ0_Im[T12(θ)], γorg of |γ_(HH+VV)-HV(θ)|, γmax of |γ_(HH−VV)-HV(θ)|, γcontrast of |γ_(HH−VV)-HV(θ)|, and γmin of |γ_HH-VV(θ)|. For the multi-temporal UAVSAR data, the union of the selected features of the different temporal data forms the final selection result, which includes θ0_Re[T12(θ)], θ0_Im[T12(θ)], γorg of |γ_(HH+VV)-HV(θ)|, γorg of |γ_(HH−VV)-HV(θ)|, γmin of |γ_(HH−VV)-HV(θ)|, and γmin of |γ_HH-VV(θ)|.
For the AIRSAR data, the four commonly adopted roll-invariant polarimetric features H/Ani/ᾱ/Span are calculated and shown in Figure 5a–d. The six selected hidden polarimetric features derived in the rotation domain are also calculated and shown in Figure 5e–j. In order to compare the land cover discrimination abilities of the six selected hidden polarimetric features and the four roll-invariant polarimetric features, the means and standard deviations of the different features for each known land cover are shown in Figure 6. Based on the four roll-invariant polarimetric features H/Ani/ᾱ/Span alone, land covers 3 (grasses) and 7 (wheat) cannot be successfully discriminated. The discrimination between land covers 3 (grasses) and 8 (lucerne), and between land covers 5 (potatoes) and 9 (forest), is also limited. In comparison, with θ0_Re[T12(θ)], land covers 3 (grasses) and 7 (wheat) can be discriminated successfully. Furthermore, with γmin of |γ_HH-VV(θ)|, better discrimination is achieved between land covers 3 (grasses) and 8 (lucerne), and between land covers 5 (potatoes) and 9 (forest). Other selected hidden polarimetric features are also able to enhance the discrimination of some land cover pairs. Thereby, the selected hidden polarimetric features can further enhance the land cover discrimination abilities and have the potential to improve the land cover classification accuracy.
Similarly, for the UAVSAR data (the data of 17 June 2012 are used as an example), the four roll-invariant polarimetric features and the six selected hidden polarimetric features are calculated and shown in Figure 7. The means and standard deviations of these features for the known land covers are shown in Figure 8. Using H/Ani/ᾱ/Span alone, land covers 1 (oats) and 2 (rapeseed), 1 (oats) and 3 (wheat), 1 (oats) and 5 (soybeans), and 3 (wheat) and 5 (soybeans) may not be successfully discriminated. In contrast, they can be discriminated by the hidden polarimetric features θ0_Im[T12(θ)], γorg of |γ_(HH+VV)-HV(θ)|, γmin of |γ_(HH−VV)-HV(θ)|, and θ0_Re[T12(θ)], respectively. This further verifies that combining the selected hidden and roll-invariant polarimetric features has better potential to enhance PolSAR classification performance.

4.3. Classification Comparison with AIRSAR Data

In order to demonstrate the added benefits from hidden polarimetric features, the proposed classification method is compared with the conventional classification method, which only uses the roll-invariant polarimetric features H/Ani/ᾱ/Span. For each known land cover in the different PolSAR data, half of the known samples are randomly selected and used for training the SVM/DT classifier, and the other half are used for validation.
First, the AIRSAR data are adopted to compare the classification performance of the conventional and proposed classification methods. Using the SVM classifier, the classification results for the AIRSAR data over the eleven known land covers are shown in Figure 9, and the classification accuracies are listed in Table 2. It can be observed that the proposed classification method outperforms the conventional one. The overall classification accuracy of the proposed classification method is 95.37%, while that of the conventional classification method is 93.87%. Moreover, for nine of the eleven land covers, the classification accuracies of the proposed classification method are higher than those of the conventional classification method. For grasses in particular, the classification accuracy increases by up to 14.35%, from 66.99% with the conventional method to 81.34% with the proposed method. The computational costs of the training and validation processing are listed in Table 3 and are comparable. Finally, the classification results over the full-scene area of the AIRSAR data with the conventional and proposed classification methods are shown in Figure 10.
In addition, the DT classifier is used. The classification results with the conventional and proposed classification methods are shown in Figure 11, and the classification accuracies are listed in Table 4. The proposed classification method still outperforms the conventional one. The overall classification accuracy of the proposed classification method is 96.38%, which is higher than the 94.12% of the conventional one. Moreover, for ten of the eleven land covers, the classification accuracies of the proposed classification method are higher than those of the conventional one. The computational costs of the training and validation processing are listed in Table 5 and are also comparable. Finally, the classification results over the full-scene area of the AIRSAR data with the conventional and proposed classification methods are shown in Figure 12. Both the conventional and proposed classification methods are pixel-based classification methods, which process pixels one by one. In order to compare the performance of the conventional and proposed classification methods, no post-processing is used. Since the misclassification rate is about 5%, the misclassified pixels produce a noisy appearance. Because the DT classifier performs better than the SVM classifier and misclassifies fewer pixels in the full-scene area, the classification results using the DT classifier in Figure 12 look less noisy than those using the SVM classifier in Figure 10. Region-based classification methods are suitable for reducing these noisy effects and will be considered in future work.
Furthermore, based on the original AIRSAR data, the overall classification accuracies with different filter sliding window sizes (7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, and 25 × 25) are examined and listed in Table 6. It is clear that, with the same classification method, the larger the filter window size, the higher the subsequent classification accuracy. However, the filter window size of 15 × 15 is chosen as a tradeoff between classification accuracy and filter computational cost. In addition, with the same filter window size, the proposed classification method still performs better than the conventional one, which further verifies its advantage.

4.4. Classification Comparison with Multi-Temporal UAVSAR Data

Using the SVM classifier, the classification results for the filtered multi-temporal UAVSAR data over seven known land covers obtained with the conventional and proposed classification methods are shown in Figure 13, and the classification accuracies are listed in Table 7. The proposed classification method clearly outperforms the conventional one: its mean overall classification accuracy over the four acquisition dates is 97.47%, much higher than the 89.59% of the conventional method. The overall accuracy gains for the four dates are 6.45% (17 June: from 90.19% to 96.64%), 6.30% (22 June: from 90.75% to 97.05%), 10.24% (5 July: from 88.03% to 98.27%), and 8.54% (17 July: from 89.39% to 97.93%). Moreover, the proposed method is more robust across the different acquisition dates. For oats, wheat, and forage crops in particular, the per-date classification accuracies of the conventional method range over 77.29–94.61%, 76.85–97.89%, and 56.36–64.51%, whereas those of the proposed method range over 94.09–97.39%, 97.79–98.88%, and 83.77–94.16%, respectively. The computational costs of the training and validation processing, listed in Table 8, are mainly comparable to or lower than those of the conventional method. Since each UAVSAR scene contains about four times as many known samples as the AIRSAR scene, the computational costs for the UAVSAR data are much higher than those for the AIRSAR data. Finally, the classification results over the full-scene area of the four UAVSAR acquisitions with the conventional and proposed classification methods are shown in Figure 14.
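As a minimal sketch of the supervised pipeline used above (train an SVM on labeled feature vectors, validate on held-out samples), the following uses scikit-learn's SVC, which wraps the LIBSVM library cited by the authors; the feature matrix and labels are synthetic stand-ins, not real PolSAR features.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: each row is one pixel's feature vector, e.g. the four
# roll-invariant features (H, Ani, mean alpha, Span) stacked with the
# selected rotation-domain features. Values and labels are synthetic.
rng = np.random.default_rng(1)
X = rng.random((600, 10))
y = np.minimum((X[:, 0] * 3).astype(int), 2)  # three separable classes

# RBF-kernel SVM; feature scaling is standard practice before SVM training.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:400], y[:400])        # training samples
pred = clf.predict(X[400:])      # validation samples
overall_acc = (pred == y[400:]).mean()
```

The proposed method differs from the conventional one only in the columns of `X`: the selected rotation-domain features are appended to the roll-invariant ones before training.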
With the DT classifier, the classification results for the multi-temporal UAVSAR data over seven known land covers are shown in Figure 15, and the classification accuracies are listed in Table 9. The proposed classification method again performs better: its mean overall classification accuracy over the four acquisition dates is 99.39%, compared with 97.55% for the conventional method. The overall accuracy gains for the four dates are 1.79% (17 June: from 97.48% to 99.27%), 1.65% (22 June: from 97.63% to 99.28%), 1.91% (5 July: from 97.65% to 99.56%), and 2.00% (17 July: from 97.45% to 99.45%). Moreover, the proposed method is again more robust across the different acquisition dates, especially for oats, rapeseed, and forage crops. The computational costs of the training and validation processing are listed in Table 10; the validation costs are comparable. Finally, the classification results over the full-scene area of the four UAVSAR acquisitions with the conventional and proposed classification methods are shown in Figure 16.
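Per-class and overall accuracies of the kind tabulated above follow directly from a confusion matrix; the sketch below uses scikit-learn's DecisionTreeClassifier on synthetic stand-in data (the real inputs would be the labeled PolSAR feature vectors).

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in features and labels (three separable classes).
rng = np.random.default_rng(2)
X = rng.random((600, 10))
y = np.minimum((X[:, 0] * 3).astype(int), 2)

clf = DecisionTreeClassifier(random_state=0).fit(X[:400], y[:400])
pred = clf.predict(X[400:])

# Per-class accuracy (diagonal over row sums) and overall accuracy
# (trace over total), as reported per land cover in the tables.
cm = confusion_matrix(y[400:], pred, labels=[0, 1, 2])
per_class_acc = cm.diagonal() / cm.sum(axis=1)
overall_acc = cm.diagonal().sum() / cm.sum()
```

The same bookkeeping applies unchanged to the SVM results; only the fitted classifier differs.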
Based on the original UAVSAR data acquired on 17 June 2012, the overall classification accuracies with different filter sliding window sizes (7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, and 25 × 25) are also investigated and listed in Table 11. The same conclusion holds as for the AIRSAR data.

5. Conclusions and Outlook

This work validates that added benefits can be gained for PolSAR data investigation by mining and utilizing hidden polarimetric information in the rotation domain along the radar line of sight. A PolSAR land cover classification method combining roll-invariant features and selected hidden features is established. With these added benefits, the land cover discrimination ability is enhanced and the classification accuracy is improved significantly. The comparison experiments based on NASA/JPL AIRSAR and multi-temporal UAVSAR data clearly demonstrate the efficiency and advantage of the proposed classification methodology. Moreover, the proposed method achieves better robustness for multi-temporal PolSAR data, and with both the SVM and DT classifiers its computational costs remain comparable to those of the conventional method. These benefits are general for PolSAR land cover classification, and the proposed technique is applicable to other kinds of PolSAR data.
This work provides a new vision for PolSAR image interpretation and application. Better feature selection schemes, region-based classification methods, and more advanced classifiers such as convolutional neural networks (CNNs) are all worth investigating in future work.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 41301490, 61490690, and 61490692.

Author Contributions

This work was prepared and accomplished by Chensong Tao, who also wrote the manuscript. Siwei Chen outlined the research and supported the analysis and discussion. He also revised the work in presenting the technical details. Yongzhen Li and Shunping Xiao both suggested the design of comparison experiments and supervised the writing of the manuscript at all stages.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flowchart of the proposed polarimetric feature selection scheme.
Figure 2. Flowchart of the proposed classification method.
Figure 3. Study area. (a) RGB composite image of the filtered AIRSAR data with Pauli basis; (b) Ground-truth map for eleven known land covers.
Figure 4. Study area. (a–d) RGB composite images of the filtered multi-temporal UAVSAR data (17 June, 22 June, 5 July, and 17 July 2012, respectively) with Pauli basis; (e) Ground-truth map for seven known land covers.
Figure 5. Roll-invariant polarimetric features and selected hidden polarimetric features for AIRSAR data. (a) H; (b) Ani; (c) ᾱ; (d) Span; (e) θ0_Re[T12(θ)]; (f) θ0_Im[T12(θ)]; (g) γ_org of |γ_(HH+VV)HV(θ)|; (h) γ_max of |γ_(HH−VV)HV(θ)|; (i) γ_contrast of |γ_(HH−VV)HV(θ)|; (j) γ_min of |γ_HHVV(θ)|.
Figure 6. Means and standard deviations comparison for AIRSAR data. Land covers 1–11 indicate water, rapeseed, grasses, bare soil, potatoes, beet, wheat, lucerne, forest, peas, and stembeans, respectively. (a) H; (b) Ani; (c) ᾱ; (d) Span; (e) θ0_Re[T12(θ)]; (f) θ0_Im[T12(θ)]; (g) γ_org of |γ_(HH+VV)HV(θ)|; (h) γ_max of |γ_(HH−VV)HV(θ)|; (i) γ_contrast of |γ_(HH−VV)HV(θ)|; (j) γ_min of |γ_HHVV(θ)|.
Figure 7. Roll-invariant polarimetric features and selected hidden polarimetric features for UAVSAR data acquired on 17 June 2012. (a) H; (b) Ani; (c) ᾱ; (d) Span; (e) θ0_Re[T12(θ)]; (f) θ0_Im[T12(θ)]; (g) γ_org of |γ_(HH+VV)HV(θ)|; (h) γ_org of |γ_(HH−VV)HV(θ)|; (i) γ_min of |γ_(HH−VV)HV(θ)|; (j) γ_min of |γ_HHVV(θ)|.
Figure 8. Means and standard deviations comparison for UAVSAR data acquired on 17 June 2012. Land covers 1–7 indicate oats, rapeseed, wheat, corn, soybeans, forage crops, and broadleaf, respectively. (a) H; (b) Ani; (c) ᾱ; (d) Span; (e) θ0_Re[T12(θ)]; (f) θ0_Im[T12(θ)]; (g) γ_org of |γ_(HH+VV)HV(θ)|; (h) γ_org of |γ_(HH−VV)HV(θ)|; (i) γ_min of |γ_(HH−VV)HV(θ)|; (j) γ_min of |γ_HHVV(θ)|.
Figure 9. Classification results for AIRSAR data over eleven known land covers using support vector machine (SVM) classifier. (a) Conventional classification method; (b) Proposed classification method.
Figure 10. Classification results over the full-scene area of AIRSAR data using SVM classifier. (a) Conventional classification method; (b) Proposed classification method.
Figure 11. Classification results for AIRSAR data over eleven known land covers using decision tree (DT) classifier. (a) Conventional classification method; (b) Proposed classification method.
Figure 12. Classification results over the full-scene area of AIRSAR data using DT classifier. (a) Conventional classification method; (b) Proposed classification method.
Figure 13. Classification results for multi-temporal UAVSAR data over seven known land covers using SVM classifier. (a1–d1) 17 June, 22 June, 5 July, and 17 July 2012 with the conventional classification method, respectively; (a2–d2) the same dates with the proposed classification method.
Figure 14. Classification results over the full-scene area of multi-temporal UAVSAR data using SVM classifier. (a1–d1) 17 June, 22 June, 5 July, and 17 July 2012 with the conventional classification method, respectively; (a2–d2) the same dates with the proposed classification method.
Figure 15. Classification results for multi-temporal UAVSAR data over seven known land covers using DT classifier. (a1–d1) 17 June, 22 June, 5 July, and 17 July 2012 with the conventional classification method, respectively; (a2–d2) the same dates with the proposed classification method.
Figure 16. Classification results over the full-scene area of multi-temporal UAVSAR data using DT classifier. (a1–d1) 17 June, 22 June, 5 July, and 17 July 2012 with the conventional classification method, respectively; (a2–d2) the same dates with the proposed classification method.
Table 1. Preliminary selected feature sets for different polarimetric synthetic aperture radar (PolSAR) data.
AIRSAR: γ_min of |γ_HHVV(θ)| (14), γ_max of |γ_(HH−VV)HV(θ)| (11), θ0_Re[T12(θ)] (9), θ0_Im[T12(θ)] (6), γ_org of |γ_(HH+VV)HV(θ)| (3), γ_contrast of |γ_(HH−VV)HV(θ)| (3), γ_max of |γ_(HH+VV)HV(θ)| (2), γ_org of |γ_HHHV(θ)| (2), γ_org of |γ_HHVV(θ)| (2), γ_max of |γ_HHVV(θ)| (2), γ_max of |γ_HHHV(θ)| (1)
UAVSAR, 17 June: θ0_Im[T12(θ)] (9), γ_org of |γ_(HH−VV)HV(θ)| (4), θ0_Re[T12(θ)] (2), γ_max of |γ_(HH−VV)HV(θ)| (2), γ_mean of |γ_(HH+VV)HV(θ)| (1), γ_contrast of |γ_(HH+VV)HV(θ)| (1), γ_min of |γ_(HH−VV)HV(θ)| (1), γ_org of |γ_HHVV(θ)| (1)
UAVSAR, 22 June: θ0_Im[T12(θ)] (9), γ_org of |γ_(HH−VV)HV(θ)| (4), γ_min of |γ_(HH−VV)HV(θ)| (3), θ0_Re[T12(θ)] (1), γ_mean of |γ_(HH+VV)HV(θ)| (1), γ_org of |γ_(HH+VV)HV(θ)| (1), γ_org of |γ_HHVV(θ)| (1), γ_min of |γ_HHVV(θ)| (1)
UAVSAR, 5 July: θ0_Im[T12(θ)] (7), θ0_Re[T12(θ)] (6), γ_min of |γ_HHVV(θ)| (3), γ_org of |γ_(HH+VV)HV(θ)| (3), γ_contrast of |γ_(HH+VV)HV(θ)| (1), γ_org of |γ_HHVV(θ)| (1)
UAVSAR, 17 July: θ0_Im[T12(θ)] (6), θ0_Re[T12(θ)] (3), γ_org of |γ_(HH+VV)HV(θ)| (3), γ_min of |γ_HHVV(θ)| (3), γ_org of |γ_(HH−VV)HV(θ)| (2), γ_max of |γ_(HH−VV)HV(θ)| (1), γ_org of |γ_HHHV(θ)| (1), γ_max of |γ_HHHV(θ)| (1), γ_max of |γ_(HH+VV)HV(θ)| (1)
Table 2. Classification accuracies (%) for AIRSAR data using support vector machine (SVM) classifier.
Classification Method | Water | Rapeseed | Grasses | Bare Soil | Potatoes | Beet | Wheat | Lucerne | Forest | Peas | Stembeans | Overall
Conventional | 97.65 | 94.89 | 66.99 | 95.84 | 92.81 | 94.89 | 96.12 | 95.89 | 92.53 | 97.85 | 98.07 | 93.87
Proposed | 98.39 | 95.38 | 81.34 | 96.75 | 93.42 | 95.76 | 97.58 | 96.33 | 94.08 | 97.76 | 97.44 | 95.37
Table 3. Computational costs (s) of the training and validation processing for AIRSAR data using SVM classifier.
Classification Method | Training | Validation
Conventional | 14.3 | 38.1
Proposed | 13.2 | 39.7
Table 4. Classification accuracies (%) for AIRSAR data using decision tree (DT) classifier.
Classification Method | Water | Rapeseed | Grasses | Bare Soil | Potatoes | Beet | Wheat | Lucerne | Forest | Peas | Stembeans | Overall
Conventional | 99.44 | 94.66 | 84.56 | 97.08 | 91.49 | 95.64 | 93.78 | 94.01 | 92.16 | 96.95 | 96.56 | 94.12
Proposed | 99.39 | 96.04 | 93.94 | 97.09 | 93.68 | 96.64 | 97.49 | 97.47 | 94.40 | 97.52 | 96.89 | 96.38
Table 5. Computational costs (s) of the training and validation processing for AIRSAR data using DT classifier.
Classification Method | Training | Validation
Conventional | 1.10 | 0.05
Proposed | 2.07 | 0.04
Table 6. Overall classification accuracies (%) for AIRSAR data with different filter sliding window sizes.
Classification Method | 7 × 7 | 9 × 9 | 11 × 11 | 13 × 13 | 15 × 15 | 25 × 25
Conventional (SVM) | 89.07 | 91.17 | 92.43 | 93.21 | 93.87 | 95.17
Proposed (SVM) | 91.93 | 93.53 | 94.41 | 94.86 | 95.37 | 96.63
Conventional (DT) | 88.66 | 91.04 | 92.39 | 93.50 | 94.12 | 95.57
Proposed (DT) | 93.06 | 94.58 | 95.32 | 96.22 | 96.38 | 97.28
Table 7. Classification accuracies (%) for multi-temporal UAVSAR data using SVM classifier.
Dates | Classification Method | Oats | Rapeseed | Wheat | Corn | Soybeans | Forage Crops | Broadleaf | Overall
17 June | Conventional | 86.37 | 91.70 | 93.63 | 96.12 | 92.64 | 62.24 | 98.47 | 90.19
17 June | Proposed | 96.72 | 96.60 | 98.06 | 98.58 | 96.59 | 88.92 | 98.49 | 96.64
22 June | Conventional | 77.29 | 93.82 | 97.89 | 97.30 | 94.14 | 61.38 | 98.05 | 90.75
22 June | Proposed | 97.21 | 97.76 | 98.88 | 98.93 | 97.68 | 83.77 | 97.75 | 97.05
5 July | Conventional | 94.61 | 99.24 | 76.85 | 99.55 | 92.31 | 56.36 | 98.63 | 88.03
5 July | Proposed | 97.39 | 99.26 | 98.58 | 99.45 | 99.35 | 90.87 | 98.60 | 98.27
17 July | Conventional | 82.98 | 92.19 | 84.76 | 99.78 | 97.38 | 64.51 | 96.86 | 89.39
17 July | Proposed | 94.09 | 99.74 | 97.79 | 99.75 | 99.47 | 94.16 | 97.20 | 97.93
Mean | Conventional | 85.31 | 94.24 | 88.28 | 98.19 | 94.12 | 61.12 | 98.00 | 89.59
Mean | Proposed | 96.35 | 98.34 | 98.33 | 99.18 | 98.27 | 89.43 | 98.01 | 97.47
Table 8. Computational costs (s) of the training and validation processing for multi-temporal UAVSAR data using SVM classifier.
Dates | Classification Method | Training | Validation
17 June | Conventional | 610.3 | 558.0
17 June | Proposed | 699.6 | 407.4
22 June | Conventional | 957.0 | 594.7
22 June | Proposed | 520.1 | 410.5
5 July | Conventional | 784.7 | 578.6
5 July | Proposed | 633.9 | 285.8
17 July | Conventional | 764.7 | 435.4
17 July | Proposed | 591.5 | 291.5
Table 9. Classification accuracies (%) for multi-temporal UAVSAR data using DT classifier.
Dates | Classification Method | Oats | Rapeseed | Wheat | Corn | Soybeans | Forage Crops | Broadleaf | Overall
17 June | Conventional | 98.98 | 95.79 | 98.59 | 98.28 | 97.09 | 94.42 | 98.71 | 97.48
17 June | Proposed | 99.56 | 98.72 | 99.55 | 99.55 | 99.25 | 98.68 | 99.12 | 99.27
22 June | Conventional | 97.66 | 97.45 | 97.87 | 99.00 | 98.24 | 92.72 | 98.46 | 97.63
22 June | Proposed | 99.55 | 99.02 | 99.47 | 99.47 | 99.46 | 97.82 | 98.97 | 99.28
5 July | Conventional | 98.39 | 99.46 | 97.16 | 99.80 | 97.64 | 91.62 | 98.64 | 97.65
5 July | Proposed | 99.46 | 99.59 | 99.66 | 99.81 | 99.78 | 98.46 | 98.77 | 99.56
17 July | Conventional | 94.88 | 97.77 | 97.06 | 99.78 | 98.61 | 95.74 | 97.94 | 97.45
17 July | Proposed | 99.23 | 99.53 | 99.34 | 99.85 | 99.83 | 98.47 | 98.47 | 99.45
Mean | Conventional | 97.48 | 97.62 | 97.67 | 99.22 | 97.90 | 93.63 | 98.44 | 97.55
Mean | Proposed | 99.45 | 99.22 | 99.51 | 99.67 | 99.58 | 98.36 | 98.83 | 99.39
Table 10. Computational costs (s) of the training and validation processing for multi-temporal UAVSAR data using DT classifier.
Dates | Classification Method | Training | Validation
17 June | Conventional | 3.89 | 0.13
17 June | Proposed | 7.21 | 0.12
22 June | Conventional | 3.67 | 0.13
22 June | Proposed | 7.37 | 0.12
5 July | Conventional | 3.33 | 0.13
5 July | Proposed | 7.06 | 0.12
17 July | Conventional | 3.17 | 0.13
17 July | Proposed | 6.54 | 0.11
Table 11. Overall classification accuracies (%) for UAVSAR data acquired on 17 June 2012 with different filter sliding window sizes.
Classification Method | 7 × 7 | 9 × 9 | 11 × 11 | 13 × 13 | 15 × 15 | 25 × 25
Conventional (SVM) | 88.23 | 88.95 | 89.46 | 89.86 | 90.19 | 91.49
Proposed (SVM) | 94.88 | 95.57 | 96.06 | 96.38 | 96.64 | 97.31
Conventional (DT) | 95.17 | 96.16 | 96.76 | 97.18 | 97.48 | 98.04
Proposed (DT) | 98.44 | 98.84 | 99.06 | 99.16 | 99.27 | 99.44
