Article

Structure Label Matrix Completion for PolSAR Image Classification

1
Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Xidian University, Xi’an 710071, China
2
Key Laboratory of Information Fusion Technology of Ministry of Education, Northwestern Polytechnical University, Xi’an 710072, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(3), 459; https://doi.org/10.3390/rs12030459
Submission received: 26 November 2019 / Revised: 24 January 2020 / Accepted: 30 January 2020 / Published: 1 February 2020

Abstract: Terrain classification is a hot topic in polarimetric synthetic aperture radar (PolSAR) image interpretation; it aims at assigning a label to every pixel, forming a label matrix for a PolSAR image. From the perspective of human interpretation, classification operates not on pixels but on decomposed perceptual groups and structures. Therefore, a new perspective of label matrix completion is proposed that treats the classification task as a label matrix inversion process. First, a matrix completion framework is built to avoid processing the large-scale PolSAR image data required by traditional feature and classifier learning. Second, a light network obtains the known labels for the completion task by uniformly down-sampling the entire image, which preserves the shape of regions in the PolSAR image and reduces the computational complexity. Finally, zeroth- and first-order label information is proposed as the prior distribution for label matrix completion to preserve structure and texture. Experiments on real PolSAR images demonstrate that the proposed method achieves excellent classification results with less computation time and fewer labeled pixels. Classification results under different down-sampling rates in the light network also confirm the robustness of the method.

Graphical Abstract

1. Introduction

Polarimetric synthetic aperture radar (PolSAR) image analysis has attracted increasing attention for its rich multi-polarization information compared with single-polarization synthetic aperture radar (SAR) [1]. PolSAR image terrain classification is a widespread application of great importance in city planning, sea monitoring, geological exploration, and plant-growth status assessment [2]. The classification task aims to assign every pixel a label corresponding to one definite terrain; the classification result is thus a label matrix, with every element corresponding to the label of one pixel in a PolSAR image. In the last few years, numerous methods have been proposed to obtain the label matrix in supervised, unsupervised, and semi-supervised ways [3,4,5,6,7]. These methods usually focus on two aspects: feature extraction, and classification (in the supervised case) or clustering (in the unsupervised case).

1.1. Related Works

In the case of feature acquisition, two alternative directions are followed according to researcher interest. First, it is well known that a PolSAR image is obtained by electromagnetic imaging. This principle gives rise to orthodox classification methods based on polarization scattering characteristics, in which statistical distributions and polarization decompositions are the dominant forms. Various probability density functions (PDFs) have been presented in [3,8,9,10,11,12] to model single-pixel and contextual distribution characteristics, and diverse polarization decomposition features can be computed with specific scattering basis sets [13,14,15] or different eigenvalue decompositions [16,17]. Second, features are extracted with machine-learning and computer-vision methods, as in [18,19,20]. More recently, following learning theory, Refs. [21,22,23,24,25] proposed to learn high-level features automatically for classification tasks.
Regarding the supervised classification task, the maximum likelihood (ML)-based Wishart classifier is the most commonly used because of its simplicity and effectiveness by modeling PolSAR data statistical character [3,26]. Next, various statistical distribution-based ML classification methods became widespread [11,19,27]. Meanwhile, machine-learning methods such as neural networks (NNs) [28], support vector machines (SVMs) [29], random forests [30], manifold-learning-based supervised graph embedding (SGE) algorithms [31], and convolutional neural networks (CNNs) [32] were proposed to make classification results robust and accurate. Unsupervised cluster methods first use Cloude–Pottier decomposition to create eight feature subspaces following the Wishart classifier [33]. Then, extensive unsupervised classification methods were based on target decomposition theories [4,15] and PolSAR data statistical theories [9,34]. Recently, machine-learning methods were introduced for unsupervised PolSAR image classification. Spectral clustering was used for various polarimetric decomposition results by [6,35,36]. In addition, image-segmentation methods such as level set [37] and simple linear iterative clustering [38] were developed for unsupervised PolSAR image classification.
Generally, supervised methods exploit labeled pixels selected by the user to learn a classifier, which then assigns each unlabeled pixel to a specific class, while unsupervised methods are used when ground-truth information is unavailable. Both approaches share the same feature-learning step, followed by classifier learning or clustering, respectively. Recently, semi-supervised classification methods have become a hot topic owing to two advantages: first, both labeled and unlabeled pixels are used to learn a model, which exploits more information and yields a better classification result [21,39]; second, most of these methods use an end-to-end framework combining feature and classifier learning, which is effective for classification and recognition [40,41,42].

1.2. Motivation and Contributions

However, the above-listed PolSAR image classification methods must extract features and classify all the pixels in one image to obtain the label matrix. Supervised classification methods need labeled pixels to learn the models, and a better result is usually obtained with more labeled pixels. It is a tedious task for PolSAR images to label a significant number of pixels [21,43]. Furthermore, unsupervised and semi-supervised classification methods suffer from a huge amount of computation, especially for large-scale PolSAR images [44,45]. In addition, it may be difficult for the existing methods to meet the processing demand of massive data collected by the PolSAR systems. Thus, many classification methods were developed by considering the contextual information of the label matrix to alleviate the labor-intensive labeling. The representative way is an additional prior distribution with a Markov random field (MRF), and better classification results are obtained [11,21,44,46]. However, traditional methods with contextual information still need heavy computation for a large number of pixels in one PolSAR image [21,46].
From the perspective of human interpretation, classification operates not on pixels but on decomposed perceptual groups and structures. Because pixels in a consistent region belong to the same class with high probability, feature and classifier learning for all the pixels in that region is redundant. From this point of view, this paper proposes realizing the PolSAR classification task as label matrix completion. Matrix completion is an inverse process that fills in the missing entries of a matrix from a few known elements, and it can be performed accurately given a prior distribution over all the elements. The classification task thus becomes a matrix completion problem with a solid mathematical foundation, as in [47]. The key questions in label matrix completion are how to obtain the known labels and the prior distribution of the entire label matrix for one PolSAR image. Regarding the known labels, a matrix can be completed more accurately when more of its elements are known. In this paper, we obtain the known labels from a uniform down-sampling of the PolSAR image, so that the traditional pipeline of feature and classifier learning is performed on far fewer pixels. Meanwhile, uniform sampling preserves the region structure of the original PolSAR image. Moreover, to retain texture and structure information, the labels of the down-sampled image are transferred to the entire label matrix by zeroth- and first-order label prior distributions. The zeroth-order prior term fixes the label at the corresponding location of the entire image. The first-order prior term describes the structure information of region consistency and the sharp boundaries between different regions.
As a novel approach, in this paper, a label matrix completion method for PolSAR image classification is proposed, and the main contributions are the following.
  • First, to the best of our knowledge, label matrix completion is introduced for PolSAR image classification for the first time. Label matrix completion solves a matrix inverse problem to obtain the classification result, instead of learning a map from PolSAR data to the label field.
  • Second, a uniformly down-sampled PolSAR image is used to obtain the known labels for the entire label matrix. In this way, the heavy computational burden of semi-supervised and unsupervised classification methods is relieved.
  • Third, zeroth- and first-order label prior distributions are proposed to complete the entire label matrix from the known labels of the sampled PolSAR image. With this label prior distribution, the final label matrix can be obtained with fewer known labels than traditional classification methods require, which yields state-of-the-art classification performance in both accuracy and computation time.
The rest of this paper is organized as follows. In Section 2, the proposed structure label matrix completion method for PolSAR image classification tasks is introduced in detail. In Section 3, the optimization process and analysis of the model and computation complexity are illustrated to demonstrate the practicability of the proposed method. The analysis of parameters and experimental results on PolSAR images are presented in Section 4 to validate the classification effectiveness. Conclusions are drawn and planned future work described in Section 5.

2. Proposed Structure Label Matrix Completion Method for PolSAR Image Classification

To describe the proposed method, the PolSAR classification task is first reformulated as a label matrix completion problem, i.e., a matrix inverse equation. Then, a uniformly down-sampled PolSAR image is introduced to obtain the known labels. Finally, the entire label matrix is completed from the known labels using the zeroth- and first-order label prior distributions so as to retain the region and structure information.

2.1. From Classification to Label Matrix Completion

Generally, let $\mathbf{X} \in \mathbb{R}^{h \times w \times d}$ be a PolSAR image, with h, w, and d denoting its height, width, and the dimension of every pixel, respectively. First, labeled pixels and their corresponding labels are selected randomly to form the subset $\{\mathbf{X}_L, \mathbf{Y}_L\}$, while the remaining pixels are unlabeled and denoted $\{\mathbf{X}_U, \mathbf{Y}_U\}$. The traditional PolSAR image classification task aims to assign each unlabeled pixel a proper label, producing an estimated classification map matrix $\mathbf{Y}_U$.
The traditional classification method consists of PolSAR image feature transformation or learning, as well as classifier learning. With the learned determinate model, the classification map of unlabeled pixels will be predicted. This process is illustrated as follows:
$$\max_{\mathbf{Y}, \Theta, \Phi} \; p(\mathbf{Y}_U, \Theta \mid \mathbf{X}_U)\, p(\Theta \mid \mathbf{F}_L, \mathbf{Y}_L)\, p(\mathbf{F}, \Phi \mid \mathbf{X}),$$
where $\Phi$ and $\Theta$ are the designed feature and classifier models, respectively. This process must extract features from, and classify, all the pixels in one PolSAR image, which imposes a large computational burden. Furthermore, classification performance may vary widely across different PolSAR images. In fact, it is not necessary to extract features and classify every pixel in a homogeneous region, because pixels in one region usually belong to the same terrain, and such redundant operations cost much computation time. Therefore, a new perspective inspired by label matrix completion is considered to reduce the computational overhead of PolSAR image classification. In the following, basic knowledge about matrix completion is introduced first.
Suppose $\hat{\mathbf{Y}}$ is a matrix of size $w \times d$ of which only a subset of entries $\Omega_{obs} \subseteq [w] \times [d]$ is known, where $[w]$ denotes $\{1, \ldots, w\}$. Defining the projection operator $P_{\Omega_{obs}}$ by $P_{\Omega_{obs}}(\mathbf{B}) = \mathbf{C}$, where $C_{ij} = B_{ij}$ if $(i, j) \in \Omega_{obs}$ and $C_{ij} = 0$ if $(i, j) \notin \Omega_{obs}$, for any $w \times d$ matrix $\mathbf{B} = (B_{ij})_{i \in [w],\, j \in [d]}$, the following is a standard formulation for matrix completion [47]:
$$\max_{\mathbf{Y}} \; p(\mathbf{Y}) \quad \text{s.t.} \quad \mathrm{Dis}\big( P_{\Omega_{obs}}(\hat{\mathbf{Y}}),\, P_{\Omega_{obs}}(\mathbf{Y}) \big) \le e,$$
where $p(\mathbf{Y})$ encodes any information known about the recovered matrix, $e > 0$, and $\mathrm{Dis}\big( P_{\Omega_{obs}}(\hat{\mathbf{Y}}), P_{\Omega_{obs}}(\mathbf{Y}) \big)$ is the distance loss used to measure the fitness of the recovered matrix $\mathbf{Y}$ to the observed matrix $\hat{\mathbf{Y}}$. Note that, because noisy labels may exist in the measurement matrix $\hat{\mathbf{Y}}$, the constraint allows a slight discrepancy between the recovered and observed matrices.
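The projection operator $P_{\Omega_{obs}}$ defined above is simple to realize in practice. The following minimal NumPy sketch illustrates it; the matrix values and the observed index set are invented for the example:

```python
import numpy as np

def project_omega(B, mask):
    """P_Omega: keep the observed entries of B, zero out the rest."""
    return np.where(mask, B, 0.0)

# Toy 4x3 matrix with two observed entries (invented for illustration).
B = np.arange(12, dtype=float).reshape(4, 3)
mask = np.zeros((4, 3), dtype=bool)
mask[0, 1] = mask[2, 0] = True

C = project_omega(B, mask)  # C equals B on the observed set, 0 elsewhere
```

In the classification setting, `mask` marks the pixel positions whose labels are known, and `B` holds the label scores.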
For PolSAR image classification, the observed matrix $\hat{\mathbf{Y}}$ is formed by placing the known pixel labels at their corresponding positions. Three components must be designed to perform classification within the label matrix completion framework. The known pixels in the observed matrix $\hat{\mathbf{Y}}$ are obtained from a light classification network applied to a down-sampled PolSAR image. Then, the prior term of the label matrix is designed using texture and structure information. Finally, a squared loss function is chosen for the distance measure $\mathrm{Dis}$, resulting in the following constrained formulation:
$$\min_{\mathbf{Y}} \; \big\| P_{\Omega_{obs}}(\hat{\mathbf{Y}}) - P_{\Omega_{obs}}(\mathbf{Y}) \big\|_F^2 - \log p(\mathbf{Y}),$$
where $\| \cdot \|_F$ is the Frobenius norm, $\| \mathbf{A} \|_F = \sqrt{\sum_{ij} A_{ij}^2}$ (the square root of the sum of squares of all elements of the matrix). In the following, the learning of the label prior $p(\mathbf{Y})$ and of the observed label matrix $\hat{\mathbf{Y}}$ is described in detail.
In this paper, a sampling method is introduced in the input domain of the classification model to alleviate labor-intensive labeling and the heavy computational burden caused by highly redundant pixels. Specifically, a smaller image $\hat{\mathbf{X}} \in \mathbb{R}^{\hat{h} \times \hat{w} \times d}$ of the entire PolSAR image $\mathbf{X}$ is obtained by uniform down-sampling, as shown in Figure 1. To reduce the computational burden, this smaller image is classified to obtain a label matrix $\hat{\mathbf{Y}}$; far fewer pixels are processed than in the traditional approach, which extracts features and learns parameters from the entire image. The classification result of the down-sampled image is then transferred to the label matrix $\mathbf{Y}$ of the entire image. On the one hand, the down-sampled image needs fewer labeled pixels and less computation time to train the model. On the other hand, the region structure information of the entire image is retained in the sampled image. Thus, the loss should be designed to transfer all label information from the known labels without missing any important part.
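Uniform down-sampling of this kind is a plain strided slice over the image grid. The sketch below illustrates the idea and the index mapping back to the entire image; the image dimensions are invented, and the light network that classifies the small image is omitted:

```python
import numpy as np

def uniform_downsample(X, r):
    """Keep every r-th pixel along height and width (sampling rate 1/r^2).
    Sampled pixel (i, j) corresponds to location (r*i, r*j) in the entire
    image, so region shapes are preserved, unlike random sampling."""
    return X[::r, ::r, :]

X = np.random.rand(8, 12, 9)        # toy h x w x d PolSAR feature cube
X_small = uniform_downsample(X, 2)  # keeps 1/4 of the pixels
```

The same index mapping `(i, j) -> (r*i, r*j)` is what later places the predicted labels of the small image into the observed label matrix of the entire image.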

2.2. Completion of Label Matrix by Matrix Completion

Label matrix completion requires two key ingredients: the known labels and the prior distribution of the entire label matrix. As shown in the framework depicted in Figure 2, the light network processes a uniformly down-sampled PolSAR image, as in Section 2.1, to obtain the known labels. The prediction result is then used to complete the entire label matrix with a prior distribution built from zeroth- and first-order terms so as to keep the texture and structure information; the specific forms of the zeroth- and first-order information are introduced below. In the left-hand part of Figure 3, the green squares are the pixels classified by the light network for the uniformly down-sampled matrix of the entire image, while the white squares are pixels to be completed from the known ones represented by the green squares. Because the green squares are transferred from the sampled image to the corresponding indices of the entire image, this term is named the zeroth-order prior.
Regarding the first-order prior distribution, the structure and contextual information of the entire PolSAR image is considered. In this paper, a two-part structure prior is designed based on the facts that pixels in the same region belong to the same class and that the boundary between different regions is distinct. On the one hand, to make pixels in the same region share the same label, a spatial smoothness term is imposed via an MRF prior. On the other hand, a boundary between different regions means the labels on its two sides differ, so a sharp edge is modeled with a Laplace prior, as shown in the right-hand part of Figure 3. Specifically, we formulate the objective function for learning our label prior $p(\mathbf{Y})$ as
$$p(\mathbf{Y}) \propto \exp\big( -Z(\mathbf{Y}_d, \hat{\mathbf{Y}}) - \alpha F(\mathbf{Y}_d, \mathbf{Y}) \big),$$
where $\mathbf{Y}_d$ is the predicted label matrix for the uniformly down-sampled PolSAR image, whose i-th vector is the one-hot label of pixel i. $Z$ denotes the zeroth-order prior information for the entire image: it fills the entire label matrix $\mathbf{Y}$ with $\mathbf{Y}_d$ at the corresponding location indices of the entire image, with zeros at the remaining locations, which yields the label matrix $\hat{\mathbf{Y}}$ in Equation (3). $F$ is the first-order information transferred from the sampled image to the entire label matrix, which contains the spatial and contextual information of the label matrix. Specifically, we define $F(\mathbf{Y}_d, \mathbf{Y})$ as
$$F(\mathbf{Y}_d, \mathbf{Y}) = F_p(\mathbf{Y}_d) + F_c(\mathbf{Y}_d, \mathbf{Y}) = \sum_{i=1}^{N} \sum_{j \in N_i} \big( \| \mathbf{Y}_i - \mathbf{Y}_j \|_F^2 - L(\mathbf{Y}) \big),$$
where $N_i$ denotes the eight-neighborhood index set of pixel i, and $\mathbf{Y}$ is the logits output of the corresponding network. The first term enforces spatial consistency, i.e., elements at neighboring locations take the same label, while $L(\mathbf{Y})$ is the Laplace variance that sharpens edges. This term is defined as a quadratic function of $\mathbf{Y}$; being differentiable, it can easily be integrated into the learning process. It is given by [48]
$$L(\mathbf{Y}) = \frac{1}{N^2 - 1} \sum_i \left( Y_i - \frac{\sum_i Y_i}{N^2} \right)^2.$$
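The behavior of the two first-order terms can be illustrated numerically. The sketch below assumes a single-channel label-score map and a 4-neighbour Laplacian; the exact normalization in the paper follows [48], so treat this as an approximation of the idea, not the paper's precise operator:

```python
import numpy as np

def smoothness_term(Y):
    """MRF-style spatial smoothness: sum of squared differences between
    neighbouring label scores (zero inside a perfectly uniform region)."""
    return float((np.diff(Y, axis=0) ** 2).sum() + (np.diff(Y, axis=1) ** 2).sum())

def laplace_response(Y):
    """3x3 (4-neighbour) Laplacian response computed with padded shifts."""
    Yp = np.pad(Y, 1, mode='edge')
    return (Yp[:-2, 1:-1] + Yp[2:, 1:-1]
            + Yp[1:-1, :-2] + Yp[1:-1, 2:] - 4.0 * Y)

def laplace_variance(Y):
    """Variance of the Laplacian response: large when boundaries are sharp."""
    P = laplace_response(Y)
    return float(((P - P.mean()) ** 2).mean())

flat = np.zeros((4, 4))   # uniform region: both terms vanish
edge = np.zeros((4, 4))
edge[:, 2:] = 1.0         # vertical step edge: both terms respond
```

A flat map gives zero for both terms, while a step edge produces a positive smoothness penalty and a positive Laplace variance, which is exactly the trade-off the prior balances.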
The light network in Figure 2 operates on the uniformly down-sampled PolSAR image obtained from the entire image, which results in a lower computational burden. Here, a semi-supervised method combining labeled and unlabeled pixels is used in the light network for a better classification result. Given its excellent classification performance, the method proposed for imbalanced PolSAR images in our preliminary work [24] is used. There, a cost-sensitive latent-space learning network is built on a parametric feature- and classifier-learning framework for imbalanced PolSAR images, where classifier and latent-space learning are defined as optimizing the posterior and likelihood functions for labeled pixels, respectively. On the one hand, the down-sampled PolSAR image has fewer samples in the minor classes, resulting in a more imbalanced data distribution than in the entire image. On the other hand, this classification method achieves accurate and robust results for both balanced and imbalanced PolSAR images.

2.3. Model Analysis

The label matrix completion task in Equation (3) consists of two crucial parts: known-label learning and matrix completion with the label prior distribution. To keep the region and structure of the entire PolSAR image, the known labels are obtained by classifying a uniformly down-sampled PolSAR image. The sampled image has the same structure as the entire PolSAR image but far fewer pixels to process. Notably, the classification method for this down-sampled image can be any supervised, unsupervised, or semi-supervised method with good results, because it is merely a way to obtain the known labels for completing the entire label matrix. The labels obtained from the uniformly down-sampled image constitute the zeroth-order information used to complete the entire label matrix; with the first-order prior information, a smooth, edge-preserving label matrix is obtained.
The proposed method in this paper is a new framework built on the innovative perspective of label matrix completion. Instead of designing a map function from the PolSAR data field to the label field [6,21,36,49], a label matrix inverse recovery method is used for the PolSAR image classification task to avoid the tedious learning of features and classifiers. Uniform down-sampling of the entire image retains the structure and shape information of the terrains in a PolSAR image better than random sampling, and the PolSAR data of the down-sampled image preserve the physical characteristics of PolSAR data. Most importantly, the classification model adopted for the light network can be any method that works effectively. This characteristic increases the generalization of the proposed model to different PolSAR images and makes the proposed method a new baseline for PolSAR image classification from the perspective of label matrix completion.

3. Optimization and Complexity Analysis

In this section, a detailed optimization scheme is presented for the proposed classification model. First, a maximum a posteriori (MAP) conversion of the probabilistic Equation (4) is derived to update the variables and parameters of the method. Then, the computational complexity of each step of the proposed model is discussed (Algorithm 1).
Algorithm 1 Optimization of the proposed method.
Input:   PolSAR image raw feature data, down-sampled PolSAR image index $I_n$, light classification network, and maximum number of iterations Maxiter.
   Main Loop:
   Compute the prior distribution $p(\mathbf{Y})$ and observed label matrix $\hat{\mathbf{Y}}$ with the light classification network on the down-sampled PolSAR image to obtain $\mathbf{Y}_d$
   for t = 1 to Maxiter do
     Compute the observed label matrix $\hat{\mathbf{Y}}$ of observed information with index $I_n$
     Compute the Laplace term of the prior information with Equation (9)
     Compute the final label matrix $\mathbf{Y}$ with Equation (8)
     Compute the predicted labels by maximizing Equation (8)
   end for
Output:  Classification result of the predicted labels.

3.1. Optimization Process

To solve Equation (4), a two-stage process is obtained by minimizing an error function for label prior distribution learning and matrix completion. The final loss function is obtained by combining Equations (3)–(6) as
$$\min_{\mathbf{Y}} \; \big\| P_{\Omega_{obs}}(\hat{\mathbf{Y}}) - P_{\Omega_{obs}}(\mathbf{Y}) \big\|_F^2 + \alpha \sum_{i=1}^{N} \sum_{j \in N_i} \left( \| \mathbf{Y}_i - \mathbf{Y}_j \|_F^2 - \frac{\beta}{N^2 - 1} \sum_i \left( Y_i - \frac{\sum_i Y_i}{N^2} \right)^2 \right) \quad \text{s.t.} \quad \hat{\mathbf{Y}} = Z(\mathbf{Y}_d),$$
where $\hat{\mathbf{Y}} = Z(\mathbf{Y}_d)$ denotes filling an observed label matrix with $\mathbf{Y}_d$ at the corresponding down-sampling locations, and $\alpha$ and $\beta$ are weighting coefficients indicating the importance of the two first-order prior terms. First, $\mathbf{Y}_d$ is obtained with the light classification network. Then, the label matrix completion task reduces to solving Equation (8), which is a differentiable quadratic function with a closed-form solution for $\mathbf{Y}$. The solution for pixel i is [21,48]:
$$\mathbf{Y}_i = P_{\Omega_{obs}}\big( Z(\mathbf{Y}_d) \big) + \sum_{j \in N_i} \Big( \alpha \mathbf{Y}_j - \beta \big( q_i - \sum_{j \in N_i} q_j \big) \Big),$$
where $q_i = \frac{2}{N^2 (N^2 - 1)} \big( N^2 p_i - \sum_i p_i - \sum_m \big( p_m - \frac{\sum_m p_m}{N^2} \big) \big)$, $\mathbf{P} = [p_i]$, and $\mathbf{P}$ is obtained by convolving $\mathbf{Y}$ with a $3 \times 3$ Laplace operator $\mathbf{L}$; $p_i$ is expressed by
$$p_i = Y_i - \sum_{j \in N_i} Y_j.$$
The final classification label of pixel i is the index of the maximum value of the solution in Equation (8).
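To make the completion step concrete, the sketch below implements a deliberately simplified version of the update with $\beta = 0$ (no Laplace term): observed entries from the down-sampled prediction stay fixed while unobserved label scores are repeatedly replaced by their neighbourhood average, after which each pixel takes the argmax label. The toy image, the iteration count, and the 4-neighbourhood are illustrative choices, not the paper's exact closed form:

```python
import numpy as np

def complete_labels(Y_obs, mask, n_iter=200):
    """Simplified label matrix completion: diffuse the known one-hot label
    scores into unobserved pixels, keeping the zeroth-order prior fixed."""
    Y = Y_obs.copy()
    for _ in range(n_iter):
        Yp = np.pad(Y, ((1, 1), (1, 1), (0, 0)), mode='edge')
        nbr_mean = (Yp[:-2, 1:-1] + Yp[2:, 1:-1]
                    + Yp[1:-1, :-2] + Yp[1:-1, 2:]) / 4.0
        Y = np.where(mask[..., None], Y_obs, nbr_mean)  # keep observed labels
    return Y.argmax(axis=-1)  # final label: index of the maximum score

# Toy 6x6 two-class example: labels observed on a coarse uniform grid,
# class 0 on the left, class 1 on the right (all values invented).
h = w = 6
Y_obs = np.zeros((h, w, 2))
mask = np.zeros((h, w), dtype=bool)
for i in range(0, h, 2):
    for j in range(0, w, 2):
        mask[i, j] = True
        Y_obs[i, j, 0 if j < 3 else 1] = 1.0

labels = complete_labels(Y_obs, mask)
```

Even this stripped-down diffusion recovers the two half-plane regions from a handful of uniformly sampled labels, which is the intuition behind the full update with the Laplace term.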

3.2. Complexity Analysis

Considering the optimization scheme of the proposed model in the preceding section, the overall complexity comprises two parts: known-label learning with the light network and label matrix completion. For the known-label learning task, the computational complexity of every layer is $O\big( (\hat{h}\hat{w})^2 E^2 d_l d_{l-1} \big)$, where $E$ is the size of the convolution kernel and $d_l$ the number of latent units in layer $l$. The computational complexity relies heavily on the size of the down-sampled PolSAR image, which is uniformly sampled from the entire image; the complexity is lower when the down-sampling rate is smaller. Regarding label matrix completion, the computational cost of Equation (7) is $O\big( (hw)^2 \big)$. In theory, the total computational complexity of the proposed method is therefore acceptable.

4. Results

In this section, the proposed structure label matrix completion method is evaluated in several experiments. First, the effectiveness of the proposed method on a PolSAR image is investigated under different down-sampling rates. Next, the effectiveness of the structure prior distribution for the entire label matrix is validated. Finally, the proposed label matrix completion with a structure prior is evaluated on two different PolSAR datasets to establish the superiority of the framework. All experiments are conducted on a desktop PC with an Intel i5 CPU and 16 GB of memory in the TensorFlow environment. In the following, the total accuracy is the number of correctly classified pixels divided by the total number of pixels. The classification accuracy and kappa coefficient on the two PolSAR datasets are given in Tables 3 and 5, respectively. The overall accuracy and kappa coefficient are calculated as [21,26]:
$$OA = \frac{N_c}{N}, \qquad \kappa = \frac{N \sum_k n_{kk} - \sum_k n_{:,k}\, n_{k,:}}{N^2 - \sum_k n_{:,k}\, n_{k,:}},$$
where $N_c$ denotes the number of correctly labeled pixels and $N$ the total number of pixels. $n_{kk}$ is the k-th diagonal element of the confusion matrix, and $n_{k,:}$ and $n_{:,k}$ are the sums of the k-th row and k-th column of the confusion matrix, respectively; they count the pixels of class k assigned to the k-th class and the pixels assigned to class k from all classes.
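The overall accuracy and kappa coefficient defined above can be computed directly from a confusion matrix; a small sketch follows (the toy confusion matrix is invented for illustration):

```python
import numpy as np

def oa_and_kappa(conf):
    """Overall accuracy and kappa coefficient from a confusion matrix
    (rows: reference class, columns: predicted class)."""
    conf = np.asarray(conf, dtype=float)
    N = conf.sum()                                        # total pixel count
    oa = np.trace(conf) / N                               # N_c / N
    chance = (conf.sum(axis=0) * conf.sum(axis=1)).sum()  # sum_k n_{:,k} n_{k,:}
    kappa = (N * np.trace(conf) - chance) / (N ** 2 - chance)
    return oa, kappa

conf = [[50, 2], [3, 45]]   # toy 2-class confusion matrix
oa, kappa = oa_and_kappa(conf)
```

Kappa discounts the agreement expected by chance, so it is lower than the overall accuracy whenever the class marginals are informative.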

4.1. Parameter Analysis

Information on the PolSAR dataset tested here is shown in Figure 4. Figure 4a is the Pauli RGB image of Flevoland acquired by NASA/JPL AIRSAR in 1989, an L-band four-look PolSAR dataset of size 750 × 1024 with a resolution of 12 × 6 m. Figure 4b,c show the ground-truth image and all terrains in this dataset, respectively, which includes 15 different classes. Here, the 1/4 down-sampled image is used for semi-supervised classification with the light network and for completing the label matrix of the entire image.
Parameters play an important role in classification performance and are usually chosen within a range. The parameters to be tested are the coefficients α and β of the MRF term and the Laplace term, which aim to retain smoothness within a region and the edge between two regions, respectively. In our experiments, we vary α from $10^{-3}$ to 10 and β from $10^{-10}$ to $10^{-1}$, each in multiplicative steps of 10. Classification accuracy versus the different parameter values is shown in Figure 5. On the one hand, the classification accuracy is almost unchanged as β varies, which shows that the Laplace term is robust in the proposed method. On the other hand, classification accuracy increases as α varies from $10^{-3}$ to 1 and decreases from 1 to 10, reaching its highest value at α = 1. Overall, the classification accuracy differs little across values of α. Thus, in the following experiments, the chosen parameter values are 1 for α and $10^{-4}$ for β.

4.2. Classification Performance with Different Down-Sampling Rates

To validate the robustness of the label matrix completion method, down-sampled PolSAR images with different down-sampling rates were tested on the Flevoland dataset shown in Figure 4. In Table 1, the numbers of pixels in the down-sampled images are given to illustrate the computational cost relative to the entire image. First, the classification results are shown in Figure 6a–d for down-sampling rates of 1/4, 1/9, 1/16, and 1/25, respectively. Then, a prior matrix for the entire image is obtained by placing the labels at the corresponding locations in the large PolSAR image, as shown in Figure 6e–h. Finally, the label matrix completion results are shown in Figure 6i–l.
Table 1 shows that far fewer samples need to be classified after the down-sampling operation, which lowers the computational and storage costs and enables a fast classification process. The computation times for the different down-sampled images and the entire image are given in Table 2, which shows that the down-sampled images require less time than the entire image. From the classification accuracies in Table 2, satisfactory values are obtained at sampling rates of 1/4, 1/9, and 1/16: the accuracies of the corresponding label matrix completion results are 99%, 97%, and 96%, respectively. Even though only 0.6% of the pixels are used for training, the classification accuracy remains high. More importantly, even at the 1/25 down-sampling rate, the classification accuracy remains good for almost all categories. Where accuracy is poor, it is because aggressive down-sampling significantly destroys the structure information of the entire image; most visibly, pixels in the minor class Building cannot be classified correctly, which also lowers the accuracy at the 1/16 and 1/9 sampling rates. However, the reduction in accuracy is small compared with the savings in computation time shown in Table 1.
The runtimes in Table 1 were measured on the down-sampled images and the entire image with the same light classification network setting as in [50]; the classification results correspond to Figure 6 and Table 2. The computation times of the different down-sampled images are roughly proportional to the sampling rates. The 1/25 down-sampling-rate image takes only 13.765 s, far less than the 344 s needed for the whole image, and the 1/4 down-sampled data take only about a quarter of the time needed for the whole image. Similar results are obtained with the other down-sampled PolSAR images. Therefore, with a suitable light network, a huge PolSAR image can reach a satisfactory classification result with much less computation time and storage. This makes the proposed method widely applicable: any excellent classification method can serve as the light network that processes the down-sampled PolSAR images.
Regarding the visual classification results of the proposed label matrix completion method, experiments were conducted on images with different down-sampling rates. The visual results on the down-sampled images are satisfactory, as shown in Figure 6a–d, with correct structure and smooth regions; almost every terrain is classified correctly in these images. In the completed label prior maps in Figure 6e–h, the structure of the regions is retained thanks to the uniform down-sampling strategy with adaptive prior learning. The completed label prior image varies greatly with the sampling rate, as more information is available at a high sampling rate, as in Figure 6e. Thus, the final completed label matrix at a high sampling rate is better than at a low one. Figure 6i shows better performance in keeping the structure of regions, while the label matrix in Figure 6l, at a 1/25 sampling rate, exhibits obvious over-smoothing, where unconnected regions become continuous, such as the terrain Grass. The results at 1/9 and 1/16 sampling rates are satisfactory in keeping region structure and smoothness. In general, the performance of label matrix completion for PolSAR image terrain classification is excellent in accuracy, visual results, and computation time.
To show the robustness of the proposed method with different labeled data and sampling rates, box plots for the different sampling rates are given in Figure 7. The plots show that the classification accuracy increases with the sampling rate, because a higher accuracy is obtained with more labeled data. For a fixed sampling rate, the accuracy is stable across runs, and its variance decreases as the sampling rate grows from 1/25 to 1/4; the accuracy is lowest at the 1/25 sampling rate. Furthermore, the completion results vary more when fewer labels are known, since the inversion in the matrix completion problem is then less constrained.
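The spread summarized by the box plots can be reproduced with simple per-rate statistics over repeated runs; the accuracy values below are made up for illustration, not taken from the experiments:

```python
import statistics

# Hypothetical overall accuracies for five runs at two sampling rates.
runs = {
    "1/25": [0.925, 0.931, 0.928, 0.934, 0.922],
    "1/4":  [0.990, 0.991, 0.989, 0.990, 0.992],
}

for rate, accs in runs.items():
    mean = statistics.mean(accs)        # center of the box
    var = statistics.pvariance(accs)    # spread across runs
    print(f"{rate}: mean={mean:.3f}, variance={var:.2e}")
```

With more labeled pixels (the 1/4 rate) the mean rises and the variance shrinks, which is the trend the box plots in Figure 7 exhibit.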

4.3. Structure Prior for Label Matrix Completion

The effectiveness of the structure prior information for label matrix completion was validated. All of the experiments were carried out on the PolSAR image Flevoland from NASA/JPL AIRSAR with a 1/16 down-sampling rate. The classification visual results and accuracy values are given in Figure 8. Figure 8a is the result on the down-sampled image, where the accuracy value is 94.94%. Filling the down-sampled labels back into the entire image as in Figure 8b (zeroth-order information only), the overall accuracy value is just 5%, since only 1/16 of the pixels carry labels; nevertheless, the structure of the regions is retained by the uniform sampling, as Figure 8b shows. The classification result with the MRF prior in Figure 8c is therefore good in terms of both the visual result and the accuracy value, but without the Laplace prior the boundaries between regions are not smooth. The result with the entire structure prior in Figure 8d is the best: every region is homogeneous and the boundaries are smooth. This is most obvious in the regions outlined by red boxes in Figure 8c,d.
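To make the two ingredients concrete, the toy sketch below separates them: a zeroth-order step scatters the down-sampled labels back onto the full grid (the situation of Figure 8b), and a nearest-known-neighbour vote then stands in for the first-order smoothing term. This is a simplified illustration under assumed helper names, not the paper's MRF/Laplace formulation:

```python
import numpy as np

def zeroth_order_fill(small_labels, stride, full_shape):
    """Scatter known labels from the down-sampled grid back onto the full grid."""
    full = np.full(full_shape, -1)            # -1 marks unknown pixels
    full[::stride, ::stride] = small_labels
    return full

def nearest_known_complete(partial):
    """Crude first-order completion: copy the nearest known label to each pixel."""
    h, w = partial.shape
    known = np.argwhere(partial >= 0)         # coordinates of known labels
    out = partial.copy()
    for i in range(h):
        for j in range(w):
            if out[i, j] < 0:
                d = np.abs(known - [i, j]).sum(axis=1)   # Manhattan distance
                r, c = known[d.argmin()]
                out[i, j] = partial[r, c]
    return out

# Two homogeneous regions, observed only on a stride-2 grid.
small = np.array([[0, 0, 1],
                  [0, 0, 1],
                  [0, 1, 1]])
partial = zeroth_order_fill(small, 2, (6, 6))
completed = nearest_known_complete(partial)
print(completed)
```

The zeroth-order fill alone labels only 1/4 of this toy grid (mirroring the low overall accuracy of Figure 8b), while the neighbourhood step propagates the region structure to every pixel.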

4.4. Classification Performance on PolSAR Data Sets

It has been shown that the proposed label matrix completion method for PolSAR image classification obtains satisfactory results with low computation times under different down-sampling rates. The classification performance, in terms of visual results and accuracy values, was compared with several state-of-the-art methods, whose results were taken from the cited papers.

4.4.1. Dataset Flevoland from NASA/JPL AIRSAR

The methods compared here on the Flevoland dataset include two former works, the variational mixture Wishart (VMW) [26] and robust semi-supervised (RS) [21] methods; a traditional machine-learning classifier, SVM [29]; two CNN-based methods, a real-valued CNN (RV-CNN) [32] and a complex-valued CNN (CV-CNN) [51]; and a graph semi-supervised (GSS) method [44]. CLSL is our former cost-sensitive latent space learning model for imbalanced PolSAR images [50]. The proposed SLMC (structure label matrix completion) classifies a down-sampled PolSAR image with the CLSL method and then performs label matrix completion. All of the compared methods are semi-supervised PolSAR classifiers, comprising pixel- and region-based methods. The classification results for these methods are shown in Figure 9, and the accuracy values are given in Table 3.
The image in Figure 9a is the classification result of VMW, a pixel-based method that is heavily influenced by the speckle noise in the PolSAR data. A similar situation occurs in the classification result of SVM, shown in Figure 9b. The RS method in Figure 9c gives a better result owing to its feature-learning process on the original data. The results in Figure 9d,e are classified by the RV-CNN and CV-CNN methods, respectively; both obtain smooth regions by learning spatial features of each pixel together with its neighbors, and both exhibit good classification results. By computing a graph over the entire image, the graph-based GSS method in Figure 9f also obtains an excellent classification result; however, graph-based methods are heavily limited by the graph computation for the entire image, which increases the computational and storage burden. In Figure 9g, the classification result of our former CLSL method is good for an imbalanced PolSAR image. The last image, Figure 9h, is the result of the proposed label matrix completion method with a 1/4 down-sampling rate, which realizes an excellent classification result with less time and storage.
Regarding the classification accuracy in Table 3, a similar conclusion can be drawn. First, the graph-based and deep-network-based methods obtain overall classification accuracies 33 to 51 percentage points higher than the pixel-based VMW method, which uses a fixed 300 training pixels for every class. The proposed method obtains the same accuracy of 99% with half the training pixels used in the GSS method and a quarter of those used in the CLSL method. Compared with the deep-network-based classification methods, e.g., RV-CNN, CV-CNN, and SRDNN, the proposed method obtains an accuracy value 3–4 percentage points higher.
Furthermore, as shown in Table 4, only about 1/4 of the computation time of the other methods is needed, owing to the down-sampling of the entire image. Semi-supervised methods such as VMW, RS, RV-CNN, and CLSL need feature learning for all the pixels with a parameterized network. The SRDNN method costs more time than these because of an extra super-pixel constraint on the network. Graph-based semi-supervised methods, e.g., SAG and GSS, must additionally compute a relation matrix over all the pixels and therefore cost the most time. The proposed method only processes the down-sampled pixels, resulting in less computation time and space; since the overall time is proportional to the number of processed pixels, the computational burden will be even smaller at lower sampling rates.

4.4.2. Dataset Oberpfaffenhofen from ESAR Airborne Platform

The tested dataset is shown in Figure 10. Figure 10a is the Pauli RGB pseudo-colored image of an L-band multi-look PolSAR dataset called Oberpfaffenhofen, of size 1300 × 1200, obtained by the ESAR airborne platform and provided by the German Aerospace Center. Figure 10b is the ground-truth image, and Figure 10j lists the three terrains in this dataset: built-up area, woodland, and open area. The terrains in this PolSAR dataset are balanced and homogeneous over large areas, which makes it suitable for the proposed label matrix completion method with a large down-sampling rate: with only a few labeled pixels, a satisfactory final classification is obtained. The classification visual results and accuracy values are given in Figure 10c–i and Table 5, respectively. The methods compared on this dataset are SVM [29], RS [21], the CNN-based RV-CNN [32] and CV-CNN [51], and SRDNN [52].
From the visual and quantified classification performance, it can be seen that the different methods achieve similar label maps. The largest accuracy value, that of the proposed method, is 15 percentage points higher than the smallest, yet only about 1 percentage point higher than those of most of the compared methods, which confirms that all the methods perform well on this dataset. However, the numbers of training pixels vary greatly. The proposed method uses a 1/64 down-sampled image with only 0.15% of the pixels as training data and still realizes an excellent classification result, whereas the CNN-based methods use 1% of the pixels, roughly seven times more. Not only are the computation time and storage reduced, but the human cost of label tagging is also relieved. The visual result in Figure 10d is produced by the RS method, which aims to be robust to noisy PolSAR data and labels; it falsely classifies several built-up areas as woodland. There are also some clearly noisy blocks in the results of the RV-CNN and CV-CNN methods, and the SRDNN result in Figure 10g misclassifies some pixels in the top right-hand corner of the built-up area. The visual result of the proposed method is satisfactory over the entire image.

5. Conclusions and Future Work

In this paper, the PolSAR image classification task is tackled for the first time from a label matrix completion perspective. The known labels are learned by a light network that processes a uniformly down-sampled PolSAR image, which relieves the heavy computational burden while retaining the regional information of the entire image. A label transfer framework then realizes the final label matrix completion for the entire PolSAR image. Compared with other state-of-the-art classification methods, the proposed label matrix completion model achieves good classification results at a low computation-time cost. The experiments in Section 4.4 also show that PolSAR images with large continuous areas are more suitable for the proposed method than those with small and imbalanced regions.
In fact, the proposed structure label matrix completion method for the PolSAR classification task is a label prior transfer process. The prior could be transferred from a down-sampled PolSAR image, as in this paper, or from different data sources such as infrared or optical imagery. Future work will therefore focus on transfer learning of the label prior for the label matrix completion task in PolSAR classification.

Author Contributions

Q.W. and Z.W. conceived and designed the method and experiments; Z.R. and B.R. performed the experiments; Q.W. analyzed the results; Q.W. wrote the article; B.H. and L.J. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 61671350 and 61836009, by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China under Grant No. 61621005, and by China Postdoctoral Science Foundation Funded Project No. 2018M633468.


Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, J.S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC Press: Boca Raton, FL, USA, 2009; pp. 268–269.
  2. Lee, J.S.; Grunes, M.R.; Ainsworth, T.L.; Du, L.J.; Schuler, D.L.; Cloude, S.R. Unsupervised classification using polarimetric decomposition and the complex Wishart classifier. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2249–2258.
  3. Lee, J.S.; Grunes, M.R.; Kwok, R. Classification of multi-look polarimetric SAR imagery based on complex Wishart distribution. Int. J. Remote Sens. 1994, 15, 2299–2311.
  4. Pottier, E. The H/A/α polarimetric decomposition approach applied to PolSAR data processing. In Proceedings of the PIERS Workshop on Advances in Radar Methods, Baveno, Italy, 20–22 July 1998.
  5. Frery, A.C.; Correia, A.H.; Freitas, C.D.C. Classifying multifrequency fully polarimetric imagery with multiple sources of statistical evidence and contextual information. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3098–3109.
  6. Bo, R.; Hou, B.; Jin, Z.; Jiao, L. Unsupervised classification of polarimetric SAR image via improved manifold regularized low-rank representation with multiple features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 580–595.
  7. Chi, L.; Liao, W.; Li, H.C.; Fu, K.; Philips, W. Unsupervised classification of multilook polarimetric SAR data using spatially variant Wishart mixture model with double constraints. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5600–5613.
  8. Kong, J.A. K-distribution and polarimetric terrain radar clutter. J. Electromagn. Waves Appl. 1989, 3, 747–768.
  9. Bombrun, L.; Beaulieu, J.M. Fisher distribution for texture modeling of polarimetric SAR data. IEEE Geosci. Remote Sens. Lett. 2008, 5, 512–516.
  10. Fernandez-Michelli, J.I.; Areta, J.A.; Hurtado, M.; Muravchik, C.H. Polarimetric SAR image classification using EM method and GP0 model. In Proceedings of the 2015 XVI Workshop on Information Processing and Control (RPIC), Cordoba, Argentina, 6–9 October 2015.
  11. Xu, Q.; Chen, Q.; Xing, X.; Yang, S.; Liu, X. Polarimetric SAR images classification based on L distribution and spatial context. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 4976–4979.
  12. Li, H.C.; Sun, X.; Emery, W.J. H distribution for multilook polarimetric SAR data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 489–493.
  13. Hulst, H.C.V.D.; Twersky, V. Light Scattering by Small Particles; Wiley: Hoboken, NJ, USA, 1957.
  14. Freeman, A.; Durden, S.L. Three-component scattering model to describe polarimetric SAR data. Proc. SPIE 1993, 1748, 213–224.
  15. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. Tech. Rep. IEICE SANE 2005, 104, 1699–1706.
  16. Holm, W.A. On radar polarization mixed target state decomposition techniques. In Proceedings of the 1988 IEEE National Radar Conference, Ann Arbor, MI, USA, 20–21 April 1988.
  17. Cloude, S.R.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518.
  18. Zhou, G.; Cui, Y.; Chen, Y.; Yang, J.; Rashvand, H.; Yamaguchi, Y. Linear feature detection in polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1453–1463.
  19. Tao, M.; Zhou, F.; Liu, Y.; Zhang, Z. Tensorial independent component analysis-based feature extraction for polarimetric SAR data classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2481–2495.
  20. Redolfi, J.; Sánchez, J.; Flesia, A.G. Fisher vectors for PolSAR image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2057–2061.
  21. Hou, B.; Wu, Q.; Wen, Z.; Jiao, L. Robust semisupervised classification for PolSAR image with noisy labels. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6440–6455.
  22. Chen, S.W.; Tao, C.S. PolSAR image classification using polarimetric-feature-driven deep convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 627–631.
  23. Wu, W.; Li, H.; Zhang, L.; Li, X.; Guo, H. High-resolution PolSAR scene classification with pretrained deep convnets and manifold polarimetric parameters. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6159–6168.
  24. Lin, H.; Shi, Z.; Zou, Z. Fully convolutional network with task partitioning for inshore ship detection in optical remote sensing images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1665–1669.
  25. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Gill, E.; Molinier, M. A new fully convolutional neural network for semantic segmentation of polarimetric SAR imagery in complex land cover ecosystem. ISPRS J. Photogramm. Remote Sens. 2019, 151, 223–236.
  26. Qian, W.; Hou, B.; Wen, Z.; Jiao, L. Variational learning of mixture Wishart model for PolSAR image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 141–154.
  27. Liu, C.; Yin, J.; Yang, J.; Gao, W. Classification of multi-frequency polarimetric SAR images based on multi-linear subspace learning of tensor objects. Remote Sens. 2015, 7, 9253–9268.
  28. Ince, T. Polarimetric SAR image classification using a radial basis function neural network. In Proceedings of the Progress in Electromagnetics Research Symposium (PIERS), Cambridge, MA, USA, 5–8 July 2010.
  29. Lardeux, C.; Frison, P.L.; Tison, C.; Souyris, J.C.; Stoll, B.; Fruneau, B.; Rudant, J.P. Support vector machine for multifrequency SAR polarimetric data classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4143–4152.
  30. Loosvelt, L.; Peters, J.; Skriver, H.; Baets, B.D.; Verhoest, N.E. Impact of reducing polarimetric SAR input on the uncertainty of crop classifications based on the random forests algorithm. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4185–4200.
  31. Shi, L.; Zhang, L.; Yang, J.; Zhang, L.; Li, P. Supervised graph embedding for polarimetric SAR image classification. IEEE Geosci. Remote Sens. Lett. 2013, 10, 216–220.
  32. Zhou, Y.; Wang, H.; Xu, F.; Jin, Y.Q. Polarimetric SAR image classification using deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939.
  33. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78.
  34. Formont, P.; Pascal, F.; Vasile, G.; Ovarlez, J.P.; Ferro-Famil, L. Statistical classification for heterogeneous polarimetric SAR images. IEEE J. Sel. Top. Signal Process. 2011, 5, 567–576.
  35. Ersahin, K.; Cumming, I.G.; Ward, R.K. Segmentation and classification of polarimetric SAR data using spectral graph partitioning. IEEE Trans. Geosci. Remote Sens. 2009, 48, 164–174.
  36. Lin, L.Q.; Song, H.; Huang, P.P.; Yang, W.; Xu, X. Unsupervised classification of PolSAR data using large scale spectral clustering. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; p. 2817.
  37. Luo, S.; Ling, T.; Yan, C. A multi-region segmentation method for SAR images based on the multi-texture model with level sets. IEEE Trans. Image Process. 2018, 27, 2560–2574.
  38. Xiang, D.; Ban, Y.; Wei, W.; Yi, S. Adaptive superpixel generation for polarimetric SAR images with local iterative clustering and SIRV model. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3115–3131.
  39. Wen, Z.; Hou, B.; Jiao, L. Joint sparse recovery with semi-supervised MUSIC. IEEE Signal Process. Lett. 2017, 24, 629–633.
  40. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Available online: https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf (accessed on 26 November 2019).
  41. Jie, Z.; Shan, S.; Kan, M.; Chen, X. Coarse-to-fine auto-encoder networks (CFAN) for real-time face alignment. In ECCV; Springer: Berlin/Heidelberg, Germany, 2014.
  42. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  43. Liu, H.; Wang, Y.; Yang, S.; Shuang, W.; Jie, F.; Jiao, L. Large polarimetric SAR data semi-supervised classification with spatial-anchor graph. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1439–1458.
  44. Bi, H.; Sun, J.; Xu, Z. A graph-based semisupervised deep learning model for PolSAR image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2116–2132.
  45. Ding, P.; Zhang, Y.; Deng, W.J.; Jia, P.; Kuijper, A. A light and faster regional convolutional neural network for object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2018, 141, 208–218.
  46. Wu, Y.; Ji, K.; Yu, W.; Su, Y. Region-based classification of polarimetric SAR images using Wishart MRF. IEEE Geosci. Remote Sens. Lett. 2008, 5, 668–672.
  47. Wong, R.K.W.; Lee, T.C.M. Matrix completion with noisy entries and outliers. J. Mach. Learn. Res. 2017, 18, 5404–5428.
  48. Cherukuri, V.; Guo, T.; Schiff, S.J.; Monga, V. Deep MR image super-resolution using structural priors. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018.
  49. Liu, F.; Jiao, L.; Hou, B.; Yang, S. PolSAR image classification based on Wishart DBN and local spatial information. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3292–3308.
  50. Wu, Q.; Hou, B.; Wen, Z.; Ren, L.; Ren, B.; Jiao, L. Cost-sensitive latent space learning for imbalanced PolSAR image classification. IEEE Trans. Geosci. Remote Sens. 2019, under review.
  51. Zhang, Z.; Wang, H.; Xu, F.; Jin, Y.Q. Complex-valued convolutional neural network and its application in polarimetric SAR image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188.
  52. Jie, G.; Ma, X.; Fan, J.; Wang, H. Semisupervised classification of polarimetric SAR image via superpixel restrained deep neural network. IEEE Geosci. Remote Sens. Lett. 2017, 15, 122–126.
Figure 1. Uniform down-sampling for entire polarimetric synthetic aperture radar (PolSAR) image, resulting in a small image with the same region and structure as the entire image.
Figure 2. Illustration of entire proposed framework (dashed line is used to explain the conciseness of the light network without being used in the method).
Figure 3. Structure label matrix completion framework.
Figure 4. Pauli RGB pseudo-colored image and terrain information from PolSAR database Flevoland from NASA/JPL AIRSAR: (a) Pauli RGB pseudo-colored image, (b) ground-truth image, and (c) terrains in this image.
Figure 5. Classification accuracy vs. different parameters: classification accuracy curves for the regularization parameters α and β.
Figure 6. Pauli RGB pseudo-colored images and their terrain information from PolSAR database Flevoland from NASA/JPL AIRSAR: (a–d) classification results of the light network for images with 1/4, 1/9, 1/16, and 1/25 down-sampling rates, respectively; (e–h) completed label priors Ŷ with the corresponding down-sampling rates; and (i–l) final completed label maps for the entire image with the same down-sampling rates as in (a–d).
Figure 7. Classification accuracy vs different sampling rates for five runs.
Figure 8. Classification result and accuracy values of PolSAR database Flevoland from NASA/JPL AIRSAR: (a) result with 1/16 down-sampling rate, (b) result and accuracy value with zeroth-order information, (c) result and accuracy value without Laplace prior, and (d) result and accuracy value with entire-structure prior.
Figure 9. Classification results on dataset Flevoland with different methods, including (a) variational mixture Wishart (VMW) [26], (b) support vector machines (SVM) [29], (c) robust semi-supervised [21], (d) real-valued CNN (RV-CNN) [32], (e) complex-valued CNN (CV-CNN) [51], (f) graph semi-supervised (GSS) [44], and (g) our former CLSL [50] model and (h) proposed SLMC methods, respectively.
Figure 10. Pauli RGB pseudo-colored images and their terrain information from PolSAR database Oberpfaffenhofen from ESAR Airborne Platform: (a) Pauli RGB pseudo-colored image, (b) ground-truth image, (c–i) classification results on dataset Oberpfaffenhofen with different methods, including the SVM [29], RS [21], RV-CNN [32], CV-CNN [51], and SRDNN [52] methods, and our former CLSL [50] model and the proposed SLMC method, respectively, and (j) terrains in the image in (a).
Table 1. Pixels of dataset Flevoland with different down-sampling rates.
Land Cover | Peas | Forest | Grasses | Wheat | Barley | Stem Beans | Bare Soil | Lucerne
1/25 sampling rate | 381 | 718 | 276 | 682 | 320 | 242 | 208 | 411
1/16 sampling rate | 593 | 1146 | 432 | 1006 | 468 | 397 | 320 | 636
1/9 sampling rate | 1037 | 2001 | 801 | 1817 | 816 | 720 | 572 | 1152
1/4 sampling rate | 2433 | 4551 | 1795 | 4120 | 1910 | 1544 | 1300 | 2556
All pixels | 9582 | 18,044 | 6948 | 16,386 | 7595 | 5986 | 5109 | 7628

Land Cover | Wheat2 | Water | Beet | Rapeseed | Potatoes | Wheat3 | Buildings | Total | Time (s)
1/25 sampling rate | 458 | 530 | 416 | 543 | 646 | 908 | 23 | 6762 | 13.765
1/16 sampling rate | 802 | 652 | 347 | 858 | 954 | 1360 | 42 | 10,368 | 25.175
1/9 sampling rate | 1201 | 1515 | 1140 | 1545 | 1764 | 2446 | 79 | 18,606 | 46.056
1/4 sampling rate | 2790 | 3308 | 2495 | 3450 | 4051 | 5583 | 177 | 42,063 | 87.601
All pixels | 11,159 | 9904 | 10,033 | 6671 | 16,156 | 22,241 | 735 | 153,590 | 351.557
Table 2. Classification accuracy on dataset Flevoland for different sampling rates.
Sampling Rate | Training | Peas | Forest | Grasses | Wheat | Barley | Stem Beans | Bare Soil | Lucerne
1/25 | 0.4% | 0.90 | 0.98 | 0.72 | 0.91 | 0.99 | 0.95 | 0.92 | 0.89
1/16 | 0.6% | 0.93 | 1.00 | 0.86 | 0.99 | 0.97 | 0.95 | 0.98 | 0.94
1/9 | 1.1% | 0.97 | 1.00 | 0.98 | 0.98 | 1.00 | 0.96 | 0.99 | 0.94
1/4 | 2.5% | 1.00 | 1.00 | 0.99 | 1.00 | 1.00 | 0.99 | 1.00 | 0.97

Sampling Rate | Training | Wheat2 | Water | Beet | Rapeseed | Potatoes | Wheat3 | Buildings | OA | κ
1/25 | 0.4% | 0.89 | 0.98 | 0.84 | 0.95 | 0.93 | 0.97 | 0.13 | 0.93 | 0.92
1/16 | 0.6% | 0.95 | 0.98 | 0.94 | 0.98 | 0.97 | 0.96 | 0.44 | 0.96 | 0.96
1/9 | 1.1% | 0.93 | 0.99 | 0.95 | 0.93 | 0.99 | 1.00 | 0.85 | 0.97 | 0.97
1/4 | 2.5% | 0.98 | 1.00 | 0.99 | 1.00 | 1.00 | 1.00 | 0.96 | 0.99 | 0.99
Table 3. Pixel and classification accuracies on the dataset Flevoland.
Method | Training | Peas | Forest | Grasses | Wheat | Barley | Stem Beans | Bare Soil | Lucerne
VMW | 300 | 0.44 | 0.54 | 0.28 | 0.38 | 0.74 | 0.60 | 0.11 | 0.57
SVM | 5% | 0.50 | 0.66 | 0.19 | 0.59 | 0.80 | 0.14 | 0.64 | 0.62
RS | 5% | 0.96 | 0.94 | 0.60 | 0.91 | 1.00 | 0.54 | 0.95 | 0.70
RV-CNN | 10% | 0.97 | 0.96 | 0.94 | 0.93 | 0.86 | 0.98 | 1.00 | 0.95
CV-CNN | 10% | 0.99 | 0.97 | 0.90 | 0.95 | 0.95 | 0.99 | 0.99 | 0.98
SAG | 1% | 0.93 | 0.86 | 0.65 | 0.90 | 0.96 | 0.94 | 0.96 | 0.90
SRDNN | 1% | 0.95 | 0.97 | 0.87 | 0.95 | 0.95 | 0.97 | 0.94 | 0.95
GSS | 5% | 0.99 | 0.95 | 0.97 | 0.98 | 0.99 | 1.00 | 1.00 | 0.99
CLSL | 10% | 0.99 | 1.00 | 0.97 | 1.00 | 0.99 | 1.00 | 1.00 | 0.98
SLMC | 2.5% | 1.00 | 1.00 | 0.99 | 1.00 | 1.00 | 0.99 | 1.00 | 0.97

Method | Training | Wheat2 | Water | Beet | Rapeseed | Potatoes | Wheat3 | Buildings | OA | κ
VMW | 300 | 0.31 | 0.88 | 0.56 | 0.35 | 0.26 | 0.56 | 0.87 | 0.48 | 0.47
SVM | 5% | 0.45 | 0.76 | 0.74 | 0.31 | 0.53 | 0.75 | 0.37 | 0.58 | 0.56
RS | 5% | 0.70 | 0.59 | 0.99 | 0.46 | 0.83 | 0.98 | 0.35 | 0.81 | 0.80
RV-CNN | 10% | 0.97 | 0.99 | 0.98 | 0.92 | 0.96 | 0.96 | 0.80 | 0.95 | \
CV-CNN | 10% | 0.94 | 1.00 | 0.97 | 0.92 | 0.97 | 0.97 | 0.83 | 0.96 | \
SAG | 1% | 0.77 | 0.92 | 0.94 | 0.81 | 0.87 | 0.91 | 0.78 | 0.88 | \
SRDNN | 1% | 0.90 | 0.99 | 0.92 | 0.92 | 0.94 | 0.97 | 0.81 | 0.95 | 0.94
GSS | 5% | 0.99 | 1.00 | 0.99 | 0.99 | 0.99 | 0.99 | 1.00 | 0.99 | \
CLSL | 10% | 0.99 | 1.00 | 0.99 | 0.98 | 0.99 | 1.00 | 0.97 | 0.99 | 0.99
SLMC | 2.5% | 0.98 | 1.00 | 0.99 | 1.00 | 1.00 | 1.00 | 0.96 | 0.99 | 0.99
Table 4. Total times for different methods on dataset Flevoland.
Method | VMW | SVM | RS | RV-CNN | SAG | SRDNN | GSS | CLSL | SLMC
Time (s) | 311.56 | 631.52 | 610.27 | 334.69 | 21,456 | 728.04 | 7023 | 351.56 | 87.60
Table 5. Classification accuracy on dataset Oberpfaffenhofen.
Method | Training Pixels | Built-Up Area | Woodland | Open Area | OA | κ | Time (s)
SVM | 5% | 0.44 | 0.76 | 0.95 | 0.79 | 0.77 | 1282.77
RS | 5% | 0.75 | 0.95 | 0.93 | 0.89 | 0.89 | 1239.61
RV-CNN | 1% | 0.86 | 0.85 | 0.93 | 0.90 | \ | 679.84
CV-CNN | 1% | 0.91 | 0.92 | 0.95 | 0.93 | \ | \
SRDNN | 0.5% | 0.90 | 0.94 | 0.93 | 0.93 | 0.88 | 1478.83
CLSL | 1% | 0.83 | 0.93 | 0.96 | 0.93 | 0.92 | 714.11
SLMC | 0.15% | 0.89 | 0.92 | 0.96 | 0.94 | 0.93 | 11.16

Share and Cite

MDPI and ACS Style

Wu, Q.; Hou, B.; Wen, Z.; Ren, Z.; Ren, B.; Jiao, L. Structure Label Matrix Completion for PolSAR Image Classification. Remote Sens. 2020, 12, 459. https://doi.org/10.3390/rs12030459

