Article

Research on Input Schemes for Polarimetric SAR Classification Using Deep Learning

1 College of Electronic Science and Engineering, National University of Defense Technology (NUDT), Changsha 410073, China
2 College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
3 Faculty of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
4 China Centre for Resources Satellite Data and Application, Beijing 100094, China
5 National Satellite Ocean Application Service, Beijing 100081, China
6 Key Laboratory of Space Ocean Remote Sensing and Applications, Ministry of Natural Resources, Beijing 100081, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(11), 1826; https://doi.org/10.3390/rs16111826
Submission received: 19 April 2024 / Revised: 16 May 2024 / Accepted: 20 May 2024 / Published: 21 May 2024

Abstract
This study employs the reflection symmetry decomposition (RSD) method to extract polarization scattering features from ground object images, aiming to determine the optimal data input scheme for deep learning networks in polarimetric synthetic aperture radar classification. Eight distinct polarimetric feature combinations were designed, and the classification accuracy of each was evaluated using the classic convolutional neural networks (CNNs) AlexNet and VGG16. The findings reveal that the commonly employed six-parameter input scheme, favored by many researchers, does not fully exploit the available polarization information and warrants attention. Notably, the complete nine-parameter input scheme based on the polarization coherency matrix yields improved classification accuracy. Furthermore, the input scheme incorporating all 21 parameters from the RSD and the polarization coherency matrix markedly enhances overall accuracy and the Kappa coefficient compared with the other seven schemes. This comprehensive approach maximizes the utilization of polarization scattering information from ground objects, emerging as the most effective CNN input data scheme in this study. Additionally, classification using the total power values of the second and third RSD components (P2 and P3) outperforms the approach using the surface scattering power (PS) and double-bounce scattering power (PD) from the same decomposition.

1. Introduction

Polarimetric synthetic aperture radar (PolSAR) can capture the complete polarized scattering characteristics of ground objects under diverse environmental conditions, making it applicable in various remote sensing scenarios [1,2,3]. Unlike conventional single-polarization SAR, PolSAR actively retrieves polarization information from surface scattering, offering a larger set of parameters to characterize electromagnetic scattering properties. For effective classification of polarimetric SAR data, these polarization features from PolSAR images must be comprehensively explored and leveraged within widely adopted deep learning algorithms. Meanwhile, SAR systems are developing rapidly, and researchers have conducted in-depth studies on issues such as SAR imaging [4,5].
Currently, PolSAR classification methods can be broadly categorized into three groups: 1. Polarimetric decomposition features: PolSAR images are decomposed into polarimetric components, directly extracting the scattering characteristics of target objects. Common methods include Freeman decomposition [6], Cloude-Pottier decomposition [7], Huynen decomposition [8], and others. 2. Statistical distribution characteristics: classification is based on the statistical distribution of PolSAR data, with commonly used algorithms such as Wishart classification [9]. 3. Deep learning methods: with the rapid evolution of deep learning, various methods have been introduced into PolSAR image classification [9,10,11]. The incorporation of multiple convolutional layers allows deep learning models to extract high-level features effectively, enhancing overall classification performance. Despite the promising results achieved by researchers in PolSAR image classification using deep learning methods, the existing approaches have several limitations:
  • Some algorithms stack and combine polarimetric decomposition features without considering the inherent limitations of the decomposition methods.
  • Some methods normalize polarimetric features without accounting for the distribution characteristics of the data, often applying linear normalization methods to non-linear PolSAR data.
  • Some methods employ different forms of CNN but overlook the complete scattering information and various polarimetric scattering characteristics in PolSAR images, utilizing incomplete polarized data as input for the network.
PolSAR images inherently contain multiple polarimetric features that can be utilized for CNN classification. Typically, the polarization coherency matrix (T) and the polarization covariance matrix (C) are widely used to represent polarimetric characteristics. Extracting valuable feature information for neural network classification involves decomposing PolSAR images into target polarimetric components using these matrices. Researchers have employed Sinclair scattering matrices [12], texture features [13,14,15], and spatial segmentation features [16] for PolSAR image classification. Pseudo-color synthesis using decomposed target components yields color characteristics of the targets, providing diverse information for PolSAR deep learning classification [17,18,19]. John Burns Kilbride et al. [20] used spatial and temporal information together with Google Earth Engine to extract information from SAR images; they semantically segmented the forest distribution in tropical rainforest areas and established a near-real-time mapping system, which partly solves the timeliness problem of traditional SAR classification. The challenge lies in effectively combining these features to enhance the accuracy of PolSAR classification. Shi et al. [21] proposed a method based on complex matrices and multi-feature learning to classify PolSAR images. Shang et al. [22] proposed a dual-branch CNN structure that extracts features from PolSAR images through shared parameters, alleviating the problem of insufficient labeled training data in PolSAR image classification tasks.
PolSAR classification based on texture features has also received attention from relevant scholars. Zakhvatkina et al. [23] used neural network algorithms and Bayesian methods to classify land features in SAR images based on texture features. Zhang et al. [24] also used texture feature-based methods to classify multi-band PolSAR images of land features in the intertidal zone of coastal wetlands. Zhu et al. [25] demonstrated the potential for universal applicability of easily computable texture features in various computer vision tasks related to image classification. Similar classification methods include the Markov random classification field method [26] and the covariance matrix-based method [27].
There are also methods that use traditional machine learning to construct PolSAR image classification schemes [28,29,30]. Kersten et al. [31] used EM and fuzzy clustering methods, combined with multiple distance measures, to segment PolSAR images; their experiments indicate that the Wishart-based method is superior to the others. Wang et al. [32] evaluated the classification performance of multi-frequency PolSAR data for sea ice during the melting period. Using the maximum likelihood classification method, support vector machines, random forests, and backpropagation neural networks, they employed 12, 14, 15, and 19 polarization features for classification, respectively. Before classification, these features were grouped into different feature combinations based on Euclidean distance, and the classification results were then evaluated, providing a useful reference for related research. However, these methods require considerable manpower and time to extract features.
With the advent of deep learning, researchers have explored various polarimetric data input schemes for PolSAR classification. Many scholars have used deep learning methods to study PolSAR image classification [33,34,35,36,37,38]. Liu et al. [33] proposed a polarimetric convolutional network for the classification of PolSAR images, which achieved good classification results. Based on a literature survey, the most commonly employed input schemes are the six-parameter method [39,40,41] and the nine-parameter method [42]. Additionally, some researchers [43] have integrated Cloude-Pottier decomposition, Freeman-Durden decomposition, and Huynen decomposition, resulting in an input scheme with a total of 16 polarimetric features for PolSAR image classification. Nie et al. [12] utilized 12 polarimetric features from Freeman-Durden decomposition, Van Zyl decomposition [44], and Cloude-Pottier decomposition, applying a reinforcement learning framework for PolSAR image classification. Jafari et al. [45] used VGG16, ResNet-50, and ConvNeXt networks to fuse features extracted from SAR images with statistical and spatial features and incidence angles to classify ships and sea ice in the images. However, the features learned by CNNs lack clear physical meaning and hence physical interpretability; although good classification results have been achieved, the classification features deserve further research. Ren et al. [46] used a graph neural network with transfer attention to segment PolSAR images and an end-to-end trainable residual model to fuse the extracted multi-scale feature representations; the proposed method performed well in classifying similar features in unknown images.
While these methods have achieved high-accuracy classification of PolSAR images, increasing the number of polarimetric features does not consistently lead to improved classification accuracy [47] in PolSAR image classification. We attribute this to the following factors: (1) non-independence of polarimetric features obtained from polarimetric coherence/covariance matrices; (2) indiscriminate input of polarimetric features into the network, often increasing the difficulty of feature learning; and (3) the associated increase in computational cost with an increased number of polarimetric features. Additionally, researchers have not thoroughly investigated the merits and limitations of polarimetric decomposition methods when utilizing polarimetric features. Instead, they directly applied components obtained from these algorithms without fully leveraging complete polarimetric decomposition to extract comprehensive backscattering information from objects. Consequently, the information at the data input stage remains incomplete, necessitating the combination of feature parameters at the input end of deep learning—a novel exploration in PolSAR deep learning classification.
PolSAR images encapsulate various original features of targets and extensive polarization information. This study adopts reflection symmetry decomposition (RSD), which can fully extract target polarization information. Polarimetric scattering features are extracted, eight polarimetric feature input schemes are designed, and their classification accuracy is compared on the classic CNNs AlexNet and VGG16. The article conducts a comparative analysis of the various classification schemes employed by different scholars. By enhancing existing research schemes through feature extraction at the input stage and utilizing classic CNNs for PolSAR image classification, we achieve higher classification accuracy and determine the optimal combination of polarimetric features as the input scheme. The key conclusions of this study, with implications for researchers, are as follows:
  • The classification performance utilizing total power values of the second component (P2) and the third component (P3) obtained from RSD surpasses schemes using surface scattering power value (PS) and double-bounce scattering power value (PD) from RSD. However, the optimal input scheme includes P2, P3, PS, and PD.
  • The commonly employed six-parameter input scheme [39,40,41] inadequately exploits polarimetric information. All seven alternative input strategies outperform this scheme.
  • Regarding input schemes, in the face of limited computational resources, it is advisable to directly use the input scheme with all elements of the T-matrix or utilize all components obtained through RSD, as both ensure the completeness of polarimetric information.
  • The 21-channel input scheme should be used when computational resources are sufficient.
  • The two classic CNNs employed, VGG16 and AlexNet, differ in depth. Across five rounds of accuracy statistics, VGG16 demonstrates superior stability, while the shallower five-layer AlexNet achieves higher accuracy. This suggests that an excessively deep network is unnecessary for PolSAR image classification using CNNs.
The subsequent sections of the article are organized as follows: Section 2 introduces CNN classifiers and classic PolSAR decomposition methods; Section 3 presents the selected polarimetric decomposition method and the research plan; Section 4 delves into the experimental results and analysis; finally, Section 5 elucidates the experimental conclusions and outlines prospects for future research.

2. Related Works

2.1. PolSAR Classification with CNN

The advent of computer hardware development has ushered in the era of deep learning, giving rise to networks such as AlexNet [48], GoogleNet [49], and the VGG series [50]. These networks have demonstrated exceptional performance across various domains. In a convolutional neural network, deep-level features of objects within images are extracted through convolutional layers, pooling layers, activation layers, and fully connected layers. This approach is more efficient than traditional methods and has been applied extensively [51,52,53].
The distinctive imaging mechanism of PolSAR images makes traditional optical image classification methods difficult to apply directly. Challenges arise from differences in imaging geometry, object size, speckle noise, and the non-linear distribution of PolSAR data. Scholars have therefore turned to deep learning methods for PolSAR image classification, achieving notable success. Nie et al. [12] employed reinforcement learning to address low classification accuracy with limited samples. Gui et al. [54] proposed the use of gray-level co-occurrence matrices and conducted experiments on an enhanced convolutional autoencoder, achieving higher accuracy. Bi et al. [55] adopted a graph-based deep learning approach, enhancing classification performance by pairing and merging semi-supervised terms with limited samples.

2.2. Perform Polarization Decomposition Using a Scattering Mechanism

Target decomposition stands as a pivotal approach in the processing of PolSAR data, fundamentally expressing each pixel as a weighted sum of diverse scattering mechanisms. In 1998, Anthony Freeman and Stephen L. Durden introduced the first model-based, non-coherent polarimetric decomposition algorithm [8], subsequently acknowledged as Freeman decomposition. Originally, Freeman's decomposition aimed to provide viewers of multi-look SAR images with an intuitive means to distinguish the primary scattering mechanisms of objects. It relies entirely on the backscattering data observed by radar, and each component in its decomposition has a corresponding physical interpretation. The advent of Freeman decomposition marked a significant breakthrough. However, extensive usage and further exploration unveiled three primary issues with the method: an overestimation of the volume scattering component, the presence of negative power components in the results, and the loss of polarization information. Notably, these three issues are interrelated: the overestimation of the volume scattering component contributes to negative power values in the subsequent surface scattering and double-bounce scattering components, while the loss of polarization information plays a role in the inappropriate estimation of the volume scattering power [56].
In 2005, Yamaguchi et al. introduced a second model-based, non-coherent polarimetric decomposition algorithm [57], denoted as the Yamaguchi algorithm hereafter. This algorithm comprises four scattering components and introduced helix scattering as the fourth component, challenging the reflection symmetry assumption of Freeman decomposition and enhancing its applicability, particularly in urban area analysis. While this model-based approach opened avenues for improving the performance of non-coherent polarimetric decomposition algorithms through scattering model modifications, it did not offer a theoretical foundation for choosing helix scattering as the fourth component. According to the authors, the selection was more comparative and preferential. Notably, the innovations of Yamaguchi decomposition centered on the scattering model without altering the decomposition algorithm, which employed Freeman decomposition’s processing method. Despite exhibiting improved experimental results, the Yamaguchi algorithm retained issues like overestimation of volume scattering, negative power components, and loss of polarization information [58].
In the subsequent decade, numerous model-based, non-coherent polarimetric decomposition algorithms emerged. Reflection symmetry decomposition (RSD) [59,60] is a novel model-based, non-coherent polarimetric decomposition method that preserves polarization information. Demonstrating excellent algorithmic performance, RSD decomposes three components, all adhering to the mirror symmetry assumption. Notably, the original polarimetric coherence matrix can be fully reconstructed from RSD’s decomposition results, rendering it a comprehensive decomposition algorithm. The RSD algorithm employs an expanded set of polarimetric decomposition parameters, primarily involving unitary transformation, with superior mathematical properties and more expansive research possibilities compared to other decomposition algorithms. Leveraging these advantages, we adopt RSD as the polarimetric decomposition method for PolSAR images in this study.

3. Methods

This section outlines the experimental processing flow, covering radiometric calibration, polarization filtering, polarization feature extraction, and the configuration of CNNs and relevant parameters. It emphasizes the processing of PolSAR data and polarization features, providing insights into the basis and specific distribution of the chosen polarization data input scheme. The details are as follows:

3.1. Data Analysis and Feature Extraction

PolSAR data, represented by a 2 × 2  Sinclair matrix under a single look, reflects polarimetric backscattering information related solely to the targets. The polarimetric scattering matrix can be expressed as follows:
S = \begin{pmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{pmatrix}   (1)
The polarization coherency matrix T includes the complete information regarding the polarization scattering of the targets. It is vital for PolSAR image classification. Upon satisfying the reciprocity theorem, the polarization coherency matrix T is derived after multi-look processing, eliminating coherent speckle noise [58]:
T = \langle \mathbf{k}\mathbf{k}^{H} \rangle = \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{12}^{*} & T_{22} & T_{23} \\ T_{13}^{*} & T_{23}^{*} & T_{33} \end{pmatrix}   (2)
Among them,
\mathbf{k} = \frac{1}{\sqrt{2}} \begin{pmatrix} S_{HH} + S_{VV} \\ S_{HH} - S_{VV} \\ S_{HV} + S_{VH} \end{pmatrix}   (3)
k represents the scattering vector of the backscattering S-matrix in the Pauli basis, the superscript H denotes the Hermitian transpose, and <•> denotes an ensemble average. Additionally, the S-matrix can be vectorized using the lexicographic basis to obtain the polarimetric covariance matrix C; the C-matrix and T-matrix can be converted into each other. The T-matrix is a positive semi-definite Hermitian matrix and can be represented as a 9-dimensional real vector [T11, T22, T33, Re(T12), Re(T13), Re(T23), Im(T12), Im(T13), Im(T23)], where Tij is the element in the i-th row and j-th column of the T-matrix, and Re(Tij) and Im(Tij) are the real and imaginary parts of Tij, respectively.
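To make these constructions concrete, the following minimal NumPy sketch (ours, for illustration only; the sample Sinclair matrix values are invented) forms the Pauli scattering vector k, the single-look coherency matrix kk^H, and the 9-dimensional real vector representation of T:

```python
import numpy as np

def pauli_vector(S):
    """Pauli scattering vector k of a 2x2 Sinclair matrix [[S_HH, S_HV], [S_VH, S_VV]]."""
    s_hh, s_hv = S[0, 0], S[0, 1]
    s_vh, s_vv = S[1, 0], S[1, 1]
    return (1.0 / np.sqrt(2.0)) * np.array(
        [s_hh + s_vv, s_hh - s_vv, s_hv + s_vh], dtype=complex)

def coherency_matrix(S):
    """Single-look T = k k^H; multi-look processing would average such matrices."""
    k = pauli_vector(S)
    return np.outer(k, k.conj())

def t_to_real_vector(T):
    """Flatten T into [T11, T22, T33, Re(T12), Re(T13), Re(T23), Im(T12), Im(T13), Im(T23)]."""
    return np.array([
        T[0, 0].real, T[1, 1].real, T[2, 2].real,
        T[0, 1].real, T[0, 2].real, T[1, 2].real,
        T[0, 1].imag, T[0, 2].imag, T[1, 2].imag])

# Invented example values, purely for illustration.
S = np.array([[1 + 1j, 0.2j], [0.2j, 1 - 1j]], dtype=complex)
T = coherency_matrix(S)
v = t_to_real_vector(T)
```

Since T is Hermitian by construction, the nine real numbers in v carry its full information, which is why the nine-parameter input scheme preserves the complete polarimetric content of the T-matrix.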
Researchers have used this vector or its partial parameters for PolSAR image classification [39,40,41]. Additionally, the T-matrix can undergo non-coherent polarimetric decomposition, yielding several scattering components with parameters utilized for PolSAR classification [12,43]. Furthermore, pseudocolored power values of the scattering components from polarimetric decomposition provide color information for features in PolSAR images.

3.2. PolSAR Data Preprocessing and Input Schemes

The PolSAR images, acquired from the L1A-level standard single-look data of China's GF-3 satellite, underwent polarimetric decomposition, yielding the T-matrix and all polarization feature parameters from RSD. Non-local means filtering [61] was employed, chosen for its superior effect after comparison with mean filtering, median filtering, Lee filtering [62], and polarization whitening filtering [63].
In PolSAR image classification, emphasis is often placed on the potential enhancement of classification accuracy through various deep learning modules, while the polarization parameter schemes at the input receive scant attention. Effective feature combinations are crucial for PolSAR image classification, as different polarimetric scattering features can reflect object scattering characteristics from diverse perspectives.
While CNNs typically use only a subset of these features for training, limiting the utilization of polarization information, each pixel in PolSAR data can be represented by the T matrix—a fundamental form for PolSAR classification tasks.
Target decomposition, a primary approach in polarimetric SAR data processing, represents pixels as a weighted sum of several scattering mechanisms. In 1998, Freeman and Durden proposed the first model-based incoherent polarimetric decomposition algorithm [8], which had issues such as overestimation of volume scattering components, presence of negative power components, and loss of polarization information. In 2005, Yamaguchi et al. introduced the second model-based incoherent polarimetric decomposition algorithm [57]. Despite improvements in the scattering model, the decomposition algorithm itself still followed Freeman’s method, and issues of overestimation, negative power components, and loss of polarization information persisted [58].
Compared to several classic polar decomposition algorithms, RSD [59] possesses advantages such as no negative power components in the decomposition results, complete reconstruction of the original polarimetric covariance matrix, and structural conformity of the three components with the selected scattering model. By applying RSD, more polarimetric decomposition physical quantities can be obtained. The decomposition algorithm, mainly involving unitary transformation, exhibits better mathematical properties and more research possibilities compared to other methods. Hence, this study selects RSD as the polarimetric decomposition method for PolSAR imagery.
The polarimetric features derived from reflection symmetry decomposition comprise the surface scattering power (PS), double-bounce scattering power (PD), volume scattering power (PV), the total power of the second RSD component (P2), and the total power of the third RSD component (P3). The value range of these power components is [0, +∞). The doubled orientation angle θ spans (−π/2, π/2], and the doubled helix angle φ covers [−π/4, π/4]. The power proportion of spherical scattering in the second RSD component is denoted x, and that in the third component y; both x and y range over [0, 1]. The phase of element a in the second RSD component (T12) and the phase of element b in the third RSD component (T12) both fall within [−π, π] [60].
Before inputting these physical quantities into the CNN model, their ranges must be normalized. For the T-matrix, the total power is normalized by converting Span to dB. Non-linear polarization features, namely the scattering power parameters T11, T22, T33, PS, PD, PV, P2, and P3, are divided by Span for normalization. The remaining components, being linear, undergo maximum-minimum normalization, as given in Formula (4).
X_L = \frac{x_n - \min(x_n)}{\max(x_n) - \min(x_n)}   (4)
The correlation coefficients between the T12, T13, and T23 channels of the T-matrix are given by Formulas (5)-(7).
coe_{12} = |T_{12}| / \sqrt{T_{11} T_{22}}   (5)
coe_{13} = |T_{13}| / \sqrt{T_{11} T_{33}}   (6)
coe_{23} = |T_{23}| / \sqrt{T_{22} T_{33}}   (7)
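The normalization rules and Formulas (5)-(7) can be sketched as follows. This is our reading of the procedure, not the authors' released code; the sample T-matrix channel values are invented, and we take the magnitude of the complex off-diagonal elements when forming the correlation coefficients:

```python
import numpy as np

def min_max(x):
    """Maximum-minimum normalization of Formula (4)."""
    return (x - x.min()) / (x.max() - x.min())

def normalize_features(T11, T22, T33, T12, T13, T23):
    """Power features divided by Span; Span expressed in dB; Formulas (5)-(7)."""
    span = T11 + T22 + T33                       # total power per pixel
    span_db = 10.0 * np.log10(span)              # dB conversion before min-max scaling
    powers = {"T11": T11 / span, "T22": T22 / span, "T33": T33 / span}
    coe12 = np.abs(T12) / np.sqrt(T11 * T22)
    coe13 = np.abs(T13) / np.sqrt(T11 * T33)
    coe23 = np.abs(T23) / np.sqrt(T22 * T33)
    return span_db, powers, (coe12, coe13, coe23)

# Invented two-pixel example.
T11 = np.array([2.0, 4.0]); T22 = np.array([1.0, 1.0]); T33 = np.array([1.0, 1.0])
T12 = np.array([0.5 + 0.5j, 1.0j]); T13 = np.zeros(2, complex); T23 = np.zeros(2, complex)
span_db, powers, (coe12, coe13, coe23) = normalize_features(T11, T22, T33, T12, T13, T23)
p0 = min_max(span_db)   # normalized Span (P0)
```

By the Cauchy-Schwarz inequality, the correlation magnitudes already lie in [0, 1], so they need no further scaling.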
This article adopts the complete decomposition method, reflection symmetry decomposition (RSD), to extract ground features; compared with traditional methods such as the Freeman and Yamaguchi decompositions, it can obtain more information. The extracted ground features are mainly selected based on the polarization power and T-matrix information, and the research schemes are divided according to how the physical quantities are normalized.
The normalized polarimetric feature parameters mentioned above are grouped into different input schemes following specified rules, based on three principles: whether the total polarization power is normalized; whether polarization power components are included; and whether the scheme uses T-matrix elements, polarization power features, or both. First, as in references [39,40,41], the non-normalized total power (NonP0), T11, T22, T33, and the correlation coefficients coe12, coe13, and coe23 between the T12, T13, and T23 channels form input scheme 1. Recognizing that the polarimetric total power Span is not normalized there, the normalized Span (P0) is adopted in research scheme 2. Subsequently, normalized T11 is added to research scheme 2 to form research scheme 3. Considering that PS, PD, and PV are all polarization power values, these three physical quantities are substituted in, resulting in research scheme 4. The decomposed total power values P2 and P3 obtained through reflection symmetry decomposition then substitute for PS and PD in research scheme 4, giving research scheme 5. P2, P3, PS, and PD are input into the CNN simultaneously as research scheme 6. Furthermore, based on the research of related scholars, all elements of the T-matrix, augmented with the normalized Span (P0), form research scheme 7. Finally, all reflection symmetry decomposition parameters after normalization constitute research scheme 8. The specific details of all eight polarization data input schemes are shown in Table 1.
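Assembling an input scheme then amounts to stacking the chosen per-pixel feature maps along the channel axis. The sketch below is hypothetical: the feature maps are random stand-ins, and the channel lists shown for schemes 4 and 6 are our reading of the description above (Table 1 holds the authoritative definitions):

```python
import numpy as np

H, W = 64, 64
rng = np.random.default_rng(0)
# Hypothetical per-pixel feature maps standing in for the real normalized features.
features = {name: rng.random((H, W)) for name in
            ["P0", "T11", "T22", "T33", "coe12", "coe13", "coe23",
             "PS", "PD", "PV", "P2", "P3"]}

# Channel lists for a few schemes; 4 and 6 are our reading of the text.
SCHEMES = {
    2: ["P0", "T11", "T22", "T33", "coe12", "coe13", "coe23"],
    4: ["P0", "PS", "PD", "PV", "coe12", "coe13", "coe23"],
    6: ["P0", "P2", "P3", "PS", "PD", "coe12", "coe13", "coe23"],
}

def build_input(scheme_id):
    """Stack the scheme's feature maps into an H x W x n array for the CNN."""
    return np.stack([features[n] for n in SCHEMES[scheme_id]], axis=-1)

x2 = build_input(2)   # n = 7 channels
x6 = build_input(6)   # n = 8 channels
```

Switching schemes then only changes the channel count n that the CNN's first convolutional layer sees.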

3.3. Network Selection and Parameter Configuration, Loss Function, Evaluation Criteria

AlexNet and VGG16 are seminal deep learning networks that demonstrate exceptional performance in image classification tasks. This paper uses these two networks to validate the accuracy of each research scheme. The AlexNet variant used here comprises three convolutional layers, one pooling layer, three fully connected layers, and one softmax layer. VGG16, on the other hand, integrates 13 convolutional layers, four max-pooling layers, three fully connected layers, and one softmax layer. After experimentation, the input data size for both networks is set to 64 × 64 × n, where n is the number of parameters in the polarimetric data input scheme. The Kaiming initialization method [64] is employed, with an initial learning rate of 0.1, a learning rate decay of 0.1, a momentum of 0.9, and a weight decay coefficient of 0.0005 [65], to achieve optimal training accuracy. During training, the cross-entropy loss is calculated for each sample and minimized with the stochastic gradient descent algorithm: the gradient of the loss is obtained by differentiation and used to update the model parameters. The network uses the cross-entropy loss function expressed in Formula (8).
L_{Softmax} = \frac{1}{N} \sum_i L_i = -\frac{1}{N} \sum_i \sum_{c=1}^{M} y_{ic} \log(p_{ic})   (8)
Here, M is the number of categories, yic is the indicator (0 or 1) that sample i belongs to class c, and pic is the predicted probability that sample i belongs to class c. To quantitatively assess classification accuracy, five experiments are conducted, and the results are evaluated using the average accuracy, the highest overall accuracy, the accuracy for each land cover type, and the Kappa coefficient.
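For clarity, Formula (8) with one-hot labels reduces to averaging the negative log-probability of each sample's true class. A NumPy sketch (illustrative only; the paper's training would use PyTorch's built-in cross-entropy, and the example logits are invented):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max-shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """Formula (8) with one-hot y: -1/N * sum_i log p_{i, y_i}."""
    p = softmax(logits)
    n = logits.shape[0]
    return -np.mean(np.log(p[np.arange(n), labels]))

# Two samples, three classes; both confidently correct, so the loss is small.
logits = np.array([[5.0, 0.0, 0.0],
                   [0.0, 5.0, 0.0]])
loss = cross_entropy(logits, np.array([0, 1]))
```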

3.4. Experimental Process

Figure 1 illustrates the process of employing a CNN to classify the eight polarimetric data input schemes. Initially, upon obtaining L1A-level GF-3 data, the original data undergo radiometric calibration [66] and polarimetric filtering [61]. Subsequently, the processed data undergo polarimetric decomposition to extract features characterizing the backscattering information of the targets. Following the different normalization rules, the data are organized into eight polarimetric data input schemes. The acquired datasets are then trained and validated using the CNN, saving parameters such as weights and biases. Finally, the trained model classifies the entire image: convolutional layers extract the feature values, and the fully connected layer and softmax function determine the class of each target. The classification results are filled into an empty matrix of the same size as the predicted image, yielding the complete image classification result.
As mentioned in the previous section, the sample size used in the experiment is 64 × 64 × n, where n represents the number of polarization features in the scheme. This approach not only classifies the terrain from the perspective of polarization features but also considers the influence of neighboring pixels in the spatial dimension.
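Extracting the 64 × 64 × n samples can be sketched as a padded window crop around each labeled pixel. This is an assumed implementation detail, not taken from the paper; the function name and edge-padding mode are our choices:

```python
import numpy as np

def extract_patch(image, row, col, size=64):
    """Return the size x size x n window around (row, col), edge-padded at borders."""
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="edge")
    return padded[row:row + size, col:col + size, :]

# Hypothetical 9-channel feature stack (e.g., the T-matrix real-vector channels).
img = np.zeros((100, 120, 9))
patch = extract_patch(img, 0, 0)   # a corner pixel still yields a full-size patch
```

Padding ensures that pixels near the image border still receive a complete spatial neighborhood.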

4. Experimental Results and Analysis

In this section, we conducted experiments on the various research schemes with AlexNet and VGG16, systematically comparing their accuracy. For training and testing, four scenes of high-resolution polarimetric SAR images of the Yellow River Delta area, acquired by the GF-3 satellite, were employed. All experiments were executed on a single GeForce 3060Ti GPU using the PyTorch framework under Python 3.8, and the results were derived from five independent trials.

4.1. Data Explanation

GF-3 stands as China’s first C-band high-resolution fully-polarimetric SAR, widely applied owing to its diverse imaging modes [67,68,69]. Particularly, the full-polarimetric imaging mode I (QPSI) proves suitable for large-scale land cover investigations. The Yellow River Delta, selected as the research area based on field investigations, provided data obtained from the China Ocean Satellite Data Service System [70]. Four images were utilized: two taken on 14 September 2021 (7882 × 9072 pixels and 7882 × 9070 pixels), one on 13 October 2021 (6526 × 7317 pixels), and one on 12 October 2017 (6014 × 7637 pixels). The initial three images were allocated for training, while the last image served as the test set. All images, acquired via the QPSI imaging mode, spanned an imaging range of (118°33′–119°20′E, 37°35′–38°12′N), with an incidence angle range of 30.97°–37.71°. Table 2 provides specific details and applications of the images, with the test image size set at 6014 × 7637 pixels.
After field investigations, the primary land cover types in the research area were identified as nearshore water, seawater, Spartina alterniflora, Tamarix, reed, and tidal flats. Figure 2 illustrates pseudocolored composites of PS, PD, and PV in the Yellow River Delta region and the ground truth map.
In this study, based on field investigations, the land cover types in the Yellow River Delta were classified into seven categories: nearshore water, seawater, Spartina alterniflora, Tamarix, reed, tidal flat, and Suaeda salsa, labeled 1 to 7, respectively. In the three training images, specific areas for each land cover type were chosen based on the field investigations. Within these areas, 1000 samples were randomly selected, with 800 used for training and 200 for validation. The distribution of data samples is detailed in Table 3.
For testing, 1000 samples of each land cover type were randomly selected from the test image. These samples constituted the test set and were input into the trained model. The classification results for the entire image were produced at the same time, accompanied by an evaluation of the network model’s classification performance and of the various polarimetric data input schemes using diverse accuracy indicators.
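The per-class 1000-sample draw and the 800/200 training/validation split can be sketched as follows. This is a hedged NumPy illustration; the fixed seed and function name are our own choices, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed for reproducibility (our choice)

def split_class_samples(pixel_indices, n_total=1000, n_train=800):
    """Randomly draw n_total pixel positions for one land-cover class
    and split them into training and validation subsets (800/200)."""
    chosen = rng.choice(len(pixel_indices), size=n_total, replace=False)
    picked = [pixel_indices[i] for i in chosen]
    return picked[:n_train], picked[n_train:]

# Toy example: 5000 candidate pixel positions for one class.
candidates = [(r, c) for r in range(50) for c in range(100)]
train, val = split_class_samples(candidates)
print(len(train), len(val))  # 800 200
```

Drawing without replacement guarantees that the training and validation subsets of a class never share a pixel.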
Figure 3 depicts the specific selection of training and testing sample datasets.

4.2. Classification Results of the Yellow River Delta on AlexNet

To ensure the robustness of our findings and mitigate the impact of individual results on the ultimate conclusion, we conducted five independent experiments on AlexNet, assessing eight polarized data input schemes. In each experiment, we calculated the overall accuracy and kappa coefficient for classification. The results of these experiments were then arranged in descending order, with the highest value representing the top overall classification accuracy. We computed the average accuracy over the five experiments and utilized the Kappa coefficient to evaluate the quality of the classification outcomes. Both the accuracy for each terrain class and the Kappa coefficient were derived from the highest overall classification accuracy result.
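Both metrics used above can be derived from the test-set confusion matrix. The following is a minimal NumPy sketch (the function name is ours; an illustration, not the authors' implementation):

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Compute overall accuracy and the Kappa coefficient from a confusion matrix.

    `confusion[i, j]` counts test samples of true class i predicted as class j.
    """
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                               # observed agreement (OA)
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)                          # Cohen's kappa

# Two-class toy example: 35 of 50 samples correctly classified.
oa, kappa = overall_accuracy_and_kappa([[20, 5], [10, 15]])
print(round(oa, 2), round(kappa, 2))  # 0.7 0.4
```

Because Kappa discounts agreement expected by chance, it is always at most the overall accuracy, which is why both numbers are reported for each scheme.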
The classification results of the eight polarized data input schemes are presented in Table 4 and Figure 4. Notably, the six-parameter classification of research scheme 1 yielded a lower highest overall accuracy, average overall accuracy, and Kappa coefficient than the other seven research schemes. Normalizing the total power value led to a 2.81% increase in the highest overall classification accuracy and a 6.54% rise in average overall classification accuracy. This underscores the importance of normalizing inputs to meet the CNN’s requirements. Additionally, incorporating the T11 component further enhanced classification accuracy, with the highest overall accuracy increasing by 0.74% and the average accuracy rising by 1.026%. Thus, supplementing the network with pertinent information aids in extracting effective features through convolution and pooling, thereby improving accuracy.
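The exact normalization rule behind this gain is not spelled out in the text; as one plausible illustration (an assumption on our part), min-max scaling of the total power channel P0 to [0, 1] could be implemented as:

```python
import numpy as np

def normalize_band(band):
    """Min-max normalize one feature channel to [0, 1].

    Illustrative only: the paper reports that feeding normalized total
    power P0 to the CNN improves accuracy, but does not state the rule,
    so min-max scaling is shown here as one common choice.
    """
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo)

# Toy 2 x 2 "total power" channel.
p0 = np.array([[1.0, 5.0], [3.0, 9.0]])
print(normalize_band(p0).tolist())  # [[0.0, 0.5], [0.25, 1.0]]
```

Scaling each channel to a common range keeps the high-dynamic-range power values from dominating the unit-scale correlation coefficients at the network input.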
Moreover, among the power-value combinations, the traditional polarized data input scheme 4, which uses the PS, PD, and PV elements, outperformed the three schemes above. Likewise, scheme 5, which replaces PS and PD with the P2 and P3 components of the reflection symmetry decomposition, surpassed scheme 4: the highest overall classification accuracy improved by 2.35%, and the average accuracy increased by 1.24%. This implies that the reflection-symmetry-decomposed P2 and P3 are more informative inputs than PS and PD. Combining P2, P3, PS, and PD (polarized data input scheme 6) showed that, using only polarized power components, the highest overall classification accuracy increased by 4.52% and 2.17% relative to schemes 4 and 5, and the average accuracy improved by 2.874% and 1.626%, respectively. When all elements of the T-matrix were used for classification (polarized data input scheme 7), the highest overall classification accuracy increased by 1.9%, and the average overall classification accuracy improved by 2.152%. Finally, when all parameters of the T-matrix and all components of the reflection symmetry decomposition were used (polarized data input scheme 8), both the highest overall classification accuracy (98.1%) and the average classification accuracy (96.768%) were the best, improvements of 14.51% and 19.696%, respectively, over the six-parameter research scheme 1.
Notably, when employing scheme 1, the classification accuracy for the tidal flat falls below 50%. This can be attributed to the tidal flats being influenced by multiple types of terrain scattering, particularly the presence of diverse vegetation on the beach. The six-parameter research scheme cannot effectively input the polarized scattering characteristics representing this terrain into the network, resulting in reduced classification accuracy for this area. A similar decrease in accuracy is evident for Tamarix-covered terrain. Given that Tamarix is closely associated with tidal flats, the polarized scattering characteristics within the six parameters are insufficient for distinguishing the polarization traits of this terrain. Thus, the six-parameter input scheme under scheme 1 is inherently incomplete, failing to input all the polarized characteristics representing terrain information into the CNN. Moreover, inputting normalized polarized total power notably enhances the accuracy of identifying Tamarix-covered terrain, validating the effectiveness of the improved input scheme for this terrain. However, scheme 2 actually reduces the classification accuracy of the tidal flat, prompting a continued search for new polarized scattering characteristics. When T11 from the T-matrix is input into the CNN, accuracy improves slightly. Introducing PS, PD, and PV decomposed from RSD into the CNN enhances the tidal flat classification accuracy by 29.1%. Furthermore, inputting all polarized scattering characteristics decomposed by RSD into the CNN raises the highest tidal flat accuracy to 90.6%, highlighting the efficacy of the designed polarized data input scheme. For the other six terrain types, the classification accuracy generally exhibits an upward trend from schemes 1 to 8. This trend reinforces the effectiveness of employing reflection symmetry decomposition to extract terrain-polarized characteristics for classification.
The image classification outcomes of the various research schemes are depicted in Figure 4. The classification maps show that scheme 8 effectively distinguishes ground objects in homogeneous areas while also achieving better results in heterogeneous areas. This indicates that polarization features such as the T-matrix elements and the polarization powers characterize the ground objects well, and that the neural network can effectively extract these features and use them to classify the ground objects.
From the texture perspective, the information in the T-matrix can already represent the polarization characteristics of the terrain to a certain extent. When incorporating features such as polarization power and total polarization power obtained through reflection symmetry decomposition, it further supplements the missing information.
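As an illustration of how such a T-matrix input stack can be assembled, the sketch below builds the ten real-valued channels of scheme 7 (total power P0, the three diagonal powers, and the real and imaginary parts of the off-diagonal terms) from per-pixel coherency matrices. The array layout and function name are our own assumptions.

```python
import numpy as np

def scheme7_channels(T):
    """Stack the 10 real-valued inputs of scheme 7 from a coherency-matrix image.

    `T` is an H x W x 3 x 3 complex array of per-pixel coherency matrices.
    Channels: P0 (total power, the trace of T), T11, T22, T33, and the real
    and imaginary parts of the off-diagonal terms T12, T13, T23.
    """
    p0 = np.real(T[..., 0, 0] + T[..., 1, 1] + T[..., 2, 2])
    chans = [p0,
             np.real(T[..., 0, 0]), np.real(T[..., 1, 1]), np.real(T[..., 2, 2]),
             np.real(T[..., 0, 1]), np.real(T[..., 0, 2]), np.real(T[..., 1, 2]),
             np.imag(T[..., 0, 1]), np.imag(T[..., 0, 2]), np.imag(T[..., 1, 2])]
    return np.stack(chans, axis=-1)  # H x W x 10

# Toy 8 x 8 image of random complex 3 x 3 matrices.
T = np.random.rand(8, 8, 3, 3) + 1j * np.random.rand(8, 8, 3, 3)
print(scheme7_channels(T).shape)  # (8, 8, 10)
```

The same pattern extends to scheme 8 by appending the RSD-derived channels (P2, P3, PS, PD, PV, and the remaining decomposition parameters) to the stack.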

4.3. Classification Results on VGG16

Similarly, we validated the eight polarimetric data input schemes on VGG16. Table 5 presents the accuracy of each land category on VGG16, along with the highest overall accuracy, average overall accuracy, and distribution of Kappa coefficients. The table reveals that the classification accuracy for the tidal flat category under the eight data input schemes aligns with the experimental results of AlexNet. This indicates that the decomposed polarimetric scattering features indeed contribute to the classification of land categories. It also suggests that the six-parameter polarimetric data input scheme 1 provides insufficient information for CNN classification. We speculate that this is because polarization features such as the correlation coefficients included in the scheme cannot effectively represent the targets in the PolSAR image, and because six polarization features are fewer than the nine independent real elements of the T matrix, indicating a lack of information.
Continuously optimizing the input scheme and incorporating more polarimetric scattering features favorable for classification into the CNN will help improve the final classification accuracy. Furthermore, the conclusion that the results from classifying with P2 and P3 are better than PS and PD is also validated. When using all of the information from the T matrix for classification, higher accuracy can be achieved, and the processing time is also less than that of the 21-parameter polarimetric data input scheme. However, when using 21 elements to classify PolSAR images, better results can be achieved in terms of accuracy. Therefore, if the accuracy requirement is not very high, all elements in the T-matrix can be used as the selection scheme.
It is notable that when employing all parameters decomposed from the T-matrix and reflection symmetry, the accuracy of tidal flat classification reaches 99.8%. In contrast, AlexNet achieves a classification accuracy of 90.6% with the same input scheme. Thus, VGG16 exhibits a stronger capacity than AlexNet to recognize polarimetric scattering features of land categories in complex environments. Additionally, VGG16 maintains a relatively high accuracy across various land categories.
Figure 5 illustrates the classification results of the eight research schemes using VGG16. For every scheme, VGG16 produces a better overall classification of the image, and the clustering of each land cover class is tighter than with AlexNet, indicating that, in terms of the networks used, VGG16 can extract deeper features from PolSAR images.
Simultaneously, we conducted a statistical comparison of the classification results of the two network architectures, as depicted in Figure 6. “OA” represents the highest overall classification accuracy, and “AA” represents the average overall classification accuracy. Under the 21-parameter polarized data input scheme, AlexNet achieved a higher overall accuracy than VGG16. However, its highest overall accuracy was not stable and fluctuated significantly, whereas VGG16 was more stable. Thus, when classifying PolSAR data using a CNN, a deeper network does not necessarily ensure higher performance: AlexNet, with only five convolutional layers, can achieve high classification accuracy, although deeper networks achieve more stable classification results.

5. Conclusions

This study delved into polarization data input schemes at the neural network’s input stage. Eight schemes were proposed and tested using classic CNN models—AlexNet and VGG16—as the primary experimental networks. The findings on various combinations of polarization scattering features are summarized as follows:
  • The classification performance utilizing total power values of the second component (P2) and the third component (P3), obtained through reflection symmetry decomposition, surpasses the research scheme using surface scattering power (PS) and second-order scattering power (PD) from RSD.
  • The six-parameter polarization data input scheme [39,40,41] provides incomplete information. The seven alternative methods designed alongside it all outperform it. Therefore, the six-parameter scheme is not recommended.
  • Concerning polarization data input schemes with limited computational resources, direct use of scheme 7, which encompasses all of the information of the T-matrix, is suggested. If device configuration allows, prioritizing the use of the 21-parameter polarization data input scheme 8, including all parameters of the T-matrix and RSD, is recommended.
  • Among the two classic CNN models in the experiment, VGG16 exhibits better stability, while AlexNet, with its five convolutional layers, achieves higher overall classification accuracy. Therefore, for PolSAR image classification using a CNN, an excessively deep network may not be necessary. However, deeper networks tend to offer better stability in training accuracy.
This study highlights that deep CNNs cannot spontaneously learn all polarization feature information. Hence, it is crucial to ensure the input polarization feature information is mathematically complete, as incomplete input results in the loss of some polarization information in classification. There is also a need to input more polarization feature information into deep neural networks, provided computational resources allow. However, further research is required to determine whether all extractable polarization feature information should be inputted into the network, the necessity of having over a hundred polarization feature parameters as input, and whether redundant information is abundant. Our future work will explore more effective polarization information in PolSAR data, propose polarization data input schemes for better utilization of object back-scattering information with increased efficiency, and enhance classification performance while maintaining computational efficiency.

Author Contributions

Methodology, W.A.; Formal analysis, S.Z.; Data curation, L.C., Y.Z. and T.X.; Writing—original draft, S.Z.; Supervision, Z.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant numbers 2022YFB3902404 and 2021YFC2803304.

Data Availability Statement

The experiment data can be downloaded from https://osdds.nsoas.org.cn/ (last access: 30 October 2023).

Acknowledgments

We express our sincere appreciation to An [36] for generously providing the RSD code. Our gratitude extends to the National Satellite Ocean Application Center of the Ministry of Natural Resources for its commendable efforts in developing and maintaining the ocean data distribution system and offering superior querying and downloading services for GF-3 satellite data [46]. Finally, we acknowledge and value the constructive feedback provided by the reviewers on this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yajima, Y.; Yamaguchi, Y.; Sato, R.; Yamada, H.; Boerner, W.-M. POLSAR Image Analysis of Wetlands Using a Modified Four-Component Scattering Power Decomposition. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1667–1673. [Google Scholar] [CrossRef]
  2. Shi, J.; He, T.; Ji, S.; Nie, M.; Jin, H. CNN-improved Superpixel-to-pixel Fuzzy Graph Convolution Network for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4410118. [Google Scholar] [CrossRef]
  3. Gu, M.; Wang, Y.; Liu, H.; Wang, P. PolSAR Ship Detection Based on Noncircularity and Oblique Subspace Projection. IEEE Geosci. Remote Sens. Lett. 2023, 20, 4008805. [Google Scholar] [CrossRef]
  4. Ji, Y.; Dong, Z.; Zhang, Y.; Tang, F.; Mao, W.; Zhao, H.; Xu, Z.; Zhang, Q.; Zhao, B.; Gao, H. Equatorial Ionospheric Scintillation Measurement in Advanced Land Observing Satellite (ALOS) Phased Array-Type L-Band Synthetic Aperture Radar (PALSAR) Observations. Engineering, 2024; in press. [Google Scholar] [CrossRef]
  5. Tang, F.; Ji, Y.; Zhang, Y.; Dong, Z.; Wang, Z.; Zhang, Q.; Zhao, B.; Gao, H. Drifting ionospheric scintillation simulation for L-band geosynchronous SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 852–854. [Google Scholar] [CrossRef]
  6. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef]
  7. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  8. Huynen, J.R. Physical reality of radar targets. Proc. SPIE 1993, 1748, 86–96. [Google Scholar]
  9. Lee, J.-S.; Grunes, M.R.; Kwok, R. Classification of multi-look polarimetric SAR data based on complex Wishart distribution. Int. J. Remote Sens. 1994, 15, 2299–2311. [Google Scholar] [CrossRef]
  10. Zhang, F.; Li, P.; Zhang, Y.; Liu, X.; Ma, X.; Yin, Z. An Enhanced DeepLabv3+ for PolSAR image classification. In Proceedings of the 2023 4th International Conference on Computer Engineering and Application (ICCEA), Hangzhou, China, 7–9 April 2023; pp. 743–746. [Google Scholar] [CrossRef]
  11. Zhang, Q.; He, C.; He, B.; Tong, M. Learning Scattering Similarity and Texture-Based Attention with Convolutional Neural Networks for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5207419. [Google Scholar] [CrossRef]
  12. Nie, W.; Huang, K.; Yang, J.; Li, P. A Deep Reinforcement Learning-Based Framework for PolSAR Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4403615. [Google Scholar] [CrossRef]
  13. Ulaby, F.T.; Elachi, C. Radar polarimetry for geoscience applications. In Geocarto International; Artech House: Norwood, MA, USA, 1990; p. 376. Available online: http://www.informaworld.com (accessed on 30 October 2023).
  14. Yang, M.; Zhang, L.; Shiu, S.C.K.; Zhang, D. Gabor feature based robust representation and classification for face recognition with Gabor occlusion dictionary. Pattern Recognit. 2013, 46, 1865–1878. [Google Scholar] [CrossRef]
  15. Wang, X.; Han, T.X.; Yan, S. An HOG-LBP human detector with partial occlusion handing. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 32–39. [Google Scholar]
  16. Lazebnik, S.; Schmid, C.; Ponce, J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), New York, NY, USA, 17–22 June 2006; pp. 2169–2178. [Google Scholar]
  17. Chen, Q.; Li, L.; Xu, Q.; Yang, S.; Shi, X.; Liu, X. Multi-feature segmentation for high-resolution polarimetric SAR data based on fractal net evolution approach. Remote Sens. 2011, 9, 570. [Google Scholar] [CrossRef]
  18. Hua, W.; Wang, S.; Xie, W.; Guo, Y.; Jin, X. Dual-channel convolutional neural network for polarimetric SAR images classification. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3201–3204. [Google Scholar]
  19. Ren, Z.; Hou, B.; Wen, Z.; Jiao, L. Patch-sorted deep feature learning for high resolution SAR image classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 3113–3126. [Google Scholar] [CrossRef]
  20. Kilbride, J.B.; Poortinga, A.; Bhandari, B.; Thwal, N.S.; Quyen, N.H.; Silverman, J.; Tenneson, K.; Bell, D.; Gregory, M.; Kennedy, R.; et al. A Near Real-Time Mapping of Tropical Forest Disturbance Using SAR and Semantic Segmentation in Google Earth Engine. Remote Sens. 2023, 15, 5223. [Google Scholar] [CrossRef]
  21. Shi, J.; Wang, W.; Jin, H.; He, T. Complex matrix and multi-feature collaborative learning for polarimetric SAR image classification. Appl. Soft Comput. 2023, 134, 109965. [Google Scholar] [CrossRef]
  22. Shang, R.; Wang, J.; Jiao, L.; Yang, X.; Li, Y. Spatial feature-based convolutional neural network for PolSAR image classification. Appl. Soft Comput. 2022, 123, 108922. [Google Scholar] [CrossRef]
  23. Zakhvatkina, N.Y.; Alexandrov, V.Y.; Johannessen, O.M.; Sandven, S.; Frolov, I.Y. Classification of Sea Ice Types in ENVISAT Synthetic Aperture Radar Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2587–2600. [Google Scholar] [CrossRef]
  24. Zhang, D.; Wang, W.; Gade, M.; Zhou, H. TENet: A Texture-Enhanced Network for Intertidal Sediment and Habitat Classification in Multiband PolSAR Images. Remote Sens. 2024, 16, 972. [Google Scholar] [CrossRef]
  25. Zhu, L.; Ji, D.; Zhu, S.; Gan, W.; Wu, W.; Yan, J. Learning Statistical Texture for Semantic Segmentation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 12532–12541. [Google Scholar]
  26. Wang, T.; Yang, X.; Wang, Y.; Fang, J.; Jia, L. A multi-level SAR sea ice image classification method by incorporating egg-code-based expert knowledge. In Proceedings of the 2012 5th International Congress on Image and Signal Processing (CISP), Chongqing, China, 16–18 October 2012. [Google Scholar]
  27. Liu, M.; Dai, Y.; Zhang, J.; Ren, G.; Meng, J.; Zhang, X. Research on Sea Ice Secondary Classification Method Using High-Resolution Fully Polarimetric Synthetic Aperture Radar Data. Acta Oceanol. Sin. 2013, 4, 80–87. [Google Scholar]
  28. Wang, W.; Gade, M.; Stelzer, K.; Kohlus, J.; Zhao, X.; Fu, K. A Classification Scheme for Sediments and Habitats on Exposed Intertidal Flats with Multi-Frequency Polarimetric SAR. Remote Sens. 2021, 13, 360. [Google Scholar] [CrossRef]
  29. Hughes, M.G.; Glasby, T.M.; Hanslow, D.J.; West, G.J.; Wen, L. Random Forest Classification Method for Predicting Intertidal Wetland Migration Under Sea Level Rise. Front. Environ. Sci. 2022, 10, 749950. [Google Scholar] [CrossRef]
  30. Davies, B.F.R.; Gernez, P.; Geraud, A.; Oiry, S.; Rosa, P.; Zoffoli, M.L.; Barillé, L. Multi- and hyperspectral classification of soft-bottom intertidal vegetation using a spectral library for coastal biodiversity remote sensing. Remote Sens. Environ. 2023, 290, 113554. [Google Scholar] [CrossRef]
  31. Kersten, P.R.; Lee, J.S.; Ainsworth, T.L. Unsupervised classification of polarimetric synthetic aperture radar images using fuzzy clustering and EM clustering. IEEE Trans. Geosci. Remote Sens. 2005, 43, 519–527. [Google Scholar] [CrossRef]
  32. Wang, P.; Zhang, X.; Shi, L.; Liu, M.; Liu, G.; Cao, C.; Wang, R. Assessment of Sea-Ice Classification Capabilities during Melting Period Using Airborne Multi-Frequency PolSAR Data. Remote Sens. 2024, 16, 1100. [Google Scholar] [CrossRef]
  33. Liu, X.; Jiao, L.; Tang, X.; Sun, Q.; Zhang, D. Polarimetric Convolutional Network for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3040–3054. [Google Scholar] [CrossRef]
  34. Campos-Taberner, M.; García-Haro, F.J.; Martínez, B.; Izquierdo-Verdiguier, E.; Atzberger, C.; Camps-Valls, G.; Gilabert, M.A. Understanding deep learning in land use classification based on Sentinel-2 time series. Sci. Rep. 2020, 10, 17188. [Google Scholar] [CrossRef] [PubMed]
  35. Garg, R.; Kumar, A.; Bansal, N.; Prateek, M.; Kumar, S. Semantic segmentation of PolSAR image data using advanced deep learning model. Sci. Rep. 2021, 11, 15365. [Google Scholar] [CrossRef]
  36. Cui, X.; Yang, F.; Wang, X.; Ai, B.; Luo, Y.; Ma, D. Deep learning model for seabed sediment classification based on fuzzy ranking feature optimization. Mar. Geol. 2021, 432, 106390. [Google Scholar] [CrossRef]
  37. Wu, W.; Li, H.; Li, X.; Guo, H.; Zhang, L. PolSAR Image Semantic Segmentation Based on Deep Transfer Learning—Realizing Smooth Classification with Small Training Sets. IEEE Geosci. Remote Sens. Lett. 2019, 16, 977–981. [Google Scholar] [CrossRef]
  38. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef]
  39. Ai, J.; Wang, F.; Mao, Y.; Luo, Q.; Yao, B.; Yan, H.; Xing, M.; Wu, Y. A Fine PolSAR Terrain Classification Algorithm Using the Texture Feature Fusion-Based Improved Convolutional Autoencoder. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5218714. [Google Scholar] [CrossRef]
  40. Zhou, Y.; Wang, H.; Xu, F.; Jin, Y.-Q. Polarimetric SAR image classification using deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939. [Google Scholar] [CrossRef]
  41. Chen, S.-W.; Tao, C.-S. PolSAR image classification using polarimetric-feature-driven deep convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 627–631. [Google Scholar] [CrossRef]
  42. Feng, Z.; Min, T.; Xie, W.; Hanqiang, L. A new parallel dual-channel fully convolutional network via semi-supervised fcm for polsar image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4493–4505. [Google Scholar]
  43. Shi, J.; Jin, H.; Li, X. A Novel Multi-Feature Joint Learning Method for Fast Polarimetric SAR Terrain Classification. IEEE Access 2020, 8, 30491–30503. [Google Scholar] [CrossRef]
  44. van Zyl, J.J.; Arii, M.; Kim, Y. Model-based decomposition of polarimetric SAR covariance matrices constrained for nonnegative eigenvalues. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3452–3459. [Google Scholar] [CrossRef]
  45. Jafari, Z.; Karami, E.; Taylor, R.; Bobby, P. Enhanced Ship/Iceberg Classification in SAR Images Using Feature Extraction and the Fusion of Machine Learning Algorithms. Remote Sens. 2023, 15, 5202. [Google Scholar] [CrossRef]
  46. Ren, S.; Zhou, F.; Bruzzone, L. Transfer-Aware Graph U-Net with Cross-Level Interactions for PolSAR Image Semantic Segmentation. Remote Sens. 2024, 16, 1428. [Google Scholar] [CrossRef]
  47. Yin, Q.; Hong, W.; Zhang, F.; Pottier, E. Optimal combination of polarimetric features for vegetation classification in PolSAR image. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 3919–3931. [Google Scholar] [CrossRef]
  48. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. Proc. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  49. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  50. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
  51. Gou, S.; Li, X.; Yang, X. Coastal Zone Classification with Fully Polarimetric SAR Imagery. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1616–1620. [Google Scholar] [CrossRef]
  52. Wang, Y.; Cheng, J.; Zhou, Y.; Zhang, F.; Yin, Q. A Multichannel Fusion Convolutional Neural Network Based on Scattering Mechanism for PolSAR Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4007805. [Google Scholar] [CrossRef]
  53. Xiao, D.; Wang, Z.; Wu, Y.; Gao, X.; Sun, X. Terrain Segmentation in Polarimetric SAR Images Using Dual-Attention Fusion Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4006005. [Google Scholar] [CrossRef]
  54. Gui, R.; Xu, X.; Yang, R.; Xu, Z.; Wang, L.; Pu, F. A General Feature Paradigm for Unsupervised Cross-Domain PolSAR Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4013305. [Google Scholar] [CrossRef]
  55. Bi, H.; Sun, J.; Xu, Z. A Graph-Based Semisupervised Deep Learning Model for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2116–2132. [Google Scholar] [CrossRef]
  56. Cui, Y.; Liu, F.; Jiao, L.; Guo, Y.; Liang, X.; Li, L.; Yang, S.; Qian, X. Polarimetric Multipath Convolutional Neural Network for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5207118. [Google Scholar] [CrossRef]
  57. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. [Google Scholar] [CrossRef]
  58. An, W. Research on Target Polarization Decomposition and Scattering Characteristic Extraction Based on Polarized SAR. Ph.D. Dissertation, Tsinghua University, Beijing, China, 2010. [Google Scholar]
  59. An, W.T.; Lin, M.S. A Reflection Symmetry Approximation of Multi-look Polarimetric SAR Data and its Application to Freeman-Durden Decomposition. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3649–3660. [Google Scholar] [CrossRef]
  60. An, W.; Lin, M.; Yang, H. Modified Reflection Symmetry Decomposition and a New Polarimetric Product of GF-3. IEEE Geosci. Remote Sens. Lett. 2022, 19, 8019805. [Google Scholar] [CrossRef]
  61. Chen, J.; Chen, Y.L.; An, W.T.; Cui, Y.; Yang, J. Nonlocal filtering for polarimetric SAR data: A pretest approach. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1744–1754. [Google Scholar] [CrossRef]
  62. Lee, J.S. Digital image enhancement and noise filtering by use of local statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980, 2, 165–168. [Google Scholar] [CrossRef] [PubMed]
  63. Novak, L.M.; Burl, M.C. Optimal speckle reduction in polarimetric SAR imagery. IEEE Trans. Aerosp. Electron. Syst. 1990, 26, 293–305. [Google Scholar] [CrossRef]
  64. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar]
  65. Gao, Y.; Li, W.; Zhang, M.; Wang, J.; Sun, W.; Tao, R.; Du, Q. Hyperspectral and multispectral classification for coastal wetland using depthwise feature interaction network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5512615. [Google Scholar] [CrossRef]
  66. User Manual of Gaofen-3 Satellite Products; China Resources Satellite Application Center: Beijing, China, 2016.
  67. Bentes, C.; Velotto, D.; Tings, B. Ship classification in TerraSAR-X images with convolutional neural networks. IEEE J. Ocean. Eng. 2018, 43, 258–266. [Google Scholar] [CrossRef]
  68. Sunaga, Y.; Natsuaki, R.; Hirose, A. Land form classification and similar land-shape discovery by using complex-valued convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7907–7917. [Google Scholar] [CrossRef]
  69. Hou, X.; Wei, A.; Song, Q.; Lai, J.; Wang, H.; Xu, F. FUSAR-Ship: Building a high-resolution SAR-AIS matchup dataset of Gaofen-3 for ship detection and recognition. Sci. China Inf. Sci. 2020, 63, 140303. [Google Scholar] [CrossRef]
  70. China Ocean Satellite Data Service System. Available online: https://osdds.nsoas.org.cn/ (accessed on 30 October 2023).
Figure 1. Classification of eight polarimetric data input schemes.
Figure 2. Research area and ground truth map.
Figure 3. Distribution of training, validation, and testing samples. (a) Image from 14 September 2021; (b) Image from 14 September 2021; (c) Image from 13 October 2021; (d) Image from 12 October 2017.
Figure 4. Classification results of eight research schemes on AlexNet.
Figure 5. Classification results of the eight polarized data input schemes on VGG16.
Figure 6. Trend chart of overall classification accuracy and average accuracy.
Table 1. List of eight polarization data input schemes.
Scheme | Parameters | Polarization Features
1 | 6 | Non-normalized P0, T22, T33, coe12, coe13, coe23
2 | 6 | P0, T22, T33, coe12, coe13, coe23
3 | 7 | P0, T11, T22, T33, coe12, coe13, coe23
4 | 7 | P0, T11, T22, T33, PS, PD, PV
5 | 7 | P0, T11, T22, T33, P2, P3, PV
6 | 9 | P0, T11, T22, T33, P2, P3, PS, PD, PV
7 | 10 | P0, T11, T22, T33, Re(T12), Re(T13), Re(T23), Im(T12), Im(T13), Im(T23)
8 | 21 | P0, T11, T22, T33, Re(T12), Re(T13), Re(T23), Im(T12), Im(T13), Im(T23), P2, P3, PS, PD, PV, x, y, a, b
Table 2. Experiment images.
| ID | Date | Time (UTC) | Inc. Angle (°) | Mode | Resolution | Use |
|---|---|---|---|---|---|---|
| 1 | 2021.09.14 | 22:14:11 | 30.98 | QPSI | 8 m | Train |
| 2 | 2021.09.14 | 22:14:06 | 30.97 | QPSI | 8 m | Train |
| 3 | 2021.10.13 | 10:05:35 | 37.71 | QPSI | 8 m | Train |
| 4 | 2017.10.12 | 22:07:36 | 36.89 | QPSI | 8 m | Test |
Table 3. Distribution of Training and Validation Datasets.
| Images | Nearshore Water | Seawater | Spartina Alterniflora | Tamarix | Reed | Tidal Flat | Suaeda Salsa |
|---|---|---|---|---|---|---|---|
| 20210914_1 | 500 | 400 | 1000 | 500 | 500 | 500 | 500 |
| 20210914_2 | 500 | 200 | 0 | 0 | 0 | 500 | 0 |
| 20211013 | 0 | 400 | 0 | 500 | 500 | 0 | 500 |
| Total | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 |
Table 4. Classification accuracy of the eight polarimetric data input schemes on the AlexNet network.
| Classification Accuracy (%) | Scheme 1 | Scheme 2 | Scheme 3 | Scheme 4 | Scheme 5 | Scheme 6 | Scheme 7 | Scheme 8 |
|---|---|---|---|---|---|---|---|---|
| Nearshore water | 96.8 | 100 | 76.9 | 85.0 | 93.4 | 94.8 | 96.4 | 99.7 |
| Seawater | 96.9 | 100 | 99.5 | 98.8 | 98.7 | 99.2 | 98.7 | 99.7 |
| Spartina alterniflora | 96.8 | 100 | 93.3 | 93.2 | 85.2 | 92.9 | 95.5 | 100 |
| Tamarix | 100 | 97.6 | 99.0 | 93.8 | 75.9 | 100 | 96.0 | 96.7 |
| Reed | 94.5 | 98.3 | 93.4 | 63.7 | 93.3 | 94.9 | 99.2 | 100 |
| Tidal flat | 49.3 | 16.2 | 49.5 | 78.6 | 85.5 | 61.1 | 71.6 | 90.6 |
| Suaeda salsa | 50.8 | 92.7 | 98.4 | 97.6 | 95.1 | 99.4 | 98.2 | 100 |
| Overall accuracy, independent run 1 | 83.59 | 86.40 | 87.14 | 87.24 | 89.59 | 91.76 | 93.66 | 98.10 |
| Overall accuracy, independent run 2 | 81.41 | 85.19 | 84.27 | 87.19 | 88.91 | 91.76 | 91.84 | 96.54 |
| Overall accuracy, independent run 3 | 77.83 | 82.64 | 84.01 | 85.37 | 86.30 | 87.69 | 91.06 | 96.44 |
| Overall accuracy, independent run 4 | 73.66 | 81.86 | 83.67 | 85.29 | 86.19 | 86.61 | 89.29 | 96.40 |
| Overall accuracy, independent run 5 | 68.87 | 81.53 | 83.66 | 84.96 | 85.30 | 86.60 | 89.33 | 96.36 |
| Average overall accuracy | 77.072 | 83.524 | 84.550 | 86.010 | 87.258 | 88.884 | 91.036 | 96.768 |
| Kappa coefficient | 0.8085 | 0.8413 | 0.8500 | 0.8512 | 0.8785 | 0.9038 | 0.9260 | 0.9778 |
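Each average overall accuracy entry in Tables 4 and 5 is the arithmetic mean of the five independent runs above it; for example, scheme 1 on AlexNet gives (83.59 + 81.41 + 77.83 + 73.66 + 68.87) / 5 = 77.072. A one-line check:

```python
# Five independent-run overall accuracies for scheme 1 on AlexNet (Table 4)
runs_scheme1 = [83.59, 81.41, 77.83, 73.66, 68.87]
avg = sum(runs_scheme1) / len(runs_scheme1)
print(round(avg, 3))  # 77.072
```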
Table 5. Classification accuracy of eight polarimetric data input schemes on the VGG16 network.
| Classification Accuracy (%) | Scheme 1 | Scheme 2 | Scheme 3 | Scheme 4 | Scheme 5 | Scheme 6 | Scheme 7 | Scheme 8 |
|---|---|---|---|---|---|---|---|---|
| Nearshore water | 95.7 | 82.5 | 91.1 | 91.3 | 94.9 | 93.4 | 90.5 | 77.2 |
| Seawater | 97.7 | 98.8 | 99.8 | 98.5 | 99.4 | 99.3 | 99.3 | 99.6 |
| Spartina alterniflora | 96.6 | 95.9 | 94.1 | 95.7 | 93.5 | 94.9 | 98.7 | 100 |
| Tamarix | 98.5 | 100 | 100 | 67.5 | 100 | 89.6 | 99.9 | 90.8 |
| Reed | 93.8 | 85.0 | 91.3 | 68.0 | 82.2 | 69.6 | 91.7 | 99.9 |
| Tidal flat | 28.5 | 42.0 | 25.7 | 88.5 | 67.2 | 95.8 | 71.4 | 99.8 |
| Suaeda salsa | 66.2 | 91.3 | 94.1 | 98.9 | 100 | 100 | 99.6 | 100 |
| Overall accuracy, independent run 1 | 82.43 | 85.07 | 85.16 | 86.91 | 91.03 | 91.80 | 93.01 | 95.33 |
| Overall accuracy, independent run 2 | 82.21 | 85.03 | 84.66 | 86.63 | 88.99 | 90.61 | 92.03 | 94.93 |
| Overall accuracy, independent run 3 | 81.44 | 84.74 | 84.10 | 86.57 | 87.50 | 90.54 | 91.94 | 94.76 |
| Overall accuracy, independent run 4 | 79.44 | 82.06 | 83.64 | 84.90 | 86.77 | 90.43 | 91.29 | 92.96 |
| Overall accuracy, independent run 5 | 77.53 | 81.93 | 83.41 | 80.47 | 86.83 | 90.37 | 89.94 | 91.97 |
| Average overall accuracy | 80.610 | 83.766 | 84.194 | 85.096 | 88.224 | 90.750 | 91.642 | 93.990 |
| Kappa coefficient | 0.7950 | 0.8258 | 0.8268 | 0.8473 | 0.8953 | 0.9043 | 0.9185 | 0.9455 |
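The Kappa coefficients reported in Tables 4 and 5 follow Cohen's standard definition, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement (overall accuracy) and p_e the agreement expected by chance. A minimal sketch (our illustration, not the authors' code) computing it from a confusion matrix:

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's Kappa from a square confusion matrix
    (rows: ground truth, columns: prediction)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                   # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

For example, the confusion matrix [[45, 5], [15, 35]] has p_o = 0.80 and p_e = 0.50, giving κ = 0.6; a perfectly diagonal matrix gives κ = 1.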

Share and Cite

MDPI and ACS Style

Zhang, S.; Cui, L.; Zhang, Y.; Xia, T.; Dong, Z.; An, W. Research on Input Schemes for Polarimetric SAR Classification Using Deep Learning. Remote Sens. 2024, 16, 1826. https://doi.org/10.3390/rs16111826

