Technical Note

Unsupervised SAR Image Change Type Recognition Using Regionally Restricted PCA-Kmean and Lightweight MobileNet

1 College of Data and Target Engineering, Information Engineering University, Zhengzhou 450000, China
2 The Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(24), 6362; https://doi.org/10.3390/rs14246362
Submission received: 5 November 2022 / Revised: 2 December 2022 / Accepted: 12 December 2022 / Published: 15 December 2022

Abstract

Change detection using multi-temporal synthetic aperture radar (SAR) images only detects the changed area and provides no further information, such as the change type, which limits its development. This study proposes a new unsupervised application of SAR images that can recognize the change type of a changed area. First, a regionally restricted principal component analysis k-means (RRPCA-Kmean) clustering algorithm, combining principal component analysis, k-means clustering, and mathematical morphology operations, was designed to obtain pre-classification results in combination with change type vectors. Second, a lightweight MobileNet was designed based on the results of the first stage to reclassify the pre-classification results and obtain the change recognition results for the changed regions. Experimental results on SAR datasets with different resolutions show that the method achieves reliable change recognition while maintaining good change detection correctness.

1. Introduction

Globally, natural phenomena such as earthquakes and heavy rainfall events, which sometimes occur simultaneously, can lead to building collapse and flooding [1,2,3], causing significant damage and serious economic and social impacts on the natural environment and human infrastructure. Synthetic aperture radar (SAR) sensors are used in remote sensing geodynamic monitoring owing to their all-day, all-weather operation. SAR image change detection has become increasingly important for disaster assessment in urban areas [4], deforestation [5], and flood and glacier monitoring, as it can analyze events that alter a geographical area following a disaster [6]. However, existing change detection only detects the changed area and cannot recognize the change type, that is, from which ground-object type to which. If the change type could be recognized directly, the applications of bi-temporal SAR images would be greatly expanded.
Several efforts have been made in the application of SAR images [7,8,9,10]. As in change detection, the inherent speckle noise in SAR images makes change recognition difficult. A common approach is to first derive the difference image (DI) from multi-temporal SAR images and then analyze the DI to obtain a change map [11]; our proposed change recognition draws on this idea.
For DI analysis, the hierarchical fuzzy C-means clustering (HFCM) algorithm was used in previous studies [12,13,14] to obtain pre-classification results. In these studies, the HFCM pre-classification stage produced many misclassified pixels, which provided incorrect training samples to the deep learning classification model and ultimately resulted in misclassification. A principal component analysis and k-means clustering algorithm (PCA-Kmean) [15] was proposed to obtain detection results that better retain the changed regions, albeit with more false alarms. Recently, deep learning (DL) has become an effective nonlinear modeling tool for the reclassification stage, and various neural network classification models are widely used for change detection, as they can extract high-dimensional abstract features from images to achieve better automatic classification. The standard convolutional neural network (CNN) [16] is a common classification model; however, its simple structure usually leads to poor classification results. Gong et al. [17] proposed a deep neural network (DNN) for SAR image change detection, but it suffers from a large number of parameters, slow training, and limited performance. PCANet [18] is time consuming owing to its long feature extraction time. At the same accuracy, however, smaller models have fewer parameters, train faster, and are easier to deploy on mobile devices. A SAR change detection pipeline that pre-classifies by clustering and then reclassifies with a neural network therefore calls for a lightweight yet accurate classification model. Recently, several lightweight DL networks such as SqueezeNet [19], ShuffleNet [20], and MobileNet V2 [21] have been proposed, all achieving good classification efficiency; however, their network depth is redundant for training on small image blocks, and their training time is long.
Inspired by the two-stage idea of clustering-based pre-classification and deep learning reclassification in SAR change detection, we designed a new unsupervised change recognition application using the regionally restricted PCA-Kmean (RRPCA-Kmean) algorithm and a lightweight network. It greatly expands the application of bi-temporal SAR images, which is no longer limited to the detection of changed regions as in existing change detection.
The novelty of this study is based on the following three points:
(1) An RRPCA-Kmean clustering algorithm was designed to provide highly reliable pre-classification results. These results can be used as pseudo-labels for training samples that emphasize central pixels and ignore edge noise.
(2) A Lightweight MobileNet (LMNet) classification model was designed to provide a fast and efficient classification network for change recognition.
(3) A two-stage unsupervised change recognition framework was designed. The method simultaneously implements change region detection and change recognition.

2. Methodology

The method proposed in this study can be divided into two stages (pre-classification and reclassification), as illustrated in Figure 1. First, RRPCA-Kmean was used to obtain pre-classification results from the DI; subsequently, the pre-classification results were used as pseudo-labels to generate training samples from the bi-temporal SAR images. Finally, an LMNet was designed to train on the samples and reclassify the pre-classification results.

2.1. RRPCA-Kmean Clustering Algorithm

Two SAR images were taken from the same location at different times: $I_1 = \{ I_1(m,n),\ 1 \le m \le M,\ 1 \le n \le N \}$ and $I_2 = \{ I_2(m,n),\ 1 \le m \le M,\ 1 \le n \le N \}$, both of size $M \times N$. Change recognition is required to recognize the change type from $I_1$ to $I_2$.
The first step was to generate the initial DI from two original SAR images. Considering its ability to suppress speckles, log-ratio is a common operator in many change detection studies [22]. The DI is defined as follows:
$I_{DI}(m,n) = \log \dfrac{I_1(m,n) + 1}{I_2(m,n) + 1}$    (1)
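For illustration, the operator maps directly onto a few lines of NumPy. This is a minimal sketch assuming two co-registered intensity images of equal size; any despeckling or normalization applied before Equation (1) is not reproduced here.

```python
import numpy as np


def log_ratio_di(i1: np.ndarray, i2: np.ndarray) -> np.ndarray:
    """Log-ratio difference image of two co-registered SAR images, Eq. (1).

    The +1 offsets guard against taking the log of zero-intensity pixels.
    """
    return np.log((i1.astype(np.float64) + 1.0) /
                  (i2.astype(np.float64) + 1.0))
```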
Subsequently, we combined mathematical morphology, PCA, and k-means clustering to design the RRPCA-Kmean clustering algorithm. First, pre-classification results were obtained using PCA and k-means clustering, which assign a large number of potentially misclassified pixels to an intermediate class, greatly reducing the generation of incorrect labels. To reduce the classification errors caused by the large number of intermediate-class pixels in reclassification, we also introduced mathematical morphological erosion to restrict the pre-classification results. A 50 × 50 all-ones matrix was used as the structuring element for local erosion, replacing the original gray value with the minimum value within the 50 × 50 neighborhood. The pre-classification result contains only three gray values: 0, 0.5, and 1. The unchanged region with gray value 0 occupies most of the image; consequently, the gray values of the changed region (gray value 1) and the intermediate-class region (gray value 0.5) adjacent to it are reduced to 0, achieving the morphological region restriction. The process is summarized in Algorithm 1.
Algorithm 1 RRPCA-Kmean Algorithm
Input: DI
Step 1: Extract the PCA feature vectors.
Step 2: Run the k-means clustering algorithm to generate three classes $\Omega_c^1$, $\Omega_i^1$, and $\Omega_u^1$, where $\Omega_c^1$ is the pseudo-changed class, $\Omega_i^1$ is the pseudo-intermediate class, and $\Omega_u^1$ is the pseudo-unchanged class.
Step 3: Calculate the ratio of the mean value to the number of pixels for each class and sort the classes from smallest to largest ratio to obtain the three classes $\{\Omega_c^2, \Omega_i^2, \Omega_u^2\}$, the initial pre-classification result.
Step 4: Perform mathematical morphological erosion of the pre-classification result map using a 50 × 50 all-ones matrix.
Step 5: Take the pre-classification result map within the eroded range to obtain the RRPCA-Kmean result map.
Output: RRPCA-Kmean result map containing $\{\Omega_c, \Omega_i, \Omega_u\}$.
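A compact Python sketch of Algorithm 1 follows, assuming NumPy, SciPy, and scikit-learn. The neighborhood size used for the PCA features, the mapping from the Step 3 ordering onto the three classes, and the exact form of the Step 5 restriction are not fully specified in the text, so these parts are marked as assumptions in the comments.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.ndimage import grey_erosion
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA


def rrpca_kmean(di: np.ndarray, block: int = 4) -> np.ndarray:
    """Pre-classify a DI into 0 (unchanged), 0.5 (intermediate), 1 (changed)."""
    h, w = di.shape

    # Step 1: PCA feature vector for every pixel, built from its local
    # block x block neighborhood (the block size is an assumption; the
    # PCA-Kmean idea follows Celik [15]).
    pad = block // 2
    padded = np.pad(di, pad, mode="reflect")
    windows = sliding_window_view(padded, (block, block))[:h, :w]
    feats = PCA(n_components=3).fit_transform(
        windows.reshape(h * w, block * block))

    # Step 2: k-means clustering into three pseudo-classes.
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats).reshape(h, w)

    # Step 3: order the classes by their mean-value / pixel-count ratio;
    # mapping that order onto changed / intermediate / unchanged is an
    # assumption (here: largest ratio -> changed class).
    ratios = [di[labels == c].mean() / (labels == c).sum() for c in range(3)]
    order = np.argsort(ratios)
    gray = np.zeros((h, w))
    gray[labels == order[1]] = 0.5
    gray[labels == order[2]] = 1.0

    # Steps 4-5: 50 x 50 minimum (erosion) filter with an all-ones
    # structuring element; pixels whose neighborhood touches the unchanged
    # background fall back to 0 -- one plausible reading of the regional
    # restriction described in Section 2.1.
    return grey_erosion(gray, size=(50, 50))
```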

2.2. Generation of Training Samples

The next step was to extract training samples for change recognition. We further filtered the changed-class pixels in the regionally restricted pre-classification result. In the result of subtracting the time-1 image from the time-2 image, changed-class pixels with grayscale values greater than 0 denote land-to-water (LW) change labels, and changed-class pixels with grayscale values less than 0 denote water-to-land (WL) change labels. It is worth noting that mathematical morphology is introduced in this paper to improve the accuracy of the labels of different change types and reduce recognition errors. Unchanged-type labels are represented by the unchanged class in the pre-classification result. The training samples are generated as shown in Figure 2.
First, image patches centered at positions of interest (pixels belonging to $\Omega_c$ and $\Omega_u$) were generated. These patches contain enough change information around those positions. $P_{mn}^{I_1}$ denotes a patch centered at position $(m, n)$ in image $I_1$, with size $(k/2) \times (k/2)$; $P_{mn}^{I_2}$ denotes the corresponding patch in image $I_2$. To dilute the noise, bilinear interpolation was applied to obtain blocks of size $(k/2) \times k$. The two blocks were then combined and multiplied by a mask (in Figure 2, blue represents gray value 1 and white represents 0) to obtain the training samples. This masking step suppresses edge noise and emphasizes the central pixels.
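A sketch of this sample construction for a single position of interest is shown below, assuming SciPy for the bilinear interpolation. The mask is approximated here as a zeroed one-pixel border, since its exact pattern is only shown graphically in Figure 2, and positions are assumed to lie away from the image border.

```python
import numpy as np
from scipy.ndimage import zoom


def make_sample(i1: np.ndarray, i2: np.ndarray,
                m: int, n: int, k: int = 10) -> np.ndarray:
    """Build one k x k training sample centered at (m, n), per Section 2.2."""
    half = k // 2                      # patch side length: (k/2) x (k/2)
    r = half // 2
    p1 = i1[m - r:m - r + half, n - r:n - r + half].astype(float)
    p2 = i2[m - r:m - r + half, n - r:n - r + half].astype(float)

    # Bilinear interpolation (order=1) to (k/2) x k blocks dilutes speckle.
    b1 = zoom(p1, (1, 2), order=1)
    b2 = zoom(p2, (1, 2), order=1)

    # Stack the two blocks into one k x k sample.
    sample = np.vstack([b1, b2])

    # Center-emphasizing mask; the zeroed border is an assumption.
    mask = np.zeros((k, k))
    mask[1:k - 1, 1:k - 1] = 1.0
    return sample * mask
```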

2.3. Lightweight MobileNet Classification Model

MobileNet v2 [21] has many layers; it contains depthwise separable convolution and pointwise convolution modules as well as an inverted residual module, which we introduced into our proposed network. The depthwise separable and pointwise convolution modules mainly reduce the number of parameters and computations, while the inverted residual module mainly avoids the vanishing gradient problem and reduces information loss. Both are effective for our small image blocks, since they minimize the number of parameters and operations and shorten the training time. We designed a lightweight MobileNet reclassification model to reduce information loss and achieve efficient classification. The proposed model consists of five modules: Modules A, B, C, D, and E. Modules A and B are depthwise separable convolutions with an added 1 × 1 convolution; Module C is an inverted residual module; Module D is a squeeze-and-excitation (SE) module; and Module E is an efficient final-stage module. Depthwise separable convolution reduces the computational effort but tends to reduce accuracy; the added 1 × 1 pointwise convolution in Modules A and B helps to alleviate this problem. The inverted residual module in Module C helps to alleviate the information loss caused by dimensional transformations. Module D is a lightweight attention module that reduces the computational effort. To alleviate the high resource consumption at the output of the network, Module E uses a 1 × 1 convolution for expansion, and the ReLU6 activation function is applied immediately after the pooling layer to improve the network speed. The ReLU6 activation function maintains the robustness of the network well, as shown in Equation (2).
$\mathrm{ReLU6}(x) = \min(\max(0, x), 6)$    (2)
The network finally uses a 1 × 1 convolution for a linear output to prevent information loss due to dimensional transformations. A dropout layer helps to further reduce computation, accelerate convergence, and alleviate overfitting. This efficient final-stage module increases the computation speed while preserving accuracy. The network structure is shown in Figure 3, and the specific network body architecture is listed in Table 1.
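To make the Module A/B and Module D patterns concrete, here is a hedged PyTorch sketch (the paper's experiments were run in MATLAB); the batch normalization layers, channel counts, and SE reduction ratio are assumptions not stated in the text.

```python
import torch.nn as nn


class DepthwiseSeparable(nn.Module):
    """Module A/B pattern: depthwise 3x3 conv plus an added 1x1 pointwise
    conv, which recovers some accuracy lost to the depthwise factorization."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU6(inplace=True),          # Equation (2)
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU6(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class SqueezeExcite(nn.Module):
    """Module D pattern: squeeze-and-excitation channel attention
    (the reduction ratio of 4 is an assumption)."""

    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),         # squeeze to 1x1 per channel
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU6(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                # channel-wise reweighting
```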

3. Results

3.1. Datasets

Three real SAR image datasets, shown in Figure 4, were employed to demonstrate the superiority of the proposed approach. Table 2 gives a detailed description of each dataset, including sensor type, location, imaging dates, image size, resolution, and cause of change.
To evaluate the effectiveness of the proposed LMNet, related models were considered for comparison, including the standard CNN [16], SqueezeNet [19], ShuffleNet [20], and MobileNet v2 [21]. The training samples for each network were selected as in the RRPCA-Kmean stage. SqueezeNet, ShuffleNet, and MobileNet v2 were trained for classification using transfer learning, with the input size and the final fully connected and softmax layers modified. All experiments were performed in MATLAB 2020b. The initial learning rate was set to 0.001, and the Adam optimizer was used.
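For readers reproducing the comparison outside MATLAB, a minimal PyTorch training sketch with the stated settings (Adam, initial learning rate 0.001) might look as follows; the loss function, epoch count, and data loader are assumptions.

```python
import torch
from torch import nn, optim


def train(model: nn.Module, loader, epochs: int = 10,
          device: str = "cpu") -> nn.Module:
    """Train a 3-class (LW / WL / unchanged) classifier on pseudo-labeled
    samples; only Adam and lr=0.001 are taken from the text."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for samples, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(samples.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
    return model
```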

3.2. Evaluation Metric

We used the kappa coefficient (k) to assess change detection and four credible evaluation indexes to measure the performance of the proposed LMNet in change recognition: the land-to-water (LW) recognition accuracy $P_1$, the water-to-land (WL) recognition accuracy $P_2$, the overall accuracy (OA), and the average accuracy (AA).
$P_1 = \dfrac{TP_1}{TP_1 + FP_1 + FN_1}$    (3)

$P_2 = \dfrac{TP_2}{TP_2 + FP_2 + FN_2}$    (4)

$OA = \dfrac{TP_1 + TP_2}{TP_1 + TP_2 + FP_1 + FN_1 + FP_2 + FN_2}$    (5)

$AA = (P_1 + P_2)/2$    (6)
For each prediction, TP (pixels of a class predicted correctly), FP (pixels of other classes wrongly predicted as this class), and FN (pixels of this class predicted as another class) were counted separately. The subscripts in the above equations indicate that the counts were calculated separately for the two change types.
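These definitions translate directly into code; the sketch below computes $P_1$, $P_2$, OA, and AA from the per-class counts of Equations (3)–(6), while the kappa coefficient of the binary change map is a standard measure and is omitted here.

```python
def recognition_metrics(tp1: int, fp1: int, fn1: int,
                        tp2: int, fp2: int, fn2: int):
    """P1, P2, OA, and AA from per-class TP/FP/FN counts, Eqs. (3)-(6)."""
    p1 = tp1 / (tp1 + fp1 + fn1)                              # LW accuracy
    p2 = tp2 / (tp2 + fp2 + fn2)                              # WL accuracy
    oa = (tp1 + tp2) / (tp1 + tp2 + fp1 + fn1 + fp2 + fn2)    # overall
    aa = (p1 + p2) / 2.0                                      # average
    return p1, p2, oa, aa
```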

3.3. Analysis of Results

The final recognition results were evaluated subjectively through the color-coded change-type maps. For an objective evaluation, the kappa coefficient k of change detection was computed between the binarized result map and the binary reference image, and LW, WL, OA, and AA were calculated by comparing the change recognition map with the change-type reference map. Figure 5a–e shows the recognition results of the different methods, and Figure 5f shows the corresponding reference map. Table 3 and Figure 6 present the quantitative results of each method.
Speckle noise is a critical and unavoidable factor affecting the change detection results in SAR image processing. In addition, complex backgrounds, such as the edges of land and water, can also introduce some interference.
For the standard CNN, Figure 5a shows that some pixels in dataset A were missed, while extra pixels that should not have been detected appear in dataset B. Table 3 also shows that the standard CNN had lower evaluation metrics on datasets B and C. These results indicate that simple shallow neural networks do not generalize sufficiently and are prone to false alarms or missed detections.
For SqueezeNet, Figure 5b shows many false alarms in all three datasets, so it was not effective. As seen in Table 3, its OA (51.67%) was better than that of the CNN (45.93%) on dataset B but lower than that of the CNN on datasets A and C. This indicates that such a lightweight network is not suitable for high-resolution recognition of the change types occurring at river edges.
For ShuffleNet, Figure 5c shows more false detections in datasets A and C owing to noise interference. However, its recognition of the change types at the river edges in dataset B produced fewer false pixels than CNN and SqueezeNet, indicating that ShuffleNet is more suitable for change recognition at river edges. As shown in Table 3, its OA (70.08%) was lower than that of the CNN (74.97%) on dataset A. Therefore, this lightweight network needs further improvement to enhance its adaptability.
A comparison of Figure 5a–d shows that MobileNet V2 had the fewest recognition errors among the first four methods, and details were better retained; it recognizes each change type better than the first three methods. Its evaluation metrics in Table 3 were also the best among them, indicating that this deep lightweight network adapts better to the various datasets. Although MobileNet V2 is time-consuming, it can be further improved to suit different needs.
As for the proposed LMNet, a comparison of Figure 5a–e shows that its result was the best, with the fewest incorrectly recognized pixels. LMNet achieved the highest k, OA, and AA values on all datasets in Table 3. In addition, Table 4 shows that LMNet required far less time and had fewer parameters than the other lightweight networks, indicating that LMNet offers the best overall trade-off.
In addition, to visualize the comparison in Table 3, Figure 6 clearly shows that the indicators of our method are optimal compared with those of the other methods.

3.4. Analysis of the Patch Size

The training samples were captured as image patches of size k. We evaluated the performance of the proposed LMNet with k = 8, 10, 12, 14, 16, and 18. Figure 7 shows the relationship between k and OA: OA first increased and then decreased as k grew. The OA curve shows that the training sample size is very important for the change recognition task; however, large patch sizes increase the computational burden and may introduce noise that degrades change recognition performance. Therefore, we set k = 10 for the first two datasets and k = 14 for the last dataset. The sizes differ because the first two datasets have few, concentrated change regions, whereas the last dataset has many small change regions at a higher resolution; the larger patch contains more information and is more suitable for recognizing small change regions.

4. Conclusions

In this study, a two-stage unsupervised change recognition method based on RRPCA-Kmean and LMNet was proposed as a further application of change detection in SAR images. The RRPCA-Kmean algorithm designed in this study can be applied to various change detection and recognition methods that generate pre-classification results by clustering. The proposed training sample design emphasizes the central change pixels and suppresses edge noise. The proposed LMNet strikes a good balance between recognition time and recognition quality and has good application value. Our method achieved good results on SAR images with different resolutions, and the experiments demonstrate the potential of the algorithm for future change detection applications. In future work, we aim to recognize more types of changes.

Author Contributions

W.L. (Wei Liu) proposed the original idea and conceived the experiments. Z.L. and G.G. directed the experiments. Z.L. and C.N. contributed to the revision of the paper. W.L. (Wanjie Lu) provided suggestions on the language and structure of the paper. Z.L. compiled all the amendments proposed by the authors and revised the manuscript accordingly. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 41822105), the Fundamental Research Funds for the Central Universities (grant numbers 2682020ZT34 and 2682021CX071), and the State Key Laboratory of Geo-Information Engineering (grant numbers SKLGIE2020-Z-3-1 and SKLGIE2020-M-4-1).

Acknowledgments

The authors sincerely thank the editors and reviewers for their valuable revision suggestions, which have greatly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chunga, K.; Livio, F.A.; Martillo, C.; Lara-Saavedra, H.; Ferrario, M.F.; Zevallos, I.; Michetti, A.M. Landslides triggered by the 2016 Mw 7.8 Pedernales, Ecuador earthquake: Correlations with ESI-07 intensity, lithology, slope and PGA-h. Geosciences 2019, 9, 371.
  2. Ferrario, M. Landslides triggered by multiple earthquakes: Insights from the 2018 Lombok (Indonesia) events. Nat. Hazards 2019, 98, 575–592.
  3. Lê, T.T.; Froger, J.-L.; Minh, D.H.T. Multiscale framework for rapid change analysis from SAR image time series: Case study of flood monitoring in the central coast regions of Vietnam. Remote Sens. Environ. 2022, 269, 112837.
  4. Masoumi, Z. Flood susceptibility assessment for ungauged sites in urban areas using spatial modeling. J. Flood Risk Manag. 2022, 15, e12767.
  5. Zhao, F.; Sun, R.; Zhong, L.; Meng, R.; Huang, C.; Zeng, X.; Wang, M.; Li, Y.; Wang, Z. Monthly mapping of forest harvesting using dense time series Sentinel-1 SAR imagery and deep learning. Remote Sens. Environ. 2022, 269, 112822.
  6. De, A.; Upadhyaya, D.B.; Thiyaku, S.; Tomer, S.K. Use of Multi-sensor Satellite Remote Sensing Data for Flood and Drought Monitoring and Mapping in India. In Civil Engineering for Disaster Risk Reduction; Springer: Berlin/Heidelberg, Germany, 2022; pp. 27–41.
  7. Zhang, X.; Su, X.; Yuan, Q.; Wang, Q. Spatial–Temporal Gray-Level Co-Occurrence Aware CNN for SAR Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4018605.
  8. Chen, H.; Qi, Z.; Shi, Z. Remote sensing image change detection with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
  9. Wang, C.; Su, W.; Gu, H. SAR Image Change Detection Based on Semisupervised Learning and Two-Step Training. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4008905.
  10. Zhang, T.; Quan, S.; Yang, Z.; Guo, W.; Zhang, Z.; Gan, H. A Two-Stage Method for Ship Detection Using PolSAR Image. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
  11. Zhang, X.; Su, H.; Zhang, C.; Gu, X.; Tan, X.; Atkinson, P.M. Robust unsupervised small area change detection from SAR imagery using deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 173, 79–94.
  12. Gao, F.; Wang, X.; Gao, Y.; Dong, J.; Wang, S. Sea ice change detection in SAR images based on convolutional-wavelet neural networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1240–1244.
  13. Geng, J.; Ma, X.; Zhou, X.; Wang, H. Saliency-guided deep neural networks for SAR image change detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7365–7377.
  14. Zhang, X.; Su, H.; Zhang, C.; Atkinson, P.M.; Tan, X.; Zeng, X.; Jian, X. A Robust Imbalanced SAR Image Change Detection Approach Based on Deep Difference Image and PCANet. arXiv 2020, arXiv:2003.01768.
  15. Celik, T. Unsupervised change detection in satellite images using principal component analysis and k-means clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
  16. Wang, Q.; Yuan, Z.; Du, Q.; Li, X. GETNET: A general end-to-end 2-D CNN framework for hyperspectral image change detection. IEEE Trans. Geosci. Remote Sens. 2018, 57, 3–13.
  17. Gong, M.; Zhao, J.; Liu, J.; Miao, Q.; Jiao, L. Change detection in synthetic aperture radar images based on deep neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 125–138.
  18. Gao, F.; Dong, J.; Li, B.; Xu, Q. Automatic change detection in synthetic aperture radar images based on PCANet. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1792–1796.
  19. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
  20. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
  21. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
  22. Qu, X.; Gao, F.; Dong, J.; Du, Q.; Li, H.-C. Change detection in synthetic aperture radar images using a dual-domain network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
Figure 1. Proposed LMNet change recognition method.
Figure 2. Flowchart of the proposed training sample extraction.
Figure 3. Classification model of the proposed LMNet.
Figure 4. Datasets for change type recognition. The first row shows dataset A of the Yellow River Estuary; the second row shows dataset B of the Yellow River; and the third row shows dataset C of the Yellow River. (a,b) The two original SAR images; (c) the change recognition reference chart.
Figure 5. Change recognition results of different methods: (a) Results of the CNN; (b) results of SqueezeNet; (c) results of ShuffleNet; (d) results of MobileNet v2; and (e) results of LMNet. (f) The change recognition reference chart. Red indicates the change type of land to water (LW), green indicates the change type of water to land (WL), and black indicates the unchanged type.
Figure 6. Change recognition results of different methods on three datasets.
Figure 7. Relationship between OA and the size of the training sample on three real SAR datasets.
Table 1. Lightweight MobileNet body architecture.

Type      | Filter Shape     | Input Size
Conv      | 3 × 3 × 32       | 10 × 10 × 1
Module A  | 3 × 3 × 32       | 5 × 5 × 32
Module B  | 3 × 3 × 96       | 5 × 5 × 96
Module C  | 1 × 1 × 24 × 144 | 3 × 3 × 24
Module D  | 3 × 3            | 3 × 3 × 144
Module E  | 1 × 1 × 24 × 32  | 3 × 3 × 24
Table 2. Details of the three real SAR datasets.

Dataset      | A                   | B                   | C
Sensor       | Radarsat-2          | Radarsat-2          | GaoFen-3
Location     | Yellow River, China | Yellow River, China | Yellow River, China
Band         | C                   | C                   | C
Polarization | VV                  | VV                  | VV
Dates        | 2008.06, 2009.06    | 2008.06, 2009.06    | 2021.07.20, 2021.07.24
Size         | 257 × 289           | 233 × 356           | 300 × 300
Resolution   | 8 m                 | 8 m                 | 5 m
Changes      | Farming             | Flood               | Farming
Table 3. Change recognition results of different methods on the three datasets.

Results on dataset A
Method       | k (%) | LW Area (%) | WL Area (%) | OA (%) | AA (%)
CNN          | 82.80 | 78.90       | 37.58       | 74.97  | 58.24
SqueezeNet   | 72.95 | 65.10       | 41.75       | 62.68  | 53.42
ShuffleNet   | 78.83 | 73.98       | 40.00       | 70.08  | 56.99
MobileNet v2 | 84.02 | 79.79       | 48.73       | 76.72  | 64.26
LMNet        | 87.64 | 84.00       | 49.61       | 81.59  | 66.80

Results on dataset B
Method       | k (%) | LW Area (%) | WL Area (%) | OA (%) | AA (%)
CNN          | 61.31 | 61.59       | 28.22       | 45.93  | 44.90
SqueezeNet   | 66.69 | 58.18       | 38.89       | 51.67  | 48.54
ShuffleNet   | 72.89 | 64.13       | 47.29       | 58.77  | 55.71
MobileNet v2 | 76.52 | 66.41       | 55.45       | 63.22  | 60.93
LMNet        | 81.10 | 71.32       | 63.93       | 69.23  | 67.62

Results on dataset C
Method       | k (%) | LW Area (%) | WL Area (%) | OA (%) | AA (%)
CNN          | 69.39 | 61.36       | 45.59       | 56.01  | 53.47
SqueezeNet   | 66.18 | 70.66       | 31.60       | 52.95  | 51.13
ShuffleNet   | 74.00 | 73.44       | 44.32       | 62.15  | 58.88
MobileNet v2 | 74.94 | 68.24       | 49.93       | 62.53  | 59.09
LMNet        | 78.96 | 71.32       | 59.86       | 67.79  | 65.59
Table 4. Training times and parameter counts of the compared methods.

Methods    | CNN     | SqueezeNet | ShuffleNet | MobileNet v2 | LMNet
Times      | 1.3 min | 7.3 min    | 21.4 min   | 32.2 min     | 5.4 min
Parameters | 39.7 k  | 9.8 M      | 863 k      | 3 M          | 158 k
