Article

Semi-Supervised Subcategory Centroid Alignment-Based Scene Classification for High-Resolution Remote Sensing Images †

by Nan Mo 1 and Ruixi Zhu 2,*
1 School of Geomatics Science and Technology, Nanjing Tech University, Nanjing 211816, China
2 Department of Research, Nanjing Research Institute of Electronic Technology, Nanjing 210039, China
* Author to whom correspondence should be addressed.
This paper is an extended version of our conference paper “Rotation Robust Neighbor-Based Subcategory Centroid Alignment for Cross-Domain Scene Classification of Aerial Images”, which was published in the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2023.
Remote Sens. 2024, 16(19), 3728; https://doi.org/10.3390/rs16193728
Submission received: 7 September 2024 / Revised: 1 October 2024 / Accepted: 2 October 2024 / Published: 7 October 2024
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing II)

Abstract:
It is usually hard to obtain adequate annotated data for delivering satisfactory scene classification results. Semi-supervised scene classification approaches can transfer the knowledge learned from previously annotated data to remote sensing images with scarce samples in order to achieve satisfactory classification results. However, due to differences between sensors, environments, seasons, and geographical locations, cross-domain remote sensing images exhibit feature distribution deviations, so semi-supervised scene classification methods may not achieve satisfactory classification accuracy. To address this problem, a novel semi-supervised subcategory centroid alignment (SSCA)-based scene classification approach is proposed. The SSCA framework is made up of two components, namely the rotation-robust convolutional feature extractor (RCFE) and the neighbor-based subcategory centroid alignment (NSCA). The RCFE aims to suppress the impact of rotation changes on remote sensing image representation, while the NSCA aims to decrease the impact of intra-category variety across domains on cross-domain scene classification. The SSCA algorithm is validated against several competitive approaches on two datasets to demonstrate its effectiveness. The results show that the proposed SSCA approach outperforms most competitive approaches by no less than 2% in overall accuracy.

1. Introduction

In recent years, the successful launch of high-resolution remote sensing satellites has made them a significant data source for land-cover classification. Scene classification can extract high-level semantic information from remote sensing images and has been widely applied to land-cover classification [1,2]. The traditional supervised scene classification methods that have achieved great success usually depend on the availability of abundant samples. However, it is usually hard to obtain adequate annotated data for satisfactory results [3,4]. To solve this problem of insufficient samples, semi-supervised domain adaptation methods have been studied for decades; they can transfer the knowledge learned from previously labeled data to images with limited labeled data [5,6,7]. According to [8], the three categories of semi-supervised domain adaptation approaches used for classifying remote sensing images are as follows:
1. Invariant feature selection methods [9,10,11]. The features that are robust to the domain or spectral shift are derived based on the original features for training a more discriminative classifier. This family of methods cannot perform well on heterogeneous domain adaptation tasks.
2. Classifier adaptation methods [12,13,14]. Here, the classifier trained from previously labeled samples takes into account the target unlabeled samples to adapt the source classifier to the target data. It may not adapt well when the probability distribution bias between different high-resolution remote sensing images is strong.
3. Data distribution adaptation approaches [15,16,17]. This type of method is aimed at making the data from different domains share similar data distributions, which allows for the classifiers obtained from existing labeled data to classify target images with different feature distributions.
The previously labeled data and the remote sensing images to be classified usually have different feature spaces and highly different probability distributions. For these reasons, we mainly study data distribution adaptation approaches. The purpose of studying data distribution adaptation methods is to solve the problem of data distribution deviation between existing labeled samples and the remote sensing images to be classified, which arises from differences in geographical environments, locations, seasons, imaging modalities, etc. [18]. Semi-supervised data distribution adaptation methods explore the hidden relationships between previously labeled data and unlabeled images when limited data are available. Therefore, it is important for them to learn image representations that are insensitive to domain shifts. Dictionary learning approaches, which belong to the data distribution adaptation methods, can provide domain-insensitive sparse representations. Their advantage is that they can represent the high-dimensional information in remote sensing images by a linear combination of multiple visual dictionary features [19]. Dictionary learning methods have demonstrated better domain adaptation performance than some existing semi-supervised methods, including manifold alignment [20], transfer component analysis [21], and class centroid alignment (CCA) [22]. However, several issues still remain that negatively influence the learning of domain-insensitive feature representations.
1. The rotation variance may contribute to the feature distribution bias. Figure 1 shows examples of scene images with rotation variance. The spatial distribution of objects in high-resolution remote sensing images usually has random directions because remote sensing images taken overhead have different shooting angles. Consequently, the rotation robustness should be considered in feature representations.
2. A great data distribution bias exists in instances across domains, exacerbating the severity of high intra-class diversity and increasing the difficulty of classification. The intra-class diversity may be caused by different sensors, locations, and natural environments. Figure 2 shows that the river and airport categories demonstrate very different spectral characteristics. The high intra-class diversity can increase the difficulty of distinguishing cross-domain remote sensing images with similar land-cover types.
To handle rotation variance, existing feature extraction methods are usually based on hand-crafted features or deep learning features. Among them, hand-crafted features are usually integrated with rotation information. The method of [23] used a cyclic shift to generate rotation-invariant local binary pattern (LBP) features. Other representative descriptors include the circular Fourier histogram of oriented gradients, which uses orientation alignment [24], and the rotation-invariant histogram of oriented gradients (HOG) [25], which utilizes the radial gradient transform. Although the above features can perform well under certain circumstances, their performance is limited on high-resolution remote sensing images because hand-crafted features may fail to describe the hidden semantic information well [26]. Deep-learning-based methods incorporate rotation invariance into existing convolutional neural network (CNN) architectures so as to overcome the limitations of hand-crafted features. To obtain rotation robustness, spatial transformer networks [27], transformation-invariant pooling [28], oriented response networks [29], the group-equivariant CNN framework [30], and rotation-equivariant vector field networks [31] have all been proposed. However, existing CNN feature extractors usually only use three-channel RGB images for feature extraction, without considering how the image representation itself could be made more adaptive to the rotation variance of scene images.
Cross-domain data distributions can be aligned by decreasing the differences in means, subspace eigenvectors, correlation coefficients, or covariance matrices between domains. Tuia et al. [20] proposed a manifold alignment approach in which the manifolds of the two domains are matched. Matasci et al. [21] utilized semi-supervised transfer component analysis to bring the means of the two domains closer. Volpi et al. [32] performed feature alignment by maximizing the correlation coefficient between the data of the two domains. Li et al. [33] derived a common kernel space in which the data distributions of two heterogeneous domains are aligned. Sparse representation with reconstruction strategies and methods based on low-rank representations have been proposed to reduce the differences in the target representation [34]. However, the above methods ignore the fact that intra-category variety weakens the benefit of reducing the feature distribution deviation for cross-domain scene classification.
The highlights of this paper are as follows:
  • The proposed RCFE incorporates rotation robustness into the convolutional feature extractor by taking both rotation-invariant HOG images and original images as the input, which can reduce the impact of spectral shift and rotation variance on feature extraction.
  • We propose the NSCA method, which moves the target features toward the relevant subcategories of their source-domain features in order to reduce the deviation between feature distributions across domains.
  • The proposed SSCA framework with the RCFE and NSCA achieves a classification accuracy that is better than that of most existing methods on two testing datasets.
The rest of this article is arranged as follows. The key theory of the proposed SSCA algorithm is described in Section 2. The descriptions of the datasets, the experimental setup, and the experimental results are provided in Section 3. Feature visualization and experiment analysis are provided in Section 4. Finally, we outline our conclusions and potential future research work in Section 5.

2. Materials and Methods

We propose an SSCA framework to classify the land-cover types of scene images with limited samples. Figure 3 depicts the overall flowchart of the proposed SSCA framework.
Step 1. Rotation-invariant HOG images are generated from the original images in the different domains. Different colors in the rotation-invariant HOG images represent different magnitudes.
Step 2. The original images and their corresponding rotation-invariant HOG images are fed into the RCFE to extract rotation-robust convolutional features.
Step 3. The features of the target images are moved toward their corresponding subcategories of source-domain features in the feature space, along directions determined by the proposed NSCA, to obtain optimized convolutional features.
Step 4. An SVM classifier is trained on the previously labeled data and the moved target features, and it predicts each unlabeled target image.

2.1. Generating Rotation-Invariant HOG Images

Rotation-invariant HOG images, which have been successfully applied to object detection in remote sensing images, can help to reduce the negative influence of rotation variance, which may otherwise decrease the ability of convolutional features to distinguish diverse land-cover types. The process of obtaining rotation-invariant HOG images is as follows.
First of all, the Fourier HOG is calculated from the remote sensing scene image $I$. The gradient map $D$ of the image $I$ in the horizontal and vertical directions is calculated according to Equation (1). The Fourier HOG $\hat{F}_m(x)$ is calculated from the gradient map $D(x)$ through Equation (2), where $e^{im\Phi(D(x))}$ is the Fourier basis and $\Phi(D(x))$ is the gradient direction. The Fourier HOG feature map $\hat{F}_m$ is normalized to obtain $\tilde{F}_m$ through Equation (3), where $N$ is the smoothing convolutional kernel.

$$D = \nabla I \quad (1)$$

$$\hat{F}_m(x) = \left\| D(x) \right\| e^{im\Phi(D(x))}, \quad \forall m \quad (2)$$

$$\tilde{F}_m = \hat{F}_m \big/ \left( \left\| D \right\|_2 \ast N \right) \quad (3)$$
Then, the Fourier HOG is used to generate regional features. The regional features $B$ are computed by convolution with the circular harmonic basis functions $U_{p,q}$ through Equation (4). In Equation (5), $P_p(r)$ is the radial function and $q$ is the rotation order of the output function. The convolution between the basis function $U_{p,q}$ and the Fourier HOG $\tilde{F}_m$ yields the feature describing the HOG features in the region covered by $U_{p,q}$.

$$B = U_{p,q} \ast \tilde{F}_m \quad (4)$$

$$U_{p,q} = P_p(r)\, e^{iq\varphi} \quad (5)$$
Finally, the rotation-invariant features are generated from the regional features $B$. The complex-valued features $B$ are separated into real and imaginary parts to generate real-valued rotation-invariant images.
The obtained rotation-invariant HOG image can also reduce the spectral differences between different color spaces to a certain extent. Figure 4 shows original images in different color spaces along with their rotation-invariant HOG images. As shown in Figure 4b,d, rotation-invariant HOG images demonstrate less spectral difference compared with their corresponding original images. Therefore, rotation-invariant HOG images can reduce the spectral shift and rotation variance in some color spaces. In order to incorporate rotation variance into convolutional feature extraction, original images and their corresponding rotation-invariant HOG images are used as input of the RCFE.
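To make the steps above concrete, the following is a minimal NumPy/SciPy sketch that produces real-valued rotation-invariant HOG channels from a grayscale image. The Fourier orders, the Gaussian-ring radial profiles, the smoothing width, and the pairing of the basis order with −m are illustrative assumptions, not the authors' exact settings.

```python
# Minimal sketch of rotation-invariant (Fourier) HOG image generation.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def fourier_hog_images(image, orders=(0, 1, 2), radii=(2, 4, 6), sigma=2.0):
    """Return real-valued rotation-invariant HOG channels for a grayscale image."""
    img = image.astype(np.float64)

    # Eq. (1): gradient map D of the image (vertical/horizontal derivatives).
    dy, dx = np.gradient(img)
    magnitude = np.hypot(dx, dy)
    phase = np.arctan2(dy, dx)                      # gradient direction Phi(D(x))

    channels = []
    for m in orders:
        # Eq. (2): Fourier HOG map F_m(x) = |D(x)| * exp(i * m * Phi(D(x))).
        F_m = magnitude * np.exp(1j * m * phase)

        # Eq. (3): normalize by the smoothed gradient energy (N = Gaussian kernel here).
        norm = np.sqrt(gaussian_filter(magnitude ** 2, sigma)) + 1e-8
        F_m_tilde = F_m / norm

        for r0 in radii:
            # Eq. (5): circular harmonic basis U(r, phi) = P(r) * exp(i*q*phi), with a
            # Gaussian ring as an assumed radial profile and q = -m so that the rotation
            # orders cancel and the regional response becomes rotation-invariant.
            size = int(3 * r0) | 1                  # odd kernel size
            yy, xx = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
            r, phi = np.hypot(xx, yy), np.arctan2(yy, xx)
            U = np.exp(-((r - r0) ** 2) / 2.0) * np.exp(-1j * m * phi)

            # Eq. (4): regional feature B = U * F_m (complex convolution by parts).
            B_real = convolve(F_m_tilde.real, U.real) - convolve(F_m_tilde.imag, U.imag)
            B_imag = convolve(F_m_tilde.real, U.imag) + convolve(F_m_tilde.imag, U.real)

            # Separate real and imaginary parts into real-valued invariant channels.
            channels.extend([B_real, B_imag])

    return np.stack(channels, axis=0)
```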

2.2. Rotation-Robust Convolutional Feature Extractor

The input of existing convolutional neural network feature extractors is usually three-channel spectral data, which does not exploit the rotation-invariant information that may exist in remote sensing images. The proposed RCFE uses three-channel spectral data as well as rotation-invariant images as the input of the CNN model, which helps to reduce the negative impact of the rotation variance of scene images on the convolutional features.
The rotation-robust convolutional feature extractor requires training with original images and rotation-invariant HOG images. However, the initial weights of existing CNN models are usually pre-trained with the ImageNet dataset, which is not conducive to the training and convergence of CNN models. Considering the scale variance in diverse land-cover types, multi-scale images are generated for input images. Three different scales are used in the experiments undertaken in this study, and the proportion between the adjacent scales is set to 0.5. As shown in Figure 5, the proposed method downsamples the image to obtain input images of three different scales including scale level 1, scale level 2, and scale level 3. The scale levels are defined in the order of decreasing resolution. That is to say, scale level 1 is the finest scale, namely, the original image size. Scale level 3 is the coarsest scale. ResNet 101 [35] is used as the backbone for feature extraction. The coarser CNN is fine-tuned on images of scale level 3. Then the finer CNN is initialized with the pre-trained coarser weights and fine-tuned with the finer images. The feature extractor trained from the finest-scale images is used to provide initial features for the NSCA method.
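As a concrete illustration of this design, the sketch below widens a pre-trained ResNet-101 stem to accept RGB plus rotation-invariant HOG channels and fine-tunes it coarse-to-fine over the three scale levels. The number of extra channels, the zero initialization of the new filters, and the training hyperparameters are assumptions for illustration, not the authors' reported configuration.

```python
# Minimal PyTorch sketch of the RCFE idea: widened ResNet-101 stem + coarse-to-fine tuning.
import torch
import torch.nn as nn
from torchvision import models

def build_rcfe(num_classes, hog_channels=6):
    net = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
    # Widen the stem: keep the pre-trained RGB filters, start the extra HOG filters at zero.
    old = net.conv1
    net.conv1 = nn.Conv2d(3 + hog_channels, old.out_channels,
                          kernel_size=old.kernel_size, stride=old.stride,
                          padding=old.padding, bias=False)
    with torch.no_grad():
        net.conv1.weight.zero_()
        net.conv1.weight[:, :3] = old.weight
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

def coarse_to_fine_finetune(net, loaders_by_scale, epochs=5, lr=1e-4):
    """loaders_by_scale: data loaders ordered from scale level 3 (coarsest) to level 1 (finest)."""
    criterion = nn.CrossEntropyLoss()
    for loader in loaders_by_scale:
        # The weights trained at the coarser scale initialize training at the next finer scale.
        optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs):
            for x, y in loader:                  # x: (B, 3 + hog_channels, H, W)
                optimizer.zero_grad()
                loss = criterion(net(x), y)
                loss.backward()
                optimizer.step()
    return net                                   # the finest-scale extractor feeds the NSCA step
```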

2.3. Neighbor-Based Subcategory Centroid Alignment

Because of the feature distribution bias caused by diverse sensors, locations, seasons, or natural environments, the classifier trained on the source labeled data may perform poorly in classifying the target images. Moreover, remote sensing images demonstrate high within-class spectral differences between the source labeled data and the target data, and these within-class spectral differences may aggravate the feature distribution bias. Moving the target features toward their corresponding source classes can help to increase classification accuracy by decreasing the distribution difference between source and target features, because similar feature distributions allow the classifier trained on the source labeled data to classify the target data well. However, how to determine the moving direction of the target features still needs to be investigated.
The difference between the existing CCA and the proposed NSCA method is that the existing CCA method moves target features along the mean of the differences between each neighbor image feature vector and its corresponding class centroid, as shown in Figure 6a. However, a target image will not be moved toward its corresponding source class when it is close to source labeled data that are far from the centroid of their own class. That is because high within-class diversity in some land-cover categories, and source labeled data that are far from the corresponding class centroid but close to another class centroid, may lead to an inaccurate moving direction. Therefore, the NSCA method moves target features in a more accurate direction by replacing the class centroid with a subcategory centroid, as shown in Figure 6b. The new direction is calculated from the differences between each neighbor image feature vector and the center of its predicted subcategory rather than its predicted class. The new direction increases the possibility of finding the corresponding classes for the target images. The difference between the moving directions of the proposed NSCA and the existing CCA is depicted in Figure 6a,b. The details of the NSCA are as follows.
Let $X_s \in \mathbb{R}^{d \times n_s}$ denote the source features extracted by the RCFE, with labels $Y_s \in \mathbb{R}^{1 \times n_s}$, and let $X_t \in \mathbb{R}^{d \times n_t}$ represent the target features extracted by the RCFE, where $d$ is the feature dimension and $n_s$ and $n_t$ are the numbers of source and target images. $X_t^s \in \mathbb{R}^{d \times n_t}$ represents the moved target features. $\Omega = \{\Omega_{11}, \ldots, \Omega_{1k}, \ldots, \Omega_{C1}, \ldots, \Omega_{Ck}\}$ represents the $k \times C$ subcategories clustered from $X_s$, where $k$ is the number of subcategories in each class and $C$ is the number of classes. $Y_t$ denotes the target pseudo subcategories obtained by a classifier of $k \times C$ subcategories trained on $X_s$ and $Y_s$. $\Omega$ and $Y_t$ are used to calculate the moving directions for the NSCA method.
To determine the moving direction $d_{ij}$, each subcategory $\Omega_{ij}$ is represented by its cluster center. The centroid of a target subcategory is calculated as the mean of the target feature vectors whose pseudo subcategory is that subcategory. The domain shift can then be represented by the discrepancy $d_{ij}$ between the centroids of the same subcategory in the two domains, $U_s^{ij}$ and $U_t^{ij}$. The terms $d_{ij}$, $U_s^{ij}$, and $U_t^{ij}$ are given in Equations (6)–(8), respectively.

$$d_{ij} = U_s^{ij} - U_t^{ij}, \quad i \in [1, C],\; j \in [1, k] \quad (6)$$

$$U_s^{ij} = \frac{\sum_{y_s^i \in \Omega_{ij}} x_s^i}{N_s^{ij}} \quad (7)$$

$$U_t^{ij} = \frac{\sum_{y_t^i \in \Omega_{ij}} x_t^i}{N_t^{ij}} \quad (8)$$

where $U_s^{ij}$ and $N_s^{ij}$ represent the mean and the number of the previously labeled data belonging to the $j$-th subcategory of the $i$-th class, respectively, and $U_t^{ij}$ and $N_t^{ij}$ are the mean and the number of target feature vectors predicted as the $j$-th subcategory of the $i$-th class. Each moved target feature then becomes $x_t^s = x_t + d_{ij}$.
The moving direction $d_{ij}$ of a target feature may be inaccurate when the target image is wrongly predicted. Its nearest neighbors may nevertheless be correctly predicted, and the association between target features and their nearest neighbors needs to be kept after moving. Therefore, the optimized direction that considers the nearest neighbors is given in Equation (9):

$$d = \frac{\sum_{j=1}^{C \times k} \sum_{i=1}^{M} \delta(y_i, \Omega_j)\, d_{ij}}{M} \quad (9)$$

where $M$ represents the number of nearest neighbors. The pseudo labels of all neighbors are denoted as $Y_N = \{y_1, \ldots, y_M\}$, and $\delta(y_i, \Omega_j)$ is calculated as in Equation (10):

$$\delta(y_i, \Omega_j) = \begin{cases} 1, & y_i = \Omega_j \\ 0, & \text{otherwise} \end{cases} \quad (10)$$
Algorithm 1 describes the procedure of the NSCA approach as follows.
Algorithm 1 NSCA approach description
1: Input: target features $X_t$, target labels $Y_t$, source features $X_s$, source labels $Y_s$, number of categories $C$, number of nearest neighbors $M$, number of subcategories per category $k$.
2: Output: target features after moving, $X_t^s$.
3: The source features $X_s$ of all categories are divided into $k \times C$ subcategories with k-means, so that there are $k$ subcategories in each category; $\Omega = \{\Omega_{11}, \ldots, \Omega_{1k}, \ldots, \Omega_{C1}, \ldots, \Omega_{Ck}\}$ denotes all subcategories. The source and target images belonging to $\Omega_{ij}$ are assigned the label $\Omega_{ij}$.
4: While the predictions $Y_t^l$ have not converged do
5:  A classifier of $k \times C$ subcategories is trained based on $X_s$, $X_t$, and $\Omega$.
6:  In the first iteration ($l = 1$), the predicted labels $Y_t^l$ for $X_t$ are obtained from the trained classifier.
7:  $U_s$ and $U_t^l$ are estimated based on $\Omega$ and $Y_t^l$.
8:  $d_{ij}$ is calculated for each subcategory $\Omega_{ij}$ based on Equations (6)–(8).
9:  The $M$ nearest neighbors of each target feature are found, and its moving direction $d$ is calculated by Equation (9).
10: Each target feature $x_t^l$ is moved according to $x_t^{s,l} = x_t^l + d$.
11: The moved target features $X_t^{s,l}$ are predicted by the classifier in step 5.
12: The predicted labels are updated for iteration $l + 1$.
13: End while
14: Return $X_t^{s,l}$
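A minimal sketch of Algorithm 1 using NumPy and scikit-learn is given below. The use of k-means for subcategory clustering and an SVM as the subcategory classifier follows the text; the linear kernel, training the subcategory classifier on the source features only, drawing the nearest neighbors from the target features, the convergence test, and the iteration cap are assumptions made for illustration.

```python
# Minimal sketch of the NSCA procedure (Algorithm 1).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

def nsca(Xs, Ys, Xt, C, k=5, M=5, max_iter=20):
    # Step 3: split each source class into k subcategories with k-means.
    sub_labels = np.empty(len(Ys), dtype=int)
    for c in range(C):
        idx = np.where(Ys == c)[0]
        km = KMeans(n_clusters=k, n_init=10).fit(Xs[idx])
        sub_labels[idx] = c * k + km.labels_              # subcategory id in [0, C*k)

    Xts, prev = Xt.copy(), None
    for _ in range(max_iter):
        # Steps 5-6: train a (C*k)-way subcategory classifier and predict target pseudo labels.
        clf = SVC(kernel="linear").fit(Xs, sub_labels)
        Yt = clf.predict(Xts)

        # Steps 7-8: subcategory centroids in both domains and per-subcategory shifts d_ij.
        d = np.zeros((C * k, Xs.shape[1]))
        for s in range(C * k):
            src, tgt = Xs[sub_labels == s], Xts[Yt == s]
            if len(src) and len(tgt):
                d[s] = src.mean(axis=0) - tgt.mean(axis=0)

        # Step 9 / Eqs. (9)-(10): average the shifts associated with the pseudo subcategories
        # of each target feature's M nearest neighbors (neighbors taken among target features).
        nbr = NearestNeighbors(n_neighbors=M).fit(Xts).kneighbors(Xts, return_distance=False)
        d_bar = d[Yt[nbr]].mean(axis=1)

        # Step 10: move each target feature toward the source subcategory centroids.
        Xts = Xts + d_bar

        # Step 4: stop when the pseudo labels no longer change between iterations.
        if prev is not None and np.array_equal(Yt, prev):
            break
        prev = Yt
    return Xts
```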

3. Results

3.1. Dataset Partition and Description

NWPU-RESISC45 [36] and RSI-CB256 [37] are selected as the training (source) datasets for the experiments in this paper; they provide rich image variations and high within-class diversity, with resolutions varying from 0.3 to 3 m. The two datasets are complementary in categories and can cover the category types of the target domain. Twenty percent of the labeled data from UC Merced [38] and SIRI-WHU [39] are used as the validation set for parameter selection, and eighty percent of the unlabeled data from the UC Merced and SIRI-WHU datasets are used as the test set for accuracy evaluation. The validation and test sets should be collected at different times and locations and by different sensors than the source-domain training samples. The image resolutions of UC Merced and SIRI-WHU are 0.3 m and 0.6 m, respectively. Because these two datasets are collected at different times and locations from the training data, UC Merced and SIRI-WHU are selected as the target-domain data. Figure 7 and Figure 8 show examples of the categories in the training and testing datasets, respectively. There are large differences in the spectral and spatial distributions between the source-domain and target-domain samples. Table 1 describes the number of samples per class used in the experiments. Only the common classes existing in both the testing and training datasets are used; ✕ represents no samples in this category.
NWPU-RESISC45 dataset: This dataset originally contains 45 scene categories, of which 21 are selected as training data for this article. The scene images, with a size of 256 × 256 pixels, are all clipped from Google Earth imagery.
RSI-CB256 dataset: This dataset originally contains 35 scene categories, of which 8 are selected as training data for this article. The images also have a size of 256 × 256 pixels, with resolutions ranging from 0.3 to 3 m in the RGB space.
UC Merced dataset: This dataset contains 21 categories, with an image size of 256 × 256 pixels and a resolution of 0.3 m. Figure 8a depicts examples of the 21 categories in the UC Merced dataset.
SIRI-WHU dataset: This dataset is from Montgomery, Ohio, in the USA (latitude 32°22′N, longitude 86°2′E). The original image size is 10,000 × 9000 pixels with a resolution of 0.6 m, and the original image is divided into patches of 256 × 256 pixels. This dataset contains six categories. Figure 8b,c depict the original large image and examples of the six categories in the SIRI-WHU dataset.
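For reference, the sketch below tiles a large image into non-overlapping 256 × 256 patches, as described above for the SIRI-WHU image; dropping incomplete border tiles is an assumption, not necessarily how the authors handled the image border.

```python
# Minimal sketch of splitting a large scene image into non-overlapping patches.
import numpy as np

def tile_image(image, patch=256):
    h, w = image.shape[:2]
    return [image[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

# A 1024 x 768 image yields 4 x 3 = 12 full patches; a 10,000 x 9000 image such as the
# SIRI-WHU scene would yield 39 x 35 = 1365 full patches.
print(len(tile_image(np.zeros((1024, 768, 3), dtype=np.uint8))))
```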

3.2. Experimental Setup

The proposed SSCA approach is compared with several competitive methods to demonstrate its effectiveness. The optimal hyperparameters of the SSCA, including the number of subcategories per category k and the number of nearest neighbors M, are determined on the validation dataset. The sensitivity analysis of these two parameters was performed while fixing the other parameters. ResNet-101 [35] is the CNN model used to extract the initial features. We choose the SVM as the classifier for the proposed method, since the SVM can handle high-dimensional features and nonlinearly separable data by using kernel functions and, in the case of scarce samples, has good robustness and generalization ability.
Five data distribution adaptation methods are compared with the proposed SSCA method to ensure the competitive accuracy of the SSCA framework in data distribution adaptation methods. This family of methods covers existing dictionary learning methods including domain-adaptive dictionary learning (DADL) [40], incremental dictionary learning (IDL) [41], class centroid alignment (CCA), and asymmetric adaptation of deep features (AADF) [42].
Four adversarial domain adaptation methods including semi-supervised center-based discriminative adversarial learning (SCDAL) framework [13], adversarial discriminative domain adaptation (ADDA) [43], conditional adversarial domain adaptation (CADA) [44], and collaborative and adversarial network (CAN) [45] are compared with the SSCA method to show its competitiveness over the adversarial domain adaptation techniques.
In the experiments, the features extracted by the RCFE are utilized for the proposed NSCA, CCA, DADL, and IDL. The roles of the rotation-invariant HOG images and the NSCA are evaluated by ablation studies. The optimal experimental setup of all baseline methods and of the proposed framework is shown in Table 2.
Four different evaluation metrics were calculated to assess the classification performance: the confusion matrix, the accuracy of each category, the overall accuracy, and the kappa coefficient. Among them, the kappa coefficient provides a more reliable consistency measure than simple accuracy by taking the chance agreement of the classification into account; it ranges from −1 to 1, where −1 means completely inconsistent, 0 means a random guess, and 1 means completely consistent.
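The overall accuracy and the kappa coefficient can both be derived from the confusion matrix, as the following small sketch shows (rows are reference labels, columns are predictions).

```python
# Overall accuracy and kappa coefficient computed from a confusion matrix.
import numpy as np

def overall_accuracy_and_kappa(confusion):
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                    # observed agreement = overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Example with a toy 3-class confusion matrix.
print(overall_accuracy_and_kappa([[50, 2, 3], [4, 45, 1], [2, 3, 40]]))
```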

3.3. Comparison Experiment

Table 3 reports the overall accuracy of all the compared approaches described in Section 3.2. The proposed SSCA method performs better than the compared approaches by at least 2% because it decreases the influence of intra-category variety and rotation variance, thereby reducing the distribution bias. SSCA lies between the middle-level feature methods and the high-level feature methods. It takes spectral images and rotation-invariant images as input and uses the neural network trained with coarse-scale images as the initial weights for the finer-scale images to obtain the deep features of the source and target images. The NSCA method is able to narrow the distance between the deep features extracted from the cross-domain images. Therefore, our method can reduce the negative effects of rotation changes, spectral bias, and cross-domain intra-class differences on feature extraction and obtain more robust land-cover classification results.
The SCDAL framework delivers the second-highest classification accuracies since it also brings the source and target images closer in the feature space. The existing dictionary learning methods, including DADL, IDL, CCA, and AADF, deliver poorer classification performance because they ignore the discriminative ability of the learned dictionary, which may lead to confusion between similar land-cover types. CAN and CADA deliver poorer performance because the training and testing datasets have very different feature distributions. CCA, ADDA, and AADF can bring the target features close to the source features, but they do not address the data distribution bias at the level of the image representation.

3.4. Ablation Experiment

Table 4 shows the classification accuracy of the ablation studies on both datasets. The rotation-invariant HOG images play a more important role than using only the original images as input, since they decrease the impact of both spectral shift and rotation variance. The NSCA method is even more significant in increasing accuracy, since it decreases the negative influence of high intra-category variety on the feature representations from different domains.
The classification results in Figure 9a are clipped at three locations. As shown in Figure 9b–e, classification maps are generated for the SIRI-WHU dataset to give an intuitive impression of the land-cover classification results. The NSCA method plays a more important role than the rotation-invariant HOG images because it reduces the feature distribution bias by reducing the impact of intra-class diversity on the adaptation process, and the classifier has a high discrimination ability for the moved target-domain features.
Confusion exists between farmland/forest, freeway/residential, and parking lot/residential, as can be seen in Figure 9f–h. This confusion occurs most in the method without the NSCA and least in the proposed SSCA framework. The rotation-invariant HOG images and the NSCA reduce the data distribution deviation to different degrees, which leads to different land-cover mapping performances.

4. Discussion

4.1. Confusion Analysis

According to the confusion matrices of the SSCA method provided in Figure 10, we can analyze the misclassified categories of the scene classification results. The kappa coefficient is calculated from the confusion matrix; it is 0.976 for the UC Merced dataset and 0.902 for the SIRI-WHU dataset, both of which are close to 1, further confirming that the proposed method has good classification performance. For the UC Merced dataset, confusion exists between medium residential/dense residential, runway/forest, tennis court/intersection, and storage tank/building, as shown in Figure 10a and Figure 11a–d. For the SIRI-WHU dataset, confusion occurs between freeway/parking lot, river/forest, residential/parking lot, and residential/freeway, as shown in Figure 10b and Figure 11e–h. The scenes in Figure 11a,c,e,g share similar backgrounds, including buildings, trees, or soil, while diverse spatial distributions of similar objects, such as buildings or vehicles, may lead to the confusion in Figure 11b,d,f,h.
As shown in Figure 12, the four images are scene examples of building, dense residential, medium residential, and mobile homepark. These images are all composed of buildings, but the types and spatial distributions of the buildings differ. Since the criteria used by the sample annotators to distinguish these four categories are unknown, it is difficult even for humans to distinguish them. The misclassifications of our method mainly occur in situations with small inter-class differences. Due to the lack of guidance from prior knowledge of the spatial distribution, the method in this paper also struggles to accurately distinguish these types of scenes.

4.2. Feature Visualization

Figure 13 shows the feature visualization results before and after performing the NSCA. As shown in Figure 13, the proposed NSCA approach plays an important role in increasing classification accuracy, since it handles overlapping categories well by making the topologies of the target data and source data similar in the feature space.
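Two-dimensional feature plots of this kind are commonly produced with an embedding such as t-SNE; since the text does not state the projection used for Figure 13, the sketch below should be read only as one plausible way to generate comparable visualizations.

```python
# Sketch of a 2-D feature-space visualization for source vs. target features.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_domain_features(Xs, Xt, title):
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(np.vstack([Xs, Xt]))
    plt.scatter(emb[:len(Xs), 0], emb[:len(Xs), 1], s=5, label="source")
    plt.scatter(emb[len(Xs):, 0], emb[len(Xs):, 1], s=5, label="target")
    plt.legend(); plt.title(title); plt.show()

# e.g., plot_domain_features(Xs, Xt, "before NSCA") and plot_domain_features(Xs, Xts, "after NSCA")
```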

4.3. Sensitivity Analysis

When performing semi-supervised domain adaptation, the effects of the nearest neighbor parameter M and the subcategory parameter k on the overall accuracy were studied on the test dataset. The classification accuracy in Figure 14 increases at first before decreasing. For the number of subcategories, too many subcategories may lead to a lower performance in those categories with relatively low within-class diversity. If the number of nearest neighbors is too large, some nearest neighbors may have a negative influence on determining the moving direction of the target image.

5. Conclusions

A semi-supervised subcategory centroid alignment method for cross-domain scene classification, called SSCA, is presented in this paper; it is used to increase the classification performance when limited target labeled data are available. In the SSCA framework, our method introduces the HOG feature map derived from the three-channel image and uses this feature map, which has small spectral differences and rotation invariance, to improve the adaptability of the features to spectral differences and rotation variance. In addition, the NSCA method improves on the CCA method to further increase the model's ability to distinguish different types of objects by decreasing the feature distribution bias.
The experimental results show that the SSCA framework with the RCFE and NSCA outperforms previous representative domain adaptation approaches. The ablation studies of the SSCA framework also show that the rotation-invariant HOG images and the NSCA increase the performance, with overall classification accuracy improvements of 1.2% and 4.1%, respectively. The feature visualization results demonstrate the effectiveness of moving target features toward the corresponding subcategories of the source domain in reducing intra-category variety and feature distribution bias across domains.
However, the SSCA method has limitations in distinguishing scenes composed of similar objects but with different spatial distributions because the prior information of spatial distribution is not incorporated into the SSCA. We will further address this issue in future work.

Author Contributions

Conceptualization, R.Z. and N.M.; methodology, R.Z.; validation, R.Z. and N.M.; writing—original draft preparation, R.Z.; writing—review and editing, N.M.; supervision, R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China 24KJB420004.

Data Availability Statement

All the data used in this paper come from public datasets. The UC Merced dataset is available at http://weegee.vision.ucmerced.edu/datasets/landuse.html (accessed on 1 October 2024). The NWPU-RESISC45 dataset is available at https://gcheng-nwpu.github.io/#Datasets (accessed on 1 October 2024). The RSI-CB 256 dataset is available at https://github.com/lehaifeng/RSI-CB (accessed on 1 October 2024). The SIRI-WHU dataset is available at http://rsidea.whu.edu.cn/resource_sharing.htm (accessed on 1 October 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cheng, G.; Xie, X.; Han, J.; Guo, L.; Xia, G.S. Remote sensing image scene classification meets deep learning: Challenges, methods, benchmarks, and opportunities. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3735–3756. [Google Scholar] [CrossRef]
  2. Adegun, A.A.; Viriri, S.; Tapamo, J.R. Review of deep learning methods for remote sensing satellite images classification: Experimental survey and comparative analysis. J. Big Data 2023, 10, 93. [Google Scholar] [CrossRef]
  3. Zhang, Q.; Yuan, Q.; Song, M.; Yu, H.; Zhang, L. Cooperated spectral low-rankness prior and deep spatial prior for HSI unsupervised denoising. IEEE Trans. Image Process. 2022, 31, 6356–6368. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, Q.; Zheng, Y.; Yuan, Q.; Song, M.; Yu, H.; Xiao, Y. Hyperspectral image denoising: From model-driven, data-driven, to model-data-driven. IEEE Trans. Neural Netw. Learn. Syst. 2023, 6, 1–21. [Google Scholar] [CrossRef] [PubMed]
  5. Thapa, A.; Horanont, T.; Neupane, B.; Aryal, J. Deep learning for remote sensing image scene classification: A review and meta-analysis. Remote Sens. 2023, 15, 4804. [Google Scholar] [CrossRef]
  6. Qiao, H.; Qian, W.; Hu, H.; Huang, X.; Li, J. Semi-Supervised Building Extraction with Optical Flow Correction Based on Satellite Video Data in a Tsunami-Induced Disaster Scene. Sensors 2024, 24, 5205. [Google Scholar] [CrossRef]
  7. Liu, K.; Yang, J.; Li, S. Remote-Sensing Cross-Domain Scene Classification: A Dataset and Benchmark. Remote Sens. 2022, 14, 4635. [Google Scholar] [CrossRef]
  8. Tuia, D.; Persello, C.; Bruzzone, L. Domain Adaptation for the Classification of Remote Sensing Data: An Overview of Recent Advances. IEEE Geosci. Remote Sens. Mag. 2016, 4, 41–57. [Google Scholar] [CrossRef]
  9. Yan, L.; Zhu, R.; Mo, N.; Liu, Y. Cross-Domain Distance Metric Learning Framework with Limited Target Samples for Scene Classification of Aerial Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3840–3857. [Google Scholar] [CrossRef]
  10. Yang, C.; Dong, Y.; Du, B.; Zhang, L. Attention-Based Dynamic Alignment and Dynamic Distribution Adaptation for Remote Sensing Cross-Domain Scene Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5634713. [Google Scholar] [CrossRef]
  11. Li, Y.; Li, Z.; Su, A.; Wang, K.; Wang, Z.; Yu, Q. Semi supervised Cross-Domain Remote Sensing Scene Classification via Category-Level Feature Alignment Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5621614. [Google Scholar]
  12. Bahirat, K.; Bovolo, F.; Bruzzone, L.; Chaudhuri, S. A Novel Domain Adaptation Bayesian Classifier for Updating Land-Cover Maps with Class Differences in Source and Target Domains. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2810–2826. [Google Scholar] [CrossRef]
  13. Wei, H.; Ma, L.; Liu, Y.; Du, Q. Combining Multiple Classifiers for Domain Adaptation of Remote Sensing Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 1832–1847. [Google Scholar] [CrossRef]
  14. Zheng, Z.; Zhong, Y.; Su, Y.; Ma, A. Domain Adaptation via a Task-Specific Classifier Framework for Remote Sensing Cross-Scene Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5620513. [Google Scholar] [CrossRef]
  15. Zhu, R.; Yan, L.; Mo, N.; Liu, Y. Semi-supervised center-based discriminative adversarial learning for cross-domain scene-level land-cover classification of aerial images. ISPRS J. Photogramm. Remote Sens. 2019, 155, 72–89. [Google Scholar] [CrossRef]
  16. Zhao, X.; Zhang, M.; Tao, R.; Li, W.; Liao, W.; Philips, W. Cross-Domain Classification of Multisource Remote Sensing Data Using Fractional Fusion and Spatial-Spectral Domain Adaptation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5721–5733. [Google Scholar] [CrossRef]
  17. Zhu, S.; Wu, C.; Du, B.; Zhang, L. Style and content separation network for remote sensing image cross-scene generalization. ISPRS J. Photogramm. Remote Sens. 2023, 201, 1–11. [Google Scholar] [CrossRef]
  18. Ye, M.; Qian, Y.; Zhou, J.; Yuan, Y. Dictionary Learning-Based Feature-Level Domain Adaptation for Cross-Scene Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1544–1562. [Google Scholar] [CrossRef]
  19. Patel, V.M.; Gopalan, R.; Li, R. Visual Domain Adaptation: An Overview of Recent Advances. IEEE Signal Process. Mag. 2015, 32, 53–69. [Google Scholar] [CrossRef]
  20. Tuia, D.; Volpi, M.; Trolliet, M.; Camps-Valls, G. Semi-supervised Manifold Alignment of Multimodal Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7708–7720. [Google Scholar] [CrossRef]
  21. Matasci, G.; Volpi, M.; Kanevski, M.; Bruzzone, L.; Tuia, D. Semisupervised Transfer Component Analysis for Domain Adaptation in Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3550–3564. [Google Scholar] [CrossRef]
  22. Zhu, L.; Ma, L. Class centroid alignment based domain adaptation for classification of remote sensing images. Pattern Recognit. Lett. 2016, 83, 124–132. [Google Scholar] [CrossRef]
  23. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Gray Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  24. Skibbe, H.; Reisert, M. Circular Fourier-HOG features for rotation invariant object detection in biomedical images. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Barcelona, Spain, 2–5 May 2012; pp. 450–453. [Google Scholar]
  25. Liu, K.; Skibbe, H.; Schmidt, T.; Blein, T.; Palme, K.; Brox, T.; Ronneberger, O. Rotation-Invariant HOG Descriptors Using Fourier Analysis in Polar and Spherical Coordinates. Int. J. Comput. Vis. 2014, 106, 342–364. [Google Scholar] [CrossRef]
  26. Gong, C.; Yang, C.; Yao, X.; Lei, G.; Han, J. When Deep Learning Meets Metric Learning: Remote Sensing Image Scene Classification via Learning Discriminative CNNs. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2811–2821. [Google Scholar] [CrossRef]
  27. Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 2017–2025. [Google Scholar]
  28. Laptev, D.; Savinov, N.; Buhmann, J.M.; Pollefeys, M. TI-Pooling: Transformation-invariant pooling for feature learning in Convolutional Neural Networks. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 289–297. [Google Scholar]
  29. Zhou, Y.; Ye, Q.; Qiang, Q.; Jiao, J. Oriented Response Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4961–4970. [Google Scholar]
  30. Cohen, T.S.; Welling, M. Group Equivariant Convolutional Networks. In Proceedings of the International Conference on Machine Learning, New York City, NY, USA, 19–24 June 2016; pp. 2990–2999. [Google Scholar]
  31. Marcos, D.; Volpi, M.; Kellenberger, B.; Tuia, D. Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models. ISPRS J. Photogramm. Remote Sens. 2018, 145, 96–107. [Google Scholar] [CrossRef]
  32. Volpi, M.; Camps-Valls, G.; Tuia, D. Spectral alignment of multi-temporal cross-sensor images with automated kernel canonical correlation analysis. J. Photogram. Remote Sens. 2015, 107, 50–63. [Google Scholar] [CrossRef]
  33. Li, X.; Zhang, L.; Du, B.; Zhang, L.; Shi, Q. Iterative reweighting heterogeneous transfer learning framework for supervised remote sensing image classification. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2017, 10, 2022–2035. [Google Scholar] [CrossRef]
  34. Sun, H.; Liu, S.; Zhou, S.; Zou, H. Transfer sparse subspace analysis for unsupervised cross-view scene model adaptation. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2016, 9, 2901–2909. [Google Scholar] [CrossRef]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  36. Gong, C.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar]
  37. Li, H.; Dou, X.; Tao, C.; Wu, Z.; Chen, J.; Peng, J.; Deng, M.; Zhao, L. RSI-CB: A Large-Scale Remote Sensing Image Classification Benchmark Using Crowdsourced Data. Sensors 2020, 20, 1594. [Google Scholar] [CrossRef] [PubMed]
  38. Yi, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the Sigspatial International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
  39. Zhong, Y.; Zhu, Q.; Zhang, L. Scene Classification Based on the Multifeature Fusion Probabilistic Topic Model for High Spatial Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6207–6222. [Google Scholar] [CrossRef]
  40. Qiang, Q.; Patel, V.M.; Turaga, P.; Chellappa, R. Domain Adaptive Dictionary Learning. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 631–645. [Google Scholar]
  41. Lu, B.; Chellappa, R.; Nasrabadi, N.M. Incremental Dictionary Learning for Unsupervised Domain Adaptation. In Proceedings of the British Machine Vision Conference, Swansea, UK, 7–10 September 2015; pp. 108.1–108.12. [Google Scholar]
  42. Ammour, N.; Bashmal, L.; Bazi, Y.; Rahhal, M.A.; Zuair, M. Asymmetric Adaptation of Deep Features for Cross-Domain Classification in Remote Sensing Imagery. IEEE Geosci. Remote Sens. Lett. 2018, 15, 597–601. [Google Scholar] [CrossRef]
  43. Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7167–7176. [Google Scholar]
  44. Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Conditional adversarial domain adaptation. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; pp. 1640–1650. [Google Scholar]
  45. Zhang, W.; Ouyang, W.; Li, W.; Xu, D. Collaborative and adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3801–3809. [Google Scholar]
  46. AlRahhal, M.; Bazi, Y.; AlHichri, H.; Alajlan, N.; Melgani, F.; Yager, R.R. Deep learning approach for active classification of electrocardiogram signals. Inf. Sci. 2016, 345, 340–354. [Google Scholar]
  47. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 2016, 17, 1–35. [Google Scholar]
Figure 1. Examples of scene images with rotation variance: (a) airport and (b) residential.
Figure 2. The intra-class diversity caused by different locations, sensors, and environments.
Figure 3. The overall flowchart of the proposed SSCA framework.
Figure 4. Rotation-invariant HOG images in different spectral conditions. (a,c) Original images in different color spaces. (b,d) The rotation-invariant HOG images of (a,c).
Figure 5. The structure of the rotation-robust convolutional feature extractor. Different label colors denote different scene categories.
Figure 6. The difference between the moving directions of the existing CCA method and the proposed NSCA method. (a) The moving direction of CCA. (b) The moving direction of NSCA. (c) The determination of moving direction for (a,b).
Figure 7. Display of scene sample labels for 21 categories in the training dataset.
Figure 8. Display of scene sample labels in the testing dataset. (a) Examples of 21 classes in the UC Merced dataset. (b) Original large image of SIRI-WHU dataset. (c) Examples of 6 classes in the SIRI-WHU dataset.
Figure 9. Visualization of classification results produced by the ablation studies for the SSCA method in the SIRI-WHU dataset when performing semi-supervised domain adaptation. (a) The three clipped patches (A), (B), and (C). (b) The proposed SSCA framework. (c) Without rotation-invariant HOG images but with the original images for feature extraction. (d) Without the NSCA. (e) Ground-truth map. (f) Clipped land-cover maps in location (A). (g) Clipped land-cover maps in location (B). (h) Clipped land-cover maps in location (C).
Figure 10. Confusion matrices of the SSCA method. (a) UC Merced dataset. (b) SIRI-WHU dataset.
Figure 11. Examples of major confusion of two benchmark datasets. (a) Runway and forest. (b) Dense residential and medium residential. (c) Tennis court and intersection. (d) Storage tank and building. (e) Parking lot and freeway. (f) Residential and freeway. (g) River and forest. (h) Parking lot and residential.
Figure 12. Examples of scenes with diverse spatial distributions. (a) Building. (b) Dense residential. (c) Medium residential. (d) Mobile homepark.
Figure 13. Visualization of spatial distribution before and after feature alignment. (a) The unadapted features of UC Merced. (b) The adapted features of UC Merced. (c) The unadapted features of SIRI-WHU. (d) The adapted features of SIRI-WHU.
Figure 14. The parameter analysis of the classification accuracy in the testing data. (a) Number of subcategories. (b) Number of nearest neighbors.
Table 1. Division of experimental datasets for each category.
Class | Training: NWPU-RESISC45 | Training: RSI-CB256 | Validation: UC Merced | Validation: SIRI-WHU | Testing: UC Merced | Testing: SIRI-WHU
airport | 700 | 351 | 20 | ✕ | 80 | ✕
baseball | 700 | ✕ | 20 | ✕ | 80 | ✕
beach | 700 | ✕ | 20 | ✕ | 80 | ✕
buildings | ✕ | 1014 | 20 | ✕ | 80 | ✕
chaparral | 700 | ✕ | 20 | ✕ | 80 | ✕
dense residential | 700 | ✕ | 20 | ✕ | 80 | ✕
farmland | 700 | 644 | 20 | 512 | 80 | 1549
forest | 700 | 1082 | 20 | 286 | 80 | 1148
freeway | 700 | 223 | 20 | 105 | 80 | 420
golf course | 700 | ✕ | 20 | ✕ | 80 | ✕
harbor | 700 | ✕ | 20 | ✕ | 80 | ✕
intersection | 700 | ✕ | 20 | ✕ | 80 | ✕
medium residential | 700 | ✕ | 20 | 271 | 80 | 1084
mobile homepark | 700 | ✕ | 20 | ✕ | 80 | ✕
overpass | 700 | ✕ | 20 | ✕ | 80 | ✕
parking lot | 700 | 467 | 20 | 45 | 80 | 182
river | 700 | 539 | 20 | 13 | 80 | 52
runway | 700 | ✕ | 20 | ✕ | 80 | ✕
sparse | 700 | ✕ | 20 | ✕ | 80 | ✕
storage tank | 700 | 1307 | 20 | ✕ | 80 | ✕
tennis court | 700 | ✕ | 20 | ✕ | 80 | ✕
Table 2. The experimental setup of the proposed method and comparison methods.
Type of Methods | Method | Experimental Parameter Settings
Data distribution adaptation methods | SSCA | k = 25, M = 7 for the UC Merced dataset; k = 15, M = 5 for the SIRI-WHU dataset.
Data distribution adaptation methods | DADL | Sparsity level T = 0.4, tradeoff parameters λ = 0.3 and η = 10, codebook size s = 1300, stopping threshold 0.9.
Data distribution adaptation methods | IDL | Tradeoff parameter λ = 0.05, normalization parameter σ² = 0.05, codebook size s = 1300, number of supportive samples Q = 50.
Data distribution adaptation methods | CCA | Number of nearest neighbors M = 5; the SVM parameters are the same as those in SSCA.
Data distribution adaptation methods | AADF | 256-dimensional features from the DAE network in [46]; dropout 0.5, learning rate 0.1, momentum 0.5, regularization parameter 0.5, batch sizes [100, 80, 60, 40, 20, 10].
Adversarial domain adaptation methods | SCDAL | p = 4, τ = 0.2, β = 0.5, λ = 0.5, M = 250, N = 300, k = 20, m = 0.05.
Adversarial domain adaptation methods | CADA | Batch size 128; learning rate and momentum are the same as in the domain adversarial neural network (DANN) [47].
Adversarial domain adaptation methods | CAN | Initial learning rate 0.0015, decreased gradually after each iteration as in DANN; weight decay 3 × 10⁻⁴, momentum 0.9, batch size 128.
Adversarial domain adaptation methods | ADDA | Batch size 128, maximum iterations 20,000, learning rate 1 × 10⁻⁴.
Table 3. Comparison with previous methods in two datasets, UC Merced and SIRI-WHU.
Method | UC Merced | SIRI-WHU
The proposed SSCA | 0.9314 | 0.9177
SCDAL | 0.9118 | 0.8958
ADDA | 0.8723 | 0.8617
CADA | 0.8938 | 0.8850
CAN | 0.8972 | 0.8756
DADL | 0.8670 | 0.8425
IDL | 0.8625 | 0.8541
CCA | 0.8528 | 0.8478
AADF | 0.8981 | 0.8730
Table 4. Ablation studies of the proposed method of UC Merced and SIRI-WHU datasets.
Method | UC Merced | SIRI-WHU
The proposed SSCA framework | 0.9314 | 0.9177
Without rotation-invariant HOG | 0.9119 | 0.9043
Without the NSCA method | 0.8933 | 0.8748
