Article

Class-Aware Self- and Cross-Attention Network for Few-Shot Semantic Segmentation of Remote Sensing Images

Guozhen Liang, Fengxi Xie and Ying-Ren Chien
1 Department of Electrical Engineering and Computer Science, Technische Universität Berlin, 10623 Berlin, Germany
2 Department of Electrical Engineering, National Ilan University, Yilan 260007, Taiwan
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(17), 2761; https://doi.org/10.3390/math12172761
Submission received: 7 August 2024 / Revised: 29 August 2024 / Accepted: 5 September 2024 / Published: 6 September 2024

Abstract
Few-Shot Semantic Segmentation (FSS) has drawn massive attention recently due to its remarkable ability to segment novel-class objects given only a handful of support samples. However, current FSS methods mainly focus on natural images and pay little attention to more practical and challenging scenarios, e.g., remote sensing image segmentation. In the field of remote sensing image analysis, the characteristics of remote sensing images, such as complex backgrounds and tiny foreground objects, make novel-class segmentation challenging. To cope with these obstacles, we propose a Class-Aware Self- and Cross-Attention Network (CSCANet) for FSS in remote sensing imagery, consisting of a lightweight self-attention module and a supervised prior-guided cross-attention module. Concretely, the self-attention module abstracts robust unseen-class information from support features, while the cross-attention module generates a high-quality query attention map for directing the network to focus on novel objects. Experiments demonstrate that our CSCANet achieves outstanding performance on the standard remote sensing FSS benchmark iSAID-5i, surpassing the existing state-of-the-art FSS models across nearly all combinations of backbone networks and K-shot settings.

1. Introduction

Remote sensing image analysis has greatly contributed to academic research, industrial development, and public affairs management, as remote sensing images are rich in geographical information [1,2,3]. In the context of remote sensing image analysis, semantic segmentation aims to assign predefined geospatial categories to the images at pixel level [4]. The emergence of convolutional neural networks (CNNs) has significantly advanced the development of semantic segmentation [5,6,7,8]. However, the remarkable performance of these CNN-based models relies heavily on large datasets. In addition, traditional semantic segmentation models struggle to generalize to classes that are absent from the training dataset.
To deal with these problems, Few-Shot Semantic Segmentation (FSS) has been developed. This technique enables deep models to segment novel-class objects with only scarce support examples and has been proven effective in low-data scenarios [9]. FSS was first formulated by Shaban et al. [9]. Afterward, many researchers proposed their own insights and pushed the performance of FSS to new heights. Zhang et al. [10] incorporated an attention module and an iterative optimization method into FSS, where the support information is successfully merged and the segmentation results are refined recursively. Lang et al. [11] proposed a base learner and an ensemble module to suppress the false-positive predictions caused by the similarities between base classes and novel classes. Despite impressive results, these methods mainly focused on the segmentation of natural images, and few works investigated real-world scenarios [12,13,14]. The images in these application scenarios have special properties and pose great challenges to the segmentation task. For instance, remote sensing images, which are investigated in this paper, exhibit greater foreground–background class similarity and contain more tiny objects than natural images. As can be observed in the first row of Figure 1, the target classes ship, ground track field and harbor closely resemble the background classes harbor, grassland and river bank, respectively. In addition, there is usually more than one target object to be segmented in an image, and in some circumstances they are too tiny to identify (as shown in the second row of Figure 1). These unique characteristics inevitably lead to unsatisfactory predictions from existing FSS frameworks (e.g., false activation and coarse boundaries).
Furthermore, prevalent FSS approaches are mostly built on metric learning, which can be divided into affinity learning [15,16,17] and prototype learning [18,19,20,21]. Affinity-learning-based methods usually establish pixel-level support–query correspondences, which are then aggregated into the query prediction. However, these methods fail to fully exploit the semantic information in the extracted features, which results in imperfect predictions.
In contrast, prototypical FSS approaches leverage one or two rich semantic class-wise prototypes to construct prototype–query connections for query segmentation. For instance, SG-One [10] applied masked average pooling (MAP) over support features to generate the class representative prototype vectors, against which the query feature is matched by the cosine similarity metric to yield query segmentation. More recently, researchers have striven to elevate the performance of the prototypical FSS paradigm by obtaining more guidance from class-wise prototypes such as PPNet [22], PFENet [20], ASGNet [21] and SD-AANet [17]. However, depending solely on scarce compressed prototypes is bound to incur information loss, making it difficult to deal with challenging scenarios in remote sensing image segmentation.
To cope with the aforementioned problems, we propose a Class-Aware Self- and Cross-Attention Network (CSCANet) for the FSS of remote sensing images. The proposed CSCANet consists of a self-attention module (SAM) and a prior-guided supervised cross-attention module (PG-CAM). Firstly, a CBAM [23]-like self-attention module is designed to exploit unseen-class information from support images. Specifically, we incorporate a weighted max pooling branch to extract robust, discriminative novel-class features. Secondly, a prior-guided supervised cross-attention mechanism is proposed to direct our CSCANet to concentrate on the unseen classes in the query set. In detail, we first generate the prior similarity mask by measuring the cosine similarity between the intermediate-level support and query features. The prior similarity mask and support masks, along with the support and query features, are fed into the cross-attention module to yield a high-quality affinity attention map.
In summary, the contributions of our work include the following:
  • We devise an efficient self-attention module, which makes use of support features and the corresponding ground-truth mask to mine the unseen-class information distinct from the background classes.
  • We propose a prior-guided supervised cross-attention module to generate a high-quality query attention map. The query attention map can outline the tiny objects in images, which enhances the network’s ability to segment tiny targets.
  • The CSCANet outperforms the existing FSS methods across almost all the combinations of backbone networks (VGG-16, ResNet-50) and few-shot settings (one-shot and five-shot) on the standard remote sensing benchmark iSAID-5i.

2. Related Work

2.1. Semantic Segmentation

Semantic segmentation stands as a foundational computer vision task with the primary goal of accomplishing pixel-level classification in images, categorizing each pixel into annotated semantic categories. Benefiting from the emergence of fully convolutional networks (FCNs) [5], significant advancements in this field have been achieved. For example, Unet [24] adopted an encoder–decoder-like architecture to generate the predicted mask in a symmetric manner. Later on, PSPNet [25] incorporated a pyramid pooling module to enhance the robustness of image features. In addition, an attention mechanism was also employed to direct the network to focus on the foreground regions [26]. Although traditional segmentation models have achieved impressive performance, they face a challenge in effectively adapting to novel-class objects as they heavily depend on a substantial number of annotated samples, hindering their practical applications to some extent.

2.2. Few-Shot Learning

Few-shot learning (FSL) aims to train models with scarce labeled examples, promoting the generalization ability of deep networks in scenarios with limited data. Most of the prevalent FSL approaches are implemented within the meta-learning paradigm [27], which has three sub-divisions: metric-based [28,29,30], optimization-based [31,32,33] and augmentation-based [34]. Our work is built upon the metric-based approaches, where distance metrics (e.g., cosine distance, Euclidean distance) are leveraged to measure the support–query similarities.

2.3. Few-Shot Semantic Segmentation

Few-Shot Semantic Segmentation (FSS) has gained massive attention as an extension of FSL. FSS aims to adapt deep networks to predict pixel-to-pixel correspondence between support–query image pairs. This technique facilitates unseen-class segmentation, making it a promising solution for challenges in low-data regimes. The problem of FSS was initially formulated by Shaban et al. [9]. They proposed OSLSM to make query predictions using a classifier trained on the support branch. After that, Zhang et al. [10] proposed the first end-to-end prototypical FSS framework, which has become the paradigm in the field of FSS. ASGNet [21] adaptively extracted multiple prototypes according to the feature similarity and allocated them in the prototype–query matching based on an attention-like algorithm. Lang et al. [11] proposed a novel FSS paradigm where an auxiliary base learner was leveraged to explicitly identify confusing target regions that are similar to the base-class objects.
However, existing prevalent methods are mainly designed for natural image segmentation and fail to consider the tricky properties of remote sensing images. Wang et al. [14] proposed a metametric-based FSS framework for few-shot geographical image segmentation, where a feature comparison sub-branch and affinity-based feature aggregation were introduced to improve the predictions. Lang et al. [35] designed a few-shot remote sensing image segmentation framework, in which the proposed global rectification and decoupled registration mechanisms address inter-class similarity and intra-class diversity to some extent. Nevertheless, these approaches did not thoroughly solve the aforementioned complicated cases in remote sensing image segmentation. Therefore, we propose a lightweight self-attention module and a supervised cross-attention module to solve these problems and push the performance to a new level.

3. Methodology

In this section, we first introduce the problem setting in Section 3.1. The overall architecture of our CSCANet is mentioned in Section 3.2. Then, in Section 3.3 and Section 3.4, we describe our lightweight self-attention block and prior-guided supervised cross-attention block in detail, respectively. Section 3.5 is about the ASPP module and classifier. Finally, we briefly introduce the K-shot setting of our proposed method in Section 3.6.

3.1. Problem Definition

The goal of Few-Shot Semantic Segmentation is to segment novel-class targets with merely a few annotated exemplars. The training of FSS models is usually performed within the meta-learning paradigm, also known as episodic training [36]. To ensure a reliable generalization ability, the model training and testing phases are performed separately on two subsets, $D_{train}$ (with sufficient base classes) and $D_{test}$ (with scarce unseen classes), whose class sets do not overlap. Both subsets contain a series of episodes. Each episode includes a support set $S = \{(I_s^i, M_s^i)\}_{i=1}^{K}$ with $K$ samples and a query set $Q = \{I_q, M_q\}$, where $I_*$ denotes a raw image and $M_*$ the corresponding ground-truth mask. In each training episode, a support set $S$ and a query image $I_q$ are input to the model, with each query prediction supervised by its corresponding ground-truth mask. During each episode of the testing stage, the model is evaluated on $D_{test}$ to assess its performance.
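To make the episodic protocol concrete, the following minimal sketch shows how a 1-way K-shot episode could be assembled from a class-indexed pool of (image, mask) pairs. The helper name sample_episode and the data layout are illustrative assumptions, not the paper's actual iSAID-5i data pipeline.

```python
import random
from typing import Dict, List, Tuple

import torch

def sample_episode(
    pool: Dict[int, List[Tuple[torch.Tensor, torch.Tensor]]],  # class id -> list of (image, binary mask)
    k_shot: int = 1,
):
    """Draw one episode: K support (image, mask) pairs and one query pair of the same novel class."""
    cls = random.choice(list(pool.keys()))
    samples = random.sample(pool[cls], k_shot + 1)
    support = samples[:k_shot]   # S = {(I_s^i, M_s^i)}_{i=1..K}
    query = samples[k_shot]      # Q = (I_q, M_q); M_q is used for supervision during training and for evaluation
    return support, query
```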

3.2. Overall Framework

Figure 2 depicts the overall architecture of our CSCANet under the 1-shot setting. Initially, a pre-trained backbone network is utilized to extract support and query features from the input image sets. The support features $F_s^{2}$ of block2 and $F_s^{3}$ of block3 are concatenated and then processed by a $1 \times 1$ convolution to generate the intermediate-level support feature $F_s^{23}$:
$F_s^{23} = \mathrm{Conv}_{1\times 1}\left( F_s^{2} \,\circledcirc\, F_s^{3} \right),$
where $\circledcirc$ represents the concatenation operation. Thereafter, the support prototype $V_s$ can be calculated as follows:
$F_{masked}^{23} = F_s^{23} \odot \zeta(M_s),$
$V_s = \mathcal{F}_{avg\_pool}\left( F_{masked}^{23} \right),$
Here, $\odot$ denotes element-wise multiplication, and $\zeta(\cdot)$ is the bilinear interpolation function that resizes the mask from $\mathbb{R}^{H\times W}$ to the spatial resolution of the feature map $F_s^{23} \in \mathbb{R}^{c\times h\times w}$. $\mathcal{F}_{avg\_pool}$ represents the average pooling operation. In the self-attention module, the support feature $F_s^{23}$ and its corresponding support mask are utilized to yield the support attention feature map $A_s$. Thereafter, the support and query features, as well as the prototype vector, are fed into the cross-attention module to yield a query attention map. Subsequently, the support attention feature map, the query attention map and the prototype vector, along with the query feature, are input to a dilated ASPP module for feature refinement. The enriched feature is processed by the classifier, where $3 \times 3$ and $1 \times 1$ convolutions are applied to generate the query prediction.
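As an illustration of the prototype computation above, a minimal PyTorch-style sketch of masked average pooling is given below. Normalizing by the foreground area is a common MAP convention and is assumed here; the exact normalization inside $\mathcal{F}_{avg\_pool}$ is not spelled out in the text.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Compute a class prototype V_s from a support feature map and its binary support mask.

    feat: (B, C, h, w) intermediate support feature F_s^23
    mask: (B, 1, H, W) binary ground-truth support mask M_s
    """
    # zeta(.): resize the mask to the spatial size of the feature map
    mask = F.interpolate(mask.float(), size=feat.shape[-2:], mode="bilinear", align_corners=True)
    masked = feat * mask                                              # F_masked^23 = F_s^23 (elementwise) zeta(M_s)
    # average over foreground locations only (eps avoids division by zero for empty masks)
    proto = masked.sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)    # (B, C)
    return proto
```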

3.3. Self-Attention Module

In the context of the limited cues provided by the support prototypes, we propose an efficient self-attention module to exploit novel-class cues from the scarce support images, which guides the network to concentrate on the unseen-class objects and avoid false activation. As shown in Figure 3, we first generate the pooling vector as follows:
$V_{pool} = \mathcal{F}_{avg\_pool}\left( F_{masked}^{23} \right) \oplus \alpha \cdot \mathcal{F}_{max\_pool}\left( F_{masked}^{23} \right),$
Here, $\mathcal{F}_{max\_pool}$ denotes the max pooling operation, and $\oplus$ represents element-wise addition. The average pooling operation is employed to extract the global general features of the novel-class objects, while the max pooling operation is applied to abstract the local discriminative unseen-class features. However, we notice that directly incorporating the max pooling branch results in a non-uniform feature representation of the novel classes. Therefore, we adopt a learnable parameter $\alpha$ to weight the max pooling branch and mitigate this side effect. We set the initial value of $\alpha$ to 1. Subsequently, the attention vector can be derived as follows:
$V_a = \sigma\left( \mathrm{Conv}_N\left( V_{pool} \right) \right),$
where $\mathrm{Conv}_N$ refers to a series of convolutional layers and $\sigma$ denotes the Sigmoid activation function.
Finally, a foreground-focused support attention map is generated as follows:
$A_s = F_s^{23} \odot V_a.$
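A compact sketch of how the SAM equations above could be realized is shown below; the two-layer $1 \times 1$ convolution stack standing in for $\mathrm{Conv}_N$ and the hidden width are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionModule(nn.Module):
    """Weighted average/max pooling of the masked support feature produces a channel
    attention vector V_a, which re-weights F_s^23 into the support attention map A_s."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))      # learnable weight of the max pooling branch, initialized to 1
        self.conv = nn.Sequential(                    # Conv_N: a small stack of convolutional layers (assumed form)
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, feat_s23: torch.Tensor, masked_feat: torch.Tensor) -> torch.Tensor:
        avg = masked_feat.mean(dim=(2, 3), keepdim=True)   # F_avg_pool(F_masked^23)
        mx = masked_feat.amax(dim=(2, 3), keepdim=True)    # F_max_pool(F_masked^23)
        v_pool = avg + self.alpha * mx                     # V_pool = avg + alpha * max
        v_a = torch.sigmoid(self.conv(v_pool))             # V_a = sigma(Conv_N(V_pool))
        return feat_s23 * v_a                              # A_s = F_s^23 re-weighted by V_a
```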

3.4. Prior-Guided Supervised Cross-Attention Module

A high-quality query attention map is an important hint for accurate novel-class segmentation. We propose a prior-guided supervised cross-attention block to generate such an attention map, which is capable of accurately capturing the query targets regardless of their sizes. PFENet [20] introduced a similar attention mechanism, where the cosine similarity between the deepest support and query features is calculated to generate a query attention map. However, the backbone network adopted to extract the image features is pre-trained on ImageNet [37] for classification tasks, which can be sub-optimal for FSS. In contrast, we treat the cosine similarity map as a prior and adopt the pyramid pooling module (PPM) [25] as the feature extractor, which is trained in a standard supervised manner. The architecture of the proposed PG-CAM is visualized in Figure 4.
In detail, the cosine similarity between the query feature $F_q^{3}$ and the support prototypes $V_s$ is calculated to generate the prior similarity mask $M_{crs}$, which serves as an important clue for locating the target regions:
$M_{crs}(x,y) = \arg\max_{k} \dfrac{\exp\left( \gamma\, \phi\!\left( F_q^{3}(x,y),\, V_s^{k} \right) \right)}{\sum_{V_s^{k} \in V_s^{all}} \exp\left( \gamma\, \phi\!\left( F_q^{3}(x,y),\, V_s^{k} \right) \right)},$
where $\phi(\cdot,\cdot)$ denotes the cosine similarity, $x \in \{1, \ldots, h\}$, $y \in \{1, \ldots, w\}$, $k \in \{1, \ldots, N\}$, and we set $\gamma$ to 10 in all experiments.
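The prior mask computation can be sketched as follows. The arg max in the equation is read here as keeping, at every pixel, the strongest softmax-normalized prototype response; tensor shapes and the helper name are assumptions.

```python
import torch
import torch.nn.functional as F

def prior_similarity_mask(query_feat: torch.Tensor, prototypes: torch.Tensor, gamma: float = 10.0) -> torch.Tensor:
    """Prior mask M_crs from cosine similarity between F_q^3 and the support prototypes V_s^k.

    query_feat: (B, C, h, w); prototypes: (B, N, C). Returns a (B, 1, h, w) prior map.
    """
    b, c, h, w = query_feat.shape
    q = F.normalize(query_feat.flatten(2), dim=1)    # (B, C, h*w), unit-norm along channels
    p = F.normalize(prototypes, dim=2)               # (B, N, C), unit-norm prototypes
    cos = torch.bmm(p, q)                            # (B, N, h*w) cosine similarity phi(., .)
    weights = torch.softmax(gamma * cos, dim=1)      # normalize over the N prototypes (gamma = 10)
    prior = weights.max(dim=1, keepdim=True).values  # keep the strongest prototype response per pixel
    return prior.view(b, 1, h, w)
```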
For the support branch, we first concatenate the support prototype, the support feature $F_s^{23}$ and the prior similarity mask $M_{crs}$ and pass them through the PPM-based encoder $\mathcal{D}_e$. Subsequently, a $1 \times 1$ convolution is used to generate the support prediction $P_s$ with two output channels:
$P_s = \mathrm{Conv}_{1\times 1}\left( \mathcal{D}_e\left( F_s^{23} \,\circledcirc\, V_s \,\circledcirc\, M_{crs} \right) \right),$
Thereafter, the ground-truth support mask is applied to supervise the training of the proposed cross-attention module:
$\mathcal{L}_{ce,s} = -\sum_{x=1}^{h} \sum_{y=1}^{w} M_s(x,y) \cdot \log P_s(x,y),$
where $\mathcal{L}_{ce,s}$ represents the cross-entropy loss for the support prediction, and $M_s(x,y)$ and $P_s(x,y)$ denote the values of the support ground truth and the support prediction at location $(x,y)$, respectively.
The same operation as in the support branch is applied to predict the affinity attention map, except that the output of the $1 \times 1$ convolution is a binary attention mask:
$M_{attn} = \mathrm{Conv}_{1\times 1}\left( \mathcal{D}_e\left( F_q^{23} \,\circledcirc\, V_s \,\circledcirc\, M_{crs} \right) \right).$
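The sketch below outlines how the two PG-CAM branches and the support-side supervision could be wired together. A plain convolution stack stands in for the PPM-based encoder $\mathcal{D}_e$, the query head is taken to be single-channel with a Sigmoid, and all layer widths are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PGCAMSketch(nn.Module):
    """Support/query features are concatenated with the expanded prototype and the prior mask,
    encoded, and mapped by 1x1 convolutions to the support prediction P_s and the query
    attention map M_attn; the support branch is supervised with cross-entropy (L_ce,s)."""

    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        in_dim = feat_dim * 2 + 1                     # feature (c) + expanded prototype (c) + prior mask (1)
        self.encoder = nn.Sequential(                 # stand-in for the PPM-based encoder D_e
            nn.Conv2d(in_dim, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.support_head = nn.Conv2d(hidden, 2, 1)   # P_s: two-channel support prediction
        self.query_head = nn.Conv2d(hidden, 1, 1)     # M_attn: query affinity attention map

    def _merge(self, feat, proto, prior):
        proto = proto[..., None, None].expand(-1, -1, *feat.shape[-2:])
        return torch.cat([feat, proto, prior], dim=1)

    def forward(self, feat_s, feat_q, proto, prior, support_mask=None):
        p_s = self.support_head(self.encoder(self._merge(feat_s, proto, prior)))
        m_attn = torch.sigmoid(self.query_head(self.encoder(self._merge(feat_q, proto, prior))))
        loss_ce = None
        if support_mask is not None:                  # L_ce,s: supervise the support branch with its ground truth
            gt = F.interpolate(support_mask.float(), size=p_s.shape[-2:], mode="nearest").squeeze(1).long()
            loss_ce = F.cross_entropy(p_s, gt)
        return m_attn, p_s, loss_ce
```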

3.5. Classifier

The obtained support attention feature map $A_s$ and the query affinity attention map $M_{attn}$ are concatenated along with the support prototype $V_s$ and the query feature $F_q^{23}$. A dilated version of the ASPP module is introduced to merge and enrich these concatenated features. Finally, we obtain the mask prediction $P \in \mathbb{R}^{2\times h\times w}$ through
$F_q^{23} = \mathrm{Conv}_{1\times 1}\left( F_q^{2} \,\circledcirc\, F_q^{3} \right),$
$F_{merged} = \mathcal{F}_{guidance}\left( M_{attn}, A_s, V_s, F_q^{23} \right),$
$P = \mathrm{Softmax}\left( \mathcal{D}_m\left( F_{merged} \right) \right),$
where $\mathcal{F}_{guidance}$ denotes the combination of the concatenation and expansion operations, and $\mathcal{D}_m$ consists of the ASPP module, convolutional layers and the classifier.
Finally, the binary cross-entropy (BCE) loss between $M_q^{(j)}$ and $P^{(j)}$ is employed to supervise the training of the meta learner:
$\mathcal{L}_m = \frac{1}{n_{ep}} \sum_{j=1}^{n_{ep}} \mathrm{BCE}\left( M_q^{(j)}, P^{(j)} \right),$
where $n_{ep}$ represents the number of training episodes in each batch.
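To indicate how the guidance concatenation $\mathcal{F}_{guidance}$ and the episode-averaged BCE loss $\mathcal{L}_m$ might look in code, a hedged sketch follows; the ASPP-based decoder $\mathcal{D}_m$ is omitted, and shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def merge_guidance(m_attn, a_s, v_s, f_q23):
    """F_guidance: expand the prototype spatially and concatenate all guidance cues with the
    query feature; the result F_merged is then fed to the ASPP-based decoder D_m (not shown)."""
    v_map = v_s[..., None, None].expand(-1, -1, *f_q23.shape[-2:])
    return torch.cat([m_attn, a_s, v_map, f_q23], dim=1)

def meta_loss(pred_logits, query_masks):
    """L_m: episode-averaged binary cross-entropy on the foreground probability of the query prediction.

    pred_logits: (n_ep, 2, h, w) pre-softmax scores; query_masks: (n_ep, h, w) with values in {0, 1}.
    """
    fg_prob = torch.softmax(pred_logits, dim=1)[:, 1]
    return F.binary_cross_entropy(fg_prob, query_masks.float())
```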

3.6. K-Shot Setting

In K-shot (K > 1) segmentation, K support sets are available. For the self-attention mechanism, we directly average over the K support samples to generate the support attention map. For the query affinity attention map prediction, the K support features are fed into the cross-attention module separately, with each support prediction supervised by its own label. Then, we average the K losses as follows:
$\mathcal{L}_{ce,s} = \frac{1}{K} \sum_{i=1}^{K} \mathcal{L}_{ce,s}^{i},$
where $\mathcal{L}_{ce,s}^{i}$ denotes the cross-entropy loss of the $i$-th support image.
Finally, the K generated support attention feature maps $A_s$ and support prototypes $V_s$ are averaged. Then, the averaged $A_s$ and $V_s$, concatenated with $F_q^{23}$ and $M_{attn}$, are passed through the ASPP module to obtain the prediction.
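A minimal sketch of this K-shot aggregation rule (assuming lists of per-shot tensors of matching shapes):

```python
import torch

def aggregate_k_shot(a_s_list, v_s_list, loss_list):
    """Average the K support attention maps, the K support prototypes and the K per-shot
    cross-entropy losses produced by running each support sample through SAM and PG-CAM."""
    a_s = torch.stack(a_s_list).mean(dim=0)
    v_s = torch.stack(v_s_list).mean(dim=0)
    loss_ce = torch.stack(loss_list).mean()
    return a_s, v_s, loss_ce
```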

4. Experiments

4.1. Experimental Setup

Dataset. We assess the effectiveness of our approach on the standard remote sensing benchmark dataset iSAID-5i [38], which is generated from 2806 high-resolution images. This publicly available aerial image dataset includes 655,451 object instances from 15 geospatial categories. We employ a cross-validation strategy for our experiments, dividing the dataset into three evenly distributed folds, where one fold is used for meta testing and the remaining folds are adopted for meta training. We randomly select 1000 support–query image pairs for validation in each training episode. As shown in Table 1, we select the unseen classes in each fold following the experimental settings of [13,35], in which the determination of the categories is based on the original sequence of the label dictionary [38].
Evaluation Metrics. Consistent with previous studies [11,22,39], we employ the mean intersection over union (MIoU) for performance assessment. In addition, foreground–background IoU (FB-IoU) is also adopted as the evaluation metric.
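For reference, a small sketch of the two metrics on binary masks is given below; the exact accumulation protocol (per-class IoU accumulated over all test episodes, then averaged over the novel classes of the held-out fold) follows common FSS practice and is an assumption here.

```python
import numpy as np

def binary_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between a binary prediction and a binary ground-truth mask."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def fb_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """FB-IoU: mean of the foreground IoU and the background IoU of one prediction."""
    return 0.5 * (binary_iou(pred == 1, gt == 1) + binary_iou(pred == 0, gt == 0))
```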
Implementation Details. In order to enhance the network’s generalization ability, most existing FSS approaches use a backbone network pre-trained on the large natural image dataset ImageNet [37], the parameters of which are frozen in the meta training phase. Such a backbone cannot perfectly adapt to remote sensing image segmentation due to the unignorable domain shift. Hence, we train a more suitable backbone network on iSAID-5i within the standard supervised learning paradigm. The backbone network is initialized with the parameters pre-trained on ImageNet [37]. We set the learning rate, number of training epochs and batch size to 1.25 × 10⁻³, 50 and 16, respectively.
For the meta training, we adopt the episodic training strategy [11,36]. Specifically, we train the CSCANet using the SGD optimizer for 12 epochs, with the learning rate and batch size configured as 5 × 10⁻² and 8, respectively. We adopt a data augmentation strategy similar to [35]. All experiments are conducted in PyTorch [40] on four NVIDIA Tesla T4 GPUs.
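A hedged sketch of the corresponding meta-training optimizer setup is shown below; the momentum and weight decay values are assumptions, as only the optimizer type, learning rate, epochs and batch size are stated above.

```python
import torch
import torch.nn as nn

def build_meta_optimizer(model: nn.Module) -> torch.optim.Optimizer:
    """SGD with the reported learning rate of 5e-2 (the 12 epochs and batch size of 8 are set in the training loop)."""
    return torch.optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-4)
```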
For a fair comparison, we run the released source code of the selected prevalent FSS approaches, except that we adopt the same retrained backbone network for training. Additionally, we use the same training hyper-parameters as in our CSCANet.

4.2. Visualization Analysis

Visualization of segmentation results. We visualize some representative predicted masks generated by our CSCANet in Figure 5. The first two rows depict examples of support images (blue) and query images (green). The last two rows show the samples of baseline predictions and the results of CSCANet, respectively. It can be seen in all the examples that the proposed CSCANet is able to effectively reduce false activation. The last five columns show that the proposed method is capable of segmenting the multiple tiny query targets more precisely and completely than the baseline. The predicted masks are almost identical to the corresponding labels.
Visualization of query affinity attention map. To investigate the quality of the query attention maps generated by PG-CAM, we plot some representative attention maps in Figure 6. Given the support image(s) (the 1st row) and the query image (the 2nd row), the cross-attention module is able to effectively capture the query targets regardless of their sizes and quantities.

4.3. Comparison with State of the Art

We compare the performance of CSCANet against other state-of-the-art FSS approaches. Table 2 reports the performance of different approaches on iSAID-5i in terms of MIoU and FB-IoU. The results indicate that our CSCANet outperforms all SOTA methods across almost all combinations of backbone network (VGG-16 and ResNet-50) and few-shot settings (1-shot and 5-shot), except in the case of the VGG-16 backbone under the 1-shot setting. For the ResNet-50 backbone, we achieve 1.61% mIoU (1-shot) and 2.04% mIoU (5-shot) performance improvements over the best competitor, R2Net. Remarkably, CSCANet surpasses the second-best approach under the 5-shot setting by 2.12% mIoU on average over both backbones. Additionally, we also list the model complexity and inference speed in Table 3. It can be observed that our proposed method reaches a superior balance between performance and efficiency.
In addition, we also list the class-wise results in Table 4. It is noteworthy that, with the ResNet-50 backbone, our proposed CSCANet surpasses the other prevalent FSS methods in class C12 (Roundabout) and class C14 (Plane) by 13.32% and 4.73% mIoU, respectively. The proposed method also obtains the second-best performance in classes C1 (Ship), C2 (Storage tank), C3 (Baseball diamond) and C4 (Tennis court). Objects of these categories are usually tiny and densely arranged within an image, indicating that our proposed method is capable of accurately segmenting multiple tiny target objects.

4.4. Limitation Analysis

We observe that the proposed method has a poor performance in C9 (Small vehicle) with both backbone networks. We assume that this is due to the class similarity between C9 (Small vehicle) and other classes like C1 (Ship), C7 (Bridge), and C8 (Large vehicle) in the top-view conditions.
We also visualize some representative failure cases of our proposed method in Figure 7. Failure cases happen mainly due to resolution differences between support and query images (row 1) and intra-class discrepancy (rows 2 and 3). These are also the major challenges faced by current Few-Shot Semantic Segmentation methods for remote sensing images. When the support samples are of limited representativeness, our attention mechanism may concentrate on unrepresentative target information, leading to performance degradation.

4.5. Ablation Studies

The ablation study aims to examine the importance of each component of our CSCANet. We conducted a variety of ablation experiments on iSAID-5i under a 1-shot setting, with ResNet-50 selected as the backbone network. The results are presented in Table 5.

4.5.1. Effect of Self-Attention Module

Compared with the complete CSCANet pipeline, the model without the self-attention module is 0.24% lower in terms of mIoU. Furthermore, the second and third rows of Table 5 show that introducing the learnable parameter α in the SAM brings a further improvement of 0.17% mIoU, implying that α is important for abstracting a robust feature representation of novel classes. These results demonstrate that our SAM can effectively extract robust class-relevant information and direct the model to concentrate on the novel-class targets.

4.5.2. Effect of Cross-Attention Module

A high-quality query affinity attention map has a significant impact on the final prediction. Therefore, we conducted relevant ablation tests on PG-CAM, which is the core component of CSCANet. As shown in the third and last rows of Table 5, removing PG-CAM decreases the performance by 1.14% mIoU. In particular, we also investigated the impact of the prior map on the proposed PG-CAM. Referring to the fourth and fifth rows, incorporating the prior similarity map achieves a 0.47% mIoU improvement, indicating that the prior information plays a crucial role in guiding the cross-attention module to focus on the unseen-class objects.

5. Conclusions

In this paper, we introduced a few-shot remote sensing image segmentation framework named CSCANet to address the problems of foreground–background similarity and multiple tiny objects. The proposed CSCANet includes a simple yet effective self-attention module and a prior-guided cross-attention module. Specifically, the first module is able to extract robust unseen-class information from the support set and avoid undesired activation. The second module generates a high-quality query attention map, which can guide the network to concentrate on the tiny target regions. The proposed method demonstrates an outstanding ability to adapt to unseen classes, achieving state-of-the-art (SOTA) performance in both one-shot and five-shot settings.
The major factors in failure cases are different resolutions between support and query sets and the intra-class discrepancy. To address these issues, we will adopt stronger backbones (e.g., ResNet101, Swin-B) and incorporate transformer architecture to enhance the model’s feature extraction ability in the future. Furthermore, we will validate the proposed method on more remote sensing benchmark datasets and try to create a new few-shot remote sensing image dataset. We will also explore the potential of extending the proposed framework to the zero-shot remote sensing image segmentation task.

Author Contributions

Conceptualization, G.L., F.X. and Y.-R.C.; Methodology, G.L., F.X. and Y.-R.C.; Experiments, G.L. and F.X.; Validation, G.L. and F.X.; Formal analysis, G.L., F.X. and Y.-R.C.; Investigation, G.L.; Data curation, F.X.; Writing—original draft, G.L., F.X. and Y.-R.C.; Writing—review and editing, Y.-R.C.; Visualization, G.L.; Project administration, Y.-R.C.; Funding acquisition, Y.-R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Science and Technology Council, Taiwan (NSTC) under Grant 112-2221-E-197-022.

Data Availability Statement

The original data presented in the study are openly available in iSAID at https://captain-whu.github.io/iSAID/ (accessed on 23 May 2024).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FSS: Few-Shot Semantic Segmentation
FSL: Few-Shot Learning
CNN: Convolutional Neural Network
FCN: Fully Convolutional Network
ASPP: Atrous Spatial Pyramid Pooling
PPM: Pyramid Pooling Module
MAP: Masked Average Pooling
SAM: Self-Attention Module
PG-CAM: Prior-Guided Supervised Cross-Attention Module
BCE: Binary Cross-Entropy
MIoU: Mean Intersection over Union
FB-IoU: Foreground–Background Intersection over Union

References

  1. Sun, W.; Du, Q. Graph-regularized fast and robust principal component analysis for hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3185–3195. [Google Scholar] [CrossRef]
  2. Peng, J.; Sun, W.; Ma, L.; Du, Q. Discriminative transfer joint matching for domain adaptation in hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 972–976. [Google Scholar] [CrossRef]
  3. Sun, X.; Yin, D.; Qin, F.; Yu, H.; Lu, W.; Yao, F.; He, Q.; Huang, X.; Yan, Z.; Wang, P.; et al. Revealing influencing factors on global waste distribution via deep-learning based dumpsite detection from satellite imagery. Nat. Commun. 2023, 14, 1444. [Google Scholar] [CrossRef] [PubMed]
  4. Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
  5. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  6. Lin, D.; Dai, J.; Jia, J.; He, K.; Sun, J. Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3159–3167. [Google Scholar]
  7. Zhang, H.; Dana, K.; Shi, J.; Zhang, Z.; Wang, X.; Tyagi, A.; Agrawal, A. Context encoding for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7151–7160. [Google Scholar]
  8. Strudel, R.; Garcia, R.; Laptev, I.; Schmid, C. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 7262–7272. [Google Scholar]
  9. Shaban, A.; Bansal, S.; Liu, Z.; Essa, I.; Boots, B. One-shot learning for semantic segmentation. arXiv 2017, arXiv:1709.03410. [Google Scholar]
  10. Zhang, X.; Wei, Y.; Yang, Y.; Huang, T.S. Sg-one: Similarity guidance network for one-shot semantic segmentation. IEEE Trans. Cybern. 2020, 50, 3855–3865. [Google Scholar] [CrossRef] [PubMed]
  11. Lang, C.; Cheng, G.; Tu, B.; Han, J. Learning what not to segment: A new perspective on few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8057–8067. [Google Scholar]
  12. Ouyang, C.; Biffi, C.; Chen, C.; Kart, T.; Qiu, H.; Rueckert, D. Self-supervision with superpixels: Training few-shot medical image segmentation without annotation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXIX 16. Springer: Cham, Switzerland, 2020; pp. 762–780. [Google Scholar]
  13. Yao, X.; Cao, Q.; Feng, X.; Cheng, G.; Han, J. Scale-aware detailed matching for few-shot aerial image semantic segmentation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5611711. [Google Scholar] [CrossRef]
  14. Wang, B.; Wang, Z.; Sun, X.; Wang, H.; Fu, K. Dmml-net: Deep metametric learning for few-shot geographic object segmentation in remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5611118. [Google Scholar] [CrossRef]
  15. Zhang, C.; Lin, G.; Liu, F.; Guo, J.; Wu, Q.; Yao, R. Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9587–9595. [Google Scholar]
  16. Wang, H.; Zhang, X.; Hu, Y.; Yang, Y.; Cao, X.; Zhen, X. Few-shot semantic segmentation with democratic attention networks. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XIII 16. Springer: Cham, Switzerland, 2020; pp. 730–746. [Google Scholar]
  17. Zhao, Q.; Liu, B.; Lyu, S.; Chen, H. A self-distillation embedded supervised affinity attention model for few-shot segmentation. IEEE Trans. Cogn. Dev. Syst. 2023, 16, 177–189. [Google Scholar] [CrossRef]
  18. Wang, K.; Liew, J.H.; Zou, Y.; Zhou, D.; Feng, J. Panet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9197–9206. [Google Scholar]
  19. Zhang, C.; Lin, G.; Liu, F.; Yao, R.; Shen, C. Canet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5217–5226. [Google Scholar]
  20. Tian, Z.; Zhao, H.; Shu, M.; Yang, Z.; Li, R.; Jia, J. Prior guided feature enrichment network for few-shot segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1050–1065. [Google Scholar] [CrossRef] [PubMed]
  21. Li, G.; Jampani, V.; Sevilla-Lara, L.; Sun, D.; Kim, J.; Kim, J. Adaptive prototype learning and allocation for few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8334–8343. [Google Scholar]
  22. Liu, Y.; Zhang, X.; Zhang, S.; He, X. Part-aware prototype network for few-shot semantic segmentation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part IX 16. Springer: Cham, Switzerland, 2020; pp. 142–158. [Google Scholar]
  23. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015, Proceedings, Part III 18; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  25. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890. [Google Scholar]
  26. Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. Ccnet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 603–612. [Google Scholar]
  27. Jindal, S.; Manduchi, R. Contrastive representation learning for gaze estimation. In Proceedings of the Annual Conference on Neural Information Processing Systems, PMLR, New Orleans, LA, USA, 10–16 December 2023; pp. 37–49. [Google Scholar]
  28. Koch, G.; Zemel, R.; Salakhutdinov, R. Siamese neural networks for one-shot image recognition. In Proceedings of the ICML Deep Learning Workshop, Lille, France, 6–11 July 2015; Volume 2. [Google Scholar]
  29. Snell, J.; Swersky, K.; Zemel, R. Prototypical networks for few-shot learning. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
  30. Li, H.; Eigen, D.; Dodge, S.; Zeiler, M.; Wang, X. Finding task-relevant features for few-shot learning by category traversal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1–10. [Google Scholar]
  31. Finn, C.; Abbeel, P.; Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 1126–1135. [Google Scholar]
  32. Jamal, M.A.; Qi, G.-J. Task agnostic meta-learning for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11719–11727. [Google Scholar]
  33. Ravi, S.; Larochelle, H. Optimization as a model for few-shot learning. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  34. Chen, Z.; Fu, Y.; Chen, K.; Jiang, Y.-G. Image block augmentation for one-shot learning. AAAI Conf. Artif. Intell. 2019, 33, 3379–3386. [Google Scholar] [CrossRef]
  35. Lang, C.; Cheng, G.; Tu, B.; Han, J. Global rectification and decoupled registration for few-shot segmentation in remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5617211. [Google Scholar] [CrossRef]
  36. Vinyals, O.; Blundell, C.; Lillicrap, T.; Wierstra, D. Matching networks for one shot learning. Adv. Neural Inf. Process. Syst. 2016, 29, 1–9. [Google Scholar]
  37. Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  38. Zamir, S.W.; Arora, A.; Gupta, A.; Khan, S.; Sun, G.; Khan, F.S.; Zhu, F.; Shao, L.; Xia, G.-S.; Bai, X. Isaid: A large-scale dataset for instance segmentation in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 28–37. [Google Scholar]
  39. Yang, B.; Liu, C.; Li, B.; Jiao, J.; Ye, Q. Prototype mixture models for few-shot semantic segmentation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part VIII 16. Springer: Cham, Switzerland, 2020; pp. 763–778. [Google Scholar]
  40. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8026. [Google Scholar]
  41. Zhang, B.; Xiao, J.; Qin, T. Self-guided and cross-guided learning for few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8312–8321. [Google Scholar]
  42. Liu, Y.; Liu, N.; Cao, Q.; Yao, X.; Han, J.; Shao, L. Learning non-target knowledge for few-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11573–11582. [Google Scholar]
  43. Lang, C.; Tu, B.; Cheng, G.; Han, J. Beyond the prototype: Divide-and-conquer proxies for few-shot segmentation. arXiv 2022, arXiv:2204.09903. [Google Scholar]
  44. Jiang, X.; Zhou, N.; Li, X. Few-shot segmentation of remote sensing images using deep metric learning. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6507405. [Google Scholar] [CrossRef]
  45. Puthumanaillam, G.; Verma, U. Texture based prototypical network for few-shot semantic segmentation of forest cover: Generalizing for different geographical regions. Neurocomputing 2023, 538, 126201. [Google Scholar] [CrossRef]
Figure 1. Characteristics of remote sensing images.
Figure 2. Meta learner of our proposed CSCANet.
Figure 3. Architecture of the proposed SAM in the 1-shot setting.
Figure 4. Architecture of the proposed PG-CAM in the 1-shot setting.
Figure 5. Qualitative examples of 1-shot prediction on iSAID-5i.
Figure 6. Visualization of the cross-attention maps generated by PG-CAM on iSAID-5i in the 1-shot setting.
Figure 7. Visualization of the failure cases of the proposed CSCANet on iSAID-5i (ResNet-50, 1-shot setting).
Table 1. Selection of novel classes for each fold of the iSAID-5i dataset.
Fold | Novel Classes
0 | Ship (C1), Storage tank (C2), Baseball diamond (C3), Tennis court (C4), Basketball court (C5)
1 | Ground track field (C6), Bridge (C7), Large vehicle (C8), Small vehicle (C9), Helicopter (C10)
2 | Swimming pool (C11), Roundabout (C12), Soccer ball field (C13), Plane (C14), Harbor (C15)
Table 2. Comparison of the CSCANet with other FSS networks on iSAID-5i under 1-shot and 5-shot settings. Underlined results denote the second-best performance, while bold results show the best performance (the same applies to all the following tables).
Backbone | Method | 1-Shot: Fold-0 | Fold-1 | Fold-2 | MIoU% | FB-IoU% | 5-Shot: Fold-0 | Fold-1 | Fold-2 | MIoU% | FB-IoU%
VGG-16 | PANet (ICCV-19) [18] | 26.86 | 14.56 | 20.69 | 20.70 | 52.69 | 30.89 | 16.63 | 24.05 | 23.86 | 54.75
VGG-16 | CANet (CVPR-19) [19] | 13.91 | 12.94 | 13.67 | 13.51 | 53.98 | 17.32 | 15.07 | 18.23 | 16.87 | 56.86
VGG-16 | SCL (CVPR-21) [41] | 25.75 | 18.57 | 22.24 | 22.19 | 58.96 | 35.77 | 24.92 | 32.70 | 31.13 | 61.56
VGG-16 | PFENet (TPAMI-22) [20] | 28.52 | 17.05 | 18.94 | 21.50 | 57.79 | 37.59 | 23.22 | 30.45 | 30.42 | 60.84
VGG-16 | NERTNet (CVPR-22) [42] | 25.78 | 20.01 | 19.88 | 21.89 | 56.34 | 38.43 | 24.21 | 28.99 | 30.54 | 61.97
VGG-16 | DCP (arXiv-22) [43] | 28.17 | 16.52 | 22.49 | 22.39 | 59.55 | 39.65 | 22.68 | 29.93 | 30.75 | 60.78
VGG-16 | BAM (CVPR-22) [11] | 33.93 | 16.88 | 21.47 | 24.09 | 59.20 | 38.46 | 22.76 | 28.81 | 30.01 | 62.26
VGG-16 | DMML (TGRS-21) [14] | 24.41 | 18.58 | 19.46 | 20.82 | 54.21 | 28.97 | 21.02 | 22.78 | 24.26 | 54.89
VGG-16 | SDM (TGRS-22) [13] | 24.52 | 16.31 | 21.01 | 20.61 | 56.39 | 26.73 | 19.97 | 26.10 | 24.27 | 56.65
VGG-16 | DML (GRSL-22) [44] | 30.99 | 14.60 | 19.05 | 21.55 | 55.98 | 34.03 | 16.38 | 26.32 | 25.48 | 56.26
VGG-16 | TBPN (IJON-23) [45] | 27.86 | 12.32 | 18.16 | 19.45 | 54.26 | 32.79 | 16.28 | 24.27 | 24.45 | 55.79
VGG-16 | R2Net (TGRS-23) [35] | 35.27 | 19.93 | 24.63 | 26.61 | 61.71 | 42.06 | 23.52 | 30.06 | 31.88 | 63.55
VGG-16 | CSCANet (Ours) | 33.26 | 20.44 | 25.98 | 26.56 | 61.45 | 40.08 | 24.15 | 38.00 | 34.08 | 63.74
ResNet-50 | PANet (ICCV-19) [18] | 27.56 | 17.23 | 24.60 | 23.13 | 56.56 | 36.54 | 16.05 | 26.22 | 26.27 | 57.37
ResNet-50 | CANet (CVPR-19) [19] | 25.51 | 13.50 | 24.45 | 21.15 | 56.64 | 29.32 | 21.85 | 26.91 | 26.03 | 59.46
ResNet-50 | SCL (CVPR-21) [41] | 34.78 | 22.77 | 31.20 | 29.58 | 61.30 | 41.29 | 25.73 | 37.70 | 34.91 | 64.13
ResNet-50 | PFENet (TPAMI-22) [20] | 35.84 | 23.35 | 27.20 | 28.80 | 60.09 | 42.42 | 25.34 | 33.00 | 33.59 | 63.25
ResNet-50 | NERTNet (CVPR-22) [42] | 34.93 | 23.95 | 28.56 | 29.15 | 59.97 | 44.83 | 26.73 | 37.19 | 36.25 | 64.45
ResNet-50 | DCP (arXiv-22) [43] | 37.83 | 22.86 | 28.92 | 29.87 | 62.36 | 41.52 | 28.18 | 33.43 | 34.38 | 63.37
ResNet-50 | BAM (CVPR-22) [11] | 39.43 | 21.69 | 28.64 | 29.92 | 62.04 | 43.29 | 27.92 | 38.62 | 36.61 | 65.00
ResNet-50 | DMML (TGRS-21) [14] | 28.45 | 21.02 | 23.46 | 24.31 | 57.78 | 30.61 | 23.85 | 24.08 | 26.18 | 58.26
ResNet-50 | SDM (TGRS-22) [13] | 27.96 | 21.99 | 27.82 | 25.92 | 59.58 | 28.50 | 25.23 | 31.07 | 28.27 | 59.90
ResNet-50 | DML (GRSL-22) [44] | 32.96 | 18.98 | 26.27 | 26.07 | 58.93 | 33.58 | 22.05 | 29.77 | 28.47 | 59.23
ResNet-50 | TBPN (IJON-23) [45] | 29.33 | 16.84 | 25.47 | 23.88 | 57.34 | 30.98 | 20.42 | 28.07 | 26.49 | 58.63
ResNet-50 | R2Net (TGRS-23) [35] | 41.22 | 21.64 | 35.28 | 32.71 | 63.82 | 46.45 | 25.80 | 39.84 | 37.36 | 66.18
ResNet-50 | CSCANet (Ours) | 42.30 | 24.17 | 36.50 | 34.32 | 63.56 | 47.85 | 30.04 | 40.32 | 39.40 | 66.32
Table 3. Model complexity and average speed (FPS) comparisons between our approach (ResNet-50, 1-shot) and previous state-of-the-art methods.
Method | #Params | FPS
Ours | 5.2M | 40.36
PANet [18] | 23.6M | 58.1
CANet [19] | 22.3M | 32.7
SCL [41] | 11.9M | 39.2
PFENet [20] | 10.8M | 45.7
DCP [43] | 11.3M | 37.9
BAM [11] | 4.9M | 44.4
DMML [14] | 23.6M | 47.4
SDM [13] | 29.3M | 52.9
DML [44] | 23.6M | 59.5
TBPN [45] | 23.6M | 56.5
R2Net [35] | 5.0M | 41.5
Table 4. Class-wise comparison of CSCANet with other FSS networks on iSAID-5i under the 1-shot setting.
Method | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 | C13 | C14 | C15 | MIoU%
VGG-16
PANet (ICCV-19) [18] | 20.05 | 37.71 | 21.18 | 41.22 | 14.15 | 12.17 | 13.82 | 21.05 | 7.89 | 17.88 | 4.36 | 31.68 | 27.55 | 26.88 | 12.97 | 20.70
CANet (CVPR-19) [19] | 24.13 | 6.73 | 13.83 | 16.32 | 8.54 | 14.12 | 3.24 | 21.04 | 3.35 | 22.96 | 9.57 | 14.91 | 17.83 | 16.11 | 9.92 | 13.51
SCL (CVPR-21) [41] | 28.50 | 32.93 | 19.68 | 29.60 | 18.05 | 22.48 | 7.92 | 31.46 | 8.99 | 22.02 | 14.17 | 16.53 | 19.72 | 39.40 | 21.37 | 22.19
PFENet (TPAMI-22) [20] | 34.32 | 31.81 | 24.20 | 35.43 | 16.86 | 13.98 | 6.01 | 31.68 | 6.76 | 26.85 | 8.15 | 17.75 | 20.56 | 33.34 | 14.87 | 21.50
NERTNet (CVPR-22) [42] | 12.66 | 23.11 | 26.90 | 50.47 | 15.77 | 23.14 | 8.48 | 31.73 | 11.75 | 24.94 | 14.63 | 20.45 | 29.03 | 28.06 | 7.24 | 21.89
DCP (arXiv-22) [43] | 27.69 | 38.45 | 25.92 | 33.20 | 15.57 | 17.62 | 12.36 | 26.79 | 8.05 | 17.80 | 22.45 | 18.29 | 18.03 | 37.57 | 16.10 | 22.39
BAM (CVPR-22) [11] | 27.66 | 43.90 | 31.48 | 43.96 | 22.66 | 13.57 | 8.91 | 31.76 | 9.26 | 20.91 | 17.05 | 26.27 | 30.68 | 25.27 | 8.07 | 24.09
DMML (TGRS-21) [14] | 34.75 | 37.36 | 15.15 | 22.85 | 11.94 | 21.41 | 13.85 | 23.92 | 10.24 | 23.50 | 8.17 | 16.32 | 21.08 | 29.63 | 22.09 | 20.82
SDM (TGRS-22) [13] | 33.76 | 23.88 | 17.80 | 27.76 | 19.38 | 18.36 | 9.63 | 25.24 | 8.63 | 19.69 | 10.56 | 15.36 | 24.76 | 32.30 | 22.06 | 20.61
DML (GRSL-22) [44] | 27.30 | 42.63 | 19.25 | 50.63 | 15.13 | 14.16 | 15.94 | 22.40 | 7.74 | 12.74 | 3.79 | 23.73 | 23.47 | 27.40 | 16.88 | 21.55
TBPN (IJON-23) [45] | 22.03 | 39.75 | 20.80 | 42.80 | 13.94 | 10.41 | 6.87 | 16.54 | 4.38 | 23.41 | 5.68 | 23.66 | 22.13 | 24.63 | 14.72 | 19.45
R2Net (TGRS-23) [35] | 37.82 | 45.16 | 26.27 | 45.30 | 21.81 | 24.11 | 14.38 | 30.92 | 12.21 | 18.03 | 18.66 | 25.02 | 29.64 | 31.95 | 17.87 | 26.61
CSCANet (Ours) | 36.21 | 43.88 | 26.01 | 43.39 | 16.81 | 21.80 | 15.84 | 26.65 | 10.58 | 27.33 | 9.05 | 41.67 | 32.19 | 31.01 | 15.97 | 26.56
ResNet-50
PANet (ICCV-19) [18] | 21.81 | 36.31 | 23.01 | 42.06 | 14.59 | 12.11 | 17.44 | 22.70 | 12.27 | 21.60 | 30.29 | 24.62 | 26.79 | 25.54 | 15.79 | 23.13
CANet (CVPR-19) [19] | 39.57 | 18.54 | 18.46 | 33.63 | 17.34 | 9.78 | 5.49 | 22.15 | 5.17 | 24.89 | 9.96 | 36.50 | 19.12 | 38.82 | 17.85 | 21.15
SCL (CVPR-21) [41] | 37.61 | 33.63 | 26.68 | 54.75 | 21.22 | 22.60 | 24.40 | 30.22 | 6.71 | 29.93 | 33.00 | 44.68 | 18.25 | 44.63 | 15.46 | 29.58
PFENet (TPAMI-22) [20] | 39.02 | 45.63 | 20.86 | 49.96 | 23.72 | 21.00 | 24.76 | 31.59 | 6.98 | 32.42 | 13.34 | 47.64 | 30.65 | 32.82 | 11.54 | 28.80
NERTNet (CVPR-22) [42] | 33.59 | 42.83 | 22.30 | 49.35 | 21.91 | 21.62 | 28.82 | 25.64 | 9.35 | 34.30 | 23.91 | 38.67 | 25.63 | 40.84 | 13.74 | 28.83
DCP (arXiv-22) [43] | 37.42 | 42.44 | 35.16 | 56.55 | 17.58 | 21.66 | 19.57 | 32.97 | 10.60 | 29.50 | 24.02 | 35.34 | 28.44 | 39.80 | 17.02 | 29.87
BAM (CVPR-22) [11] | 36.34 | 39.76 | 38.23 | 58.13 | 24.71 | 18.25 | 12.68 | 35.91 | 11.42 | 30.21 | 28.98 | 40.74 | 29.43 | 33.25 | 10.79 | 29.92
DMML (TGRS-21) [14] | 40.14 | 40.18 | 21.31 | 27.02 | 13.60 | 15.56 | 15.19 | 26.05 | 13.84 | 34.44 | 11.26 | 17.57 | 23.27 | 39.11 | 26.12 | 24.31
SDM (TGRS-22) [13] | 41.77 | 35.50 | 21.41 | 20.81 | 20.29 | 15.60 | 25.60 | 28.66 | 13.29 | 26.79 | 13.61 | 32.35 | 24.59 | 42.79 | 25.75 | 25.92
DML (GRSL-22) [44] | 35.13 | 42.10 | 30.49 | 41.79 | 15.31 | 13.25 | 16.87 | 24.70 | 14.62 | 25.45 | 10.24 | 35.49 | 25.35 | 41.69 | 18.57 | 26.07
TBPN (IJON-23) [45] | 25.36 | 41.28 | 30.67 | 32.88 | 16.48 | 13.48 | 9.74 | 27.88 | 12.52 | 20.56 | 11.12 | 34.31 | 23.57 | 40.36 | 17.98 | 23.88
R2Net (TGRS-23) [35] | 46.87 | 49.06 | 30.70 | 52.86 | 26.62 | 24.31 | 17.25 | 31.25 | 13.67 | 21.73 | 24.88 | 46.07 | 42.29 | 42.07 | 21.08 | 32.71
CSCANet (Ours) | 45.96 | 47.83 | 36.62 | 57.99 | 23.10 | 21.27 | 23.45 | 29.87 | 11.98 | 34.28 | 18.69 | 59.39 | 37.45 | 46.80 | 20.17 | 34.32
Table 5. Ablation study of our CSCANet at the module level. The first row represents the result of the baseline.
Self-Attention | Cross-Attention | Alpha | Prior | MIoU% | FB-IoU%
- | - | - | - | 32.85 | 61.75
✓ | - | - | - | 33.01 | 61.81
✓ | - | ✓ | - | 33.18 | 62.13
- | ✓ | - | - | 33.61 | 62.50
- | ✓ | - | ✓ | 34.08 | 62.92
✓ | ✓ | ✓ | ✓ | 34.32 | 63.56
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

