Article

A Self-Supervised Few-Shot Semantic Segmentation Method Based on Multi-Task Learning and Dense Attention Computation

1 Intelligent Policing Key Laboratory of Sichuan Province, Luzhou 646099, China
2 College of Computer Science, Sichuan University, Chengdu 610042, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(15), 4975; https://doi.org/10.3390/s24154975
Submission received: 31 May 2024 / Revised: 21 June 2024 / Accepted: 22 July 2024 / Published: 31 July 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

Autonomous driving technology has become widely prevalent, and intelligent vehicles are equipped with various sensors (e.g., vision sensors, LiDAR, and depth cameras). Among them, vision systems with tailored semantic segmentation and perception algorithms play a critical role in scene understanding. However, traditional supervised semantic segmentation requires a large number of pixel-level manual annotations for model training. Although few-shot methods reduce the annotation effort to some extent, they remain labor-intensive. In this paper, a self-supervised few-shot semantic segmentation method based on Multi-task Learning and Dense Attention Computation (dubbed MLDAC) is proposed. The salient part of an image is split into two parts: one serves as the support mask for few-shot segmentation, while the other part and the entire salient region are used separately as targets for cross-entropy losses in a multi-task learning scheme that improves the model's generalization ability. Swin Transformer is used as our backbone to extract feature maps at different scales. These feature maps are then input to multiple levels of dense attention computation blocks to enhance pixel-level correspondence. The final prediction results are obtained through inter-scale mixing and feature skip connections. The experimental results indicate that MLDAC obtains 55.1% and 26.8% one-shot mIoU for self-supervised few-shot segmentation on the PASCAL-5^i and COCO-20^i datasets, respectively. In addition, it achieves 78.1% on the FSS-1000 few-shot dataset, proving its efficacy.

1. Introduction

With the rapid improvement of computing power (supported by advanced hardware platforms) and the rise of deep learning algorithms, various scene awareness schemes have been integrated into smart cars. For instance, Light Detection and Ranging (LiDAR) and Radio Detection and Ranging (RADAR) are two important auxiliary instruments for scene understanding under poor lighting conditions, working via laser pulses and radio waves, respectively [1]. They are highly valued for precise distance measurement (for scene reconstruction and mapping) but perform poorly in visual recognition. Ultrasonic sensors are specially designed for parking assistance, but they are limited to short-range warning. By contrast, vision sensors combined with state-of-the-art recognition and perception algorithms greatly improve the scene awareness ability of intelligent vehicles at a lower cost than the abovementioned instruments. The algorithms commonly involved in scene understanding include object detection, semantic segmentation, and depth recovery [2], etc.
Traditional deep learning approaches often require a massive number of labeled samples. Computer vision tasks like image segmentation typically need numerous high-quality pixel-level annotations to guide model training [3,4]. However, acquiring such annotated data is expensive, time-consuming, or even infeasible. Few-shot learning leverages prior knowledge to generalize to new tasks with only a few supervised samples. Therefore, to reduce the labeling costs of image segmentation under scenarios with limited or scarce samples, many studies have incorporated few-shot learning into the image semantic segmentation field.
Few-Shot Semantic Segmentation (FSS) requires less annotated data to complete pixel-level semantic segmentation tasks and generalizes better to new classes. Normally, only a limited number of images is annotated for each category, which means that the model must learn intra-class features and transfer them to unseen classes. Currently, mainstream FSS includes metric learning-based methods, parameter prediction-based methods, fine tuning-based methods, and memory-based methods. Among them, metric learning-based methods [5,6,7,8,9] play a dominant role: the distances between the feature vectors of support images and query images in high-dimensional space are used [5] to calculate their similarity and thus predict the category probability of each pixel in the image.
In such methods, the annotation cost for unseen categories is greatly reduced, but annotations for the seen categories are still indispensable during training. Therefore, further reducing the annotation requirements of few-shot semantic segmentation remains challenging. A two-stage unsupervised image segmentation method was proposed by the authors of [10], who used K-means clustering to group pixels into semantic clusters and obtain salient regions with continuous semantic information. A self-supervised FSS method based on unsupervised saliency for prototype learning was devised in [11] to generate training pairs from a single training image and capture the similarity between the query image and a specific region of the support image. The eigenvectors of the Laplacian matrix derived from the feature affinity matrix of a self-supervised network were utilized in [12] to eliminate the need for extensive annotation, effectively capturing the global representation of the object of interest in the support image. As mentioned earlier, traditional methods rely heavily on manual annotation. Although the above methods alleviate this problem to some extent under a self-supervised learning framework, most of them do not fully utilize features at different scales, leading to poor segmentation performance.
To solve the abovementioned problems, a self-supervised few-shot segmentation method based on saliency segmentation is proposed in this paper. Our method is built under a multi-task learning framework. Each saliency mask is divided into two parts: one is randomly selected as the support mask, while the other is used as the query mask in the few-shot meta-learning training. To further enhance the meta-learning effect, multiple learning tasks are defined after saliency segmentation to jointly improve the few-shot segmentation performance. At the same time, to enhance the robustness of the model, noise addition and image augmentation are applied to the input image to better simulate FSS tasks. To fully utilize the multi-scale features, a dense attention computation mechanism is developed, which processes the multi-scale feature maps through multi-scale dense attention blocks and yields the final prediction result via inter-scale mixing. Together, these components form our self-supervised few-shot semantic segmentation method based on multi-task learning and dense attention computation.
The main contributions of this paper are summarized as follows:
  • A self-supervised few-shot segmentation method is proposed based on a multi-task learning paradigm. The unsupervised salient part of the image is split into two parts; one of them is used as the support image mask for few-shot segmentation, and the other part and the entire salient region are used to calculate the cross-entropy with the prediction results to realize multi-task learning and improve the generalization ability;
  • An efficient few-shot segmentation network based on dense attention computation is proposed. Multi-scale feature extraction is carried out using Swin Transformer so as to make full use of the multi-scale pixel-level correlation.
Experimental results obtained on three mainstream datasets show that our method surpasses other popular methods in segmentation accuracy, demonstrating its efficacy.

2. Related Works

2.1. Few-Shot Semantic Segmentation with Fully Supervised Learning

A two-branch network with a prediction-based method was proposed in [13], which consists of a conditional branch and a segmentation branch. It aims to solve a one-way one-shot image segmentation problem by using parameter prediction-based methods to modify the classifier weights for cross-class adaptation. Instead of relying only on support samples, query images were also used to generate classifier weights [14]. Instead of directly replacing the classifier parameters, the classifier weights were dynamically added in [15], enabling the model to master both base and unseen categories.
Metric learning-based methods are the most commonly used techniques for FSS. Among them, methods based on prototype networks are particularly prevalent. In traditional learning-based methods [8,16], the learned prototype of a class is an approximate estimate of the optimal prototype. Recent few-shot methods aim to obtain object-specific prototypes so as to provide relevant information; such methods assign higher similarity scores to query features belonging to the same semantic class as the object instead of approximating the best prototype [5,6,17,18]. When the masked average pooling operation is applied to generate a holistic descriptor for each semantic category in these methods, some problems arise: the prototype learner may fail to output a robust class representation, and a single global feature vector struggles to capture rich, fine-grained semantic information because of the visual differences between support and query images. To address these issues, some subsequent approaches generate multiple prototypes for each semantic category [9,19], and others perform dense matching between support and query images [20,21,22]. Fine tuning-based approaches use an optimization algorithm to adjust the parameters of a pre-trained segmentation network to learn unseen categories [23,24,25]. In memory-based FSS, semantic information is retained to assist the segmentation of query samples and obtain more cross-resolution information and precise segmentation results [26,27].
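For readers unfamiliar with the masked average pooling step discussed above, the following is a minimal sketch of how a class prototype and its similarity map are typically computed in prototype-based methods; the function names and tensor shapes are illustrative assumptions, not the API of any cited work.

import torch.nn.functional as F

def masked_average_pooling(support_feat, support_mask):
    # support_feat: (B, C, H, W) support features; support_mask: (B, 1, h, w) binary mask.
    # Returns a (B, C) class prototype (illustrative names, not a cited method's API).
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:],
                         mode="bilinear", align_corners=False)
    return (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def prototype_similarity(query_feat, prototype):
    # Cosine similarity between every query pixel and the prototype -> (B, H, W) score map.
    return F.cosine_similarity(query_feat, prototype[:, :, None, None], dim=1)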

2.2. Self-Supervised Learning for Image Semantic Segmentation

Self-supervised learning (SSL) bridges supervised and unsupervised learning. Supervised learning still requires large-scale labeled data to obtain good visual features; to avoid such high-cost annotation, self-supervised methods learn general image and video features from unlabeled data without additional supervision or any manual annotation. In image segmentation, self-supervised segmentation refers to the automatic generation of segmentation labels for images, without manual annotation, in order to learn the segmentation task.
The self-supervised semantic segmentation model predicts a set of labels (i.e., masks with deterministic meanings) based on the input data. Previous methods [28,29] were based on offline pre-computed labels, followed by model updating. As a more lightweight alternative, self-training on pseudo-labels is an important way to obtain high-quality supervision from high-confidence class predictions. Wen et al. [30] defined each class of objects as a learnable class vector and calculated the similarity between the class vector and each position in the image feature map; they aggregated features of the same class in the image and then constructed positive and negative pairs of the aggregated class features for contrastive training. Araslanov et al. [31] applied standard data augmentation techniques such as noise, flipping, and scaling to self-supervised segmentation, ensuring consistent semantic predictions across different image transformations.

2.3. Vision Transformers for FSS

Transformers were originally proposed for natural language processing tasks due to their excellent long-range dependency modeling ability and were later migrated to the computer vision domain [32,33].
The combination of vision transformers and FSS is a recently emerging topic. Lu et al. [34] designed a Classifier Weight Transformer (CWT) to dynamically adjust the classifier weights for each query image, making better use of the support set (a limited collection of images with corresponding annotated masks that furnishes the model with exemplars of the target classes). However, it still follows the prototype pipeline and, therefore, cannot fully exploit the support information at a fine-grained level. A novel Cycle-Consistent Transformer (CyCTR) module was developed in [35] that aggregates pixel-level support features into query features, providing each query pixel with relevant information from the support image to facilitate its classification. DCAMA [36] follows the design of CyCTR and fully exploits the support image by weighting multi-level pixel-level correlations between paired query and support features.

3. Method

In this section, we introduce our proposed MLDAC in detail. First, the task of self-supervised few-shot segmentation is defined; then, our multi-task framework is introduced in Section 3.2. The core modules of MLDAC are described in Section 3.3.

3.1. Problem Definition

Fully supervised few-shot semantic segmentation. Traditional FSS is based on fully supervised learning. Specifically, given images of one class and their corresponding masks in the training set ($D_{train}$), the model aims to find, in another image, the designated area related to the mask, based on the images and corresponding masks in the given test set ($D_{test}$), so as to accomplish the few-shot segmentation task. This meta-learning paradigm is called episodic training. In real applications, both $D_{train}$ and $D_{test}$ consist of different classes of objects, and image pairs of the same category are selected to realize the meta-learning paradigm. $D_{train}$ over the class set $C_{train}$ has a complete annotation mask for every image. The class set $C_{test}$ of $D_{test}$ shares no classes with $C_{train}$ (i.e., $C_{train} \cap C_{test} = \varnothing$). In episodic training, each episode contains two triplets of image, mask, and class information with the same class, i.e., for $(x_1, m_1, y_1)$ and $(x_2, m_2, y_2)$, $y_1 = y_2$, where $x_1$ and $x_2$ are the images, $m_1$ and $m_2$ are the ground-truth binary masks, and $y_1$ and $y_2$ are the class labels corresponding to the masks.
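As a concrete illustration of the episodic protocol above, the following is a minimal sketch of how a 1-way 1-shot episode could be sampled under the fully supervised setting; the in-memory data layout (a dictionary from class label to image-mask pairs) is an assumption for illustration only.

import random

def sample_episode(data_by_class):
    # data_by_class: dict mapping class label y -> list of (image, mask) pairs
    # (an assumed in-memory layout, not the paper's data loader).
    y = random.choice(list(data_by_class.keys()))
    (x_s, m_s), (x_q, m_q) = random.sample(data_by_class[y], 2)
    return (x_s, m_s, y), (x_q, m_q, y)  # support triplet and query triplet share the class label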
Self-supervised few-shot semantic segmentation (SFSS). For the self-supervised few-shot semantic segmentation problem, D t r a i n consists of images without masks or labels so that the training process cannot be implemented. To solve this problem, a new SFSS method based on multi-task learning to build a self-supervised experimental process is proposed. After training, the same evaluation protocol as the standard FSS can be used to evaluate the learned meta-models for a multitude of segmentation tasks with few images.

3.2. Framework

To realize SFSS, a complete episodic training framework is constructed in this paper. The architecture of our proposed MLDAC based on multi-task learning has three inputs (query image, support image, and support mask) and one output (segmentation result), as shown in Figure 1. The raw training input is a single image without any annotation or class label ($I_{image} \in D_{train}$). Self-supervised learning usually exploits data attributes to set up unsupervised tasks instead of manual annotation. Therefore, unsupervised saliency prediction is utilized to obtain the saliency region ($M_{saliency}$), which depicts an arbitrary object in the image with continuous and accurate edge information. Next, $M_{saliency}$ is divided into two parts, namely $M_{saliency}^{1}$ and $M_{saliency}^{2}$; the former is used as the support mask that is input to MLDAC, while $M_{saliency}^{2}$ and $M_{saliency}$ are used to calculate the losses as follows:
$loss_1 = \mathrm{CrossEntropyLoss}(M_{saliency}^{2},\, result),\quad ignore\_index = M_{saliency}^{1}$ (1)
$loss_2 = \mathrm{CrossEntropyLoss}(M_{saliency},\, result),\quad ignore\_index = \varnothing$ (2)
where
$result = \mathrm{MLDAC}(I_{Query},\, I_{Support},\, M_{saliency}^{1})$ (3)
Equations (1) and (2) are both cross-entropy loss functions with slightly different implementation details. For Loss 1, the pixels in $M_{saliency}^{1}$ do not participate in the calculation of the loss function, so the model focuses on learning the query region and the impact of the support region is weakened. Loss 1 and Loss 2 guide model training in an alternating manner, with probabilities set to a and 1 − a, respectively.
Meanwhile, since $I_{Query}$ and $I_{Support}$ come from the same source image, we employ data augmentation techniques, including jittering, horizontal flipping, rotation, and random cropping, to highlight the difference between them. Gaussian noise is also added before image augmentation (i.e., the color of the selected query region is perturbed slightly to increase the diversity of the training data). The pseudo-code of our proposed self-supervised few-shot semantic segmentation framework is given in Algorithm 1:
Algorithm 1 FSS self-supervised framework based on multi-task learning

def multi_task_split(Image):
    Saliency = Unsupervised_Saliency_Detection(Image)
    Saliency1, Saliency2 = Split(Saliency)
    if random.random() < a:
        target = Saliency
        loss_fn = nn.CrossEntropyLoss()
    else:
        target = Saliency2
        loss_fn = nn.CrossEntropyLoss(ignore_index=Saliency1)
    q_image, target = Augmentations(Image, target)
    s_image, s_mask = Augmentations(Image + b, Saliency1)
    result = MLDAC(q_image, s_image, s_mask)
    loss = loss_fn(result, target)
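The ignore-region mechanism in Algorithm 1 can be realized with PyTorch's standard ignore_index by relabeling the support-half pixels in the target. A minimal sketch following the branching of Algorithm 1 is given below; the ignore value 255, the single-image tensor shapes, and the helper name are assumptions rather than the paper's exact implementation.

import torch
import torch.nn as nn

IGNORE = 255  # assumed label meaning "do not score this pixel"; any unused value works

def build_target(saliency, saliency_1, saliency_2, a=0.15):
    # Masks are (H, W) tensors with values in {0, 1}.
    # With probability a the whole saliency map is the target; otherwise the query half
    # is the target and the support half is excluded from the loss.
    if torch.rand(1).item() < a:
        return saliency.long()
    target = saliency_2.long()
    target[saliency_1.bool()] = IGNORE  # support-half pixels contribute no gradient
    return target

criterion = nn.CrossEntropyLoss(ignore_index=IGNORE)
# usage: loss = criterion(logits.unsqueeze(0), target.unsqueeze(0))
#        with logits of shape (2, H, W) and target of shape (H, W)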

3.3. MLDAC Network Architecture

As shown in Figure 2, our proposed MLDAC consists of the following three parts:
In the first part, a pre-trained feature extractor is applied to both the query and the support images to obtain multi-scale query and support features, together with support masks resized to the corresponding scales;
In the second part, the query features, support features, and support masks at each scale are fed into a Dense Attention Computation Block (DACB) of the corresponding scale as Q, K, and V, respectively, and the DACBs perform dense attention computation at multiple scales;
The third part involves the aggregation of the outputs from the previous stage and the multi-scale features. This process produces the final prediction masks using a tailored mixer.

3.3.1. Feature Extraction and Masking

The first stage involves the extraction of different levels of semantic features. Here, Swin Transformer (Swin-B) is employed as the feature extractor, which captures both local fine-grained features and contextual semantic information. Through the bottom-up pathway, features at multiple scales are computed, enabling multi-scale feature learning. Following [7], after capturing image features of different sizes, the support mask is scaled to the corresponding sizes via linear interpolation, allowing for cross-feature attention at different layers. In contrast to most existing FSS models, Swin-B, pre-trained on ImageNet-1K [33], is adapted here to extract features.
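The mask rescaling step described above can be illustrated as follows. This is a minimal sketch in which the backbone is abstracted away as a list of feature maps; the bilinear mode is an assumption (nearest-neighbor interpolation would also be plausible).

import torch.nn.functional as F

def resize_mask_to_scales(support_mask, feature_maps):
    # support_mask: (B, 1, H, W); feature_maps: list of (B, C_l, H_l, W_l) tensors from the
    # backbone stages (strides 4/8/16/32 for Swin-B). Bilinear interpolation keeps soft values.
    return [F.interpolate(support_mask.float(), size=f.shape[-2:],
                          mode="bilinear", align_corners=False)
            for f in feature_maps]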

3.3.2. Dense Attention Computation Block (DACB)

Our proposed DACB aggregates multi-scale features to produce semantic information. The initial stage involves the transformer architecture (i.e., scaled dot-product attention). The corresponding calculation is written as follows:
$\mathrm{Attn}(\hat{Q}, \hat{K}, V) = \mathrm{softmax}\!\left(\frac{\hat{Q}\hat{K}^{T}}{\sqrt{d}}\right)V,$
where Q, K, and V are the sets of query, key, and value vectors, respectively; d represents the dimension of the query and key vectors; and $\hat{Q}$ and $\hat{K}$ denote Q and K with absolute learnable position encoding added.
In this paper, the query and support feature maps are denoted as $F_q, F_s \in \mathbb{R}^{h \times w \times c}$, where h, w, and c represent the height, width, and number of channels of the feature maps, respectively. As shown in Figure 3, the query feature ($F_q$) and support feature ($F_s$) are flattened first, and each pixel value is regarded as a token. Then, after adding learnable absolute position encoding, the $\hat{Q}$ and $\hat{K}$ matrices are generated from the flattened $F_q$ and $F_s$, and the multi-head attention mechanism is implemented as follows:
$\mathrm{MHA}(\hat{Q}, \hat{K}, V) = [head_1, head_2, \ldots, head_n],$
where $head_m = \mathrm{Attn}(\hat{Q}_m, \hat{K}_m, V_m)$, and the inputs $\hat{Q}_m, \hat{K}_m, V_m$ are the $m$-th groups of $\hat{Q}, \hat{K}, V$ with dimension $d/n$. For the support mask, it is only necessary to flatten it so that it can participate in the dense attention computation. Finally, the outputs of the multiple attention heads are averaged for each token, and the averaged output is reshaped into a two-dimensional tensor $\hat{R}$ of size $h \times w \times c$, which is the final dense attention computation output.
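To make the data flow of one DACB concrete, the following is a minimal single-scale sketch in which the query features form Q, the support features form K, and the flattened support mask is aggregated as V. The layer sizes, the single-channel output, and the learnable positional parameter are simplifying assumptions, not the exact implementation.

import torch
import torch.nn as nn

class DenseAttentionBlock(nn.Module):
    def __init__(self, dim, num_heads, num_tokens):
        super().__init__()
        self.num_heads = num_heads
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))  # learnable absolute PE

    def forward(self, feat_q, feat_s, mask_s):
        # feat_q, feat_s: (B, C, H, W); mask_s: (B, 1, H, W), already resized to this scale.
        b, c, h, w = feat_q.shape
        q = self.q_proj(feat_q.flatten(2).transpose(1, 2) + self.pos)    # (B, HW, C)
        k = self.k_proj(feat_s.flatten(2).transpose(1, 2) + self.pos)    # (B, HW, C)
        v = mask_s.flatten(2).transpose(1, 2)                            # (B, HW, 1)
        d = c // self.num_heads
        q = q.reshape(b, -1, self.num_heads, d).transpose(1, 2)          # (B, heads, HW, d)
        k = k.reshape(b, -1, self.num_heads, d).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)  # (B, heads, HW, HW)
        out = (attn @ v.unsqueeze(1)).mean(dim=1)                        # average the heads -> (B, HW, 1)
        return out.transpose(1, 2).reshape(b, 1, h, w)                   # back to a 2-D map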

3.3.3. Inter-Scale Mixing and Up-Sampling Module

After cross-feature dense attention computation at different scales of features from multiple layers, it is necessary to mix the attention results from these different scales to obtain the final prediction. Our inter-scale mixing and up-sampling module has two parts: one stitches the different layers together directly after cross-feature dense attention computation, and the other improves the recognition of image features using skip connections. In this step, the size of each layer is adjusted via successive up-sampling operations.
First, dense attention computation at the 1/8, 1/16, and 1/32 scales yields the attention-weighted results $R_{1/8}$, $R_{1/16}$, and $R_{1/32}$. Each $R_{1/i}$ is subsequently processed by several convolution blocks and merged into the resultant block after suitable up-sampling operations. The outputs of the 1/32 and 1/16 scales are fused, resized, and concatenated with the output of the 1/8 scale as follows:
$R_{1/32+1/16} = k(\uparrow R_{1/32}) \oplus k(R_{1/16})$
$R = k(\uparrow R_{1/32+1/16}) \oplus k(R_{1/8})$
where $\uparrow$ is the up-sampling operation, $\oplus$ stands for the concatenation operation, and $k(\cdot)$ denotes a convolution block. $R$ is then processed by a skip connection and decoding operation to obtain the final predicted mask. The last-layer features extracted by the feature extractor at the 1/8 and 1/4 scales are concatenated as follows:
$R = R \oplus F_{1/8}^{q} \oplus F_{1/8}^{s}$
$F_{1/8+1/4} = F_{1/8}^{q} \oplus F_{1/8}^{s} \oplus F_{1/4}^{q} \oplus F_{1/4}^{s}$
$R = R \oplus k(F_{1/8+1/4})$
Finally, the result is obtained using a decoder $f(X)$ to produce the final mask prediction as follows:
$M_{result} = f(R)$
The decoder is composed of several convolutional modules and ReLU blocks that operate alternately, along with up-sampling operations to attain the final segmentation resolution. The decoder blocks gradually reduce the number of output channels to 2 (one for the foreground and one for the background) in one-way segmentation. Two interleaved up-sampling operations restore the output size to match that of the input images.
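A simplified sketch of the inter-scale mixing, skip connection, and decoder described in this subsection is given below. For brevity it assumes all backbone feature maps have been projected to a common channel width, so the channel numbers and the exact fusion order are assumptions, not the paper's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def up(x, ref):
    # Bilinearly up-sample x to the spatial size of ref (the up-arrow operation).
    return F.interpolate(x, size=ref.shape[-2:], mode="bilinear", align_corners=False)

class MixerDecoder(nn.Module):
    # Channel widths and fusion order are simplified assumptions for illustration.
    def __init__(self, attn_ch=1, feat_ch=128, hidden=64):
        super().__init__()
        def conv(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))
        self.k32, self.k16, self.k8 = conv(attn_ch, hidden), conv(attn_ch, hidden), conv(attn_ch, hidden)
        self.mix1 = conv(2 * hidden, hidden)             # fuse the 1/32 and 1/16 attention maps
        self.mix2 = conv(2 * hidden, hidden)             # fuse the result with the 1/8 attention map
        self.skip = conv(hidden + 4 * feat_ch, hidden)   # dense skip connection with backbone features
        self.head = nn.Conv2d(hidden, 2, kernel_size=1)  # foreground / background logits

    def forward(self, r32, r16, r8, fq8, fs8, fq4, fs4, out_size):
        x = self.mix1(torch.cat([up(self.k32(r32), r16), self.k16(r16)], dim=1))
        x = self.mix2(torch.cat([up(x, r8), self.k8(r8)], dim=1))
        skips = torch.cat([up(fq8, fq4), up(fs8, fq4), fq4, fs4], dim=1)  # 1/8 and 1/4 features
        x = self.skip(torch.cat([up(x, fq4), skips], dim=1))
        logits = self.head(x)
        return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)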

4. Experiments and Results

Datasets. To validate the effectiveness of our proposed method, extensive experiments were conducted on the PASCAL-5^i, COCO-20^i, and FSS-1000 datasets.
PASCAL-5^i is built upon PASCAL VOC [37]. It has 20 categories that are further divided into four folds, namely 5^0, 5^1, 5^2, and 5^3, each containing different categories. For instance, 5^0 includes planes, bikes, birds, etc., while 5^1 includes buses, cars, chairs, etc. During the training for each fold, the other three folds are used as the training dataset. Our unsupervised training needs only image data, without the masks or class information associated with the images; hence, we use the images of all folds together with their unsupervised saliency maps for training, evaluate the mean intersection over union of each fold, and preserve the best outcomes.
Similar to PASCAL-5^i, COCO-20^i is derived from MS COCO [38], which consists of more than 120,000 images from 80 categories. It is split into four folds denoted by 20^0, 20^1, 20^2, and 20^3, each of which contains 20 categories.
The FSS-1000 dataset [39] is set up with well-established categories; we use only images from pre-trained categories as support and do not use images from the target categories as part of the training set. For all datasets, the mean intersection over union (mIoU) is used, and one-shot segmentation results are reported and compared.
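For completeness, the per-episode foreground IoU that underlies the reported mIoU can be computed as in the following sketch; averaging per class and then over classes is the usual FSS convention and is assumed here.

def binary_iou(pred, target):
    # pred, target: (H, W) tensors with values in {0, 1}; returns the foreground IoU of one episode.
    inter = ((pred == 1) & (target == 1)).sum().item()
    union = ((pred == 1) | (target == 1)).sum().item()
    return inter / union if union > 0 else 1.0

# mIoU: average the IoU per class over that class's episodes, then average over all test classes.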

4.1. Implementation Details

All experiments were performed using the PyTorch framework. The pre-trained Swin-B model is used as the backbone feature extractor (trained on ImageNet-1K [33]). Both support and query images have input sizes of 384 × 384 pixels. For optimization, the Adam optimizer was applied with a learning rate of 10^-4, a weight decay of 10^-5, and pixel-wise cross-entropy loss. Each model was trained on two RTX 3090 GPUs for 100 epochs on the PASCAL dataset and 30 epochs on the COCO dataset, with a batch size of 16.
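The optimization setup described above corresponds to a configuration along the following lines; the helper name is illustrative, and the loss is the standard pixel-wise cross-entropy.

import torch.nn as nn
import torch.optim as optim

def build_training_setup(model):
    # Values from Section 4.1: Adam, learning rate 1e-4, weight decay 1e-5, pixel-wise cross-entropy.
    optimizer = optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
    criterion = nn.CrossEntropyLoss()
    return optimizer, criterion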

4.2. Comparison with Other Popular Methods

Comparisons are made in Table 1 and Table 2 between our method and other state-of-the-art supervised few-shot segmentation approaches and self-supervised semantic segmentation approaches. Here, avg represents the mean intersection over union, 5^i (or 20^i) represents the average segmentation accuracy over all categories in the i-th fold, and FSS-1000 represents the segmentation accuracy on the FSS-1000 dataset. The supervised models utilized the ground-truth segmentation masks during the usual fold-based training, whereas the self-supervised models were trained on a training set without the ground truth. As shown in Table 1, we achieved the best results among all the self-supervised methods and even surpassed two of the fully supervised methods. Similarly, for COCO and FSS-1000, we also achieved the best overall results among all the self-supervised methods, exceeding two of the fully supervised methods (on COCO).
Overall, the framework that we propose is highly effective. For the comparison on the PASCAL dataset, we use the results reported in MaskSplit, where the Saliency * and MaskContrast * methods are adapted from [10] and optimized to serve as representative unsupervised approaches. In order to make a comprehensive comparison with existing supervised few-shot methods, the source code provided by MIANet [42] was used to re-run the experiment under the self-supervised setting, obtaining a result of 53.8%. In comparison to supervised approaches, self-supervised approaches perform worse because they do not learn intra-class information. Despite this, our approach performed exceptionally well, achieving a score of 55.1% on PASCAL. This score is two points higher than that of the original mask-splitting approach, MaskSplit, which is very competitive.
Table 2 shows a comparison between our method and other popular self-supervised and fully supervised few-shot segmentation methods on COCO and FSS-1000. As shown by the results, the performance of our method is greatly enhanced compared to current self-supervised few-shot segmentation methods, with an increase from 23.3 to 26.8 on the COCO dataset and to 78.1 on the FSS-1000 dataset, which is a significant improvement. We attribute the superb results on the FSS-1000 dataset to the unsupervised saliency regions being more prominent and free of noise and to the relatively high within-class image similarity. It is worth noting that MaskSplit surpasses our method on the last fold in both Table 1 and Table 2 (5^3 and 20^3, respectively). The reason is that MaskSplit masks out all the background regions of the support image (i.e., masked pooling) during the self-supervised training process. This strategy is effective when facing a complex background. However, we realize better overall results by using dense attention computation to acquire both the background and foreground information.
To further demonstrate the advantages of our model for cross-domain few-shot segmentation tasks, an additional experiment was conducted on the ISIC2018 dataset [44], which contains skin lesion images and is mainly used for medical image analysis and model training. It comprises thousands of high-resolution images of skin lesions, including benign lesions such as moles and pigmented nevi, as well as malignant lesions such as melanoma and basal cell carcinoma. The major advantage is that the images have already been annotated by professionals.
Comparative results on ISIC are shown in Table 3. PATNet [45] proposes a few-shot segmentation network based on pyramid anchoring transformation, which converts domain-specific features into domain-independent features so that downstream segmentation modules can quickly adapt to unknown domains. PMNet [46] proposes a pixel matching network that extracts domain-independent pixel-level dense matches and captures pixel-to-pixel and pixel-to-patch relationships in each support-query pair using bidirectional 3D convolution. Compared with PATNet [45] and PMNet [46], we achieved much higher accuracy.

4.3. Analysis of the Computational Complexity

In this section, we analyze the computational complexity of MLDAC in terms of model parameters (Params), floating-point operations (FLOPs), and running time. Params indicates the number of parameters of the model (i.e., the model size), FLOPs indicates the computational cost during inference, and the running time is the time that the model spends producing the segmentation results. The experiment was conducted on two RTX 3090 GPUs, and we adopted Swin-B as our backbone. A comparison is shown in Table 4 below. Compared with HSNet [7], although our method has a higher per-iteration computational cost, it requires far fewer iterations (i.e., much less total time).

4.4. Visualization Results

In this section, a comparison of visualization results is shown to demonstrate the segmentation quality of different methods. In Figure 4, the first column shows the query images with ground-truth masks; the second column shows the support images and corresponding masks; and the third and fourth columns show the segmentation results obtained by MaskSplit and our method, respectively. In comparison, our proposed method delineates the boundary of each object more clearly than MaskSplit. For complex backgrounds, MLDAC can better distinguish the foreground from the background based on the support images.

4.5. Ablation Study

A comprehensive ablation study was conducted on PASCAL to validate the effectiveness of our proposed method.

4.5.1. Multi-Task Learning Parameter Settings

In this section, the proposed multi-task learning parameter (a) and noise injection parameter (b) in MLDAC are examined. Here, a represents the probability of selecting task 1 or task 2 during training, which balances the proportion of the two few-shot segmentation target loss functions; when a is set to 0 or 1, the network degenerates into an ordinary single-target structure. The experimental results obtained using different values of a are shown in Figure 5(1). It can be seen that the model achieves the optimal result when a = 0.15; therefore, we set a = 0.15 in all other experiments. Similarly, as shown in Figure 5(2), the best result is reached when b = 1, where b represents the mean of the additive Gaussian noise mixed into the image.

4.5.2. The Architecture of MLDAC

As shown in Table 5, combinations of different schemes were validated to search for the optimal settings of MLDAC. The results confirm that our proposed learnable positional encoding and skip connections are, indeed, effective. The former enhances the connections between different features, and the latter strengthens the semantic information, making it easier to obtain the correlated region between the support and query images. Meanwhile, the 1/4 and 1/8 features accomplish the segmentation task in a more effective way.
It can be seen from the data in Table 5 that multi-scale DACBs can better capture semantic information at different scales and obtain better segmentation results than the compared configurations. Removing the 1/8-scale dense attention computation block reduces the model performance by 0.6%, and consecutively removing the 1/8- and 1/16-level dense attention computation blocks reduces the model's performance further.

4.5.3. Configuration of Learnable Absolute PE and Dense Skip Connections

Ablation studies were performed on the absolute learnable positional encodings and skip connections, and the results are shown in Table 6. The absolute learnable position encoding is added only at the 1/16 and 1/32 levels of dense attention computation; the 1/8 level uses encodings fixed by sine and cosine functions of different frequencies to save training costs. For the 1/4-scale skip connections, the features from the previous layer are up-sampled before being combined with the features from the later layer, and the original skip-connection operation is performed by using a conv module to resize the concatenated features back to the original size. The ablation experiments show that absolute learnable position encoding is beneficial. Using the dense skip connection only on the 1/8 and 1/4 scales improves the final segmentation by exploiting the intermediate-layer features more efficiently, and combining the 1/4-scale and 1/8-scale features with conv blocks before the skip connection completes the segmentation task more effectively.

5. Conclusions

Self-supervised methods have begun to prevail in multiple computer vision tasks, including semantic segmentation. In this paper, a self-supervised few-shot segmentation method is proposed based on multi-task learning and dense attention computation. Our method utilizes unsupervised saliency regions for self-supervised few-shot segmentation (FSS), which avoids the need for extensive manual annotation; the unsupervised saliency regions provide continuous semantic information that improves the training of self-supervised FSS. The multi-task learning scheme compensates for the lack of category information by dividing the salient regions into query regions and support regions, and the introduced attention mechanism improves the segmentation accuracy of the model. Extensive experiments were conducted on PASCAL-5^i and COCO-20^i, on which our model achieved 55.1% and 26.8% one-shot mIoU, respectively. In addition, it realized 78.1% on FSS-1000.
Despite the appealing results we achieved, the proposed self-supervised FSS method based on saliency segmentation still cannot effectively provide continuous salient regions for objects of the same category. In the future, we plan to introduce an image generation scheme to construct a meta-learning paradigm for FSS so as to achieve higher segmentation accuracies.

Author Contributions

Conceptualization, K.Y.; Methodology, K.Y.; Software, K.Y.; Validation, K.Y. and Y.Z.; Formal analysis, K.Y. and W.W.; Investigation, W.W.; Resources, Y.Z.; Writing—original draft preparation, K.Y. and W.W.; Writing—review and editing, W.W. and Y.Z.; Visualization, K.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Intelligent Policing Key Laboratory of Sichuan Province, No. ZNJW2024FKMS004.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy concerns.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kim, M.Y.; Kim, S.; Lee, B.; Kim, J. Enhancing Deep Learning-Based Segmentation Accuracy through Intensity Rendering and 3D Point Interpolation Techniques to Mitigate Sensor Variability. Sensors 2024, 24, 4475. [Google Scholar] [CrossRef]
  2. Jun, W.; Yoo, J.; Lee, S. Synthetic Data Enhancement and Network Compression Technology of Monocular Depth Estimation for Real-Time Autonomous Driving System. Sensors 2024, 24, 4205. [Google Scholar] [CrossRef]
  3. You, L.; Zhu, R.; Kwan, M.; Chen, M.; Zhang, F.; Yang, B.; Wong, M.; Qin, Z. Unraveling adaptive changes in electric vehicle charging behavior toward the postpandemic era by federated meta-learning. Innovation 2024, 5. [Google Scholar] [CrossRef] [PubMed]
  4. Liu, S.; You, L.; Zhu, R.; Liu, B.; Liu, R.; Yu, H.; Yuen, C. AFM3D: An Asynchronous Federated Meta-Learning Framework for Driver Distraction Detection. In IEEE Transactions on Intelligent Transportation Systems; IEEE: Piscataway, NJ, USA, 2024. [Google Scholar]
  5. Wang, K.; Liew, J.; Zou, Y.; Zhou, D.; Feng, J. Panet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9197–9206. [Google Scholar]
  6. Zhang, C.; Lin, G.; Liu, F.; Yao, R.; Shen, C. Canet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5217–5226. [Google Scholar]
  7. Min, J.; Kang, D.; Cho, M. Hypercorrelation squeeze for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 6941–6952. [Google Scholar]
  8. Zhou, T.; Wang, W.; Konukoglu, E.; Van Gool, L. Rethinking semantic segmentation: A prototype view. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2582–2593. [Google Scholar]
  9. Yang, B.; Wan, F.; Liu, C.; Li, B.; Ji, X.; Ye, Q. Part-based semantic transform for few-shot semantic segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 7141–7152. [Google Scholar] [CrossRef] [PubMed]
  10. Van Gansbeke, W.; Vandenhende, S.; Georgoulis, S.; Van Gool, L. Unsupervised semantic segmentation by contrasting object mask proposals. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10052–10062. [Google Scholar]
  11. Amac, M.; Sencan, A.; Baran, B.; Ikizler-Cinbis, N.; Cinbis, R. MaskSplit: Self-supervised meta-learning for few-shot semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 1067–1077. [Google Scholar]
  12. Karimijafarbigloo, S.; Azad, R.; Merhof, D. Self-supervised few-shot learning for semantic segmentation: An annotation-free approach. arXiv 2023, arXiv:2307.14446. [Google Scholar]
  13. Shaban, A.; Bansal, S.; Liu, Z.; Essa, I.; Boots, B. One-shot learning for semantic segmentation. arXiv 2017, arXiv:1709.03410. [Google Scholar]
  14. Zhuge, Y.; Shen, C. Deep reasoning network for few-shot semantic segmentation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual, 20–24 October 2021; pp. 5344–5352. [Google Scholar]
  15. Liu, L.; Cao, J.; Liu, M.; Guo, Y.; Chen, Q.; Tan, M. Dynamic extension nets for few-shot semantic segmentation. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1441–1449. [Google Scholar]
  16. Wang, W.; Zhou, T.; Yu, F.; Dai, J.; Konukoglu, E.; Van Gool, L. Exploring cross-image pixel contrast for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 7303–7313. [Google Scholar]
  17. Dong, N.; Xing, E. Few-shot semantic segmentation with prototype learning. BMVC 2018, 3, 4. [Google Scholar]
  18. Lang, C.; Cheng, G.; Tu, B.; Han, J. Learning what not to segment: A new perspective on few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8057–8067. [Google Scholar]
  19. Zhang, X.; Wei, Y.; Li, Z.; Yan, C.; Yang, Y. Rich embedding features for one-shot semantic segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6484–6493. [Google Scholar] [CrossRef] [PubMed]
  20. Zhang, C.; Lin, G.; Liu, F.; Guo, J.; Wu, Q.; Yao, R. Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9587–9595. [Google Scholar]
  21. Wang, H.; Zhang, X.; Hu, Y.; Yang, Y.; Cao, X.; Zhen, X. Few-shot semantic segmentation with democratic attention networks. In Proceedings, Part XIII 16, Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 730–746. [Google Scholar]
  22. Liu, B.; Jiao, J.; Ye, Q. Harmonic feature activation for few-shot semantic segmentation. IEEE Trans. Image Process. 2021, 30, 3142–3153. [Google Scholar] [CrossRef] [PubMed]
  23. Yang, X.; Wang, B.; Chen, K.; Zhou, X.; Yi, S.; Ouyang, W.; Zhou, L. Brinet: Towards bridging the intra-class and inter-class gaps in one-shot segmentation. arXiv 2020, arXiv:2008.06226. [Google Scholar]
  24. Tian, P.; Wu, Z.; Qi, L.; Wang, L.; Shi, Y.; Gao, Y. Differentiable meta-learning model for few-shot semantic segmentation. Proc. Aaai Conf. Artif. Intell. 2020, 34, 12087–12094. [Google Scholar] [CrossRef]
  25. Boudiaf, M.; Kervadec, H.; Masud, Z.; Piantanida, P.; Ben Ayed, I.; Dolz, J. Few-shot segmentation without meta-learning: A good transductive inference is all you need? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13979–13988. [Google Scholar]
  26. Wu, Z.; Shi, X.; Lin, G.; Cai, J. Learning meta-class memory for few-shot semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 517–526. [Google Scholar]
  27. Xie, G.; Xiong, H.; Liu, J.; Yao, Y.; Shao, L. Few-shot semantic segmentation with cyclic memory network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 7293–7302. [Google Scholar]
  28. Li, G.; Kang, G.; Liu, W.; Wei, Y.; Yang, Y. Content-consistent matching for domain adaptive semantic segmentation. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2020; pp. 440–456. [Google Scholar]
  29. Subhani, M.; Ali, M. Learning from scale-invariant examples for domain adaptation in semantic segmentation. In Proceedings, Part XXII 16, Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 290–306. [Google Scholar]
  30. Wen, X.; Zhao, B.; Zheng, A.; Zhang, X.; Qi, X. Self-supervised visual representation learning with semantic grouping. Adv. Neural Inf. Process. Syst. 2022, 35, 16423–16438. [Google Scholar]
  31. Araslanov, N.; Roth, S. Self-supervised augmentation consistency for adapting semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15384–15394. [Google Scholar]
  32. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  33. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
  34. Lu, Z.; He, S.; Zhu, X.; Zhang, L.; Song, Y.; Xiang, T. Simpler is better: Few-shot semantic segmentation with classifier weight transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 8741–8750. [Google Scholar]
  35. Zhang, G.; Kang, G.; Yang, Y.; Wei, Y. Few-shot segmentation via cycle-consistent transformer. Adv. Neural Inf. Process. Syst. 2021, 34, 21984–21996. [Google Scholar]
  36. Shi, X.; Wei, D.; Zhang, Y.; Lu, D.; Ning, M.; Chen, J.; Ma, K.; Zheng, Y. Dense cross-query-and-support attention weighted mask aggregation for few-shot segmentation. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2022; pp. 151–168. [Google Scholar]
  37. Everingham, M.; Van Gool, L.; Williams, C.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
  38. Lin, T.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollar, P.; Zitnick, C. Microsoft coco: Common objects in context. In Proceedings, Part V 13, Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
  39. Li, X.; Wei, T.; Chen, Y.; Tai, Y.; Tang, C. Fss-1000: A 1000-class dataset for few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2869–2878. [Google Scholar]
  40. Yang, L.; Zhuo, W.; Qi, L.; Shi, Y.; Gao, Y. Mining latent classes for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 8721–8730. [Google Scholar]
  41. Liu, Y.; Liu, N.; Yao, X.; Han, J. Intermediate prototype mining transformer for few-shot semantic segmentation. Adv. Neural Inf. Process. Syst. 2022, 35, 38020–38031. [Google Scholar]
  42. Yang, Y.; Chen, Q.; Feng, Y.; Huang, T. MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7131–7140. [Google Scholar]
  43. Tian, Z.; Zhao, H.; Shu, M.; Yang, Z.; Li, R.; Jia, J. Prior guided feature enrichment network for few-shot segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1050–1065. [Google Scholar] [CrossRef] [PubMed]
  44. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2019, arXiv:1902.03368. [Google Scholar]
  45. Lei, S.; Zhang, X.; He, J.; Chen, F.; Du, B.; Lu, C. Cross-domain few-shot semantic segmentation. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2022; pp. 73–90. [Google Scholar]
  46. Chen, H.; Dong, Y.; Lu, Z.; Yu, Y.; Han, J. Pixel Matching Network for Cross-Domain Few-Shot Segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 978–987. [Google Scholar]
Figure 1. The overall structure of the proposed self-supervised network. The unsupervised saliency mask is segmented; one part is used for masking and support, and the other part and the entire unsupervised salient area are used to calculate the loss function so as to guide model training.
Figure 2. The architecture of our network with the proposed self-supervised meta-learning approach.
Figure 3. Illustration of the proposed DACB.
Figure 4. Comparison of visualization results on PASCAL-5^i. Columns correspond to the query image with mask, support image with mask, MaskSplit results, and our results.
Figure 5. Ablation experiments on the value of parameters a and b.
Table 1. Comparison of results on PASCAL-5^i between our method and other popular methods.

Method               5^0    5^1    5^2    5^3    avg
Supervised approaches
CWT [34]             56.9   65.2   61.2   48.8   58.0
DAN [21]             54.7   68.6   57.8   51.6   58.2
MLC [40]             60.8   71.3   61.5   56.9   62.6
HSNet [7]            67.3   72.3   62.0   63.1   66.2
CyCTR [35]           69.3   72.7   56.5   58.6   64.3
IPMT [41]            71.6   73.5   58.0   61.2   66.1
MIANet [42]          68.5   75.8   67.5   63.2   68.7
Self-supervised approaches
Saliency * [10]      51.5   49.1   48.1   39.0   46.9
MaskContrast * [10]  53.6   50.7   50.7   39.9   48.7
IPMT * [41]          57.9   57.2   55.4   43.9   53.6
MIANet * [42]        57.2   56.8   55.9   45.2   53.8
MaskSplit [11]       54.1   57.1   54.8   46.1   53.0
Ours                 58.4   57.9   58.7   46.0   55.1
* represents the results obtained by adapting the methods used to assess the same settings.
Table 2. Comparison of results on COCO-20^i and FSS-1000 between our method and other popular methods.

Method               20^0   20^1   20^2   20^3   avg    FSS-1000
Supervised approaches
CWT [34]             30.3   36.6   30.5   32.2   32.4   -
DAN [21]             -      -      -      -      24.4   85.2
MLC [40]             50.2   37.8   27.1   30.4   36.4   -
HSNet [7]            37.2   44.1   42.4   41.3   41.2   86.5
PFENet [43]          36.8   41.8   38.7   36.7   38.5   -
MIANet [42]          42.5   53.0   47.8   47.4   47.7   -
Self-supervised approaches
Saliency * [10]      22.7   24.3   20.4   22.2   22.4   -
HSNet [7]            29.3   25.6   20.5   23.0   24.6   76.1
MIANet * [42]        26.7   27.2   20.9   21.9   24.2   75.0
MaskSplit [11]       22.3   26.1   20.6   24.3   23.3   72.1 *
Ours                 37.4   26.2   21.3   22.3   26.8   78.1
* represents the results obtained by adapting the methods used to assess the same settings.
Table 3. Comparative results on ISIC2018.

Method        mIoU
PATNet [45]   41.16%
PMNet [46]    51.2%
Ours          65.2%
Table 4. Computational complexity.

Method         FLOPs     Params   Number of Iterations   Time in Each Iteration
HSNet [7]      103.8 G   86.7 M   90                     15 m
MLDAC (Ours)   112.0 G   96.1 M   18                     15 m
Table 5. Ablation study on different layers of DACB.

1/8    1/16    1/32    Results
49.2
53.5
55.1
Table 6. Ablation experiments on the effectiveness of the proposed method on PASCAL-5^i.

Fixed PE    Learnable PE    1/4 Connection    1/8 Connection    1/4 + 1/8 Connection    Results
54.6
54.3
54.1
53.9
54.5
55.1
54.8
