
A Novel Perturbation Consistency Framework in Semi-Supervised Medical Image Segmentation

School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, No. 15 Yongyuan Road, Beijing 100044, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8445; https://doi.org/10.3390/app14188445
Submission received: 2 September 2024 / Revised: 17 September 2024 / Accepted: 18 September 2024 / Published: 19 September 2024

Abstract

Semi-supervised medical image segmentation models often face challenges such as empirical mismatch and data imbalance. Traditional methods, like the two-stream perturbation model, tend to over-rely on strong perturbation, leaving weak perturbation and labeled images underutilized. To overcome these challenges, we propose an innovative hybrid copy-paste (HCP) method within the strong perturbation branch, encouraging unlabeled images to learn more comprehensive semantic information from labeled images and narrowing the empirical distribution gap. Additionally, we integrate contrastive learning into the weak perturbation branch, where contrastive learning samples are selected through semantic grouping contrastive sampling (SGCS) to address memory and variance issues. This sampling strategy ensures more effective use of weak perturbation data and is particularly advantageous for pixel segmentation tasks with severely limited labels. Finally, our approach is validated on the public ACDC (Automated Cardiac Diagnosis Challenge) dataset, achieving a 90.6% Dice score with just 7% labeled data. These results demonstrate the effectiveness of our method in improving segmentation performance with limited labeled data.

1. Introduction

Semantic segmentation aims to provide pixel-level predictions for images, treating the task as dense pixel classification. It is crucial in natural image processing for facilitating real-world applications like autonomous driving. The segmentation of internal structures from medical images, such as radiography, computed tomography (CT), or magnetic resonance imaging (MRI), holds significant importance for a plethora of clinical applications [1]. Numerous image semantic segmentation techniques based on fully supervised learning have been proposed [2,3]. However, traditional fully supervised scenarios [4,5] heavily rely on images meticulously labeled by human annotators, posing a significant barrier to their extensive application in the medical image domain. Annotating a large number of images is prohibitively costly, with some requiring annotation by clinicians with extensive expertise. Consequently, semi-supervised segmentation has emerged as a prominent area of interest in recent years, gaining traction in the field of medical image analysis.
In semi-supervised segmentation methods, the objective is to train on labeled and unlabeled data jointly and consistently; for instance, pseudo-labels generated by a teacher model can supervise the student model on unlabeled data. This work focuses on the consistency regularization framework popularized in semi-supervised classification by FixMatch [6]. This approach exploits unlabeled data by augmenting each image to varying degrees and imposing consistency constraints on the resulting predictions. Specifically, predictions generated from a weakly perturbed version $X^w$ supervise the strongly perturbed unlabeled image $X^s$. The UniMatch [7] study found that strongly augmented data are more effective for learning and added two branches to the FixMatch approach: a novel strong-augmentation branch and a feature-level perturbation branch.
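To make the weak-to-strong consistency constraint concrete, the following minimal PyTorch sketch (our illustration rather than the released FixMatch code; the model, views, and confidence threshold are placeholder assumptions) shows a weak-view pseudo-label supervising the strong view:

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(model, x_weak, x_strong, conf_threshold=0.95):
    # The weak view yields a pseudo-label; gradients do not flow through it.
    with torch.no_grad():
        probs_w = model(x_weak).softmax(dim=1)      # (B, C, H, W)
        conf, pseudo = probs_w.max(dim=1)           # per-pixel confidence and class
    # The strong view is trained to match the pseudo-label on confident pixels.
    logits_s = model(x_strong)
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")  # (B, H, W)
    mask = (conf >= conf_threshold).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```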
Regarding strong perturbations, the CutMix [8] method utilized in UniMatch employs a straightforward data processing technique known as copy and paste (CP). This approach reinforces the consistency of weakly and strongly augmented pairs between unlabeled and labeled data. Traditional CP methods often overlook a unified learning strategy for labeled and unlabeled data, limiting their effectiveness. CP methods aim to enhance the network’s generalization ability but are constrained by low-precision pseudo-labels. An effective strategy involves motivating unlabeled data to learn common semantics present in labeled data, achieving distributional alignment through a consistent learning strategy.
This is accomplished by implementing a hybrid copy-paste (HCP) methodology within the auxiliary-main framework, where portions of labeled and unlabeled images are interchanged. The auxiliary network compares pseudo-labels of the unlabeled image with real labels, providing a supervisory signal to the main network. By interchanging parts of labeled and unlabeled images, HCP effectively increases the model’s ability to learn local features, thereby enhancing the contribution of unlabeled data during training. This approach ensures that the model can learn from finer details present in the images, improving the overall consistency of the model. The two hybrid images enable the network to learn the shared semantics between labeled and unlabeled data, both bidirectionally and symmetrically.
For the weakly perturbed branch, UniMatch halts gradient backpropagation and uses weakly perturbed images to supervise other branches. We introduce an additional branch integrated into a semi-supervised contrastive learning framework for medical image segmentation. This method, based on group-based sampling [9], constructs pixel groups and samples proportionally based on class distribution. The main concept involves partitioning images into grids, sampling semantically proximate pixels within the same grid while minimizing memory footprint.
The effectiveness of our method is demonstrated through experiments on the public ACDC dataset. Our approach surpasses existing semi-supervised segmentation methods as well as several scribble-supervised segmentation methods with comparable annotation costs. The primary contributions of this paper are:
  • We introduce a novel semi-supervised medical image segmentation model called hybrid contrast learning match (HCLM). This model integrates a weak-to-strong consistency framework along with a contrastive learning framework employing specialized sampling methods. Coupled with the low training cost of the lightweight model, this combination achieves segmentation results with relatively high accuracy.
  • We propose a novel strong perturbation method called hybrid copy-paste perturbation (HCP). This method involves cropping a portion of a labeled image and pasting it onto an unlabeled image, creating a new training sample.
  • We introduce a specialized sampling-based contrastive learning framework. Integrating contrastive learning involves incorporating weakly perturbed branches from the weak-to-strong consistency framework into the training process. We employ the semantic grouping contrastive sampling (SGCS) method to reduce variance, facilitating faster model convergence while alleviating memory burdens.
  • The experimental results demonstrate that our method achieves a DICE accuracy of 90.6% on the ACDC dataset at a 7% annotation rate. These results surpass those of existing similar methods, validating the effectiveness of our approach for this task.
This paper is structured as follows: Section 2 analyzes related work on semi-supervised learning and comparison learning. Section 3 details the specific implementation of our HCLM framework. Section 4 evaluates the segmentation performance through a comprehensive experimental analysis. Finally, Section 5 concludes the paper.

2. Related Work

2.1. Medical Image Segmentation

The segmentation of internal structures from medical images is essential for a wide range of clinical applications. Modern medical image segmentation methods predominantly employ fully convolutional networks (FCNs) or UNet [10] architectures, framing the task as a dense classification problem. Broadly, current approaches to medical image segmentation can be divided into two key areas: network design and optimization strategies. The first category focuses on the development of various 2D and 3D segmentation network architectures, as explored in works such as [11,12,13]. These studies aim to improve feature representation through techniques like dilated convolutions, deformable convolutions, pyramid pooling, and attention mechanisms. Since the introduction of the vision transformer (ViT), several studies [14,15] have reformulated segmentation as a sequence-to-sequence prediction task using the ViT architecture [16].
The second category leverages medical prior knowledge to enhance network training [17,18], focusing on the design of specialized loss functions tailored to medical imaging data. These loss functions aim to address challenges like class imbalance and improve the segmentation of uncertain pixel categories. In addition, our approach extends these concepts to real-world clinical scenarios by leveraging large amounts of unlabeled data and working with very limited labeled data during the training phase. The segmentation quality is further enhanced by providing additional supervision for each category of uncertain pixels, thereby improving the overall accuracy and robustness of the model.

2.2. Semi-Supervised Learning

The principal advantage of semi-supervised learning (SSL) is that a substantial amount of unlabeled data can be utilized to enhance the efficacy of the model during training. The main challenge resides in designing effective supervised signals for unlabeled data. Recently, there has been a surge in research on semi-supervised medical image segmentation [19]. Two primary loss functions proposed to tackle this issue are entropy minimization [20,21,22] and consistency regularization [23]. The fundamental concept of entropy minimization involves assigning pseudo-labels to unlabeled data, which are then combined with manually labeled data for retraining. The core principle of consistency regularization posits that images subjected to data perturbation and geometric transformations should maintain unchanged outputs, ensuring consistent predictions.
Among these approaches, FixMatch applies a strong perturbation to the unlabeled image, with the training process supervised by the predictions from the weakly perturbed image. This approach incorporates the entropy minimization concept to obtain pseudo-labels while ensuring consistency across both strongly and weakly perturbed views. UniMatch demonstrated that strongly augmented data enhance model learning effectiveness and therefore, building on FixMatch, added strongly augmented perturbations at the feature level to form a two-stream strong-perturbation strategy alongside the original strongly augmented branch. Although this framework is relatively simple and efficient, the potential for improvement in pixel-level semantic segmentation of medical images remains unclear.

2.3. Contrastive Learning

In recent years, several powerful contrastive learning methods have emerged, achieving excellent performance across various computer vision tasks thanks to careful design of the contrastive loss. A popular loss function, InfoNCE [24], works by drawing pairs of positive samples closer together while pushing negative samples apart. Recently, contrastive learning has been applied to enhance semantic segmentation through various design strategies. Traditionally, it has been employed primarily for self-supervised pre-training, which leverages unlabeled data to learn useful visual representations [25]. This provides the model with robust initial parameters, effectively initializing it for downstream tasks with only a small number of labels [26,27]. However, this strategy relies heavily on data augmentation to generate positive pairs and on a suitable method for selecting positive and negative sample pairs. SimCLR [28] achieved good results using training techniques such as larger batch sizes and effective image augmentation, but these techniques demand significant hardware resources. Recent studies [29] have demonstrated the advantages of cross-image contrastive learning in medical image segmentation.
However, a major drawback of contrastive learning in this context is the class conflict problem [30], in which semantically similar patches are pushed apart because negatives are selected without regard to their content. Our method requires significantly less memory for contrastive learning because it samples features actively and on the fly, rather than maintaining a stored feature bank. Moreover, while active sampling in the existing literature often relies on learnable, class-specific attention modules, our approach uses only relationship graphs and prediction confidence to sample features, avoiding additional computational overhead and yielding a simpler, more memory-efficient implementation.

3. Method

In the following sections, we will detail the implementation process and underlying principles of the proposed HCLM (hybrid contrast learning match) algorithm. This will include discussions on the framework overview, the HCP (hybrid copy-paste perturbation) strong perturbation method, and the contrastive learning sampling method.

3.1. Framework Overview

In order to facilitate comparison and ensure fairness, we utilize a two-dimensional network to segment two-dimensional slices of three-dimensional medical volume images. This approach is not only straightforward but can also be readily extended to three-dimensional segmentation methods and hybrid three-dimensional and two-dimensional methods. The framework comprises an auxiliary network ($F_t$) and a main network ($F_s$), with parameters $\theta_t$ and $\theta_s$, respectively. The main network is optimized using stochastic gradient descent, while the auxiliary network is updated as an exponential moving average (EMA) of the main network. The model training process comprises two stages: a pre-training phase and a segmentation phase.
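As a brief, hedged illustration of this update rule (the decay value below is a typical choice, not one reported in the paper), the EMA step can be sketched in PyTorch as:

```python
import torch

@torch.no_grad()
def ema_update(aux_model, main_model, decay=0.99):
    # theta_t <- decay * theta_t + (1 - decay) * theta_s
    for p_t, p_s in zip(aux_model.parameters(), main_model.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)
```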
The pre-training phase trains the auxiliary model $F_t$ on labeled data. In the segmentation phase, the auxiliary model $F_t$ serves a dual role: it generates pseudo-labels for unlabeled, weakly perturbed images as a supervisory signal for the main model $F_s$, and it provides a reference point for evaluating the performance of the main model. For simplicity, the auxiliary model is configured identically to the main model. In the image segmentation phase, we combine the FixMatch weak-to-strong consistency regularization method with the UniMatch feature-level perturbation idea.
It can be observed that the effect of weak-to-strong consistency regularization is largely dependent on the specific strong perturbation method employed, with the weakly perturbed branches merely acting as supervisory signals for the other branches. In response, we propose the HCP strong perturbation method, in addition to contrastive learning utilizing the SGCS sampling method, with the aim of optimizing this framework. The specific methods are described in Section 3.2 and Section 3.3.
The segmentation stage proceeds as follows. First, the unlabeled image $X_{un}$ is processed in three distinct ways. The first path inputs the image directly into the auxiliary model $F_t$ to obtain the predicted pseudo-label $PL_1$. The second path applies a weak perturbation (e.g., crop or resize) and inputs the image into the main network $F_s$ to obtain the predicted pseudo-label $PL_2$; this pseudo-label serves as a supervisory signal for both the feature-perturbation branch and the strong-perturbation branch. The third path applies a strong perturbation (HCP) and inputs the result into $F_s$ to obtain the prediction $P$. The labeled image serves as the source for the HCP, and its ground truth, together with the prediction of the auxiliary model $F_t$, is also used: the HCP operation generates a new pseudo-label $PL_3$, which acts as a supervisory signal for the strong-perturbation branch. The prediction $P$ of the strong-perturbation branch is thus supervised by the pseudo-labels $PL_1$ and $PL_3$. Finally, the model outputs the segmented result $P$. The specific HCLM model flow is depicted in Figure 1, and the overall operation of the framework is summarized in Algorithm 1.
Algorithm 1 Overall Framework Process
Require: Labeled data D_L, unlabeled data D_U, auxiliary model F_t, main model F_s
Ensure: Segmented result P
1: Phase 1: Pre-training
2: F_t ← Train on D_L ▹ Train auxiliary model on labeled data
3: Phase 2: Segmentation
4: for each X_un ∈ D_U do
5:     Supervise the strong-perturbation branch of F_s with the weak- and feature-perturbation branches
6:     segmentation_loss ← Compute segmentation loss
7:     segmentation_loss.backward() ▹ Backpropagate segmentation loss
8:     SGCS sample selection on the weak-perturbation branch
9:     contrastive_loss ← Compute contrastive loss
10:    contrastive_loss.backward() ▹ Backpropagate contrastive loss
11: end for
12: return P ← F_s(strong-perturbation output)
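As a complementary view, the following Python sketch mirrors Algorithm 1; `pretrain`, `segmentation_loss`, `contrastive_loss`, and `ema_update` are hypothetical helpers standing in for the steps described above, not functions from a released codebase:

```python
def train_hclm(F_t, F_s, labeled_loader, unlabeled_loader, optimizer, epochs):
    # Phase 1: pre-train the auxiliary model on labeled data.
    pretrain(F_t, labeled_loader)

    # Phase 2: segmentation training of the main model.
    for _ in range(epochs):
        for x_un, (x_l, y_l) in zip(unlabeled_loader, labeled_loader):
            optimizer.zero_grad()
            # Weak- and feature-perturbation branches supervise the strong branch.
            seg_loss = segmentation_loss(F_t, F_s, x_un, x_l, y_l)
            seg_loss.backward()              # backpropagate segmentation loss
            # SGCS sample selection on the weak-perturbation branch.
            ctr_loss = contrastive_loss(F_s, x_un)
            ctr_loss.backward()              # backpropagate contrastive loss
            optimizer.step()
            ema_update(F_t, F_s)             # auxiliary model tracks the main model
    return F_s
```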

3.2. A Novel Methodology for Strong Perturbations

The proposed hybrid copy-paste perturbation (HCP) method is integrated within the auxiliary-main model framework. The main process operates as follows. An unlabeled image $X_{un}$ and a labeled image $X_l$ are randomly selected from the training set. A portion of the foreground content of $X_l$ is cropped and pasted onto the corresponding position of $X_{un}$, generating a new hybrid image $X_{in}$, which is fed into the main network $F_s$ to obtain the predicted segmentation results. These predictions are supervised by the HCP combination of the auxiliary network's predictions for the unlabeled image and the label map of the labeled image.
To perform HCP between pairs of images, we first generate a center mask $\alpha \in \{0, 1\}$ indicating whether each pixel comes from the foreground (0) or background (1) image. The zero-valued region has size $\beta H \times \beta W \times \beta L$, where $\beta \in (0, 1)$. The labeled and unlabeled images are then mixed and pasted as in Equation (1), which combines the labeled image $X_l$ and the unlabeled image $X_u$ through the mask $\alpha$:

$X_{in} = X_l \odot \alpha + X_u \odot (1 - \alpha)$.  (1)

$PL = F_t(X_{un}; \theta_t)$.  (2)

The initial pseudo-labels $PL$ are generated by the auxiliary model as in Equation (2), which passes the unlabeled image $X_{un}$ through the auxiliary model $F_t$ with parameters $\theta_t$. The supervisory signal is constructed as in Equation (3). This equation mirrors Equation (1), but whereas Equation (1) mixes the images themselves, Equation (3) mixes the corresponding label maps:

$Y_{in} = Y_l \odot \alpha + \tilde{Y}_u \odot (1 - \alpha)$.  (3)

Regarding the loss function, each image input to the main network contains elements of both labeled and unlabeled images. Since the true mask of a labeled image is typically more accurate than the pseudo-label of an unlabeled image, we use $\gamma$ to regulate the contribution of unlabeled-image pixels to the loss. The loss for $X_{in}$ is computed as:

$\mathcal{L}_{in} = \mathcal{L}_{seg}(Q_{in}, Y_{in}) \odot \alpha + \gamma \, \mathcal{L}_{seg}(Q_{in}, Y_{in}) \odot (1 - \alpha)$,  (4)

where $\mathcal{L}_{seg}$ denotes a linear combination of the Dice loss and the cross-entropy loss, and $Q_{in}$ is computed as in Equation (5):

$Q_{in} = F_s(X_{in}; \theta_s)$.  (5)
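The following is a minimal sketch of Equation (4), using cross-entropy alone as a stand-in for the Dice-plus-cross-entropy $\mathcal{L}_{seg}$ (the default value of $\gamma$ here is illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def hcp_loss(q_in, y_in, alpha, gamma=0.5):
    # q_in: logits (B, C, H, W); y_in: mixed label map (B, H, W); alpha: mask (B, H, W).
    per_pixel = F.cross_entropy(q_in, y_in, reduction="none")  # stand-in for L_seg
    # Pixels from the labeled image (alpha = 1) count fully; pixels from the
    # unlabeled image (alpha = 0) are down-weighted by gamma, as in Eq. (4).
    weight = alpha + gamma * (1.0 - alpha)
    return (per_pixel * weight).mean()
```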
The pseudo-code for the above algorithm is presented in Algorithm 2.
Algorithm 2 Hybrid Copy-Paste Perturbation Algorithm
Require: labeled_images, unlabeled_images, num_copies
Ensure: augmented_images
1: Initialize augmented_images = [ ]
2: for each image_l in labeled_images do
3:     for i = 1 to num_copies do
4:         Select a random image_u from unlabeled_images
5:         Copy a random patch from image_l to image_u, creating image_u′
6:         Copy the corresponding patch from image_u to image_l, creating image_l′
7:         Add image_u′ and image_l′ to augmented_images
8:     end for
9: end for
10: return augmented_images
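As a hedged two-dimensional sketch of the mixing in Equations (1) and (3) (the formulation above also allows a third, depth dimension for volumes; the tensor shapes and default β are our assumptions):

```python
import torch

def hybrid_copy_paste(x_l, x_un, y_l, pl_un, beta=0.5):
    # x_l, x_un: images (B, C, H, W); y_l: ground truth (B, H, W);
    # pl_un: pseudo-label of the unlabeled image (B, H, W).
    B, C, H, W = x_l.shape
    alpha = torch.ones(B, 1, H, W, device=x_l.device)  # 1 = pixel taken from x_l
    h, w = int(beta * H), int(beta * W)
    top, left = (H - h) // 2, (W - w) // 2
    alpha[:, :, top:top + h, left:left + w] = 0        # zero-valued center region
    x_in = x_l * alpha + x_un * (1 - alpha)            # Eq. (1)
    m = alpha[:, 0]
    y_in = (y_l * m + pl_un * (1 - m)).long()          # Eq. (3) on the label maps
    return x_in, y_in, m
```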

3.3. A Novel Pixel Sampling Method

Our findings indicate that, in existing weak-to-strong consistency-regularized segmentation frameworks, the predictions of the weakly perturbed branch act only as a supervisory signal for the other branches and do not themselves participate in training the model. We therefore explored integrating this branch into the training process. To achieve this, we incorporate contrastive learning, allowing the branch to backpropagate gradients and participate in training. The contrastive learning component of the model is illustrated in Figure 2.
Indeed, if the positive and negative pairs correspond to the desired segmentation categories, the contrastive loss learns diverse representations for downstream medical segmentation tasks [31]. A key limitation in realistic scenarios, however, is the memory bottleneck [32]: sampling every pixel would far exceed the available memory. The current mainstream solution is naive sampling (NS), but this approach is too random, typically produces high variance, and fails to recognize semantically similar pixels, limiting the stability of contrastive learning (CL).
To address these two key issues, the memory bottleneck and high variance, we propose a simple but effective technique, semantic grouping contrastive sampling (SGCS). This method samples the most representative pixels from semantically similar pixel groups while alleviating the high-variance constraint. In practice, images of different categories are first divided into grids of equal size. Pixels that are semantically close to each other within the same grid are then sampled with high probability, while the additional memory footprint is kept minimal. Figure 3 shows a schematic diagram of SGCS.
To maintain consistency with the preceding notation, we consider an arbitrary image $x \in X$ in a given medical image dataset and let $P$ denote the set of pixels. For any function $h : X \times P \to \mathbb{R}$, the aggregation function $H(x)$ can be expressed as Equation (6). This formula computes the aggregation $H(x)$ for an image $x$ by summing $h$ over all parameters $p$ and averaging, so that different characteristics or processing results of the pixels can be synthesized across the various parameters $p$:

$H(x) = \frac{1}{|P|} \sum_{p \in P} h(x; p)$.  (6)

As directly computing $H(x)$ may be infeasible due to the large size of $P$, a group sampling approach is used. The SGCS sampling method first decomposes the pixels into $M$ disjoint groups, denoted $P_m$. For each group $P_m$, a center $C_m$ is identified such that the sampled points $p$ are symmetric with respect to $C_m$. A subset of pixels is then sampled within each group to form $D_m$. This allows the aggregation function $\hat{H}_{SGCS}$ to be expressed as Equation (7):

$\hat{H}_{SGCS}(x; D) = \frac{1}{M} \sum_{m=1}^{M} \frac{1}{|D_m|} \sum_{p \in D_m} h(x; p)$.  (7)

Here, $M$ is the total number of groups and $D_m$ is the subset of pixels obtained by symmetric sampling in group $P_m$; $|D_m|$ is the number of pixels sampled in group $m$, and $h(x; p)$ is the function applied to pixel $p$ of image $x$. In SGCS, the overall sampling variance is approximated by a weighted average of the per-group sampling variances $\sigma_m^2$ with group weights $n_m$:

$\mathrm{Var}[\hat{H}_{SGCS}] \approx \frac{1}{M^2} \sum_{m=1}^{M} n_m \sigma_m^2$.  (8)
The contrastive learning loss is based on SupCon [33], which pulls positive samples closer together in the feature space while pushing negative samples apart. The loss for contrastive learning can then be defined as:

$\mathcal{L}^{CL} = \sum_{i=1}^{2M} \mathcal{L}_i^{CL}$.  (9)

$\mathcal{L}_i^{CL} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{N} M_{ij} \log \frac{e^{z_{ij}/T}}{\sum_{k=1}^{N} e^{z_{ik}/T}}$.  (10)

In this context, $N$ is the total number of sampled pixels, and $i$ and $j$ index the samples being compared. $M_{ij}$ is a mask that determines whether samples $i$ and $j$ form a positive pair, and $z_{ij}$ is the cosine similarity between the features of samples $i$ and $j$. Finally, $T$ is the temperature parameter controlling the sensitivity of the loss: as the temperature increases, the influence of similarity on the loss decreases. The objective of this loss is to minimize the distance between similar samples while maximizing the distance between dissimilar ones, which facilitates separating different samples in the feature space.
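A compact sketch of this SupCon-style objective over a batch of sampled pixel embeddings follows (the shapes and the temperature value are our assumptions):

```python
import torch
import torch.nn.functional as F

def supcon_pixel_loss(feats, labels, T=0.1):
    # feats: (N, D) sampled pixel features; labels: (N,) class per pixel.
    z = F.normalize(feats, dim=1)
    sim = z @ z.t() / T                                   # z_ij / T (cosine similarities)
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                            # a pixel is not its own positive
    self_mask = 1.0 - torch.eye(len(z), device=z.device)  # exclude self from denominator
    log_prob = sim - torch.log((torch.exp(sim) * self_mask).sum(dim=1, keepdim=True))
    pos_count = pos_mask.sum(dim=1).clamp(min=1.0)
    return -((pos_mask * log_prob).sum(dim=1) / pos_count).mean()
```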
The pseudo-code of the SGCS method is shown in Algorithm 3:
Algorithm 3 Semantic Grouping Contrastive Sampling Algorithm
Require: pixels, M, sample_size_per_group
Ensure: sampled_pixels
1: Divide pixels into M groups based on features
2: Initialize sampled_pixels = [ ]
3: for each group in M do
4:     Identify the center pixel c_m of the group
5:     Sort pixels in the group by distance to c_m
6:     Select the closest sample_size_per_group/2 pixels to c_m
7:     Select the farthest sample_size_per_group/2 pixels from c_m
8:     Add the selected pixels to sampled_pixels
9: end for
10: return sampled_pixels
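The sketch below follows Algorithm 3. Note that the grouping step is simplified to chunking pixels along a crude feature ordering, whereas the paper groups by grid position and semantics, so treat the grouping criterion as a placeholder:

```python
import torch

def sgcs_sample(feats, M, sample_size_per_group):
    # feats: (N, D) pixel features; returns indices of the sampled pixels.
    order = feats.norm(dim=1).argsort()          # stand-in for feature-based grouping
    k = sample_size_per_group // 2
    sampled = []
    for g in order.chunk(M):                     # M roughly equal groups
        center = feats[g].mean(dim=0)            # group center c_m
        dist = (feats[g] - center).norm(dim=1)   # distance of each pixel to c_m
        idx = dist.argsort()
        sampled.append(g[idx[:k]])               # the k closest pixels
        sampled.append(g[idx[-k:]])              # the k farthest pixels
    return torch.cat(sampled)
```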

4. Experimental Results and Analysis

4.1. Datasets and Experimental Settings

The ACDC dataset [34] is a publicly available dataset containing cardiac MRI images from 100 patients. Each patient's data include two-dimensional image sequences capturing the entire cardiac cycle at end-systole (ES) and end-diastole (ED). Each image sequence includes three segmentation targets: the right ventricle (RV), the left ventricle (LV), and the myocardium (MYO). The dataset was selected for its prevalence in cardiac image segmentation studies, facilitating comparisons with other methods. To assess the efficacy of semi-supervised training, we sliced the three-dimensional volumetric images into two-dimensional slices. Training was conducted on an NVIDIA Tesla V100 GPU with a crop size of 256 × 256 and a batch size of 8 per GPU. The model was trained for 300 epochs using an SGD optimizer with a learning rate of 0.01, momentum of 0.9, and weight decay of 0.0001. A confidence threshold of 0.95 was applied during training to filter out uncertain predictions.
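For reference, the reported hyperparameters translate into the following PyTorch setup (the one-layer `model` is only a stand-in for the actual segmentation network):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 4, kernel_size=3, padding=1)  # placeholder for the segmentation network
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01,               # reported learning rate
                            momentum=0.9,          # reported momentum
                            weight_decay=1e-4)     # reported weight decay
CROP_SIZE, BATCH_SIZE, EPOCHS = 256, 8, 300        # reported training settings
CONF_THRESHOLD = 0.95                              # pseudo-labels below this are discarded
```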

4.2. Deployment and Usage Details

Our framework is implemented using PyTorch and can be deployed on servers via Docker for scalability. For large-scale deployment, we recommend running the model on servers with GPU support, such as AWS or Google Cloud, to handle high computational demands during training and inference. Docker ensures the portability of the framework across different platforms, enabling easy scaling and maintenance of the segmentation model in various environments.

4.3. Comparison with Other Models

Table 1 presents the average performance of the four reported segmentation scores on the ACDC dataset at label proportions of 3%, 7%, 20%, and 40%. To ensure a fair comparison, all experiments were conducted under identical settings, with the same data augmentations applied (random rotation, random cropping, and horizontal flipping). The best results are highlighted in bold; scores cover the right ventricle (RV), myocardium (Myo), and left ventricle (LV). The proposed method outperformed current SSL-based methods at every labeling ratio, validating its segmentation accuracy and labeling efficiency. For instance, compared to the second-best method, ARCO, our HCLM improved the Dice index on the ACDC dataset by {0.9%↑, 0.8%↑, 1.4%↑, 1.5%↑} at the {3%, 7%, 20%, 40%} label ratios, achieving Dice indices of {89.2%, 90.6%, 91.5%, 92.6%}. These results demonstrate that our method can be applied to diverse clinical scenarios and labeling ratios. The segmentation performance of each model on the ACDC dataset under different labeling ratios is visualized in Figure 4, and the segmentation performance of our model at varying labeling ratios is shown in Figure 5.
Given the focus of our work on semi-supervised learning, fully supervised methods were not included in the comparison. Fully supervised methods rely on large amounts of labeled data, and their primary goal differs from that of semi-supervised methods. While fully supervised models achieve impressive performance when abundant labeled data are available, they are less applicable in scenarios where labeled data are scarce—a key challenge that semi-supervised approaches aim to address. Thus, comparing these two fundamentally different paradigms may not provide the most relevant insights into the specific advantages of semi-supervised methods.
To more effectively illustrate the segmentation efficacy of our model, we selected the method exhibiting the highest Dice index within the labeling scales of 3% and 7%, and calculated the HD95 and ASD metrics. As shown in Table 2, our model demonstrates the most consistent performance. To clearly highlight the advantages of our model, we plotted box plots of DICE, HD95, and ASD metrics for different models under the 3% and 7% labeling rates. These box plots provide a visual comparison of the performance distribution among various models. The box plots for the 3% labeling rate are shown in Figure 6, and those for the 7% labeling rate are shown in Figure 7.

4.4. Ablation Experiment

In this section, we conduct an ablation study on the ACDC dataset to evaluate the effectiveness of different components in our proposed framework. Specifically, we analyze the contribution of HCP (hybrid copy-paste perturbation) and SGCS (semantic grouping contrastive sampling) to the overall segmentation performance. The ablation results in Table 3 indicate that both HCP and SGCS significantly contribute to performance improvement.
The SGCS module is designed to optimize the contrastive learning process by effectively utilizing the weak perturbation branch. By sampling semantically grouped pixels, SGCS reduces variance and ensures more efficient contrastive learning, allowing the model to make better use of weak perturbations. This method enhances local semantic relationships, which improves model generalization across unlabeled data. SGCS and contrastive learning are complementary, as SGCS strengthens the sampling process, while contrastive learning further leverages these samples to enhance representation learning.
The HCP module introduces hybrid perturbations through a novel copy-paste method, where labeled and unlabeled image regions are interchanged. This mechanism encourages the model to learn from diverse spatial contexts, significantly boosting the model’s ability to generalize from limited labeled data. The addition of HCP strengthens the strongly perturbed branch by enhancing the consistency between pseudo-labels and ground truth labels, leading to improved segmentation results.
Our experiments show that the combination of SGCS and HCP yields the best performance, achieving optimal segmentation accuracy. These two modules complement each other: SGCS optimizes contrastive learning, while HCP strengthens the model’s capability to learn from strongly augmented data. The results shown in Figure 8 further illustrate the effectiveness of this combined approach.
As shown in Figure 8, we provide visual comparisons of segmentation results after adding different modules to the model. In particular, #4 shows the segmentation results with both SGCS and HCP added, which yields the best performance, and #3 illustrates the results when SGCS and contrastive learning are added, while #2 shows the effects of adding only HCP. Lastly, #1 represents the baseline model without these enhancements. From these visualizations, it is clear that the addition of each module brings notable improvements in segmentation accuracy. The comparison highlights how different components contribute to refining the model’s performance, with the combination of both SGCS and HCP leading to the most significant gains.

4.5. Discussion and Comparison

In comparison to existing methods such as FixMatch and CutMix, our HCLM framework shows significant improvements in segmentation accuracy, especially with limited labeled data. While traditional semi-supervised methods focus primarily on strong perturbations, our method incorporates both weak and strong perturbations, alongside contrastive learning, to fully utilize unlabeled data. This not only reduces the empirical distribution gap but also enhances the model’s ability to generalize to unseen data. From the experimental results, it is evident that our method achieves superior Dice scores, particularly under lower label proportions, validating the effectiveness of our approach. However, the computational cost of adding a strong perturbation branch (HCP) may increase training time, which is offset by faster convergence and more robust segmentation results. Moreover, due to the uniformity in the imaging process and the structural consistency of organs in medical images, the data distribution is relatively homogeneous. This reduces the need for complex image augmentations to simulate real-world scenarios, further mitigating the time cost. As a result, the additional modules do not impose a significant training burden and remain acceptable in terms of computational overhead.

5. Conclusions

This paper introduces a novel semi-supervised medical image segmentation framework designed to enhance model robustness and improve labeling efficiency. The work explores the potential of UniMatch in semi-supervised semantic segmentation and concludes that there is scope for further optimization, pursued along two lines. First, in the strong-perturbation branch, the HCP method extends the CutMix-based approach bidirectionally, reducing the distribution gap between labeled and unlabeled data. Second, the weakly perturbed branch participates in training through contrastive learning with the SGCS sampling method, which also reduces variance for broad real-world applications. Both components significantly improve the model's segmentation performance. Experiments on a public cardiac MRI image segmentation dataset (ACDC) demonstrate that the final method, HCLM, outperforms recent semi-supervised segmentation methods.

Author Contributions

All authors contributed to the approach conception and design. Material preparation, data collection, and analysis were performed by K.L. The first draft of the manuscript was written by K.L. under the supervision of X.M. and D.S. All authors commented on previous versions of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This study utilized the ACDC dataset, which is publicly available. The dataset can be accessed from https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html (accessed on 1 September 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
2. Tragakis, A.; Kaul, C.; Murray-Smith, R.; Husmeier, D. The fully convolutional transformer for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 3660–3669.
3. Wang, S.; Li, C.; Wang, R.; Liu, Z.; Wang, M.; Tan, H.; Wu, Y.; Liu, X.; Sun, H.; Yang, R.; et al. Annotation-efficient deep learning for automatic medical image segmentation. Nat. Commun. 2021, 12, 5915.
4. Wang, Z.; Xiao, P.; Tan, H. Spinal magnetic resonance image segmentation based on U-net. J. Radiat. Res. Appl. Sci. 2023, 16, 100627.
5. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
6. Sohn, K.; Berthelot, D.; Carlini, N.; Zhang, Z.; Zhang, H.; Raffel, C.A.; Cubuk, E.D.; Kurakin, A.; Li, C.-L. FixMatch: Simplifying semi-supervised learning with consistency and confidence. Adv. Neural Inf. Process. Syst. 2020, 33, 596–608.
7. Yang, L.; Qi, L.; Feng, L.; Zhang, W.; Shi, Y. Revisiting weak-to-strong consistency in semi-supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7236–7246.
8. Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6023–6032.
9. You, C.; Dai, W.; Min, Y.; Liu, F.; Clifton, D.; Zhou, S.K.; Staib, L.; Duncan, J. Rethinking semi-supervised medical image segmentation: A variance-reduction perspective. Adv. Neural Inf. Process. Syst. 2024, 36, 9984.
10. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III; pp. 234–241.
11. Li, J.; Chen, S.; Ma, S.; Guo, F.; Tang, J. MixUNet: Mix the 2D and 3D models for robust medical image segmentation. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, Istanbul, Türkiye, 5–8 October 2023; pp. 1242–1247.
12. Li, W.; Lambert-Garcia, R.; Getley, A.C.M.; Kim, K.; Bhagavath, S.; Majkut, M.; Rack, A.; Lee, P.D.; Leung, C.L.A. AM-SegNet for additive manufacturing in situ X-ray image segmentation and feature quantification. Virtual Phys. Prototyp. 2024, 19, e2325572.
13. Zhang, Y.; Liao, Q.; Ding, L.; Zhang, J. Bridging 2D and 3D segmentation networks for computation-efficient volumetric medical image segmentation: An empirical study of 2.5D solutions. Comput. Med. Imaging Graph. 2022, 99, 102088.
14. Cai, Y.; Long, Y.; Han, Z.; Liu, M.; Zheng, Y.; Yang, W.; Chen, L. Swin Unet3D: A three-dimensional medical image segmentation network combining vision transformer and convolution. BMC Med. Inform. Decis. Mak. 2023, 23, 33.
15. Gao, Y.; Zhou, M.; Liu, D.; Yan, Z.; Zhang, S.; Metaxas, D.N. A data-scalable transformer for medical image segmentation: Architecture, model efficiency, and benchmark. arXiv 2022, arXiv:2203.00131.
16. Dosovitskiy, A. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
17. Cheng, P.; Lin, L.; Lyu, J.; Huang, Y.; Luo, W.; Tang, X. PRIOR: Prototype representation joint learning from medical images and reports. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 21361–21371.
18. Yang, Z.; Huang, Y.; Feng, J. Learning to leverage high-order medical knowledge graph for joint entity and relation extraction. In Findings of the Association for Computational Linguistics: ACL 2023; ACL: Stroudsburg, PA, USA, 2023; pp. 9023–9035.
19. Luo, X.; Chen, J.; Song, T.; Wang, G. Semi-supervised medical image segmentation through dual-task consistency. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; Volume 35, pp. 8801–8809.
20. Rumberger, J.L.; Franzen, J.; Hirsch, P.; Albrecht, J.-P.; Kainmueller, D. ACTIS: Improving data efficiency by leveraging semi-supervised augmentation consistency training for instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 3790–3799.
21. Luo, Y.; Luo, G.; Qin, K.; Chen, A. Graph entropy minimization for semi-supervised node classification. arXiv 2023, arXiv:2305.19502.
22. Gong, C.; Wang, D.; Liu, Q. AlphaMatch: Improving consistency for semi-supervised learning with alpha-divergence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13683–13692.
23. Fan, Y.; Kukleva, A.; Dai, D.; Schiele, B. Revisiting consistency regularization for semi-supervised learning. Int. J. Comput. Vis. 2023, 131, 626–643.
24. van den Oord, A.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748.
25. Zeng, D.; Wu, Y.; Hu, X.; Xu, X.; Yuan, H.; Huang, M.; Zhuang, J.; Hu, J.; Shi, Y. Positional contrastive learning for volumetric medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part II; pp. 221–230.
26. Wang, X.; Zhang, S.; Qing, Z.; Gao, C.; Zhang, Y.; Zhao, D.; Sang, N. MoLo: Motion-augmented long-short contrastive learning for few-shot action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 18011–18021.
27. Huang, Z.; Zhang, J.; Shan, H. Twin contrastive learning with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 11661–11670.
28. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, Online, 13–18 July 2020; pp. 1597–1607.
29. Yan, K.; Cai, J.; Jin, D.; Miao, S.; Guo, D.; Harrison, A.P.; Tang, Y.; Xiao, J.; Lu, J.; Lu, L. SAM: Self-supervised learning of pixel-wise anatomical embeddings in radiological images. IEEE Trans. Med. Imaging 2022, 41, 2658–2669.
30. Zhao, X.; Fang, C.; Fan, D.-J.; Lin, X.; Gao, F.; Li, G. Cross-level contrastive learning and consistency constraint for semi-supervised medical image segmentation. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; pp. 1–5.
31. Jiang, Z.; Chen, T.; Chen, T.; Wang, Z. Improving contrastive learning on imbalanced data via open-world sampling. Adv. Neural Inf. Process. Syst. 2021, 34, 5997–6009.
32. Chaitanya, K.; Erdil, E.; Karani, N.; Konukoglu, E. Contrastive learning of global and local features for medical image segmentation with limited annotations. Adv. Neural Inf. Process. Syst. 2020, 33, 12546–12558.
33. Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised contrastive learning. Adv. Neural Inf. Process. Syst. 2020, 33, 18661–18673.
34. Bernard, O.; Lalande, A.; Zotti, C.; Cervenansky, F.; Yang, X.; Heng, P.-A.; Cetin, I.; Lekadir, K.; Camara, O.; Ballester, M.A.G.; et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Trans. Med. Imaging 2018, 37, 2514–2525.
35. Bai, Y.; Chen, D.; Li, Q.; Shen, W.; Wang, Y. Bidirectional copy-paste for semi-supervised medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 11514–11524.
36. Basak, H.; Bhattacharya, R.; Hussain, R.; Chatterjee, A. An exceedingly simple consistency regularization method for semi-supervised medical image segmentation. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; pp. 1–4.
37. Liu, J.; Desrosiers, C.; Yu, D.; Zhou, Y. Semi-supervised medical image segmentation using cross-style consistency with shape-aware and local context constraints. IEEE Trans. Med. Imaging 2023, 43, 1449–1461.
38. Wang, Y.; Xiao, B.; Bi, X.; Li, W.; Gao, X. MCF: Mutual correction framework for semi-supervised medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 15651–15660.
39. Miao, J.; Chen, C.; Liu, F.; Wei, H.; Heng, P.-A. CauSSL: Causality-inspired semi-supervised learning for medical image segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 21426–21437.
Figure 1. Overview of the HCLM (hybrid contrast learning match) model framework, illustrating the weak perturbation branch, feature perturbation branch, and strong perturbation branch, which combine weak-to-strong consistency with contrastive learning to enhance segmentation performance. The auxiliary and main networks are optimized simultaneously to provide consistent predictions for labeled and unlabeled images.
Figure 2. Contrastive learning framework illustrating how positive and negative pairs are generated during training. The use of semantic grouping for pixel sampling ensures memory efficiency and high variance reduction. Solid arrows represent data flow, while dashed arrows represent supervision signals.
Figure 3. Schematic diagram of SGCS (semantic grouped contrastive sampling) showing the division of images into grids and the selection of representative pixels from each group for efficient contrastive learning. This method reduces variance while maintaining consistency in feature representation. In the diagram, the red stars represent the sampled pixels, which are selected based on their semantic relevance within each grid. The green stars denote pixels within the same grid that are semantically similar to the sampled pixels and spatially antithetic, ensuring a balanced and comprehensive representation of the local feature space.
Figure 4. Qualitative comparison of segmentation results on the ACDC dataset between our method (HCLM) and other state-of-the-art models. The figure highlights the segmentation performance for various cardiac structures such as the right ventricle (RV), myocardium (MYO), and left ventricle (LV).
Figure 5. Training performance of the HCLM model under different label ratios on the ACDC dataset, demonstrating improvements in segmentation accuracy as the labeling ratio increases.
Figure 6. Box plots of DICE, HD95, and ASD for different models at 3% labeling rate.
Figure 7. Box plots of DICE, HD95, and ASD for different models at 7% labeling rate.
Figure 8. Selection of ablation experiments demonstrating the impact of different modules on the model’s performance. The first column displays the original image, while the second column shows the ground truth. The remaining columns illustrate the segmentation performance after the addition of various modules.
Table 1. Performance comparison between HCLM and other state-of-the-art methods on ACDC dataset. The presented table displays the average DICE score for each segmentation category and the overall average DICE score. Bold values indicate the superior performance.
Method | 3% Labeled (Avg. / RV / MY. / LV) | 7% Labeled (Avg. / RV / MY. / LV) | 20% Labeled (Avg. / RV / MY. / LV) | 40% Labeled (Avg. / RV / MY. / LV)
BCP [35] | 88.0 / 85.9 / 85.7 / 92.4 | 88.8 / 86.0 / 87.2 / 93.1 | 89.5 / 86.8 / 87.9 / 93.9 | 91.6 / 91.4 / 89.6 / 93.7
ICTMSeg [36] | 86.6 / 83.4 / 84.6 / 91.7 | 88.9 / 87.9 / 87.2 / 91.5 | 91.1 / 89.5 / **90.0** / 93.7 | 91.6 / 91.4 / 89.6 / 93.7
SLC-NET [37] | 86.2 / 84.8 / 82.8 / 91.1 | 89.2 / 86.3 / 87.7 / 93.5 | 91.2 / 90.8 / 89.2 / 93.5 | 92.1 / 91.3 / **90.8** / 94.2
MCF [38] | 73.5 / 72.2 / 74.0 / 75.5 | 82.0 / 79.8 / 81.5 / 85.0 | 85.7 / 83.9 / 85.7 / 87.3 | 88.2 / 85.2 / 86.8 / 92.5
UniMatch | 86.6 / 85.2 / 83.2 / 91.5 | 89.3 / 86.3 / 87.8 / 93.6 | 90.6 / 87.3 / 89.6 / **94.8** | 91.5 / 89.7 / 85.3 / 89.4
UNet | 57.8 / 55.9 / 57.8 / 59.6 | 79.4 / 76.2 / 79.0 / 82.9 | 84.1 / 79.9 / 81.3 / 88.2 | 87.4 / 81.9 / 84.5 / 90.2
DTC [19] | 81.4 / 76.8 / 79.9 / 86.3 | 84.2 / 79.8 / 82.1 / 88.5 | 87.7 / 82.2 / 85.9 / 90.5 | 89.2 / 87.6 / 86.4 / 93.4
ACT-Net | 80.2 / 74.7 / 78.3 / 84.8 | 84.5 / 82.1 / 78.6 / 88.9 | 87.4 / 82.0 / 84.4 / 90.9 | 89.9 / 84.3 / 87.6 / 93.2
MCCauSSL [39] | 81.6 / 75.4 / 79.2 / 86.3 | 85.8 / 80.3 / 83.7 / 89.6 | 87.4 / 81.6 / 84.5 / 90.2 | 89.9 / 83.4 / 86.7 / 92.5
ARCO [9] | 88.3 / **86.9** / 86.2 / 91.9 | 89.8 / **88.5** / 88.2 / 92.7 | 90.1 / 88.9 / 88.4 / 93.1 | 91.1 / 90.4 / 88.9 / 94.0
Ours | **89.2** / 86.3 / **87.8** / **93.6** | **90.6** / 87.3 / **89.6** / **94.8** | **91.5** / **91.1** / 89.9 / 93.5 | **92.6** / **92.1** / 90.7 / **94.8**
Table 2. Comparison of the performance of our method (HCLM) with other state-of-the-art methods on the ACDC dataset. This table shows the mean HD95 and ASD for each segmentation category, as well as the overall mean HD95 and ASD, with bold indicating the best scores. The upward arrow (↑) indicates that a higher value is better, while the downward arrow (↓) indicates that a lower value is better.
Labeled | Method | DICE↑ | HD95↓ | ASD↓
3% Labeled | BCP | 88.0 | 6.17 | 1.46
3% Labeled | ICTMSeg | 86.6 | 5.89 | 1.81
3% Labeled | SLC-NET | 86.2 | 6.58 | 1.32
3% Labeled | UniMatch | 86.6 | 6.78 | 1.31
3% Labeled | ARCO | 88.3 | **5.36** | 1.23
3% Labeled | Ours | **89.2** | 5.57 | **1.07**
7% Labeled | BCP | 88.8 | 6.13 | 1.16
7% Labeled | ICTMSeg | 88.9 | 5.41 | 1.48
7% Labeled | SLC-NET | 89.3 | 5.51 | 1.12
7% Labeled | UniMatch | 89.2 | 5.47 | **1.07**
7% Labeled | ARCO | 89.8 | 4.16 | 1.08
7% Labeled | Ours | **90.6** | **3.45** | 1.12
Table 3. Results of image segmentation under different settings.
Number | SGCS | HCP | 3% | 7% | 20% | 40%
#1 | × | × | 0.864 | 0.889 | 0.904 | 0.915
#2 | × | ✓ | 0.868 | 0.891 | 0.908 | 0.912
#3 | ✓ | × | 0.871 | 0.899 | 0.912 | 0.918
#4 | ✓ | ✓ | 0.891 | 0.906 | 0.915 | 0.926