Article

Enhanced Liver and Tumor Segmentation Using a Self-Supervised Swin-Transformer-Based Framework with Multitask Learning and Attention Mechanisms

1 Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu 610209, China
2 The School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3985; https://doi.org/10.3390/app15073985
Submission received: 13 February 2025 / Revised: 1 April 2025 / Accepted: 2 April 2025 / Published: 4 April 2025

Abstract

Automatic liver and tumor segmentation in contrast-enhanced magnetic resonance imaging (CE-MRI) images is of great value in clinical practice, as it can reduce surgeons’ workload and increase the probability of surgical success. However, this remains a challenging task due to the complex background, irregular shapes, and low contrast between the organ and lesion. In addition, the size, number, shape, and spatial location of liver tumors vary from person to person, and existing automatic segmentation models are unable to achieve satisfactory results. In this work, drawing inspiration from self-attention mechanisms and multitask learning, we propose a segmentation network that leverages Swin-Transformer as the backbone, incorporating self-supervised learning strategies to enhance performance. Accurately delineating the boundaries and spatial location of liver tumors is the greatest challenge. To address this, we propose a multitask learning strategy based on segmentation and signed distance map (SDM) regression, incorporating an attention gate into the skip connections. This strategy performs liver tumor segmentation and SDM regression simultaneously. The SDM regression branch effectively improves detection and segmentation performance for small objects, since it imposes additional shape and global constraints on the network. We performed comprehensive quantitative and qualitative evaluations of our approach. The proposed model outperforms existing state-of-the-art models in terms of the DSC, 95HD, and ASD metrics. This research provides a valuable solution that lessens the burden on surgeons and improves the chances of successful surgeries.
Keywords: liver cancer; segmentation; transformer; self-supervised learning; multitask learning; attention mechanism
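The multitask strategy the abstract describes, a segmentation branch trained jointly with an SDM regression branch, can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the paper's implementation: the brute-force signed distance map, the Dice loss formulation, and the weighting factor `lam` are all assumptions introduced here for clarity.

```python
import numpy as np

def signed_distance_map(mask):
    """Brute-force SDM: negative inside the object, positive outside.
    O(n^2) over pixels -- fine for a small illustrative grid."""
    fg = np.argwhere(mask > 0)   # foreground pixel coordinates
    bg = np.argwhere(mask == 0)  # background pixel coordinates
    sdm = np.zeros(mask.shape, dtype=float)
    for p in np.ndindex(mask.shape):
        if mask[p] > 0:
            # inside: negated distance to the nearest background pixel
            sdm[p] = -np.min(np.linalg.norm(bg - np.array(p), axis=1))
        else:
            # outside: distance to the nearest foreground pixel
            sdm[p] = np.min(np.linalg.norm(fg - np.array(p), axis=1))
    return sdm

def dice_loss(pred, gt, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = (pred * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def multitask_loss(seg_pred, sdm_pred, gt_mask, lam=0.5):
    """Joint objective: Dice on the segmentation branch plus a weighted
    MSE on the SDM regression branch (lam is an assumed hyperparameter)."""
    gt_sdm = signed_distance_map(gt_mask)
    return dice_loss(seg_pred, gt_mask) + lam * np.mean((sdm_pred - gt_sdm) ** 2)
```

The SDM term is what gives the network its extra shape and global constraints: unlike the per-pixel segmentation loss, every output pixel of the regression branch depends on the distance to the object boundary, so errors far from the lesion are still penalized in proportion to their geometric inconsistency.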

Share and Cite

MDPI and ACS Style

Chen, Z.; Dou, M.; Luo, X.; Yao, Y. Enhanced Liver and Tumor Segmentation Using a Self-Supervised Swin-Transformer-Based Framework with Multitask Learning and Attention Mechanisms. Appl. Sci. 2025, 15, 3985. https://doi.org/10.3390/app15073985


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
