Image Segmentation Techniques: Current Status and Future Directions (2nd Edition)

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: 31 March 2025

Special Issue Editors


Dr. Xiaohao Cai
Guest Editor
School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
Interests: computer vision; image processing; machine/deep learning; scientific computing

Prof. Dr. Gaohang Yu
Guest Editor
Department of Mathematics, Hangzhou Dianzi University, Hangzhou 310018, China
Interests: image processing; optimization; tensor analysis; computing

Special Issue Information

Dear Colleagues,

Image segmentation is a fundamental and persistently challenging task in fields such as image processing and computer vision. Briefly, it is the process of assigning a label to every pixel in an image according to characteristics such as intensity, biometrics, and semantics. It is generally a prerequisite for, and plays a key role in, ubiquitous practical applications such as machine vision, medical imaging, detection, recognition, and autonomous driving. Researchers are increasing their efforts to develop new segmentation techniques based on, e.g., mathematical/statistical models, biometrics, and machine learning via deep neural networks, in order to tackle existing and upcoming challenges.

This Special Issue aims to gather innovative research on image segmentation techniques, ranging from the current status to future directions and from hand-crafted techniques to deep learning. We also welcome submissions on applications including, but not limited to, digital imaging, medical imaging, object detection, and recognition tasks.

Dr. Xiaohao Cai
Prof. Dr. Gaohang Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image segmentation
  • image processing
  • classification
  • recognition
  • variational regularization algorithms
  • neural networks
  • machine learning
  • deep learning
  • digital imaging
  • medical imaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the journal's website.

Published Papers (4 papers)


Research

21 pages, 6639 KiB  
Article
Efficient Generative-Adversarial U-Net for Multi-Organ Medical Image Segmentation
by Haoran Wang, Gengshen Wu and Yi Liu
J. Imaging 2025, 11(1), 19; https://doi.org/10.3390/jimaging11010019 - 12 Jan 2025
Abstract
Manual labeling of lesions in medical image analysis presents a significant challenge due to its labor-intensive and inefficient nature, which ultimately strains essential medical resources and impedes the advancement of computer-aided diagnosis. This paper introduces a novel medical image-segmentation framework named Efficient Generative-Adversarial U-Net (EGAUNet), designed to facilitate rapid and accurate multi-organ labeling. To enhance the model’s capability to comprehend spatial information, we propose the Global Spatial-Channel Attention Mechanism (GSCA). This mechanism enables the model to concentrate more effectively on regions of interest. Additionally, we have integrated Efficient Mapping Convolutional Blocks (EMCB) into the feature-learning process, allowing for the extraction of multi-scale spatial information and the adjustment of feature map channels through optimized weight values. Moreover, the proposed framework progressively enhances its performance by utilizing a generative-adversarial learning strategy, which contributes to improvements in segmentation accuracy. Consequently, EGAUNet demonstrates exemplary segmentation performance on public multi-organ datasets while maintaining high efficiency. For instance, in evaluations on the CHAOS T2SPIR dataset, EGAUNet achieves approximately 2% higher performance on the Jaccard metric, 1% higher on the Dice metric, and nearly 3% higher on the precision metric in comparison to advanced networks such as Swin-Unet and TransUnet.
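The abstract above compares models on the Jaccard, Dice, and precision metrics. For readers unfamiliar with these overlap measures, they can be computed from binary segmentation masks as follows (a minimal NumPy sketch for illustration; this is not the authors' code):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Overlap metrics for binary segmentation masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # correctly predicted pixels
    fp = np.logical_and(pred, ~target).sum()  # predicted but not in ground truth
    fn = np.logical_and(~pred, target).sum()  # ground truth missed by prediction
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return {"dice": dice, "jaccard": jaccard, "precision": precision}

# Toy example: tp = 2, fp = 1, fn = 1
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
m = segmentation_metrics(pred, target)
# dice = 4/6 ≈ 0.667, jaccard = 2/4 = 0.5, precision = 2/3 ≈ 0.667
```

Note that Dice and Jaccard are monotonically related (Dice = 2J/(1+J)), which is why papers often report gains on both.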

17 pages, 2421 KiB  
Article
Exploring Multi-Pathology Brain Segmentation: From Volume-Based to Component-Based Deep Learning Analysis
by Ioannis Stathopoulos, Roman Stoklasa, Maria Anthi Kouri, Georgios Velonakis, Efstratios Karavasilis, Efstathios Efstathopoulos and Luigi Serio
J. Imaging 2025, 11(1), 6; https://doi.org/10.3390/jimaging11010006 - 31 Dec 2024
Abstract
Detection and segmentation of brain abnormalities using Magnetic Resonance Imaging (MRI) is an important task for which the role of AI algorithms as supporting tools is now well established at both the research and clinical-production levels. While the performance of state-of-the-art models continues to improve, in many cases reaching the accuracy levels of radiologists and other experts, much research is still needed on the in-depth and transparent evaluation of correct results and failures, especially in relation to important aspects of radiological practice: abnormality position, intensity level, and volume. In this work, we focus on the analysis of the segmentation results of a pre-trained U-net model trained and validated on brain MRI examinations containing four different pathologies: tumors, strokes, Multiple Sclerosis (MS), and White Matter Hyperintensities (WMH). We present the segmentation results both for the whole abnormal volume and for each abnormal component within the examinations of the validation set. In the first case, a Dice similarity coefficient (DSC), sensitivity, and precision of 0.76, 0.78, and 0.82, respectively, were found; in the second case, the model detected and segmented correctly (true positives) 48.8% of the abnormal components (DSC ≥ 0.5), partially correctly 27.1% (0.05 < DSC < 0.5), and missed (false negatives) 24.1%, while producing 25.1% false positives. Finally, we present an extended analysis of the true positives, false negatives, and false positives versus their position inside the brain, their intensity in three MRI modalities (FLAIR, T2, and T1ce), and their volume.
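The component-based evaluation described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes components are extracted by connected-component labeling and that each ground-truth component is scored against the union of predicted components that touch it, with the DSC thresholds taken from the abstract.

```python
import numpy as np
from scipy import ndimage

def component_bins(pred, target, tp_thr=0.5, partial_thr=0.05):
    """Bin each ground-truth component by its Dice overlap with the
    predicted components that touch it (thresholds from the abstract)."""
    gt_labels, n_gt = ndimage.label(target)
    pred_labels, n_pred = ndimage.label(pred)
    counts = {"true_positive": 0, "partial": 0, "false_negative": 0}
    matched_pred = set()
    for i in range(1, n_gt + 1):
        comp = gt_labels == i
        # Union of predicted components overlapping this ground-truth component
        touching = np.unique(pred_labels[comp])
        touching = touching[touching > 0]
        matched_pred.update(touching.tolist())
        match = np.isin(pred_labels, touching)
        inter = np.logical_and(match, comp).sum()
        denom = match.sum() + comp.sum()
        d = 2 * inter / denom if denom else 0.0
        if d >= tp_thr:
            counts["true_positive"] += 1
        elif d > partial_thr:
            counts["partial"] += 1
        else:
            counts["false_negative"] += 1
    # Predicted components that touched no ground-truth component
    counts["false_positive"] = n_pred - len(matched_pred)
    return counts

# Toy example: one perfectly matched lesion, one missed, one spurious
target = np.array([[1, 1, 0, 0, 0],
                   [0, 0, 0, 0, 0],
                   [0, 0, 0, 1, 0]])
pred = np.array([[1, 1, 0, 0, 0],
                 [0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 1]])
counts = component_bins(pred.astype(bool), target.astype(bool))
```

Component-level counting of this kind complements volume-level DSC: a model can score well on the whole volume while still missing many small lesions.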

13 pages, 6526 KiB  
Article
Towards Robust Supervised Pectoral Muscle Segmentation in Mammography Images
by Parvaneh Aliniya, Mircea Nicolescu, Monica Nicolescu and George Bebis
J. Imaging 2024, 10(12), 331; https://doi.org/10.3390/jimaging10120331 - 22 Dec 2024
Abstract
Mammography images are the most commonly used tool for breast cancer screening. The presence of pectoral muscle in images for the mediolateral oblique view makes designing a robust automated breast cancer detection system more challenging. Most of the current methods for removing the pectoral muscle are based on traditional machine learning approaches. This is partly due to the lack of segmentation masks of pectoral muscle in available datasets. In this paper, we provide the segmentation masks of the pectoral muscle for the INbreast, MIAS, and CBIS-DDSM datasets, which will enable the development of supervised methods and the utilization of deep learning. Training deep learning-based models using segmentation masks will also be a powerful tool for removing pectoral muscle for unseen data. To test the validity of this idea, we trained AU-Net separately on the INbreast and CBIS-DDSM for the segmentation of the pectoral muscle. We used cross-dataset testing to evaluate the performance of the models on an unseen dataset. In addition, the models were tested on all of the images in the MIAS dataset. The experimental results show that cross-dataset testing achieves a comparable performance to the same-dataset experiments.

16 pages, 5582 KiB  
Article
Evaluating Brain Tumor Detection with Deep Learning Convolutional Neural Networks Across Multiple MRI Modalities
by Ioannis Stathopoulos, Luigi Serio, Efstratios Karavasilis, Maria Anthi Kouri, Georgios Velonakis, Nikolaos Kelekis and Efstathios Efstathopoulos
J. Imaging 2024, 10(12), 296; https://doi.org/10.3390/jimaging10120296 - 21 Nov 2024
Abstract
Central Nervous System (CNS) tumors represent a significant public health concern due to their high morbidity and mortality rates. Magnetic Resonance Imaging (MRI) has emerged as a critical non-invasive modality for the detection, diagnosis, and management of brain tumors, offering high-resolution visualization of anatomical structures. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), have shown potential in augmenting MRI-based diagnostic accuracy for brain tumor detection. In this study, we evaluate the diagnostic performance of six fundamental MRI sequences in detecting tumor-involved brain slices using four distinct CNN architectures enhanced with transfer learning techniques. Our dataset comprises 1646 MRI slices from the examinations of 62 patients, encompassing both tumor-bearing and normal findings. With our approach, we achieved a classification accuracy of 98.6%, underscoring the high potential of CNN-based models in this context. Additionally, we assessed the performance of each MRI sequence across the different CNN models, identifying optimal combinations of MRI modalities and neural networks to meet radiologists’ screening requirements effectively. This study offers critical insights into the integration of deep learning with MRI for brain tumor detection, with implications for improving diagnostic workflows in clinical settings.
