Search Results (1,510)

Search Parameters:
Keywords = medical image segmentation

36 pages, 5744 KB  
Article
Multi-Scale Atrous Feature Fusion Based on a VGG19-UNet Encoder for Brain Tumor Segmentation
by Shoffan Saifullah and Rafał Dreżewski
Appl. Sci. 2026, 16(8), 3971; https://doi.org/10.3390/app16083971 - 19 Apr 2026
Viewed by 62
Abstract
Accurate brain tumor segmentation from magnetic resonance imaging (MRI) remains challenging due to heterogeneous tumor morphology, intensity variability, and multi-scale structural complexity. This study proposes a DeepLabV3+-based segmentation framework integrating a VGG19-UNet encoder, Atrous Spatial Pyramid Pooling (ASPP), and low-level feature refinement to simultaneously capture hierarchical semantics and boundary-sensitive spatial details. The architecture enhances receptive field coverage without additional downsampling while preserving fine-grained contour information during reconstruction. Extensive evaluation was conducted on the Figshare Brain Tumor Segmentation (FBTS) dataset and the BraTS 2021 and BraTS 2018 benchmarks, focusing on Whole Tumor segmentation across multiple MRI modalities and tumor grades. Under five-fold cross-validation, the proposed model achieved a mean Dice Similarity Coefficient of 0.9717 and Jaccard Index of 0.9456 on FBTS, with stable and competitive performance across FLAIR, T1, T2, and T1CE modalities in both HGG and LGG cases. Boundary-level analysis further confirmed controlled Hausdorff Distance and low Average Symmetric Surface Distance. Statistical validation and ablation analysis demonstrate consistent improvements over baseline U-Net configurations. The proposed framework provides a robust and computationally efficient solution for automated brain tumor segmentation across heterogeneous datasets.
(This article belongs to the Special Issue Research on Artificial Intelligence in Healthcare)
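For reference, the Dice Similarity Coefficient and Jaccard Index reported above have standard set-overlap definitions. The following is a minimal NumPy sketch of those two metrics for binary masks; it illustrates the definitions only and is not the authors' evaluation code.

```python
# Minimal sketch of the two overlap metrics quoted above (Dice and Jaccard),
# using their standard definitions on binary masks; illustrative only.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A n B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Jaccard (IoU) = |A n B| / |A u B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))

if __name__ == "__main__":
    pred = np.zeros((128, 128), dtype=np.uint8)
    gt = np.zeros((128, 128), dtype=np.uint8)
    pred[30:90, 30:90] = 1
    gt[40:100, 40:100] = 1
    print(f"Dice: {dice_coefficient(pred, gt):.4f}, Jaccard: {jaccard_index(pred, gt):.4f}")
```
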
18 pages, 2187 KB  
Article
DCN-KUnet: A DCNv3-Based Backbone and KAN Bottleneck for Chromosome Segmentation
by Yufei Yang and Min Chang
Electronics 2026, 15(8), 1649; https://doi.org/10.3390/electronics15081649 - 15 Apr 2026
Viewed by 176
Abstract
Chromosome foreground segmentation is a binary semantic segmentation problem that serves as a prerequisite for overlap reasoning, contact-region inspection, and automated karyotyping. Although simpler than full instance separation in formulation, it remains difficult in metaphase imagery because chromosomes are elongated, deformable, weakly bounded, and frequently touching or partially overlapping. To address these chromosome-specific difficulties, we present DCN-KUnet as a task-oriented integration rather than a new generic segmentation family. The encoder–decoder backbone embeds DCNv3 modules to perform geometry-adaptive sampling for bending-aware and boundary-aware representation learning, while a B-spline KAN bottleneck refines the compressed semantic representation through lightweight nonlinear transformation. In addition, a hybrid objective composed of mask supervision, semantic consistency regularization, and internal feature regularization (Lcd+LSCR+LIFD) jointly constrains prediction accuracy, cross-stage semantic agreement, and feature compactness during training. Experiments on the public overlapping-chromosome dataset and on AutoKary2022 converted to binary foreground masks show that DCN-KUnet achieves stronger Dice, IoU, and HD95 with a moderate parameter budget. These results support the proposed framework as a practical and lightweight semantic foreground front-end for chromosome analysis pipelines rather than a full instance-disentanglement solution.

18 pages, 2357 KB  
Article
Foreign Body Response to Neuroimplantation: Machine Learning-Assisted Quantitative Analysis of Astrogliosis
by Anastasiia A. Melnikova, Anton A. Egorchev, Alexander A. Rosin, Leniz F. Nurullin, Nikita S. Lipachev, Daria S. Vedischeva, Dmitry V. Derzhavin, Stepan S. Perepechenov, Ekaterina A. Sukhodolova, Gleb V. Shabernev, Angelina A. Titova, Ramziya G. Kiyamova, Andrey P. Kiyasov, Dmitry E. Chickrin, Albert V. Aganov, Dmitry V. Samigullin, Irina Yu. Popova and Mikhail Paveliev
Int. J. Mol. Sci. 2026, 27(8), 3524; https://doi.org/10.3390/ijms27083524 - 15 Apr 2026
Viewed by 306
Abstract
Neuroimplants represent an emerging medical technology, offering new therapeutic approaches for severe neurological and psychiatric disorders. One of the key limitations to long-term neuroimplant performance is the foreign body response elicited by intracortical implantation. Among the contributing cell types, astrocytes play a central role in glial scar formation around the implant, which can compromise device functionality. Immunofluorescence of glial fibrillary acidic protein (GFAP) provides a well-established marker of astrogliosis (neuroinflammation), yet quantitative and reproducible assessment of astrocyte morphology remains challenging due to the complexity and variability of image analysis approaches. Here, we aimed to quantitatively assess implantation-induced astrogliosis and to determine how classifier training strategy influences segmentation outcomes and morphometric measurements. We present a machine learning-assisted pipeline based on the LabKit plugin in Fiji for segmentation and morphometric analysis of GFAP-positive astrocytes in peri-implant scar versus distant cortical regions. Using this approach, we demonstrate an increase in GFAP expression, cell area, and astrocytic process length as well as the redistribution of GFAP signal along astrocytic processes within scar regions. We show that different classifier training strategies produce systematically distinct segmentation outcomes, with rule-compliant annotation improving agreement with manually defined ground truth. These findings highlight the critical role of annotation strategy in shallow learning-based segmentation and provide a practical framework for improving reproducibility of astrocyte morphometry in studies of neuroinflammation and neuroimplant biocompatibility.
(This article belongs to the Section Molecular Informatics)

23 pages, 1399 KB  
Review
Bibliometric Analysis of Artificial Intelligence in Pediatric Radiology and Medical Imaging: A Focus on Deep Learning Applications
by Ahmad Tijjani Garba, Aminu Bashir Suleiman, Wenze Du, Ahmed Ibrahim Mahmud, Harisu Abdullahi Shehu, Huseyin Kusetogullari and Md. Haidar Sharif
Bioengineering 2026, 13(4), 461; https://doi.org/10.3390/bioengineering13040461 - 14 Apr 2026
Viewed by 365
Abstract
This study presents the first dedicated bibliometric analysis of artificial intelligence (AI) and deep learning applications in pediatric radiology and medical imaging, mapping the intellectual structure of a rapidly evolving field. A total of 2688 articles and conference proceedings published between 2005 and 2025 were retrieved from the Web of Science Core Collection and analyzed using Bibliometrix R and VOSviewer. The findings reveal exponential growth in publications, from 7 papers in 2005 to 559 in 2025, with journal articles dominating the corpus (85.9%). The most-cited contributions, led by Kermany et al. (2018) with 2886 citations, are predominantly technical feasibility studies rather than clinical outcome trials, indicating a field that has advanced methodologically but remains in early stages of clinical translation. Thematic mapping identifies convolutional neural networks, pneumonia, and transfer learning as Motor Themes representing methodological maturity in chest imaging, while neuroimaging and image segmentation clusters occupy Niche Themes, reflecting insular development with limited cross-field connectivity. Geographic analysis reveals concentrated co-authorship along US–China and US–Europe corridors, with African, Latin American, and Southeast Asian institutions largely absent from knowledge production networks. Eight of the ten most productive affiliations are North American, highlighting structural inequities that risk producing AI tools optimized for high-resource settings rather than the global pediatric population. This analysis provides an empirical foundation for reorienting the field toward clinical validation, geographic inclusion, and methodological integration across isolated research communities.

23 pages, 2799 KB  
Article
RPFeaNet: Rethinking Deep Progressive Prompt-Guided Feature Interaction Fusion Network for Medical Ultrasound Image Segmentation
by Lei Zhu and Yuqing Du
Sensors 2026, 26(8), 2394; https://doi.org/10.3390/s26082394 - 14 Apr 2026
Viewed by 275
Abstract
Although ultrasound image segmentation has advanced significantly with deep learning, existing methods still suffer from a lack of prior knowledge guidance, partly due to the low-contrast, speckle-noise-corrupted nature of images from clinical ultrasound sensors. This paper proposes a novel ultrasound segmentation framework (RPFeaNet) that extracts progressive prompts from a low-to-high level prompt generation mechanism. Furthermore, the high-level prompt-guided feature interaction module (HPGFIM) fuses progressive prompts via Mamba blocks and stage-wise condition injection. The dynamic selective-frequency decoder (DSFD) combines a dynamic selection strategy with the fusion of high-frequency details to suppress noise and refine edge details. Extensive experiments on six datasets demonstrate that RPFeaNet achieves state-of-the-art performance compared to existing methods, validating its strong generalization and robustness across diverse clinical ultrasound scenarios.
(This article belongs to the Section Biomedical Sensors)

16 pages, 1470 KB  
Article
Physics-Guided Deep Learning for Interpretable Biomedical Image Reconstruction and Pattern Recognition in Diagnostic Frameworks
by Akeel Qadir, Saad Arif, Prajoona Valsalan and Osama Khan
Bioengineering 2026, 13(4), 457; https://doi.org/10.3390/bioengineering13040457 - 13 Apr 2026
Viewed by 310
Abstract
This study introduces a physics-guided deep learning architecture designed for the simulation, reconstruction, and pattern recognition of biomedical images. By explicitly integrating physical priors into the learning model, the framework addresses the black-box nature of traditional artificial intelligence (AI). It provides an explainable AI pathway that enhances diagnostic accuracy, robustness, and clinical interpretation. The proposed framework was evaluated through systematic simulation studies. It involved complex geometric configurations, multimodal physical fields, and noise-corrupted synthetic three-dimensional brain volumes. Quantitative analysis demonstrates consistent improvements in reconstruction fidelity, with the peak signal-to-noise ratio (PSNR) reaching 47 dB and the structural similarity index exceeding 0.90 across all scenarios. Notably, at moderate noise levels (0.05), the framework maintains a PSNR greater than 32 dB, ensuring structural integrity essential for computer-aided diagnosis. Volumetric brain experiments further reveal a 38–44% reduction in activation localization errors, highlighting the framework’s utility in functional imaging and disease prognosis. By grounding deep learning in physical constraints, this study provides a transparent and robust solution for automated disease classification and advanced biomedical imaging tasks within clinical decision support systems.
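For reference, the peak signal-to-noise ratio quoted above has a standard definition. The sketch below assumes intensities normalized to a known data range and is illustrative only, not the authors' evaluation code.

```python
# Minimal sketch of PSNR = 20*log10(MAX) - 10*log10(MSE); illustrative only.
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, data_range: float = 1.0) -> float:
    """PSNR in dB between a reference volume and its reconstruction."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical volumes
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)
```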

24 pages, 6110 KB  
Article
Research on Medical Image Segmentation Based on Frequency-Domain Enhancement and Edge Awareness
by Jiamin Li, Yazhi Liu and Wei Li
Algorithms 2026, 19(4), 303; https://doi.org/10.3390/a19040303 - 12 Apr 2026
Viewed by 196
Abstract
Medical images commonly exhibit low contrast, weak boundaries, and complex textures. In addition, significant semantic differences exist between deep-level semantic features and shallow-level detail features, posing challenges for multi-scale feature fusion in terms of detail preservation and structural consistency. To address these issues, a frequency-enhanced and bidirectional feature-guided segmentation network (FBNet) is proposed. The network comprises two core components. The frequency-based enhancement (FBE) module employs the Fast Fourier Transform and applies adaptive modulation to the amplitude spectrum through a content-aware gating mechanism, enhancing detail expression and inter-structural contrast. The Bidirectional Guided Feature Fusion module (BGF) enables bidirectional interaction between shallow and deep features. Additionally, the Structure and Edge Awareness (SEA) module is constructed using directional and variance attention mechanisms to achieve collaborative optimization of structural modeling and edge perception. Experiments on four medical image segmentation datasets show that, compared to the second-best method, FBNet achieves improvements of 2.12, 1.57, 1.37, and 1.56 percentage points on the mIoU metric and 1.54, 1.11, 0.84, and 1.03 percentage points on the mDice metric.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
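To illustrate the general idea of frequency-domain amplitude modulation as described for the FBE module, here is a minimal PyTorch sketch: FFT the feature map, rescale the amplitude spectrum with a content-aware gate, and recombine it with the original phase. The gate design (a global-average-pooled 1x1 convolution with a sigmoid) is an assumption for illustration; the paper's exact module is not reproduced here.

```python
# Sketch of FFT-based amplitude modulation on a feature map; the gating design
# below is a hypothetical stand-in, not the FBE module from the paper.
import torch
import torch.nn as nn

class AmplitudeGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Content-aware gate producing one scale factor per channel (assumed design).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) real-valued feature map
        freq = torch.fft.fft2(x, norm="ortho")
        amplitude, phase = torch.abs(freq), torch.angle(freq)
        amplitude = amplitude * (1.0 + self.gate(x))        # adaptive amplitude modulation
        enhanced = torch.polar(amplitude, phase)            # rebuild spectrum with original phase
        return torch.fft.ifft2(enhanced, norm="ortho").real # back to the spatial domain

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(AmplitudeGate(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```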

19 pages, 1177 KB  
Review
Imaging Engineering and Artificial Intelligence in Urinary Stone Disease: Low-Dose Computed Tomography, Spectral Technologies, and Predictive Models
by Shota Iijima, Takanobu Utsumi, Rino Ikeda, Naoki Ishitsuka, Takahide Noro, Yuta Suzuki, Yuka Sugizaki, Takatoshi Somoto, Ryo Oka, Takumi Endo, Naoto Kamiya and Hiroyoshi Suzuki
Eng 2026, 7(4), 174; https://doi.org/10.3390/eng7040174 - 11 Apr 2026
Viewed by 357
Abstract
Urinary stone disease is common, recurrent, and increasingly managed through imaging-driven pathways, yet standard-dose CT of the kidneys, ureters, and bladder (CT KUB) raises concerns about cumulative radiation exposure and the limited use of quantitative imaging information for risk stratification. This review synthesizes contemporary evidence on dose-optimized CT, advanced spectral technologies, and artificial intelligence (AI)-enabled analytics that are reshaping diagnosis, treatment selection, and triage. This review summarizes data supporting low-dose and ultra-low-dose CT protocols that preserve diagnostic accuracy while substantially reducing dose, and discusses how dual-energy CT, photon-counting CT, and radiomics facilitate noninvasive stone characterization and extraction of imaging biomarkers beyond size and location. It also reviews AI approaches for automated detection, segmentation, and volumetric quantification across CT, KUB, and ultrasounds, highlighting their potential to standardize stone-burden metrics. It further examines predictive models, including logistic regression, nomograms, and machine learning, for perioperative infectious complications, emergency department admission or intervention, procedure success, and long-term recurrence, and outlines reporting and validation frameworks and implementation considerations, including software as a medical device regulation and human oversight. In contrast to prior reviews that consider imaging and AI separately, this review integrates dose reduction, spectral characterization, and AI-driven analytics within real-world clinical pathways to distinguish established clinical applications from those that remain investigational. Integrating advanced CT and AI outputs into well-validated prediction models embedded in real-world workflows may enable safer imaging, more consistent triage, and more personalized follow-up for urinary stone disease.

21 pages, 973 KB  
Article
ViTUNet: Vision Transformer U-Net Hybrid Model for Carious Lesions Segmentation on Bitewing Dental Images
by Vincent Majanga, Ernest Mnkandla, Ekundayo Olufisayo Sunday, Bosun Ajala and Thottempundi Sree
Appl. Sci. 2026, 16(8), 3693; https://doi.org/10.3390/app16083693 - 9 Apr 2026
Viewed by 171
Abstract
Meticulous segmentation of medical images requires capturing both local and global spatial detail. The conventional U-Net model excels at local spatial feature extraction through residual convolutional blocks but struggles to capture global features. To resolve this issue, we propose the vision transformer U-Net (ViTUNet) model framework, which combines the self-attention mechanism of the vision transformer (ViT) to capture global information while maintaining the extraction of local features via U-Net. This proposed architecture introduces vision transformers to the existing residual convolution blocks in the U-Net encoder path, thereby capturing both local and global features. The decoder path then rebuilds this information into high-quality segmentation maps with accurately highlighted boundaries/edges. This model is utilized to segment carious lesions in bitewing dental radiographs. These images are pre-processed using augmentation, morphological operations, and segmentation to identify the boundaries/edges of the regions of interest (caries/cavity). The proposed method is evaluated on an augmented dataset containing 3000 image–watershed mask pairs. It was trained on 2400 training images and tested on 600 testing images. The experimental results demonstrated significant improvements in segmentation performance, achieving 98.45% validation accuracy, 97.88% validation Dice coefficient, and 95.87% validation intersection over union (IoU) metric scores. These results are superior to those of other conventional and state-of-the-art U-Net models, thus highlighting the impact of transformer-based hybrid architectures in improving medical image segmentation tasks.
(This article belongs to the Special Issue Advances in Medical Physics and Quantitative Imaging)

21 pages, 5808 KB  
Article
Segmentation of Skin Lesions Using Deep YOLO-Family Networks: A Comparison of the Performance of Selected Models on a New Dataset
by Zbigniew Omiotek, Natalia Krukar, Aleksandra Olejarz, Piotr Lichograj, Miłosz Komada and Magda Konieczna
Electronics 2026, 15(8), 1545; https://doi.org/10.3390/electronics15081545 - 8 Apr 2026
Viewed by 395
Abstract
The aim of this study was to develop an effective and fast tool to support the automatic segmentation of skin lesions, with particular emphasis on the precise differentiation between malignant and benign lesions. In response to the problem of high false positive rates in existing CAD systems, modern neural network architectures from the YOLO family (YOLOv8, YOLOv9, YOLOv11, YOLOv12, and YOLOv26) were used in this research. The models were trained and evaluated on a new, balanced dataset (7000 images) based on the ISIC archive, where the key innovation was the introduction of a dedicated background class representing healthy skin. Through a multi-stage, rigorous optimization process, it was demonstrated that the yolov11s-seg model is highly effective for this task. It achieved a strong balance between effectiveness and processing speed, obtaining an mAP@50 score of 0.840 and an overall precision of 0.852. From a clinical perspective, the model’s high sensitivity (85.9%) in detecting the most aggressive lesion, invasive melanoma (MI), is particularly noteworthy. Thanks to its extremely short inference time (only 4.8 ms), the proposed yolov11s-seg variant overcomes the limitations of heavy hybrid architecture, providing a stable and highly efficient solution showing significant potential for deployment in real-time medical mobile applications.
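For context, segmentation variants of the YOLO family can be run through the Ultralytics Python API; a minimal sketch follows. The weights file and image path are placeholders, and this is not the authors' training or evaluation setup.

```python
# Minimal sketch of running a YOLO segmentation model via the Ultralytics API.
# The weights file and image path below are placeholders, not the paper's setup.
from ultralytics import YOLO

model = YOLO("yolo11s-seg.pt")                         # hypothetical pretrained segmentation weights
results = model.predict("lesion_example.jpg", conf=0.25)

for result in results:
    if result.masks is not None:
        print(f"{result.masks.data.shape[0]} lesion mask(s) predicted")
```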

14 pages, 1434 KB  
Data Descriptor
A Dataset of Annotated DICOM Images of Head CT Angiography for Intracranial Aneurysm Detection
by Evgenia Blagosklonova, Daria Dolotova, Natalia Polunina, Elena Grigorieva, Denis Pakhomov, Vladimir Krylov and Andrey Gavrilov
Data 2026, 11(4), 74; https://doi.org/10.3390/data11040074 - 3 Apr 2026
Viewed by 521
Abstract
Rupture of Intracranial Aneurysms (IAs) is the leading cause of non-traumatic intracranial hemorrhage. Early detection of aneurysms prior to rupture or their prompt identification in cases of intracranial hemorrhage is critical and guides treatment strategies. The development of artificial intelligence tools to automate the labor-intensive detection and analysis of IAs is an active research field, but it depends on the availability of large, well-curated datasets for robust model training, validation, and testing. Collaborative data sharing is essential for advancing this field, yet remains relatively uncommon. Here, we present a collection of 172 Computed Tomography Angiography (CTA) scan series—a widely available and commonly used modality for the diagnosis of IAs—supplemented with structured metadata. The dataset comprises 90 scans from healthy patients and 82 scans from patients with IAs of diverse shapes, sizes, and anatomical locations, annotated and validated by two experts. The annotations include 122 surface mesh models in STL format. This openly accessible dataset is intended to support the development of automated segmentation or classification tools, medical image analysis, and assessment of disease progression risks through morphometric and hemodynamic evaluations.

18 pages, 3975 KB  
Technical Note
SAS-SemiUNet++: A Stochastic Consistency Regularized Framework with Scale-Aware Semantic Recalibration for Cardiac MRI Segmentation
by Jie Rao, Xinhao Ma and Xiang Li
Appl. Sci. 2026, 16(7), 3507; https://doi.org/10.3390/app16073507 - 3 Apr 2026
Viewed by 290
Abstract
Precise segmentation of cardiac substructures in magnetic resonance imaging is pivotal for diagnosis and treatment planning but remains impeded by anatomical scale heterogeneity and the scarcity of high-quality pixel-level annotations. Existing deep learning paradigms often struggle to simultaneously resolve the global geometry of ventricular cavities and the fine-grained boundaries of the myocardium, particularly in low-data regimes. To address these challenges, we propose SAS-SemiUNet++, a holistic semi-supervised segmentation framework. This architecture incorporates two novel mechanisms: (1) The Scale-Aware Semantic Recalibration (SASR) unit, which functions as a dynamic semantic gate to adaptively adjust receptive fields, mimicking a radiologist’s variable-focus mechanism to capture multi-scale anatomical details, and (2) Stochastic Consistency Regularization (SCR), a dual-path perturbation strategy that enforces geometric invariance on unlabeled data, thereby mitigating overfitting to noisy pseudo-labels. Comprehensive evaluations on the ACDC benchmark demonstrate that SAS-SemiUNet++ significantly outperforms state-of-the-art methods, achieving superior segmentation accuracy and boundary fidelity, particularly in reducing the 95% Hausdorff distance. This study presents a data-efficient and robust solution for cardiac image analysis, offering potential for scalable clinical deployment.
(This article belongs to the Special Issue Cardiac Imaging and Heart Diseases: Recent Progress)
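For reference, the 95% Hausdorff distance mentioned above can be computed from the foreground voxels of two binary masks. The sketch below is a simplified version that assumes unit isotropic voxel spacing and works on volume voxels rather than extracted surfaces; it is not the authors' evaluation code.

```python
# Simplified HD95 sketch between two binary masks using KD-tree nearest-neighbour
# distances; assumes unit isotropic spacing and is illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def hd95(pred: np.ndarray, target: np.ndarray) -> float:
    p = np.argwhere(pred.astype(bool))   # foreground voxel coordinates of the prediction
    t = np.argwhere(target.astype(bool)) # foreground voxel coordinates of the ground truth
    if len(p) == 0 or len(t) == 0:
        return float("inf")
    d_pt, _ = cKDTree(t).query(p)        # distance of each predicted voxel to the nearest target voxel
    d_tp, _ = cKDTree(p).query(t)        # and vice versa
    return float(np.percentile(np.hstack([d_pt, d_tp]), 95))
```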

12 pages, 869 KB  
Article
Fraction Conversion Models Based on Ultrasound Attenuation Coefficient for Assessing Liver Steatosis
by Yin Zhang, Ting Jiang, Chuli Xu, Jiajun He, Hongjun Zhang, Tufeng Chen and Jie Zeng
Diagnostics 2026, 16(7), 1086; https://doi.org/10.3390/diagnostics16071086 - 3 Apr 2026
Viewed by 295
Abstract
Objectives: We aimed to develop models capable of converting the attenuation coefficient (AC) into a percentage-like index in patients with suspected metabolic dysfunction-associated steatotic liver disease (MASLD). Methods: In this retrospective, cross-sectional study, we consecutively enrolled participants with suspected MASLD from the Weight Loss Medical Centre who had undergone both ultrasound examinations that yielded AC results and magnetic resonance imaging (MRI) scans including proton density fat fraction (PDFF). The first model, defined as the PDFF conversion fraction (PCF), used the MRI-PDFF results as the reference standard. The other model, defined as the attenuation level fraction (ALF), converted AC values into percentages based on the range of AC values from 0.5 to 1.0 dB/cm/MHz. Area under the receiver operating characteristic curve (AUC) analysis was used to evaluate the diagnostic performance of the two models. Results: Among the 199 participants (mean age, 38.12 ± 9.56 years; 110 male), the PDFF values differed significantly among the different liver segments (p < 0.05). The PDFF values of the left liver and right liver were 12.6% and 16.1%, respectively. There was a significant difference between them (p < 0.05). The AUCs of the AC, PCF, and ALF were 0.92, 0.93, and 0.87, respectively, for detecting mild steatosis (≥S1), moderate steatosis (≥S2), and severe steatosis (≥S3) when PDFF values ≥5%, ≥15%, and ≥25% were used as the reference standard, respectively. Conclusions: The two fraction conversion models (PCF and ALF) yielded good and identical diagnostic accuracies in grading liver steatosis. Considering the heterogeneous pattern of liver steatosis, the ALF was a more objective parameter.
(This article belongs to the Section Medical Imaging and Theranostics)
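For illustration, the attenuation level fraction (ALF) described above maps AC values onto a percentage scale over the stated range of 0.5 to 1.0 dB/cm/MHz. The sketch below assumes a simple linear rescaling with clipping; the abstract does not give the authors' exact conversion formula.

```python
# Assumed linear mapping of an attenuation coefficient (AC, dB/cm/MHz) onto a
# 0-100% index, in the spirit of the ALF described above; not the paper's formula.
def attenuation_level_fraction(ac_db_cm_mhz: float, ac_min: float = 0.5, ac_max: float = 1.0) -> float:
    """Map an AC value to a percentage-like index clipped to [0, 100]."""
    fraction = (ac_db_cm_mhz - ac_min) / (ac_max - ac_min) * 100.0
    return min(max(fraction, 0.0), 100.0)

# Example: an AC of 0.75 dB/cm/MHz maps to 50% under this assumed linear scaling.
print(attenuation_level_fraction(0.75))  # 50.0
```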

17 pages, 1826 KB  
Review
Integrating AI Segmentation, Simulated Digital Twins, and Extended Reality into Medical Education: A Narrative Technical Review and Proof-of-Concept Case Study
by Parhesh Kumar, Ingharan Siddarthan, Catharine Kelsh Keim, Daniel K. Cho, John E. Rubin, Robert S. White and Rohan Jotwani
J. Pers. Med. 2026, 16(4), 202; https://doi.org/10.3390/jpm16040202 - 3 Apr 2026
Viewed by 545
Abstract
Background/Objectives: Simulated digital twin (DT) models that integrate patient-specific imaging with artificial intelligence (AI)-based segmentation and extended reality (XR) technologies are rapidly increasing in relevance in personalized medicine. While their clinical applications are expanding, their role as reusable educational tools and the technical pipeline utilized for their development remain incompletely characterized. This narrative review examines current approaches to digital twin creation and XR integration, illustrated by a scoliosis-specific proof-of-concept educational case study. Methods: A narrative technical review was conducted by identifying relevant search keywords within the fields of AI-based image segmentation, extended reality in medicine, and medical education based on the authors’ expertise and familiarity with the subject. PubMed, Google Scholar, and Scopus were searched for English-language studies published primarily between 2015 and 2025 addressing patient-specific three-dimensional modeling, AI-driven segmentation, and XR applications in spine, orthopedic, anesthesiology, and interventional care. A de-identified case of scoliosis is used to present a proof-of-concept example of this process of creating a simulated digital twin for the purpose of medical education in a recorded XR format. Results: Prior studies demonstrated benefits of patient-specific 3D models for anatomical understanding and procedural planning, while highlighting limitations in segmentation accuracy and workflow integration. Nevertheless, while DTs have traditionally served clinical roles in surgical planning or pre-procedural rehearsal, their pedagogical potential remains under-explored. In the proof-of-concept case study, AI-assisted segmentation enabled rapid creation of an anatomically detailed scoliosis digital twin that was incorporated into XR and used to produce a reusable, spatially anchored instructional experience focused on neuraxial access. Conclusions: AI-enabled digital twin models integrated with XR represent a promising approach for personalized, anatomy-driven medical education. Further evaluation is needed to assess educational outcomes, scalability, and integration into clinical training workflows.

17 pages, 9817 KB  
Article
SegMed: An Open-Source Desktop Tool for Deploying Pretrained Deep Learning Models in 3D Medical Image Segmentation
by Mhd Jafar Mortada, Agnese Sbrollini, Klaudia Proniewska-van Dam, Peter M. Van Dam and Laura Burattini
Appl. Sci. 2026, 16(7), 3490; https://doi.org/10.3390/app16073490 - 3 Apr 2026
Viewed by 409
Abstract
Deep learning has become central to semantic segmentation of three-dimensional medical images. However—despite many published models—their adoption in practice remains limited, as deployment often requires advanced programming skills and familiarity with specific machine learning frameworks. Thus, technical barriers restrict its use to specialized users. To address this, we present SegMed (version 1.0), an open-source, standalone desktop application that provides an end-to-end workflow for deep learning-based medical image segmentation. SegMed supports the loading and inspection of common medical image formats, as well as array-based formats. The application integrates standard preprocessing operations often used in the field and directly supports loading of pretrained segmentation models implemented in both PyTorch (version 2.X) and Keras (version 2.X) and those created using the Medical Open Network for AI framework (version 1.X). Models are automatically inspected to infer required configurations, such as input size and post-processing steps, enabling segmentation with minimal user intervention. Results can be exported as volumetric images or 3D surface meshes for downstream analysis, visualization, or special applications such as virtual reality. SegMed was tested using multiple publicly available pretrained models, demonstrating robustness and flexibility across diverse segmentation tasks. By abstracting low-level implementation details, SegMed lowers technical barriers, promotes reproducibility, and facilitates the integration of AI-assisted segmentation into medical imaging workflows.
(This article belongs to the Special Issue Medical Image Processing, Reconstruction, and Visualization)
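For context, the kind of end-to-end workflow SegMed wraps in a desktop interface can be sketched in a few lines of Python: load a volume, normalize it, run a pretrained model, and export the mask. The file names and TorchScript model below are placeholders; this is a generic illustration, not SegMed's implementation.

```python
# Generic 3D segmentation inference sketch (load volume, normalize, predict, save);
# file names and the exported model are hypothetical placeholders.
import nibabel as nib
import numpy as np
import torch

volume = nib.load("scan.nii.gz")                           # hypothetical input volume
data = volume.get_fdata().astype(np.float32)
data = (data - data.mean()) / (data.std() + 1e-8)          # simple intensity normalization

model = torch.jit.load("pretrained_segmenter.pt")          # hypothetical exported model
model.eval()
with torch.no_grad():
    logits = model(torch.from_numpy(data)[None, None])     # (1, 1, D, H, W) input
    mask = (torch.sigmoid(logits) > 0.5).squeeze().numpy().astype(np.uint8)

nib.save(nib.Nifti1Image(mask, volume.affine), "segmentation.nii.gz")
```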