Journal Description
Journal of Imaging
Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques published online monthly by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), PubMed, PMC, dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Imaging Science and Photographic Technology) / CiteScore - Q1 (Radiology, Nuclear Medicine and Imaging)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15.3 days after submission; acceptance to publication takes 3.5 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.3 (2024)
5-Year Impact Factor: 3.3 (2024)
Latest Articles
A Comprehensive Evaluation of Thigh Mineral-Free Lean Mass Measures Using Dual Energy X-Ray Absorptiometry (DXA) in Young Children
J. Imaging 2025, 11(11), 374; https://doi.org/10.3390/jimaging11110374 - 25 Oct 2025
Abstract
Purpose: This study aimed to (1) demonstrate the intra- and interrater reliability of quadriceps (QUADS) and hamstring (HAMS) mineral-free lean (MFL) mass measures using DXA scanning, (2) determine the association of total thigh MFL mass measures with MFL mass measures of the hamstrings and quadriceps combined and (3) analyze the association between total thigh MFL mass and total body MFL mass measures. Methods: A total of 80 young children (aged 5 to 11 yrs) participated and unique regions of interest were created using custom analysis software with manual tracing of the QUADS, HAMS, and total thigh MFL mass measures. Repeated-measure analysis of variance was used to determine if there were significant differences among the MFL measures while intraclass correlation coefficients (ICC), coefficients of variation (CV), and regression analysis were used to determine the intra- and interrater reliability and the explained variance in the association among MFL mass measures. Results: The right interrater QUADS MFL mass was the only significant group mean difference, and ICCs between (≥0.961) and within (≥0.919) raters were high for all MFL measures with low variation across all MFL measures (≤6.13%). The explained variance was 92.5% and 96.3% for the between-investigator analyses of the right and left total thigh MFL mass measures, respectively. Furthermore, 97.5% of the variance in total body MFL mass was explained by the total thigh MFL mass. Conclusions: DXA MFL mass measures of the QUADS, HAMS and total thigh can be confidently used in young children and may provide an alternative to CT or MRI scanning when assessing changes in MFL cross-sectional area or volume measures due to disease progression, training and rehabilitative strategies.
Full article
(This article belongs to the Section Medical Imaging)
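As a rough illustration of the reliability statistics this abstract reports, the sketch below computes a between-rater coefficient of variation and an explained-variance (R²) value on toy data; the arrays are hypothetical, and the study's ICC models are not reproduced here.

```python
# Sketch of two reliability metrics named in the abstract (assumed formulas:
# coefficient of variation and regression R^2); rater data are hypothetical.
import numpy as np

rater_a = np.array([1.42, 1.55, 1.38, 1.61, 1.49])  # thigh MFL mass (kg), rater A
rater_b = np.array([1.40, 1.58, 1.36, 1.63, 1.47])  # same scans, rater B

# Coefficient of variation between raters, per subject, averaged (%)
pair = np.stack([rater_a, rater_b])
cv = np.mean(pair.std(axis=0, ddof=1) / pair.mean(axis=0)) * 100

# Explained variance (R^2) of a simple linear regression of B on A
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"CV = {cv:.2f}%, explained variance R^2 = {r**2:.3f}")
```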
Open Access Correction
Correction: El Othmani et al. AI-driven Automated Blood Cell Anomaly Detection: Enhancing Diagnostics and Telehealth in Hematology. J. Imaging 2025, 11, 157
by
Oussama El Othmani, Amine Mosbah, Aymen Yahyaoui, Amina Bouatay and Raouf Dhaouadi
J. Imaging 2025, 11(11), 373; https://doi.org/10.3390/jimaging11110373 - 24 Oct 2025
Abstract
The authors wish to make the following corrections to the published paper [...]
Full article
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)
Open Access Article
Redefining MRI-Based Skull Segmentation Through AI-Driven Multimodal Integration
by
Michel Beyer, Alexander Aigner, Alexandru Burde, Alexander Brasse, Sead Abazi, Lukas B. Seifert, Jakob Wasserthal, Martin Segeroth, Mohamed Omar and Florian M. Thieringer
J. Imaging 2025, 11(11), 372; https://doi.org/10.3390/jimaging11110372 - 22 Oct 2025
Abstract
Skull segmentation in magnetic resonance imaging (MRI) is essential for cranio-maxillofacial (CMF) surgery planning, yet manual approaches are time-consuming and error-prone. Computed tomography (CT) provides superior bone contrast but exposes patients to ionizing radiation, which is particularly concerning in pediatric care. This study presents an AI-based workflow that enables skull segmentation directly from routine MRI. Using 186 paired CT–MRI datasets, CT-based segmentations were transferred to MRI via multimodal registration to train dedicated deep learning models. Performance was evaluated against manually segmented CT ground truth using Dice Similarity Coefficient (DSC), Mean Surface Distance (MSD), and Hausdorff Distance (HD). AI achieved higher performance on CT (DSC 0.981) than MRI (DSC 0.864), with MSD and HD also favoring CT. Despite lower absolute accuracy on MRI, the approach substantially improved segmentation quality compared with manual MRI methods, particularly in clinically relevant regions. This automated method enables accurate skull modeling from standard MRI without radiation exposure or specialized sequences. While CT remains more precise, the presented framework enhances MRI utility in surgical planning, reduces manual workload, and supports safer, patient-specific treatment, especially for pediatric and trauma cases.
Full article
(This article belongs to the Section AI in Imaging)
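For readers unfamiliar with the reported metrics, here is a minimal sketch of the Dice Similarity Coefficient and Hausdorff Distance on toy binary masks; it is not the authors' evaluation pipeline.

```python
# Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD) on toy masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[22:42, 22:42] = True

dsc = 2 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

# Symmetric Hausdorff distance over the two point sets
p, t = np.argwhere(pred), np.argwhere(truth)
hd = max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
print(f"DSC = {dsc:.3f}, HD = {hd:.1f} px")
```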
Open Access Review
Current Trends and Future Opportunities of AI-Based Analysis in Mesenchymal Stem Cell Imaging: A Scoping Review
by
Maksim Solopov, Elizaveta Chechekhina, Viktor Turchin, Andrey Popandopulo, Dmitry Filimonov, Anzhelika Burtseva and Roman Ishchenko
J. Imaging 2025, 11(10), 371; https://doi.org/10.3390/jimaging11100371 - 18 Oct 2025
Abstract
This scoping review explores the application of artificial intelligence (AI) methods for analyzing mesenchymal stem cell (MSC) images. The aim of this study was to identify key areas where AI-based image processing techniques are utilized for MSC analysis, assess their effectiveness, and highlight existing challenges. A total of 25 studies published between 2014 and 2024 were selected from six databases (PubMed, Dimensions, Scopus, Google Scholar, eLibrary, and Cochrane) for this review. The findings demonstrate that machine learning algorithms outperform traditional methods in terms of accuracy (up to 97.5%), processing speed, and noninvasive capabilities. Among AI methods, convolutional neural networks (CNNs) are the most widely employed, accounting for 64% of the studies reviewed. The primary applications of AI in MSC image analysis include cell classification (20%), segmentation and counting (20%), differentiation assessment (32%), senescence analysis (12%), and other tasks (16%). The advantages of AI methods include automation of image analysis, elimination of subjective biases, and dynamic monitoring of live cells without the need for fixation and staining. However, significant challenges persist, such as the high heterogeneity of MSC populations, the absence of standardized protocols for AI implementation, and the limited availability of annotated datasets. To advance this field, future efforts should focus on developing interpretable and multimodal AI models, creating standardized validation frameworks and open-access datasets, and establishing clear regulatory pathways for clinical translation. Addressing these challenges is crucial for accelerating the adoption of AI in MSC biomanufacturing and enhancing the efficacy of cell therapies.
Full article
(This article belongs to the Special Issue Deep Learning in Biomedical Image Segmentation and Classification: Advancements, Challenges and Applications, 2nd Edition)
Open Access Article
Federated Self-Supervised Few-Shot Face Recognition
by
Nursultan Makhanov, Beibut Amirgaliyev, Talgat Islamgozhayev and Didar Yedilkhan
J. Imaging 2025, 11(10), 370; https://doi.org/10.3390/jimaging11100370 - 18 Oct 2025
Abstract
This paper presents a systematic framework that combines federated learning, self-supervised learning, and few-shot learning paradigms for privacy-preserving face recognition. We use the large-scale CASIA-WebFace dataset for self-supervised pre-training using SimCLR in a federated setting, followed by federated few-shot fine-tuning on the LFW dataset using prototypical networks. Through comprehensive evaluation across six state-of-the-art architectures (ResNet, DenseNet, MobileViT, ViT-Small, CvT, and CoAtNet), we demonstrate that while our federated approach successfully preserves data privacy, it comes with significant performance trade-offs. Our results show 12–30% accuracy degradation compared to centralized methods, representing the substantial cost of privacy preservation. We find that traditional CNNs show superior robustness to federated constraints compared to transformer-based architectures, and that five-shot configurations provide an optimal balance between data efficiency and performance. This work provides important empirical insights and establishes benchmarks for federated few-shot face recognition, quantifying the privacy–utility trade-offs that practitioners must consider when deploying such systems in real-world applications.
Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
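The prototypical-network step mentioned above admits a compact sketch: class prototypes are the mean support embeddings, and queries are assigned to the nearest prototype. Shapes and the 5-way/5-shot setup below are illustrative, not the paper's exact configuration.

```python
# Prototypical-network classification step (standard formulation).
import torch

def proto_classify(support, support_labels, query, n_classes):
    # support: (n_support, d) embeddings; query: (n_query, d) embeddings
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                       # (n_classes, d)
    dists = torch.cdist(query, prototypes)   # Euclidean distance to prototypes
    return dists.argmin(dim=1)               # predicted class per query

emb = torch.randn(25, 128)                   # 5-way, 5-shot support embeddings
labels = torch.arange(5).repeat_interleave(5)
preds = proto_classify(emb, labels, torch.randn(10, 128), n_classes=5)
```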
Open Access Article
Preclinical Application of Computer-Aided High-Frequency Ultrasound (HFUS) Imaging: A Preliminary Report on the In Vivo Characterization of Hepatic Steatosis Progression in Mouse Models
by
Sara Gargiulo, Matteo Gramanzini, Denise Bonente, Tiziana Tamborrino, Giovanni Inzalaco, Lisa Gherardini, Lorenzo Franci, Eugenio Bertelli, Virginia Barone and Mario Chiariello
J. Imaging 2025, 11(10), 369; https://doi.org/10.3390/jimaging11100369 - 17 Oct 2025
Abstract
Metabolic dysfunction-associated steatotic liver disease (MASLD) is one of the most common chronic liver disorders worldwide and can lead to inflammation, fibrosis, and liver cancer. To better understand the impact of an unbalanced hypercaloric diet on liver phenotype in impaired autophagy, the study compared C57BL/6J wild type (WT) and MAPK15-ERK8 knockout (KO) male mice with C57BL/6J background fed for 17 weeks with “Western-type” (WD) or standard diet (SD). Liver features were monitored in vivo by high-frequency ultrasound (HFUS) using a semi-quantitative and parametric assessment of pathological changes in the parenchyma complemented by computer-aided diagnosis (CAD) methods. Liver histology was considered the reference standard. WD induced liver steatosis in both genotypes, although KO mice showed more pronounced dietary effects than WT mice. Overall, HFUS reliably detected steatosis-related parenchymal changes over time in the two mouse genotypes examined, consistent with histology. Furthermore, this study demonstrated the feasibility of extracting quantitative features from conventional B-mode ultrasound images of the liver in murine models at early clinical stages of MASLD using a computationally efficient and vendor-independent CAD method. This approach may contribute to the non-invasive characterization of genetically engineered mouse models of MASLD according to the principles of replacement, reduction, and refinement (3Rs), with interesting translational implications.
Full article
(This article belongs to the Special Issue Translational Preclinical Imaging: Techniques, Applications and Perspectives)
Open Access Article
Unsupervised Segmentation of Bolus and Residue in Videofluoroscopy Swallowing Studies
by
Farnaz Khodami, Mehdy Dousty, James L. Coyle and Ervin Sejdić
J. Imaging 2025, 11(10), 368; https://doi.org/10.3390/jimaging11100368 - 17 Oct 2025
Abstract
Bolus tracking is a critical component of swallowing analysis, as the speed, course, and integrity of bolus movement from the mouth to the stomach, along with the presence of residue, serve as key indicators of potential abnormalities. Existing machine learning approaches for videofluoroscopic swallowing study (VFSS) analysis heavily rely on annotated data and often struggle to detect residue, which is visually subtle and underrepresented. This study proposes an unsupervised architecture to segment both bolus and residue, marking the first successful machine learning-based residue segmentation in swallowing analysis with quantitative evaluation. We introduce an unsupervised convolutional autoencoder that segments bolus and residue without requiring pixel-level annotations. To address the locality bias inherent in convolutional architectures, we incorporate positional encoding into the input representation, enabling the model to capture global spatial context. The proposed model was validated on a diverse set of VFSS images annotated by certified raters. Our method achieves an intersection over union (IoU) of 61% for bolus segmentation—comparable to state-of-the-art supervised methods—and 52% for residue detection. Despite not using pixel-wise labels for training, our model significantly outperforms top-performing supervised baselines in residue detection, as confirmed by statistical testing. These findings suggest that learning from negative space provides a robust and generalizable pathway for detecting clinically significant but sparsely represented features like residue.
Full article
(This article belongs to the Section Medical Imaging)
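Two ingredients of this abstract lend themselves to a short sketch: the IoU score used for evaluation and positional encoding added to the input so a convolutional model sees global spatial context. The coordinate-channel encoding below is an assumption; the paper's exact encoding may differ.

```python
# IoU scoring plus coordinate-channel positional encoding (an assumed scheme).
import numpy as np

def iou(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

def add_positional_channels(img):
    # img: (H, W) grayscale frame -> (3, H, W) with normalized y/x coordinates
    h, w = img.shape
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    return np.stack([img, ys, xs])

frame = np.random.rand(256, 256)
print(add_positional_channels(frame).shape)  # (3, 256, 256)
```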
Open Access Article
ImbDef-GAN: Defect Image-Generation Method Based on Sample Imbalance
by
Dengbiao Jiang, Nian Tao, Kelong Zhu, Yiming Wang and Haijian Shao
J. Imaging 2025, 11(10), 367; https://doi.org/10.3390/jimaging11100367 - 16 Oct 2025
Abstract
In industrial settings, defect detection using deep learning typically requires large numbers of defective samples. However, defective products are rare on production lines, creating a scarcity of defect samples and an overabundance of samples that contain only background. We introduce ImbDef-GAN, a sample imbalance generative framework, to address three persistent limitations in defect image generation: unnatural transitions at defect background boundaries, misalignment between defects and their masks, and out-of-bounds defect placement. The framework operates in two stages: (i) background image generation and (ii) defect image generation conditioned on the generated background. In the background image-generation stage, a lightweight StyleGAN3 variant jointly generates the background image and its segmentation mask. A Progress-coupled Gated Detail Injection module uses global scheduling driven by training progress and per-pixel gating to inject high-frequency information in a controlled manner, thereby enhancing background detail while preserving training stability. In the defect image-generation stage, the design augments the background generator with a residual branch that extracts defect features. By blending defect features with a smoothing coefficient, the resulting defect boundaries transition more naturally and gradually. A mask-aware matching discriminator enforces consistency between each defect image and its mask. In addition, an Edge Structure Loss and a Region Consistency Loss strengthen morphological fidelity and spatial constraints within the valid mask region. Extensive experiments on the MVTec AD dataset demonstrate that ImbDef-GAN surpasses existing methods in both the realism and diversity of generated defects. When the generated data are used to train a downstream detector, YOLOv11 achieves a 5.4% improvement in mAP@0.5, indicating that the proposed approach effectively improves detection accuracy under sample imbalance.
Full article
(This article belongs to the Section Image and Video Processing)
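The boundary-smoothing idea can be illustrated with a toy mask-guided blend, where a feathered mask mixes defect and background pixels; the Gaussian feathering and parameter values below are assumptions, not the paper's recipe.

```python
# Toy mask-guided blending with a smoothing coefficient for soft boundaries.
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_defect(background, defect, mask, sigma=3.0):
    # mask: binary defect region; feathered to [0, 1] for gradual transitions
    soft = gaussian_filter(mask.astype(float), sigma)
    soft = np.clip(soft / soft.max(), 0, 1)
    return soft * defect + (1 - soft) * background

bg = np.random.rand(128, 128)
df = np.random.rand(128, 128)
m = np.zeros((128, 128)); m[50:70, 50:70] = 1
out = blend_defect(bg, df, m)
```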
Open Access Editorial
Editorial on the Special Issue: “Advances in Retinal Image Processing”
by
P. Jidesh and Vasudevan Lakshminarayanan
J. Imaging 2025, 11(10), 366; https://doi.org/10.3390/jimaging11100366 - 16 Oct 2025
Abstract
Retinal disorders are one of the major causes of visual impairment [...]
Full article
(This article belongs to the Special Issue Advances in Retinal Image Processing)
Open Access Article
Automatic Brain Tumor Segmentation in 2D Intra-Operative Ultrasound Images Using Magnetic Resonance Imaging Tumor Annotations
by
Mathilde Gajda Faanes, Ragnhild Holden Helland, Ole Solheim, Sébastien Muller and Ingerid Reinertsen
J. Imaging 2025, 11(10), 365; https://doi.org/10.3390/jimaging11100365 - 16 Oct 2025
Abstract
Automatic segmentation of brain tumors in intra-operative ultrasound (iUS) images could facilitate localization of tumor tissue during resection surgery. The lack of large annotated datasets limits the performance of current models. In this paper, we investigated the use of tumor annotations in magnetic resonance imaging (MRI) scans, which are more accessible than annotations in iUS images, for training deep learning models for iUS brain tumor segmentation. We used 180 annotated MRI scans with corresponding unannotated iUS images, and 29 annotated iUS images. Image registration was performed to transfer the MRI annotations to the corresponding iUS images before training the nnU-Net model with different configurations of the data and label origins. A model trained with only MRI-annotated tumors performed similarly to models trained with only iUS annotations or both, and to expert annotations, indicating that MRI tumor annotations can substitute for iUS tumor annotations when training a deep learning model for automatic brain tumor segmentation in iUS images. The best model obtained an average Dice score of 0.62 ± 0.31, compared to 0.67 ± 0.25 for an expert neurosurgeon; performance was similar on larger tumors but lower for the models on smaller tumors. In addition, the results showed that removing smaller tumors from the training sets improved the results.
Full article
(This article belongs to the Special Issue Progress and Challenges in Biomedical Image Analysis—2nd Edition)
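The core idea, reusing an MRI-to-iUS registration transform to carry MRI tumor annotations into iUS space, might look roughly like the following SimpleITK sketch; the file names and stored transform are placeholders, not the authors' pipeline.

```python
# Carrying an MRI annotation into iUS space via a precomputed registration.
import SimpleITK as sitk

mri_mask = sitk.ReadImage("mri_tumor_mask.nii.gz")   # hypothetical paths
ius = sitk.ReadImage("ius_volume.nii.gz")
mri_to_ius = sitk.ReadTransform("registration.tfm")

# Nearest-neighbor interpolation keeps the mask binary after resampling
ius_mask = sitk.Resample(mri_mask, ius, mri_to_ius,
                         sitk.sitkNearestNeighbor, 0, mri_mask.GetPixelID())
sitk.WriteImage(ius_mask, "ius_tumor_mask.nii.gz")
```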
Open Access Communication
Surgical Instrument Segmentation via Segment-Then-Classify Framework with Instance-Level Spatiotemporal Consistency Modeling
by
Tiyao Zhang, Xue Yuan and Hongze Xu
J. Imaging 2025, 11(10), 364; https://doi.org/10.3390/jimaging11100364 - 15 Oct 2025
Abstract
Accurate segmentation of surgical instruments in endoscopic videos is crucial for robot-assisted surgery and intraoperative analysis. This paper presents a Segment-then-Classify framework that decouples mask generation from semantic classification to enhance spatial completeness and temporal stability. First, a Mask2Former-based segmentation backbone generates class-agnostic instance masks and region features. Then, a bounding box-guided instance-level spatiotemporal modeling module fuses geometric priors and temporal consistency through a lightweight transformer encoder. This design improves interpretability and robustness under occlusion and motion blur. Experiments on the EndoVis 2017 and 2018 datasets demonstrate that our framework achieves mIoU improvements of 3.06%, 2.99%, and 1.67% and mcIoU gains of 2.36%, 2.85%, and 6.06%, respectively, over previously state-of-the-art methods, while maintaining computational efficiency.
Full article
(This article belongs to the Section Image and Video Processing)
Open Access Article
Radiographic Markers of Hip Dysplasia and Femoroacetabular Impingement Are Associated with Deterioration in Acetabular and Femoral Cartilage Quality: Insights from T2 MRI Mapping
by
Adam Peszek, Kyle S. J. Jamar, Catherine C. Alder, Trevor J. Wait, Caleb J. Wipf, Carson L. Keeter, Stephanie W. Mayer, Charles P. Ho and James W. Genuario
J. Imaging 2025, 11(10), 363; https://doi.org/10.3390/jimaging11100363 - 14 Oct 2025
Abstract
Femoroacetabular impingement (FAI) and hip dysplasia have been shown to increase the risk of hip osteoarthritis in affected individuals. MRI with T2 mapping provides an objective measure of femoral and acetabular articular cartilage tissue quality. This study aims to evaluate the relationship between hip morphology measurements collected from three-dimensional (3D) reconstructed computed tomography (CT) scans and the T2 mapping values of hip articular cartilage assessed by three independent, blinded reviewers on the optimal sagittal cut. Hip morphology measures including lateral center edge angle (LCEA), acetabular version, Tönnis angle, acetabular coverage, alpha angle, femoral torsion, neck-shaft angle (FNSA), and combined version were recorded from preoperative CT scans. The relationship between T2 values and hip morphology was assessed using univariate linear mixed models with random effects for individual patients. Significant associations were observed between femoral and acetabular articular cartilage T2 values and all hip morphology measures except femoral torsion. Hip morphology measurements consistent with dysplastic anatomy including decreased LCEA, increased Tönnis angle, and decreased acetabular coverage were associated with increased cartilage damage (p < 0.001 for all). Articular cartilage T2 values were strongly associated with the radiographic markers of hip dysplasia, suggesting hip microinstability significantly contributes to cartilage damage. The relationships between hip morphology measurements and T2 values were similar for the femoral and acetabular sides, indicating that damage to both surfaces is comparable rather than preferentially affecting one side.
Full article
(This article belongs to the Section Medical Imaging)
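A univariate linear mixed model with a random intercept per patient, as described above, can be set up along these lines; the column names are hypothetical.

```python
# Univariate linear mixed model with a random effect for each patient.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hip_t2_measurements.csv")  # one row per cartilage region
model = smf.mixedlm("t2_value ~ lcea", df, groups=df["patient_id"])
result = model.fit()
print(result.summary())  # fixed-effect slope for LCEA vs. T2
```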
Open Access Article
CT Imaging Biomarkers in Rhinogenic Contact Point Headache: Quantitative Phenotyping and Diagnostic Correlations
by
Salvatore Lavalle, Salvatore Ferlito, Jerome Rene Lechien, Mario Lentini, Placido Romeo, Alberto Maria Saibene, Gian Luca Fadda and Antonino Maniaci
J. Imaging 2025, 11(10), 362; https://doi.org/10.3390/jimaging11100362 - 14 Oct 2025
Abstract
Rhinogenic contact point headache (RCPH) represents a diagnostic challenge due to different anatomical presentations and unstandardized imaging markers. This prospective multicenter study involving 120 patients aimed to develop and validate a CT-based imaging framework for RCPH diagnosis. High-resolution CT scans were systematically assessed for seven parameters: contact point (CP) type, contact intensity (CI), septal deviation, concha bullosa (CB) morphology, mucosal edema (ME), turbinate hypertrophy (TH), and associated anatomical variants. Results revealed CP-I (37.5%) and CP-II (22.5%) as predominant patterns, with moderate CI (45.8%) and septal deviation > 15° (71.7%) commonly observed. CB was found in 54.2% of patients, primarily bulbous type (26.7%). Interestingly, focal ME at CP was independently associated with greater pain severity in the multivariate model (p = 0.003). The framework demonstrated substantial to excellent interobserver reliability (κ = 0.76–0.91), with multivariate analysis identifying moderate–severe CI, focal ME, and specific septal deviation patterns as independent predictors of higher pain scores. Our imaging classification system highlights key radiological biomarkers associated with symptom severity and may facilitate future applications in quantitative imaging, automated phenotyping, and personalized treatment approaches.
Full article
(This article belongs to the Section Medical Imaging)
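Interobserver reliability of the kind reported (κ = 0.76–0.91) is commonly computed with Cohen's kappa; a toy two-rater sketch, not the study's data:

```python
# Cohen's kappa for two raters grading the same CT scans (toy labels).
from sklearn.metrics import cohen_kappa_score

rater1 = ["CP-I", "CP-II", "CP-I", "CP-III", "CP-I"]
rater2 = ["CP-I", "CP-II", "CP-I", "CP-I", "CP-I"]
print(cohen_kappa_score(rater1, rater2))
```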
Open Access Article
A Lesion-Aware Patch Sampling Approach with EfficientNet3D-UNet for Robust Multiple Sclerosis Lesion Segmentation
by
Hind Almaaz and Samia Dardouri
J. Imaging 2025, 11(10), 361; https://doi.org/10.3390/jimaging11100361 - 13 Oct 2025
Abstract
Accurate segmentation of multiple sclerosis (MS) lesions from 3D MRI scans is essential for diagnosis, disease monitoring, and treatment planning. However, this task remains challenging due to the sparsity, heterogeneity, and subtle appearance of lesions, as well as the difficulty in obtaining high-quality annotations. In this study, we propose EfficientNet3D-UNet, a deep learning framework that combines compound-scaled MBConv3D blocks with a lesion-aware patch sampling strategy to improve volumetric segmentation performance across multi-modal MRI sequences (FLAIR, T1, and T2). The model was evaluated against a conventional 3D U-Net baseline using standard metrics including Dice similarity coefficient, precision, recall, accuracy, and specificity. On a held-out test set, EfficientNet3D-UNet achieved a Dice score of 48.39%, precision of 49.76%, and recall of 55.41%, outperforming the baseline 3D U-Net, which obtained a Dice score of 31.28%, precision of 32.48%, and recall of 43.04%. Both models reached an overall accuracy of 99.14%. Notably, EfficientNet3D-UNet also demonstrated faster convergence and reduced overfitting during training. These results highlight the potential of EfficientNet3D-UNet as a robust and computationally efficient solution for automated MS lesion segmentation, offering promising applicability in real-world clinical settings.
Full article
(This article belongs to the Section Medical Imaging)
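Lesion-aware patch sampling can be sketched as biasing patch centers toward lesion voxels with some probability, countering lesion sparsity; the probability and patch size below are assumptions.

```python
# Lesion-aware patch center sampling: prefer lesion voxels with probability p.
import numpy as np

def sample_patch_center(lesion_mask, p_lesion=0.7, rng=np.random.default_rng()):
    lesion_voxels = np.argwhere(lesion_mask)
    if len(lesion_voxels) and rng.random() < p_lesion:
        return tuple(lesion_voxels[rng.integers(len(lesion_voxels))])
    # Otherwise fall back to a uniformly random location in the volume
    return tuple(rng.integers(s) for s in lesion_mask.shape)

mask = np.zeros((64, 128, 128), dtype=bool); mask[30, 60:64, 60:64] = True
center = sample_patch_center(mask)  # crop e.g. a 32^3 patch around this center
```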
Open Access Article
Lung Nodule Malignancy Classification Integrating Deep and Radiomic Features in a Three-Way Attention-Based Fusion Module
by
Sadaf Khademi, Shahin Heidarian, Parnian Afshar, Arash Mohammadi, Abdul Sidiqi, Elsie T. Nguyen, Balaji Ganeshan and Anastasia Oikonomou
J. Imaging 2025, 11(10), 360; https://doi.org/10.3390/jimaging11100360 - 13 Oct 2025
Abstract
In this study, we propose a novel hybrid framework for assessing the invasiveness of an in-house dataset of 114 pathologically proven lung adenocarcinomas presenting as subsolid nodules on Computed Tomography (CT). Nodules were classified into group 1 (G1), which included atypical adenomatous hyperplasia, adenocarcinoma in situ, and minimally invasive adenocarcinomas, and group 2 (G2), which included invasive adenocarcinomas. Our approach includes a three-way Integration of Visual, Spatial, and Temporal features with Attention, referred to as I-VISTA, obtained from three processing algorithms designed based on Deep Learning (DL) and radiomic models, leading to a more comprehensive analysis of nodule variations. The aforementioned processing algorithms are arranged in the following three parallel paths: (i) the Shifted Window (SWin) Transformer path, a hierarchical vision Transformer that extracts nodules' spatial features; (ii) the Convolutional Auto-Encoder (CAE) Transformer path, which captures informative features related to inter-slice relations via a modified Transformer encoder architecture; and (iii) a 3D Radiomic-based path that collects quantitative features based on texture analysis of each nodule. Extracted feature sets are then passed through the Criss-Cross attention fusion module to discover the most informative feature patterns and classify nodule type. The experiments were evaluated based on a ten-fold cross-validation scheme. The I-VISTA framework achieved the best overall accuracy, sensitivity, and specificity (mean ± std) of 93.93 ± 6.80%, 92.66 ± 9.04%, and 94.99 ± 7.63%, with an Area under the ROC Curve (AUC) of 0.93 ± 0.08 for lung nodule classification among ten folds. The hybrid framework integrating DL and hand-crafted 3D Radiomic models outperformed the standalone DL and hand-crafted 3D Radiomic models in differentiating G1 from G2 subsolid nodules identified on CT.
Full article
(This article belongs to the Special Issue Progress and Challenges in Biomedical Image Analysis—2nd Edition)
Open Access Article
Non-Contrast Brain CT Images Segmentation Enhancement: Lightweight Pre-Processing Model for Ultra-Early Ischemic Lesion Recognition and Segmentation
by
Aleksei Samarin, Alexander Savelev, Aleksei Toropov, Aleksandra Dozortseva, Egor Kotenko, Artem Nazarenko, Alexander Motyko, Galiya Narova, Elena Mikhailova and Valentin Malykh
J. Imaging 2025, 11(10), 359; https://doi.org/10.3390/jimaging11100359 - 13 Oct 2025
Abstract
Timely identification and accurate delineation of ultra-early ischemic stroke lesions in non-contrast computed tomography (CT) scans of the human brain are of paramount importance for prompt medical intervention and improved patient outcomes. In this study, we propose a deep learning-driven methodology specifically designed for segmenting ultra-early ischemic regions, with a particular emphasis on both the ischemic core and the surrounding penumbra during the initial stages of stroke progression. We introduce a lightweight preprocessing model based on convolutional filtering techniques, which enhances image clarity while preserving the structural integrity of medical scans, a critical factor when detecting subtle signs of ultra-early ischemic strokes. Unlike conventional preprocessing methods that directly modify the image and may introduce artifacts or distortions, our approach ensures the absence of neural network-induced artifacts, which is especially crucial for accurate diagnosis and segmentation of ultra-early ischemic lesions. The model employs predefined differentiable filters with trainable parameters, allowing for artifact-free and precision-enhanced image refinement tailored to the challenges of ultra-early stroke detection. In addition, we incorporated into the combined preprocessing pipeline a newly proposed trainable linear combination of pretrained image filters, a concept first introduced in this study. For model training and evaluation, we utilize a publicly available dataset of acute ischemic stroke cases, focusing on the subset relevant to ultra-early stroke manifestations, which contains annotated non-contrast CT brain scans from 112 patients. The proposed model demonstrates high segmentation accuracy for ultra-early ischemic regions, surpassing existing methodologies across key performance metrics. The results have been rigorously validated on test subsets from the dataset, confirming the effectiveness of our approach in supporting the early-stage diagnosis and treatment planning for ultra-early ischemic strokes.
Full article
(This article belongs to the Section Medical Imaging)
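A trainable linear combination of fixed image filters, the concept the abstract introduces, might be prototyped as follows; the kernels and softmax weighting are illustrative choices, not the authors' implementation.

```python
# Learnable blend of fixed convolutional filters: each filter response is
# weighted by a trainable scalar, keeping preprocessing differentiable.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterBlend(nn.Module):
    def __init__(self, kernels):                 # kernels: (n, 3, 3) tensor
        super().__init__()
        self.register_buffer("kernels", kernels.unsqueeze(1))  # (n, 1, 3, 3)
        self.weights = nn.Parameter(torch.ones(kernels.shape[0]) / kernels.shape[0])

    def forward(self, x):                        # x: (B, 1, H, W) CT slice
        responses = F.conv2d(x, self.kernels, padding=1)       # (B, n, H, W)
        w = torch.softmax(self.weights, dim=0)
        return (responses * w.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)

identity = torch.tensor([[0., 0, 0], [0, 1, 0], [0, 0, 0]])
sharpen = torch.tensor([[0., -1, 0], [-1, 5, -1], [0, -1, 0]])
model = FilterBlend(torch.stack([identity, sharpen]))
out = model(torch.randn(2, 1, 256, 256))
```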
Open Access Article
Lightweight Statistical and Texture Feature Approach for Breast Thermogram Analysis
by
Ana P. Romero-Carmona, Jose J. Rangel-Magdaleno, Francisco J. Renero-Carrillo, Juan M. Ramirez-Cortes and Hayde Peregrina-Barreto
J. Imaging 2025, 11(10), 358; https://doi.org/10.3390/jimaging11100358 - 13 Oct 2025
Abstract
Breast cancer is the most commonly diagnosed cancer in women globally and represents the leading cause of mortality related to malignant tumors. Currently, healthcare professionals are focused on developing and implementing innovative techniques to improve the early detection of this disease. Thermography, studied as a complementary method to traditional approaches, captures infrared radiation emitted by tissues and converts it into data about skin surface temperature. During tumor development, angiogenesis occurs, increasing blood flow to support tumor growth, which raises the surface temperature in the affected area. Automatic classification techniques have been explored to analyze thermographic images and develop an optimal classification tool to identify thermal anomalies. This study aims to design a concise description using statistical and texture features to accurately classify thermograms as control or highly probable to be cancer (with thermal anomalies). The importance of employing a short description lies in facilitating interpretation by medical professionals. In contrast, a characterization based on a large number of variables could make it more challenging to identify which values differentiate the thermograms between groups, thereby complicating the explanation of results to patients. A maximum accuracy of 91.97% was achieved by applying only seven features with a Coarse Decision Tree (DT) classifier, a robust Machine Learning (ML) model that demonstrated competitive performance compared with previously reported studies.
Full article
(This article belongs to the Section Medical Imaging)
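A seven-value statistical-plus-texture description could look like the sketch below; the paper's exact feature set is not listed here, so these first-order statistics and GLCM properties are plausible stand-ins.

```python
# Compact thermogram ROI description: four first-order statistics plus
# three gray-level co-occurrence matrix (GLCM) texture properties.
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

def describe(roi):                       # roi: 2D uint8 temperature map
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([
        roi.mean(), roi.std(), skew(roi.ravel()), kurtosis(roi.ravel()),
        graycoprops(glcm, "contrast")[0, 0],
        graycoprops(glcm, "homogeneity")[0, 0],
        graycoprops(glcm, "energy")[0, 0],
    ])

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)
features = describe(roi)                 # feed to a decision tree classifier
```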
Open Access Article
New Solution for Segmental Assessment of Left Ventricular Wall Thickness, Using Anatomically Accurate and Highly Reproducible Automated Cardiac MRI Software
by
Balázs Mester, Kristóf Attila Farkas-Sütő, Júlia Magdolna Tardy, Kinga Grebur, Márton Horváth, Flóra Klára Gyulánczi, Hajnalka Vágó, Béla Merkely and Andrea Szűcs
J. Imaging 2025, 11(10), 357; https://doi.org/10.3390/jimaging11100357 - 11 Oct 2025
Abstract
Introduction: Changes in left ventricular (LV) wall thickness serve as important diagnostic and prognostic indicators in various cardiovascular diseases. To date, no automated software exists for the measurement of myocardial segmental wall thickness in cardiac MRI (CMR), which leads to reliance on manual caliper measurements that carry risks of inaccuracy. Aims: This paper presents a new automated segmental wall thickness measurement software, OptiLayer, developed to address this issue, and compares it with the conventional manual measurement method. Methods: In our pilot study, the OptiLayer algorithm was tested on 50 HEALTHY individuals and 50 subjects with excessively trabeculated noncompaction (LVET) and preserved LV function, a morphology that often occurs with myocardial thinning and makes left ventricular wall thickness more challenging to measure. Measurements were performed by two independent investigators who assessed LV wall thicknesses in 16 segments, both manually using the Medis Suite QMass program and automatically with the new OptiLayer method, which enables high-density sampling across the distance between the epicardial and endocardial contours. Results: The segmental wall thickness values of the OptiLayer algorithm were significantly higher than those of the manual caliper. In comparisons of the HEALTHY and LVET subgroups, OptiLayer measurements demonstrated differences at several points where manual measurements did not. Between the investigators, manual measurements showed low intraclass correlations (ICC below 0.6 on average), while OptiLayer measurements gave excellent agreement (ICC above 0.9 in 75% of segments). Conclusions: Our study suggests that OptiLayer, a new automated wall thickness measurement software based on high-precision anatomical segmentation, offers a faster, more accurate, and more reproducible alternative to manual measurements.
Full article
(This article belongs to the Section Medical Imaging)
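High-density wall-thickness sampling between contours can be approximated by querying, for each epicardial point, the nearest endocardial point; the concentric-circle contours below are toy data, and the software's actual algorithm may differ.

```python
# Dense wall-thickness sampling between epicardial and endocardial contours.
import numpy as np
from scipy.spatial import cKDTree

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
epi = np.c_[35 * np.cos(theta), 35 * np.sin(theta)]    # epicardial contour (mm)
endo = np.c_[25 * np.cos(theta), 25 * np.sin(theta)]   # endocardial contour (mm)

# Per-point minimum distance from epicardium to endocardium
thickness, _ = cKDTree(endo).query(epi)
print(thickness.mean())                   # ~10 mm for these toy contours
```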
Open Access Article
AI Diffusion Models Generate Realistic Synthetic Dental Radiographs Using a Limited Dataset
by
Brian Kirkwood, Byeong Yeob Choi, James Bynum and Jose Salinas
J. Imaging 2025, 11(10), 356; https://doi.org/10.3390/jimaging11100356 - 11 Oct 2025
Abstract
Generative Artificial Intelligence (AI) has the potential to address the limited availability of dental radiographs for the development of Dental AI systems by creating clinically realistic synthetic dental radiographs (SDRs). Evaluation of artificially generated images requires both expert review and objective measures of fidelity. A stepwise approach was used to process 10,000 dental radiographs. First, a single dentist screened images to determine if a specific image selection criterion was met; this identified 225 images. From these, 200 images were randomly selected for training an AI image generation model. Second, 100 images were randomly selected from the previous training dataset and evaluated by four dentists; the expert review identified 57 images that met image selection criteria to refine training for two additional AI models. The three models were used to generate 500 SDRs each, and the clinical realism of the SDRs was assessed through expert review. In addition, the SDRs generated by each model were objectively evaluated using quantitative metrics: Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). Evaluation of the SDRs by a dentist determined that expert-informed curation improved SDR realism, and refinement of model architecture produced further gains. FID and KID analysis confirmed that expert input and technical refinement improve image fidelity. The convergence of subjective and objective assessments strengthens confidence that the refined model architecture can serve as a foundation for SDR image generation, while highlighting the importance of expert-informed data curation and domain-specific evaluation metrics.
Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
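The FID metric used here has a closed form, FID = ||μ₁ − μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^{1/2}), computed over Inception features of the real and synthetic sets; a minimal sketch on random stand-in features:

```python
# Fréchet Inception Distance between two feature sets (stand-in features).
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2).real          # discard tiny imaginary parts
    return ((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean)

print(fid(np.random.randn(500, 64), np.random.randn(500, 64)))
```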
Open Access Article
An Improved Capsule Network for Image Classification Using Multi-Scale Feature Extraction
by
Wenjie Huang, Ruiqing Kang, Lingyan Li and Wenhui Feng
J. Imaging 2025, 11(10), 355; https://doi.org/10.3390/jimaging11100355 - 10 Oct 2025
Abstract
In the realm of image classification, the capsule network is a network topology that packs extracted features into many capsules, performs sophisticated capsule screening using a dynamic routing mechanism, and finally treats each capsule as corresponding to a category feature. Compared with previous network topologies, the capsule network has more sophisticated operations, uses a large number of parameter matrices and vectors to express image attributes, and has more powerful image classification capabilities. In practical applications, however, the capsule network has always been constrained by the computational load produced by its complicated structure. On basic datasets it is prone to over-fitting and poor generalization, and on complex datasets it often cannot meet the high computational overhead. To address these problems, this research proposes a novel enhanced capsule network topology. The upgraded network boosts the feature extraction ability of the network by incorporating a multi-scale feature extraction module based on proprietary star-structure convolution into the standard capsule network. At the same time, other structural portions of the capsule network are changed, and a variety of optimization approaches such as dense connections, attention mechanisms, and low-rank matrix operations are combined. Image classification studies are carried out on different datasets, and the novel structure suggested in this paper achieves good classification performance on the CIFAR-10, CIFAR-100, and CUB datasets. We also achieved 98.21% and 95.38% classification accuracy on two complicated datasets, skin cancer ISIC derived and Forged Face EXP.
Full article
(This article belongs to the Section Image and Video Processing)
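The capsule "squash" nonlinearity underlying dynamic routing is short enough to show directly; this is the standard formulation, with toy tensor shapes.

```python
# Squash nonlinearity: scales each capsule vector to length < 1 while
# preserving its direction, so vector length can encode class probability.
import torch

def squash(s, dim=-1, eps=1e-8):
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

capsules = torch.randn(32, 10, 16)   # (batch, n_capsules, capsule_dim)
v = squash(capsules)                 # output lengths lie in [0, 1)
```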
Topics
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Electronics, MAKE, J. Imaging, Sensors
Applied Computer Vision and Pattern Recognition: 2nd Edition
Topic Editors: Antonio Fernández-Caballero, Byung-Gyu Kim
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 31 March 2026
Topic in
Applied Sciences, Electronics, J. Imaging, MAKE, Information, BDCC, Signals
Applications of Image and Video Processing in Medical Imaging
Topic Editors: Jyh-Cheng Chen, Kuangyu Shi
Deadline: 30 April 2026
Special Issues
Special Issue in
J. Imaging
Bridging Medical Imaging and Biosignal Analysis: Innovations in Healthcare Diagnostics
Guest Editors: Simona Turco, Massimo Salvi
Deadline: 31 October 2025
Special Issue in
J. Imaging
Image Segmentation Techniques: Current Status and Future Directions (2nd Edition)
Guest Editors: Xiaohao Cai, Gaohang Yu
Deadline: 31 October 2025
Special Issue in
J. Imaging
Advancement in Multispectral and Hyperspectral Pansharpening Image Processing
Guest Editors: Simone Zini, Mirko Paolo Barbato, Flavio Piccoli
Deadline: 10 November 2025
Special Issue in
J. Imaging
Object Detection in Video Surveillance Systems
Guest Editors: Jesús Ruiz-Santaquiteria Alegre, Juan Antonio Álvarez García, Harbinder Singh
Deadline: 30 November 2025