Advances in Biomedical Image Processing and Artificial Intelligence for Computer-Aided Diagnosis in Medicine

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: 31 March 2025 | Viewed by 17747

Special Issue Editors


Guest Editor
1. Ri.MED Foundation, Via Bandiera 11, 90133 Palermo, Italy
2. Research Affiliate Long Term, Laboratory of Computational Computer Vision (LCCV), School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Interests: biomedical image processing and analysis; radiomics; artificial intelligence; machine learning; deep learning

Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, Via Ospedale 72, 09124 Cagliari, Italy
Interests: computer vision; image retrieval; biomedical image analysis; pattern recognition and machine learning

Guest Editor
Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy
Interests: computer vision; medical image analysis; shape analysis and matching; image retrieval and classification

Guest Editor
Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
Interests: non-invasive imaging techniques, including positron emission tomography (PET), computerized tomography (CT), and magnetic resonance (MR); radiomics and artificial intelligence in clinical health care applications; processing, quantification, and correction methods for ex vivo and in vivo medical images

Special Issue Information

Dear Colleagues, 

With the digitization of medical data, artificial intelligence techniques are increasingly being employed in medicine. Radiomics and texture analysis can be applied to positron emission tomography, computerized tomography, and magnetic resonance imaging, and machine and deep learning techniques can help improve therapeutic tools, diagnostic decisions, and rehabilitation. Nevertheless, diagnosis has become more difficult due to the abundance of data from different imaging techniques, patient diversity, and the need to combine data from various sources, which leads to the domain shift problem. Radiologists and pathologists rely on computer-aided diagnosis (CAD) systems to analyze biomedical images and address these challenges. CAD systems help reduce inter- and intra-observer variability, which arises when different physicians assess the same region under the same assumptions or when the same physician assesses it at different times. Additionally, access to data can be restricted by privacy, security, and intellectual property concerns; synthetic data are increasingly being explored to address these issues.

This Special Issue is connected to the 2nd International Workshop on Artificial Intelligence and Radiomics in Computer-Aided Diagnosis (AIRCAD 2023) but is open to additional submissions within its scope. It will cover the latest developments in biomedical image processing using machine learning, deep learning, artificial intelligence, and radiomics features, focusing on practical applications and their integration into the medical image processing workflow.

Potential topics include but are not limited to the following:

  • biomedical image processing;
  • machine and deep learning techniques for image analysis (e.g., the segmentation of cells, tissues, organs, and lesions and the classification of cells, diseases, tumors, etc.);
  • image registration techniques;
  • image preprocessing techniques;
  • image-based 3D reconstruction;
  • computer-aided detection and diagnosis (CAD) systems;
  • biomedical image analysis;
  • radiomics and artificial intelligence for personalized medicine;
  • machine and deep learning as tools to support medical diagnoses and decisions;
  • image retrieval (e.g., content-based retrieval and lesion similarity);
  • CAD architectures;
  • advanced architectures for biomedical image remote processing, elaboration, and transmission;
  • 3D vision and virtual, augmented, and mixed reality applications for remote surgery;
  • image processing techniques for privacy-preserving AI in medicine.

Dr. Andrea Loddo
Dr. Albert Comelli
Dr. Cecilia Di Ruberto
Dr. Lorenzo Putzu
Dr. Alessandro Stefano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • CAD architectures
  • biomedical image processing
  • machine learning
  • deep learning
  • biomedical image analysis
  • radiomics
  • artificial intelligence
  • personalized medicine
  • privacy-preserving AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

23 pages, 2595 KiB  
Article
Joint Image Processing with Learning-Driven Data Representation and Model Behavior for Non-Intrusive Anemia Diagnosis in Pediatric Patients
by Tarek Berghout
J. Imaging 2024, 10(10), 245; https://doi.org/10.3390/jimaging10100245 - 2 Oct 2024
Viewed by 879
Abstract
Anemia diagnosis is crucial for pediatric patients due to its impact on growth and development. Traditional methods, like blood tests, are effective but pose challenges, such as discomfort, infection risk, and frequent monitoring difficulties, underscoring the need for non-intrusive diagnostic methods. In light of this, this study proposes a novel method that combines image processing with learning-driven data representation and model behavior for non-intrusive anemia diagnosis in pediatric patients. The contributions of this study are threefold. First, it uses an image-processing pipeline to extract 181 features from 13 categories, with a feature-selection process identifying the most crucial data for learning. Second, a deep multilayered network based on long short-term memory (LSTM) is utilized to train a model for classifying images into anemic and non-anemic cases, where hyperparameters are optimized using Bayesian approaches. Third, the trained LSTM model is integrated as a layer into a learning model developed based on recurrent expansion rules, forming a part of a new deep network called a recurrent expansion network (RexNet). RexNet is designed to learn data representations akin to traditional deep-learning methods while also understanding the interaction between dependent and independent variables. The proposed approach is applied to three public datasets, namely conjunctival eye images, palmar images, and fingernail images of children aged up to 6 years. RexNet achieves an overall evaluation of 99.83 ± 0.02% across all classification metrics, demonstrating significant improvements in diagnostic results and generalization compared to LSTM networks and existing methods. This highlights RexNet’s potential as a promising alternative to traditional blood-based methods for non-intrusive anemia diagnosis.
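
As a rough illustration of the LSTM stage described above (not the authors' RexNet, whose recurrent expansion rules are not reproduced here), a minimal Keras sketch might treat the 181 selected features as a sequence fed to stacked LSTM layers; all layer sizes and training settings are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 181  # the paper reports 181 handcrafted features

def build_lstm_classifier(n_features: int = N_FEATURES) -> keras.Model:
    # Treat the feature vector as a length-n sequence of scalars so the
    # stacked LSTMs can model dependencies between features
    inputs = keras.Input(shape=(n_features, 1))
    x = layers.LSTM(64, return_sequences=True)(inputs)
    x = layers.LSTM(32)(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # anemic vs. non-anemic
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_lstm_classifier()
# model.fit(X[..., None], y, epochs=50, validation_split=0.2)  # X: (n, 181)
```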

10 pages, 5992 KiB  
Article
Comparison of Visual and Quantra Software Mammographic Density Assessment According to BI-RADS® in 2D and 3D Images
by Francesca Morciano, Cristina Marcazzan, Rossella Rella, Oscar Tommasini, Marco Conti, Paolo Belli, Andrea Spagnolo, Andrea Quaglia, Stefano Tambalo, Andreea Georgiana Trisca, Claudia Rossati, Francesca Fornasa and Giovanna Romanucci
J. Imaging 2024, 10(9), 238; https://doi.org/10.3390/jimaging10090238 - 23 Sep 2024
Viewed by 582
Abstract
Mammographic density (MD) assessment is subject to inter- and intra-observer variability. An automated method, such as Quantra software, could be a useful tool for an objective and reproducible MD assessment. Our purpose was to evaluate the performance of Quantra software in assessing MD, according to BI-RADS® Atlas Fifth Edition recommendations, verifying the degree of agreement with the gold standard, given by the consensus of two breast radiologists. A total of 5009 screening examinations were evaluated by two radiologists and analysed by Quantra software to assess MD. The agreement between the three assigned values was expressed as intraclass correlation coefficients (ICCs). The agreement between the software and the two readers (R1 and R2) was moderate with ICC values of 0.725 and 0.713, respectively. A better agreement was demonstrated between the software’s assessment and the average score of the values assigned by the two radiologists, with an index of 0.793, which reflects a good correlation. Quantra software appears to be a promising tool in supporting radiologists in the MD assessment and could be part of a personalised screening protocol soon. However, some fine-tuning is needed to improve its accuracy, reduce its tendency to overestimate, and ensure it excludes high-density structures from its assessment.
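
For readers unfamiliar with intraclass correlation coefficients, a minimal sketch of how such reader-versus-software agreement can be computed with the pingouin library follows; the column names and toy ratings are illustrative, not the study's data.

```python
import pandas as pd
import pingouin as pg

# Long-format toy data: one row per (exam, rater); density coded 1-4 (a-d)
df = pd.DataFrame({
    "exam":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater":  ["software", "R1", "R2"] * 3,
    "rating": [2, 2, 3, 1, 1, 1, 4, 3, 4],
})

icc = pg.intraclass_corr(data=df, targets="exam", raters="rater",
                         ratings="rating")
print(icc[["Type", "ICC", "CI95%"]])  # ICC2/ICC2k address absolute agreement
```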

23 pages, 5832 KiB  
Article
Enhancing Deep Learning Model Explainability in Brain Tumor Datasets Using Post-Heuristic Approaches
by Konstantinos Pasvantis and Eftychios Protopapadakis
J. Imaging 2024, 10(9), 232; https://doi.org/10.3390/jimaging10090232 - 18 Sep 2024
Viewed by 936
Abstract
The application of deep learning models in medical diagnosis has showcased considerable efficacy in recent years. Nevertheless, a notable limitation involves the inherent lack of explainability during decision-making processes. This study addresses such a constraint by enhancing the interpretability robustness. The primary focus is directed towards refining the explanations generated by the LIME Library and LIME image explainer. This is achieved through post-processing mechanisms based on scenario-specific rules. Multiple experiments have been conducted using publicly accessible datasets related to brain tumor detection. Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results in the context of medical diagnosis.
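
A minimal sketch of the underlying LIME image-explanation workflow, with one illustrative post-processing rule (keeping only the largest highlighted region); the model, input shape, and the specific rule are assumptions, not the paper's exact heuristics.

```python
import numpy as np
from lime import lime_image
from skimage.measure import label

def explain_with_postprocessing(image, predict_fn):
    """image: HxWx3 float array in [0, 1]; predict_fn returns class probs."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, predict_fn, top_labels=1, hide_color=0, num_samples=1000
    )
    _, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True,
        num_features=5, hide_rest=False,
    )
    # Post-heuristic rule (illustrative): a tumor explanation should be one
    # contiguous region, so keep only the largest connected component
    labeled = label(mask > 0)
    if labeled.max() > 0:
        sizes = np.bincount(labeled.ravel())[1:]
        mask = (labeled == (np.argmax(sizes) + 1)).astype(mask.dtype)
    return mask
```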

13 pages, 1308 KiB  
Article
Decoding Breast Cancer: Using Radiomics to Non-Invasively Unveil Molecular Subtypes Directly from Mammographic Images
by Manon A. G. Bakker, Maria de Lurdes Ovalho, Nuno Matela and Ana M. Mota
J. Imaging 2024, 10(9), 218; https://doi.org/10.3390/jimaging10090218 - 4 Sep 2024
Viewed by 997
Abstract
Breast cancer is the most commonly diagnosed cancer worldwide. The therapy used and its success depend highly on the histology of the tumor. This study aimed to explore the potential of predicting the molecular subtype of breast cancer using radiomic features extracted from screening digital mammography (DM) images. A retrospective study was performed using the OPTIMAM Mammography Image Database (OMI-DB). Four binary classification tasks were performed: luminal A vs. non-luminal A, luminal B vs. non-luminal B, TNBC vs. non-TNBC, and HER2 vs. non-HER2. Feature selection was carried out by Pearson correlation and LASSO. The support vector machine (SVM) and naive Bayes (NB) ML classifiers were used, and their performance was evaluated with the accuracy and the area under the receiver operating characteristic curve (AUC). A total of 186 patients were included in the study: 58 luminal A, 35 luminal B, 52 TNBC, and 41 HER2. The SVM classifier resulted in AUCs during testing of 0.855 for luminal A, 0.812 for luminal B, 0.789 for TNBC, and 0.755 for HER2. The NB classifier showed AUCs during testing of 0.714 for luminal A, 0.746 for luminal B, 0.593 for TNBC, and 0.714 for HER2. The SVM classifier outperformed NB with statistical significance for luminal A (p = 0.0268) and TNBC (p = 0.0073). Our study showed the potential of radiomics for non-invasive breast cancer subtype classification.
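
A minimal sketch of the evaluation protocol described above (LASSO-based selection, then SVM vs. naive Bayes compared by AUC); the feature matrix, hyperparameters, and the split are illustrative, not the authors' configuration.

```python
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def compare_classifiers(X, y):
    """X: (n_patients, n_radiomic_features); y: 1 = subtype, 0 = rest."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                              random_state=0)
    # LASSO-based feature selection fitted on the training split only
    selector = SelectFromModel(LassoCV(cv=5)).fit(X_tr, y_tr)
    X_tr, X_te = selector.transform(X_tr), selector.transform(X_te)
    for name, clf in (("SVM", SVC(probability=True)), ("NB", GaussianNB())):
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")
```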

29 pages, 4861 KiB  
Article
A New Approach for Effective Retrieval of Medical Images: A Step towards Computer-Assisted Diagnosis
by Suchita Sharma and Ashutosh Aggarwal
J. Imaging 2024, 10(9), 210; https://doi.org/10.3390/jimaging10090210 - 26 Aug 2024
Viewed by 703
Abstract
The biomedical imaging field has grown enormously in the past decade. In the era of digitization, the demand for computer-assisted diagnosis is increasing day by day. The COVID-19 pandemic further emphasized how retrieving meaningful information from medical repositories can aid in improving the quality of patient diagnosis. Content-based retrieval of medical images therefore has a very prominent role in fulfilling our ultimate goal of developing automated computer-assisted diagnosis systems. To this end, this paper presents a content-based medical image retrieval system that extracts multi-resolution, noise-resistant, rotation-invariant texture features in the form of a novel pattern descriptor, i.e., MsNrRiTxP, from medical images. In the proposed approach, the input medical image is initially decomposed into three neutrosophic images on its transformation into the neutrosophic domain. Afterwards, three distinct pattern descriptors, i.e., MsTrP, NrTxP, and RiTxP, are derived at multiple scales from the three neutrosophic images. The proposed MsNrRiTxP pattern descriptor is obtained by scale-wise concatenation of the joint histograms of MsTrP×RiTxP and NrTxP×RiTxP. To demonstrate the efficacy of the proposed system, medical images of different modalities, i.e., CT and MRI, from four test datasets are considered in our experimental setup. The retrieval performance of the proposed approach is exhaustively compared with several existing, recent, and state-of-the-art local binary pattern-based variants. The retrieval rates obtained by the proposed approach for the noise-free and noisy variants of the test datasets are observed to be substantially higher than the compared ones.
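
As background, here is a minimal sketch of a plain local-binary-pattern retrieval baseline of the kind the proposed MsNrRiTxP descriptor is compared against (the descriptor itself is not reproduced here); images are ranked by chi-square distance between rotation-invariant LBP histograms.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_img, P: int = 8, R: float = 1.0) -> np.ndarray:
    # 'uniform' LBP is rotation-invariant and yields P + 2 distinct codes
    codes = local_binary_pattern(gray_img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1, h2, eps: float = 1e-10) -> float:
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def retrieve(query_img, database_imgs, top_k: int = 10):
    # Rank database images by histogram distance to the query
    q = lbp_histogram(query_img)
    dists = [chi_square(q, lbp_histogram(img)) for img in database_imgs]
    return np.argsort(dists)[:top_k]  # indices of the most similar images
```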

27 pages, 14394 KiB  
Article
Celiac Disease Deep Learning Image Classification Using Convolutional Neural Networks
by Joaquim Carreras
J. Imaging 2024, 10(8), 200; https://doi.org/10.3390/jimaging10080200 - 16 Aug 2024
Viewed by 1170
Abstract
Celiac disease (CD) is a gluten-sensitive immune-mediated enteropathy. This proof-of-concept study used a convolutional neural network (CNN) to classify hematoxylin and eosin (H&E) CD histological images, normal small intestine control, and non-specified duodenal inflammation (7294, 11,642, and 5966 images, respectively). The trained network classified CD with high performance (accuracy 99.7%, precision 99.6%, recall 99.3%, F1-score 99.5%, and specificity 99.8%). Interestingly, when the same network (already trained on the 3 image classes) analyzed duodenal adenocarcinoma (3723 images), the new images were classified as duodenal inflammation in 63.65%, small intestine control in 34.73%, and CD in 1.61% of the cases; and when the network was retrained using the 4 histological subtypes, the performance was above 99% for CD and 97% for adenocarcinoma. Finally, 13,043 images of Crohn’s disease were added to include other inflammatory bowel diseases; a comparison between different CNN architectures was performed, and the gradient-weighted class activation mapping (Grad-CAM) technique was used to understand why the deep learning network made its classification decisions. In conclusion, the CNN-based deep neural system classified 5 diagnoses with high performance. Narrow artificial intelligence (AI) is designed to perform tasks that typically require human intelligence, but it operates within limited constraints and is task-specific.
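
A minimal sketch of Grad-CAM, the attribution technique named above, applied to a generic torchvision CNN; the backbone, target layer, and preprocessing are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

# Illustrative backbone; the paper's own CNNs are not reproduced here
model = resnet50(weights="IMAGENET1K_V2").eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    """x: (1, 3, H, W) normalized image; returns a [0, 1] heatmap of size HxW."""
    logits = model(x)
    logits[0, logits.argmax()].backward()                # top-class gradient
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted feature maps
    cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
    return (cam / cam.max()).detach()
```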

24 pages, 410 KiB  
Article
Gastric Cancer Image Classification: A Comparative Analysis and Feature Fusion Strategies
by Andrea Loddo, Marco Usai and Cecilia Di Ruberto
J. Imaging 2024, 10(8), 195; https://doi.org/10.3390/jimaging10080195 - 10 Aug 2024
Viewed by 967
Abstract
Gastric cancer is the fifth most common and fourth deadliest cancer worldwide, with a bleak 5-year survival rate of about 20%. Despite significant research into its pathobiology, prognostic predictability remains insufficient due to pathologists’ heavy workloads and the potential for diagnostic errors. Consequently, there is a pressing need for automated and precise histopathological diagnostic tools. This study leverages Machine Learning and Deep Learning techniques to classify histopathological images into healthy and cancerous categories. By utilizing both handcrafted and deep features and shallow learning classifiers on the GasHisSDB dataset, we conduct a comparative analysis to identify the most effective combinations of features and classifiers for differentiating normal from abnormal histopathological images without employing fine-tuning strategies. Our methodology achieves an accuracy of 95% with the SVM classifier, underscoring the effectiveness of feature fusion strategies. Additionally, cross-magnification experiments produced promising results with accuracies close to 80% and 90% when testing the models on unseen testing images with different resolutions.
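
A minimal sketch of the feature-fusion idea: concatenating a small handcrafted GLCM descriptor with off-the-shelf deep features before a shallow classifier; the extractors, shapes, and backbone are illustrative, not the paper's exact pipeline.

```python
import numpy as np
import torch
from skimage.feature import graycomatrix, graycoprops
from torchvision.models import resnet18

# Off-the-shelf deep feature extractor: ResNet-18 without its final FC layer
cnn = torch.nn.Sequential(*list(resnet18(weights="DEFAULT").children())[:-1]).eval()

def handcrafted_features(gray_u8: np.ndarray) -> np.ndarray:
    # A tiny GLCM descriptor (contrast + homogeneity) standing in for the
    # richer handcrafted feature sets compared in the paper
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0], normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity")])

def deep_features(rgb_float: np.ndarray) -> np.ndarray:
    # rgb_float: (H, W, 3) array in [0, 1]
    x = torch.from_numpy(rgb_float).permute(2, 0, 1)[None].float()
    with torch.no_grad():
        return cnn(x).flatten().numpy()  # 512-D global-average-pooled vector

def fused_features(gray_u8, rgb_float):
    return np.concatenate([handcrafted_features(gray_u8),
                           deep_features(rgb_float)])

# A shallow classifier (e.g., sklearn's SVC) is then fit on the fused vectors.
```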

31 pages, 5788 KiB  
Article
Automated Lung Cancer Diagnosis Applying Butterworth Filtering, Bi-Level Feature Extraction, and Sparce Convolutional Neural Network to Luna 16 CT Images
by Nasr Y. Gharaibeh, Roberto De Fazio, Bassam Al-Naami, Abdel-Razzak Al-Hinnawi and Paolo Visconti
J. Imaging 2024, 10(7), 168; https://doi.org/10.3390/jimaging10070168 - 15 Jul 2024
Viewed by 1319
Abstract
Accurate prognosis and diagnosis are crucial for selecting and planning lung cancer treatments. As a result of the rapid development of medical imaging technology, the use of computed tomography (CT) scans in pathology is becoming standard practice. An intricate interplay of requirements and obstacles characterizes computer-assisted diagnosis, which relies on the precise and effective analysis of pathology images. In recent years, pathology image analysis tasks such as tumor region identification, prognosis prediction, tumor microenvironment characterization, and metastasis detection have witnessed the considerable potential of artificial intelligence, especially deep learning techniques. In this context, an artificial intelligence (AI)-based methodology for lung cancer diagnosis is proposed in this research work. As a first processing step, filtering using the Butterworth smooth filter algorithm was applied to the input images from the LUNA 16 lung cancer dataset to remove noise without significantly degrading the image quality. Next, we performed the bi-level feature selection step using the Chaotic Crow Search Algorithm and Random Forest (CCSA-RF) approach to select features such as diameter, margin, spiculation, lobulation, subtlety, and malignancy. Next, the Feature Extraction step was performed using the Multi-space Image Reconstruction (MIR) method with Grey Level Co-occurrence Matrix (GLCM). Next, the Lung Tumor Severity Classification (LTSC) was implemented by using the Sparse Convolutional Neural Network (SCNN) approach with a Probabilistic Neural Network (PNN). The developed method can detect benign, normal, and malignant lung cancer images using the PNN algorithm, which reduces complexity and efficiently provides classification results. Performance parameters, namely accuracy, precision, F-score, sensitivity, and specificity, were determined to evaluate the effectiveness of the implemented hybrid method and compare it with other solutions already present in the literature.
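
A minimal sketch of frequency-domain Butterworth smoothing of a CT slice, corresponding to the first preprocessing step above; the cutoff and order are illustrative values, not those used by the authors.

```python
import numpy as np

def butterworth_lowpass(img: np.ndarray, cutoff: float = 30.0,
                        order: int = 2) -> np.ndarray:
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from spectrum center
    H = 1.0 / (1.0 + (D / cutoff) ** (2 * order))   # Butterworth transfer function
    F = np.fft.fftshift(np.fft.fft2(img))           # centered frequency spectrum
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

# smoothed = butterworth_lowpass(ct_slice.astype(float))
```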

21 pages, 1918 KiB  
Article
Residual-Based Multi-Stage Deep Learning Framework for Computer-Aided Alzheimer’s Disease Detection
by Najmul Hassan, Abu Saleh Musa Miah and Jungpil Shin
J. Imaging 2024, 10(6), 141; https://doi.org/10.3390/jimaging10060141 - 11 Jun 2024
Viewed by 1573
Abstract
Alzheimer’s Disease (AD) poses a significant health risk globally, particularly among the elderly population. Recent studies underscore its prevalence, with over 50% of elderly Japanese facing a lifetime risk of dementia, primarily attributed to AD. As the most prevalent form of dementia, AD gradually erodes brain cells, leading to severe neurological decline. In this scenario, it is important to develop an automatic AD-detection system, and many researchers have been working on this by taking advantage of the advancement of deep learning (DL) techniques, which have shown promising results in various domains, including medical image analysis. However, existing approaches for AD detection often suffer from limited performance due to the complexities associated with training hierarchical convolutional neural networks (CNNs). In this paper, we introduce a novel multi-stage deep neural network architecture based on residual functions to address the limitations of existing AD-detection approaches. Inspired by the success of residual networks (ResNets) in image-classification tasks, our proposed system comprises five stages, each explicitly formulated to enhance feature effectiveness while maintaining model depth. Following feature extraction, a deep learning-based feature-selection module is applied to mitigate overfitting, incorporating batch normalization, dropout and fully connected layers. Subsequently, machine learning (ML)-based classification algorithms, including Support Vector Machines (SVM), Random Forest (RF) and SoftMax, are employed for classification tasks. Comprehensive evaluations conducted on three benchmark datasets, namely ADNI1: Complete 1Yr 1.5T, MIRAID and OASIS Kaggle, demonstrate the efficacy of our proposed model. Impressively, our model achieves accuracy rates of 99.47%, 99.10% and 99.70% for ADNI1: Complete 1Yr 1.5T, MIRAID and OASIS datasets, respectively, outperforming existing systems in binary class problems. Our proposed model represents a significant advancement in the AD-analysis domain.
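
A minimal sketch of the two ingredients named above: a residual block with an identity shortcut, followed by a shallow ML classifier on the extracted features; dimensions and the stage wiring are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-ReLU-Conv-BN with an identity shortcut, as in ResNet."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)  # residual function F(x) + x

# After the residual stages and a dropout/FC feature-selection head, the
# extracted feature vectors can be handed to an ML classifier, e.g.:
# clf = sklearn.svm.SVC().fit(features_train, labels_train)
```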

21 pages, 12872 KiB  
Article
Optimizing Vision Transformers for Histopathology: Pretraining and Normalization in Breast Cancer Classification
by Giulia Lucrezia Baroni, Laura Rasotto, Kevin Roitero, Angelica Tulisso, Carla Di Loreto and Vincenzo Della Mea
J. Imaging 2024, 10(5), 108; https://doi.org/10.3390/jimaging10050108 - 30 Apr 2024
Cited by 2 | Viewed by 1939
Abstract
This paper introduces a self-attention Vision Transformer model specifically developed for classifying breast cancer in histology images. We examine various training strategies and configurations, including pretraining, dimension resizing, data augmentation and color normalization strategies, patch overlap, and patch size configurations, in order to evaluate their impact on the effectiveness of the histology image classification. Additionally, we provide evidence for the increase in effectiveness gathered through geometric and color data augmentation techniques. We primarily utilize the BACH dataset to train and validate our methods and models, but we also test them on two additional datasets, BRACS and AIDPATH, to verify their generalization capabilities. Our model, developed from a transformer pretrained on ImageNet, achieves an accuracy rate of 0.91 on the BACH dataset, 0.74 on the BRACS dataset, and 0.92 on the AIDPATH dataset. Using a model based on the prostate small and prostate medium HistoEncoder models, we achieve accuracy rates of 0.89 and 0.86, respectively. Our results suggest that pretraining on large-scale general datasets like ImageNet is advantageous. We also show the potential benefits of using domain-specific pretraining datasets, such as extensive histopathological image collections as in HistoEncoder, though not yet with clear advantages.
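
A minimal sketch of loading an ImageNet-pretrained Vision Transformer with the timm library and re-heading it for four histology classes (as in BACH); the model variant and hyperparameters are illustrative, not necessarily those used in the paper.

```python
import timm
import torch

# Re-head an ImageNet-pretrained ViT for a four-class histology task
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # images: (B, 3, 224, 224) patches cut from the histology images
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```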

14 pages, 5416 KiB  
Article
Attention-Enhanced Unpaired xAI-GANs for Transformation of Histological Stain Images
by Tibor Sloboda, Lukáš Hudec, Matej Halinkovič and Wanda Benesova
J. Imaging 2024, 10(2), 32; https://doi.org/10.3390/jimaging10020032 - 25 Jan 2024
Viewed by 1979
Abstract
Histological staining is the primary method for confirming cancer diagnoses, but certain types, such as p63 staining, can be expensive and potentially damaging to tissues. In our research, we innovate by generating p63-stained images from H&E-stained slides for metaplastic breast cancer. This is a crucial development, considering the high costs and tissue risks associated with direct p63 staining. Our approach employs an advanced CycleGAN architecture, xAI-CycleGAN, enhanced with context-based loss to maintain structural integrity. The inclusion of convolutional attention in our model distinguishes between structural and color details more effectively, thus significantly enhancing the visual quality of the results. This approach shows a marked improvement over the base xAI-CycleGAN and standard CycleGAN models, offering the benefits of a more compact network and faster training even with the inclusion of attention.
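
A minimal sketch of the cycle-consistency loss that CycleGAN-style stain transfer rests on; G_hp and G_ph are placeholder generators for the two translation directions, not the authors' xAI-CycleGAN modules.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_hp, G_ph, he_batch, p63_batch, lam: float = 10.0):
    # Translate each domain to the other and back again
    rec_he = G_ph(G_hp(he_batch))    # H&E -> fake p63 -> reconstructed H&E
    rec_p63 = G_hp(G_ph(p63_batch))  # p63 -> fake H&E -> reconstructed p63
    # An L1 reconstruction penalty discourages the generators from altering
    # tissue structure while restaining
    return lam * (F.l1_loss(rec_he, he_batch) + F.l1_loss(rec_p63, p63_batch))
```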

Review

36 pages, 7878 KiB  
Review
Advances in Real-Time 3D Reconstruction for Medical Endoscopy
by Alexander Richter, Till Steinmann, Jean-Claude Rosenthal and Stefan J. Rupitsch
J. Imaging 2024, 10(5), 120; https://doi.org/10.3390/jimaging10050120 - 14 May 2024
Viewed by 2233
Abstract
This contribution is intended to provide researchers with a comprehensive overview of the current state-of-the-art concerning real-time 3D reconstruction methods suitable for medical endoscopy. Over the past decade, there have been various technological advancements in computational power and an increased research effort in many computer vision fields such as autonomous driving, robotics, and unmanned aerial vehicles. Some of these advancements can also be adapted to the field of medical endoscopy while coping with challenges such as featureless surfaces, varying lighting conditions, and deformable structures. To provide a comprehensive overview, a logical division into monocular, binocular, trinocular, and multiocular methods is performed, and active and passive methods are distinguished. Within these categories, we consider both flexible and non-flexible endoscopes to cover the state-of-the-art as fully as possible. The relevant error metrics to compare the publications presented here are discussed, and the choice of when to choose a GPU rather than an FPGA for camera-based 3D reconstruction is debated. We elaborate on the good practice of using datasets and provide a direct comparison of the presented work. It is important to note that in addition to medical publications, publications evaluated on the KITTI and Middlebury datasets are also considered to include related methods that may be suited for medical 3D reconstruction.
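
As a concrete example of one category the review covers (passive binocular reconstruction), here is a minimal sketch of dense stereo matching with OpenCV's semi-global block matcher; the parameters are generic defaults, not endoscopy-tuned values.

```python
import cv2

# Semi-global block matching on a rectified stereo pair
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be divisible by 16
    blockSize=5,
    P1=8 * 3 * 5 ** 2,   # smoothness penalties for small and large
    P2=32 * 3 * 5 ** 2,  # disparity changes between neighboring pixels
)

def disparity_map(left_gray, right_gray):
    # SGBM returns fixed-point disparities scaled by 16
    return stereo.compute(left_gray, right_gray).astype("float32") / 16.0

# depth = focal_length_px * baseline_m / disparity   (where disparity > 0)
```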

Other

22 pages, 604 KiB  
Systematic Review
The Accuracy of Three-Dimensional Soft Tissue Simulation in Orthognathic Surgery—A Systematic Review
by Anna Olejnik, Laurence Verstraete, Tomas-Marijn Croonenborghs, Constantinus Politis and Gwen R. J. Swennen
J. Imaging 2024, 10(5), 119; https://doi.org/10.3390/jimaging10050119 - 14 May 2024
Cited by 1 | Viewed by 1223
Abstract
Three-dimensional soft tissue simulation has become a popular tool in the process of virtual orthognathic surgery planning and patient–surgeon communication. To apply 3D soft tissue simulation software in routine clinical practice, both qualitative and quantitative validation of its accuracy are required. The objective of this study was to systematically review the literature on the accuracy of 3D soft tissue simulation in orthognathic surgery. The Web of Science, PubMed, Cochrane, and Embase databases were consulted for the literature search. The systematic review (SR) was conducted according to the PRISMA statement, and 40 articles fulfilled the inclusion and exclusion criteria. The QUADAS-2 tool was used for the risk of bias assessment for selected studies. A mean error varying from 0.27 mm to 2.9 mm for 3D soft tissue simulations for the whole face was reported. In the studies evaluating 3D soft tissue simulation accuracy after a Le Fort I osteotomy only, the upper lip and paranasal regions were reported to have the largest error, while after an isolated bilateral sagittal split osteotomy, the largest error was reported for the lower lip and chin regions. In the studies evaluating simulation after bimaxillary osteotomy with or without genioplasty, the highest inaccuracy was reported at the level of the lips, predominantly the lower lip, chin, and, sometimes, the paranasal regions. Due to the variability in the study designs and analysis methods, a direct comparison was not possible. Therefore, based on the results of this SR, guidelines to systematize the workflow for evaluating the accuracy of 3D soft tissue simulations in orthognathic surgery in future studies are proposed.
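
A minimal sketch of one common way such simulation accuracy can be quantified: the mean nearest-neighbor distance (in mm) between the simulated and the postoperative soft-tissue surfaces; representing both surfaces as point clouds is an assumption about the data, not a prescription from the review.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_error(simulated_pts: np.ndarray,
                       postop_pts: np.ndarray) -> float:
    # Both inputs: (N, 3) arrays of surface vertices in millimeters
    dists, _ = cKDTree(postop_pts).query(simulated_pts)
    return float(dists.mean())  # mean simulated-to-postoperative distance in mm
```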
