Advances in Biomedical Image Processing and Artificial Intelligence for Computer-Aided Diagnosis in Medicine

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 6782

Special Issue Editors


Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Interests: computer vision; image processing; machine learning; deep learning; artificial intelligence; medical image analysis; biomedical image analysis

Guest Editor
1. Ri.MED Foundation, via Bandiera 11, 90133 Palermo, Italy
2. Research Affiliate Long Term, Laboratory of Computational Computer Vision (LCCV), School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Interests: biomedical image processing and analysis; radiomics; artificial intelligence; machine learning; deep learning

Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, Via Ospedale 72, 09124 Cagliari, Italy
Interests: computer vision; image retrieval; biomedical image analysis; pattern recognition and machine learning

Guest Editor
Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy
Interests: computer vision; medical image analysis; shape analysis and matching; image retrieval and classification

Guest Editor
Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
Interests: non-invasive imaging techniques: positron emission tomography (PET), computerized tomography (CT), and magnetic resonance (MR); radiomics and artificial intelligence in clinical health care applications; processing, quantification, and correction methods for ex vivo and in vivo medical images

Special Issue Information

Dear Colleagues, 

With the digitization of medical data, a wide range of artificial intelligence techniques is now being employed in clinical practice. Radiomics and texture analysis are possible through the use of positron emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). Machine and deep learning techniques can help improve therapeutic tools, diagnostic decisions, and rehabilitation. Nevertheless, diagnosing patients has become more difficult due to the abundance of data from different imaging modalities, patient diversity, and the need to combine data from heterogeneous sources, which gives rise to the domain shift problem. Radiologists and pathologists increasingly rely on computer-aided diagnosis (CAD) systems to analyze biomedical images and address these challenges. CAD systems help reduce inter- and intra-observer variability, which arises when different physicians, or the same physician at different times, assess the same region under the same assumptions. Additionally, access to clinical data can be restricted by privacy, security, and intellectual property concerns; synthetic data are increasingly being explored to address these issues.

This Special Issue is connected to the 2nd International Workshop on Artificial Intelligence and Radiomics in Computer-Aided Diagnosis (AIRCAD 2023) but is also open to other submissions within its scope. It will cover the latest developments in biomedical image processing using machine learning, deep learning, artificial intelligence, and radiomics features, focusing on practical applications and their integration into the medical image processing workflow.

Potential topics include but are not limited to the following:

  • biomedical image processing;
  • machine and deep learning techniques for image analysis (e.g., the segmentation of cells, tissues, organs, and lesions; the classification of cells, diseases, tumors, etc.);
  • image registration techniques;
  • image preprocessing techniques;
  • image-based 3D reconstruction;
  • computer-aided detection and diagnosis (CAD) systems;
  • biomedical image analysis;
  • radiomics and artificial intelligence for personalized medicine;
  • machine and deep learning as tools to support medical diagnoses and decisions;
  • image retrieval (e.g., content-based retrieval and lesion similarity);
  • CAD architectures;
  • advanced architectures for biomedical image remote processing, elaboration, and transmission;
  • 3D vision and virtual, augmented, and mixed reality applications for remote surgery;
  • image processing techniques for privacy-preserving AI in medicine.

Dr. Andrea Loddo
Dr. Albert Comelli
Dr. Cecilia Di Ruberto
Dr. Lorenzo Putzu
Dr. Alessandro Stefano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • CAD architectures
  • biomedical image processing
  • machine learning
  • deep learning
  • biomedical image analysis
  • radiomics
  • artificial intelligence
  • personalized medicine
  • privacy-preserving AI

Published Papers (6 papers)


Research


31 pages, 5788 KiB  
Article
Automated Lung Cancer Diagnosis Applying Butterworth Filtering, Bi-Level Feature Extraction, and Sparce Convolutional Neural Network to Luna 16 CT Images
by Nasr Y. Gharaibeh, Roberto De Fazio, Bassam Al-Naami, Abdel-Razzak Al-Hinnawi and Paolo Visconti
J. Imaging 2024, 10(7), 168; https://doi.org/10.3390/jimaging10070168 - 15 Jul 2024
Abstract
Accurate prognosis and diagnosis are crucial for selecting and planning lung cancer treatments. As a result of the rapid development of medical imaging technology, the use of computed tomography (CT) scans in pathology is becoming standard practice. An intricate interplay of requirements and obstacles characterizes computer-assisted diagnosis, which relies on the precise and effective analysis of pathology images. In recent years, pathology image analysis tasks such as tumor region identification, prognosis prediction, tumor microenvironment characterization, and metastasis detection have witnessed the considerable potential of artificial intelligence, especially deep learning techniques. In this context, an artificial intelligence (AI)-based methodology for lung cancer diagnosis is proposed in this research work. As a first processing step, filtering using the Butterworth smooth filter algorithm was applied to the input images from the LUNA 16 lung cancer dataset to remove noise without significantly degrading the image quality. Then, we performed the bi-level feature selection step using the Chaotic Crow Search Algorithm and Random Forest (CCSA-RF) approach to select features such as diameter, margin, spiculation, lobulation, subtlety, and malignancy. Subsequently, the feature extraction step was performed using the Multi-space Image Reconstruction (MIR) method with the Grey Level Co-occurrence Matrix (GLCM). Finally, the Lung Tumor Severity Classification (LTSC) was implemented using the Sparse Convolutional Neural Network (SCNN) approach with a Probabilistic Neural Network (PNN). The developed method can detect benign, normal, and malignant lung cancer images using the PNN algorithm, which reduces complexity and efficiently provides classification results.
Performance parameters, namely accuracy, precision, F-score, sensitivity, and specificity, were determined to evaluate the effectiveness of the implemented hybrid method and to compare it with other solutions already present in the literature.
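
As a rough illustration of the pipeline's first step, the sketch below applies a frequency-domain Butterworth low-pass filter to a noisy synthetic image with NumPy; the cutoff and order values here are arbitrary demo choices, not the parameters used in the paper.

```python
import numpy as np

def butterworth_lowpass(img, cutoff=0.15, order=2):
    """Frequency-domain Butterworth low-pass filter for a 2D image.

    H(u, v) = 1 / (1 + (D(u, v) / D0)^(2 * order)), where D is the
    distance from the spectrum origin (in cycles/sample) and D0 is
    the cutoff frequency.
    """
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    dist = np.sqrt(u ** 2 + v ** 2)
    h = 1.0 / (1.0 + (dist / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

# Toy example: denoise a synthetic 64x64 "slice" containing a bright square.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[20:44, 20:44] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
smoothed = butterworth_lowpass(noisy, cutoff=0.15, order=2)
```

The smooth roll-off of the Butterworth transfer function is what lets the filter suppress high-frequency noise without the ringing a hard ideal cutoff would introduce.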

21 pages, 1918 KiB  
Article
Residual-Based Multi-Stage Deep Learning Framework for Computer-Aided Alzheimer’s Disease Detection
by Najmul Hassan, Abu Saleh Musa Miah and Jungpil Shin
J. Imaging 2024, 10(6), 141; https://doi.org/10.3390/jimaging10060141 - 11 Jun 2024
Abstract
Alzheimer’s Disease (AD) poses a significant health risk globally, particularly among the elderly population. Recent studies underscore its prevalence, with over 50% of elderly Japanese facing a lifetime risk of dementia, primarily attributed to AD. As the most prevalent form of dementia, AD gradually erodes brain cells, leading to severe neurological decline. In this scenario, it is important to develop an automatic AD-detection system, and many researchers have been working to develop an AD-detection system by taking advantage of the advancement of deep learning (DL) techniques, which have shown promising results in various domains, including medical image analysis. However, existing approaches for AD detection often suffer from limited performance due to the complexities associated with training hierarchical convolutional neural networks (CNNs). In this paper, we introduce a novel multi-stage deep neural network architecture based on residual functions to address the limitations of existing AD-detection approaches. Inspired by the success of residual networks (ResNets) in image-classification tasks, our proposed system comprises five stages, each explicitly formulated to enhance feature effectiveness while maintaining model depth. Following feature extraction, a deep learning-based feature-selection module is applied to mitigate overfitting, incorporating batch normalization, dropout and fully connected layers. Subsequently, machine learning (ML)-based classification algorithms, including Support Vector Machines (SVM), Random Forest (RF) and SoftMax, are employed for classification tasks. Comprehensive evaluations conducted on three benchmark datasets, namely ADNI1: Complete 1Yr 1.5T, MIRAID and OASIS Kaggle, demonstrate the efficacy of our proposed model. Impressively, our model achieves accuracy rates of 99.47%, 99.10% and 99.70% for ADNI1: Complete 1Yr 1.5T, MIRAID and OASIS datasets, respectively, outperforming existing systems in binary class problems. 
Our proposed model represents a significant advancement in the AD-analysis domain.
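
The residual stages the authors build on can be illustrated with a minimal NumPy sketch of a ResNet-style block, y = ReLU(x + F(x)); the two-layer transform and toy dimensions below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """ResNet-style block: y = ReLU(x + F(x)), with F(x) a small
    two-layer transform and an identity shortcut carrying x through."""
    f = relu(x @ w1) @ w2   # residual function F(x)
    return relu(x + f)      # shortcut connection + activation

rng = np.random.default_rng(1)
d = 8
x = rng.standard_normal((4, d))        # a batch of 4 feature vectors
w1 = 0.1 * rng.standard_normal((d, d))
w2 = 0.1 * rng.standard_normal((d, d))
y = residual_block(x, w1, w2)
```

The shortcut means each stage only has to learn a correction F(x) to its input, which is what eases the training of the deeper hierarchies the paper refers to.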

21 pages, 12872 KiB  
Article
Optimizing Vision Transformers for Histopathology: Pretraining and Normalization in Breast Cancer Classification
by Giulia Lucrezia Baroni, Laura Rasotto, Kevin Roitero, Angelica Tulisso, Carla Di Loreto and Vincenzo Della Mea
J. Imaging 2024, 10(5), 108; https://doi.org/10.3390/jimaging10050108 - 30 Apr 2024
Abstract
This paper introduces a self-attention Vision Transformer model specifically developed for classifying breast cancer in histology images. We examine various training strategies and configurations, including pretraining, dimension resizing, data augmentation and color normalization strategies, patch overlap, and patch size configurations, in order to evaluate their impact on the effectiveness of the histology image classification. Additionally, we provide evidence for the increase in effectiveness gathered through geometric and color data augmentation techniques. We primarily utilize the BACH dataset to train and validate our methods and models, but we also test them on two additional datasets, BRACS and AIDPATH, to verify their generalization capabilities. Our model, developed from a transformer pretrained on ImageNet, achieves an accuracy rate of 0.91 on the BACH dataset, 0.74 on the BRACS dataset, and 0.92 on the AIDPATH dataset. Using a model based on the prostate small and prostate medium HistoEncoder models, we achieve accuracy rates of 0.89 and 0.86, respectively. Our results suggest that pretraining on large-scale general datasets like ImageNet is advantageous. We also show the potential benefits of using domain-specific pretraining datasets, such as extensive histopathological image collections as in HistoEncoder, though not yet with clear advantages.
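
The patch-size configurations the paper explores determine the token sequence a Vision Transformer attends over; a minimal NumPy sketch of that patchification step (patch size 16 is a common ViT default here, not the paper's chosen setting):

```python
import numpy as np

def patchify(img, patch=16):
    """Split an H x W x C image into flattened non-overlapping patches,
    i.e. the token sequence a Vision Transformer embeds and attends over."""
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    x = img[:gh * patch, :gw * patch]                      # crop any remainder
    x = x.reshape(gh, patch, gw, patch, c).swapaxes(1, 2)  # grid of patches
    return x.reshape(gh * gw, patch * patch * c)

img = np.arange(64 * 64 * 3, dtype=float).reshape(64, 64, 3)
tokens = patchify(img, patch=16)   # 16 tokens, each of length 16*16*3 = 768
```

Smaller patches (or overlapping ones, as the paper also tests) yield longer token sequences and finer spatial detail at a quadratic cost in self-attention.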

14 pages, 5416 KiB  
Article
Attention-Enhanced Unpaired xAI-GANs for Transformation of Histological Stain Images
by Tibor Sloboda, Lukáš Hudec, Matej Halinkovič and Wanda Benesova
J. Imaging 2024, 10(2), 32; https://doi.org/10.3390/jimaging10020032 - 25 Jan 2024
Abstract
Histological staining is the primary method for confirming cancer diagnoses, but certain types, such as p63 staining, can be expensive and potentially damaging to tissues. In our research, we innovate by generating p63-stained images from H&E-stained slides for metaplastic breast cancer. This is a crucial development, considering the high costs and tissue risks associated with direct p63 staining. Our approach employs an advanced CycleGAN architecture, xAI-CycleGAN, enhanced with context-based loss to maintain structural integrity. The inclusion of convolutional attention in our model distinguishes between structural and color details more effectively, thus significantly enhancing the visual quality of the results. This approach shows a marked improvement over the base xAI-CycleGAN and standard CycleGAN models, offering the benefits of a more compact network and faster training even with the inclusion of attention.
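
CycleGAN-style unpaired translation, on which xAI-CycleGAN builds, is trained with a cycle-consistency loss: translating an image to the other stain domain and back should recover the original. A minimal sketch with toy invertible "generators" standing in for the real networks (the mappings below are hypothetical, purely for illustration):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle loss ||G_BA(G_AB(x)) - x||_1 over a batch of values."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

# Toy "generators": a linear map and its exact inverse.
g_ab = lambda x: 2.0 * x + 1.0       # hypothetical H&E -> p63 mapping
g_ba = lambda y: (y - 1.0) / 2.0     # its inverse, p63 -> H&E
x = np.linspace(0.0, 1.0, 5)
perfect = cycle_consistency_loss(x, g_ab, g_ba)        # ~0: cycle closes
broken = cycle_consistency_loss(x, g_ab, lambda y: y)  # > 0: cycle broken
```

It is this cycle constraint that lets the model learn stain translation without paired H&E/p63 examples of the same tissue section.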

Review


36 pages, 7878 KiB  
Review
Advances in Real-Time 3D Reconstruction for Medical Endoscopy
by Alexander Richter, Till Steinmann, Jean-Claude Rosenthal and Stefan J. Rupitsch
J. Imaging 2024, 10(5), 120; https://doi.org/10.3390/jimaging10050120 - 14 May 2024
Abstract
This contribution is intended to provide researchers with a comprehensive overview of the current state-of-the-art concerning real-time 3D reconstruction methods suitable for medical endoscopy. Over the past decade, there have been various technological advancements in computational power and an increased research effort in many computer vision fields such as autonomous driving, robotics, and unmanned aerial vehicles. Some of these advancements can also be adapted to the field of medical endoscopy while coping with challenges such as featureless surfaces, varying lighting conditions, and deformable structures. To provide a comprehensive overview, a logical division of monocular, binocular, trinocular, and multiocular methods is performed and also active and passive methods are distinguished. Within these categories, we consider both flexible and non-flexible endoscopes to cover the state-of-the-art as fully as possible. The relevant error metrics to compare the publications presented here are discussed, and the choice of when to choose a GPU rather than an FPGA for camera-based 3D reconstruction is debated. We elaborate on the good practice of using datasets and provide a direct comparison of the presented work. It is important to note that in addition to medical publications, publications evaluated on the KITTI and Middlebury datasets are also considered to include related methods that may be suited for medical 3D reconstruction.
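
A building block shared by the binocular (stereo) methods surveyed is triangulation from disparity via the pinhole model, Z = f * B / d; a minimal sketch (the focal length and baseline below are illustrative numbers, not the specifications of any endoscope):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Pinhole stereo triangulation: Z = f * B / d. Zero or negative
    disparities carry no depth information and are mapped to NaN."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)
    return focal_px * baseline_mm / d

# Illustrative values: 700 px focal length, 4 mm stereo baseline.
disparity = np.array([10.0, 20.0, 0.0])
depth_mm = depth_from_disparity(disparity, focal_px=700.0, baseline_mm=4.0)
```

The inverse relationship between disparity and depth is also why the small baselines imposed by endoscope geometry limit depth resolution at larger working distances.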

Other


22 pages, 604 KiB  
Systematic Review
The Accuracy of Three-Dimensional Soft Tissue Simulation in Orthognathic Surgery—A Systematic Review
by Anna Olejnik, Laurence Verstraete, Tomas-Marijn Croonenborghs, Constantinus Politis and Gwen R. J. Swennen
J. Imaging 2024, 10(5), 119; https://doi.org/10.3390/jimaging10050119 - 14 May 2024
Abstract
Three-dimensional soft tissue simulation has become a popular tool in the process of virtual orthognathic surgery planning and patient–surgeon communication. To apply 3D soft tissue simulation software in routine clinical practice, both qualitative and quantitative validation of its accuracy are required. The objective of this study was to systematically review the literature on the accuracy of 3D soft tissue simulation in orthognathic surgery. The Web of Science, PubMed, Cochrane, and Embase databases were consulted for the literature search. The systematic review (SR) was conducted according to the PRISMA statement, and 40 articles fulfilled the inclusion and exclusion criteria. The QUADAS-2 tool was used for the risk of bias assessment for the selected studies. A mean error ranging from 0.27 mm to 2.9 mm for 3D soft tissue simulations of the whole face was reported. In the studies evaluating 3D soft tissue simulation accuracy after a Le Fort I osteotomy only, the upper lip and paranasal regions were reported to have the largest error, while after an isolated bilateral sagittal split osteotomy, the largest error was reported for the lower lip and chin regions. In the studies evaluating simulation after bimaxillary osteotomy with or without genioplasty, the highest inaccuracy was reported at the level of the lips, predominantly the lower lip, chin, and, sometimes, the paranasal regions. Due to the variability in the study designs and analysis methods, a direct comparison was not possible. Therefore, based on the results of this SR, guidelines to systematize the workflow for evaluating the accuracy of 3D soft tissue simulations in orthognathic surgery in future studies are proposed.
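
The whole-face errors reported (0.27 mm to 2.9 mm) are typically mean distances between the simulated and postoperative soft tissue surfaces; a minimal sketch of such a metric over corresponding points (an illustration of the idea, not any reviewed study's exact protocol):

```python
import numpy as np

def mean_surface_error_mm(simulated, postoperative):
    """Mean Euclidean distance (in mm) between corresponding simulated
    and postoperative 3D surface points, each array of shape (N, 3)."""
    return float(np.mean(np.linalg.norm(simulated - postoperative, axis=1)))

# Toy check: a surface displaced by a uniform 2 mm along the z axis.
rng = np.random.default_rng(2)
simulated = rng.standard_normal((100, 3))
postoperative = simulated + np.array([0.0, 0.0, 2.0])
error = mean_surface_error_mm(simulated, postoperative)
```

In practice the reviewed studies differ in how correspondence is established (e.g. closest-point versus landmark-based matching), which is one reason the SR finds direct comparison difficult.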
