Computational Medical Image Analysis

A special issue of Computation (ISSN 2079-3197). This special issue belongs to the section "Computational Biology".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 32066

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor

Dr. Anando Sen
John Walton Muscular Dystrophy Research Centre, Translational and Clinical Research Institute, Newcastle University and Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne NE1 3BZ, UK
Interests: medical imaging; mathematical modelling; image quality; pathology correlation

Special Issue Information

Dear Colleagues,

It is my great pleasure to invite you to contribute to this Special Issue of Computation, titled “Computational Medical Image Analysis”, which is devoted to understanding the modern methodologies used in a variety of medical imaging applications. Colleagues from all over the world are invited to submit their manuscripts. These papers will follow a rigorous peer-review process to satisfy a high standard of publication.

Computational methods are extensively used in medical image analysis. With the development of high-performance systems as well as methodologies that can harness the power of these systems (e.g., machine learning and deep learning), this is an exciting era for imaging research. With novel methodologies it has been possible to provide previously unfathomable solutions to important problems. In this Special Issue we hope to put together a collection of such methods.

The scope of the issue is broad, but applications must be clinically relevant and patient-oriented. The use of both synthetic and real data is acceptable. Submissions covering a diverse range of imaging modalities, including CT, MR, SPECT, PET, ultrasound, photoacoustic imaging, and digital pathology, are encouraged. Topics for the Special Issue include, but are not limited to:

  • Image processing
  • Dual and multi-modality imaging
  • Image segmentation
  • Image registration
  • Tomographic reconstruction
  • Image quality assessment
  • Digital pathology applications
  • Dosimetry
  • Radiation oncology applications
  • Machine learning
  • Neural networks

Dr. Anando Sen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational imaging
  • medical imaging
  • anatomical imaging
  • functional imaging
  • machine learning
  • neural networks
  • digital pathology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (13 papers)


Editorial


3 pages, 149 KiB  
Editorial
Computational Medical Image Analysis: A Preface
by Anando Sen
Computation 2024, 12(6), 109; https://doi.org/10.3390/computation12060109 - 24 May 2024
Viewed by 727
Abstract
There has been immense progress in medical image analysis over the past decade [...] Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)

Research


18 pages, 447 KiB  
Article
COVID-19 Image Classification: A Comparative Performance Analysis of Hand-Crafted vs. Deep Features
by Sadiq Alinsaif
Computation 2024, 12(4), 66; https://doi.org/10.3390/computation12040066 - 30 Mar 2024
Cited by 2 | Viewed by 2167
Abstract
This study investigates techniques for medical image classification, specifically focusing on COVID-19 scans obtained through computed tomography (CT). Firstly, handcrafted methods based on feature engineering are explored due to their suitability for training traditional machine learning (TML) classifiers (e.g., Support Vector Machine (SVM)) when faced with limited medical image datasets. In this context, I comprehensively evaluate and compare 27 descriptor sets. More recently, deep learning (DL) models have successfully analyzed and classified natural and medical images. However, the scarcity of well-annotated medical images, particularly those related to COVID-19, presents challenges for training DL models from scratch. Consequently, I leverage deep features extracted from 12 pre-trained DL models for classification tasks. This work presents a comprehensive comparative analysis between TML and DL approaches in COVID-19 image classification. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
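The comparison above boils down to a common workflow: extract a fixed feature vector per image (handcrafted descriptors or deep features from a pre-trained network), then train a traditional classifier on it. A minimal sketch of that workflow, with synthetic features standing in for CT descriptors (the dimensions, class shift, and split below are illustrative assumptions, not the paper's setup):

```python
# Sketch of the "features + traditional ML classifier" pipeline compared in
# the study. The feature matrix is synthetic (random, linearly separable) and
# stands in for descriptors extracted from CT scans or a pre-trained CNN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 200 "images", 64-dimensional feature vectors, two classes (COVID / non-COVID)
n, d = 200, 64
X = rng.normal(size=(n, d))
y = (rng.random(n) > 0.5).astype(int)
X[y == 1, :8] += 2.0          # make the classes separable in a few dimensions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="linear", C=1.0)   # the TML classifier trained on the features
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

In the paper's setting the same classifier would simply be refit on each of the 27 descriptor sets or 12 deep-feature sets to compare them.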

14 pages, 2138 KiB  
Article
Creation of a Simulated Sequence of Dynamic Susceptibility Contrast—Magnetic Resonance Imaging Brain Scans as a Tool to Verify the Quality of Methods for Diagnosing Diseases Affecting Brain Tissue Perfusion
by Seweryn Lipiński
Computation 2024, 12(3), 54; https://doi.org/10.3390/computation12030054 - 8 Mar 2024
Cited by 1 | Viewed by 1516
Abstract
DSC-MRI examination is one of the best methods of diagnosis for brain diseases. For this purpose, the so-called perfusion parameters are defined, of which the most commonly used are CBF, CBV, and MTT. There are many approaches to determining these parameters, but regardless of the approach, there is a problem with the quality assessment of methods. To solve this problem, this article proposes a virtual DSC-MRI brain examination, which consists of two steps. The first step is to create curves that are typical for DSC-MRI studies and characteristic of different brain regions, i.e., the gray and white matter, and blood vessels. Using perfusion descriptors, the curves are classified into three sets, which give us the model curves for each of the three regions. The curves corresponding to the perfusion of different regions of the brain, in a suitable arrangement consistent with human anatomy, form a model of the DSC-MRI examination. In the created model, one knows in advance the values of the complex perfusion parameters, as well as the basic perfusion descriptors. The model study can be perturbed in a controlled manner, not only by adding noise, but also by determining the location of disturbances that are characteristic of specific brain diseases. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
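For readers unfamiliar with the three descriptors, they are not independent: cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are linked by the standard central volume principle (a textbook relation, not specific to the proposed model):

```latex
% Central volume principle for DSC-MRI perfusion parameters:
% MTT (mean transit time) equals blood volume divided by blood flow.
\mathrm{MTT} = \frac{\mathrm{CBV}}{\mathrm{CBF}}
```

This is why a simulated phantom in which two of the parameters are fixed in advance implicitly fixes the third as well.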

17 pages, 2693 KiB  
Article
Predicting Time-to-Healing from a Digital Wound Image: A Hybrid Neural Network and Decision Tree Approach Improves Performance
by Aravind Kolli, Qi Wei and Stephen A. Ramsey
Computation 2024, 12(3), 42; https://doi.org/10.3390/computation12030042 - 28 Feb 2024
Cited by 1 | Viewed by 2201
Abstract
Despite the societal burden of chronic wounds and despite advances in image processing, automated image-based prediction of wound prognosis is not yet in routine clinical practice. While specific tissue types are known to be positive or negative prognostic indicators, image-based wound healing prediction systems that have been demonstrated to date do not (1) use information about the proportions of tissue types within the wound and (2) predict time-to-healing (most predict categorical clinical labels). In this work, we analyzed a unique dataset of time-series images of healing wounds from a controlled study in dogs, as well as human wound images that are annotated for the tissue type composition. In the context of a hybrid-learning approach (neural network segmentation and decision tree regression) for the image-based prediction of time-to-healing, we tested whether explicitly incorporating tissue type-derived features into the model would improve the accuracy for time-to-healing prediction versus not including such features. We tested four deep convolutional encoder–decoder neural network models for wound image segmentation and identified, in the context of both original wound images and an augmented wound image-set, that a SegNet-type network trained on an augmented image set has best segmentation performance. Furthermore, using three different regression algorithms, we evaluated models for predicting wound time-to-healing using features extracted from the four best-performing segmentation models. We found that XGBoost regression using features that are (i) extracted from a SegNet-type network and (ii) reduced using principal components analysis performed the best for time-to-healing prediction. We demonstrated that a neural network model can classify the regions of a wound image as one of four tissue types, and demonstrated that adding features derived from the superpixel classifier improves the performance for healing-time prediction. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
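The dimensionality-reduction step described above (segmentation-derived features reduced with principal components analysis before regression) can be sketched with plain numpy. The feature count, sample count, and number of retained components below are illustrative assumptions, not the study's values:

```python
# Minimal PCA-by-SVD sketch of the feature-reduction step: center the
# segmentation-derived feature matrix, project onto the top principal
# components, and feed the reduced matrix to a regressor (e.g., XGBoost).
import numpy as np

rng = np.random.default_rng(1)
features = rng.normal(size=(120, 64))   # 120 wounds x 64 tissue-derived features

centered = features - features.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)  # S sorted descending

k = 10                                  # keep the first 10 principal components
reduced = centered @ Vt[:k].T           # input to the time-to-healing regressor
print(reduced.shape)
```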

14 pages, 1780 KiB  
Article
Two-Stage Input-Space Image Augmentation and Interpretable Technique for Accurate and Explainable Skin Cancer Diagnosis
by Catur Supriyanto, Abu Salam, Junta Zeniarja and Adi Wijaya
Computation 2023, 11(12), 246; https://doi.org/10.3390/computation11120246 - 5 Dec 2023
Cited by 1 | Viewed by 2955
Abstract
This research paper presents a deep-learning approach to early detection of skin cancer using image augmentation techniques. We introduce a two-stage image augmentation process utilizing geometric augmentation and a generative adversarial network (GAN) to differentiate skin cancer categories. The public HAM10000 dataset was used to test how well the proposed model worked. Various pre-trained convolutional neural network (CNN) models, including Xception, Inceptionv3, Resnet152v2, EfficientnetB7, InceptionresnetV2, and VGG19, were employed. Our approach demonstrates an accuracy of 96.90%, precision of 97.07%, recall of 96.87%, and F1-score of 96.97%, surpassing the performance of other state-of-the-art methods. The paper also discusses the use of Shapley Additive Explanations (SHAP), an interpretable technique for skin cancer diagnosis, which can help clinicians understand the reasoning behind the diagnosis and improve trust in the system. Overall, the proposed method presents a promising approach to automated skin cancer detection that could improve patient outcomes and reduce healthcare costs. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
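The first stage of the two-stage augmentation above is plain geometric augmentation, which can be sketched in a few lines of numpy; the GAN stage and the HAM10000 images themselves are not reproduced here, and the image size is an illustrative assumption:

```python
# Sketch of the geometric augmentation stage: flips and rotations applied to a
# dummy RGB image, tripling the effective sample count before the GAN stage.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((28, 28, 3))              # dummy RGB lesion image

flipped = np.flip(image, axis=1)             # horizontal flip
rotated = np.rot90(image, k=1, axes=(0, 1))  # 90-degree rotation

augmented = np.stack([image, flipped, rotated])
print(augmented.shape)                       # (3, 28, 28, 3)
```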

14 pages, 2530 KiB  
Article
Regional Contribution in Electrophysiological-Based Classifications of Attention Deficit Hyperactive Disorder (ADHD) Using Machine Learning
by Nishant Chauhan and Byung-Jae Choi
Computation 2023, 11(9), 180; https://doi.org/10.3390/computation11090180 - 8 Sep 2023
Cited by 5 | Viewed by 1675
Abstract
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental condition in children and is characterized by challenges in maintaining attention, hyperactivity, and impulsive behaviors. Despite ongoing research, we still do not fully understand what causes ADHD. Electroencephalography (EEG) has emerged as a valuable tool for investigating ADHD-related neural patterns due to its high temporal resolution and non-invasiveness. This study aims to contribute to diagnostic accuracy by leveraging EEG data to classify children with ADHD and healthy controls. We used a dataset containing EEG recordings from 60 children with ADHD and 60 healthy controls. The EEG data were captured during cognitive tasks and comprised signals from 19 channels across the scalp. Our primary objective was to develop a machine learning model capable of distinguishing ADHD subjects from controls using EEG data as discriminatory features. We employed several well-known classifiers, including a support vector machine, random forest, decision tree, AdaBoost, Naive Bayes, and linear discriminant analysis, to discern distinctive EEG patterns. To further enhance classification accuracy, we explored the impact of regional data on the classification outcomes. We arranged the EEG data according to the brain regions from which they were derived (namely frontal, temporal, central, parietal, and occipital) and examined their collective effects on the accuracy of our classifications. Notably, we considered combinations of three regions at a time and found that certain combinations led to enhanced accuracy. Our findings underscore the potential of EEG-based classification in distinguishing children with ADHD from healthy controls. The Naive Bayes classifier yielded the highest accuracy (84%) when applied to specific region combinations. Moreover, we evaluated the classification performance based on hemisphere-specific EEG data and found promising results, particularly when using the right hemisphere region channels. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
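The region-combination search described above (five scalp regions taken three at a time) is a small combinatorial enumeration; a sketch of how the candidate triples would be generated before feeding each triple's channels to the classifiers (the region list is from the abstract, the enumeration itself is illustrative):

```python
# Enumerate the candidate region triples evaluated in the study: all ways of
# choosing 3 of the 5 scalp regions, C(5, 3) = 10 combinations in total.
from itertools import combinations

regions = ["frontal", "temporal", "central", "parietal", "occipital"]
combos = list(combinations(regions, 3))
print(len(combos))        # 10 candidate region triples
```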

23 pages, 5297 KiB  
Article
Feet Segmentation for Regional Analgesia Monitoring Using Convolutional RFF and Layer-Wise Weighted CAM Interpretability
by Juan Carlos Aguirre-Arango, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computation 2023, 11(6), 113; https://doi.org/10.3390/computation11060113 - 8 Jun 2023
Cited by 2 | Viewed by 1575
Abstract
Regional neuraxial analgesia for pain relief during labor is a universally accepted, safe, and effective procedure involving administering medication into the epidural space. Still, an adequate assessment requires continuous patient monitoring after catheter placement. This research introduces a cutting-edge semantic thermal image segmentation method emphasizing superior interpretability for regional neuraxial analgesia monitoring. Namely, we propose a novel Convolutional Random Fourier Features-based approach, termed CRFFg, and custom-designed layer-wise weighted class-activation maps created explicitly for foot segmentation. Our method aims to enhance three well-known semantic segmentation architectures (FCN, UNet, and ResUNet). We have rigorously evaluated our methodology on a challenging dataset of foot thermal images from pregnant women who underwent epidural anesthesia; this dataset is distinguished by its limited size and significant variability. Furthermore, our validation results indicate that our proposed methodology not only delivers competitive results in foot segmentation but also significantly improves the explainability of the process. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
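The random Fourier feature (RFF) idea underlying CRFFg has a compact classical form: an explicit random feature map whose inner products approximate an RBF kernel (Rahimi and Recht's construction). The sketch below shows that standard construction only, not the authors' convolutional variant; the kernel width and feature count are illustrative assumptions:

```python
# Standard random Fourier features: z(x) = sqrt(2/D) * cos(W x + b) with
# W ~ N(0, 2*gamma*I) and b ~ U[0, 2*pi] satisfies
# z(x) . z(y) ~= exp(-gamma * ||x - y||^2) for large D.
import numpy as np

rng = np.random.default_rng(2)
gamma = 0.5          # RBF kernel parameter
D = 5000             # number of random features
d = 3                # input dimension

W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def rff(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = rff(x) @ rff(y)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
print(abs(approx - exact))   # small for large D
```

Making such a map convolutional and learnable, as CRFFg does, keeps the kernel interpretation while fitting it into a segmentation network.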

43 pages, 10592 KiB  
Article
Performance Investigation for Medical Image Evaluation and Diagnosis Using Machine-Learning and Deep-Learning Techniques
by Baidaa Mutasher Rashed and Nirvana Popescu
Computation 2023, 11(3), 63; https://doi.org/10.3390/computation11030063 - 20 Mar 2023
Cited by 9 | Viewed by 3744
Abstract
Today, medical image-based diagnosis has advanced significantly in the world. The number of studies being conducted in this field is enormous, and they are producing findings with a significant impact on humanity. The number of databases created in this field is skyrocketing. Examining these data is crucial to find important underlying patterns, and classification is an effective method for identifying them. This work proposes a deep investigation and analysis to evaluate and diagnose medical image data using various classification methods and to critically evaluate these methods’ effectiveness. The classification methods utilized include machine-learning (ML) algorithms like artificial neural networks (ANN), support vector machine (SVM), k-nearest neighbor (KNN), decision tree (DT), random forest (RF), Naïve Bayes (NB), logistic regression (LR), random subspace (RS), fuzzy logic, and a convolutional neural network (CNN) model of deep learning (DL). We applied these methods to two types of datasets: chest X-ray datasets to classify lung images into normal and abnormal, and melanoma skin cancer dermoscopy datasets to classify skin lesions into benign and malignant. This work aims to present a model that aids in investigating and assessing the effectiveness of ML approaches and DL using CNN in classifying medical databases, and in comparing these methods to identify the most robust ones that produce the best diagnostic performance. Our results show that the classification algorithms used achieve good results in terms of performance measures. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)

17 pages, 2378 KiB  
Article
Enhanced Pre-Trained Xception Model Transfer Learned for Breast Cancer Detection
by Shubhangi A. Joshi, Anupkumar M. Bongale, P. Olof Olsson, Siddhaling Urolagin, Deepak Dharrao and Arunkumar Bongale
Computation 2023, 11(3), 59; https://doi.org/10.3390/computation11030059 - 13 Mar 2023
Cited by 16 | Viewed by 4448
Abstract
Early detection and timely breast cancer treatment improve survival rates and patients’ quality of life. Hence, many computer-assisted techniques based on artificial intelligence are being introduced into the traditional diagnostic workflow. This inclusion of automatic diagnostic systems speeds up diagnosis and helps medical professionals by relieving their work pressure. This study proposes a breast cancer detection framework based on a deep convolutional neural network. To mine useful information about breast cancer through publicly available breast histopathology images at the 40× magnification factor, the BreakHis dataset and the IDC (invasive ductal carcinoma) dataset are used. Pre-trained convolutional neural network (CNN) models EfficientNetB0, ResNet50, and Xception are tested for this study. The top layers of these architectures are replaced by custom layers to make the whole architecture specific to the breast cancer detection task. The customized Xception model outperformed the other frameworks, giving an accuracy of 93.33% for the 40× zoom images of the BreakHis dataset. The networks are trained using 70% of the data, consisting of BreakHis 40× histopathological images, and validated on the remaining 30% of the 40× images as unseen testing and validation data. The histopathology image set is augmented by performing various image transforms, and dropout and batch normalization are used as regularization techniques. Further, the proposed model with the enhanced pre-trained Xception CNN is fine-tuned and tested on part of the IDC dataset. For the IDC dataset, the training, validation, and testing percentages are kept at 60%, 20%, and 20%, respectively. The model obtained an accuracy of 88.08% on the IDC dataset for recognizing invasive ductal carcinoma from H&E-stained histopathological tissue samples of breast tissue. Weights learned during training on the BreakHis dataset are kept the same while training the model on the IDC dataset. Thus, this study enhances and customizes the functionality of a pre-trained model for the task of classification on the BreakHis and IDC datasets, and also applies the transfer-learning approach of the designed model to another similar classification task. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)

21 pages, 4660 KiB  
Article
Sparse Reconstruction Using Hyperbolic Tangent as Smooth l1-Norm Approximation
by Hassaan Haider, Jawad Ali Shah, Kushsairy Kadir and Najeeb Khan
Computation 2023, 11(1), 7; https://doi.org/10.3390/computation11010007 - 4 Jan 2023
Cited by 4 | Viewed by 2148
Abstract
In the Compressed Sensing (CS) framework, an underdetermined system of linear equations (USLE) can have infinitely many possible solutions. We intend to find the sparsest one, which corresponds to l0-norm minimization. However, finding an l0-norm solution among infinitely many candidates is an NP-hard, non-convex optimization problem. It is a practically proven fact that the l0-norm penalty can be adequately estimated by the l1 norm, which recasts the non-convex minimization problem as a convex one. The l1 norm is non-differentiable, however, so gradient-based minimization algorithms are not directly applicable; for this reason, the l1 norm must be approximated by a smooth function. Iterative shrinkage algorithms provide an efficient method to numerically minimize the l1-regularized least-squares optimization problem, and they are required to induce sparsity in their solutions to meet the CS recovery requirement. In this research article, we develop a novel recovery method that uses the hyperbolic tangent function to recover undersampled signals/images in the CS framework. In our work, both the l1 norm and soft thresholding are approximated with hyperbolic tangent functions. We also propose criteria to tune the optimization parameters for optimal results, and we evaluate error bounds for the proposed l1-norm approximation. To evaluate the performance of our method, we utilize a dataset comprising a 1-D sparse signal, a compressively sampled MR image, and cardiac cine MRI. MRI is an important imaging modality for assessing cardiac vascular function, providing the ejection fraction and cardiac output of the heart; however, this advantage comes at the cost of a slow acquisition process, so speeding up acquisition is essential to take full advantage of cardiac cine MRI. Numerical results based on performance metrics such as Structural Similarity (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE) show that the proposed hyperbolic-tangent-based CS recovery offers much better performance than traditional Iterative Soft Thresholding (IST) recovery methods. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
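The smoothing idea in the abstract can be illustrated with one standard surrogate: |x| is well approximated by the differentiable function x·tanh(βx), which approaches |x| as β grows. This sketch shows the general principle only; the paper's exact approximations of the l1 norm and of soft thresholding are not reproduced here:

```python
# Differentiable surrogate for |x|: x * tanh(beta * x). For large beta the
# maximum gap to |x| shrinks like O(1/beta), so the l1 norm can be replaced
# by a smooth function that gradient-based solvers can handle.
import numpy as np

def smooth_abs(x, beta=100.0):
    """Smooth approximation of |x|; differentiable everywhere, including 0."""
    return x * np.tanh(beta * x)

x = np.linspace(-2, 2, 401)
err = np.max(np.abs(smooth_abs(x) - np.abs(x)))
print(err)    # small everywhere; the largest gap occurs near x = 0
```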

16 pages, 2213 KiB  
Article
Secure Medical Image Transmission Scheme Using Lorenz’s Attractor Applied in Computer Aided Diagnosis for the Detection of Eye Melanoma
by Daniel Fernando Santos and Helbert Eduardo Espitia
Computation 2022, 10(9), 158; https://doi.org/10.3390/computation10090158 - 14 Sep 2022
Cited by 3 | Viewed by 1811
Abstract
Early detection of diseases is vital for patient recovery. This article explains the design and technical matters of a computer-supported diagnostic system for eye melanoma detection implementing a security approach using chaotic-based encryption to guarantee communication security. The system is intended to provide a diagnosis; it can be applied in a cooperative environment for hospitals or telemedicine and can be extended to detect other types of eye diseases. The introduced method has been tested to assess the secret key, sensitivity, histogram, correlation, Number of Pixel Change Rate (NPCR), Unified Averaged Changed Intensity (UACI), and information entropy analysis. The main contribution is to offer a proposal for a diagnostic aid system for uveal melanoma. Considering the average values for 145 processed images, the results show that near-maximum NPCR values of 0.996 are obtained, along with near-safe UACI values of 0.296 and high entropy of 7.954 for the ciphered images. The presented design demonstrates an encryption technique based on chaotic attractors for image transfer through the network. In this article, important theoretical considerations for implementing this system are provided, the requirements and architecture of the system are explained, and the stages in which the diagnosis is carried out are described. Finally, the encryption process is explained and the results and conclusions are presented. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
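The two encryption-quality metrics reported above, NPCR and UACI, have standard pixel-wise definitions and are easy to compute. The sketch below uses random 8-bit "cipher images" as stand-ins for the paper's data; for independent uniform images, NPCR approaches 255/256 ≈ 0.996 and UACI approaches about 0.334:

```python
# Standard definitions: NPCR is the fraction of pixel positions at which two
# cipher images differ; UACI is the mean absolute intensity difference
# normalized by the 8-bit range.
import numpy as np

def npcr(c1, c2):
    """Number of Pixel Change Rate between two cipher images."""
    return np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified Averaged Changed Intensity between two cipher images."""
    return np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

rng = np.random.default_rng(3)
c1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

print(round(npcr(c1, c2), 3))   # close to 0.996 for independent uniform images
print(round(uaci(c1, c2), 3))   # close to 0.334 for independent uniform images
```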

Review


18 pages, 1903 KiB  
Review
Machine Learning in X-ray Diagnosis for Oral Health: A Review of Recent Progress
by Mónica Vieira Martins, Luís Baptista, Henrique Luís, Victor Assunção, Mário-Rui Araújo and Valentim Realinho
Computation 2023, 11(6), 115; https://doi.org/10.3390/computation11060115 - 10 Jun 2023
Cited by 5 | Viewed by 2985
Abstract
The past few decades have witnessed remarkable progress in the application of artificial intelligence (AI) and machine learning (ML) in medicine, notably in medical imaging. The application of ML to dental and oral imaging has also been developed, powered by the availability of clinical dental images. The present work aims to investigate recent progress concerning the application of ML in the diagnosis of oral diseases using oral X-ray imaging, namely the quality and outcome of such methods. The specific research question was developed using the PICOT methodology. The review was conducted in the Web of Science, Science Direct, and IEEE Xplore databases, for articles reporting the use of ML and AI for diagnostic purposes in X-ray-based oral imaging. Imaging types included panoramic, periapical, bitewing X-ray images, and oral cone beam computed tomography (CBCT). The search was limited to papers published in the English language from 2018 to 2022. The initial search included 104 papers that were assessed for eligibility. Of these, 22 were included for a final appraisal. The full text of the articles was carefully analyzed and the relevant data such as the clinical application, the ML models, the metrics used to assess their performance, and the characteristics of the datasets, were registered for further analysis. The paper discusses the opportunities, challenges, and limitations found. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)

17 pages, 6304 KiB  
Review
Survey of Recent Deep Neural Networks with Strong Annotated Supervision in Histopathology
by Dominika Petríková and Ivan Cimrák
Computation 2023, 11(4), 81; https://doi.org/10.3390/computation11040081 - 14 Apr 2023
Cited by 2 | Viewed by 2454
Abstract
Deep learning (DL) and convolutional neural networks (CNNs) have achieved state-of-the-art performance in many medical image analysis tasks. Histopathological images contain valuable information that can be used to diagnose diseases and create treatment plans. Therefore, the application of DL for the classification of histological images is a rapidly expanding field of research. The popularity of CNNs has led to a rapid growth in the number of works related to CNNs in histopathology. This paper aims to provide a clear overview for better navigation. In this paper, recent DL-based classification studies in histopathology using strongly annotated data have been reviewed. All the works have been categorized from two points of view. First, the studies have been categorized into three groups according to the training approach and model construction: 1. fine-tuning of pre-trained networks for one-stage classification, 2. training networks from scratch for one-stage classification, and 3. multi-stage classification. Second, the papers summarized in this study cover a wide range of applications (e.g., breast, lung, colon, brain, kidney). To help navigate through the studies, the classification of reviewed works into tissue classification, tissue grading, and biomarker identification was used. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
