Medical Imaging & Image Processing III

A special issue of Technologies (ISSN 2227-7080). This special issue belongs to the section "Information and Communication Technologies".

Deadline for manuscript submissions: 30 September 2024 | Viewed by 29142

Special Issue Editors


Prof. Dr. Yudong Zhang
Guest Editor
Informatics Building, School of Informatics, University of Leicester, Leicester LE1 7RH, UK
Interests: deep learning; artificial intelligence; machine learning

Dr. Zhengchao Dong
Guest Editor
1. Professor, Molecular Imaging and Neuropathology Division, Columbia University, New York, NY 10032, USA
2. Research Scientist, New York State Psychiatric Institute, New York, NY 10032, USA
Interests: magnetic resonance spectroscopy imaging

Special Issue Information

Dear Colleagues,

Medical imaging is becoming an essential component in various fields of biomedical research and clinical practice: neuroscientists detect regional metabolic brain activity from positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and magnetic resonance spectroscopic imaging (MRSI) scans; biologists study cells and generate 3D confocal microscopy datasets; virologists generate 3D reconstructions of viruses from micrographs; and radiologists identify and quantify tumors from MRI and computed tomography (CT) scans.

Image processing, in turn, encompasses the analysis, enhancement, and display of biomedical images. Image reconstruction and modeling techniques allow instant processing of 2D signals to create 3D images. Image processing and analysis can be used to determine the diameter, volume, and vasculature of a tumor or organ; flow parameters of blood or other fluids; and microscopic changes not yet discernible by other means. Image classification techniques help to identify subjects suffering from particular diseases and to locate disease-related regions.

This Special Issue aims to provide a diverse, but complementary, set of contributions to demonstrate new developments and applications in the field of medical imaging and image processing.

The previous Special Issues in this series can be found here:

https://www.mdpi.com/journal/technologies/special_issues/medical_imaging

https://www.mdpi.com/journal/technologies/special_issues/medical_imaging_II

Prof. Dr. Yudong Zhang
Dr. Zhengchao Dong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Technologies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical imaging
  • magnetic resonance imaging
  • neuroimaging
  • X-ray
  • computed tomography
  • mammography
  • image processing and analysis
  • computer vision
  • machine learning
  • artificial intelligence
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Editorial


4 pages, 451 KiB  
Editorial
Medical Imaging and Image Processing
by Yudong Zhang and Zhengchao Dong
Technologies 2023, 11(2), 54; https://doi.org/10.3390/technologies11020054 - 5 Apr 2023
Cited by 8 | Viewed by 6177
Abstract
Medical imaging (MI) [...] Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)

Research


28 pages, 10052 KiB  
Article
A Collaborative Federated Learning Framework for Lung and Colon Cancer Classifications
by Md. Munawar Hossain, Md. Robiul Islam, Md. Faysal Ahamed, Mominul Ahsan and Julfikar Haider
Technologies 2024, 12(9), 151; https://doi.org/10.3390/technologies12090151 - 4 Sep 2024
Viewed by 594
Abstract
Lung and colon cancers are common types of cancer with significant fatality rates. Early identification considerably improves the odds of survival for those suffering from these diseases. Histopathological image analysis is crucial for detecting cancer by identifying morphological anomalies in tissue samples. Regulations such as HIPAA and the GDPR impose considerable restrictions on the sharing of sensitive patient data, mostly because of privacy concerns. Federated learning (FL) is a promising technique that allows the training of strong models while maintaining data privacy. A federated learning strategy is proposed in this study to address privacy concerns in cancer categorization. To classify histopathological images of lung and colon cancers, this methodology uses local models with an Inception-V3 backbone. The global model is then updated on the basis of the local weights. The images were obtained from the LC25000 dataset, which consists of five separate classes. Separate analyses were performed for lung cancer, colon cancer, and their combined classification. The implemented model successfully classified lung cancer images into three separate classes with a classification accuracy of 99.867%. The classification of colon cancer images was achieved with 100% accuracy. More significantly, for the lung and colon cancers combined, the accuracy reached an impressive 99.720%. Compared with other current approaches, the proposed framework showed an improved performance. A heatmap, visual saliency map, and Grad-CAM were generated to pinpoint the crucial areas in the histopathology images of the test set on which the models focused during cancer class predictions. This approach demonstrates the potential of federated learning to enhance collaborative efforts in automated disease diagnosis through medical image analysis while ensuring patient data privacy. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
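
As a sketch of the aggregation step this abstract describes, the snippet below implements plain federated averaging (FedAvg) over per-client weight lists; the clients, sizes, and two-tensor model are synthetic stand-ins, and the authors' exact update rule for the Inception-V3 backbone may differ.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine per-client model weights into a global model (FedAvg).

    client_weights: one list of np.ndarray parameter tensors per client.
    client_sizes:   local training-set sizes, used to weight clients.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Sample-size-weighted average of this layer across clients.
        layer_avg = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        global_weights.append(layer_avg)
    return global_weights

# Toy round: three hypothetical clients sharing a two-tensor model.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [1200, 800, 500]
global_model = federated_average(clients, sizes)
print(global_model[0].shape, global_model[1].shape)  # (4, 4) (4,)
```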

12 pages, 3040 KiB  
Article
Full-Wave Simulation of a Helmholtz Radiofrequency Coil for Magnetic Resonance Applications
by Giulio Giovannetti, Denis Burov, Angelo Galante and Francesca Frijia
Technologies 2024, 12(9), 150; https://doi.org/10.3390/technologies12090150 - 3 Sep 2024
Viewed by 497
Abstract
Magnetic resonance imaging (MRI) is a non-invasive diagnostic technique able to provide information about the anatomical, structural, and functional properties of different organs. A magnetic resonance (MR) scanner employs radiofrequency (RF) coils to generate a magnetic field to excite the nuclei in the sample (transmit coil) and pick up the signals emitted by the nuclei (receive coil). To avoid trial-and-error approaches and optimize the RF coil performance for a given application, accurate design and simulation processes must be performed. We describe the full-wave simulation of a Helmholtz coil for high-field MRI performed with the finite-difference time-domain (FDTD) method, investigating magnetic field pattern differences between loaded and unloaded conditions. Moreover, the self-inductance of the single loops constituting the Helmholtz coil was estimated, as well as the frequency splitting between loops due to inductive coupling and the sample-induced resistance. The result accuracy was verified with data acquired with a Helmholtz prototype for small phantom experiments with a 3T MR clinical scanner. Finally, the magnetic field variations and coil detuning after the insertion of the RF shield were evaluated. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
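
A full-wave FDTD simulation is beyond a short snippet, but the ideal on-axis field of a Helmholtz pair (two coaxial loops separated by their common radius) follows from the Biot–Savart law. The magnetostatic sketch below, with hypothetical coil values, is only a sanity check against which simulated field maps can be compared, not the authors' FDTD workflow.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_bz(z, radius, current):
    """On-axis field of a single circular loop (Biot-Savart)."""
    return MU0 * current * radius**2 / (2.0 * (radius**2 + z**2) ** 1.5)

def helmholtz_bz(z, radius, current):
    """On-axis field of a Helmholtz pair: two loops at z = +/- radius/2."""
    return (loop_bz(z - radius / 2, radius, current)
            + loop_bz(z + radius / 2, radius, current))

R, I = 0.05, 1.0                 # 5 cm radius, 1 A (hypothetical values)
z = np.linspace(-0.02, 0.02, 5)  # axial positions around the centre (m)
print(helmholtz_bz(z, R, I))     # nearly uniform near the centre
```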

20 pages, 1102 KiB  
Article
Training Artificial Neural Networks to Detect Multiple Sclerosis Lesions Using Granulometric Data from Preprocessed Magnetic Resonance Images with Morphological Transformations
by Edgar Rafael Ponce de Leon-Sanchez, Jorge Domingo Mendiola-Santibañez, Omar Arturo Dominguez-Ramirez, Ana Marcela Herrera-Navarro, Alberto Vazquez-Cervantes, Hugo Jimenez-Hernandez, Diana Margarita Cordova-Esparza, María de los Angeles Cuán Hernández and Horacio Senties-Madrid
Technologies 2024, 12(9), 145; https://doi.org/10.3390/technologies12090145 - 31 Aug 2024
Viewed by 585
Abstract
The symptoms of multiple sclerosis (MS) are determined by the location of demyelinating lesions in the white matter of the brain and spinal cord. Currently, magnetic resonance imaging (MRI) is the most common tool used for diagnosing MS, understanding the course of the disease, and analyzing the effects of treatments. However, undesirable components may appear during the generation of MRI scans, such as noise or intensity variations. Mathematical morphology (MM) is a powerful image analysis technique that helps to filter the image and extract relevant structures. Granulometry is a mathematical morphology measurement tool that determines the size distribution of objects in an image without explicitly segmenting each object. While several methods have been proposed for the automatic segmentation of MS lesions in MRI scans, in some cases, only simple data preprocessing, such as image resizing to standardize the input dimensions, has been performed before the algorithm training. Therefore, this paper proposes an MRI preprocessing algorithm capable of performing elementary morphological transformations in brain images of MS patients and healthy individuals in order to remove undesirable components and extract relevant structures such as MS lesions. Also, the algorithm computes the granulometry in MRI scans to describe the size characteristics of lesions. Using this algorithm, we trained two artificial neural networks (ANNs) to predict MS diagnoses. By computing the differences in granulometry measurements between an image with MS lesions and a reference image (without lesions), we determined the size characterization of the lesions. Then, the ANNs were evaluated with the validation set, and the performance results (test accuracy = 0.9753; cross-entropy loss = 0.0247) show that the proposed algorithm can support specialists in making decisions to diagnose MS and estimating the disease progress based on granulometry values. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
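
The granulometry idea can be sketched with successive morphological openings of increasing structuring-element size; the synthetic binary image and radii below are illustrative, and this is a generic pattern spectrum rather than the paper's exact preprocessing algorithm.

```python
import numpy as np
from skimage.morphology import disk, opening

def granulometry(image, max_radius=8):
    """Pattern spectrum: fraction of image 'mass' removed by openings
    with disks of increasing radius (a generic MM size descriptor)."""
    masses = [float(image.sum())]
    for r in range(1, max_radius + 1):
        masses.append(float(opening(image, disk(r)).sum()))
    masses = np.asarray(masses)
    return -np.diff(masses) / masses[0]  # mass removed per radius step

# Synthetic binary image with two bright "lesions" of different size.
img = np.zeros((64, 64), dtype=np.uint8)
img[10:16, 10:16] = 1    # small blob
img[30:45, 30:45] = 1    # large blob
print(np.round(granulometry(img), 3))  # peaks near each blob's radius
```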

42 pages, 13997 KiB  
Article
Multi-Scale CNN: An Explainable AI-Integrated Unique Deep Learning Framework for Lung-Affected Disease Classification
by Ovi Sarkar, Md. Robiul Islam, Md. Khalid Syfullah, Md. Tohidul Islam, Md. Faysal Ahamed, Mominul Ahsan and Julfikar Haider
Technologies 2023, 11(5), 134; https://doi.org/10.3390/technologies11050134 - 30 Sep 2023
Cited by 5 | Viewed by 3120
Abstract
Lung-related diseases continue to be a leading cause of global mortality. Timely and precise diagnosis is crucial to save lives, but the availability of testing equipment remains a challenge, often coupled with issues of reliability. Recent research has highlighted the potential of Chest X-ray (CXR) images in identifying various lung diseases, including COVID-19, fibrosis, pneumonia, and more. In this comprehensive study, four publicly accessible datasets have been combined to create a robust dataset comprising 6650 CXR images, categorized into seven distinct disease groups. To effectively distinguish between normal and six different lung-related diseases (namely, bacterial pneumonia, COVID-19, fibrosis, lung opacity, tuberculosis, and viral pneumonia), a Deep Learning (DL) architecture called a Multi-Scale Convolutional Neural Network (MS-CNN) is introduced. The model is designed to classify a large number of lung disease classes, which is considered a persistent challenge in the field. While prior studies have demonstrated high accuracy in binary and limited-class scenarios, the proposed framework maintains this accuracy across a diverse range of lung conditions. The innovative model harnesses the power of combining predictions from multiple feature maps at different resolution scales, significantly enhancing disease classification accuracy. The approach aims to shorten testing duration compared to the state-of-the-art models, offering a potential solution toward expediting medical interventions for patients with lung-related diseases and integrating explainable AI (XAI) for enhancing prediction capability. The results demonstrated an impressive accuracy of 96.05%, with average values for precision, recall, F1-score, and AUC at 0.97, 0.95, 0.95, and 0.94, respectively, for the seven-class classification. The model exhibited exceptional performance across multi-class classifications, achieving accuracy rates of 100%, 99.65%, 99.21%, 98.67%, and 97.47% for two-, three-, four-, five-, and six-class scenarios, respectively. The novel approach not only surpasses many pre-existing state-of-the-art (SOTA) methodologies but also sets a new standard for the diagnosis of lung-affected diseases using multi-class CXR data. Furthermore, the integration of XAI techniques such as SHAP and Grad-CAM enhanced the transparency and interpretability of the model’s predictions. The findings hold immense promise for accelerating and improving the accuracy and confidence of diagnostic decisions in the field of lung disease identification. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
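
One common way to combine feature maps at several resolution scales is spatial-pyramid-style pooling followed by a fused classifier head, sketched below in PyTorch; this toy network only illustrates the multi-scale fusion idea and is not the authors' MS-CNN architecture.

```python
import torch
import torch.nn as nn

class MultiScaleCNN(nn.Module):
    """Toy multi-scale classifier: pools the same feature map at
    several resolutions and fuses them for the final prediction."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Pool the feature map to three different grid sizes.
        self.pools = nn.ModuleList(
            [nn.AdaptiveAvgPool2d(s) for s in (1, 2, 4)]
        )
        fused = 32 * (1 * 1 + 2 * 2 + 4 * 4)
        self.head = nn.Linear(fused, n_classes)

    def forward(self, x):
        feats = self.backbone(x)
        multi = [p(feats).flatten(1) for p in self.pools]
        return self.head(torch.cat(multi, dim=1))

model = MultiScaleCNN()
logits = model(torch.randn(2, 1, 64, 64))  # two fake CXR images
print(logits.shape)                        # torch.Size([2, 7])
```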

17 pages, 6492 KiB  
Article
Multi-Classification of Lung Infections Using Improved Stacking Convolution Neural Network
by Usharani Bhimavarapu, Nalini Chintalapudi and Gopi Battineni
Technologies 2023, 11(5), 128; https://doi.org/10.3390/technologies11050128 - 17 Sep 2023
Cited by 1 | Viewed by 1853
Abstract
Lung diseases, including pneumonia and COVID-19, pose a high risk to people worldwide. As such, quick and precise identification of lung disease is vital in medical treatment. Early detection and diagnosis can significantly reduce the life-threatening nature of lung diseases and improve patients' quality of life. Chest X-ray and computed tomography (CT) scan images are currently the best techniques to detect and diagnose lung infection. Increasing the number of chest X-ray or CT scan images available at training time helps to address overfitting, which deteriorates the performance of the model and gives inaccurate results, while multi-class classification of lung diseases extracts more meaningful information. This study reduces overfitting and computational complexity by proposing a new enhanced kernel convolution function. Alongside the enhanced kernel convolution function, this study used convolutional neural network (CNN) models to detect pneumonia and COVID-19. Each CNN model was applied to the collected dataset to extract features, which were later used as input to the classification models. This study shows that extracting deep features from the common layers of the CNN models increased the performance of the classification procedure. Multi-class classification improves the diagnostic performance, and the evaluation metrics improved significantly with the improved support vector machine (SVM). The best results were obtained using the improved SVM classifier fed with the CNN features, achieving a success rate of 99.8%. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
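
The deep-features-into-SVM stage can be sketched with scikit-learn; the snippet uses synthetic feature vectors in place of CNN activations and a standard RBF kernel rather than the paper's improved kernel function.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for features extracted from a shared CNN layer: in the paper
# these come from chest X-ray / CT images; here they are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 256))    # 300 images, 256-D deep features
y = rng.integers(0, 3, size=300)   # e.g. normal / pneumonia / COVID-19

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```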

27 pages, 27415 KiB  
Article
Efficient Deep Learning-Based Data-Centric Approach for Autism Spectrum Disorder Diagnosis from Facial Images Using Explainable AI
by Mohammad Shafiul Alam, Muhammad Mahbubur Rashid, Ahmed Rimaz Faizabadi, Hasan Firdaus Mohd Zaki, Tasfiq E. Alam, Md Shahin Ali, Kishor Datta Gupta and Md Manjurul Ahsan
Technologies 2023, 11(5), 115; https://doi.org/10.3390/technologies11050115 - 29 Aug 2023
Cited by 4 | Viewed by 4181
Abstract
The research describes an effective deep learning-based, data-centric approach for diagnosing autism spectrum disorder from facial images. To classify ASD and non-ASD subjects, this method requires training a convolutional neural network using the facial image dataset. As a part of the data-centric approach, this research applies pre-processing and synthesizing of the training dataset. The trained model is subsequently evaluated on an independent test set in order to assess the performance metrics of various data-centric approaches. The results reveal that the proposed method that simultaneously applies the pre-processing and augmentation approach on the training dataset outperforms the recent works, achieving an excellent 98.9% prediction accuracy, sensitivity, and specificity while having a 99.9% AUC. This work enhances the clarity and comprehensibility of the algorithm by integrating explainable AI techniques, providing clinicians with valuable and interpretable insights into the decision-making process of the ASD diagnosis model. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
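
A minimal pre-processing and augmentation pipeline in the spirit of this data-centric approach might look as follows in torchvision; the operations and parameter values are illustrative assumptions, not the paper's exact recipe.

```python
from torchvision import transforms

# Sketch of a training-time pipeline: standardize input size, then
# synthesize extra samples with mild geometric/photometric jitter.
train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),           # standardize input size
    transforms.RandomHorizontalFlip(p=0.5),  # augmentation
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```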

18 pages, 2301 KiB  
Article
The U-Net Family for Epicardial Adipose Tissue Segmentation and Quantification in Low-Dose CT
by Lu Liu, Runlei Ma, Peter M. A. van Ooijen, Matthijs Oudkerk, Rozemarijn Vliegenthart, Raymond N. J. Veldhuis and Christoph Brune
Technologies 2023, 11(4), 104; https://doi.org/10.3390/technologies11040104 - 5 Aug 2023
Viewed by 1985
Abstract
Epicardial adipose tissue (EAT) is located between the visceral pericardium and myocardium, and EAT volume is correlated with cardiovascular risk. Nowadays, many deep learning-based automated EAT segmentation and quantification methods in the U-net family have been developed to reduce the workload for radiologists. The automatic assessment of EAT on non-contrast low-dose CT calcium score images poses a greater challenge compared to the automatic assessment on coronary CT angiography, which requires a higher radiation dose to capture the intricate details of the coronary arteries. This study comprehensively examined and evaluated state-of-the-art segmentation methods while outlining future research directions. Our dataset consisted of 154 non-contrast low-dose CT scans from the ROBINSCA study, with two types of labels: (a) region inside the pericardium and (b) pixel-wise EAT labels. We selected four advanced methods from the U-net family: 3D U-net, 3D attention U-net, an extended 3D attention U-net, and U-net++. For evaluation, we performed both four-fold cross-validation and hold-out tests. Agreement between the automatic segmentation/quantification and the manual quantification was evaluated with the Pearson correlation and the Bland–Altman analysis. Generally, the models trained with label type (a) showed better performance compared to models trained with label type (b). The U-net++ model trained with label type (a) showed the best performance for segmentation and quantification. The U-net++ model trained with label type (a) efficiently provided better EAT segmentation results (hold-out test: DSC = 80.18±0.20%, mIoU = 67.13±0.39%, sensitivity = 81.47±0.43%, specificity = 99.64±0.00%, Pearson correlation = 0.9405) and EAT volume compared to the other U-net-based networks and the recent EAT segmentation method. Interestingly, our findings indicate that 3D convolutional neural networks do not consistently outperform 2D networks in EAT segmentation and quantification. Moreover, utilizing labels representing the region inside the pericardium proved advantageous in training more accurate EAT segmentation models. These insights highlight the potential of deep learning-based methods for achieving robust EAT segmentation and quantification outcomes. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
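
EAT quantification of the kind evaluated here is often computed by thresholding attenuation inside the pericardial mask and scoring overlap with the Dice coefficient; the sketch below uses the conventional fat window of about −190 to −30 HU, which is an assumption rather than the study's documented choice.

```python
import numpy as np

def eat_volume_ml(ct_hu, pericardium_mask, voxel_volume_mm3,
                  hu_range=(-190, -30)):
    """EAT volume: voxels inside the pericardium whose attenuation
    falls in a typical fat window (HU range is a common convention)."""
    fat = (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1]) & pericardium_mask
    return fat.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

def dice(pred, target):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + 1e-8)

# Tiny synthetic example.
ct = np.full((4, 4, 4), -100)                  # fat-like attenuation
mask = np.zeros_like(ct, dtype=bool)
mask[1:3, 1:3, 1:3] = True                     # toy pericardial region
print(eat_volume_ml(ct, mask, voxel_volume_mm3=0.6**3))
print(dice(mask, mask))                        # 1.0 for identical masks
```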

18 pages, 2498 KiB  
Article
Infrared Thermal Imaging and Artificial Neural Networks to Screen for Wrist Fractures in Pediatrics
by Olamilekan Shobayo, Reza Saatchi and Shammi Ramlakhan
Technologies 2022, 10(6), 119; https://doi.org/10.3390/technologies10060119 - 22 Nov 2022
Cited by 4 | Viewed by 2038
Abstract
Paediatric wrist fractures are commonly seen injuries at emergency departments. Around 50% of the X-rays taken to identify these injuries indicate no fracture. The aim of this study was to develop a model using infrared thermal imaging (IRTI) data and multilayer perceptron (MLP) neural networks as a screening tool to assist clinicians in deciding which patients require X-ray imaging to diagnose a fracture. Forty participants with wrist injury (19 with a fracture, 21 without, X-ray confirmed), mean age 10.50 years, were included. IRTI of both wrists was performed with the contralateral as reference. The injured wrist region of interest (ROI) was segmented and represented by the means of cells of 10 × 10 pixels. The fifty largest means were selected, the mean temperature of the contralateral ROI was subtracted, and they were expressed by their standard deviation, kurtosis, and interquartile range for MLP processing. Training and test files were created, consisting of randomly split 2/3 and 1/3 of the participants, respectively. To avoid bias of participant inclusion in the two files, the experiments were repeated 100 times, and the MLP outputs were averaged. The model’s sensitivity and specificity were 84.2% and 71.4%, respectively. Further work involves a larger sample size, adults, and other bone fractures. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
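
The feature-extraction pipeline described in this abstract (cell means, top-50 selection, reference subtraction, then standard deviation, kurtosis, and interquartile range) can be sketched as follows; the ROI is synthetic and the function name is hypothetical.

```python
import numpy as np
from scipy.stats import iqr, kurtosis

def wrist_features(roi, contralateral_mean, cell=10, top_k=50):
    """Summarize a thermal ROI as described above: 10x10-pixel cell
    means -> largest means -> reference-subtracted statistics."""
    h, w = roi.shape
    h, w = h - h % cell, w - w % cell            # crop to whole cells
    cells = roi[:h, :w].reshape(h // cell, cell, w // cell, cell)
    means = cells.mean(axis=(1, 3)).ravel()      # one mean per cell
    top = np.sort(means)[-top_k:] - contralateral_mean
    return np.array([top.std(), kurtosis(top), iqr(top)])

rng = np.random.default_rng(1)
roi = 32.0 + rng.normal(scale=0.4, size=(120, 160))  # synthetic, in C
print(wrist_features(roi, contralateral_mean=31.8))  # 3 MLP inputs
```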

17 pages, 8852 KiB  
Article
Deep Neural Network for Lung Image Segmentation on Chest X-ray
by Mahesh Chavan, Vijayakumar Varadarajan, Shilpa Gite and Ketan Kotecha
Technologies 2022, 10(5), 105; https://doi.org/10.3390/technologies10050105 - 30 Sep 2022
Cited by 12 | Viewed by 4242
Abstract
COVID-19 patients require effective diagnostic methods, which are currently in short supply. In this study, we explain how to accurately identify the lung regions on X-ray scans of such patients. Images from X-rays or CT scans are critical in the healthcare sector. Image data categorization and segmentation algorithms have been developed to help doctors save time and reduce manual errors during diagnosis. Over time, CNNs have consistently outperformed other image segmentation algorithms. Various architectures are presently based on CNNs, such as ResNet, U-Net, VGG-16, etc. This paper merged the U-Net image segmentation and ResNet feature extraction networks to construct the ResUNet++ network. The paper’s novelty lies in the detailed discussion and implementation of the ResUNet++ architecture in lung image segmentation; to our knowledge, the ResUNet++ architecture has not previously been applied to this task. In this research paper, we compared the ResUNet++ architecture with two other popular segmentation architectures. The ResNet residual block helps us in lowering the feature reduction issues. ResUNet++ performed well compared with the UNet and ResNet architectures by achieving high evaluation scores with the validation dice coefficient (96.36%), validation mean IoU (94.17%), and validation binary accuracy (98.07%). We ran both the UNet and ResNet models for the same number of epochs and found that the ResUNet++ architecture achieved higher accuracy with fewer epochs. In addition, the ResUNet model gave us higher accuracy (94%) than the UNet model (92%). Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
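
The residual block that lets ResUNet-style networks ease feature degradation can be sketched in PyTorch as below; this is a generic pre-activation residual unit with a 1×1 shortcut, not the exact ResUNet++ block used in the paper.

```python
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Generic residual encoder block of the kind used in ResUNet-style
    networks: two 3x3 convolutions plus a 1x1 shortcut projection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(),
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        # Residual connection: learned refinement plus projected input.
        return self.body(x) + self.shortcut(x)

block = ResidualConvBlock(1, 16)
out = block(torch.randn(1, 1, 64, 64))  # one fake X-ray patch
print(out.shape)                        # torch.Size([1, 16, 64, 64])
```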

Review


17 pages, 3975 KiB  
Review
A Review of Automatic Pain Assessment from Facial Information Using Machine Learning
by Najib Ben Aoun
Technologies 2024, 12(6), 92; https://doi.org/10.3390/technologies12060092 - 20 Jun 2024
Cited by 1 | Viewed by 1248
Abstract
Pain assessment has become an important component in modern healthcare systems. It aids medical professionals in patient diagnosis and providing the appropriate care and therapy. Conventionally, patients are asked to provide their pain level verbally. However, this subjective method is generally inaccurate, not possible for non-communicative people, can be affected by physiological and environmental factors, and is time-consuming, which renders it inefficient in healthcare settings. So, there has been a growing need to build objective, reliable and automatic pain assessment alternatives. In fact, due to the efficiency of facial expressions as pain biomarkers that accurately express pain intensity and the power of machine learning methods to effectively learn the subtle nuances of pain expressions and accurately predict pain intensity, automatic pain assessment methods have evolved rapidly. This paper reviews recent spatial facial expressions and machine learning-based pain assessment methods. Moreover, we highlight the pain intensity scales, datasets and method performance evaluation criteria. In addition, these methods’ contributions, strengths and limitations will be reported and discussed. Additionally, the review lays the groundwork for further study and improvement for more accurate automatic pain assessment. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
