Editorial

Special Issue “Computer Aided Diagnosis Sensors”

Ayman El-Baz, Guruprasad A. Giridharan, Ahmed Shalaby, Ali H. Mahmoud and Mohammed Ghazal
1 Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
2 Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
* Author to whom correspondence should be addressed.
Sensors 2022, 22(20), 8052; https://doi.org/10.3390/s22208052
Submission received: 12 October 2022 / Accepted: 19 October 2022 / Published: 21 October 2022
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

1. Introduction

Sensors used to diagnose, monitor, or treat diseases in the medical domain are known as medical sensors. Several types of medical sensors can be utilized for various applications, such as temperature probes; force sensors; pressure sensors; oximeters; electrocardiogram sensors, which measure the electrical activity of the heart; heart rate sensors; electroencephalogram sensors, which measure the electrical activity of the brain; electromyogram sensors, which record the electrical activity produced by skeletal muscles; and respiration-rate sensors, which count how many times the chest rises in a minute. The output of these sensors used to be interpreted by humans, which was time-consuming and tedious; however, interpretation has become much easier with advances in artificial intelligence (AI) techniques and the integration of sensor outputs into computer-aided diagnostic (CAD) systems.
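To make this idea concrete, the following minimal Python sketch shows the typical structure of such a CAD pipeline: hand-crafted features are extracted from a sensor signal and passed to a conventional classifier. The synthetic heart-rate-like traces, the feature set, and the random forest classifier are illustrative placeholders only, not a method taken from any of the contributed papers; NumPy and scikit-learn are assumed to be available.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(signal):
    # Simple summary statistics of a 1D sensor recording.
    return [signal.mean(), signal.std(), signal.min(), signal.max(),
            np.percentile(signal, 25), np.percentile(signal, 75)]

rng = np.random.default_rng(0)
# Synthetic stand-ins for sensor recordings: class 0 resembles a resting
# heart rate (~70 bpm), class 1 an elevated one (~95 bpm).
signals = [rng.normal(70, 5, 300) for _ in range(100)] + \
          [rng.normal(95, 8, 300) for _ in range(100)]
labels = [0] * 100 + [1] * 100

X = np.array([extract_features(s) for s in signals])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))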
This Special Issue comprises 34 accepted papers that present state-of-the-art AI approaches for diagnosing diseases and disorders from data collected by different medical sensors. These contributions advance our goal of developing comprehensive, automated computer-aided diagnosis tools by focusing on the machine learning algorithms that can be used for this purpose as well as on novel applications in the medical field.

2. Overview of Contributions

Fraiwan and Faouri [1] used deep transfer learning for the automatic detection and classification of skin cancer. Al Mudawi and Alazeb [2] presented a model for predicting cervical cancer using machine learning (ML) algorithms. AlSaeed and Omar [3] proposed a pre-trained convolutional neural network (CNN) deep learning model (ResNet50) as an automatic feature extraction method for diagnosing Alzheimer’s disease from magnetic resonance imaging (MRI); an illustrative sketch of this transfer learning pattern is given after this paragraph. Yasser et al. [4] described a novel framework that detects diabetic retinopathy (DR) from optical coherence tomography angiography (OCTA) by capturing appearance and morphological markers of the retinal vascular system. Holubiac et al. [5] discussed the effect of a strength-training protocol on bone mineral density in postmenopausal women with osteopenia/osteoporosis, assessed by dual-energy X-ray absorptiometry (DEXA). Ayyad et al. [6] proposed a new framework for the precise identification of prostatic adenocarcinoma from two imaging modalities.
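The ResNet50-based feature extraction in [3] follows a common transfer learning pattern: a CNN pre-trained on natural images is reused, with its classification head removed, to produce a fixed-length feature vector for each scan. The sketch below illustrates that general pattern only, assuming PyTorch and torchvision are available; the blank placeholder image and the note about a downstream classifier are illustrative assumptions and do not reflect the authors' actual implementation.

import torch
from PIL import Image
from torchvision import models, transforms

# Placeholder standing in for a single MRI slice (illustrative only).
img = Image.new("RGB", (256, 256))

# Pre-trained ResNet50 with the classification head replaced by an
# identity layer, so the network outputs its 2048-dimensional features.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    x = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    features = resnet(x)               # shape: (1, 2048)

# The extracted vector can then be fed to a conventional classifier
# (e.g., an SVM) for the diagnostic decision.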
Tariq et al. [7] proposed a novel feature-based fusion network called FDC-FS for classifying heart and lung sounds. ElNakieb et al. [8] provided a thorough study implementing feature engineering tools to extract discriminant insights from brain imaging of white-matter connectivity, combined with a machine learning framework for the accurate classification of autistic individuals. Diab et al. [9] presented a brain strategy algorithm for multiple-object tracking based on merging semantic attributes and appearance features. Fraiwan et al. [10] presented a non-contact spirometry method using a mobile thermal camera and AI regression. Ramesh et al. [11] proposed the design and implementation of an explainable deep learning 1D-CNN model for use in smart healthcare systems with general-purpose devices such as smart wearables and smartphones. Liang et al. [12] developed a new flow-sensor-based control strategy for rotary left ventricular assist devices (LVADs), using a suction index derived from measured pump flow (SIMPF), to provide adequate cardiac output and prevent left ventricle (LV) suction.
Al-Mohannadi et al. [13] proposed a deep-learning-based approach that applies semantic segmentation to the intima-media complex (IMC) and calculates the carotid intima-media thickness (cIMT) measurement. Alshboul and Fraiwan [14] developed an algorithm to count the number of chews in eating video recordings using discrete wavelet decomposition and low-pass filtration (an illustrative sketch of this idea follows this paragraph). Hammouda et al. [15] introduced a deep learning-based CAD system that classifies grade groups (GG) from digitized prostate biopsy specimens (PBSs) using a pyramidal CNN with patch- and pixel-wise classification. Ahmad et al. [16] provided proof-of-principle for an optical-based, quick, simple, and sensitive screening technology for the detection of SARS-CoV-2, utilizing antigen-antibody binding interactions. Fournelle et al. [17] developed a new mobile ultrasound device for long-term, automated bladder monitoring without user interaction, consisting of 32 transmit and receive electronic components as well as a 32-element, 3 MHz phased-array transducer. Khasawneh et al. [18] used customized and pre-trained deep learning models based on convolutional neural networks to detect pneumonia caused by COVID-19 respiratory complications. Al Ahmad et al. [19] presented a novel immunophenotyping technique using electrical characterization to differentiate between the two most important cell types of the innate immune system: dendritic cells (DCs) and macrophages (MACs).
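The chew-counting approach of [14] combines wavelet-based smoothing with low-pass filtering before counting peaks. The following hypothetical Python sketch illustrates that general idea on a synthetic signal, assuming PyWavelets and SciPy are available; the function count_chews, its parameter values, and the synthetic signal are illustrative assumptions rather than the authors' algorithm.

import numpy as np
import pywt
from scipy.signal import butter, filtfilt, find_peaks

def count_chews(signal, fs, wavelet="db4", level=2, cutoff_hz=3.0):
    # Keep only the coarse wavelet approximation to suppress fine detail.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
    smoothed = pywt.waverec(coeffs, wavelet)[: len(signal)]

    # Additional low-pass filtering to remove residual jitter.
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    filtered = filtfilt(b, a, smoothed)

    # Treat each prominent peak as one chewing cycle.
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs))
    return len(peaks)

# Synthetic chewing-like signal: 1.5 Hz oscillation sampled at 30 Hz.
fs = 30
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)
print(count_chews(sig, fs))   # expected to be roughly 15 chews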
Haweel et al. [20] proposed a novel CAD framework that classifies 50 toddlers with autism spectrum disorder (ASD) and 50 typically developed toddlers using deep CNNs; the CAD system provides both local and global diagnoses based on a response-to-speech task. Sharafeldeen et al. [21] presented a new segmentation technique for delineating the lung region in 3D computed tomography (CT) images; to accurately model the distribution of Hounsfield values within the chest and lung regions, they developed a new probabilistic model based on a linear combination of Gaussians (LCG). Haggag et al. [22] proposed a novel framework for the automatic quantification of the vitreous on optical coherence tomography (OCT), with application to the grading of vitreous inflammation. The proposed pipeline consists of two stages: vitreous region segmentation, performed automatically with a U-Net CNN (U-CNN), followed by a neural network classifier. El-Gamal et al. [23] developed a personalized, cortical region-based CAD system that helps visualize the severity of Alzheimer’s disease (AD) in different local brain regions. Shehata et al. [24] developed a comprehensive CAD system for the early assessment of renal cancer tumors; it identifies and integrates the optimal discriminating morphological, textural, and functional features that best describe the malignancy status of a given renal tumor. Alwateer et al. [25] introduced a novel approach for processing healthcare data and predicting useful information with minimal computational cost, using a two-phase hybrid algorithm.
Wagner et al. [26] compared a medical-grade electrocardiography (ECG) system with the ECG sensor of the low-cost DiY (Do-it-Yourself) hardware toolkit BITalino. Their results showed that the BITalino system can be considered an equivalent recording device for stationary ECG recordings in psychophysiological experiments. Naglah et al. [27] proposed a novel multimodal MRI-based CAD system that differentiates malignant from benign thyroid nodules, based on a novel CNN-based texture learning architecture. Alyoubi et al. [28] proposed a screening system for DR fundus image classification and lesion localization to help ophthalmologists determine a patient’s DR stage. Abdelmaksoud et al. [29] developed a CAD system that detects and identifies prostate cancer from diffusion-weighted imaging (DWI); identification was achieved using two previously trained CNN models (AlexNet and VGGNet) fed with the estimated apparent diffusion coefficient (ADC) maps of the segmented prostate regions. Jo et al. [30] introduced a novel customized optical imaging system for the human conjunctiva with deep learning-based segmentation and motion correction; segmentation was performed with an Attention U-Net architecture to achieve high-performance results in conjunctiva images affected by motion blur.
Dghim et al. [31] evaluated two strategies for the automatic detection and recognition of Nosema cells in microscopic images and identified a robust methodology for distinguishing Nosema cells from the other objects present in the same images. Hasnul et al. [32] reviewed emotion recognition research that adopts electrocardiograms either as a unimodal input or as part of a multimodal emotion-recognition system. Ayyad et al. [33] presented a literature review of the use of histopathology images for detecting prostate cancer, the associated challenges, and the different steps of the histopathology image analysis methodology. Santos et al. [34] proposed a new approach based on image-processing techniques, data augmentation, transfer learning, and deep neural networks to assist in the medical diagnosis of fundus lesions.
We express our heartfelt thanks to all the authors for their contributions. We also thank the reviewers for volunteering their time to provide insightful comments and criticism on the submissions. Finally, we appreciate the support of the Editorial Board and Editorial Office of MDPI Sensors for making this Special Issue possible.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fraiwan, M.; Faouri, E. On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning. Sensors 2022, 22, 4963. [Google Scholar] [CrossRef] [PubMed]
  2. Al Mudawi, N.; Alazeb, A. A Model for Predicting Cervical Cancer Using Machine Learning Algorithms. Sensors 2022, 22, 4132. [Google Scholar] [CrossRef] [PubMed]
  3. AlSaeed, D.; Omar, S.F. Brain MRI Analysis for Alzheimer’s Disease Diagnosis Using CNN-Based Feature Extraction and Machine Learning. Sensors 2022, 22, 2911. [Google Scholar] [CrossRef]
  4. Yasser, I.; Khalifa, F.; Abdeltawab, H.; Ghazal, M.; Sandhu, H.S.; El-Baz, A. Automated Diagnosis of Optical Coherence Tomography Angiography (OCTA) Based on Machine Learning Techniques. Sensors 2022, 22, 2342. [Google Scholar] [CrossRef]
  5. Holubiac, I.Ș.; Leuciuc, F.V.; Crăciun, D.M.; Dobrescu, T. Effect of Strength Training Protocol on Bone Mineral Density for Postmenopausal Women with Osteopenia/Osteoporosis Assessed by Dual-Energy X-ray Absorptiometry (DEXA). Sensors 2022, 22, 1904. [Google Scholar] [CrossRef] [PubMed]
  6. Ayyad, S.M.; Badawy, M.A.; Shehata, M.; Alksas, A.; Mahmoud, A.; Abou El-Ghar, M.; Ghazal, M.; El-Melegy, M.; Abdel-Hamid, N.B.; Labib, L.M.; et al. A New Framework for Precise Identification of Prostatic Adenocarcinoma. Sensors 2022, 22, 1848. [Google Scholar] [CrossRef] [PubMed]
  7. Tariq, Z.; Shah, S.K.; Lee, Y. Feature-Based Fusion Using CNN for Lung and Heart Sound Classification. Sensors 2022, 22, 1521. [Google Scholar] [CrossRef] [PubMed]
  8. ElNakieb, Y.; Ali, M.T.; Elnakib, A.; Shalaby, A.; Soliman, A.; Mahmoud, A.; Ghazal, M.; Barnes, G.N.; El-Baz, A. The Role of Diffusion Tensor MR Imaging (DTI) of the Brain in Diagnosing Autism Spectrum Disorder: Promising Results. Sensors 2021, 21, 8171. [Google Scholar] [CrossRef] [PubMed]
  9. Diab, M.S.; Elhosseini, M.A.; El-Sayed, M.S.; Ali, H.A. Brain Strategy Algorithm for Multiple Object Tracking Based on Merging Semantic Attributes and Appearance Features. Sensors 2021, 21, 7604. [Google Scholar] [CrossRef] [PubMed]
  10. Fraiwan, L.; Khasawneh, N.; Lweesy, K.; Elbalki, M.; Almarzooqi, A.; Abu Hamra, N. Non-Contact Spirometry Using a Mobile Thermal Camera and AI Regression. Sensors 2021, 21, 7574. [Google Scholar] [CrossRef] [PubMed]
  11. Ramesh, J.; Solatidehkordi, Z.; Aburukba, R.; Sagahyroon, A. Atrial Fibrillation Classification with Smart Wearables Using Short-Term Heart Rate Variability and Deep Convolutional Neural Networks. Sensors 2021, 21, 7233. [Google Scholar] [CrossRef] [PubMed]
  12. Liang, L.; Qin, K.; El-Baz, A.S.; Roussel, T.J.; Sethu, P.; Giridharan, G.A.; Wang, Y. A Flow Sensor-Based Suction-Index Control Strategy for Rotary Left Ventricular Assist Devices. Sensors 2021, 21, 6890. [Google Scholar] [CrossRef] [PubMed]
  13. Al-Mohannadi, A.; Al-Maadeed, S.; Elharrouss, O.; Sadasivuni, K.K. Encoder-Decoder Architecture for Ultrasound IMC Segmentation and cIMT Measurement. Sensors 2021, 21, 6839. [Google Scholar] [CrossRef]
  14. Alshboul, S.; Fraiwan, M. Determination of Chewing Count from Video Recordings Using Discrete Wavelet Decomposition and Low Pass Filtration. Sensors 2021, 21, 6806. [Google Scholar] [CrossRef] [PubMed]
  15. Hammouda, K.; Khalifa, F.; El-Melegy, M.; Ghazal, M.; Darwish, H.E.; Abou El-Ghar, M.; El-Baz, A. A Deep Learning Pipeline for Grade Groups Classification Using Digitized Prostate Biopsy Specimens. Sensors 2021, 21, 6708. [Google Scholar] [CrossRef] [PubMed]
  16. Ahmad, M.A.; Mustafa, F.; Panicker, N.; Rizvi, T.A. Optical Detection of SARS-CoV-2 Utilizing Antigen-Antibody Binding Interactions. Sensors 2021, 21, 6596. [Google Scholar] [CrossRef]
  17. Fournelle, M.; Grün, T.; Speicher, D.; Weber, S.; Yilmaz, M.; Schoeb, D.; Miernik, A.; Reis, G.; Tretbar, S.; Hewener, H. Portable Ultrasound Research System for Use in Automated Bladder Monitoring with Machine-Learning-Based Segmentation. Sensors 2021, 21, 6481. [Google Scholar] [CrossRef]
  18. Khasawneh, N.; Fraiwan, M.; Fraiwan, L.; Khassawneh, B.; Ibnian, A. Detection of COVID-19 from Chest X-ray Images Using Deep Convolutional Neural Networks. Sensors 2021, 21, 5940. [Google Scholar] [CrossRef]
  19. Al Ahmad, M.; Nasser, R.A.; Olule, L.J.A.; Ali, B.R. Electrical Detection of Innate Immune Cells. Sensors 2021, 21, 5886. [Google Scholar] [CrossRef]
  20. Haweel, R.; Seada, N.; Ghoniemy, S.; Alghamdi, N.S.; El-Baz, A. A CNN Deep Local and Global ASD Classification Approach with Continuous Wavelet Transform Using Task-Based FMRI. Sensors 2021, 21, 5822. [Google Scholar] [CrossRef]
  21. Sharafeldeen, A.; Elsharkawy, M.; Alghamdi, N.S.; Soliman, A.; El-Baz, A. Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints. Sensors 2021, 21, 5482. [Google Scholar] [CrossRef] [PubMed]
  22. Haggag, S.; Khalifa, F.; Abdeltawab, H.; Elnakib, A.; Ghazal, M.; Mohamed, M.A.; Sandhu, H.S.; Alghamdi, N.S.; El-Baz, A. An Automated CAD System for Accurate Grading of Uveitis Using Optical Coherence Tomography Images. Sensors 2021, 21, 5457. [Google Scholar] [CrossRef]
  23. El-Gamal, F.E.-Z.A.; Elmogy, M.; Mahmoud, A.; Shalaby, A.; Switala, A.E.; Ghazal, M.; Soliman, H.; Atwan, A.; Alghamdi, N.S.; Barnes, G.N.; et al. A Personalized Computer-Aided Diagnosis System for Mild Cognitive Impairment (MCI) Using Structural MRI (sMRI). Sensors 2021, 21, 5416. [Google Scholar] [CrossRef] [PubMed]
  24. Shehata, M.; Alksas, A.; Abouelkheir, R.T.; Elmahdy, A.; Shaffie, A.; Soliman, A.; Ghazal, M.; Abu Khalifeh, H.; Salim, R.; Abdel Razek, A.A.K.; et al. A Comprehensive Computer-Assisted Diagnosis System for Early Assessment of Renal Cancer Tumors. Sensors 2021, 21, 4928. [Google Scholar] [CrossRef]
  25. Alwateer, M.; Almars, A.M.; Areed, K.N.; Elhosseini, M.A.; Haikal, A.Y.; Badawy, M. Ambient Healthcare Approach with Hybrid Whale Optimization Algorithm and Naïve Bayes Classifier. Sensors 2021, 21, 4579. [Google Scholar] [CrossRef] [PubMed]
  26. Wagner, R.E.; Plácido da Silva, H.; Gramann, K. Validation of a Low-Cost Electrocardiography (ECG) System for Psychophysiological Research. Sensors 2021, 21, 4485. [Google Scholar] [CrossRef]
  27. Naglah, A.; Khalifa, F.; Khaled, R.; Abdel Razek, A.A.K.; Ghazal, M.; Giridharan, G.; El-Baz, A. Novel MRI-Based CAD System for Early Detection of Thyroid Cancer Using Multi-Input CNN. Sensors 2021, 21, 3878. [Google Scholar] [CrossRef] [PubMed]
  28. Alyoubi, W.L.; Abulkhair, M.F.; Shalash, W.M. Diabetic Retinopathy Fundus Image Classification and Lesions Localization System Using Deep Learning. Sensors 2021, 21, 3704. [Google Scholar] [CrossRef] [PubMed]
  29. Abdelmaksoud, I.R.; Shalaby, A.; Mahmoud, A.; Elmogy, M.; Aboelfetouh, A.; Abou El-Ghar, M.; El-Melegy, M.; Alghamdi, N.S.; El-Baz, A. Precise Identification of Prostate Cancer from DWI Using Transfer Learning. Sensors 2021, 21, 3664. [Google Scholar] [CrossRef]
  30. Jo, H.-C.; Jeong, H.; Lee, J.; Na, K.-S.; Kim, D.-Y. Quantification of Blood Flow Velocity in the Human Conjunctival Microvessels Using Deep Learning-Based Stabilization Algorithm. Sensors 2021, 21, 3224. [Google Scholar] [CrossRef]
  31. Dghim, S.; Travieso-González, C.M.; Burget, R. Analysis of the Nosema Cells Identification for Microscopic Images. Sensors 2021, 21, 3068. [Google Scholar] [CrossRef] [PubMed]
  32. Hasnul, M.A.; Aziz, N.A.A.; Alelyani, S.; Mohana, M.; Aziz, A.A. Electrocardiogram-Based Emotion Recognition Systems and Their Applications in Healthcare—A Review. Sensors 2021, 21, 5015. [Google Scholar] [CrossRef]
  33. Ayyad, S.M.; Shehata, M.; Shalaby, A.; Abou El-Ghar, M.; Ghazal, M.; El-Melegy, M.; Abdel-Hamid, N.B.; Labib, L.M.; Ali, H.A.; El-Baz, A. Role of AI and Histopathological Images in Detecting Prostate Cancer: A Survey. Sensors 2021, 21, 2586. [Google Scholar] [CrossRef] [PubMed]
  34. Santos, C.; Aguiar, M.; Welfer, D.; Belloni, B. A New Approach for Detecting Fundus Lesions Using Image Processing and Deep Neural Network Architecture Based on YOLO Model. Sensors 2022, 22, 6441. [Google Scholar] [CrossRef] [PubMed]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
