Artificial Intelligence in Medical Imaging and Visual Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 August 2022) | Viewed by 38425
Please contact the Guest Editor or the Section Managing Editor at ([email protected]) for any queries.

Special Issue Editors


Guest Editor
Director of the Computer Vision & Image Processing Laboratory, Professor of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
Interests: computer vision (sensors for smart systems and object reconstruction); biomedical imaging (CAD systems for early detection and diagnosis of colon and lung cancers, and image-guided minimally invasive interventions); biometrics (facial information modeling for education and biomedical applications); oral health (3D optical scanning of the human jaw and applications in orthodontics and telehealth)

Guest Editor
Senior Research Scientist, Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
Interests: image modeling; sensor planning for smart systems; multimodality imaging; face recognition at a distance; quantifying student engagement in STEM subjects using non-intrusive sensors; wireless biometric sensor networks

Special Issue Information

Dear Colleagues,

Sensors welcomes submissions to this Special Issue on “Artificial Intelligence in Medical Imaging and Visual Sensing”. The term “visual sensor” denotes a sensor that, like the human eye, captures at least a two-dimensional image of the object being measured; the intensity distribution of this optical image is detected and evaluated by the sensor. A wide variety of visual systems exists, from classical monocular systems to omnidirectional, RGB-D, and laser-structured-light vision systems, medical imaging modalities, and more sophisticated 3D systems. Each visual sensing technique has specific characteristics that make it useful for solving different problems, and their range of applications is wide and varied.

Recently, Artificial Intelligence (AI) research within medicine has been growing rapidly. AI can achieve outstanding performance in many health technologies, and it provides techniques that enhance sensor performance and reduce acquisition time. AI has been widely accepted as the future of medical sensor applications.

This Special Issue will present some of the possibilities that visual sensing techniques offer, focusing on the different configurations that can be used and on novel medical applications. It will also present current trends in medical AI. Furthermore, scholarly reviews by established researchers presenting a deep analysis of a specific problem in visual sensors and systems with a biomedical focus are also welcome.

Prof. Aly Farag
Dr. Asem M. Ali
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

The Special Issue topics include, but are not limited to, the following:

  • Computer vision
  • Medical imaging
  • Machine learning
  • Deep learning
  • Artificial intelligence
  • Medical image processing
  • Medical image analysis
  • Novel technologies in medical imaging
  • Visual sensors in medical applications
  • Computer vision in modeling and reconstruction from medical images
  • Imaging and sensing in medicine

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

17 pages, 4455 KiB  
Article
An Experimental Platform for Real-Time Students Engagement Measurements from Video in STEM Classrooms
by Islam Alkabbany, Asem M. Ali, Chris Foreman, Thomas Tretter, Nicholas Hindy and Aly Farag
Sensors 2023, 23(3), 1614; https://doi.org/10.3390/s23031614 - 2 Feb 2023
Cited by 10 | Viewed by 4110
Abstract
The ability to measure students’ engagement in an educational setting may facilitate timely intervention in both the learning and the teaching process in a variety of classroom settings. In this paper, a real-time automatic student engagement measure is proposed by investigating two of the main components of engagement: behavioral engagement and emotional engagement. A biometric sensor network (BSN) consisting of web cameras, a wall-mounted camera, and a high-performance computing machine was designed to capture students’ head poses, eye gaze, body movements, and facial emotions. These low-level features are used to train an AI-based model to estimate behavioral and emotional engagement in the class environment. A set of experiments was conducted to compare the proposed technology with state-of-the-art frameworks. The proposed framework shows better accuracy in estimating both behavioral and emotional engagement. In addition, it offers superior flexibility to work in any educational environment. Further, this approach allows a quantitative comparison of teaching methods.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
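
As a rough illustration of the pipeline described above, the sketch below fuses per-frame visual features into a windowed engagement estimate. The feature layout, the random-forest classifier, and the smoothing window are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-frame feature vector assembled from the sensor network:
# [yaw, pitch, roll, gaze_x, gaze_y, body_motion, emotion_valence].
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 7))        # placeholder training features
y_train = rng.integers(0, 2, size=500)     # 1 = engaged, 0 = not engaged

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def engagement_score(frame_features: np.ndarray, window: int = 30) -> float:
    """Average engagement probability over the last `window` frames."""
    probs = clf.predict_proba(frame_features)[:, 1]
    return float(probs[-window:].mean())
```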

14 pages, 4272 KiB  
Article
An AI-Based Colonic Polyp Classifier for Colorectal Cancer Screening Using Low-Dose Abdominal CT
by Islam Alkabbany, Asem M. Ali, Mostafa Mohamed, Salwa M. Elshazly and Aly Farag
Sensors 2022, 22(24), 9761; https://doi.org/10.3390/s22249761 - 13 Dec 2022
Cited by 1 | Viewed by 4218
Abstract
Among the non-invasive colorectal cancer (CRC) screening approaches, Computed Tomography Colonography (CTC) and Virtual Colonoscopy (VC) are much more accurate. This work proposes an AI-based polyp detection framework for virtual colonoscopy (VC). Two main steps are addressed: automatic segmentation to isolate the colon region from its background, and automatic polyp detection. Moreover, we evaluate the performance of the proposed framework on low-dose Computed Tomography (CT) scans. We build on our visualization approach, Fly-In (FI), which provides “filet”-like projections of the internal surface of the colon. The performance of the FI approach confirms its ability to help gastroenterologists, and it holds great promise for combating CRC. In this work, the 2D projections of FI are fused with the 3D colon representation to generate new synthetic images, which are used to train a RetinaNet model to detect polyps. The trained model achieves a 94% F1-score and 97% sensitivity. Furthermore, we study the effect of dose variation in CT scans on the performance of the FI approach in polyp visualization. A simulation platform was developed for CTC visualization using FI, for both regular and low-dose CTC. This is accomplished using a novel AI restoration algorithm that enhances the low-dose CT images so that a 3D colon can be successfully reconstructed and visualized with the FI approach. Three senior board-certified radiologists evaluated the framework: at a peak voltage of 30 kV, the average relative sensitivity of the platform was 92%, whereas a 60 kV peak voltage produced an average relative sensitivity of 99.5%.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
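
For readers who want to reproduce the kind of metrics reported above, here is a minimal sketch of computing sensitivity and F1-score by matching predicted polyp boxes to ground truth at an IoU threshold; the threshold and the greedy matching rule are assumptions, not the paper's evaluation protocol.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_metrics(preds, gts, thr=0.5):
    """Greedily match predictions to unmatched ground truths above `thr`."""
    matched, tp = set(), 0
    for p in preds:
        hit = next((i for i, g in enumerate(gts)
                    if i not in matched and iou(p, g) >= thr), None)
        if hit is not None:
            matched.add(hit)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * sensitivity / (precision + sensitivity) if tp else 0.0
    return sensitivity, f1
```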

16 pages, 5063 KiB  
Article
Deep Learning Models for Classification of Dental Diseases Using Orthopantomography X-ray OPG Images
by Yassir Edrees Almalki, Amsa Imam Din, Muhammad Ramzan, Muhammad Irfan, Khalid Mahmood Aamir, Abdullah Almalki, Saud Alotaibi, Ghada Alaglan, Hassan A Alshamrani and Saifur Rahman
Sensors 2022, 22(19), 7370; https://doi.org/10.3390/s22197370 - 28 Sep 2022
Cited by 28 | Viewed by 10728
Abstract
The teeth are the most challenging material to work with in the human body. Existing methods for detecting teeth problems are characterised by low efficiency, complex operation, and a high level of user intervention. Older oral disease detection approaches were manual, time-consuming, and required a dentist to examine and evaluate the disease. To address these concerns, we propose a novel deep-learning-based approach for detecting and classifying the four most common teeth problems: cavities, root canals, dental crowns, and broken-down root canals (BDR). In this study, we apply the YOLOv3 deep learning model to develop an automated tool capable of diagnosing and classifying dental abnormalities in dental panoramic X-ray (OPG) images. Due to the lack of dental disease datasets, we created a dental X-ray dataset to detect and classify these diseases; after augmentation, it contained 1200 images. The dataset comprises dental panoramic images with dental disorders such as cavities, root canals, BDR, and dental crowns. The dataset was divided into 70% training and 30% testing images. The trained YOLOv3 model was evaluated on the test images. The experiments demonstrated that the proposed model achieved 99.33% accuracy and outperformed existing state-of-the-art models in terms of accuracy and universality when our dataset was used with other models.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
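
The 70/30 split described above is straightforward to reproduce; below is a minimal sketch with placeholder file paths (the paths and naming scheme are hypothetical, not the authors' actual dataset layout).

```python
import random

# Hypothetical paths to the 1200 augmented dental X-ray images.
image_paths = [f"dental_xrays/img_{i:04d}.png" for i in range(1200)]
random.seed(42)               # fixed seed for a reproducible split
random.shuffle(image_paths)

split = int(0.7 * len(image_paths))
train_set, test_set = image_paths[:split], image_paths[split:]
print(len(train_set), len(test_set))  # 840 360
```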

22 pages, 4241 KiB  
Article
An Edge-Based Selection Method for Improving Regions-of-Interest Localizations Obtained Using Multiple Deep Learning Object-Detection Models in Breast Ultrasound Images
by Mohammad I. Daoud, Aamer Al-Ali, Rami Alazrai, Mahasen S. Al-Najar, Baha A. Alsaify, Mostafa Z. Ali and Sahel Alouneh
Sensors 2022, 22(18), 6721; https://doi.org/10.3390/s22186721 - 6 Sep 2022
Cited by 4 | Viewed by 2217
Abstract
Computer-aided diagnosis (CAD) systems can be used to process breast ultrasound (BUS) images with the goal of enhancing the capability of diagnosing breast cancer. Many CAD systems operate by analyzing the region-of-interest (ROI) that contains the tumor in the BUS image using conventional texture-based classification models and deep learning-based classification models. Hence, the development of these systems requires automatic methods to localize the ROI that contains the tumor in the BUS image. Deep learning object-detection models can be used to localize this ROI, but the ROI generated by one model might be better than the ROIs generated by other models. In this study, a new method, called the edge-based selection method, is proposed to analyze the ROIs generated by different deep learning object-detection models with the goal of selecting the ROI that improves the localization of the tumor region. The proposed method employs edge maps computed for BUS images using the recently introduced Dense Extreme Inception Network (DexiNed) deep learning edge-detection model. To the best of our knowledge, our study is the first to employ a deep learning edge-detection model to detect tumor edges in BUS images. The proposed edge-based selection method is applied to analyze the ROIs generated by four deep learning object-detection models, and the performance of the proposed method and the four models is evaluated using two BUS image datasets. The first dataset, used for cross-validation evaluation analysis, is a private dataset of 380 BUS images. The second dataset, used for generalization evaluation analysis, is a public dataset of 630 BUS images. For both analyses, the proposed method obtained an overall ROI detection rate of 98% and mean precision, mean recall, and mean F1-score of 0.91, 0.90, and 0.90, respectively. Moreover, the results show that the proposed edge-based selection method outperformed the four deep learning object-detection models as well as three baseline combining methods that can be used to combine the ROIs generated by the four models. These findings suggest the potential of employing our proposed method to analyze the ROIs generated by different deep learning object-detection models and select the ROI that improves the localization of the tumor region.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
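
A minimal sketch of the selection idea: each candidate ROI is scored by the strength of DexiNed-style edge responses along its border, and the best-supported ROI is kept. The border-band scoring rule here is an illustrative assumption rather than the paper's exact criterion.

```python
import numpy as np

def border_edge_score(edge_map: np.ndarray, roi, width: int = 3) -> float:
    """Mean edge strength in a thin band along the ROI border.

    roi is (x1, y1, x2, y2); edge_map holds per-pixel edge probabilities."""
    x1, y1, x2, y2 = roi
    band = np.zeros_like(edge_map, dtype=bool)
    band[y1:y2, x1:x2] = True                              # whole box
    band[y1 + width:y2 - width, x1 + width:x2 - width] = False  # hollow it out
    return float(edge_map[band].mean()) if band.any() else 0.0

def select_roi(edge_map, candidate_rois):
    """Pick the candidate ROI whose border best aligns with tumor edges."""
    return max(candidate_rois, key=lambda r: border_edge_score(edge_map, r))
```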

14 pages, 4659 KiB  
Article
Semantic Segmentation of the Malignant Breast Imaging Reporting and Data System Lexicon on Breast Ultrasound Images by Using DeepLab v3+
by Wei-Chung Shia, Fang-Rong Hsu, Seng-Tong Dai, Shih-Lin Guo and Dar-Ren Chen
Sensors 2022, 22(14), 5352; https://doi.org/10.3390/s22145352 - 18 Jul 2022
Cited by 8 | Viewed by 2276
Abstract
In this study, an advanced semantic segmentation method based on a deep convolutional neural network was applied to identify the Breast Imaging Reporting and Data System (BI-RADS) lexicon in breast ultrasound images, thereby facilitating image interpretation and diagnosis by providing radiologists with an objective second opinion. A total of 684 images (380 benign and 308 malignant tumours) from 343 patients (190 benign and 153 malignant breast tumour patients) were analysed in this study. Six malignancy-related standardised BI-RADS features were selected after analysis. The DeepLab v3+ architecture with four decoder networks was used, and their semantic segmentation performance was evaluated and compared. DeepLab v3+ with the ResNet-50 decoder showed the best performance, with a mean accuracy of 44.04% and a mean intersection over union (IU) of 34.92%; the weighted IU was 84.36%. For diagnostic performance, the area under the curve was 83.32%. This study aimed to automate identification of the malignant BI-RADS lexicon on breast ultrasound images to facilitate diagnosis and improve its quality. The evaluation showed that DeepLab v3+ with the ResNet-50 decoder was suitable for solving this problem, offering a better balance of performance and computational resource usage than a fully connected network and the other decoders.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
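
The reported mean accuracy, mean IU, and weighted IU follow the standard semantic-segmentation definitions, which can be computed from a per-class confusion matrix as sketched below; the paper's own evaluation code is not reproduced.

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """conf[i, j] = pixels of true class i predicted as class j.

    Assumes every class appears at least once in the ground truth."""
    t = conf.sum(axis=1)                    # pixels per true class
    diag = np.diag(conf)                    # correctly labelled pixels
    union = t + conf.sum(axis=0) - diag     # |A ∪ B| per class
    mean_acc = np.mean(diag / t)
    iu = diag / union
    mean_iu = np.mean(iu)
    weighted_iu = np.sum(t * iu) / t.sum()  # frequency-weighted IU
    return mean_acc, mean_iu, weighted_iu
```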

15 pages, 4672 KiB  
Article
An Effective Skin Cancer Classification Mechanism via Medical Vision Transformer
by Suliman Aladhadh, Majed Alsanea, Mohammed Aloraini, Taimoor Khan, Shabana Habib and Muhammad Islam
Sensors 2022, 22(11), 4008; https://doi.org/10.3390/s22114008 - 25 May 2022
Cited by 40 | Viewed by 5213
Abstract
Skin Cancer (SC) is considered the deadliest disease in the world, killing thousands of people every year. Early SC detection can increase the survival rate for patients up to 70%; hence, it is highly recommended that regular head-to-toe skin examinations be conducted to determine whether there are any signs or symptoms of SC. The use of Machine Learning (ML)-based methods is having a significant impact on the classification and detection of SC diseases. However, there are certain challenges associated with the accurate classification of these diseases, such as low detection accuracy, poor generalization of the models, and an insufficient amount of labeled data for training. To address these challenges, we developed a two-tier framework for the accurate classification of SC. In the first tier of the framework, we applied different data augmentation methods to increase the number of image samples for effective training. In the second tier, given the promising performance of the Medical Vision Transformer (MVT) in the analysis of medical images, we developed an MVT-based classification model for SC. The MVT splits the input image into image patches and feeds these patches to the transformer as a sequence, analogous to word embeddings. Finally, a Multi-Layer Perceptron (MLP) classifies the input image into the corresponding class. Based on the experimental results achieved on the Human Against Machine (HAM10000) dataset, we conclude that the proposed MVT-based model achieves better results than current state-of-the-art techniques for SC classification.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
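
The patch-embedding step described above can be sketched in a few lines of PyTorch; the 224-pixel input, 16-pixel patches, and 768-dimensional tokens are common vision-transformer defaults used here as assumptions, not the MVT's confirmed configuration.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, patch_size=16, in_ch=3, dim=768):
        super().__init__()
        # A strided convolution extracts and linearly projects each patch
        # in a single step, producing one token per patch.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                     # x: (B, C, H, W)
        x = self.proj(x)                      # (B, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, dim) token sequence

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```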

17 pages, 2406 KiB  
Article
An Adaptive Learning Model for Multiscale Texture Features in Polyp Classification via Computed Tomographic Colonography
by Weiguo Cao, Marc J. Pomeroy, Shu Zhang, Jiaxing Tan, Zhengrong Liang, Yongfeng Gao, Almas F. Abbasi and Perry J. Pickhardt
Sensors 2022, 22(3), 907; https://doi.org/10.3390/s22030907 - 25 Jan 2022
Cited by 7 | Viewed by 2889
Abstract
Objective: As an effective depiction of lesion heterogeneity, texture information extracted from computed tomography has become increasingly important in polyp classification. However, variation and redundancy among multiple texture descriptors make it challenging to integrate them into a general characterization. Considering these two problems, this work proposes an adaptive learning model to integrate multi-scale texture features. Methods: To mitigate feature variation, the whole feature set is geometrically split into several independent subsets that are ranked by a learning evaluation measure after preliminary classifications. To reduce feature redundancy, a bottom-up hierarchical learning framework is proposed to ensure a monotonic increase of classification performance while integrating these ranked sets selectively. Two types of classifiers, traditional (random forest + support vector machine) and convolutional neural network (CNN)-based, are employed to perform the polyp classification under the proposed framework, with extended Haralick measures and gray-level co-occurrence matrices (GLCM) as inputs, respectively. Experimental results are based on a retrospective dataset of 63 polyp masses (defined as greater than 3 cm in largest diameter), including 32 adenocarcinomas and 31 benign adenomas, from adult patients undergoing first-time computed tomography colonography who had corresponding histopathology of the detected masses. Results: We evaluate the performance of the proposed models by the area under the curve (AUC) of the receiver operating characteristic curve. The proposed models show encouraging performance, with an AUC score of 0.925 for the traditional classification method and 0.902 for the CNN. The proposed adaptive learning framework significantly outperforms nine well-established classification methods, including six traditional methods and three deep learning ones, by a large margin. Conclusions: The proposed adaptive learning model can combat the challenges of feature variation through a multiscale grouping of feature inputs, and of feature redundancy through a hierarchical sorting of these feature groups. The improved classification performance against comparative models demonstrates the feasibility and utility of this adaptive learning procedure for feature integration.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
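
A minimal sketch of the bottom-up integration step, under the assumption that a ranked subset is merged only when it does not degrade cross-validated performance, which yields the monotonic increase described above; the classifier and scoring choices are placeholders, not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def integrate_subsets(ranked_subsets, X, y):
    """ranked_subsets: lists of feature-column indices, best-ranked first."""
    selected, best = [], 0.0
    for subset in ranked_subsets:
        trial = selected + subset
        score = cross_val_score(RandomForestClassifier(random_state=0),
                                X[:, trial], y, cv=5).mean()
        if score >= best:               # keep only non-degrading merges
            selected, best = trial, score
    return selected, best
```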

14 pages, 21156 KiB  
Article
CAT: Centerness-Aware Anchor-Free Tracker
by Haoyi Ma, Scott T. Acton and Zongli Lin
Sensors 2022, 22(1), 354; https://doi.org/10.3390/s22010354 - 4 Jan 2022
Cited by 2 | Viewed by 2077
Abstract
Accurate and robust scale estimation in visual object tracking is a challenging task. To obtain a scale estimate of the target object, most methods rely either on a multi-scale searching scheme or on refining a set of predefined anchor boxes. These methods require heuristically selected parameters, such as the scale factors of the multi-scale searching scheme or the sizes and aspect ratios of the predefined candidate anchor boxes. In contrast, a centerness-aware anchor-free tracker (CAT) is designed in this work. First, the location and scale of the target object are predicted in an anchor-free fashion by decomposing tracking into parallel classification and regression problems. The proposed anchor-free design obviates the need for hyperparameters related to anchor boxes, making CAT more generic and flexible. Second, the proposed centerness-aware classification branch can identify the foreground from the background while predicting the normalized distance from a location within the foreground to the target center, i.e., the centerness. It improves tracking accuracy and robustness significantly by suppressing low-quality state estimates. Experiments show that our centerness-aware anchor-free tracker, with its appealing features, achieves salient performance in a wide variety of tracking scenarios.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
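
Below is a minimal sketch of a centerness target in the FCOS style, which matches the description above (the normalized distance from a foreground location to the target center); whether CAT uses this exact formula is an assumption.

```python
import math

def centerness(l: float, t: float, r: float, b: float) -> float:
    """l, t, r, b: distances from a location to the box's four sides."""
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

# A location at the box center scores 1.0; near an edge the score decays
# toward 0, so low-quality state estimates can be down-weighted.
print(centerness(50, 50, 50, 50))  # 1.0
print(centerness(5, 50, 95, 50))   # ~0.23
```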

19 pages, 3984 KiB  
Article
Study on Reconstruction and Feature Tracking of Silicone Heart 3D Surface
by Ziyan Zhang, Yan Liu, Jiawei Tian, Shan Liu, Bo Yang, Longhai Xiang, Lirong Yin and Wenfeng Zheng
Sensors 2021, 21(22), 7570; https://doi.org/10.3390/s21227570 - 14 Nov 2021
Cited by 16 | Viewed by 2268
Abstract
At present, feature-based 3D reconstruction and tracking technology is widely applied in the medical field. In minimally invasive surgery, the surgeon can perform three-dimensional reconstruction from the images obtained by the endoscope inside the human body, restore the three-dimensional scene of the area to be operated on, and track the motion of the soft-tissue surface. This enables doctors to have a clearer understanding of the depth of the surgical area, greatly reducing the negative impact of 2D image defects and helping ensure a smooth operation. In this study, the 3D coordinates of each feature point are first calculated using the parameters of the parallel binocular endoscope and spatial geometric constraints, while the discrete feature points are divided into multiple triangles using the Delaunay triangulation method. The 3D coordinates of the feature points and the triangulation are then combined to complete the 3D surface reconstruction. Combined with a feature matching method based on a convolutional neural network, feature tracking is realized by calculating the change in the three-dimensional coordinates of the same feature point across frames. Finally, experiments are carried out on endoscope images to complete the 3D surface reconstruction and feature tracking.
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
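
The two geometric steps described above can be sketched with the standard parallel-stereo disparity relation and SciPy's Delaunay triangulation; the camera intrinsics below are placeholder values, and the paper's exact reconstruction pipeline is not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical intrinsics: focal length (px), baseline (mm), principal point.
f, baseline, cx, cy = 800.0, 5.0, 320.0, 240.0

def triangulate(xl, yl, xr):
    """3D point from matched left/right pixel coords (parallel cameras).

    Assumes a positive disparity xl - xr."""
    d = xl - xr                  # disparity
    Z = f * baseline / d         # depth from the disparity-depth relation
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return np.array([X, Y, Z])

# Mesh the reconstructed feature points over their 2D image positions.
pts2d = np.array([[100, 80], [220, 90], [150, 200], [300, 220], [80, 260]])
tri = Delaunay(pts2d)
print(tri.simplices)             # triangles as index triplets
```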
