Deep Learning-Based Imaging and Sensing Technologies for Biomedical Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2021)

Special Issue Editor


Dr. Farzad Khalvati
Guest Editor
Department of Medical Imaging, Institute of Medical Science (IMS), University of Toronto, Toronto, Canada
Interests: machine learning; medical imaging; AI in medicine

Special Issue Information

Dear Colleagues,

With the advent of deep learning, Artificial Intelligence (AI) models, including convolutional neural networks (CNNs), have delivered promising results for health monitoring and for the detection and prediction of different diseases using biomedical imaging and sensing technologies. These technologies help improve patient outcomes by enabling personalized diagnostics, prognostics, and treatment, thereby improving patients' quality of life. Developing AI models for health monitoring and for disease diagnosis and prognosis with imaging and sensing technologies poses unique challenges, including the high accuracy, reliability, and explainability required of AI results in biomedical applications, and therefore calls for customized models that go beyond off-the-shelf, generic AI solutions. To bring state-of-the-art research together, research papers reporting novel AI-driven imaging and/or sensing technologies with clinical applications are invited for submission to this Special Issue. The scope of this Special Issue includes, but is not limited to:

  • AI-driven advances in biomedical optical imaging/sensing technologies (e.g., optical imaging, optical coherence tomography, near infrared spectroscopy, diffuse optical spectroscopy) for biomedical applications;
  • AI-driven advances in medical image analysis using deep learning for different imaging modalities including X-ray, CT, MRI, PET, ultrasound, etc.;
  • Advances in AI-based solutions for disease diagnosis and prognosis using imaging and/or sensing technologies;
  • Advances in AI explainability solutions for imaging and/or sensing technologies that address different aspects of AI explainability, including novel attention map generators as well as ways to interpret the results and integrate them into clinical settings.

Dr. Farzad Khalvati
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biomedical imaging and sensing;
  • Deep learning architectures:
    • Convolutional neural networks;
    • Generative adversarial networks;
    • Autoencoders;
    • etc.
  • Attention networks;
  • Explainable AI in imaging and sensing technologies;
  • Reliable AI in imaging and sensing for biomedical applications;
  • Explainability vs. performance;
  • Human-centric explainable AI in imaging and sensing.

Published Papers (1 paper)


Research

16 pages
Article
Automatic Hyoid Bone Tracking in Real-Time Ultrasound Swallowing Videos Using Deep Learning Based and Correlation Filter Based Trackers
by Shurui Feng, Queenie-Tsung-Kwan Shea, Kwok-Yan Ng, Cheuk-Ning Tang, Elaine Kwong and Yongping Zheng
Sensors 2021, 21(11), 3712; https://doi.org/10.3390/s21113712 - 26 May 2021
Cited by 13
Abstract
(1) Background: Ultrasound provides a radiation-free and portable method for assessing swallowing. Hyoid bone locations and displacements are often used as important indicators for the evaluation of swallowing disorders. However, this requires clinicians to spend a great deal of time reviewing the ultrasound images. (2) Methods: In this study, we applied tracking algorithms based on deep learning and correlation filters to detect hyoid locations in ultrasound videos collected during swallowing. Fifty videos were collected from 10 young, healthy subjects for training, evaluation, and testing of the trackers. (3) Results: The best-performing deep learning algorithm, Fully-Convolutional Siamese Networks (SiamFC), proved to have reliable performance in obtaining accurate hyoid bone locations from each frame of the swallowing ultrasound videos. While running at a real-time frame rate (175 fps) on an RTX 2060, SiamFC also achieved a precision of 98.9% at a threshold of 10 pixels (3.25 mm) and 80.5% at a threshold of 5 pixels (1.63 mm). The tracker’s root-mean-square error and average error were 3.9 pixels (1.27 mm) and 3.3 pixels (1.07 mm), respectively. (4) Conclusions: Our results pave the way for real-time automatic tracking of the hyoid bone in ultrasound videos for swallowing assessment.
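
To make the reported metrics concrete, the following is a minimal, hypothetical Python/NumPy sketch (not code from the paper) of how precision at a pixel threshold, root-mean-square error, and average error can be computed from per-frame predicted and ground-truth hyoid coordinates. The function name, the synthetic data, and the 0.325 mm-per-pixel factor (inferred from the abstract's pixel-to-millimetre conversions) are assumptions for illustration.

import numpy as np

def evaluate_tracker(pred_xy, gt_xy, mm_per_pixel=0.325):
    # pred_xy, gt_xy: arrays of shape (n_frames, 2) holding (x, y) hyoid
    # centre coordinates in pixels for each video frame.
    err = np.linalg.norm(pred_xy - gt_xy, axis=1)  # per-frame Euclidean error
    return {
        # Precision at threshold t: fraction of frames with error <= t pixels.
        "precision@10px": float(np.mean(err <= 10.0)),
        "precision@5px": float(np.mean(err <= 5.0)),
        "rmse_px": float(np.sqrt(np.mean(err ** 2))),
        "mean_err_px": float(err.mean()),
        # Convert to millimetres using the assumed pixel spacing.
        "mean_err_mm": float(err.mean() * mm_per_pixel),
    }

# Quick check with synthetic data (hypothetical, for illustration only):
rng = np.random.default_rng(0)
gt = rng.uniform(0, 256, size=(500, 2))           # ground-truth locations
pred = gt + rng.normal(scale=2.5, size=gt.shape)  # noisy "tracker" output
print(evaluate_tracker(pred, gt))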
