Journal Browser

Vision- and Image-Based Biomedical Diagnostics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 15091

Special Issue Editors


Prof. Dr. Alexander Wong
Guest Editor
Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Interests: computer vision; medical imaging; artificial intelligence; machine learning; deep learning

Dr. Pengcheng Xi
Guest Editor
National Research Council Canada, 1200 Montréal Road, Ottawa, ON K1A 0R6, Canada
Interests: machine learning; computer vision; deep learning; signal processing; geometry processing

Dr. Ashkan Ebadi
Guest Editor
National Research Council Canada, 1200 Montréal Road, Ottawa, ON K1A 0R6, Canada
Interests: artificial intelligence; machine learning; deep learning; network analysis; computer vision; text analytics

Special Issue Information

Dear Colleagues,

Recent advances in biomedical imaging have not only greatly increased visual fidelity, but have also yielded insights into disease that were previously impossible to capture, supporting improved diagnostics and clinical decision support. In parallel, significant advances have been made in image analysis and artificial intelligence techniques for extracting critical visual information from biomedical imaging data and leveraging that information to predict both the presence and the severity of disease in support of clinical diagnosis and decision-making. The goal of this Special Issue on vision- and image-based biomedical diagnostics is to promote recent technical advances in all relevant aspects, ranging from improved biomedical imaging systems for diagnosis to image analysis techniques and artificial intelligence algorithms for diagnostic prediction. Topics of interest to this Special Issue include, but are not limited to, the following:

  • Biomedical imaging systems and techniques for diagnosis purposes;
  • Machine learning and AI algorithms for diagnosis and prognosis of disease;
  • Image analysis algorithms (segmentation, region-of-interest detection, feature extraction, etc.) for supporting diagnosis and prognosis of disease;
  • Intelligent systems for vision- and image-based biomedical diagnostics;
  • Multimodal fusion methods for combining information from different imaging systems and/or a mix of imaging and clinical metadata.
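As a minimal illustration of the last topic, decision-level (late) fusion of an imaging model and a clinical-metadata model can be as simple as a weighted average of their per-class scores. The function, weights, and probabilities below are hypothetical and not drawn from any paper in this issue:

```python
import numpy as np

def late_fusion(image_scores, clinical_scores, w_img=0.5, w_clin=0.5):
    """Combine per-class scores from an imaging model and a clinical-metadata
    model by weighted averaging (decision-level fusion)."""
    image_scores = np.asarray(image_scores, dtype=float)
    clinical_scores = np.asarray(clinical_scores, dtype=float)
    return w_img * image_scores + w_clin * clinical_scores

# Hypothetical per-class probabilities from two models for one patient.
p_imaging = [0.7, 0.3]   # e.g. [disease, healthy] from an image classifier
p_clinical = [0.9, 0.1]  # from a model on age, labs, symptoms, etc.
fused = late_fusion(p_imaging, p_clinical)
print(fused)  # [0.8 0.2]
```

Feature-level fusion, i.e., concatenating learned embeddings before a shared classifier, is the common deep-learning alternative when both models can be trained jointly.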

Prof. Dr. Alexander Wong
Dr. Pengcheng Xi
Dr. Ashkan Ebadi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical
  • medical
  • imaging
  • image
  • vision
  • diagnostics
  • prognosis
  • artificial intelligence
  • machine learning
  • imaging modalities
  • signal processing

Published Papers (9 papers)


Research

18 pages, 4357 KiB  
Article
Segmentation of Low-Light Optical Coherence Tomography Angiography Images under the Constraints of Vascular Network Topology
by Zhi Li, Gaopeng Huang, Binfeng Zou, Wenhao Chen, Tianyun Zhang, Zhaoyang Xu, Kunyan Cai, Tingyu Wang, Yaoqi Sun, Yaqi Wang, Kai Jin and Xingru Huang
Sensors 2024, 24(3), 774; https://doi.org/10.3390/s24030774 - 25 Jan 2024
Cited by 1 | Viewed by 859
Abstract
Optical coherence tomography angiography (OCTA) offers critical insights into the retinal vascular system, yet its full potential is hindered by challenges in precise image segmentation. Current methodologies struggle with imaging artifacts and clarity issues, particularly under low-light conditions and when using various high-speed CMOS sensors. These challenges are particularly pronounced when diagnosing and classifying diseases such as branch vein occlusion (BVO). To address these issues, we have developed a novel network based on topological structure generation, which transitions from superficial to deep retinal layers to enhance OCTA segmentation accuracy. Our approach not only demonstrates improved performance through qualitative visual comparisons and quantitative metric analyses but also effectively mitigates artifacts caused by low-light OCTA, resulting in reduced noise and enhanced clarity of the images. Furthermore, our system introduces a structured methodology for classifying BVO diseases, bridging a critical gap in this field. The primary aim of these advancements is to elevate the quality of OCTA images and bolster the reliability of their segmentation. Initial evaluations suggest that our method holds promise for establishing robust, fine-grained standards in OCTA vascular segmentation and analysis. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)
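The topology-constrained network described above is far more sophisticated, but a minimal sketch can show why topology is a useful signal for vessel segmentation: a healthy retinal vascular network is largely connected, so counting connected components in a binary vessel mask flags fragmented predictions. The pure-Python example below (4-connectivity, toy data) is an illustration, not the paper's method:

```python
from collections import deque

def count_components(mask):
    """Count 4-connected foreground components in a binary 2D mask.
    A well-segmented vessel network is typically one connected tree, so a
    high component count can flag topologically broken segmentations."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                components += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return components

# A fragmented "vessel": two disjoint segments -> 2 components.
mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1]]
print(count_components(mask))  # 2
```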

15 pages, 12954 KiB  
Article
Towards Building a Trustworthy Deep Learning Framework for Medical Image Analysis
by Kai Ma, Siyuan He, Grant Sinha, Ashkan Ebadi, Adrian Florea, Stéphane Tremblay, Alexander Wong and Pengcheng Xi
Sensors 2023, 23(19), 8122; https://doi.org/10.3390/s23198122 - 27 Sep 2023
Viewed by 1475
Abstract
Computer vision and deep learning have the potential to improve medical artificial intelligence (AI) by assisting in diagnosis, prediction, and prognosis. However, the application of deep learning to medical image analysis is challenging due to limited data availability and imbalanced data. While model performance is undoubtedly essential for medical image analysis, model trust is equally important. To address these challenges, we propose TRUDLMIA, a trustworthy deep learning framework for medical image analysis, which leverages image features learned through self-supervised learning and utilizes a novel surrogate loss function to build trustworthy models with optimal performance. The framework is validated on three benchmark data sets for detecting pneumonia, COVID-19, and melanoma, and the created models prove to be highly competitive, even outperforming those designed specifically for the tasks. Furthermore, we conduct ablation studies, cross-validation, and result visualization and demonstrate the contribution of proposed modules to both model performance (up to 21%) and model trust (up to 5%). We expect that the proposed framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises, improving patient outcomes, increasing diagnostic accuracy, and enhancing the overall quality of healthcare delivery. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)

18 pages, 5044 KiB  
Article
Deep Learning-Based Non-Contact IPPG Signal Blood Pressure Measurement Research
by Hanquan Cheng, Jiping Xiong, Zehui Chen and Jingwei Chen
Sensors 2023, 23(12), 5528; https://doi.org/10.3390/s23125528 - 13 Jun 2023
Cited by 1 | Viewed by 2330
Abstract
In this paper, a multi-stage deep learning blood pressure prediction model based on imaging photoplethysmography (IPPG) signals is proposed to achieve accurate and convenient monitoring of human blood pressure. A camera-based non-contact human IPPG signal acquisition system is designed. The system can perform experimental acquisition under ambient light, effectively reducing the cost of non-contact pulse wave signal acquisition while simplifying the operation process. Using this system, the authors construct IPPG-BP, the first open-source dataset pairing IPPG signals with blood pressure data, and design a multi-stage blood pressure estimation model combining a convolutional neural network and a bidirectional gated recurrent neural network. The results of the model conform to both the BHS and AAMI international standards. Compared with other blood pressure estimation methods, the multi-stage model automatically extracts features through a deep learning network and combines different morphological features of diastolic and systolic waveforms, which reduces the workload while improving accuracy. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)
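The paper's acquisition pipeline and multi-stage network are not reproduced in this listing. As a minimal sketch of the core IPPG idea, a pulse waveform can be approximated by spatially averaging the green channel of each video frame over a skin region, after which the dominant frequency gives the pulse rate. The clip below is synthetic and the function name is hypothetical:

```python
import numpy as np

def ippg_signal(frames):
    """Extract a crude IPPG waveform from a stack of RGB skin-ROI frames by
    spatially averaging the green channel of each frame (the green channel
    is most sensitive to blood-volume changes)."""
    frames = np.asarray(frames, dtype=float)   # shape (T, H, W, 3)
    return frames[..., 1].mean(axis=(1, 2))    # one sample per frame

# Synthetic 100-frame clip at 30 fps: green channel pulsates at 1.2 Hz (72 bpm).
fps = 30.0
t = np.arange(100) / fps
green = 120 + 5 * np.sin(2 * np.pi * 1.2 * t)
frames = np.zeros((100, 8, 8, 3))
frames[..., 1] = green[:, None, None]

signal = ippg_signal(frames)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
peak_hz = np.fft.rfftfreq(len(signal), 1 / fps)[np.argmax(spectrum)]
print(round(peak_hz * 60))  # prints 72 (beats per minute)
```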

18 pages, 9841 KiB  
Article
COVID-Net USPro: An Explainable Few-Shot Deep Prototypical Network for COVID-19 Screening Using Point-of-Care Ultrasound
by Jessy Song, Ashkan Ebadi, Adrian Florea, Pengcheng Xi, Stéphane Tremblay and Alexander Wong
Sensors 2023, 23(5), 2621; https://doi.org/10.3390/s23052621 - 27 Feb 2023
Cited by 6 | Viewed by 1606
Abstract
As the Coronavirus Disease 2019 (COVID-19) continues to impact many aspects of life and the global healthcare systems, the adoption of rapid and effective screening methods to prevent the further spread of the virus and lessen the burden on healthcare providers is a necessity. As a cheap and widely accessible medical imaging modality, point-of-care ultrasound (POCUS) imaging allows radiologists to identify symptoms and assess severity through visual inspection of chest ultrasound images. Combined with recent advancements in computer science, applications of deep learning techniques in medical image analysis have shown promising results, demonstrating that artificial intelligence-based solutions can accelerate the diagnosis of COVID-19 and lower the burden on healthcare professionals. However, the lack of large, well-annotated datasets poses a challenge in developing effective deep neural networks, especially in the case of rare diseases and new pandemics. To address this issue, we present COVID-Net USPro, an explainable few-shot deep prototypical network that is designed to detect COVID-19 cases from very few ultrasound images. Through intensive quantitative and qualitative assessments, the network not only demonstrates high performance in identifying COVID-19-positive cases but, through an explainability component, is also shown to make decisions based on the actual representative patterns of the disease. Specifically, COVID-Net USPro achieves 99.55% overall accuracy, 99.93% recall, and 99.83% precision for COVID-19-positive cases when trained with only five shots. In addition to the quantitative performance assessment, our contributing clinician with extensive experience in POCUS interpretation verified the analytic pipeline and results, ensuring that the network's decisions are based on clinically relevant image patterns integral to COVID-19 diagnosis. We believe that network explainability and clinical validation are integral components for the successful adoption of deep learning in the medical field. As part of the COVID-Net initiative, and to promote reproducibility and foster further innovation, the network is open-sourced and available to the public. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)
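At the heart of a prototypical network such as COVID-Net USPro is a simple rule: average the embeddings of the few labelled "shots" per class into prototypes, then assign each query to the nearest prototype. The NumPy sketch below uses toy 2-D embeddings (hypothetical data, not the paper's encoder) to show that rule in isolation:

```python
import numpy as np

def prototypes(support_embeddings, support_labels):
    """Compute one mean embedding (prototype) per class from a few
    labelled support examples."""
    labels = np.unique(support_labels)
    protos = np.stack([support_embeddings[support_labels == c].mean(axis=0)
                       for c in labels])
    return labels, protos

def classify(queries, labels, protos):
    """Assign each query embedding to the class of its nearest
    (Euclidean-distance) prototype."""
    d = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return labels[np.argmin(d, axis=1)]

# Five-shot toy example: two well-separated classes in 2-D embedding space.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 0.1, (5, 2)),   # class 0 near (0, 0)
                     rng.normal(1, 0.1, (5, 2))])  # class 1 near (1, 1)
y_support = np.array([0] * 5 + [1] * 5)
labels, protos = prototypes(support, y_support)

queries = np.array([[0.05, -0.02], [0.95, 1.03]])
print(classify(queries, labels, protos))  # [0 1]
```

In the actual few-shot setting, the embeddings would come from a trained encoder; only the nearest-prototype decision rule is shown here.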

25 pages, 8475 KiB  
Article
Neural Networks Application for Accurate Retina Vessel Segmentation from OCT Fundus Reconstruction
by Tomasz Marciniak, Agnieszka Stankiewicz and Przemyslaw Zaradzki
Sensors 2023, 23(4), 1870; https://doi.org/10.3390/s23041870 - 7 Feb 2023
Cited by 3 | Viewed by 1753
Abstract
The use of neural networks for retinal vessel segmentation has gained significant attention in recent years. Most of the research related to the segmentation of retinal blood vessels is based on fundus images. In this study, we examine five neural network architectures to accurately segment vessels in fundus images reconstructed from 3D OCT scan data. OCT-based fundus reconstructions are of much lower quality compared to color fundus photographs due to noise and lower, disproportionate resolution. The fundus image reconstruction process was performed based on the segmentation of the retinal layers in B-scans. Three reconstruction variants were proposed, which were then used in the process of detecting blood vessels using neural networks. We evaluated performance using a custom dataset of 24 3D OCT scans (with manual annotations performed by an ophthalmologist) using 6-fold cross-validation and demonstrated segmentation accuracy up to 98%. Our results indicate that the use of neural networks is a promising approach to segmenting retinal vessels from a properly reconstructed fundus image. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)

13 pages, 1618 KiB  
Article
Intraprocedure Artificial Intelligence Alert System for Colonoscopy Examination
by Chen-Ming Hsu, Chien-Chang Hsu, Zhe-Ming Hsu, Tsung-Hsing Chen and Tony Kuo
Sensors 2023, 23(3), 1211; https://doi.org/10.3390/s23031211 - 20 Jan 2023
Cited by 3 | Viewed by 1845
Abstract
Colonoscopy is a valuable tool for preventing and reducing the incidence and mortality of colorectal cancer. Although several computer-aided colorectal polyp detection and diagnosis systems have been proposed for clinical application, many remain susceptible to interference problems, including low image clarity, unevenness, and low accuracy for the analysis of dynamic images; these drawbacks affect the robustness and practicality of these systems. This study proposed an intraprocedure alert system for colonoscopy examination developed on the basis of deep learning. The proposed system features blurred image detection, foreign body detection, and polyp detection modules facilitated by convolutional neural networks. The training and validation datasets included high-quality images and low-quality images, including blurred images and those containing folds, fecal matter, and opaque water. For the detection of blurred images and images containing folds, fecal matter, and opaque water, the accuracy rate was 96.2%. Furthermore, the study results indicated a per-polyp detection accuracy of 100% when the system was applied to video images. The recall rates for high-quality image frames and polyp image frames were 95.7% and 92%, respectively. The overall alert accuracy rate and the low-quality false-positive rate for video images obtained through per-frame analysis were 95.3% and 0.18%, respectively. The proposed system can be used to alert colonoscopists to the need to slow their procedural speed or to perform flushing or lumen inflation in cases where the colonoscope is being moved too rapidly, where fecal residue is present in the intestinal tract, or where the colon has been inadequately distended. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)
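Blurred-frame detection, one module of the alert system above, is often approximated with a no-reference sharpness measure; the variance of a Laplacian response is a classic choice. The sketch below uses a 4-neighbour Laplacian and synthetic frames as an illustration of the measure, not the paper's CNN-based detector, and the rejection threshold in practice would be tuned per device:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian response over the interior of a
    grayscale image; low values indicate a blurred (low-detail) frame."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]     # vertical neighbours
           + g[1:-1, :-2] + g[1:-1, 2:])    # horizontal neighbours
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.integers(0, 256, (64, 64)).astype(float)  # high-frequency detail
blurred = np.full((64, 64), 128.0)                    # featureless frame
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```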

12 pages, 855 KiB  
Article
Abdominal Aortic Thrombus Segmentation in Postoperative Computed Tomography Angiography Images Using Bi-Directional Convolutional Long Short-Term Memory Architecture
by Younhyun Jung, Suhyeon Kim, Jihu Kim, Byunghoon Hwang, Sungmin Lee, Eun Young Kim, Jeong Ho Kim and Hyoseok Hwang
Sensors 2023, 23(1), 175; https://doi.org/10.3390/s23010175 - 24 Dec 2022
Viewed by 1326
Abstract
Abdominal aortic aneurysm (AAA) is a fatal clinical condition with high mortality. Computed tomography angiography (CTA) imaging is the preferred minimally invasive modality for the long-term postoperative observation of AAA. Accurate segmentation of the thrombus region of interest (ROI) in a postoperative CTA image volume is essential for quantitative assessment and rapid clinical decision making by clinicians. A few investigators have proposed the adoption of convolutional neural networks (CNNs). Although these methods demonstrated the potential of CNN architectures by automating the thrombus ROI segmentation, the segmentation performance can be further improved. The existing methods performed the segmentation process independently per 2D image and were incapable of using adjacent images, which could be useful for the robust segmentation of thrombus ROIs. In this work, we propose a thrombus ROI segmentation method that utilizes not only the spatial features of a target image, but also the volumetric coherence available from adjacent images. We newly adopted a recurrent neural network, the bi-directional convolutional long short-term memory (Bi-CLSTM) architecture, which can learn coherence between a sequence of data. This coherence learning capability can be useful in challenging situations; for example, when the target image exhibits inherent postoperative artifacts and noise, the inclusion of adjacent images facilitates learning more robust features for thrombus ROI segmentation. We demonstrate the segmentation capability of our Bi-CLSTM-based method with a comparison to the existing 2D-based thrombus ROI segmentation counterpart as well as other established 2D- and 3D-based alternatives. Our comparison is based on a large-scale clinical dataset of 60 patient studies (i.e., 60 CTA image volumes). The results suggest the superior segmentation performance of our Bi-CLSTM-based method, which achieved the highest scores on the evaluation metrics; e.g., our Bi-CLSTM results were 0.0331 higher in total overlap and 0.0331 lower in false negatives than those of 2D U-net++, the second-best method. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)

12 pages, 5422 KiB  
Article
Experimental Validation of Shifted Position-Diffuse Reflectance Imaging (SP-DRI) on Optical Phantoms
by Moritz Späth, Alexander Romboy, Ijeoma Nzenwata, Maximilian Rohde, Dongqin Ni, Lisa Ackermann, Florian Stelzle, Martin Hohmann and Florian Klämpfl
Sensors 2022, 22(24), 9880; https://doi.org/10.3390/s22249880 - 15 Dec 2022
Viewed by 1027
Abstract
Numerous diseases such as hemorrhage, sepsis, or cardiogenic shock induce heterogeneous perfusion of the capillaries. To detect such alterations in the human blood flow pattern, diagnostic devices must provide an appropriately high spatial resolution. Shifted position-diffuse reflectance imaging (SP-DRI), an all-optical diagnostic technique, has the potential to do so. So far, SP-DRI has mainly been developed using Monte Carlo simulations. The present study therefore validates this algorithm experimentally on realistic optical phantoms with thread structures down to 10 μm in diameter; an SP-DRI sensor prototype was developed and realized by means of additive manufacturing. SP-DRI proved functional within this experimental framework. The position of the structures within the optical phantoms becomes clearly visible using SP-DRI, and the structure thickness is reflected as modulation in the SP-DRI signal amplitude; this held for shifts along both the x and y axes. Moreover, SP-DRI successfully masked the pronounced influence of the illumination cone on the data. The algorithm proved significantly superior to mere raw data inspection. Within the scope of the study, the constructive design of the SP-DRI sensor prototype is discussed and potential for improvement is explored. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)

8 pages, 2376 KiB  
Article
Motion Analysis of the Extensor Carpi Ulnaris in Triangular Fibrocartilage Complex Injury Using Ultrasonography Images
by Shuya Tanaka, Atsuyuki Inui, Yutaka Mifune, Hanako Nishimoto, Tomoya Yoshikawa, Issei Shinohara, Takahiro Furukawa, Tatsuo Kato, Masaya Kusunose and Ryosuke Kuroda
Sensors 2022, 22(21), 8216; https://doi.org/10.3390/s22218216 - 27 Oct 2022
Cited by 2 | Viewed by 1633
Abstract
The subsheath of the extensor carpi ulnaris (ECU) tendon, a component of the triangular fibrocartilage complex (TFCC), is particularly important as it dynamically stabilizes the distal radioulnar joint. However, the relationship between TFCC injury and ECU dynamics remains unclear. This study aimed to analyze ECU movement and morphology using ultrasonography (US) images. Twenty wrists of patients with TFCC injury, who underwent TFCC repair, were included in the injury group, and 20 wrists of healthy volunteers were in the control group. For static image analysis, curvature and linearity ratios of the ECU in US long-axis images captured during radioulnar deviation were analyzed. For dynamic analysis of the ECU, the wrist was moved from radial deviation to ulnar deviation at a constant speed, and the velocity of the tendon was analyzed using particle image velocimetry. The static analysis showed that the ECU tendon was more curved in ulnar deviation in the injury group than in the control group, and the dynamic analysis showed that only vertical velocity toward the deep side during ulnar deviation was higher in the injury group. These results suggest that TFCC injury caused ECU curvature during ulnar deviation and increased the vertical velocity of the ECU during wrist deviation. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)
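Particle image velocimetry, used above for the dynamic tendon analysis, estimates motion by cross-correlating interrogation windows from consecutive frames: the location of the correlation peak gives the displacement. A minimal FFT-based sketch for a single window pair follows (synthetic speckle, integer-pixel shifts only; real PIV adds sub-pixel peak fitting and many windows per frame):

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the integer (dy, dx) shift between two interrogation windows
    via FFT-based circular cross-correlation, as in particle image
    velocimetry. The peak of the correlation map marks the displacement."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                         s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # Map wrap-around indices to signed shifts.
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

# Synthetic speckle pattern shifted by (3, -2) pixels (with wrap-around).
rng = np.random.default_rng(2)
a = rng.random((32, 32))
b = np.roll(a, (3, -2), axis=(0, 1))
print(piv_displacement(a, b))  # (3, -2)
```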
