Artificial Intelligence in Biomedical Imaging and Biomedical Signal Processing

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 October 2024

Special Issue Editor


Prof. Dr. Zvi Friedman
Guest Editor
Faculty of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
Interests: biomedical engineering; ultrasound imaging; biomedical imaging; biomedical signal processing

Special Issue Information

Dear Colleagues,

Biomedical imaging is of great importance in medical diagnosis. The fast and accurate detection of particular illnesses is critical; for example, detecting cancerous tumors at an early stage can enable proper treatment of the disease. The tremendous advances in artificial intelligence over recent decades have dramatically changed the field of biomedical image processing and of diagnosis based on medical imaging. Nowadays, artificial intelligence is used extensively in biomedical imaging; for example, AI is applied to image classification, image segmentation, image retrieval, and image fusion for various types of medical images, such as X-rays, MRIs, and CT scans.

In this Special Issue of Bioengineering, "Artificial Intelligence in Biomedical Imaging and Biomedical Signal Processing", we invite submissions of original research papers and comprehensive surveys that explore the application of artificial intelligence in medical image processing. The major topics of interest for this Special Issue include (but are not limited to):

  • Medical image segmentation;
  • Medical image classification;
  • Knowledge extraction from medical images;
  • Novel architectures for the application of deep learning in medical image processing;
  • Pattern recognition in biomedical signals;
  • Application of AI in image-guided surgery;
  • Application of AI in medical diagnosis (e.g., cancerous tumor detection and skin lesion classification);
  • Biomedical image retrieval;
  • Biomedical image fusion;
  • Biomedical image watermarking.

Prof. Dr. Zvi Friedman
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biomedical imaging
  • biomedical signal processing
  • machine learning
  • deep learning
  • image segmentation
  • image classification
  • image retrieval
  • knowledge extraction

Published Papers (2 papers)


Research

14 pages, 7020 KiB  
Article
Automated Restarting Fast Proximal Gradient Descent Method for Single-View Cone-Beam X-ray Luminescence Computed Tomography Based on Depth Compensation
by Peng Gao, Huangsheng Pu, Tianshuai Liu, Yilin Cao, Wangyang Li, Shien Huang, Ruijing Li, Hongbing Lu and Junyan Rong
Bioengineering 2024, 11(2), 123; https://doi.org/10.3390/bioengineering11020123 - 26 Jan 2024
Abstract
Single-view cone-beam X-ray luminescence computed tomography (CB-XLCT) has recently gained attention as a highly promising imaging technique that allows for the efficient and rapid three-dimensional visualization of nanophosphor (NP) distributions in small animals. However, the reconstruction performance is hindered by the ill-posed nature of the inverse problem and the effects of depth variation, as only a single view is acquired. To tackle this issue, we present a methodology that integrates an automated restarting strategy with depth compensation to achieve reconstruction. The present study employs a fast proximal gradient descent (FPGD) method, incorporating L0-norm regularization, to achieve efficient reconstruction with accelerated convergence. The proposed approach offers the benefit of retrieving neighboring multitarget distributions without the need for CT priors. Additionally, the automated restarting strategy ensures reliable reconstructions without the need for manual intervention. Numerical simulations and physical phantom experiments were conducted using a custom CB-XLCT system to demonstrate the accuracy of the proposed method in resolving adjacent NPs. The results showed that this method had the lowest relative error compared with other few-view techniques. This study represents significant progress in the development of practical single-view CB-XLCT for high-resolution 3D biomedical imaging.
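The abstract names the core algorithmic ingredients (fast proximal gradient descent, L0-norm regularization, and an automated restart test) but, naturally, not their implementation. As a rough, hypothetical sketch of how those ingredients combine on a generic sparse linear inverse problem y = Ax — not the paper's actual CB-XLCT forward model, and without its depth compensation — one could write:

```python
import numpy as np

def fpgd_l0_restart(A, y, k, n_iter=500, tol=1e-8):
    """FISTA-style fast proximal gradient descent with a hard-thresholding
    (L0-type) prox and gradient-based adaptive restart.

    Hypothetical toy sketch: recover a k-sparse x from y = A @ x. This
    omits the paper's CB-XLCT forward model and depth compensation.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_prev = x
        grad = A.T @ (A @ z - y)             # gradient of 0.5 * ||A z - y||^2
        w = z - grad / L                     # plain gradient step
        x = np.zeros_like(w)                 # L0 prox: keep the k largest entries
        idx = np.argsort(np.abs(w))[-k:]
        x[idx] = w[idx]
        # Automated restart: drop the momentum whenever it points against
        # the descent direction (the standard adaptive-restart test).
        if np.dot(z - x, x - x_prev) > 0:
            t = 1.0
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + ((t - 1.0) / t_next) * (x - x_prev)
        t = t_next
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x
```

The restart test resets the momentum whenever the extrapolation direction opposes descent, which is what allows such schemes to run without hand-tuned restart schedules — presumably the sense in which the paper's strategy is "automated".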

17 pages, 4006 KiB  
Article
CL-SPO2Net: Contrastive Learning Spatiotemporal Attention Network for Non-Contact Video-Based SpO2 Estimation
by Jiahe Peng, Weihua Su, Haiyong Chen, Jingsheng Sun and Zandong Tian
Bioengineering 2024, 11(2), 113; https://doi.org/10.3390/bioengineering11020113 - 24 Jan 2024
Abstract
Video-based peripheral oxygen saturation (SpO2) estimation, using only RGB cameras, offers a non-contact approach to measuring blood oxygen levels. Previous studies assumed a stable, unchanging environment as a premise for non-contact blood oxygen estimation. Additionally, they utilized a small amount of labeled data for system training and learning. However, it is challenging to train optimal model parameters with a small dataset, and the accuracy of blood oxygen detection is easily affected by ambient light and subject movement. To address these issues, this paper proposes a contrastive learning spatiotemporal attention network (CL-SPO2Net), an innovative semi-supervised network for video-based SpO2 estimation. Spatiotemporal similarities in remote photoplethysmography (rPPG) signals were found in video segments containing facial or hand regions. Subsequently, integrating deep neural networks with machine-learning expertise enabled the estimation of SpO2. The method proved feasible with small-scale labeled datasets, achieving a mean absolute error between the camera-based estimate and the reference pulse oximeter of 0.85% in a stable environment, 1.13% under lighting fluctuations, and 1.20% under facial rotation.
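The network itself cannot be reconstructed from the abstract, but the physical principle underlying video-based SpO2 estimation — comparing the pulsatile (AC) to baseline (DC) intensity ratio in two color channels — can be sketched. The following hypothetical baseline implements the classical "ratio of ratios" estimate; it is not CL-SPO2Net, and the calibration constants a and b are placeholders that would need fitting against a reference oximeter:

```python
import numpy as np

def spo2_ratio_of_ratios(red_trace, blue_trace, a=100.0, b=5.0):
    """Classical ratio-of-ratios SpO2 baseline from per-frame mean
    channel intensities of a skin region.

    Hypothetical sketch only: this is NOT CL-SPO2Net. `a` and `b` are
    placeholder calibration constants that in practice must be fitted
    against a reference pulse oximeter.
    """
    def ac_over_dc(trace):
        dc = np.mean(trace)                 # baseline (non-pulsatile) level
        ac = np.std(trace - dc)             # strength of the pulsatile component
        return ac / dc

    rr = ac_over_dc(red_trace) / ac_over_dc(blue_trace)  # ratio of ratios
    return a - b * rr                       # linear calibration model
```

Roughly speaking, a semi-supervised model of the kind described above learns this mapping — and its robustness to lighting changes and motion — from spatiotemporal rPPG features rather than relying on fixed calibration constants.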
