
Multi-Sensor Fusion in Medical Imaging, Diagnosis and Therapy

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 20 November 2024 | Viewed by 1435

Special Issue Editor


Dr. Xuming Zhang
Guest Editor
College of Life Science & Technology, Huazhong University of Science and Technology, Wuhan, China
Interests: medical image processing; artificial intelligence for medical diagnosis; surgical guidance; surgical robots

Special Issue Information

Dear Colleagues,

Multi-sensor fusion plays an important role in medical imaging, diagnosis and therapy. With the advancement of imaging modalities such as ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET), the fusion of data from multiple modalities has attracted wide interest among researchers. Effective fusion algorithms have a significant influence on the quality of the fused data, and thereby on the final diagnosis and therapy. Traditional fusion methods based on sparse representation and multi-scale decomposition have been explored in depth. Deep learning-based fusion methods can generally deliver efficient data fusion by combining convolutional neural networks or transformers with unsupervised loss functions. Beyond research on fusion methods themselves, great efforts have been made to apply fusion to disease diagnosis and therapy, such as PET-CT fusion for lung cancer detection and MR-ultrasound fusion for targeted prostate biopsy. The aim of this Special Issue, titled “Multi-Sensor Fusion in Medical Imaging, Diagnosis and Therapy”, is therefore to collect high-quality research papers on multi-sensor fusion methods and their application to disease diagnosis and therapy.

Dr. Xuming Zhang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data fusion
  • sparse representation
  • multi-scale decomposition
  • machine learning
  • deep learning model
  • convolutional neural network
  • transformer
  • disease diagnosis and therapy

Published Papers (2 papers)


Research

24 pages, 2259 KiB  
Article
CIRF: Coupled Image Reconstruction and Fusion Strategy for Deep Learning Based Multi-Modal Image Fusion
by Junze Zheng, Junyan Xiao, Yaowei Wang and Xuming Zhang
Sensors 2024, 24(11), 3545; https://doi.org/10.3390/s24113545 - 30 May 2024
Abstract
Multi-modal medical image fusion (MMIF) is crucial for disease diagnosis and treatment because images reconstructed from signals collected by different sensors can provide complementary information. In recent years, deep learning (DL)-based methods have been widely used in MMIF. However, these methods often adopt a serial fusion strategy without feature decomposition, causing error accumulation and confusion of characteristics across different scales. To address these issues, we propose the Coupled Image Reconstruction and Fusion (CIRF) strategy. Our method runs the image fusion and reconstruction branches in parallel, linked by a common encoder. Firstly, CIRF uses the lightweight encoder to extract base and detail features through the Vision Transformer (ViT) and Convolutional Neural Network (CNN) branches, respectively, with the two branches interacting to supplement each other's information. Then, the two types of features are fused separately via different blocks and finally decoded into fusion results. The loss function includes both the supervised loss from the reconstruction branch and the unsupervised loss from the fusion branch. As a whole, CIRF increases its expressivity through multi-task learning and feature decomposition. Additionally, we explore the impact of image masking on the network's feature extraction ability and validate the generalization capability of the model. Experiments on three datasets demonstrate, both subjectively and objectively, that images fused by CIRF exhibit appropriate brightness and smooth edge transitions, with more competitive evaluation metrics than several other traditional and DL-based methods.
(This article belongs to the Special Issue Multi-Sensor Fusion in Medical Imaging, Diagnosis and Therapy)
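The abstract describes decomposing each image into base and detail features and fusing the two kinds of features separately. As a hypothetical illustration of that base/detail idea only (not CIRF itself, whose encoder uses learned ViT and CNN branches), a classic two-scale fusion rule can be sketched in NumPy: blur each image to obtain a base layer, take the residual as the detail layer, average the base layers, and keep the stronger detail coefficient at each pixel.

```python
import numpy as np

def blur(img, k=5):
    """Separable moving-average blur: a simple stand-in for a base-layer extractor."""
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, tmp)

def decompose(img, k=5):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = blur(img, k)
    return base, img - base

def fuse(img_a, img_b, k=5):
    """Fuse two co-registered images: average the base layers,
    keep the larger-magnitude detail coefficient at each pixel."""
    base_a, det_a = decompose(img_a, k)
    base_b, det_b = decompose(img_b, k)
    fused_base = 0.5 * (base_a + base_b)
    fused_detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return fused_base + fused_detail

rng = np.random.default_rng(0)
a = rng.random((64, 64))  # placeholder for, e.g., a PET slice
b = rng.random((64, 64))  # placeholder for a co-registered CT slice
f = fuse(a, b)
```

A sanity check of this rule: fusing an image with itself reconstructs the input, since base + detail recovers the original exactly.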
23 pages, 7592 KiB  
Article
Rehabilitation Assessment System for Stroke Patients Based on Fusion-Type Optoelectronic Plethysmography Device and Multi-Modality Fusion Model: Design and Validation
by Liangwen Yan, Ze Long, Jie Qian, Jianhua Lin, Sheng Quan Xie and Bo Sheng
Sensors 2024, 24(9), 2925; https://doi.org/10.3390/s24092925 - 3 May 2024
Viewed by 705
Abstract
This study proposes a portable and intelligent rehabilitation evaluation system for digital rehabilitation assessment of stroke patients. Specifically, a fusion device capable of emitting red, green, and infrared light simultaneously was designed and developed for photoplethysmography (PPG) acquisition. Leveraging the different penetration depths and tissue reflection characteristics of these wavelengths, the device provides richer and more comprehensive physiological information. Furthermore, a Multi-Channel Convolutional Neural Network–Long Short-Term Memory–Attention (MCNN-LSTM-Attention) evaluation model was developed. Built on multiple convolutional channels, this model performs feature extraction and fusion of the collected multi-modality data; it also incorporates an attention mechanism that dynamically adjusts the importance weights of the input information, thereby enhancing the accuracy of rehabilitation assessment. To validate the effectiveness of the proposed system, sixteen volunteers were recruited for clinical data collection and validation: eight stroke patients and eight healthy subjects. Experimental results demonstrated the system's promising performance (accuracy: 0.9125, precision: 0.8980, recall: 0.8970, F1 score: 0.8949, loss: 0.1261). This rehabilitation evaluation system holds potential for stroke diagnosis and identification, laying a solid foundation for wearable-based stroke risk assessment and stroke rehabilitation assistance.
(This article belongs to the Special Issue Multi-Sensor Fusion in Medical Imaging, Diagnosis and Therapy)
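The attention mechanism described in the abstract can be illustrated with a minimal, hypothetical NumPy sketch of attention pooling over a sequence of hidden states (the weight vector `w` and the 50×16 dimensions are placeholders, not the paper's actual architecture): each time step receives a score, the scores are softmax-normalized into importance weights, and the pooled feature is the weighted sum of the hidden states.

```python
import numpy as np

def attention_pool(H, w, b=0.0):
    """Attention pooling over hidden states H of shape (T, d):
    score each time step, softmax-normalize, return the weighted sum."""
    scores = np.tanh(H @ w + b)              # one scalar score per time step, shape (T,)
    scores = scores - scores.max()           # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # importance weights, sum to 1
    return alpha @ H, alpha                  # pooled feature (d,), weights (T,)

rng = np.random.default_rng(1)
H = rng.standard_normal((50, 16))  # e.g. 50 time steps of LSTM output, 16-dim each
w = rng.standard_normal(16)        # learnable scoring vector (random here)
pooled, alpha = attention_pool(H, w)
```

In a trained model, `w` would be learned jointly with the rest of the network, so time steps carrying more diagnostic signal receive larger weights.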