Medical Imaging Using Deep Learning Intelligence Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (10 August 2023) | Viewed by 6578

Special Issue Editors


Guest Editor
Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia, Kelowna, BC V1V 1V7, Canada
Interests: computer vision; video and image processing; intelligent cameras; biomedical applications; new algorithms for emerging industrial applications and software design of video surveillance systems

Guest Editor
Computer Vision and Machine Learning Engineer at Verafin Inc., Kelowna, BC V1V 1V7, Canada
Interests: computer vision; deep learning; software middleware/framework; biomedical applications

Guest Editor
Machine Learning Researcher, Stanford University, Stanford, CA 94305, USA
Interests: developing and optimizing machine (and deep) learning solutions for the early diagnosis and prediction of brain diseases

Guest Editor
School of Computing Sciences and Computer Engineering, University of Southern Mississippi, MS 39406, USA
Interests: cybersecurity; security and privacy-preserving schemes in autonomous vehicles; vehicular ad hoc networks; Internet of things (IoT) applications; smart grid advanced metering infrastructure network

Special Issue Information

Dear Colleagues,

Biomedical imaging techniques, deep learning, and artificial intelligence offer substantial benefits for healthcare. Computing and networking technologies allow us to collect, measure, and analyze vast volumes of health-related data, creating tremendous opportunities for healthcare and for the medical imaging community. At the same time, these technologies bring new challenges and issues. Predictive modeling, especially for disease diagnosis and analysis, is considered one of the most promising directions for healthcare development.

Biomedical intelligent systems comprise the hardware, computational models, databases, and software that optimize the acquisition, transmission, processing, storage, retrieval, analysis, and interpretation of vast volumes of multi-modal health-related data. These systems are currently deployed in solutions that integrate a variety of technologies, including deep learning, computer vision, the Internet of Things, e-health, bioinformatics, and sensors, to achieve patient-centric healthcare. The aims of this Special Issue are (1) to present state-of-the-art research on multi-modal computing for biomedical intelligence systems and (2) to provide a forum for experts to disseminate their recent advances and views on future perspectives in the field. Researchers from academia and industry worldwide are encouraged to submit high-quality, unpublished original research articles and review articles in broad areas relevant to multi-modal computing theories and technologies for biomedical intelligence systems.

Dr. Mohamed Shehata
Dr. Agwad Eltantawy
Dr. Ramy Hussien
Dr. Ahmed Sherif
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • deep learning-based diagnostic analysis
  • intelligent interrogation systems
  • cloud-based patient information systems
  • medical image transformation

Published Papers (2 papers)


Research

17 pages, 1850 KiB  
Article
Improved UNet with Attention for Medical Image Segmentation
by Ahmed AL Qurri and Mohamed Almekkawy
Sensors 2023, 23(20), 8589; https://doi.org/10.3390/s23208589 - 20 Oct 2023
Cited by 9 | Viewed by 4094
Abstract
Medical image segmentation is crucial for medical image processing and the development of computer-aided diagnostics. In recent years, deep Convolutional Neural Networks (CNNs) have been widely adopted for medical image segmentation and have achieved significant success. UNet, which is based on CNNs, is the mainstream method used for medical image segmentation. However, its performance suffers owing to its inability to capture long-range dependencies. Transformers, initially designed for Natural Language Processing (NLP) and sequence-to-sequence applications, have demonstrated the ability to capture long-range dependencies, but their ability to acquire local information is limited. Hybrid architectures of CNNs and Transformers, such as TransUNet, have been proposed to benefit from the Transformer's long-range dependencies and the CNN's low-level details. Nevertheless, automatic medical image segmentation remains a challenging task due to factors such as blurred boundaries, low-contrast tissue environments, and, in the context of ultrasound, issues such as speckle noise and attenuation. In this paper, we propose a new model that combines the strengths of both CNNs and Transformers, with network architectural improvements designed to enrich the feature representation captured by the skip connections and the decoder. To this end, we devised a new attention module called Three-Level Attention (TLA). This module is composed of an Attention Gate (AG), channel attention, and a spatial normalization mechanism. The AG preserves structural information, whereas channel attention helps to model the interdependencies between channels. Spatial normalization employs the spatial coefficient of the Transformer to improve spatial attention, akin to TransNorm. To further improve the skip connections and reduce the semantic gap, the skip connections between the encoder and decoder were redesigned in a manner similar to the UNet++ dense connections. Moreover, deep supervision using a side-output channel was introduced, analogous to BASNet, which was originally used for saliency prediction. Two datasets from different modalities, a CT scan dataset and an ultrasound dataset, were used to evaluate the proposed UNet architecture. The experimental results showed that our model consistently improved the prediction performance of the UNet across the different datasets.
(This article belongs to the Special Issue Medical Imaging Using Deep Learning Intelligence Systems)
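For readers who want a concrete picture of the attention-gated skip connections described in this abstract, the sketch below shows how an Attention Gate and channel attention might be combined on a skip connection in PyTorch. It is a minimal illustration only, assuming encoder and decoder features at the same spatial resolution; the module names, channel choices, and the omission of the spatial-normalization branch are our own simplifications, not the authors' implementation.

```python
# Illustrative sketch only; not the paper's Three-Level Attention implementation.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Gates encoder features with a decoder signal (Attention U-Net style)."""
    def __init__(self, enc_ch: int, dec_ch: int, inter_ch: int):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # Assumes enc_feat and dec_feat share the same spatial resolution.
        attn = self.psi(torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
        return enc_feat * attn  # structural information weighted by the gate


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style reweighting to model channel interdependencies."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)


class SkipAttentionBlock(nn.Module):
    """Applies both mechanisms to a skip connection before it reaches the decoder."""
    def __init__(self, enc_ch: int, dec_ch: int):
        super().__init__()
        self.gate = AttentionGate(enc_ch, dec_ch, inter_ch=max(enc_ch // 2, 1))
        self.channel = ChannelAttention(enc_ch)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        return self.channel(self.gate(enc_feat, dec_feat))


# Example: 64-channel encoder features gated by 128-channel decoder features.
# block = SkipAttentionBlock(enc_ch=64, dec_ch=128)
# out = block(torch.randn(1, 64, 32, 32), torch.randn(1, 128, 32, 32))
```

In a UNet-style decoder, the output of such a block would be concatenated with the upsampled decoder features before the next convolution stage; the paper's spatial-normalization branch and dense UNet++-style skip connections are not modelled here.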

17 pages, 615 KiB  
Article
A Federated Learning Model Based on Hardware Acceleration for the Early Detection of Alzheimer’s Disease
by Kasem Khalil, Mohammad Mahbubur Rahman Khan Mamun, Ahmed Sherif, Mohamed Said Elsersy, Ahmad Abdel-Aliem Imam, Mohamed Mahmoud and Maazen Alsabaan
Sensors 2023, 23(19), 8272; https://doi.org/10.3390/s23198272 - 6 Oct 2023
Cited by 2 | Viewed by 1695
Abstract
Alzheimer's disease (AD) is a progressive illness with a slow onset that lasts many years; its consequences are devastating to the patient and the patient's family. If detected early, the disease's impact and prognosis can be altered significantly. Blood biosamples are often employed in simple medical testing since they are cost-effective and easy to collect and analyze. This research provides a diagnostic model for Alzheimer's disease based on federated learning (FL) and hardware acceleration using blood biosamples. We used blood biosample datasets provided by the ADNI website to compare and evaluate the performance of our models. FL was used to train a shared model without sharing local devices' raw data with a central server, thereby preserving privacy. We developed a hardware-acceleration approach for building our FL model so that we could speed up the training and testing procedures. The VHDL hardware description language and an Altera 10 GX FPGA are utilized to construct the hardware-accelerator approach. The simulation results reveal that the proposed methods achieve an accuracy and sensitivity for early detection of 89% and 87%, respectively, while requiring less training time than other state-of-the-art algorithms. The proposed algorithms have a power consumption ranging from 35 to 39 mW, which qualifies them for use in resource-limited devices. Furthermore, the results show that the proposed method has a lower inference latency (61 ms) than existing methods while using fewer resources.
(This article belongs to the Special Issue Medical Imaging Using Deep Learning Intelligence Systems)
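As a rough illustration of the federated-learning workflow this abstract describes, the sketch below shows a FedAvg-style aggregation step in PyTorch, in which only model parameters, never raw blood-biosample records, reach the server. The function and variable names are assumptions made for illustration, the toy classifier is hypothetical, and the paper's FPGA hardware-acceleration layer is not modelled.

```python
# Illustrative sketch only; not the authors' implementation.
from typing import Dict, List
import torch
import torch.nn as nn


def federated_average(client_states: List[Dict[str, torch.Tensor]],
                      client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Sample-size-weighted average of client model parameters (FedAvg-style)."""
    total = sum(client_sizes)
    return {
        key: sum(state[key].float() * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }


if __name__ == "__main__":
    # Toy demonstration with a hypothetical small classifier: two clients keep
    # their data and train locally; the server only ever sees their parameters.
    def make_model() -> nn.Module:
        return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    client_models = [make_model(), make_model()]        # trained locally in practice
    merged = federated_average([m.state_dict() for m in client_models],
                               client_sizes=[120, 80])  # e.g. samples per client
    server_model = make_model()
    server_model.load_state_dict(merged)                # aggregated global model
```

Weighting each client's parameters by its sample count keeps the aggregate closer to what centralized training on the pooled data would produce, while the raw records never leave the clients.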
