
Medical Signal and Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (20 November 2021) | Viewed by 75497

Special Issue Editor


Guest Editor
Department of Computer Engineering and Informatics, University of Patras, 26500 Rio-Patras, Greece
Interests: object detection; Hough transforms; X-ray imaging; array signal processing; eye; face recognition; feature extraction; gaze tracking; geophysical signal processing; geophysical techniques; image representation; optimisation; radiography; seismic waves; transforms; FIR filters; iterative methods; computational complexity; computer vision; digital filters; filtering and prediction theory; filtering theory; image registration; parameter estimation; seismology

Special Issue Information

Dear Colleagues,

Medical imaging is the technique of creating visual representations of the interior of the body for clinical analysis and medical intervention, as well as visual representations of the function of organs or tissues. In recent years, image and signal processing have become an essential part of medical practice, in both diagnostic and treatment procedures such as minimally invasive surgery.

Medical image processing presents multiple research challenges concerning the standalone or joint use of different image modalities, where traditional problems such as segmentation, registration, and inpainting must be examined while taking into account the unique characteristics of those images.

Recent advances in machine learning methods such as deep learning offer the potential to improve earlier image analysis and processing approaches in terms of both speed and accuracy, two critical aspects of medical image and signal processing. Within this new framework, medical imaging problems that can be cast as classification, regression, or image inpainting, to name a few, could be solved efficiently.

Moreover, recent work has shown that visual cortical activity measured by fMRI can be encoded almost perfectly by a pretrained deep neural network, providing a new tool for analyzing the internal contents of the brain. Although neural representations are noisy, high-dimensional, and contain incomplete information about image details, this promises that problems currently considered too complicated, such as reconstructing an image from brain activity, will also be solved more efficiently.

Prof. Dr. Emmanouil Z. Psarakis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Medical image processing
  • Medical image classification/regression problems
  • Medical image inpainting
  • Medical image fusion
  • Segmentation
  • Non-invasive/laparoscopic surgery image
  • Multiple modalities: X-ray, CT, MRI, fMRI, PET, ultrasound
  • 2D/3D nonrigid/elastic registration
  • Deep learning
  • Brain activity
  • Deep image reconstruction
  • EEG/ECG signal processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

16 pages, 2733 KiB  
Article
Attentive Octave Convolutional Capsule Network for Medical Image Classification
by Hong Zhang, Zhengzhen Li, Hao Zhao, Zan Li and Yanping Zhang
Appl. Sci. 2022, 12(5), 2634; https://doi.org/10.3390/app12052634 - 3 Mar 2022
Cited by 3 | Viewed by 2228
Abstract
Medical image classification plays an essential role in disease diagnosis and clinical treatment. Increasing research effort has been dedicated to the design of effective methods for medical image classification. As an effective framework, the capsule network (CapsNet) can realize translation equivariance, and much current research applies capsule networks in medical image analysis. In this paper, we propose an attentive octave convolutional capsule network (AOC-Caps) for medical image classification. In AOC-Caps, an AOC module is used to replace the traditional convolution operation. The purpose of the AOC module is to process and fuse the high- and low-frequency information in the input image simultaneously and to weigh the important parts automatically. Following the AOC module, a matrix capsule is used and the expectation maximization (EM) algorithm is applied to update the routing weights. The proposed AOC-Caps and comparative methods are tested on seven datasets from MedMNIST: PathMNIST, DermaMNIST, OCTMNIST, PneumoniaMNIST, OrganMNIST_Axial, OrganMNIST_Coronal, and OrganMNIST_Sagittal. In the experiments, baselines include traditional CNN models, automated machine learning (AutoML) methods, and related capsule network methods. The experimental results demonstrate that the proposed AOC-Caps achieves better performance on most of the seven medical image datasets.
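The core idea of an octave convolution, as described in the abstract, is to keep high- and low-frequency feature maps at different resolutions and let them exchange information. The toy sketch below (not the authors' implementation; all names and the 1x1-kernel simplification are ours) shows the four-path exchange with pointwise convolutions, average pooling, and nearest-neighbour upsampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def pool2(x):   # (C, H, W) -> (C, H/2, W/2): 2x2 average pooling
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def up2(x):     # nearest-neighbour 2x spatial upsampling
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def pointwise(w, x):  # 1x1 convolution = channel mixing: (Cout,Cin) @ (Cin,H,W)
    return np.tensordot(w, x, axes=([1], [0]))

def octave_conv(x_h, x_l, w_hh, w_hl, w_lh, w_ll):
    """One octave-convolution step with 1x1 kernels (toy version).
    The high-resolution map x_h and half-resolution map x_l exchange
    information through the cross paths (pool for H->L, upsample for L->H)."""
    y_h = pointwise(w_hh, x_h) + up2(pointwise(w_lh, x_l))
    y_l = pointwise(w_ll, x_l) + pointwise(w_hl, pool2(x_h))
    return y_h, y_l

# toy tensors: 4 high-frequency channels at 8x8, 4 low-frequency channels at 4x4
x_h = rng.standard_normal((4, 8, 8))
x_l = rng.standard_normal((4, 4, 4))
w_hh, w_hl, w_lh, w_ll = (rng.standard_normal((4, 4)) * 0.1 for _ in range(4))
y_h, y_l = octave_conv(x_h, x_l, w_hh, w_hl, w_lh, w_ll)
print(y_h.shape, y_l.shape)  # (4, 8, 8) (4, 4, 4)
```

Because the low-frequency branch is stored at half resolution, its convolutions cost roughly a quarter of the full-resolution equivalent, which is the efficiency argument behind octave convolutions.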
(This article belongs to the Special Issue Medical Signal and Image Processing)

15 pages, 6661 KiB  
Article
A Dual-Stage Vocabulary of Features (VoF)-Based Technique for COVID-19 Variants’ Classification
by Sonain Jamil and MuhibUr Rahman
Appl. Sci. 2021, 11(24), 11902; https://doi.org/10.3390/app112411902 - 14 Dec 2021
Cited by 10 | Viewed by 2410
Abstract
The novel coronavirus, COVID-19, is a very dangerous virus. Initially detected in China, it has since spread all over the world, causing many deaths. There are several variants of COVID-19, which have been categorized into two major groups: variants of concern and variants of interest. Variants of concern are more dangerous, and there is a need for a system that can detect and classify COVID-19 and its variants without physical contact with an infected person. In this paper, we propose a dual-stage deep learning framework to detect and classify COVID-19 and its variants, using CT scans and chest X-ray images. Initially, detection is performed by a convolutional neural network; then, spatial features are extracted with deep convolutional models, while handcrafted features are extracted with several handcrafted descriptors. The spatial and handcrafted features are combined into a single feature vector, called the vocabulary of features (VoF) because it contains both kinds of features. This feature vector is fed to the classifier to distinguish the different variants. The proposed model is evaluated in terms of accuracy, F1-score, sensitivity, specificity, Cohen's kappa, and classification error. The experimental results show that the proposed method outperforms the existing state-of-the-art methods.
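The VoF construction described above amounts to concatenating a learned embedding with handcrafted descriptors. A minimal sketch, with a hypothetical HOG-like orientation histogram standing in for the paper's handcrafted descriptors and a random vector standing in for the CNN embedding:

```python
import numpy as np

def hog_like_hist(img, bins=9):
    # toy handcrafted descriptor: magnitude-weighted gradient-orientation
    # histogram (HOG-like), normalized to sum to ~1
    gy, gx = np.gradient(img.astype(float))
    ang = np.arctan2(gy, gx).ravel()
    mag = np.hypot(gx, gy).ravel()
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def vocabulary_of_features(deep_feat, img):
    # concatenate spatial (deep) and handcrafted features into one vector,
    # which is then fed to a classifier
    return np.concatenate([deep_feat, hog_like_hist(img)])

img = np.random.default_rng(1).random((32, 32))      # stand-in for a CT slice
deep_feat = np.random.default_rng(2).random(128)     # stand-in for a CNN embedding
vof = vocabulary_of_features(deep_feat, img)
print(vof.shape)  # (137,)
```

Any off-the-shelf classifier (SVM, random forest, etc.) can then be trained on such vectors; the abstract does not specify which classifier the authors used.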

13 pages, 2664 KiB  
Article
Sleep State Classification Using Power Spectral Density and Residual Neural Network with Multichannel EEG Signals
by Md Junayed Hasan, Dongkoo Shon, Kichang Im, Hyun-Kyun Choi, Dae-Seung Yoo and Jong-Myon Kim
Appl. Sci. 2020, 10(21), 7639; https://doi.org/10.3390/app10217639 - 29 Oct 2020
Cited by 34 | Viewed by 4001
Abstract
This paper proposes a classification framework for automatic sleep stage detection in both male and female human subjects by analyzing the electroencephalogram (EEG) data of polysomnography (PSG) recorded for three regions of the human brain, i.e., the pre-frontal, central, and occipital lobes. Without any artifact removal, the residual neural network (ResNet) architecture is used to automatically learn the distinctive features of the different sleep stages from the power spectral density (PSD) of the raw EEG data. The residual block of the ResNet learns the intrinsic features of the different sleep stages while avoiding the vanishing gradient problem. The proposed approach is validated using the sleep dataset of the Dreams database, which comprises EEG signals from 20 healthy human subjects, 16 female and 4 male. Our experimental results demonstrate the effectiveness of the ResNet-based approach in identifying the different sleep stages in both female and male subjects compared to state-of-the-art methods, with classification accuracies of 87.8% and 83.7%, respectively.
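The PSD features that feed the ResNet can be computed with Welch's method: average the periodograms of overlapping windowed segments. A self-contained numpy sketch on a synthetic EEG-like signal (the 200 Hz sampling rate and 30 s epoch length are illustrative assumptions, not values from the paper):

```python
import numpy as np

def welch_psd(x, fs, nperseg):
    # Welch's method: average periodograms over 50%-overlapping,
    # Hann-windowed segments, scaled to a one-sided PSD
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] * win
            for i in range(0, len(x) - nperseg + 1, step)]
    spec = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    psd = spec / (fs * (win ** 2).sum())
    return np.fft.rfftfreq(nperseg, 1 / fs), psd

fs = 200.0                                   # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / fs)                 # one 30 s epoch
# synthetic epoch: 10 Hz alpha rhythm plus white noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch_psd(eeg, fs, nperseg=int(2 * fs))
peak = freqs[np.argmax(psd)]
print(peak)  # ~10.0 Hz: the alpha peak dominates the spectrum
```

In the paper's pipeline, such PSD vectors (one per channel and epoch) are what the ResNet consumes instead of raw time-domain samples.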

13 pages, 1839 KiB  
Article
Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network
by Milica M. Badža and Marko Č. Barjaktarović
Appl. Sci. 2020, 10(6), 1999; https://doi.org/10.3390/app10061999 - 15 Mar 2020
Cited by 346 | Viewed by 58771
Abstract
The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Improvements in technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested using an augmented image database. The best result among the 10-fold methods was obtained with record-wise cross-validation on the augmented data set, with an accuracy of 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
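The distinction between record-wise and subject-wise cross-validation matters here: record-wise splits can leak a patient's images into both the training and test folds, inflating accuracy, while subject-wise splits keep every record of a patient in the same fold. A minimal sketch of a subject-wise fold assignment (function name and the round-robin assignment are ours, not the paper's):

```python
import numpy as np

def subject_wise_folds(subject_ids, n_folds):
    """Subject-wise cross-validation: all records of a subject land in the
    same fold, so test subjects are never seen during training."""
    subjects = np.unique(subject_ids)
    fold_of = {s: i % n_folds for i, s in enumerate(subjects)}  # round-robin
    folds = [[] for _ in range(n_folds)]
    for idx, s in enumerate(subject_ids):
        folds[fold_of[s]].append(idx)
    return folds

# 6 records from 3 subjects, 3 folds -> each fold holds one subject's records
ids = np.array([0, 0, 1, 1, 2, 2])
folds = subject_wise_folds(ids, 3)
print(folds)  # [[0, 1], [2, 3], [4, 5]]
```

scikit-learn's `GroupKFold` implements the same idea for production use; this toy version just makes the grouping explicit.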

11 pages, 3581 KiB  
Article
Compressed-Sensing Magnetic Resonance Image Reconstruction Using an Iterative Convolutional Neural Network Approach
by Fumio Hashimoto, Kibo Ote, Takenori Oida, Atsushi Teramoto and Yasuomi Ouchi
Appl. Sci. 2020, 10(6), 1902; https://doi.org/10.3390/app10061902 - 11 Mar 2020
Cited by 16 | Viewed by 4066
Abstract
Convolutional neural networks (CNNs) demonstrate excellent performance when employed to reconstruct the images obtained by compressed-sensing magnetic resonance imaging (CS-MRI). Our study aimed to enhance image quality by developing a novel iterative reconstruction approach that utilizes image-based CNNs and k-space correction to preserve the original k-space data. In the proposed method, the CNNs represent a priori information about the image space. First, the CNNs are trained to map zero-filling images onto the corresponding fully sampled images; they then recover the zero-filled part of the k-space data. Subsequently, a k-space correction, which replaces the sampled regions with the original k-space data, is applied to preserve the measurements. These steps are repeated iteratively. The performance of the proposed method was validated using a T2-weighted brain-image dataset, and experiments were conducted with several sampling masks. Finally, the proposed method was compared with noniterative approaches to demonstrate its effectiveness. The aliasing artifacts in the images reconstructed with the proposed approach were reduced compared to those produced by other state-of-the-art techniques. In addition, quantitative results in terms of the peak signal-to-noise ratio and structural similarity index demonstrated the effectiveness of the proposed method. The proposed CS-MRI method enhanced MR image quality while supporting high-throughput examinations.
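The k-space correction step is the data-fidelity part of the loop: wherever k-space was actually sampled, the estimate's spectrum is overwritten with the measured values. A minimal numpy sketch of one such correction (toy 8x8 "image", random 40% sampling mask; the CNN refinement step is omitted and replaced by the zero-filling estimate):

```python
import numpy as np

rng = np.random.default_rng(0)
img_true = rng.random((8, 8))               # stand-in for a ground-truth slice
kspace = np.fft.fft2(img_true)
mask = rng.random((8, 8)) < 0.4             # sampling mask (~40% of k-space)
kspace_meas = kspace * mask                 # undersampled measurement

def kspace_correction(img_est, kspace_meas, mask):
    """Replace the sampled k-space locations of the current estimate with
    the original measured data, preserving data fidelity (one iteration).
    The complex image is kept; magnitude is taken only for display."""
    k_est = np.fft.fft2(img_est)
    k_corr = np.where(mask, kspace_meas, k_est)  # keep measured samples
    return np.fft.ifft2(k_corr)

img_est = np.fft.ifft2(kspace_meas)         # zero-filling reconstruction
img_corr = kspace_correction(img_est, kspace_meas, mask)
# the sampled locations are exactly preserved after the correction
print(np.allclose(np.fft.fft2(img_corr)[mask], kspace_meas[mask]))  # True
```

In the full method, a trained CNN would refine `img_corr` before the next correction, and the CNN-plus-correction pair would be iterated until convergence.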

15 pages, 2676 KiB  
Article
Extracting Retinal Anatomy and Pathological Structure Using Multiscale Segmentation
by Lei Geng, Hengyi Che, Zhitao Xiao and Yanbei Liu
Appl. Sci. 2019, 9(18), 3669; https://doi.org/10.3390/app9183669 - 4 Sep 2019
Cited by 2 | Viewed by 2508
Abstract
Fundus image segmentation has long been an important tool in the medical imaging field. Recent studies have validated that deep learning techniques can effectively segment retinal anatomy and determine pathological structure in retinal fundus photographs. However, several groups of image segmentation methods used in medical imaging provide only a single retinopathic feature (e.g., Roth spots or exudates). In this paper, we propose a more accurate and clinically oriented framework for the end-to-end segmentation of fundus images. We design a four-path multiscale input network structure that learns network features and captures overall characteristics. The structure is not limited to segmenting a single retinopathic feature: our method is suitable for segmenting exudates, Roth spots, blood vessels, and optic discs. Because the structure is generally applicable to many fundus models, we use our own dataset for training. In cooperation with hospitals and board-certified ophthalmologists, the proposed framework is validated on retinal images from large databases and improves diagnostic performance compared to state-of-the-art methods trained on smaller databases. The proposed framework detects blood vessels with an accuracy of 0.927, comparable to the exudate accuracy (0.939) and Roth spot accuracy (0.904), providing ophthalmologists with a practical diagnostic and robust analytical tool.
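The "four-path multiscale input" idea can be sketched as feeding the same image to the network at several resolutions so that coarse paths capture overall characteristics while the full-resolution path keeps fine vessel detail. A toy version (the scale factors and average-pooling downsampler are our assumptions, not the paper's exact design):

```python
import numpy as np

def downsample(img, k):
    # average-pool the image by factor k (assumes dimensions divisible by k)
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_inputs(img, scales=(1, 2, 4, 8)):
    """Build four input paths of a multiscale network (toy version):
    the same fundus image at progressively coarser resolutions."""
    return [img if s == 1 else downsample(img, s) for s in scales]

img = np.random.default_rng(0).random((64, 64))   # stand-in for a fundus image
paths = multiscale_inputs(img)
print([p.shape for p in paths])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```

Each path would then pass through its own convolutional branch before the features are fused, which is what lets a single network cover structures of very different sizes, from optic discs down to thin vessels.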
