
Computer Vision and Machine Learning for Medical Imaging System

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 45165

Special Issue Editors


Guest Editor
1. Ritsumeikan University, Biwako-Kusatsu, Kusatsu, Shiga, Japan;
2. Zhejiang Lab, Hangzhou, Zhejiang, China
Interests: medical image analysis; computer vision

Guest Editor
Zhejiang University, Hangzhou, Zhejiang, China
Interests: artificial intelligence; medical image analysis

Special Issue Information

Dear Colleagues,

Medical imaging, including computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound imaging, has made remarkable progress. Multi-detector row CT (MDCT) scanners can acquire whole-body CT images in about 20 seconds with approximately 0.5 mm resolution, and open-configuration magnetic resonance scanners (Open-MR) can be used as image navigation systems for minimally invasive treatments. These medical imaging systems have opened up new possibilities for radiologists and surgeons. Computer-assisted diagnosis (CAD) and computer-assisted surgery (CAS) have become major research subjects. Computer vision and machine learning techniques, such as medical image classification, detection, and segmentation, are fundamental and essential techniques for CAD and CAS. Recently, deep-learning-based techniques have been widely used in the field of medical image analysis and have achieved state-of-the-art results. The aim of this Special Issue is to present new ideas, original trend analyses, originally developed software, new methods, and other research results in computer vision and machine learning for medical imaging systems. Both researchers and practitioners are welcome to submit their papers.

Prof. Dr. Yen-wei Chen
Prof. Dr. Lanfen Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image classification
  • medical image detection
  • medical image segmentation
  • medical image registration
  • medical image enhancement
  • medical super resolution
  • computer-assisted diagnosis (CAD)
  • computer-assisted surgery (CAS)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

24 pages, 6738 KiB  
Article
A Convolutional Neural Network Combining Discriminative Dictionary Learning and Sequence Tracking for Left Ventricular Detection
by Xuchu Wang, Fusheng Wang and Yanmin Niu
Sensors 2021, 21(11), 3693; https://doi.org/10.3390/s21113693 - 26 May 2021
Cited by 5 | Viewed by 2756
Abstract
Cardiac MRI left ventricular (LV) detection is frequently employed to assist cardiac registration or segmentation in the computer-aided diagnosis of heart diseases. Focusing on the challenging problems in LV detection, such as the large span and varying size of LV areas in MRI, as well as the heterogeneous myocardial and blood-pool parts within LV areas, a convolutional neural network (CNN) detection method combining discriminative dictionary learning and sequence tracking is proposed in this paper. To efficiently represent the different sub-objects in the LV area, the method deploys a discriminative dictionary to classify superpixel-oversegmented regions; the target LV region is then constructed by label merging, and multi-scale adaptive anchors are generated in the target region to handle the varying sizes. Combined with non-differential anchors in a region proposal network, the left ventricle is localized by a CNN-based regression and classification strategy. To address the slow classification speed of the discriminative dictionary, a fast module that generates left ventricular scale-adaptive anchors for the same individual based on sequence tracking is also proposed. The method and its variants were tested on the heart atlas dataset. Experimental results verified the effectiveness of the proposed method: it obtained 92.95% on the AP50 metric, the most competitive result compared with typical related methods. The combination of discriminative dictionary learning and scale-adaptive anchors improves the adaptability of the proposed algorithm to varying left ventricular areas. This study would be beneficial for cardiac image processing tasks such as region-of-interest cropping and left ventricle volume measurement. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
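To illustrate the anchor-generation step described in the abstract, the following minimal Python sketch builds multi-scale, multi-aspect-ratio anchors centred on a candidate LV region; the adaptive_anchors helper and the specific scale and ratio values are illustrative assumptions, not the authors' implementation.

# A minimal sketch (not the paper's code) of multi-scale adaptive anchor generation
# around a candidate left-ventricle region, before RPN-style regression/classification.
import numpy as np

def adaptive_anchors(region_box, scales=(0.75, 1.0, 1.5), ratios=(0.8, 1.0, 1.25)):
    """region_box: (x1, y1, x2, y2) of the merged superpixel region (hypothetical input)."""
    x1, y1, x2, y2 = region_box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = x2 - x1, y2 - y1
    anchors = []
    for s in scales:                      # overall size relative to the region
        for r in ratios:                  # aspect-ratio jitter
            aw, ah = w * s * np.sqrt(r), h * s / np.sqrt(r)
            anchors.append([cx - aw / 2, cy - ah / 2, cx + aw / 2, cy + ah / 2])
    return np.array(anchors)

print(adaptive_anchors((40, 50, 120, 140)).shape)  # (9, 4): 3 scales x 3 ratios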

17 pages, 24079 KiB  
Article
Exploiting Global Structure Information to Improve Medical Image Segmentation
by Jaemoon Hwang and Sangheum Hwang
Sensors 2021, 21(9), 3249; https://doi.org/10.3390/s21093249 - 7 May 2021
Cited by 4 | Viewed by 2627
Abstract
In this paper, we propose a method to enhance the performance of segmentation models for medical images. The method is based on convolutional neural networks that learn the global structure information, which corresponds to anatomical structures in medical images. Specifically, the proposed method is designed to learn the global boundary structures via an autoencoder and constrain a segmentation network through a loss function. In this manner, the segmentation model performs the prediction in the learned anatomical feature space. Unlike previous studies that considered anatomical priors by using a pre-trained autoencoder to train segmentation networks, we propose a single-stage approach in which the segmentation network and autoencoder are jointly learned. To verify the effectiveness of the proposed method, the segmentation performance is evaluated in terms of both the overlap and distance metrics on the lung area and spinal cord segmentation tasks. The experimental results demonstrate that the proposed method can enhance not only the segmentation performance but also the robustness against domain shifts. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
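As a rough illustration of the single-stage idea of jointly training a segmentation network and a shape autoencoder, the Python sketch below combines a pixel-wise segmentation loss, an autoencoder reconstruction loss on ground-truth masks, and a latent-space consistency term; the module and weight names (ShapeAutoencoder, joint_loss, lambda_shape) are assumptions for illustration, not the authors' code.

# A schematic joint objective: the autoencoder learns an anatomical-shape space from
# ground-truth masks while the segmentation prediction is pulled toward that space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid())
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def joint_loss(seg_logits, gt_mask, shape_ae, lambda_shape=0.5):
    pred_mask = torch.sigmoid(seg_logits)
    seg_term = F.binary_cross_entropy(pred_mask, gt_mask)   # pixel-wise segmentation term
    recon_gt, z_gt = shape_ae(gt_mask)
    ae_term = F.mse_loss(recon_gt, gt_mask)                 # autoencoder learns the anatomy
    _, z_pred = shape_ae(pred_mask)
    shape_term = F.mse_loss(z_pred, z_gt.detach())          # prediction stays in the shape space
    return seg_term + ae_term + lambda_shape * shape_term

seg_logits = torch.randn(2, 1, 64, 64, requires_grad=True)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(joint_loss(seg_logits, gt, ShapeAutoencoder()).item())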

19 pages, 3533 KiB  
Article
Semantic Cardiac Segmentation in Chest CT Images Using K-Means Clustering and the Mathematical Morphology Method
by Beanbonyka Rim, Sungjin Lee, Ahyoung Lee, Hyo-Wook Gil and Min Hong
Sensors 2021, 21(8), 2675; https://doi.org/10.3390/s21082675 - 10 Apr 2021
Cited by 11 | Viewed by 3772
Abstract
Whole-heart segmentation in chest CT images is important for identifying functional abnormalities that occur in cardiovascular diseases, such as coronary artery disease (CAD). However, manual efforts are time-consuming and labor-intensive. Additionally, labeling the ground truth for cardiac segmentation requires extensive manual annotation of images by a radiologist. Due to the difficulty of obtaining annotated data and the expertise required of an annotator, an unsupervised approach is proposed. In this paper, we introduce a semantic whole-heart segmentation method that combines K-Means clustering, as the threshold criterion of a mean-thresholding method, with mathematical morphology as a threshold-shifting enhancer. The experiment was conducted on 500 subjects in two cases: (1) 56 slices per volume containing full heart scans, and (2) 30 slices per volume containing roughly the upper half of the heart, before the liver appears. In both cases, the K-Means method yielded an average silhouette score of 0.4130. The experiment on 56 slices per volume achieved an overall accuracy (OA) and mean intersection over union (mIoU) of 34.90% and 41.26%, respectively, while the first 30 slices per volume achieved an OA and mIoU of 55.10% and 71.46%, respectively. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
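The two ingredients named in the abstract, a K-Means-derived threshold followed by mathematical morphology, can be sketched in a few lines of Python; the cluster count, structuring-element size, and the random slice used for the demo are assumptions for illustration only, not the published pipeline.

# Unofficial sketch: K-Means on slice intensities supplies a data-driven threshold,
# and morphological opening/closing cleans up the resulting binary mask.
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

def kmeans_threshold_segment(ct_slice):
    # cluster the intensities into two groups and threshold at the centre midpoint
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ct_slice.reshape(-1, 1))
    threshold = km.cluster_centers_.mean()
    mask = ct_slice > threshold
    # morphology as a threshold-shift enhancer: remove speckle, then fill small gaps
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    return mask

demo = np.random.rand(64, 64) * 400 - 100   # stand-in for a CT slice in Hounsfield units
print(kmeans_threshold_segment(demo).sum(), "foreground pixels")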

16 pages, 2148 KiB  
Article
Discriminative Learning Approach Based on Flexible Mixture Model for Medical Data Categorization and Recognition
by Fahd Alharithi, Ahmed Almulihi, Sami Bourouis, Roobaea Alroobaea and Nizar Bouguila
Sensors 2021, 21(7), 2450; https://doi.org/10.3390/s21072450 - 2 Apr 2021
Cited by 18 | Viewed by 2502
Abstract
In this paper, we propose a novel hybrid discriminative learning approach based on a shifted-scaled Dirichlet mixture model (SSDMM) and Support Vector Machines (SVMs) to address some challenging problems in medical data categorization and recognition. The main goal is to accurately capture the intrinsic nature of biomedical images by considering the desirable properties of both generative and discriminative models. To achieve this objective, we propose to derive new data-based SVM kernels generated from the developed SSDMM mixture model. The proposed approach includes the following steps: the extraction of robust local descriptors, the learning of the developed mixture model via the expectation–maximization (EM) algorithm, and finally the building of three SVM kernels for data categorization and classification. The potential of the implemented framework is illustrated through two challenging problems: the categorization of retinal images into normal or diabetic cases and the recognition of lung diseases in chest X-ray (CXR) images. The obtained results demonstrate the merits of our hybrid approach compared with other methods. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
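The generative/discriminative hybrid can be illustrated with a toy Python sketch in which a mixture model fitted by EM provides an image-level embedding that then drives an SVM; here a Gaussian mixture from scikit-learn stands in for the shifted-scaled Dirichlet mixture, so the snippet is an assumption-laden analogy rather than the paper's SSDMM kernels.

# Toy analogy: mixture model (EM) -> fixed-length image signature -> SVM classifier.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# each "image" is a bag of local descriptors (random stand-ins here)
images = [rng.normal(size=(50, 8)) + label for label in (0, 0, 1, 1, 0, 1)]
labels = np.array([0, 0, 1, 1, 0, 1])

gmm = GaussianMixture(n_components=4, random_state=0).fit(np.vstack(images))  # EM step

def embed(descriptors):
    # image-level signature: average component responsibility over local descriptors
    return gmm.predict_proba(descriptors).mean(axis=0)

X = np.array([embed(im) for im in images])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)  # kernel built on the generative embedding
print(clf.predict(X))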

23 pages, 5688 KiB  
Article
A Classification Method for the Cellular Images Based on Active Learning and Cross-Modal Transfer Learning
by Caleb Vununu, Suk-Hwan Lee and Ki-Ryong Kwon
Sensors 2021, 21(4), 1469; https://doi.org/10.3390/s21041469 - 20 Feb 2021
Cited by 9 | Viewed by 2795
Abstract
In computer-aided diagnosis (CAD) systems, the automatic classification of the different types of human epithelial type 2 (HEp-2) cells represents one of the critical steps in the diagnosis of autoimmune diseases. Most methods tackle this task using the supervised learning paradigm. However, the need for thousands of manually annotated examples is a serious concern for state-of-the-art HEp-2 cell classification methods. In this work, we present a method that uses active learning to minimize the need to annotate the majority of the examples in the dataset. For this purpose, we use cross-modal transfer learning coupled with parallel deep residual networks. First, the parallel networks, which simultaneously take different wavelet coefficients as inputs, are trained in a fully supervised way on a very small, already annotated dataset. The trained networks are then applied to the target dataset, which is considerably larger than the first one, using active learning techniques to select only the images that genuinely need to be annotated. The obtained results show that active learning, when combined with an efficient transfer learning technique, can achieve highly satisfactory discrimination performance with only a few annotated examples in hand. This will help in building CAD systems by simplifying the burdensome task of labeling images while maintaining performance similar to that of state-of-the-art methods. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
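The core active-learning step, picking the unlabeled images the current model is least certain about, can be sketched as entropy-based selection in Python; the model, budget, and entropy criterion below are generic placeholders and do not reproduce the paper's parallel wavelet-input residual networks.

# Generic uncertainty sampling: score the unlabeled pool, return the most uncertain indices.
import torch

@torch.no_grad()
def select_for_annotation(model, unlabeled_images, budget=16):
    model.eval()
    probs = torch.softmax(model(unlabeled_images), dim=1)        # (N, num_classes)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # predictive entropy per image
    k = min(budget, entropy.numel())
    return torch.topk(entropy, k=k).indices                      # send these to the annotator

# hypothetical usage with a tiny stand-in classifier
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 6))
pool = torch.randn(100, 3, 32, 32)
print(select_for_annotation(model, pool, budget=8))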

16 pages, 2893 KiB  
Article
A-DenseUNet: Adaptive Densely Connected UNet for Polyp Segmentation in Colonoscopy Images with Atrous Convolution
by Sirojbek Safarov and Taeg Keun Whangbo
Sensors 2021, 21(4), 1441; https://doi.org/10.3390/s21041441 - 19 Feb 2021
Cited by 59 | Viewed by 7830
Abstract
Colon carcinoma is one of the leading causes of cancer-related death in both men and women. Automatic colorectal polyp segmentation and detection in colonoscopy videos helps endoscopists identify colorectal disease more easily, making it a promising way to prevent colon cancer. In this study, we developed a fully automated pixel-wise polyp segmentation model named A-DenseUNet. The proposed architecture adapts to different datasets, adjusting for the unknown depth of the network by sharing multiscale encoding information with the different levels of the decoder side. We also used multiple dilated convolutions with various atrous rates to observe a large field of view without increasing the computational cost and to prevent the loss of spatial information that dimensionality reduction would cause. We utilized an attention mechanism to remove noise and irrelevant information, leading to a comprehensive re-establishment of contextual features. Our experiments demonstrated that the proposed architecture achieved significant segmentation results on public datasets. A-DenseUNet achieved a 90% Dice coefficient score on the Kvasir-SEG dataset and a 91% Dice coefficient score on the CVC-612 dataset, both of which were higher than the scores of other deep learning models such as UNet++, ResUNet, U-Net, PraNet, and ResUNet++ for segmenting polyps in colonoscopy images. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
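A minimal example of the dilated (atrous) convolutions mentioned in the abstract is given below: parallel branches with different dilation rates enlarge the receptive field without further downsampling. The channel counts and rates are illustrative assumptions, not the A-DenseUNet configuration.

# Parallel dilated convolutions with matched padding keep the spatial size while
# widening the field of view; a 1x1 convolution fuses the branches.
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # run the same input through each dilated branch and fuse the results
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 128, 128)
print(AtrousBlock()(x).shape)   # torch.Size([1, 64, 128, 128])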

20 pages, 10446 KiB  
Article
A Machine Learning Approach to Diagnosing Lung and Colon Cancer Using a Deep Learning-Based Classification Framework
by Mehedi Masud, Niloy Sikder, Abdullah-Al Nahid, Anupam Kumar Bairagi and Mohammed A. AlZain
Sensors 2021, 21(3), 748; https://doi.org/10.3390/s21030748 - 22 Jan 2021
Cited by 282 | Viewed by 15125
Abstract
The field of medicine and healthcare has attained revolutionary advancements in the last forty years. Within this period, the actual causes of numerous diseases were unveiled, novel diagnostic methods were designed, and new medicines were developed. Even after all these achievements, diseases like cancer continue to haunt us, since we are still vulnerable to them. Cancer is the second leading cause of death globally; about one in every six people dies from it. Among the many types of cancer, the lung and colon variants are the most common and deadliest. Together, they account for more than 25% of all cancer cases. However, identifying the disease at an early stage significantly improves the chances of survival. Cancer diagnosis can be automated using the potential of Artificial Intelligence (AI), which allows us to assess more cases in less time and at lower cost. With the help of modern Deep Learning (DL) and Digital Image Processing (DIP) techniques, this paper describes a classification framework to differentiate among five types of lung and colon tissues (two benign and three malignant) by analyzing their histopathological images. The acquired results show that the proposed framework can identify cancer tissues with up to 96.33% accuracy. Implementation of this model will help medical professionals to develop an automatic and reliable system capable of identifying various types of lung and colon cancers. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
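As a generic baseline for the five-class tissue classification task described above, the Python sketch below fine-tunes a small convolutional backbone with a cross-entropy loss; the backbone choice, hyper-parameters, and random stand-in data are assumptions for illustration and are not the framework evaluated in the paper.

# A compact transfer-learning-style classifier for five tissue classes
# (two benign, three malignant), trained with cross-entropy.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5
backbone = models.resnet18(weights=None)   # pretrained weights could be loaded here instead
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)       # stand-in for histopathology patches
targets = torch.tensor([0, 2, 4, 1])
loss = criterion(backbone(images), targets)
loss.backward()
optimizer.step()
print(float(loss))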

15 pages, 1458 KiB  
Article
Computer-Aided Diagnosis of Alzheimer’s Disease through Weak Supervision Deep Learning Framework with Attention Mechanism
by Shuang Liang and Yu Gu
Sensors 2021, 21(1), 220; https://doi.org/10.3390/s21010220 - 31 Dec 2020
Cited by 51 | Viewed by 5323
Abstract
Alzheimer’s disease (AD) is the most prevalent neurodegenerative disease causing dementia and poses significant health risks to middle-aged and elderly people. Brain magnetic resonance imaging (MRI) is the most widely used diagnostic method for AD. However, it is challenging to collect sufficient brain imaging data with high-quality annotations. Weakly supervised learning (WSL) is a machine learning technique aimed at learning effective feature representations from limited or low-quality annotations. In this paper, we propose a WSL-based deep learning (DL) framework (ADGNET) consisting of a backbone network with an attention mechanism and a task network for simultaneous image classification and image reconstruction, to identify and classify AD using limited annotations. The ADGNET achieves excellent performance on six evaluation metrics (Kappa, sensitivity, specificity, precision, accuracy, F1-score) on two brain MRI datasets (2D MRI and 3D MRI data) using fine-tuning with only 20% of the labels from both datasets. The ADGNET has an F1-score of 99.61% and a sensitivity of 99.69%, outperforming two state-of-the-art models (ResNeXt WSL and SimCLR). The proposed method represents a potential WSL-based computer-aided diagnosis method for AD in clinical practice. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
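The two-headed objective outlined in the abstract, classification plus image reconstruction sharing an attention-weighted backbone, can be sketched in Python as follows; the ADGNetSketch module, its layer sizes, and the alpha weight are hypothetical and are not the published ADGNET.

# Schematic weak-supervision setup: every image contributes a reconstruction term,
# and labelled images additionally contribute a classification term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADGNetSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.attention = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())   # spatial attention map
        self.classifier = nn.Linear(32, num_classes)
        self.decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                     nn.ConvTranspose2d(16, 1, 2, stride=2))

    def forward(self, x):
        feats = self.encoder(x)
        feats = feats * self.attention(feats)             # re-weight features before both heads
        logits = self.classifier(feats.mean(dim=(2, 3)))  # global-average-pooled classification
        recon = self.decoder(feats)
        return logits, recon

def weakly_supervised_loss(logits, recon, x, label=None, alpha=0.5):
    recon_term = F.mse_loss(recon, x)                     # reconstruction on every image
    if label is None:                                     # unlabelled image: reconstruction only
        return alpha * recon_term
    return F.cross_entropy(logits, label) + alpha * recon_term

x = torch.randn(2, 1, 64, 64)
logits, recon = ADGNetSketch()(x)
print(weakly_supervised_loss(logits, recon, x, torch.tensor([0, 1])).item())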