Deep Learning, Reconfigurable Computing and Machine Learning in Healthcare

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 38861

Special Issue Editors


Prof. Dr. Rui Pedro Lopes
Guest Editor
Research Center in Digitalization and Industrial Robotics, Instituto Politécnico de Bragança, 5300-253 Bragança, Portugal
Interests: deep learning; machine learning; distributed systems; natural language processing

Prof. Dr. Byung-Woo Hong
Co-Guest Editor
Computer Science Department, Chung-Ang University, Seoul 156-756, Republic of Korea
Interests: computer vision; machine learning; medical image analysis; image processing; deep learning; optimization

Special Issue Information

Dear Colleagues,

The machine learning research community has been evolving constantly since artificial intelligence was founded as an academic discipline in the 1950s. The field has been through several ups, including the development of information theory in the 1960s, the comeback of neural networks in the 1990s, and the rise of deep learning in the past decade, as well as downs, such as the barrier of limited processing and storage capacity in the 1970s and the disappointing results and collapse of dedicated hardware vendors in the early 2000s. The latest developments and research on deep learning, dedicated hardware, big data, and high-speed networks have been achieving astonishing results. Machine learning, and in particular its deep learning subfield, has been receiving extraordinary attention in both the scientific and professional communities. It is being applied in many areas of human knowledge, such as medicine, economics, education, and manufacturing. The combination of large datasets with powerful computer vision, pattern recognition, and text analysis algorithms enables us to develop practical solutions in a variety of intelligent software and applications. These successes only seem to be accelerating, with new algorithms, faster hardware, and carefully annotated datasets appearing every day.

The aim of this Special Issue is to provide researchers and professionals with high-quality research papers addressing the latest advances in the following domains: machine learning, deep learning, dedicated accelerator hardware, and reconfigurable computing.

Prof. Dr. Rui Pedro Lopes
Prof. Dr. Byung-Woo Hong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neural networks
  • deep learning
  • convolutional neural networks
  • reconfigurable computing
  • near-data processing
  • parallelization

Published Papers (9 papers)

Research

16 pages, 1803 KiB  
Article
Atrial Fibrillation Detection Based on a Residual CNN Using BCG Signals
by Qiushi Su, Yanqi Huang, Xiaomei Wu, Biyong Zhang, Peilin Lu and Tan Lyu
Electronics 2022, 11(18), 2974; https://doi.org/10.3390/electronics11182974 - 19 Sep 2022
Cited by 2 | Viewed by 2007
Abstract
Atrial fibrillation (AF) is the most common arrhythmia and can seriously threaten patient health. Research on AF detection carries important clinical significance. This manuscript proposes an AF detection method based on ballistocardiogram (BCG) signals collected by a noncontact sensor. We first constructed a BCG signal dataset consisting of 28,214 ten-second nonoverlapping segments collected from 45 inpatients during overnight sleep, including 9438 for AF, 9570 for sinus rhythm (SR), and 9206 for motion artifacts (MA). Then, we designed a residual convolutional neural network (CNN) for AF detection. The network has four modules, namely a downsampling convolutional module, a local feature learning module, a global feature learning module, and a classification module, and it extracts local and global features from BCG signals for AF detection. The model achieved precision, sensitivity, specificity, F1 score, and accuracy of 96.8%, 93.7%, 98.4%, 95.2%, and 96.8%, respectively. The results indicate that the AF detection method proposed in this manuscript could serve as a basis for long-term screening of AF at home based on BCG signal acquisition.
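As a concrete illustration of the four-module design outlined in the abstract, the sketch below shows a minimal residual 1-D CNN for ten-second BCG segments in PyTorch. The layer widths, kernel sizes, and the assumed 140 Hz sampling rate are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of a four-module residual 1-D CNN for 10 s BCG segments.
# All layer sizes, kernel widths, and the 140 Hz sampling rate are assumptions.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 7, padding=3), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 7, padding=3), nn.BatchNorm1d(ch))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # residual connection

class BCGAFNet(nn.Module):
    def __init__(self, n_classes=3):        # AF, SR, MA
        super().__init__()
        self.down = nn.Sequential(           # downsampling convolutional module
            nn.Conv1d(1, 32, 15, stride=4, padding=7),
            nn.BatchNorm1d(32), nn.ReLU())
        self.local_feats = nn.Sequential(    # local feature learning module
            ResBlock1d(32), ResBlock1d(32))
        self.global_feats = nn.AdaptiveAvgPool1d(1)  # global context via pooling
        self.classify = nn.Linear(32, n_classes)     # classification module

    def forward(self, x):                    # x: (batch, 1, samples)
        h = self.local_feats(self.down(x))
        return self.classify(self.global_feats(h).squeeze(-1))

logits = BCGAFNet()(torch.randn(8, 1, 1400))  # 8 segments of 10 s at 140 Hz
```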

17 pages, 3008 KiB  
Article
Accurate ECG Classification Based on Spiking Neural Network and Attentional Mechanism for Real-Time Implementation on Personal Portable Devices
by Yuxuan Xing, Lei Zhang, Zhixian Hou, Xiaoran Li, Yueting Shi, Yiyang Yuan, Feng Zhang, Sen Liang, Zhenzhong Li and Liang Yan
Electronics 2022, 11(12), 1889; https://doi.org/10.3390/electronics11121889 - 16 Jun 2022
Cited by 6 | Viewed by 2769
Abstract
Electrocardiogram (ECG) heartbeat classification plays a vital role in early diagnosis and effective treatment, which provide opportunities for earlier prevention and intervention. In an effort to continuously monitor and detect abnormalities in patients' ECG signals on portable devices, this paper presents a lightweight ECG heartbeat classification method based on a spiking neural network (SNN): a relatively shallow SNN model integrated with a channel-wise attentional module. We further explore the best-optimized architecture, which leverages the SNN's potential to process the classification task at low power together with the attention mechanism's ability to capture prominent features concerning the time, morphology, and multi-channel representations of the ECG signal. Results show that our model achieves an overall classification accuracy of 98.26%, sensitivity of 94.75%, and F1 score of 89.09% on the MIT-BIH database, with energy consumption of 346.33 μJ per beat and a runtime of 1.37 ms. Moreover, we have conducted multiple experiments comparing our FPGA implementation against current state-of-the-art methods using their assessment strategies. Our work achieves overall performance comparable to the literature in terms of classification accuracy, energy consumption, and real-time capability.
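The sketch below illustrates the two ingredients named in the abstract, a spiking (leaky integrate-and-fire) layer and channel-wise attention, in plain PyTorch. The leak factor, threshold, and shapes are assumptions, and a real SNN would need surrogate gradients to train through the spike nonlinearity.

```python
# Illustrative sketch: a leaky integrate-and-fire (LIF) layer plus
# squeeze-and-excitation-style channel attention. Beta, threshold, and all
# shapes are assumptions; training would require surrogate gradients, since
# the spike step below is not differentiable.
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Leaky integrate-and-fire neurons unrolled over T time steps."""
    def __init__(self, beta=0.9, threshold=1.0):
        super().__init__()
        self.beta, self.threshold = beta, threshold

    def forward(self, currents):               # currents: (T, batch, channels)
        mem, spikes = torch.zeros_like(currents[0]), []
        for i_t in currents:
            mem = self.beta * mem + i_t         # leaky integration
            s = (mem >= self.threshold).float() # emit a spike at threshold
            mem = mem - s * self.threshold      # soft reset after spiking
            spikes.append(s)
        return torch.stack(spikes)

class ChannelAttention(nn.Module):
    """Reweights channels by their average spike rate (SE-style gating)."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                  nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, spikes):                  # spikes: (T, batch, channels)
        w = self.gate(spikes.mean(dim=0))       # rate-coded channel statistics
        return spikes * w                       # broadcast over time steps

attended = ChannelAttention(64)(LIF()(torch.randn(20, 8, 64)))
```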

10 pages, 1319 KiB  
Article
Segmentation of Echocardiography Based on Deep Learning Model
by Helin Huang, Zhenyi Ge, Hairui Wang, Jing Wu, Chunqiang Hu, Nan Li, Xiaomei Wu and Cuizhen Pan
Electronics 2022, 11(11), 1714; https://doi.org/10.3390/electronics11111714 - 27 May 2022
Cited by 3 | Viewed by 2123
Abstract
To enable the classification of mitral regurgitation, a deep learning network, VDS-UNET, was designed to automatically segment the critical regions of echocardiography in three views: apical two-chamber, apical three-chamber, and apical four-chamber. First, an expert-labeled dataset of 153 echocardiographic videos and 2183 images from 49 subjects was constructed. Then, the convolutional layers of the VGG16 network were used to replace the contraction path of the original UNet to extract image features, and deep supervision was added to the expansion path to achieve segmentation of the left atrium (LA), left ventricle (LV), and mitral valve (MV). The results showed that the Dice coefficients of LA, LV, and MV were 0.935, 0.915, and 0.757, respectively. The proposed deep learning network can achieve simultaneous and accurate segmentation of LA, LV, and MV in multi-view echocardiography, laying a foundation for the quantitative measurement of clinical parameters related to mitral regurgitation.
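A minimal sketch of the stated idea, a UNet whose contraction path is replaced by VGG16 convolutional blocks and whose expansion path carries an auxiliary deeply supervised head, might look as follows; the decoder layout and the four-class output (background plus LA, LV, MV) are illustrative assumptions.

```python
# Sketch of the idea only: a UNet whose contracting path is VGG16 conv blocks,
# with an auxiliary (deeply supervised) head on the expanding path. Channel
# counts follow torchvision's vgg16; the decoder and class count are assumed.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGUNet(nn.Module):
    def __init__(self, n_classes=4):            # assumed: background + LA/LV/MV
        super().__init__()
        f = vgg16(weights=None).features         # VGG16 conv layers as encoder
        self.enc1, self.enc2, self.enc3 = f[:4], f[4:9], f[9:16]
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = nn.Conv2d(256, 128, 3, padding=1)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = nn.Conv2d(128, 64, 3, padding=1)
        self.head = nn.Conv2d(64, n_classes, 1)      # main prediction
        self.aux_head = nn.Conv2d(128, n_classes, 1) # deep-supervision output

    def forward(self, x):
        e1 = self.enc1(x)                         # 64 ch, full resolution
        e2 = self.enc2(e1)                        # 128 ch, 1/2 resolution
        e3 = self.enc3(e2)                        # 256 ch, 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1), self.aux_head(d2)   # both receive a seg. loss

main_out, aux_out = VGGUNet()(torch.randn(2, 3, 128, 128))
```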

17 pages, 1646 KiB  
Article
Unsupervised Object Segmentation Based on Bi-Partitioning Image Model Integrated with Classification
by Hyun-Tae Choi and Byung-Woo Hong
Electronics 2021, 10(18), 2296; https://doi.org/10.3390/electronics10182296 - 18 Sep 2021
Cited by 1 | Viewed by 1736
Abstract
The development of convolutional neural networks for deep learning has significantly contributed to image classification and segmentation. High-performance supervised image segmentation requires a large amount of ground-truth data, which is costly to produce, so unsupervised approaches are being actively studied. The Mumford–Shah and Chan–Vese models are well-known unsupervised image segmentation models; however, because they are based on pixel intensities, they cannot separate the foreground of an image from its background. In this paper, we propose a weakly supervised model for image segmentation that integrates these segmentation models (the Mumford–Shah and Chan–Vese models) with classification. The segmentation model (i.e., the Mumford–Shah or Chan–Vese model) produces a base image mask, which the classification network then uses as input. Guided by the classification network, the output mask of the segmentation model is updated in the direction that increases classification performance. In addition, the mask comes to distinguish the foreground and background of images naturally. Our experiments show that our segmentation model, integrated with a classifier, can segment the input image into foreground and background using only the image's class label, i.e., an image-level label.
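For reference, the two-phase Chan–Vese energy underlying the approach can be written as a differentiable loss over a soft mask, which is the kind of term that can be combined with a classification loss as the abstract describes. This is a generic sketch; the boundary-length weight is chosen arbitrarily.

```python
# Generic two-phase Chan-Vese energy as a differentiable loss over a soft
# mask m in [0, 1]; mu (the boundary-length weight) is an arbitrary choice.
import torch

def chan_vese_energy(image, mask, mu=0.1):
    """image, mask: (B, 1, H, W); mask holds soft foreground probabilities."""
    eps = 1e-6
    c1 = (image * mask).sum() / (mask.sum() + eps)              # mean inside
    c2 = (image * (1 - mask)).sum() / ((1 - mask).sum() + eps)  # mean outside
    fit = ((image - c1) ** 2 * mask + (image - c2) ** 2 * (1 - mask)).mean()
    # total-variation term approximating the boundary-length penalty
    tv = (mask[..., 1:, :] - mask[..., :-1, :]).abs().mean() + \
         (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean()
    return fit + mu * tv

loss = chan_vese_energy(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```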

16 pages, 10882 KiB  
Article
Digital Technologies for Innovative Mental Health Rehabilitation
by Rui Pedro Lopes, Bárbara Barroso, Leonel Deusdado, André Novo, Manuel Guimarães, João Paulo Teixeira and Paulo Leitão
Electronics 2021, 10(18), 2260; https://doi.org/10.3390/electronics10182260 - 14 Sep 2021
Cited by 16 | Viewed by 4168
Abstract
Schizophrenia is a chronic mental illness characterized by the loss of the notion of reality and the failure to distinguish it from the imaginary. It affects major areas of the patient's life, such as work, interpersonal relationships, or self-care, and the usual treatment relies on antipsychotic medication, which primarily targets hallucinations, delusions, etc. Other symptoms, such as decreased emotional expression or avolition, require a multidisciplinary approach, including psychopharmacology, cognitive training, and many forms of therapy. In this context, this paper addresses the use of digital technologies to design and develop innovative rehabilitation techniques, particularly focusing on mental health rehabilitation and contributing to the promotion of well-being and health from a holistic perspective. Serious games and virtual reality allow for the creation of immersive environments that contribute to a more effective and lasting recovery, with improvements in quality of life. Machine learning techniques allow the real-time analysis of the data collected during the execution of the rehabilitation procedures and enable their dynamic and automatic adaptation to the profile and performance of the patients by increasing or reducing the exercises' difficulty. The system relies on the acquisition of biometric and physiological signals, such as voice, heart rate, and game performance, to estimate the stress level, thus adapting the difficulty of the experience to the skills of the patient. The system described in this paper is currently in development, in collaboration with a health unit, and is an engineering effort that combines hardware and software to develop a rehabilitation tool for patients with schizophrenia. A clinical trial is also planned to assess the effectiveness of the system on negative symptoms in schizophrenia patients.
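As a toy illustration of the adaptation loop described above, the sketch below nudges exercise difficulty according to an estimated stress score. The thresholds, step size, and stress values are hypothetical stand-ins, not the system's actual rules.

```python
# Toy controller for the described adaptation loop: ease the exercise when the
# estimated stress is high, raise the challenge when the patient is
# comfortable. Thresholds, step, and stress inputs are hypothetical.
def adapt_difficulty(difficulty, stress, low=0.3, high=0.7, step=0.1):
    """difficulty and stress are both normalized to [0, 1]."""
    if stress > high:            # patient overloaded: reduce difficulty
        difficulty -= step
    elif stress < low:           # patient comfortable: increase difficulty
        difficulty += step
    return min(max(difficulty, 0.0), 1.0)

level = 0.5
for s in (0.2, 0.25, 0.8, 0.5):  # stress estimates from voice/HR/performance
    level = adapt_difficulty(level, s)
print(level)                      # ~0.6 after the four updates
```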

12 pages, 3466 KiB  
Article
Learning-Rate Annealing Methods for Deep Neural Networks
by Kensuke Nakamura, Bilel Derbel, Kyoung-Jae Won and Byung-Woo Hong
Electronics 2021, 10(16), 2029; https://doi.org/10.3390/electronics10162029 - 22 Aug 2021
Cited by 18 | Viewed by 12282
Abstract
Deep neural networks (DNNs) have achieved great success in recent decades. DNNs are commonly optimized using stochastic gradient descent (SGD) with learning-rate annealing, which outperforms adaptive methods in many tasks. However, there is no common choice of annealing schedule for SGD. This paper presents an empirical analysis of learning-rate annealing based on experimental results on the major image classification datasets, image classification being one of the key applications of DNNs. Our experiments involve recent deep neural network models in combination with a variety of learning-rate annealing methods. We also propose an annealing schedule that combines a sigmoid function with warmup and is shown to outperform both the adaptive methods and the other existing schedules in accuracy in most cases with DNNs.
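For illustration, a sigmoid-shaped learning-rate schedule with linear warmup, in the spirit of the proposed method, might look like the sketch below; the midpoint, steepness, and learning-rate values are assumptions, not the paper's parameterization.

```python
# Sigmoid-shaped decay with linear warmup; midpoint, steepness, and the
# learning-rate values below are assumptions, not the paper's settings.
import math

def sigmoid_warmup_lr(step, total_steps, base_lr=0.1,
                      warmup_steps=500, final_lr=1e-4, steepness=10.0):
    if step < warmup_steps:                     # linear warmup from 0
        return base_lr * step / warmup_steps
    # sigmoid gate decays from ~1 to ~0 over the remaining steps
    t = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    gate = 1.0 / (1.0 + math.exp(steepness * (t - 0.5)))
    return final_lr + (base_lr - final_lr) * gate

schedule = [sigmoid_warmup_lr(s, 10_000) for s in range(10_000)]
```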

19 pages, 846 KiB  
Article
Deep Learning Based on Fourier Convolutional Neural Network Incorporating Random Kernels
by Yuna Han and Byung-Woo Hong
Electronics 2021, 10(16), 2004; https://doi.org/10.3390/electronics10162004 - 19 Aug 2021
Cited by 17 | Viewed by 7290
Abstract
In recent years, convolutional neural networks have been studied in the Fourier domain for resource-limited environments, where results competitive with conventional spatial-domain image classification can be expected. We present a novel, efficient Fourier convolutional neural network in which a new activation function is used, the additional shift Fourier transform step is eliminated, and the number of learnable parameters is reduced. First, the Phase Rectified Linear Unit (PhaseReLU) is proposed, which is equivalent to the Rectified Linear Unit (ReLU) in the spatial domain. Second, in the proposed Fourier network, the shift Fourier transform is removed, since the process is inessential for training. Lastly, we introduce two ways of reducing the number of weight parameters in the Fourier network. The basic method is to use a three-by-three kernel instead of a five-by-five one in our proposed Fourier convolutional neural network. The second uses a random kernel in our efficient Fourier convolutional neural network, where the standard deviation of the kernel's Gaussian distribution serves as the weight parameter. In other words, since only two scalars per channel are required, one for the real and one for the imaginary component, a very small number of parameters suffices. As a result, in experiments on shallow networks such as LeNet-3 and LeNet-5, our method achieves accuracy competitive with conventional convolutional neural networks while dramatically reducing the number of parameters. Furthermore, our proposed Fourier network, using a basic three-by-three kernel, mostly achieves higher accuracy than traditional convolutional neural networks in both shallow and deep neural networks. Our experiments indicate that the presented kernel methods have the potential to be applied in any architecture based on convolutional neural networks.
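The identity behind Fourier-domain CNNs, that spatial convolution becomes element-wise multiplication of spectra, together with a random kernel parameterized only by per-channel standard deviations, can be sketched as follows. The shapes and exact parameterization are our illustrative reading of the abstract, not the authors' code.

```python
# Spatial convolution as element-wise multiplication of spectra, with a
# "random kernel" holding one learnable std per channel for the real and
# imaginary parts; shapes are illustrative assumptions.
import torch

def fourier_conv(x, kernel_spectrum):
    """x: (B, C, H, W) real input; kernel_spectrum: (C, H, W) complex weights."""
    X = torch.fft.fft2(x)                        # per-channel 2-D FFT
    return torch.fft.ifft2(X * kernel_spectrum).real

B, C, H, W = 2, 8, 32, 32
noise = torch.randn(2, C, H, W)                  # fixed random draws
std_re = torch.rand(C, 1, 1, requires_grad=True) # one scalar per channel
std_im = torch.rand(C, 1, 1, requires_grad=True) # (real and imaginary parts)
kernel = torch.complex(noise[0] * std_re, noise[1] * std_im)
y = fourier_conv(torch.randn(B, C, H, W), kernel)
```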

19 pages, 4099 KiB  
Article
Assessment of Machine Learning Techniques in IoT-Based Architecture for the Monitoring and Prediction of COVID-19
by Abdullah Aljumah
Electronics 2021, 10(15), 1834; https://doi.org/10.3390/electronics10151834 - 30 Jul 2021
Cited by 12 | Viewed by 2698
Abstract
Since the end of 2019, the world has been facing the threat of COVID-19. It is predicted that, before herd immunity is achieved globally via vaccination, people around the world will have to tackle the COVID-19 pandemic using precautionary steps. This paper suggests a COVID-19 identification and control system that operates in real time. The proposed system utilizes the Internet of Things (IoT) platform to capture users' time-sensitive symptom information to detect potential coronavirus cases early, to track the clinical measures adopted by survivors, and to gather and examine appropriate data to verify the existence of the virus. There are five key components in the framework: symptom data collection and uploading (via communication technology), a quarantine/isolation center, an information processing core (using artificial intelligence techniques), cloud computing, and visualization for healthcare doctors. This research utilizes eight machine/deep learning techniques to detect coronavirus cases from time-sensitive information: Neural Network, Decision Table, Support Vector Machine (SVM), Naive Bayes, OneR, K-Nearest Neighbor (K-NN), Dense Neural Network (DNN), and Long Short-Term Memory (LSTM). After selecting the relevant symptoms, a simulation was performed to verify the eight algorithms on real-world COVID-19 data, and the results showed that five of the eight algorithms obtained an accuracy of over 90%. In conclusion, real-world symptomatic information would enable these algorithms to identify potential COVID-19 cases effectively and with enhanced accuracy. Additionally, the framework presents responses to treatment for COVID-19 patients.
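As an illustration of this kind of comparison, the scikit-learn sketch below benchmarks a few of the named classifiers on a synthetic symptom table; the features, labels, and model settings are placeholders, not the study's data or configuration.

```python
# Benchmarking several of the named classifiers on a synthetic symptom table;
# the feature columns and labels are toy placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 6))                      # e.g., fever, cough, fatigue, ...
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # toy "positive case" label

models = {"SVM": SVC(), "Naive Bayes": GaussianNB(),
          "K-NN": KNeighborsClassifier(),
          "Neural Network": MLPClassifier(max_iter=500)}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")               # mean 5-fold accuracy
```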

9 pages, 3199 KiB  
Article
An Effective Learning Method for Automatic Speech Recognition in Korean CI Patients’ Speech
by Jiho Jeong, S. I. M. M. Raton Mondol, Yeon Wook Kim and Sangmin Lee
Electronics 2021, 10(7), 807; https://doi.org/10.3390/electronics10070807 - 29 Mar 2021
Cited by 2 | Viewed by 2544
Abstract
Automatic speech recognition (ASR) models usually require a large amount of training data to provide good results, but such data is difficult to obtain for non-standard speech, such as that of cochlear implant (CI) patients, owing to privacy concerns or difficulty of access. In this paper, an ASR training method based on effective fine-tuning and data augmentation is proposed. Experiments compare the character error rate (CER) after training the ASR model with the basic and the proposed methods. The proposed method achieved a CER of 36.03% on the CI patients' speech test dataset using only 2 h and 30 min of training data, which is a 62% improvement over the basic method.
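The character error rate reported above is the Levenshtein edit distance between the reference and hypothesis transcripts, normalized by the reference length. A minimal reference implementation (not the authors' code) follows.

```python
# Character error rate via Levenshtein distance; a minimal reference
# implementation of the metric, not the authors' evaluation code.
def cer(reference: str, hypothesis: str) -> float:
    r, h = list(reference), list(hypothesis)
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(r), 1)

print(cer("hello world", "helo wurld"))  # 2 edits / 11 chars ~= 0.18
```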
