Article

An Arrhythmia Classification Model Based on Vision Transformer with Deformable Attention

1 School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
2 Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
3 School of Electronics and Information Technology, Soochow University, Suzhou 215031, China
* Authors to whom correspondence should be addressed.
Micromachines 2023, 14(6), 1155; https://doi.org/10.3390/mi14061155
Submission received: 28 April 2023 / Revised: 28 May 2023 / Accepted: 29 May 2023 / Published: 30 May 2023

Abstract

The electrocardiogram (ECG) is a highly effective non-invasive tool for monitoring heart activity and diagnosing cardiovascular diseases (CVDs). Automatic detection of arrhythmia based on the ECG plays a critical role in the early prevention and diagnosis of CVDs. In recent years, numerous studies have focused on using deep learning methods to address arrhythmia classification problems. However, transformer-based neural networks in current research still show limited performance in detecting arrhythmias from multi-lead ECG. In this study, we propose an end-to-end multi-label arrhythmia classification model for the 12-lead ECG with varied-length recordings. Our model, called CNN-DVIT, combines convolutional neural networks (CNNs) with depthwise separable convolution and a vision transformer structure with deformable attention. Specifically, we introduce a spatial pyramid pooling layer to accept varied-length ECG signals. Experimental results show that our model achieved an F1 score of 82.9% on CPSC-2018. Notably, our CNN-DVIT outperforms the latest transformer-based ECG classification algorithms. Furthermore, ablation experiments reveal that the deformable multi-head attention and the depthwise separable convolution are both efficient in extracting diagnostic features from multi-lead ECG signals. The CNN-DVIT achieved good performance for the automatic arrhythmia detection of ECG signals. This indicates that our research can assist doctors in clinical ECG analysis, providing important support for the diagnosis of arrhythmia and contributing to the development of computer-aided diagnosis technology.
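The spatial pyramid pooling layer mentioned above is what lets the model accept varied-length recordings: it pools a feature sequence at several scales so that any input length maps to a fixed-length vector. A minimal sketch of 1-D spatial pyramid pooling follows; the pool sizes (1, 2, 4) and the use of max-pooling are illustrative assumptions, not the authors' exact configuration.

```python
def spp_1d(features, levels=(1, 2, 4)):
    """Max-pool a 1-D feature sequence at several pyramid levels,
    producing a vector of sum(levels) values regardless of input length."""
    out = []
    n = len(features)
    for bins in levels:
        for b in range(bins):
            # Split the sequence into `bins` roughly equal segments
            # and keep the maximum of each segment.
            start = (b * n) // bins
            end = ((b + 1) * n) // bins
            out.append(max(features[start:end]))
    return out

# Two recordings of different lengths yield fixed-size vectors.
short = spp_1d([0.1, 0.9, 0.3, 0.7])
long = spp_1d([0.2] * 100 + [1.0] + [0.2] * 149)
assert len(short) == len(long) == 1 + 2 + 4
```

Because the output size depends only on the pyramid levels, a downstream classifier head can have a fixed input dimension even though ECG recordings vary in duration.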
Keywords: arrhythmia; deep learning; ECG signal; deformable attention transformer; depthwise separable convolution

Share and Cite

MDPI and ACS Style

Dong, Y.; Zhang, M.; Qiu, L.; Wang, L.; Yu, Y. An Arrhythmia Classification Model Based on Vision Transformer with Deformable Attention. Micromachines 2023, 14, 1155. https://doi.org/10.3390/mi14061155


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
