Article

Attention-Based Sequence-to-Sequence Model for Time Series Imputation

School of Computer Science and Technology, Jiangsu Normal University, Xuzhou 221116, China
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(12), 1798; https://doi.org/10.3390/e24121798
Submission received: 26 October 2022 / Revised: 22 November 2022 / Accepted: 7 December 2022 / Published: 9 December 2022
(This article belongs to the Special Issue Intelligent Time Series Model and Its Applications)

Abstract

Time series data are often characterized by missing values, high dimensionality, and large volume. To address the problem of high-dimensional time series with missing values, this paper proposes an attention-based sequence-to-sequence model for imputing missing values in time series (ASSM), which combines feature learning with data imputation. The model consists of two parts, an encoder and a decoder. The encoder is a BiGRU recurrent neural network that incorporates a self-attention mechanism, making the model better able to handle long time series; the decoder is a GRU recurrent neural network that incorporates a cross-attention mechanism to associate it with the encoder. The relationship weights between the sequences generated in the decoder and the known sequences in the encoder are computed so that the model focuses on the sequences with the highest correlation. We conduct comparison experiments on four real datasets, with four evaluation metrics and six baseline models. The experimental results show that the proposed model outperforms the six comparative missing-value imputation algorithms.
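As a concrete illustration of the architecture the abstract describes, below is a minimal PyTorch sketch of a BiGRU encoder with self-attention feeding a GRU decoder with cross-attention. It is not the authors' released implementation: the hidden sizes, the use of nn.MultiheadAttention for both attention mechanisms, and zero pre-filling of missing entries are illustrative assumptions.

    # Minimal sketch of the encoder-decoder structure described above.
    import torch
    import torch.nn as nn

    class ASSMSketch(nn.Module):
        def __init__(self, n_features, hidden=64, heads=4):
            super().__init__()
            # Encoder: bidirectional GRU over the (pre-filled) series,
            # followed by self-attention over its hidden states.
            self.encoder = nn.GRU(n_features, hidden,
                                  batch_first=True, bidirectional=True)
            self.self_attn = nn.MultiheadAttention(2 * hidden, heads,
                                                   batch_first=True)
            # Decoder: unidirectional GRU that generates the imputed
            # sequence, with cross-attention weighting the encoder states.
            self.decoder = nn.GRU(n_features, 2 * hidden, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(2 * hidden, heads,
                                                    batch_first=True)
            self.out = nn.Linear(2 * hidden, n_features)

        def forward(self, x):
            # x: (batch, time, features); missing entries pre-filled
            # (assumed here: with zeros).
            enc, _ = self.encoder(x)
            enc, _ = self.self_attn(enc, enc, enc)   # encoder self-attention
            dec, _ = self.decoder(x)
            ctx, _ = self.cross_attn(dec, enc, enc)  # attend to encoder states
            return self.out(ctx)                     # reconstructed series

    # Usage: impute a batch of 8 series, 50 steps, 3 features.
    model = ASSMSketch(n_features=3)
    filled = model(torch.randn(8, 50, 3))
    print(filled.shape)  # torch.Size([8, 50, 3])

In such a setup, the cross-attention weights play the role of the relationship weights mentioned above: each decoder step is a weighted combination of encoder states, so reconstruction at a missing position draws most heavily on the observed sub-sequences it correlates with.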
Keywords: deep learning; time series; missing value imputation; sequence-to-sequence; self-attention

Share and Cite

MDPI and ACS Style

Li, Y.; Du, M.; He, S. Attention-Based Sequence-to-Sequence Model for Time Series Imputation. Entropy 2022, 24, 1798. https://doi.org/10.3390/e24121798

AMA Style

Li Y, Du M, He S. Attention-Based Sequence-to-Sequence Model for Time Series Imputation. Entropy. 2022; 24(12):1798. https://doi.org/10.3390/e24121798

Chicago/Turabian Style

Li, Yurui, Mingjing Du, and Sheng He. 2022. "Attention-Based Sequence-to-Sequence Model for Time Series Imputation" Entropy 24, no. 12: 1798. https://doi.org/10.3390/e24121798

APA Style

Li, Y., Du, M., & He, S. (2022). Attention-Based Sequence-to-Sequence Model for Time Series Imputation. Entropy, 24(12), 1798. https://doi.org/10.3390/e24121798

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
