Multiview Spatial-Temporal Meta-Learning for Multivariate Time Series Forecasting
Abstract
1. Introduction
- We propose ST-MeLaPI, a novel framework based on meta-learning approximate probabilistic inference, for the efficient and versatile learning of multivariate time series data. ST-MeLaPI treats each short time span as the dataset for a specific task and leverages a pairwise variable relationship learner (multivariate relationship identification, MRI) to capture task-specific inter-variable dependencies, thereby enabling a spatial dependency- and temporal pattern-induced approximate posterior for one-forward-pass all-timestamp prediction (MV-MeLaPI).
- We developed MV-MeLaPI, a multiview position-aware meta-learning and inference strategy. It enables fast and versatile learning via position-aware stochastic neurons. Spatial dependency- and temporal pattern-induced, view-specific stochastic inputs are generated from the support set. A cross-view concatenation of these stochastic inputs with the encoded (view-specific) query set is then used to approximate the posterior predictive distribution, realizing multiview meta-probabilistic inference at each timestamp.
- ST-MeLaPI can be used for both long-sequence and short-sequence time series forecasting tasks. In this study, we conducted extensive experiments with real-world data; the results show improved learning performance.
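The task-construction idea above (each short time span serves as the dataset for one task, split into a support set for adaptation and a query set for prediction) can be sketched as follows. This is a minimal illustration only: the function name, window lengths, and the non-overlapping sliding split are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def make_meta_tasks(series, span_len, support_len):
    """Slice a multivariate series (T timesteps x K variables) into
    short-span meta-tasks, each split into a support and a query set."""
    tasks = []
    for start in range(0, series.shape[0] - span_len + 1, span_len):
        span = series[start:start + span_len]   # one short time span = one task
        tasks.append({
            "support": span[:support_len],      # observed context for task adaptation
            "query":   span[support_len:],      # timestamps predicted in one forward pass
        })
    return tasks

# 1000 timesteps of a 207-variable series (METR-LA-sized); random placeholder data
series = np.random.randn(1000, 207)
tasks = make_meta_tasks(series, span_len=24, support_len=12)
print(len(tasks), tasks[0]["support"].shape, tasks[0]["query"].shape)
# → 41 (12, 207) (12, 207)
```

Each resulting dictionary plays the role of one meta-learning episode: the model adapts on `support` and is evaluated on `query`.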
2. Related Work
- Deep sequential neural networks. Nonlinear recurrent neural network (RNN) models [26] have shown advantages in capturing high-order complex dynamics. For instance, LSTNet [27] combines a convolutional neural network with recurrent skip connections to extract both short-term and long-term temporal patterns. DeepAR [11] performs probabilistic prediction by combining an autoregressive method with an RNN. Transformer-based models [1,2,28,29,30] utilize self-attention mechanisms [31] to capture long-range dependencies for prediction. Another popular and competitive architecture is the causal convolution or temporal convolutional network (TCN) [32]; for example, DeepGLO [33] combines a global matrix factorization model with temporal convolutional networks. Even though recent state-of-the-art decomposition-based models [2,34,35,36] achieve superior performance, they are mainly designed to capture seasonal or trend-cyclical patterns, which falls short of handling the versatile patterns of time series, especially given the evolving nature of the data. In light of this, some works adopt meta-learning strategies [18] to realize quick task adaptation, or choose multitask learning [37] for adaptive learning and forecasting in non-stationary data contexts.
- Multivariate relationship-aware techniques. The underlying inter-variable dependencies, e.g., correlation and causation, have shown promise in enhancing prediction performance [9]. Recent advances therefore focus on spatio-temporal learning; in particular, traffic forecasting models [3,4,10,38,39,40,41,42] consider spatial connections by combining graph neural networks (GNNs) [43] with recurrent neural networks, thus forecasting all time series simultaneously. However, these methods assume the spatial connections of roads are given as prior knowledge, which is often unavailable in multivariate time series forecasting tasks. Therefore, automatic inter-variable dependency structure learning has been integrated into temporal dynamics modeling [8,9,12,44]. For instance, NRI [8] uses an autoencoder approach that learns different structures for different multivariate time series inputs. GTS [9] first learns a probabilistic graph for the whole time series and then combines graph neural networks with temporal recurrent processing to model inter-variable structure-aware temporal dynamics. Nevertheless, these models either learn only a single structure for one set of K series spanning a whole time window [9] or cannot capture diverse temporal patterns because their model parameters remain fixed despite dynamic structures [8]. Recently, ML-PIP [6,22] was proposed, adopting a multitask meta-learning framework to perform approximate probabilistic inference in low-resource learning environments. However, ML-PIP assumes that spatial information is given and divides all data into learning tasks according to spatial and temporal information simultaneously; moreover, it does not consider the multivariate time series prediction problem, much less a position-aware multiview meta-learning strategy. Hence, it differs substantially from our learning tasks and proposed model.
Nevertheless, it offers the insight that each short time period can naturally be regarded as a specific learning task, given the modality-rich and time-evolving nature of multivariate time series, thus enabling data-efficient and versatile learning. However, the inherent multi-step nature of time series presents challenges in realizing time-specific diversity. In particular, the effectiveness of sequential-dynamics-only [1,11,27,33], graph-structure-only [12,13], and entangled spatio-temporal modeling [9,44] motivated us to combine multivariate relationships and temporal dynamics in a more sophisticated way. For example, Qu et al. [13] proposed a sparse connectivity network that identifies temporal dynamics using an adaptive regularization approach, and Informer [1] shows the power of self-attention in sequential modeling. Moreover, both temporal and contextual information [15] enhance the representation learning of multivariate time series, strengthening the need for fine-grained multiview fusion of temporal and spatial information.
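For context, NRI/GTS-style discrete graph structure learning discussed above is typically made differentiable via the Gumbel-softmax reparameterization (Jang et al.). The sketch below illustrates only that generic trick, not the specific architecture of this paper; the edge-logit shapes and temperature are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Draw a soft (differentiable-in-principle) sample from a categorical
    distribution over edge states, as used when learning discrete
    inter-variable graphs."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))         # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)

# logits for K x K candidate edges, each with two states: "absent" vs. "present"
K = 4
edge_logits = rng.normal(size=(K, K, 2))
soft_adj = gumbel_softmax(edge_logits)[..., 1]  # mass assigned to "present"
print(soft_adj.shape)  # (4, 4)
```

Lowering `tau` pushes the samples toward hard 0/1 edge decisions while keeping the relaxation usable for gradient-based training.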
3. Methodology
3.1. Problem Definition
3.2. Multivariate Relationship Identification
3.3. Multiview Meta-Learning and Probabilistic Inference
3.3.1. Multiview Feature Encoder
3.3.2. Multiview Position-Aware Amortization Network
3.3.3. Cross-View Gated Generator
3.3.4. End-to-End Stochastic Training
4. Experimental Settings
4.1. Datasets
4.2. Experimental Details
4.2.1. Baselines
4.2.2. Hyperparameter Tuning
4.2.3. Evaluation Metrics
4.3. Meta-Task Construction
5. Results and Analysis
5.1. Traffic Flow Forecast
5.2. Ablation Studies
5.3. Few-Data Training
5.4. Dynamic Graph Generation
5.5. Task Relevance Verification
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2021; Volume 35, pp. 11106–11115.
- Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430.
- Pan, Z.; Zhang, W.; Liang, Y.; Zhang, W.; Yu, Y.; Zhang, J.; Zheng, Y. Spatio-temporal meta learning for urban traffic prediction. IEEE Trans. Knowl. Data Eng. 2020, 34, 1462–1476.
- Han, L.; Du, B.; Sun, L.; Fu, Y.; Lv, Y.; Xiong, H. Dynamic and multi-faceted spatio-temporal deep learning for traffic speed forecasting. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–19 August 2021; pp. 547–555.
- Cao, D.; Wang, Y.; Duan, J.; Zhang, C.; Zhu, X.; Huang, C.; Tong, Y.; Xu, B.; Bai, J.; Tong, J.; et al. Spectral temporal graph neural network for multivariate time-series forecasting. Adv. Neural Inf. Process. Syst. 2020, 33, 17766–17778.
- Qin, H.; Ke, S.; Yang, X.; Xu, H.; Zhan, X.; Zheng, Y. Robust spatio-temporal purchase prediction via deep meta learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2021; Volume 35, pp. 4312–4319.
- An, Y.; Zhang, L.; Yang, H.; Sun, L.; Jin, B.; Liu, C.; Yu, R.; Wei, X. Prediction of treatment medicines with dual adaptive sequential networks. IEEE Trans. Knowl. Data Eng. 2021, 34, 5496–5509.
- Kipf, T.; Fetaya, E.; Wang, K.C.; Welling, M.; Zemel, R. Neural relational inference for interacting systems. In Proceedings of the International Conference on Machine Learning—PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 2688–2697.
- Shang, C.; Chen, J.; Bi, J. Discrete graph structure learning for forecasting multiple time series. arXiv 2021, arXiv:2101.06861.
- Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; Li, H. T-GCN: A temporal graph convolutional network for traffic prediction. IEEE Trans. Intell. Transp. Syst. 2019, 21, 3848–3858.
- Salinas, D.; Flunkert, V.; Gasthaus, J.; Januschowski, T. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. Int. J. Forecast. 2020, 36, 1181–1191.
- Choi, E.; Xu, Z.; Li, Y.; Dusenberry, M.; Flores, G.; Xue, E.; Dai, A. Learning the graphical structure of electronic health records with graph convolutional transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 606–613.
- Qu, Y.; Liu, C.; Zhang, K.; Xiao, K.; Jin, B.; Xiong, H. Diagnostic sparse connectivity networks with regularization template. IEEE Trans. Knowl. Data Eng. 2021, 35, 307–320.
- Li, Y.; Chen, Z.; Zha, D.; Du, M.; Ni, J.; Zhang, D.; Chen, H.; Hu, X. Towards Learning Disentangled Representations for Time Series. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 3270–3278.
- Eldele, E.; Ragab, M.; Chen, Z.; Wu, M.; Kwoh, C.K.; Li, X.; Guan, C. Time-Series Representation Learning via Temporal and Contextual Contrasting. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Montreal, QC, Canada, 19–27 August 2021; pp. 2352–2359.
- Pham, Q.; Liu, C.; Sahoo, D.; Hoi, S.C. Learning Fast and Slow for Online Time Series Forecasting. arXiv 2022, arXiv:2202.11672.
- Finn, C.; Abbeel, P.; Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning—PMLR, Sydney, Australia, 6–11 August 2017; pp. 1126–1135.
- Oreshkin, B.N.; Carpov, D.; Chapados, N.; Bengio, Y. Meta-learning framework with applications to zero-shot time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2021; Volume 35, pp. 9242–9250.
- Lu, B.; Gan, X.; Zhang, W.; Yao, H.; Fu, L.; Wang, X. Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge Transfer. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 1162–1172.
- Sohn, K.; Lee, H.; Yan, X. Learning structured output representation using deep conditional generative models. Adv. Neural Inf. Process. Syst. 2015, 28.
- Zhang, Y.; Li, Y.; Zhou, X.; Luo, J.; Zhang, Z.L. Urban traffic dynamics prediction—A continuous spatial-temporal meta-learning approach. ACM Trans. Intell. Syst. Technol. 2022, 13, 1–19.
- Gordon, J.; Bronskill, J.; Bauer, M.; Nowozin, S.; Turner, R. Meta-Learning Probabilistic Inference for Prediction. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
- Makridakis, S.; Hibon, M. ARMA models and the Box–Jenkins methodology. J. Forecast. 1997, 16, 147–163.
- de Bézenac, E.; Rangapuram, S.S.; Benidis, K.; Bohlke-Schneider, M.; Kurle, R.; Stella, L.; Hasson, H.; Gallinari, P.; Januschowski, T. Normalizing Kalman filters for multivariate time series analysis. Adv. Neural Inf. Process. Syst. 2020, 33, 2995–3007.
- Hyndman, R.; Koehler, A.B.; Ord, J.K.; Snyder, R.D. Forecasting with Exponential Smoothing: The State Space Approach; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008.
- Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014.
- Lai, G.; Chang, W.C.; Yang, Y.; Liu, H. Modeling long- and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104.
- Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.X.; Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
- Kitaev, N.; Kaiser, L.; Levskaya, A. Reformer: The Efficient Transformer. In Proceedings of the 8th International Conference on Learning Representations, ICLR, Addis Ababa, Ethiopia, 26–30 April 2020.
- Madhusudhanan, K.; Burchert, J.; Duong-Trung, N.; Born, S.; Schmidt-Thieme, L. Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting. arXiv 2021, arXiv:2110.08255.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Oord, A.v.d.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499.
- Sen, R.; Yu, H.F.; Dhillon, I.S. Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting. Adv. Neural Inf. Process. Syst. 2019, 32.
- Oreshkin, B.N.; Carpov, D.; Chapados, N.; Bengio, Y. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In Proceedings of the 8th International Conference on Learning Representations, ICLR, Addis Ababa, Ethiopia, 26–30 April 2020.
- Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; Hoi, S. CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting. arXiv 2022, arXiv:2202.01575.
- Challu, C.; Olivares, K.G.; Oreshkin, B.N.; Garza, F.; Mergenthaler, M.; Dubrawski, A. N-HiTS: Neural hierarchical interpolation for time series forecasting. arXiv 2022, arXiv:2201.12886.
- Du, Y.; Wang, J.; Feng, W.; Pan, S.; Qin, T.; Xu, R.; Wang, C. AdaRNN: Adaptive learning and forecasting of time series. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 402–411.
- Yu, B.; Yin, H.; Zhu, Z. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. arXiv 2017, arXiv:1709.04875.
- Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In Proceedings of the International Conference on Learning Representations (ICLR’18), Vancouver, BC, Canada, 30 April–3 May 2018.
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph WaveNet for Deep Spatial-Temporal Graph Modeling. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, Macao, China, 10–16 August 2019.
- Li, M.; Zhu, Z. Spatial-temporal fusion graph neural networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2021; Volume 35, pp. 4189–4196.
- Jin, Y.; Chen, K.; Yang, Q. Selective cross-city transfer learning for traffic prediction via source city region re-weighting. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 731–741.
- Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How Powerful are Graph Neural Networks? In Proceedings of the 7th International Conference on Learning Representations, ICLR, New Orleans, LA, USA, 6–9 May 2019.
- Duan, Z.; Xu, H.; Wang, Y.; Huang, Y.; Ren, A.; Xu, Z.; Sun, Y.; Wang, W. Multivariate time-series classification with hierarchical variational graph pooling. Neural Netw. 2022, 154, 481–490.
- Jang, E.; Gu, S.; Poole, B. Categorical Reparameterization with Gumbel-Softmax. In Proceedings of the 5th International Conference on Learning Representations, ICLR, Toulon, France, 24–26 April 2017.
- Jagadish, H.V.; Gehrke, J.; Labrinidis, A.; Papakonstantinou, Y.; Patel, J.M.; Ramakrishnan, R.; Shahabi, C. Big Data and Its Technical Challenges. Commun. ACM 2014, 57, 86–94.
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph WaveNet for deep spatial-temporal graph modeling. arXiv 2019, arXiv:1906.00121.
- Xu, M.; Dai, W.; Liu, C.; Gao, X.; Lin, W.; Qi, G.J.; Xiong, H. Spatial-temporal transformer networks for traffic flow forecasting. arXiv 2020, arXiv:2001.02908.
- Zheng, C.; Fan, X.; Wang, C.; Qi, J. GMAN: A graph multi-attention network for traffic prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1234–1241.
- Seo, Y.; Defferrard, M.; Vandergheynst, P.; Bresson, X. Structured sequence modeling with graph convolutional recurrent networks. In Proceedings of Neural Information Processing: 25th International Conference, ICONIP 2018, Siem Reap, Cambodia, 13–16 December 2018; Part I; Springer: Berlin/Heidelberg, Germany, 2018; pp. 362–373.
- Ye, J.; Sun, L.; Du, B.; Fu, Y.; Xiong, H. Coupled layer-wise graph convolution for transportation demand prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2021; Volume 35, pp. 4617–4625.
- Lee, H.; Jin, S.; Chu, H.; Lim, H.; Ko, S. Learning to remember patterns: Pattern matching memory networks for traffic forecasting. arXiv 2021, arXiv:2110.10380.
- Jiang, R.; Wang, Z.; Yong, J.; Jeph, P.; Chen, Q.; Kobayashi, Y.; Song, X.; Fukushima, S.; Suzumura, T. Spatio-temporal meta-graph learning for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 8078–8086.
| Dataset | METR-LA | PEMS-BAY | EXPY-TKY |
| --- | --- | --- | --- |
| Start Time | 1 March 2012 | 1 January 2017 | 1 October 2021 |
| End Time | 30 June 2012 | 31 May 2017 | 31 December 2021 |
| Time Interval | 5 min | 5 min | 10 min |
| Timesteps | 34,272 | 52,116 | 13,248 |
| Spatial Units | 207 sensors | 325 sensors | 1843 road links |
METR-LA (column groups: 15 min/Horizon 3, 30 min/Horizon 6, 60 min/Horizon 12; each group reports MAE, MAPE (%), RMSE):

| Model | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HA [39] | 4.16 | 13.0 | 7.80 | 4.16 | 13.0 | 7.80 | 4.16 | 13.0 | 7.80 |
| STGCN [38] | 2.88 | 7.6 | 5.74 | 3.47 | 9.5 | 7.24 | 4.59 | 12.7 | 9.40 |
| DCRNN [39] | 2.77 | 7.3 | 5.38 | 3.15 | 8.8 | 6.45 | 3.60 | 10.5 | 7.60 |
| GW-Net [47] | 2.69 | 6.9 | 5.15 | 3.07 | 8.4 | 6.22 | 3.53 | 10.0 | 7.37 |
| STTN [48] | 2.79 | 7.2 | 5.48 | 3.16 | 8.5 | 6.50 | 3.60 | 10.2 | 7.60 |
| GMAN [49] | 2.80 | 7.4 | 5.55 | 3.12 | 8.7 | 6.49 | 3.44 | 10.1 | 7.35 |
| CCRNN [51] | 2.85 | 7.5 | 5.54 | 3.24 | 8.9 | 6.54 | 3.73 | 10.6 | 7.65 |
| GTS [9] | 2.65 | 6.8 | 5.20 | 3.05 | 8.3 | 6.22 | 3.47 | 9.8 | 7.29 |
| PM-MemNet [52] | 2.65 | 7.0 | 5.29 | 3.03 | 8.4 | 6.29 | 3.46 | 10.0 | 7.29 |
| ST-GFSL [19] | 2.90 | N/A | 5.59 | 3.56 | N/A | 6.96 | N/A | N/A | N/A |
| MegaCRN [53] | 2.52 | 6.4 | 4.94 | 2.93 | 7.9 | 6.06 | 3.38 | 9.7 | 7.23 |
| ST-MeLaPI | 2.12 | 5.3 | 3.72 | 2.60 | 6.7 | 4.13 | 2.64 | 6.9 | 4.25 |
| Generate-ST-MeLaPI | 2.50 | 6.3 | 4.99 | 2.69 | 6.6 | 5.58 | 2.61 | 6.7 | 4.83 |

PEMS-BAY (column groups: 15 min/Horizon 3, 30 min/Horizon 6, 60 min/Horizon 12; each group reports MAE, MAPE (%), RMSE):

| Model | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HA [39] | 2.88 | 6.8 | 5.59 | 2.88 | 6.8 | 5.59 | 2.88 | 6.8 | 5.59 |
| STGCN [38] | 1.36 | 2.9 | 2.96 | 1.81 | 4.1 | 4.27 | 2.49 | 5.7 | 5.69 |
| DCRNN [39] | 1.38 | 2.9 | 2.95 | 1.74 | 3.9 | 3.97 | 2.07 | 4.9 | 4.74 |
| GW-Net [47] | 1.30 | 2.7 | 2.74 | 1.63 | 3.6 | 3.70 | 1.95 | 4.6 | 4.52 |
| STTN [48] | 1.36 | 2.8 | 2.87 | 1.67 | 3.7 | 3.79 | 1.95 | 4.5 | 4.50 |
| GMAN [49] | 1.35 | 2.8 | 2.90 | 1.65 | 3.7 | 3.82 | 1.92 | 4.5 | 4.49 |
| CCRNN [51] | 1.38 | 2.9 | 2.90 | 1.74 | 3.9 | 3.87 | 2.07 | 4.8 | 4.65 |
| GTS [9] | 1.34 | 2.8 | 2.84 | 1.67 | 3.7 | 3.83 | 1.98 | 4.5 | 4.56 |
| PM-MemNet [52] | 1.34 | 2.8 | 2.82 | 1.65 | 3.7 | 3.76 | 1.95 | 4.5 | 4.49 |
| ST-GFSL [19] | 1.56 | N/A | 3.18 | 2.07 | N/A | 4.58 | N/A | N/A | N/A |
| MegaCRN [53] | 1.28 | 2.6 | 2.72 | 1.60 | 3.5 | 3.68 | 1.88 | 4.4 | 4.42 |
| ST-MeLaPI | 1.08 | 2.2 | 1.81 | 1.21 | 2.6 | 2.24 | 1.46 | 3.0 | 2.44 |
| Generate-ST-MeLaPI | 0.93 | 1.9 | 1.63 | 1.13 | 2.4 | 2.09 | 1.41 | 3.1 | 2.83 |

EXPY-TKY (column groups: 10 min/Horizon 1, 30 min/Horizon 3, 60 min/Horizon 6; each group reports MAE, MAPE (%), RMSE):

| Model | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HA [39] | 7.63 | 31.2 | 11.96 | 7.63 | 31.2 | 11.96 | 7.63 | 31.2 | 11.96 |
| STGCN [38] | 6.09 | 24.8 | 9.60 | 6.91 | 30.2 | 10.99 | 8.41 | 32.9 | 12.70 |
| DCRNN [39] | 6.04 | 25.5 | 9.44 | 6.85 | 31.0 | 10.87 | 7.45 | 34.6 | 11.86 |
| GW-Net [47] | 5.91 | 25.2 | 9.30 | 6.59 | 29.7 | 10.54 | 6.89 | 31.7 | 11.07 |
| STTN [48] | 5.90 | 25.6 | 9.27 | 6.53 | 29.8 | 10.40 | 6.99 | 32.5 | 11.23 |
| GMAN [49] | 6.09 | 26.5 | 9.49 | 6.64 | 30.1 | 10.55 | 7.05 | 32.9 | 11.28 |
| CCRNN [51] | 5.90 | 24.5 | 9.29 | 6.68 | 29.9 | 10.77 | 7.11 | 32.5 | 11.56 |
| GTS [9] | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| PM-MemNet [52] | 5.94 | 25.1 | 9.25 | 6.52 | 28.9 | 10.42 | 6.87 | 31.2 | 11.14 |
| ST-GFSL [19] | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| MegaCRN [53] | 5.81 | 24.4 | 9.20 | 6.44 | 28.9 | 10.33 | 6.83 | 31.0 | 11.04 |
| ST-MeLaPI | 5.02 | 22.3 | 8.41 | 5.89 | 26.1 | 9.45 | 6.77 | 31.3 | 10.83 |
| Generate-ST-MeLaPI | 5.36 | 22.9 | 8.79 | 5.92 | 26.0 | 9.66 | 6.88 | 31.6 | 11.29 |
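The tables report MAE, MAPE, and RMSE. For reference, the standard definitions are sketched below; the paper's exact handling of missing or zero-valued targets may differ, and the small `eps` guard is our own assumption.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred, eps=1e-8):
    """Mean absolute percentage error; eps guards against division by zero."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / (y_true + eps)))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 33.0])
print(mae(y_true, y_pred), rmse(y_true, y_pred))  # ≈ 2.33, 2.38
```

MAE and RMSE are in the units of the target (e.g., mph for METR-LA speeds), while MAPE is scale-free, which is why it is reported as a percentage across datasets.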
PEMS-BAY ablation results (column groups: 15 min, 30 min, 60 min; each group reports MAE, MAPE (%), RMSE):

| Model | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE | MAE | MAPE (%) | RMSE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ST-MeLaPI | 1.08 | 2.2 | 1.81 | 1.21 | 2.6 | 2.24 | 1.46 | 3.0 | 2.44 |
| ST-MeLaPI | 1.78 | 3.7 | 2.87 | 2.15 | 4.9 | 3.55 | 3.18 | 8.6 | 5.41 |
| ST-MeLaPI | 1.28 | 2.6 | 2.28 | 1.73 | 3.6 | 3.11 | 2.32 | 5.0 | 4.28 |
| ST-MeLaPI | 3.08 | 7.3 | 4.86 | 3.14 | 8.1 | 5.19 | 4.70 | 13.4 | 7.26 |
| ST-MeLaPI | 0.88 | 1.7 | 1.73 | 1.23 | 2.6 | 2.51 | 1.80 | 4.1 | 3.52 |
| ST-MeLaPI | 1.60 | 3.6 | 2.57 | 2.26 | 5.1 | 3.47 | 2.44 | 5.6 | 3.78 |
| ST-MeLaPI | 1.42 | 2.9 | 2.45 | 1.52 | 3.1 | 2.53 | 1.61 | 3.3 | 2.60 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, L.; Zhu, J.; Jin, B.; Wei, X. Multiview Spatial-Temporal Meta-Learning for Multivariate Time Series Forecasting. Sensors 2024, 24, 4473. https://doi.org/10.3390/s24144473