Short-Term and Long-Term Travel Time Prediction Using Transformer-Based Techniques
Abstract
1. Introduction
- The proposed SLIT, which incorporates Enhanced Data Preprocessing (EDP) and the Short-Term and Long-Term Integrated Encoder–Decoder (SLIED), represents a significant advancement in traffic prediction, predicting travel times across both short-term and long-term horizons within a single framework.
- The EDP module effectively utilizes periodic segments and temporal attributes in data processing, significantly enhancing the model’s predictive capabilities for both short-term and long-term traffic scenarios.
- The SLIED module, consisting of Short-Term and Long-Term Integrated Encoding (SLI-E) and Decoding (SLI-D), captures dependencies over different time frames. This design mitigates error accumulation commonly observed in autoregressive models.
- Comprehensive experiments on a large-scale real-world dataset demonstrate that SLIT outperforms contemporary methods in both short-term and long-term travel time prediction. Notably, SLIT achieves substantial gains in short-term prediction, with improvements of up to 9.67%, 9.20%, and 8.66% in terms of MAE, RMSE, and SMAPE, respectively.
- Across a wide range of road complexity conditions, SLIT consistently achieves strong results, demonstrating its adaptability and robustness across diverse traffic environments.
2. Related Work
2.1. Transformer Models in Traffic Prediction
2.2. Sequence-to-Sequence Models in Long-Term Prediction
3. Proposed Methodology
3.1. Problem Formulation
3.2. Overall Structure of the Proposed Framework
3.3. Enhanced Data Preprocessing
3.4. Short-Term and Long-Term Integrated Encoder–Decoder
Algorithm 1 Short-Term and Long-Term Integrated Encoder–Decoder.
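To make the design described above concrete, here is a minimal sketch (not the authors' SLIED) of a Transformer encoder–decoder that emits the full prediction horizon in one forward pass, so predictions are never fed back as decoder inputs, which is the error-accumulation issue of autoregressive decoding. All module sizes and the learned-query decoding scheme are illustrative assumptions.

```python
# Minimal sketch of direct (non-autoregressive) multi-horizon decoding.
# Not the paper's SLIED; sizes and design details are assumptions.
import torch
import torch.nn as nn

class DirectHorizonEncoderDecoder(nn.Module):
    def __init__(self, d_feat=8, d_model=64, n_heads=4, horizon=288):
        super().__init__()
        self.input_proj = nn.Linear(d_feat, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        # One learned query per future time slot: the decoder attends to the
        # encoded history rather than to its own previous predictions.
        self.horizon_queries = nn.Parameter(torch.randn(horizon, d_model))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                       # x: (batch, hist_len, d_feat)
        memory = self.encoder(self.input_proj(x))
        queries = self.horizon_queries.unsqueeze(0).expand(x.size(0), -1, -1)
        out = self.decoder(queries, memory)     # (batch, horizon, d_model)
        return self.head(out).squeeze(-1)       # (batch, horizon) travel times
```

Because every horizon step is decoded from the encoded history in parallel, an error at one step cannot contaminate later steps, which is the property the contributions above attribute to SLIED.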
4. Experimental Setup
4.1. Datasets
4.2. Competitive Methods
- HA [28]: The HA model is a traditional time series baseline that forecasts using historical data averages (see the sketch after this list).
- LSTM [29]: LSTM models, a specialized form of recurrent neural networks, are adept at handling sequence prediction tasks, including the forecasting of travel times.
- DNN [30]: This deep neural network model, consisting of six layers, is designed to tackle a range of traffic-related prediction tasks, such as estimating travel times and analyzing traffic flows.
- DE-SLSTM [22]: DE-SLSTM extends traditional LSTM models by modeling both short-term and long-term dependencies in historical traffic data, aiming to improve travel time prediction accuracy.
- MTSMFF [23]: A framework for multivariate time series forecasting that integrates BiLSTM units with attention mechanisms to uncover hidden temporal patterns for precise forecasting.
- DHM [8]: DHM integrates the Gated Recurrent Unit (GRU) with the XGBoost algorithm for freeway travel time prediction, combining the two components via linear regression.
- TFT [24]: TFT leverages Temporal Fusion Transformers to integrate various input types, showcasing its adaptability in predicting freeway speeds under different conditions.
- PASS2S [5]: The PASS2S model embeds an attention mechanism within a sequence-to-sequence LSTM framework, targeting specifically the challenge of long-term travel time forecasting.
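As a point of reference, below is a minimal sketch of a historical-average baseline in the spirit of HA. The column names and the weekday/time-slot grouping are illustrative assumptions, not details from [28].

```python
# Minimal historical-average (HA) baseline sketch: predict each future time
# slot by the mean travel time observed at the same weekday and time slot.
import pandas as pd

def fit_ha(train: pd.DataFrame) -> pd.Series:
    """train has columns 'weekday' (0-6), 'slot' (0-287), 'travel_time'
    (assumed schema). Returns a lookup table of historical means."""
    return train.groupby(["weekday", "slot"])["travel_time"].mean()

def predict_ha(ha_table: pd.Series, weekday: int, slot: int) -> float:
    # Forecast is simply the stored historical average for that slot.
    return ha_table.loc[(weekday, slot)]
```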
4.3. Parameter Settings
4.4. Evaluation Metrics
- Mean Absolute Error (MAE): Measures the average magnitude of errors in a set of predictions, without considering their direction.
- Root Mean Square Error (RMSE): Measures error magnitude as the square root of the average squared difference between predicted and actual values, giving extra weight to large errors.
- Symmetric Mean Absolute Percentage Error (SMAPE): Offers a normalized percentage measure of prediction error that penalizes over-predictions and under-predictions symmetrically.
- $y_{i,j}$ is the actual travel time for the i-th sample at the j-th time point;
- $\hat{y}_{i,j}$ is the predicted travel time for the i-th sample at the j-th time point;
- N represents the total number of samples in the dataset; and
- l is the number of time points in the prediction horizon.
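For reference, the standard definitions consistent with the variable list above are as follows (the paper's exact per-sample normalization is assumed):

$$
\mathrm{MAE} = \frac{1}{Nl}\sum_{i=1}^{N}\sum_{j=1}^{l}\left| y_{i,j} - \hat{y}_{i,j} \right|,\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{Nl}\sum_{i=1}^{N}\sum_{j=1}^{l}\left( y_{i,j} - \hat{y}_{i,j} \right)^{2}},
$$

$$
\mathrm{SMAPE} = \frac{100\%}{Nl}\sum_{i=1}^{N}\sum_{j=1}^{l}\frac{\left| y_{i,j} - \hat{y}_{i,j} \right|}{\left( \left| y_{i,j} \right| + \left| \hat{y}_{i,j} \right| \right)/2}.
$$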
5. Results and Discussions
5.1. Comprehensive Comparison across All Road Segments
5.1.1. Short-Term Travel Time Prediction
5.1.2. Long-Term Travel Time Prediction
5.1.3. Statistical Analysis of Predictive Performance
5.2. Performance Comparison on Road Segments of Varying Complexities
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Algorithm for EDP
Algorithm A1 Enhanced Data Preprocessing.
Appendix B. Statistical Analysis of Predictive Performance
- Null Hypothesis ($H_0$):
- Alternative Hypothesis ($H_1$):
- Null Hypothesis ($H_0$):
- Alternative Hypothesis ($H_1$):
- Null Hypothesis ($H_0$):
- Alternative Hypothesis ($H_1$):
Competitive Methods | U Statistic | p-Value |
---|---|---|
DE-SLSTM | 0.0 | 0.00029 |
MTSMFF | 6.0 | 0.00874 |
DHM | 5.0 | 0.00554 |
TFT | 5.0 | 0.00554 |
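For reproducibility, here is a sketch of how such a one-sided Mann–Whitney U test can be run with SciPy. The per-segment error arrays below are hypothetical placeholders, not values from the paper.

```python
# Illustrative only: compare per-road-segment errors of SLIT against one
# baseline with a one-sided Mann-Whitney U test, mirroring the table above.
from scipy.stats import mannwhitneyu

slit_errors = [4.9, 5.1, 4.7, 5.0, 4.8]        # hypothetical per-segment SMAPE
baseline_errors = [5.8, 6.0, 5.6, 5.9, 5.7]    # hypothetical baseline SMAPE

# H1: SLIT's errors are stochastically smaller than the baseline's.
u_stat, p_value = mannwhitneyu(slit_errors, baseline_errors,
                               alternative="less")
print(f"U = {u_stat}, p = {p_value:.5f}")
```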
Appendix C. Experimental Results for Peak and Off-Peak Hours
Appendix C.1. Short-Term Prediction
Method | Peak MAE | Peak RMSE | Peak SMAPE (%) | Off-Peak MAE | Off-Peak RMSE | Off-Peak SMAPE (%) |
---|---|---|---|---|---|---|
HA [28] | 43.326 | 87.989 | 7.076 | 32.654 | 70.332 | 6.161 |
LSTM [29] | 36.152 | 81.328 | 5.541 | 21.001 | 57.223 | 3.931 |
DNN [30] | 32.605 | 68.975 | 5.342 | 21.827 | 53.581 | 4.189 |
DE-SLSTM [22] | 29.003 | 62.941 | 4.861 | 19.315 | 48.498 | 3.740 |
MTSMFF [23] | 32.49 | 70.536 | 5.837 | 23.742 | 54.207 | 4.825 |
DHM [8] | 26.035 | 64.071 | 4.449 | 17.284 | 47.132 | 3.448 |
TFT [24] | 44.852 | 89.494 | 8.029 | 28.362 | 62.695 | 5.620 |
PASS2S [5] | 28.222 | 61.011 | 4.778 | 18.738 | 46.972 | 3.668 |
SLIT | 22.667 | 53.892 | 3.846 | 16.656 | 42.846 | 3.295 |
Improvement ratio (%) | 12.938 | 11.668 | 13.555 | 3.634 | 8.783 | 4.432 |
Appendix C.2. Long-Term Prediction
Method | Peak MAE | Peak RMSE | Peak SMAPE (%) | Off-Peak MAE | Off-Peak RMSE | Off-Peak SMAPE (%) |
---|---|---|---|---|---|---|
HA [28] | 43.335 | 87.007 | 7.081 | 32.659 | 70.134 | 6.141 |
LSTM [29] | 43.440 | 89.895 | 6.958 | 29.011 | 69.080 | 5.488 |
DNN [30] | 41.065 | 84 | 6.861 | 29.219 | 66.087 | 5.681 |
DE-SLSTM [22] | 41.68 | 84.723 | 6.841 | 29.899 | 68.765 | 5.632 |
MTSMFF [23] | 41.23 | 83.999 | 7.026 | 28.193 | 63.141 | 5.509 |
DHM [8] | 43.506 | 86.560 | 7.459 | 29.609 | 64.803 | 5.808 |
TFT [24] | 46.439 | 91.406 | 7.965 | 29.023 | 62.462 | 5.705 |
PASS2S [5] | 37.596 | 78.879 | 6.211 | 27.105 | 62.947 | 5.197 |
SLIT | 37.879 | 81.329 | 6.152 | 26.248 | 63.620 | 4.992 |
Improvement ratio (%) | −0.753 | −3.106 | 0.948 | 3.163 | −1.855 | 3.935 |
References
- Qi, X.; Mei, G.; Tu, J.; Xi, N.; Piccialli, F. A Deep Learning Approach for Long-Term Traffic Flow Prediction With Multifactor Fusion Using Spatiotemporal Graph Convolutional Network. IEEE Trans. Intell. Transp. Syst. 2023, 24, 8687–8700. [Google Scholar] [CrossRef]
- Hou, Z.; Li, X. Repeatability and Similarity of Freeway Traffic Flow and Long-Term Prediction Under Big Data. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1786–1796. [Google Scholar] [CrossRef]
- Li, R.; Hu, Y.; Liang, Q. T2F-LSTM Method for Long-term Traffic Volume Prediction. IEEE Trans. Fuzzy Syst. 2020, 28, 3256–3264. [Google Scholar] [CrossRef]
- Xie, Y.; Niu, J.; Zhang, Y.; Ren, F. Multisize Patched Spatial-Temporal Transformer Network for Short- and Long-Term Crowd Flow Prediction. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21548–21568. [Google Scholar] [CrossRef]
- Huang, Y.; Dai, H.; Tseng, V.S. Periodic Attention-based Stacked Sequence to Sequence framework for long-term travel time prediction. Knowl.-Based Syst. 2022, 258, 109976. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, Y.; Yang, X.; Zhang, L. Short-term travel time prediction by deep learning: A comparison of different LSTM-DNN models. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–8. [Google Scholar]
- Zhao, F.; Zeng, G.Q.; Lu, K.D. EnLSTM-WPEO: Short-Term Traffic Flow Prediction by Ensemble LSTM, NNCT Weight Integration, and Population Extremal Optimization. IEEE Trans. Veh. Technol. 2020, 69, 101–113. [Google Scholar] [CrossRef]
- Ting, P.Y.; Wada, T.; Chiu, Y.L.; Sun, M.T.; Sakai, K.; Ku, W.S.; Jeng, A.A.K.; Hwu, J.S. Freeway Travel Time Prediction Using Deep Hybrid Model – Taking Sun Yat-Sen Freeway as an Example. IEEE Trans. Veh. Technol. 2020, 69, 8257–8266. [Google Scholar] [CrossRef]
- Belhadi, A.; Djenouri, Y.; Djenouri, D.; Lin, J. A recurrent neural network for urban long-term traffic flow forecasting. Appl. Intell. 2020, 50, 3252–3265. [Google Scholar] [CrossRef]
- Li, Y.; Chai, S.; Ma, Z.; Wang, G. A Hybrid Deep Learning Framework for Long-Term Traffic Flow Prediction. IEEE Access 2021, 9, 11264–11271. [Google Scholar] [CrossRef]
- Reza, S.; Ferreira, M.; Machado, J.; Tavares, J. A Multi-head Attention-based Transformer Model for Traffic Flow Forecasting with a Comparative Analysis to Recurrent Neural Networks. Expert Syst. Appl. 2022, 202, 1–11. [Google Scholar] [CrossRef]
- Jin, D.; Shi, J.; Wang, R.; Li, Y.; Huang, Y.; Yang, Y.B. Trafformer: Unify Time and Space in Traffic Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 22–25 February 2023; Volume 37, pp. 8114–8122. [Google Scholar]
- Oluwasanmi, A.; Aftab, M.U.; Qin, Z.; Sarfraz, M.S.; Yu, Y.; Rauf, H.T. Multi-head spatiotemporal attention graph convolutional network for traffic prediction. Sensors 2023, 23, 3836. [Google Scholar] [CrossRef] [PubMed]
- Mashurov, V.; Chopurian, V.; Porvatov, V.; Ivanov, A.; Semenova, N. GCT-TTE: Graph Convolutional Transformer for Travel Time Estimation. arXiv 2023, arXiv:2301.07945. [Google Scholar] [CrossRef]
- Jiang, J.; Han, C.; Zhao, W.X.; Wang, J. PDFormer: Propagation Delay-Aware Dynamic Long-Range Transformer for Traffic Flow Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 22–25 February 2023; Volume 37, pp. 4365–4373. [Google Scholar]
- Wu, L.; Wang, Y.Q.; Liu, J.B.; Shan, D.H. Developing a time-series speed prediction model using Transformer networks for freeway interchange areas. Comput. Electr. Eng. 2023, 110, 108860. [Google Scholar] [CrossRef]
- Chen, C.; Liu, Y.; Chen, L.; Zhang, C. Bidirectional Spatial-Temporal Adaptive Transformer for Urban Traffic Flow Forecasting. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 6913–6925. [Google Scholar] [CrossRef] [PubMed]
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Proc. AAAI Conf. Artif. Intell. 2021, 35, 11106–11115. [Google Scholar] [CrossRef]
- Xu, M.; Dai, W.; Liu, C.; Gao, X.; Lin, W.; Qi, G.J.; Xiong, H. Spatial-Temporal Transformer Networks for Traffic Flow Forecasting. arXiv 2020, arXiv:2001.02908. [Google Scholar]
- Du, L.; Xin, J.; Labach, A.; Zuberi, S.; Volkovs, M.; Krishnan, R.G. MultiResFormer: Transformer with Adaptive Multi-Resolution Modeling for General Time Series Forecasting. arXiv 2023, arXiv:2311.18780. [Google Scholar]
- Lee, S.; Hong, J.; Liu, L.; Choi, W. TS-Fastformer: Fast Transformer for Time-series Forecasting. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–20. [Google Scholar] [CrossRef]
- Chou, C.; Huang, Y.; Huang, C.; Tseng, V. Long-term traffic time prediction using deep learning with integration of weather effect. In Proceedings of the Advances in Knowledge Discovery and Data Mining, 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, 14–17 April 2019. [Google Scholar] [CrossRef]
- Du, S.; Li, T.; Yang, Y.; Horng, S.J. Multivariate time series forecasting via attention-based encoder–decoder framework. Neurocomputing 2020, 388, 269–279. [Google Scholar] [CrossRef]
- Zhang, H.; Zou, Y.; Yang, X.; Yang, H. A temporal fusion transformer for short-term freeway traffic speed multistep prediction. Neurocomputing 2022, 500, 329–340. [Google Scholar] [CrossRef]
- Lin, Y.; Ge, L.; Li, S.; Zeng, B. Prior Knowledge and Data-Driven Based Long- and Short-Term Fusion Network for Traffic Forecasting. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; pp. 1–8. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Freeway Bureau Taiwan R.O.C. Taiwan Expressway Dataset. Available online: https://tisvcloud.freeway.gov.tw (accessed on 10 March 2021).
- Smith, B.L.; Demetsky, M.J. Traffic Flow Forecasting: Comparison of Modeling Approaches. J. Transp. Eng. 1997, 123, 261–266. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Qu, L.; Li, W.; Li, W.; Ma, D.; Wang, Y. Daily Long-Term Traffic Flow Forecasting Based on a Deep Neural Network. Expert Syst. Appl. 2019, 121, 304–312. [Google Scholar] [CrossRef]
- Ruland, F. The Wilcoxon-Mann-Whitney Test—An Introduction to Nonparametrics; Independently Published: USA, 2018; p. 77. [Google Scholar]
- Everitt, B.; Skrondal, A. The Cambridge Dictionary of Statistics; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
Symbol | Description |
---|---|
 | Collection of long short-term segments in . |
 | Short-term segment in . |
 | Number of data points per periodic segment, 12 time slots per hour. |
,  | Starting time step for and , respectively. |
S | Combined periodic segments in , with . |
D | Number of long short-term segments, set to 7 in this study. |
 | Represents , denoting the combined periodic segments S. |
interval | Duration of each period's time step, with 288 time slots per day. |
 | Position within , i indexes the feature dimensions. |
 | Size of the feature vector. |
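To illustrate the notation, the following is a sketch of extracting the D daily periodic segments under the constants listed above (288 five-minute slots per day, D = 7). The exact segment construction used by EDP is an assumption here.

```python
# Sketch of periodic-segment slicing: for a target slot t, gather the
# seg_len-slot window ending at the same slot on each of the D previous days.
import numpy as np

SLOTS_PER_DAY = 288   # one time step per 5 minutes (from the symbol table)
D = 7                 # number of long short-term segments (days)

def periodic_segments(series: np.ndarray, t: int, seg_len: int) -> np.ndarray:
    """Return a (D, seg_len) array of periodic segments; assumes
    t >= D * SLOTS_PER_DAY + seg_len so all windows are in range."""
    starts = [t - d * SLOTS_PER_DAY - seg_len for d in range(1, D + 1)]
    return np.stack([series[s:s + seg_len] for s in starts])
```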
Attributes | Dimensions | Description |
---|---|---|
Holiday | 3 | Weekday, Weekend, National holiday |
Day of the week | 7 | Monday to Sunday |
Month | 12 | January to December |
Peak | 4 | Morning-peak, Noon-peak, Night-peak, Off-peak |
Time slot | 288 | Number of time slots in each day (1 time slot = 5 min) |
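A minimal sketch of one-hot encoding these temporal attributes (3 + 7 + 12 + 4 + 288 = 314 dimensions in total) is shown below; the concrete encoding scheme used by EDP is assumed, not taken from the paper.

```python
# Sketch: concatenate one-hot vectors for the temporal attributes above.
import numpy as np

DIMS = {"holiday": 3, "day_of_week": 7, "month": 12, "peak": 4,
        "time_slot": 288}

def encode_temporal(holiday: int, day_of_week: int, month: int,
                    peak: int, time_slot: int) -> np.ndarray:
    parts = []
    for name, idx in zip(DIMS, (holiday, day_of_week, month, peak, time_slot)):
        one_hot = np.zeros(DIMS[name])   # one slot per category
        one_hot[idx] = 1.0
        parts.append(one_hot)
    return np.concatenate(parts)         # shape: (314,)
```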
Area | ID | Highway | Name |
---|---|---|---|
North | nfb0019 | No. 1 | Neihu Interchange to Yuanshan Interchange |
North | nfb0033 | No. 1 | Linkou Interchange to Taoyuan Interchange |
North | nfb0370 | No. 5 | Toucheng Interchange to Pinglin Traffic Control Interchange |
North | nfb0425 | No. 1 | Taishan Connector Interchange to Linkou Interchange |
North | nfb0431 | No. 1 | Elevated Yangmei End to Hukou Interchange |
Central | nfb0061 | No. 1 | Hsinchu System Interchange to Toufen Interchange |
Central | nfb0063 | No. 1 | Toufen Interchange to Touwu Interchange |
Central | nfb0064 | No. 1 | Touwu Interchange to Toufen Interchange |
Central | nfb0247 | No. 3 | Tongxiao Interchange to Yuanli Interchange |
Central | nfb0248 | No. 3 | Yuanli Interchange to Tongxiao Interchange |
South | nfb0117 | No. 1 | Chiayi System Interchange to Xinying Service Area |
South | nfb0123 | No. 1 | Xinying Interchange to Xiaying System Interchange |
South | nfb0124 | No. 1 | Xiaying System Interchange to Xinying Interchange |
South | nfb0327 | No. 3 | Tianliao Interchange to Yanchao System Interchange |
South | nfb0328 | No. 3 | Yanchao System Interchange to Tianliao Interchange |
Method | Short-Term MAE | Short-Term RMSE | Short-Term SMAPE (%) | Long-Term MAE | Long-Term RMSE | Long-Term SMAPE (%) |
---|---|---|---|---|---|---|
HA [28] | 35.036 | 75.516 | 6.408 | 34.516 | 74.000 | 6.304 |
LSTM [29] | 23.626 | 62.972 | 4.210 | 31.520 | 74.062 | 5.743 |
DNN [30] | 23.695 | 57.216 | 4.389 | 31.279 | 70.295 | 5.886 |
DE-SLSTM [22] | 20.994 | 51.870 | 3.934 | 31.948 | 72.549 | 5.842 |
MTSMFF [23] | 26.048 | 59.649 | 5.092 | 31.639 | 69.964 | 5.91 |
DHM [8] | 19.591 | 52.872 | 3.712 | 33.281 | 71.896 | 6.245 |
TFT [24] | 31.964 | 70.441 | 6.118 | 32.316 | 70.084 | 6.089 |
PASS2S [5] | 20.381 | 50.250 | 3.860 | 28.929 | 66.695 | 5.373 |
SLIT | 17.697 | 45.628 | 3.391 | 28.270 | 67.794 | 5.194 |
Improvement ratio (%) | 9.67 | 9.20 | 8.66 | 2.28 | −1.65 | 3.33 |
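The improvement ratio appears to be computed against the strongest baseline for each metric (a reading consistent with the numbers, not restated here). For example, for short-term MAE the best baseline is DHM at 19.591, so

$$
\frac{19.591 - 17.697}{19.591} \times 100\% \approx 9.67\%.
$$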
Metric | Method | 1 Day | 2 Days | 3 Days | 4 Days | 5 Days | 6 Days | 7 Days |
---|---|---|---|---|---|---|---|---|
MAE | DE-SLSTM [22] | 30.154 | 32.390 | 32.138 | 31.442 | 32.297 | 33.285 | 31.926 |
 | MTSMFF [23] | 32.093 | 32.389 | 31.667 | 31.070 | 31.586 | 31.145 | 31.520 |
 | DHM [8] | 33.056 | 33.521 | 33.663 | 33.971 | 33.354 | 33.770 | 31.635 |
 | TFT [24] | 31.781 | 32.949 | 31.622 | 31.816 | 32.651 | 32.966 | 32.429 |
 | PASS2S [5] | 28.295 | 29.178 | 29.035 | 28.684 | 28.914 | 29.368 | 29.028 |
 | SLIT | 27.745 | 27.758 | 27.670 | 28.383 | 28.354 | 29.074 | 28.906 |
RMSE | DE-SLSTM [22] | 71.720 | 75.150 | 74.403 | 70.567 | 70.850 | 74.206 | 70.945 |
 | MTSMFF [23] | 71.692 | 71.533 | 69.482 | 69.014 | 70.718 | 69.537 | 67.771 |
 | DHM [8] | 74.553 | 72.640 | 71.874 | 72.127 | 71.246 | 72.920 | 67.911 |
 | TFT [24] | 70.021 | 72.044 | 69.479 | 69.227 | 70.322 | 71.335 | 68.161 |
 | PASS2S [5] | 67.431 | 67.937 | 66.930 | 65.891 | 67.171 | 67.267 | 64.238 |
 | SLIT | 68.808 | 68.299 | 65.951 | 68.345 | 68.044 | 69.401 | 65.711 |
SMAPE (%) | DE-SLSTM [22] | 5.492 | 5.810 | 5.856 | 5.814 | 5.997 | 6.058 | 5.866 |
 | MTSMFF [23] | 5.955 | 6.020 | 5.920 | 5.866 | 5.854 | 5.842 | 5.910 |
 | DHM [8] | 6.154 | 6.311 | 6.322 | 6.422 | 6.256 | 6.297 | 5.950 |
 | TFT [24] | 5.984 | 6.162 | 5.978 | 6.017 | 6.173 | 6.193 | 6.118 |
 | PASS2S [5] | 5.238 | 5.367 | 5.423 | 5.341 | 5.359 | 5.451 | 5.433 |
 | SLIT | 5.090 | 5.070 | 5.134 | 5.201 | 5.246 | 5.296 | 5.321 |
Method | High MAE | High RMSE | High SMAPE (%) | Moderate MAE | Moderate RMSE | Moderate SMAPE (%) | Low MAE | Low RMSE | Low SMAPE (%) |
---|---|---|---|---|---|---|---|---|---|
HA [28] | 68.907 | 141.653 | 10.191 | 25.554 | 68.178 | 6.935 | 13.130 | 25.293 | 2.903 |
LSTM [29] | 46.336 | 117.844 | 6.619 | 17.772 | 58.566 | 4.637 | 8.604 | 20.183 | 1.918 |
DNN [30] | 45.047 | 100.781 | 6.739 | 20.179 | 60.147 | 5.305 | 8.245 | 18.959 | 1.819 |
DE-SLSTM [22] | 38.638 | 87.794 | 5.849 | 18.308 | 57.005 | 4.776 | 8.081 | 18.510 | 1.777 |
MTSMFF [23] | 44.152 | 97.656 | 6.939 | 22.695 | 62.921 | 6.051 | 13.198 | 25.795 | 2.912 |
DHM [8] | 34.706 | 88.262 | 5.254 | 17.805 | 57.428 | 4.608 | 8.185 | 20.343 | 1.830 |
TFT [24] | 57.547 | 124.223 | 8.868 | 25.550 | 68.055 | 6.912 | 14.919 | 27.214 | 3.297 |
PASS2S [5] | 37.125 | 82.990 | 5.697 | 17.847 | 55.799 | 4.648 | 8.118 | 19.268 | 1.806 |
SLIT | 31.454 | 74.005 | 4.852 | 15.928 | 52.754 | 4.149 | 7.412 | 17.229 | 1.667 |
Improvement ratio (%) | 9.37 | 10.83 | 7.65 | 10.38 | 5.46 | 9.96 | 8.28 | 6.92 | 6.19 |
Method | High MAE | High RMSE | High SMAPE (%) | Moderate MAE | Moderate RMSE | Moderate SMAPE (%) | Low MAE | Low RMSE | Low SMAPE (%) |
---|---|---|---|---|---|---|---|---|---|
HA [28] | 67.888 | 140.256 | 10.006 | 24.913 | 63.929 | 6.788 | 13.107 | 25.501 | 2.896 |
LSTM [29] | 60.085 | 140.170 | 8.734 | 23.520 | 63.769 | 6.311 | 13.049 | 25.833 | 2.873 |
DNN [30] | 59.056 | 129.579 | 8.976 | 25.511 | 63.939 | 6.899 | 11.977 | 25.129 | 2.637 |
DE-SLSTM [22] | 61.200 | 135.985 | 8.970 | 23.805 | 63.383 | 6.395 | 12.999 | 25.796 | 2.867 |
MTSMFF [23] | 58.257 | 126.716 | 8.675 | 24.828 | 64.113 | 6.706 | 13.997 | 26.571 | 3.074 |
DHM [8] | 61.833 | 130.079 | 9.316 | 25.607 | 64.545 | 6.985 | 14.604 | 28.311 | 3.191 |
TFT [24] | 59.128 | 126.357 | 8.880 | 25.252 | 63.877 | 6.858 | 14.683 | 27.328 | 3.250 |
PASS2S [5] | 54.747 | 121.140 | 8.217 | 22.404 | 61.597 | 5.996 | 11.764 | 24.723 | 2.587 |
SLIT | 53.264 | 124.927 | 7.809 | 22.132 | 61.202 | 5.895 | 11.534 | 24.578 | 2.547 |
Improvement ratio (%) | 2.71 | −3.13 | 4.97 | 1.22 | 0.64 | 1.70 | 1.95 | 0.59 | 1.55 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Lin, H.-T.C.; Dai, H.; Tseng, V.S. Short-Term and Long-Term Travel Time Prediction Using Transformer-Based Techniques. Appl. Sci. 2024, 14, 4913. https://doi.org/10.3390/app14114913