Proceeding Paper

Financial Time Series Models—Comprehensive Review of Deep Learning Approaches and Practical Recommendations †

by
Mateusz Buczyński
1,2,*,
Marcin Chlebus
1,
Katarzyna Kopczewska
1 and
Marcin Zajenkowski
3
1
Faculty of Economic Sciences, University of Warsaw, 00-241 Warsaw, Poland
2
Interdisciplinary Doctoral School, University of Warsaw, 00-312 Warsaw, Poland
3
Faculty of Psychology, University of Warsaw, 00-183 Warsaw, Poland
*
Author to whom correspondence should be addressed.
Presented at the 9th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 12–14 July 2023.
Eng. Proc. 2023, 39(1), 79; https://doi.org/10.3390/engproc2023039079
Published: 12 July 2023
(This article belongs to the Proceedings of The 9th International Conference on Time Series and Forecasting)

Abstract
There have been numerous advances in financial time series forecasting in recent years, most of them employing deep learning techniques. We identified 15 outstanding papers published in the last seven years, each of which tries to demonstrate the superiority of its approach to forecasting one-dimensional financial time series with deep learning. To compare these approaches objectively, we analysed the proposed statistical models and then reviewed and reproduced them. The models were trained to predict, one day ahead, the values of 29 financial time series (indices, stocks, and commodity prices) over five different time periods between 2007 and 2022, with four in-sample years and one out-of-sample year each. Our findings indicate, first of all, that most of these approaches do not beat the naive approach, and only some barely beat it. Moreover, most of the researchers did not provide enough detail to fully replicate the approach, let alone the code. Based on the resulting data sample, we provide a set of practical recommendations on when to use which models.

1. Introduction

Many researchers have been struggling for decades to understand how markets behave [1,2,3]. Some argue that markets are unpredictable, invoking the Efficient Market Hypothesis (EMH), which states that, in the short term, financial time series follow a random walk. In contrast, many behavioural economists disagree with such a statement, believing that investors do not always behave rationally [4,5]. They suggest that the market "can be beaten" because cognitive biases, such as overconfidence, herd behaviour, and risk aversion, exist. One thing is certain and empirically confirmed: some investors do win on the market, mostly because they stay ahead of their "brothers in arms".
Generally, there are two main approaches used to predict the financial markets: technical and fundamental analysis. Technical analysis builds predictions on the past movements or changes of the stock market [6,7]. Fundamental analysis, on the other hand, considers information about the economic status of the company underlying the asset, news, social media, financial reports, etc. Lately, the greatest emphasis has been placed on employing machine or deep learning methods to combine these tasks, owing to their ability to find and quantify nonlinear relationships [8,9,10,11,12], but researchers are still struggling to provide an objective way to compare the results. There has been massive progress in artificial intelligence approaches implemented in the financial area, mainly in portfolio optimisation, time series prediction, and agent-based modelling. Many scientists also agree that successful prediction depends not only on data related to the predicted object, but also on finding additional data sources [13,14,15]. Systematic reviews [16,17] show that there have been more than 125 new approaches to time series prediction in the past few years. Many researchers claim to provide ever-better-performing models; however, no consensus exists on the best approach yet. The availability of numerous model options without a clear indication of their costs can result in a dilemma known as "choice overload" [18], which can lead to a situation where one ends up making no choice at all.
When it comes to financial time series, there are also many different areas that deep learning can cover. We see different feature sets being used: either univariate modelling or data enriched with additional supporting sources. There is also emerging work using text mining, sentiment analysis, or social media analysis in the feature sets. The target variable can also change: stock prices, indices, commodities, or cryptocurrencies; some researchers also look at volatility and trend forecasting. As far as models are concerned, the horizon is even broader: from simple neural networks, through long short-term memory (LSTM) architectures, to sophisticated state-of-the-art approaches such as graph networks or generative adversarial networks. Finally, the prediction horizon is also a point of contention: are predictions more than one time step ahead of acceptable quality, and should the problem be framed as regression or classification?
This abundance of different possibilities and options generates a large grid of approaches that cannot be compared with each other solely on the basis of the article provided. There are three main problems when it comes to comparing approaches to predicting financial time series in a deep learning setting:
  • Different data, timespans, and metrics used in every experiment;
  • Lack of publicly available codes supporting the experiment’s execution;
  • Lack of a detailed architecture and hyperparameters that are necessary for the experiment’s reproduction.
The first problem stems from the lack of a standard set of stocks, indices, or other data samples on which to objectively test the effectiveness of the models. The usual duo is the S&P 500 and the SSE Composite, which, according to [17], account for 80% of the papers they reviewed. However, a large subset of researchers use single stock quotes or commodity prices. When models are supplemented with additional data, e.g., enriched with text-mining techniques, such datasets are rarely publicly available (only 10% of the reviewed papers in [17]). In time series problems, not only the data matter, but also the time sample: models trained on 2019 data will give different results than models trained on 2020 data, and so will models trained with a 1-year rather than a 5-year training sample or lookback horizon. In terms of metrics, there is some minor consensus: for regression, the common metric is the mean absolute error; for classification, it is the accuracy. However, this consensus does not mean that every researcher reports a common grid of "must have" metrics; most select only a few of the most common ones.
The second problem stems from the reluctance of researchers to publish the code they used to train the models. Only three of the papers reviewed in this study came with publicly available code, e.g., on GitHub. The lack of code reduces the usefulness of the work, as the cost of selection is increased by the time it takes to implement. Another disadvantage is the potential discrepancy between the re-implemented solution and the one presented in the original work (e.g., different versions of the base packages).
The final problem is the poor description of the approach and of the hyperparameters used to train the model. Typically, in time series deep learning, we can expect the following hyperparameters (depending on the approach implemented): number of training epochs, learning rate, optimiser type, batch size, and number of backward steps (number of time series lags); the sketch below summarises them as a configuration record. However, in the works mentioned by [16,17], there are huge gaps in the description of the training approach. Most of the approaches lack a concrete specification of the architecture used (number of neurons, number of layers, activation functions, etc.) or omit the training parameters altogether. Researchers usually stop at stating that the architecture used is an LSTM or a NN. Additionally, only one paper mentions the random seed value, which is also necessary to fully reproduce the model weights.
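For reference, a hypothetical configuration record covering these hyperparameters might look as follows; the field names and defaults are ours, introduced purely for illustration, and mirror the columns of Table A1 rather than any single reviewed paper.

```python
# Hypothetical configuration record for a day-ahead deep learning forecaster.
# Field names are illustrative, not taken from any reviewed paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainConfig:
    architecture: str                       # e.g., "LSTM", "CNN+BiLSTM"
    epochs: Optional[int] = None            # number of training epochs
    learning_rate: Optional[float] = None
    optimiser: Optional[str] = None         # e.g., "Adam", "SGD", "RMSProp"
    batch_size: Optional[int] = None
    steps_back: Optional[int] = None        # number of time series lags
    seed: Optional[int] = None              # random seed, rarely reported
```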
To overcome these problems, we carried out an extensive practical reproduction of fifteen papers listed in [16,17]. We rebuilt each approach as reported in the original article, taking into account the hyperparameters that were reported. Where important parameters were missing (such as the number of neurons, the optimiser, the learning rate, or the number of training epochs), we filled them in with the average of the values reported in the remaining articles. We compared these models with simple statistical approaches: naive forecasting, ARIMA, and exponential smoothing. The result was 18 models evaluated over five different time periods, i.e., 90 model runs for each of 29 different financial time series, including indices, equities, and commodities. We compared the models using the mean absolute percentage error (MAPE), mean-squared error (MSE), and mean absolute error (MAE), as well as the error computed at the first time step only. We propose a data, time, and model framework to follow when tackling time series problems with deep learning.

2. Methods

We reviewed 15 different deep learning models that have been mentioned in recent literature reviews on financial time series prediction. We focused on selecting the broadest possible sample of different deep learning models. In this section, we describe them briefly. To limit the size of the article, we refer the reader to the original papers for more details on the individual models:
  • A hybrid attention-based EMD-LSTM model [19]:
    The paper proposes a two-stage model for time series prediction, combining empirical mode decomposition (EMD) and attention-based long short-term memory (LSTM-ATTE). EMD is used to decompose the time series into a few intrinsic mode functions (IMFs), which are then fed into LSTM-ATTE for prediction. The attention mechanism extracts the input features of the IMFs and improves the prediction accuracy. The authors ran the predictions on the SSE Composite index, evaluated the model's predictive quality using linear regression analysis of the stock market index, and showed better prediction accuracy than competing models.
  • Empirical mode decomposition factorisation neural network (EMD2FNN) model [20]:
    A simpler approach, proposed by [20], feeds the IMFs of a time series, concatenated into a single vector, into a factorisation neural network. The data used for the experiment were the SSE Composite, NASDAQ, and S&P 500. The authors performed a thorough comparison between the proposed method and other neural network models using the mean absolute error (MAE) and root-mean-squared error (RMSE).
  • Neural network ensemble [21]:
    The paper describes a deep neural network ensemble that aims to predict the SSE Composite and SZSE (Shenzhen) Component. The model consists of a set of neural networks that were trained using open, high, low, close (OHLC) data. Every neural network takes the last few days of such data, flattened to a vector form. Later, bagging is used to combine these networks and reduce the generalisation error.
  • Wavelet denoising long short-term memory model [22]:
    The proposed model in this paper is a combination of real-time wavelet denoising and the LSTM neural network. The wavelet denoising was used to separate signals from noise in the stock data and was then taken as the input to the LSTM model. The authors conducted an experiment on several indexes, including the SSE, SZSE, and NIKKEI, using the mean absolute percentage error (MAPE) as a metric.
  • Dual-stage attention-based recurrent neural network [23]:
    This paper proposes a two-stage attention-based recurrent neural network (DA-RNN) model for time series prediction. The DA-RNN model uses an input attention mechanism in the first stage to extract the relevant driving series at each time step based on the previous hidden state of the encoder. In the second stage, the temporal attention mechanism is used to select the relevant hidden encoder states at all time steps. The experiment was conducted on the SML 2010 and NASDAQ datasets and showed that the model outperformed state-of-the-art time series prediction methods. The metrics used were the MAE, MAPE, and RMSE.
  • Bidirectional LSTM [24]:
    This paper compared the performance of bidirectional LSTM (BiLSTM) and unidirectional LSTM models. BiLSTM traverses the input data twice (left to right and right to left) and, thus, has additional training capabilities. The study showed that BiLSTM-based modelling offers better predictions than regular LSTM-based models and outperformed the ARIMA and LSTM models; however, BiLSTM models reach equilibrium much more slowly than LSTM-based models. The experiment was carried out on several indices and stocks, including the Nikkei, the NASDAQ, and the daily IBM share price, and the results were compared using the RMSE.
  • Multi-scale recurrent convolutional neural network [25]:
    The proposed method is a multi-scale temporal dependent recurrent convolutional neural network (MSTD-RCNN). The method utilises convolutional units to extract features on different time scales (daily, weekly, monthly) and a recurrent neural network (RNN) to capture the temporal dependency (TD) and complementarity across different scales of financial time series. The proposed method was evaluated on three financial time series datasets from the Chinese stock market and achieved state-of-the-art performance in trend classification and simulated trading compared to other baseline models.
  • Time-weighted LSTM [26]:
    This paper proposes a novel approach to predicting stock market trends by adding a time attribute to stock market data to improve prediction accuracy. The approach involves assigning weights to the data according to their temporal proximity and using formal stock market trend definitions. The approach also uses a custom long short-term memory (LSTM) network to discover temporal relationships in the data. The results showed that the proposed approach outperformed other models and can be generalised to other stock indices, achieving 83.91% accuracy in a test with the CSI 300 index.
  • ModAugNet [27]:
    The paper proposes a data augmentation approach for stock market index forecasting through the ModAugNet framework, which consists of a fitting-prevention LSTM module and a prediction LSTM module. The prediction module is a simple LSTM network that is fit based only on the historical data on the index realised prices. The prevention module builds on that by adding a set of regressors that are other indexes, highly correlated with the predicted one. Using the MSE, MAE, and MAPE on the S&P500 and KOSPI200, the authors proved the validity of their solution.
  • State frequency memory (SFM) [28]:
    The state frequency memory (SFM) model is a close relative of the LSTM model. The SFM model was inspired by the discrete Fourier transform (DFT) and was designed to capture multi-frequency trading patterns from past market data to make long- and short-term predictions over time. The model decomposes the latent states of the memory cells into multiple frequency components, each modelling a specific frequency of the latent trading pattern underlying stock price fluctuations. The model then predicts future share prices by combining these frequency components. The authors tested their solution on 50 different stocks in 10 industries using the MSE.
  • Convolutional neural-network-enhanced support vector machine [29]:
    The model proposed in this paper is a convolutional neural network (CNN) that discovers features in the data, which are then passed to a support vector machine (SVM). The paper also discusses the influence of the model parameters on the prediction results. The model was evaluated empirically on the Hong Kong Hang Seng Index using the RMSE, and the results showed that the approach is feasible and effective.
  • Generative adversarial network [30]:
    The generative adversarial network (GAN) in this paper consists of two main components: a generator and a discriminator. The generator, built as an LSTM, learns the distribution of the stock data and generates new data with the same distribution as the actual stock market data; the discriminator, a simple feed-forward (MLP) network, learns to distinguish the actual stock data from the generated data. The model was trained on daily data covering a wide range of trading days, and the authors tested it on several time series, including the S&P 500 and stocks such as IBM and MSFT.
  • Long short-term memory and gated recurrent unit models [31]:
    The paper proposes a hybrid model that combines long short-term memory (LSTM) and gated recurrent unit (GRU) networks. The authors used S&P 500 historical time series data and evaluated the model using metrics such as the MSE and MAPE.
  • CNN and bi-directional LSTM model [32]:
    The paper proposes a model combining multiple pipelines of convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) units. The model improved the prediction performance by 9% compared with a single-pipeline deep learning model and by more than six times compared with a support vector machine regressor on the S&P 500. The paper also illustrates the improvement in prediction accuracy while minimising overfitting by presenting several variants of multi- and single-pipeline deep learning models with different CNN kernel sizes and numbers of bidirectional LSTM units.
  • Time convolution (TC) LSTM model [33]:
    The authors of this paper propose time convolution long short-term memory (TC-LSTM), employing convolutional neural networks (CNNs) to capture long-term fluctuation features in the stock prices and combining this with LSTM. This combination allows the model to capture both the long-term dependencies of stock prices, as well as the overall change pattern. The authors compared the performance of their TC-LSTM model to three baseline models on 50 stocks from the SSE 50, as well as the index itself. They showed that their model outperformed the others in terms of the mean-squared error.
The proposed architectures were built from scratch in PyTorch [34], based on the explanations provided in the articles themselves. In addition to these models, to provide a more thorough comparison, we also utilised the ARIMA model (tuned; best parameters on the training sample), the naive approach (prediction $\hat{Y}_t = Y_{t-1}$), and exponential smoothing. A minimal sketch of these baselines follows.
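The sketch below illustrates the naive and ARIMA baselines, assuming `train` and `test` are pandas Series of daily closing prices and the statsmodels library is available; the ARIMA order shown is a placeholder for the parameters tuned on the training sample. Exponential smoothing can be set up analogously with statsmodels' ExponentialSmoothing.

```python
# A minimal sketch of the statistical baselines; variable names and the
# ARIMA order are illustrative, not the exact code used in the experiment.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def naive_forecast(train: pd.Series, test: pd.Series) -> pd.Series:
    # Naive approach: yesterday's observed price is today's prediction.
    history = pd.concat([train.iloc[[-1]], test])
    return history.shift(1).iloc[1:]

def arima_day_ahead(train: pd.Series, test: pd.Series, order=(1, 1, 1)) -> pd.Series:
    # Fit once on the training years, then produce one-step-ahead predictions
    # over the test year without re-fitting, conditioning on realised values.
    fitted = ARIMA(train.to_numpy(), order=order).fit()
    extended = fitted.append(test.to_numpy(), refit=False)
    return pd.Series(
        extended.predict(start=len(train), end=len(train) + len(test) - 1),
        index=test.index,
    )
```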
The hyperparameters derived from the papers' text or from publicly available code are presented in Table A1. All missing parameters were filled in with either the mean or the mode of the parameters reported by the other papers; an illustrative training skeleton using such defaults is sketched below.
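As an illustration, here is a minimal PyTorch day-ahead skeleton of the kind used for the re-implementations; the hidden size, epoch count, and learning rate are placeholder defaults standing in for the mean/mode fill, not the values of any single reviewed paper.

```python
# An illustrative PyTorch skeleton for day-ahead forecasting; all
# hyperparameter defaults below are placeholders.
import torch
import torch.nn as nn

class DayAheadLSTM(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, steps_back, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # next-day prediction

def train_model(model, loader, epochs: int = 100, lr: float = 1e-3):
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            optimiser.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimiser.step()
    return model
```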

3. Data and Methodology

To provide a more comprehensive evaluation than the original papers, we broadened the range of time series on which each model is tested. The data used in this study cover the following financial types: indexes, currency pairs, stocks, cryptocurrencies, and commodities. The purpose of including such a wide range was to enable a comprehensive comparison of the models' performance.
The following time series were included in this study for analysis:
  • Indexes: WIG20 (PL), S&P 500 (US), NASDAQ (US), Dow Jones Industrial (US), FTSE 250 (UK), Nikkei 225 (JP), DJI (USA), KOSPI 50 (KR), SSE Composite (CN), DAX 40 (DE), CAC40 (FR);
  • Currency pairs: EURPLN, PLNGBP, USDPLN, EURUSD, EURGBP, USDGBP, CHFGBP, CHFUSD, EURCHF, PLNCHF;
  • Stocks: AAPL, META, AMZN, TSLA, GOOG, NFLX;
  • Cryptocurrencies: BTCUSD;
  • Commodities: XAUUSD.
For each of these time series, we identified five periods in which we made predictions:
  • 2016–2020;
  • 2013–2017;
  • 2007–2011;
  • 2009–2013;
  • 2018–2022.
Each period consisted of 4 years of training data and 1 year of day-ahead predictions (ca. 250 testing time steps) without re-training the model. The periods differ significantly from one another in the level of variability between the training and test samples.
The data were preprocessed by normalising the input features with the MinMaxScaler transformer from the Scikit-learn library [35]. The normalised data were then split into training and test sets at a ratio of 4 years to 1 year, respectively. Each model was trained only on the training sample, and predictions were made for every time step in the testing sample, as sketched below.
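The following is a minimal sketch of this preprocessing; fitting the scaler on the training window only (to avoid look-ahead leakage) is our assumption, as the original papers rarely state which convention they used, and the function names are illustrative.

```python
# Sketch of the 4-year/1-year split, scaling, and windowing.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def split_and_scale(prices: np.ndarray, train_days: int):
    train, test = prices[:train_days], prices[train_days:]
    scaler = MinMaxScaler()
    train_s = scaler.fit_transform(train.reshape(-1, 1))  # fit on train only
    test_s = scaler.transform(test.reshape(-1, 1))
    return train_s, test_s, scaler

def make_windows(x: np.ndarray, steps_back: int):
    # Turn a scaled (n, 1) array into (samples, steps_back) inputs
    # and next-day targets.
    X = np.stack([x[i:i + steps_back, 0] for i in range(len(x) - steps_back)])
    y = x[steps_back:, 0]
    return X, y
```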
The evaluation metrics were used to compare the performance of each model across the different financial types and time series. The best-performing model was selected based on the lowest values of:
  • $\mathrm{MSE} = \frac{1}{n} \sum_{t=1}^{n} \left( Y_t - \hat{Y}_t \right)^2$;
  • $\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left( Y_t - \hat{Y}_t \right)^2}$;
  • $\mathrm{MAPE} = \frac{1}{n} \sum_{t=1}^{n} \left| \frac{Y_t - \hat{Y}_t}{Y_t} \right| \times 100\%$,
    where $Y_t$ is the actual value at time $t$, $\hat{Y}_t$ is the predicted value at time $t$, and $n$ is the total number of time periods.
Finally, the results were analysed to identify patterns and insights that could help improve the accuracy of predictions in future studies. We also report the MAPE at the first time step of testing (i.e., calculated for $t = 1$), which measures the quality of the freshly trained model acting on a complete, up-to-date information set, i.e., the quality of the model in the short term. Direct implementations of these metrics are sketched below.
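For concreteness, the following sketch implements the metrics defined above, assuming `y` and `y_hat` are NumPy arrays of actual and predicted values over the test year.

```python
# Direct implementations of the evaluation metrics defined above.
import numpy as np

def mse(y: np.ndarray, y_hat: np.ndarray) -> float:
    return float(np.mean((y - y_hat) ** 2))

def rmse(y: np.ndarray, y_hat: np.ndarray) -> float:
    return float(np.sqrt(mse(y, y_hat)))

def mape(y: np.ndarray, y_hat: np.ndarray) -> float:
    return float(np.mean(np.abs((y - y_hat) / y))) * 100.0

def first_step_mape(y: np.ndarray, y_hat: np.ndarray) -> float:
    # MAPE at t = 1 only: quality of the model right after training,
    # before its information set becomes stale.
    return float(np.abs((y[0] - y_hat[0]) / y[0])) * 100.0
```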

4. Results

The results are presented in Table 1. To be concise, we report only the MAPE in this paper (for the MAE, MSE, or detailed predictions, please contact the authors directly). Bold values mark the best (lowest) MAPE for each financial time series.
We can see that the best model overall was the naive approach. This is mainly because the quality of models deteriorates when they are not retrained after a certain period of time; however, we wanted to keep the reproduction of the models as close to the originals as possible. Moreover, the researchers in their original work also did not retrain the models within the test sample (or at least did not mention doing so), nor did they compare against statistical approaches.
When we leave out the statistical approaches, a few approaches stand out (5, 6, 7, 9, 12, and 13). These models generally achieve MAPEs lower than five percent. What they have in common is either a simple LSTM/RNN architecture or sophisticated CNN operations that enlarge the feature space. The best model, with an average MAPE of 1.79%, was the multi-scale recurrent convolutional neural network (Model 7). We believe that the gain in prediction quality came from the additional operations performed on the data (multi-scale CNN), which improved the information processing. The second-best model was ModAugNet (Model 9), with a MAPE of 2.07%; this model relies heavily on the additional data sources provided during training for a given time series. On the other hand, the worst model was Model 11, which applies an SVM to features extracted by a CNN (52.24% MAPE).
From another perspective, the worst-predicted financial time series was BTCUSD, followed by TSLA, AAPL, and NFLX. All of these achieved high returns over the periods studied, so we attribute the deterioration to the models' inability to correctly identify and predict rapid price increases. We also observed that indices have roughly similar MAPEs (5–6%), as do currency pairs (1–2%). Stocks, on the other hand, have the highest MAPEs of all financial time series (>10%).
Since the models clearly performed worse due to the lack of hyperparameter tuning, we propose running such a tuning procedure for each model training. Such an experiment will be computationally expensive (our experiment already comprised 2610 model trainings), so some restrictions should be introduced. Furthermore, the testing procedure was detached from how these models are used in practice: time series models should be retrained daily to provide the best possible fit given the information available at prediction time. Without retraining, the model's information set becomes increasingly outdated with each successive prediction. A minimal sketch of such a procedure follows.
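Below is a minimal, model-agnostic sketch of the daily re-training (walk-forward) procedure we recommend; `fit` and `predict_next` are hypothetical placeholders for any of the reviewed models, not functions from the original papers.

```python
# Hypothetical walk-forward loop: refit the model each day on all data seen
# so far, then predict one step ahead.
from typing import Callable, List, Sequence

def walk_forward(series: Sequence[float],
                 train_days: int,
                 fit: Callable,
                 predict_next: Callable) -> List[float]:
    predictions = []
    for t in range(train_days, len(series)):
        model = fit(series[:t])                  # information set up to day t
        predictions.append(predict_next(model))  # day-ahead forecast for day t
    return predictions
```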
As evidence for this statement, Table 2 reports the MAPE calculated at the first testing time step. This metric confirms that, in the short term, these models are accurate and mostly better than the naive approach. Only for five series was a statistical approach (exponential smoothing) the best; Model 7 remained the best in 7 of the remaining 24 cases, and several other models (e.g., Model 1 and Model 13) came first elsewhere, which supports the conclusion that model performance deteriorates over time.

5. Conclusions

The experiment presented in this paper aimed to compare the predictive performance of various deep learning models on different financial time series. The experiment was conducted using daily price data for 29 financial time series, including stocks, indexes, and currency pairs, spanning the period 2007–2022.
The models used in the experiment included classical statistical approaches such as exponential smoothing and ARIMA, as well as deep learning models such as NN, LSTM, CNN, or GAN. The models were trained using a sliding window approach and evaluated using the mean absolute percentage error, mean squared error, and mean absolute error, as well as the mean absolute percentage error at the first time step.
The results of the experiment showed that the best model was the naive approach, but when disregarding the statistical approaches, several deep learning models showed promising results. In particular, the multi-scale recurrent convolutional neural network (Model 7) achieved the best MAPE of 1.79% on average, while ModAugNet (Model 9) achieved a MAPE of 2.07%. The worst-performing model was Model 11, which utilised SVM after data preprocessing with the CNN.
Based on the results presented in this study, it can be concluded that simple time series models, even the naive approach, can perform relatively well against more complex deep learning models in forecasting financial time series, notably in the long run. However, deep learning models, in particular those using LSTM/RNN architectures or sophisticated CNN feature extraction, have the potential to outperform statistical models in the short term, provided they are regularly retrained and properly tuned.
It has also been observed that the quality of models tends to deteriorate if they are not retrained after a certain period of time. This highlights the importance of regular retraining of time series models to ensure the best-possible fit based on all the information available at the time of forecasting. In addition, it was noted that stocks tend to have a higher MAPE than indices or currency pairs, which may be due to their higher volatility and the need for more sophisticated modelling techniques.

Author Contributions

Conceptualization: M.B. and M.C.; methodology and software: M.B.; writing—original draft preparation: M.B.; validation: M.C.; formal analysis: all authors; writing—review and editing: M.C., K.K. and M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Program of Integrated Activities for the Development of the University of Warsaw (ZIP Program), co-financed by the European Social Fund under the Knowledge Education Development Operational Program 2014-2020, Path 3.5.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analysed in this study. These data can be found here: https://stooq.com/ (accessed on 12 June 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARIMA    Autoregressive integrated moving average
BiLSTM   Bidirectional LSTM
CNN      Convolutional neural network
DFT      Discrete Fourier transform
EMD      Empirical mode decomposition
EMH      Efficient market hypothesis
GAN      Generative adversarial network
GRU      Gated recurrent unit
IMF      Intrinsic mode function
LSTM     Long short-term memory
MAE      Mean absolute error
MAPE     Mean absolute percentage error
MSE      Mean-squared error
NN       Neural network
OHLC     Open, high, low, close
RNN      Recurrent neural network
SFM      State frequency memory

Appendix A. Hyperparameters Used for Training

Table A1. List of hyperparameters used for training.
Model No. | NN Architecture | Epochs | Learning Rate | Optimiser | Batch Size | Steps Back | Data Sample | Performance Metrics
1 | - | - | - | Adam | - | 20 | 3407 (Jan 2004–Jan 2018) | MAE, RMSE, MAPE, R²
2 | - | - | - | SGD | - | 3, 4, 5 | Jan 2012–Dec 2016 / Jan 2007–Dec 2011 | RMSE, MAE, MAPE
3 | randomly selected number of layers (1–6), ensembled 10 times | 200,000 | 0.0001 | Adam | - | 20 | - | relative error
4 | 2-layer LSTM, with 1/2 and 1/3 input neurons | - | - | - | - | 2, 4, 8, 16, 32, 64, 128, 256, 512 | Jan 2010–Dec 2016 (testing last year) | MAPE
5 | grid search over layer sizes (16, 32, 64, 128, 256) | - | 0.001 (decreasing) | Adam | 128 | 3, 5, 10, 15, 25 | Jul 2016–Dec 2016, minutely data | RMSE, MAE, MAPE
6 | one layer, 4 neurons | 1 or 2 | - | Adam | - | - | Jan 1985–Aug 2018 | RMSE
7 | differently (3) scaled time series -> CNN (16 filters) -> GRU (16 × 3) | 100 | 0.0005 | Adam | 32 | 30 | Jan 2016–Dec 2016 | accuracy
8 | 320-neuron LSTM ×3 | 4500 | 0.0024 | - | - | 20 | Jan 2002–Dec 2017 | accuracy
9 | LSTM1: 2 layers, 5 and 3 neurons; LSTM2: 4 and 2 neurons | 200 | 0.00005 | Adam | 32 | 20 | Jan 2000–Jul 2017 | MSE, MAE, MAPE
10 | - | 4000 | 0.01 | RMSProp | - | 3, 5, 10, 15, 20 | 2007–2014 | MSE, MAE, MAPE
11 | G: LSTM -> 7-neuron FC; D: FC NN with 3 layers (72, 100, 10 neurons) | - | - | - | - | 5 | last 20 years | MAE, MSE, MAPE
12 | 2–4 CNN layers | - | - | - | - | 30, 40, 50, 60 | 1990–2014 | MSE
13 | - | 20 | 0.001 | Adam | - | - | 1950–2016 | MAE, MSE, MAPE
14 | CNN -> MaxPooling -> LSTM -> Dense | - | - | AdaDelta | - | 50 | 2008–2018 | MSE
15 | - | - | - | - | - | 100 | 2008–2017 | MSE

References

  1. Ang, A.; Bekaert, G. Stock Return Predictability: Is It There? Rev. Financ. Stud. 2007, 20, 651–707. [Google Scholar] [CrossRef] [Green Version]
  2. Campbell, J.Y.; Hamao, Y. Predictable Stock Returns in the United States and Japan: A Study of Long-Term Capital Market Integration. J. Financ. 1992, 47, 43–69. [Google Scholar] [CrossRef]
  3. Granger, C.W.J.; Morgenstern, O. Predictability of Stock Market Prices, 1st ed.; Heath Lexington Books: Lexington, MA, USA, 1970. [Google Scholar]
  4. Bollerslev, T.; Marrone, J.; Xu, L.; Zhou, H. Stock Return Predictability and Variance Risk Premia: Statistical Inference and International Evidence. J. Financ. Quant. Anal. 2014, 49, 633–661. [Google Scholar] [CrossRef] [Green Version]
  5. Phan, D.H.B.; Sharma, S.S.; Narayan, P.K. Stock Return Forecasting: Some New Evidence. Int. Rev. Financ. Anal. 2015, 40, 38–51. [Google Scholar] [CrossRef]
  6. Campbell, J.Y.; Thompson, S.B. Predicting Excess Stock Returns Out of Sample: Can Anything Beat the Historical Average? Rev. Financ. Stud. 2008, 21, 1509–1531. [Google Scholar] [CrossRef] [Green Version]
  7. Agrawal, J.; Chourasia, V.; Mittra, A. State-of-the-Art in Stock Prediction Techniques. Int. J. Adv. Res. Electr. Electron. Instrum. Energy 2013, 2, 1360–1366. [Google Scholar]
  8. Yim, J. A Comparison of Neural Networks with Time Series Models for Forecasting Returns on a Stock Market Index. In Developments in Applied Artificial Intelligence; Lecture Notes in Computer Science; Hendtlass, T., Ali, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2002; pp. 25–35. [Google Scholar] [CrossRef]
  9. Bao, W.; Yue, J.; Rao, Y. A Deep Learning Framework for Financial Time Series Using Stacked Autoencoders and Long-Short Term Memory. PLoS ONE 2017, 12, e0180944. [Google Scholar] [CrossRef] [Green Version]
  10. Lahmiri, S.; Bekiros, S. Cryptocurrency Forecasting with Deep Learning Chaotic Neural Networks. Chaos Solitons Fractals 2019, 118, 35–40. [Google Scholar] [CrossRef]
  11. Long, W.; Lu, Z.; Cui, L. Deep Learning-Based Feature Engineering for Stock Price Movement Prediction. Knowl.-Based Syst. 2018, 164, 163–173. [Google Scholar] [CrossRef]
  12. Chong, E.; Han, C.; Park, F.C. Deep Learning Networks for Stock Market Analysis and Prediction: Methodology, Data Representations, and Case Studies. Expert Syst. Appl. 2017, 83, 187–205. [Google Scholar] [CrossRef] [Green Version]
  13. Salinas, D.; Flunkert, V.; Gasthaus, J.; Januschowski, T. DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks. Int. J. Forecast. 2020, 36, 1181–1191. [Google Scholar] [CrossRef]
  14. Oreshkin, B.N.; Carpov, D.; Chapados, N.; Bengio, Y. N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting. arXiv 2019, arXiv:1905.10437. [Google Scholar]
  15. Makridakis, S.; Spiliotis, E.; Assimakopoulos, V. The M4 Competition: 100,000 Time Series and 61 Forecasting Methods. Int. J. Forecast. 2020, 36, 54–74. [Google Scholar] [CrossRef]
  16. Sezer, O.; Gudelek, U.; Ozbayoglu, M. Financial Time Series Forecasting with Deep Learning: A Systematic Literature Review: 2005–2019. Appl. Soft Comput. 2020, 90, 106181. [Google Scholar] [CrossRef] [Green Version]
  17. Jiang, W. Applications of Deep Learning in Stock Market Prediction: Recent Progress. Expert Syst. Appl. 2021, 184, 115537. [Google Scholar] [CrossRef]
  18. Reutskaja, E.; Lindner, A.; Nagel, R.; Andersen, R.A.; Camerer, C.F. Choice Overload Reduces Neural Signatures of Choice Set Value in Dorsal Striatum and Anterior Cingulate Cortex. Nat. Hum. Behav. 2018, 2, 925–935. [Google Scholar] [CrossRef] [Green Version]
  19. Chen, L.; Chi, Y.; Guan, Y.; Fan, J. A Hybrid Attention-Based EMD-LSTM Model for Financial Time Series Prediction. In Proceedings of the 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 25–28 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 113–118. [Google Scholar] [CrossRef]
  20. Zhou, F.; Zhou, H.; Yang, Z.; Yang, L. EMD2FNN: A Strategy Combining Empirical Mode Decomposition and Factorization Machine Based Neural Network for Stock Market Trend Prediction. Expert Syst. Appl. 2018, 115, 136–151. [Google Scholar] [CrossRef]
  21. Yang, B.; Gong, Z.J.; Yang, W. Stock Market Index Prediction Using Deep Neural Network Ensemble. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 11 September 2017; pp. 3882–3887. [Google Scholar] [CrossRef]
  22. Li, Z.; Tam, V. Combining the Real-Time Wavelet Denoising and Long-Short-Term-Memory Neural Network for Predicting Stock Indexes. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar] [CrossRef]
  23. Qin, Y.; Song, D.; Chen, H.; Cheng, W.; Jiang, G.; Cottrell, G. A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction. arXiv 2017, arXiv:1704.02971. [Google Scholar]
  24. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. A Comparative Analysis of Forecasting Financial Time Series Using ARIMA, LSTM, and BiLSTM. arXiv 2019, arXiv:1911.09512. [Google Scholar]
  25. Liu, G.; Wang, X.; Li, R. Multi-Scale RCNN Model for Financial Time-Series Classification. arXiv 2019, arXiv:1911.09359. [Google Scholar]
  26. Zhao, Z.; Rao, R.; Tu, S.; Shi, J. Time-Weighted LSTM Model with Redefined Labeling for Stock Trend Prediction. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, USA, 6–8 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1210–1217. [Google Scholar] [CrossRef]
  27. Baek, Y.; Kim, H.Y. ModAugNet: A New Forecasting Framework for Stock Market Index Value with an Overfitting Prevention LSTM Module and a Prediction LSTM Module. Expert Syst. Appl. 2018, 113, 457–480. [Google Scholar] [CrossRef]
  28. Zhang, L.; Aggarwal, C.; Qi, G.J. Stock Price Prediction via Discovering Multi-Frequency Trading Patterns. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; ACM: New York, NY, USA, 2017; pp. 2141–2149. [Google Scholar] [CrossRef]
  29. Cao, J.; Wang, J. Stock Price Forecasting Model Based on Modified Convolution Neural Network and Financial Time Series Analysis. Int. J. Commun. Syst. 2019, 32, e3987. [Google Scholar] [CrossRef]
  30. Zhang, K.; Zhong, G.; Dong, J.; Wang, S.; Wang, Y. Stock Market Prediction Based on Generative Adversarial Network. Procedia Comput. Sci. 2019, 147, 400–406. [Google Scholar] [CrossRef]
  31. Hossain, M.A.; Karim, R.; Thulasiram, R.; Bruce, N.D.B.; Wang, Y. Hybrid Deep Learning Model for Stock Price Prediction. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1837–1844. [Google Scholar] [CrossRef]
  32. Eapen, J.; Bein, D.; Verma, A. Novel Deep Learning Model with CNN and Bi-Directional LSTM for Improved Stock Market Index Prediction. In Proceedings of the 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 7–9 January 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 264–270. [Google Scholar] [CrossRef]
  33. Zhan, X.; Li, Y.; Li, R.; Gu, X.; Habimana, O.; Wang, H. Stock Price Prediction Using Time Convolution Long Short-Term Memory Network. In Knowledge Science, Engineering and Management; Lecture Notes in Computer Science; Liu, W., Giunchiglia, F., Yang, B., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 461–468. [Google Scholar] [CrossRef]
  34. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32; Curran Associates, Inc.: Red Hook, NY, USA, 2019; pp. 8024–8035. [Google Scholar]
  35. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
Table 1. An average of the mean absolute percentage error for every model and every financial time series calculated over five different timespans.
Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | ARIMA | naive | ExpSmooth
AAPL10.3319.9863.4210.775.555.532.487.121.697.3256.944.474.329.748.361.71.421.57
AMZN14.5719.2164.1512.366.787.012.9412.112.826.0831.674.983.8314.814.731.881.611.62
BTCUSD19.7441.2472.0722.4817.2325.65.3824.511.319.81110.7612.4617.0536.3232.053.813.173.37
CHFGBP3.035.1618.391.730.991.020.72.260.584.486.440.670.912.821.050.490.460.46
CHFUSD1.74.699.972.91.421.070.561.30.791.237.740.730.791.961.240.510.440.45
EURCHF1.782.766.661.491.20.840.571.150.391.474.60.550.573.212.20.350.310.32
EURGBP1.282.4211.311.020.830.580.520.640.390.61.860.450.681.611.00.420.380.39
EURPLN0.852.018.391.120.650.60.470.850.391.041.740.410.491.410.650.410.340.36
EURUSD1.352.956.551.861.181.030.671.071.241.132.280.680.832.271.820.490.430.43
GOOG10.09.5344.27.984.644.422.9115.732.285.859.434.046.997.539.932.081.771.97
META23.018.6534.9814.397.066.013.1914.992.488.1415.445.517.6915.6517.612.982.411.90
NFLX18.0229.0161.3216.198.927.865.3328.513.469.933.455.555.7321.4821.072.862.262.28
PLNCHF1.865.546.312.611.922.00.782.30.562.96.931.01.143.942.730.540.510.69
PLNGBP0.612.0410.861.271.010.730.71.280.550.782.280.610.832.111.380.540.510.56
TSLA17.7633.7371.8326.8314.5623.224.0327.2914.7821.8165.7716.4713.3434.9423.613.472.982.48
USDGBP2.33.4713.731.810.890.880.711.180.470.732.00.560.971.811.040.480.450.46
USDPLN1.825.0116.222.961.641.720.771.590.72.093.460.841.263.121.510.730.610.59
XAUUSD5.768.5528.575.533.143.551.422.431.354.6713.331.741.746.743.450.940.790.90
CAC2.417.1926.62.512.181.931.523.031.02.116.191.272.084.753.491.210.990.92
DAX3.668.8934.693.772.72.711.625.211.072.146.011.642.54.934.321.20.983.63
DJC4.648.238.774.073.522.671.283.521.162.2711.452.11.926.745.921.00.861.22
DJI5.859.0138.473.092.512.211.484.651.122.4215.792.011.794.155.910.970.831.20
FTM4.559.5833.353.473.423.011.383.921.043.3816.332.052.045.573.490.980.840.96
KOSPI3.548.6531.474.72.062.831.752.851.152.285.591.412.364.284.091.030.881.11
NDX7.7711.6849.297.263.994.652.27.071.474.8434.683.043.478.897.941.251.0724.63
NKX3.7510.529.974.83.054.192.143.692.593.0610.391.552.978.444.031.181.010.58
SHC1.717.5219.432.261.761.241.153.390.861.336.30.971.593.382.210.940.770.78
SPX4.868.0540.444.913.442.951.636.661.214.6618.572.072.387.435.641.040.891.21
WIG203.328.5425.53.672.821.911.523.11.181.777.591.542.164.764.531.331.111.27
GOOG and META were calculated for two forecasting horizons; TSLA and BTCUSD were calculated for three forecasting horizons. Numbers in bold indicate the minimum for a given time series.
Table 2. An average of the mean absolute percentage error at the first predicted time step for every model and every financial time series calculated over five different timespans.
Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | ARIMA | naive | ExpSmooth
AAPL6.6416.6862.553.973.312.911.652.962.041.6910.482.991.986.375.802.162.031.90
AMZN5.2311.7063.841.861.881.891.547.672.012.037.862.241.893.561.732.152.021.81
BTCUSD15.1426.2973.848.689.6811.374.4910.383.342.919.834.657.2615.1610.134.113.293.21
CHFGBP0.883.5016.070.780.710.800.361.880.441.043.310.580.752.371.400.430.460.48
CHFUSD0.804.369.622.530.370.510.440.790.470.583.420.530.280.910.330.400.410.42
EURCHF0.572.764.440.510.540.250.280.580.120.263.480.200.181.921.610.160.140.17
EURGBP0.852.619.110.960.790.210.260.620.380.422.270.240.470.980.500.470.340.34
EURPLN0.441.616.600.500.410.310.560.640.190.341.290.330.410.710.540.310.270.25
EURUSD0.272.234.901.010.530.570.491.020.400.272.100.400.820.961.370.310.400.38
GOOG3.117.1947.741.070.561.831.5519.931.242.769.021.576.621.652.131.201.251.38
META2.0312.2847.442.602.072.140.6011.481.341.283.072.001.163.683.841.491.401.39
NFLX9.5013.2355.413.701.591.392.5919.011.664.369.671.264.004.353.361.661.561.26
PLNCHF0.324.853.520.860.711.190.510.960.270.334.120.460.251.452.170.380.390.37
PLNGBP0.221.2010.070.590.740.280.231.200.410.322.830.230.220.970.530.390.380.36
TSLA11.7721.9765.7411.019.367.854.869.545.785.784.094.9410.5811.636.784.685.405.56
USDGBP1.303.1712.620.400.520.370.311.750.510.512.040.191.021.531.020.530.510.47
USDPLN0.763.2815.681.350.830.720.330.750.400.341.980.531.151.951.150.310.330.30
XAUUSD1.206.9826.941.211.500.880.531.270.761.005.120.751.171.151.091.060.800.79
CAC0.797.7230.381.521.571.541.082.041.631.304.871.421.493.592.661.611.461.46
DAX1.638.7936.952.122.281.640.764.241.271.154.661.221.613.173.661.611.221.28
DJC2.726.9538.851.311.881.341.070.590.771.736.721.210.703.512.611.030.800.77
DJI2.617.1738.401.681.101.391.011.991.011.117.441.351.731.883.381.231.101.08
FTM1.629.7137.591.833.222.250.932.691.091.055.821.691.243.773.021.451.281.64
KOSPI1.266.4231.892.011.201.331.302.180.810.523.181.292.621.181.251.210.800.77
NDX2.968.3548.433.062.312.571.663.181.561.596.951.862.213.563.591.681.671.60
NKX1.746.6730.442.752.461.962.323.711.872.185.782.192.812.552.902.682.102.04
SHC1.244.9322.281.771.981.701.222.910.931.295.301.390.792.602.321.520.861.00
SPX2.107.4240.642.861.432.201.315.931.113.787.871.481.864.812.501.301.181.10
WIG200.565.0030.253.922.921.531.173.511.251.827.551.372.263.803.441.711.321.34
GOOG and META were calculated for two forecasting horizons; TSLA and BTCUSD were calculated for three forecasting horizons. Numbers in bold indicate the minimum for a given time series.