Article

Improving Volatility Forecasting: A Study through Hybrid Deep Learning Methods with WGAN

by
Adel Hassan A. Gadhi
1,2,*,
Shelton Peiris
1 and
David E. Allen
1,3,4
1
School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW 2006, Australia
2
Institute of Public Administration, Riyadh 11141, Saudi Arabia
3
School of Business and Law, Edith Cowan University, Joondalup, WA 6027, Australia
4
Department of Finance, Asia University, Taichung 41354, Taiwan
*
Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2024, 17(9), 380; https://doi.org/10.3390/jrfm17090380
Submission received: 9 July 2024 / Revised: 14 August 2024 / Accepted: 15 August 2024 / Published: 23 August 2024
(This article belongs to the Section Financial Markets)

Abstract:
This paper examines the predictive ability of volatility models for time series and investigates the effect of blending traditional learning methods with the Wasserstein generative adversarial network with gradient penalty (WGAN-GP). Using Brent crude oil return price volatility and environmental temperature for the city of Sydney, Australia, we show that the corresponding forecasts improve when the models are combined with WGAN-GP (i.e., ANN-(WGAN-GP), LSTM-ANN-(WGAN-GP) and BLSTM-ANN-(WGAN-GP)). We conclude that incorporating WGAN-GP can significantly improve the volatility forecasting capabilities of standard econometric models and deep learning techniques.

1. Introduction

The importance of time series forecasting and its vital influence on the monitoring of the global economy by decision-makers, investors, policymakers, energy market participants, and other interested parties are well known; see, for example, Kilian and Park (2009) and Elder and Serletis (2010). Accurate and reliable forecasts help to develop effective risk management strategies and to make well-informed decisions. The work of Henriques and Sadorsky (2011), Makridakis et al. (2009) and Kaymak and Kaymak (2022), among others, gives an account of different approaches to forecasting, including more recent machine learning algorithms.
In addition, deep learning models, in particular artificial neural networks (ANNs), long short-term memory (LSTM) networks, and bidirectional long short-term memory (BLSTM) networks, have shown promising results in various forecasting-related tasks, as discussed in Hua et al. (2019), Chung and Shin (2018), Li et al. (2019), Fang et al. (2021), and He et al. (2023). López-Monroy and García-Salinas (2022), Zhang (2003) and Tu (1996) point out that deep learning models can capture intricate patterns and nonlinear dependencies in the data that traditional time series methods often struggle to handle.
Recent advances in machine learning techniques have yielded promising results in various areas of financial forecasting. Sadorsky (2022) showed that machine learning methods based on decision trees outperform traditional logistic regression models in forecasting solar stock prices, with an accuracy of over 85%. Likewise, Kim et al. (2021) showed that neural networks can predict corporate bond yield spreads with reasonable accuracy. Golbayani et al. (2020), also working with decision-tree-based models and corporate credit rating data, showed their superiority over other machine learning techniques such as support vector machines and multilayer perceptrons.
Allen et al. (2023) extended the scope of machine learning applications to finance by examining the effectiveness of generative adversarial neural networks (GANs) in generating time series of artificial financial data. Their research indicates that GANs can successfully reproduce complex features of financial market variables, such as long memory and fat tails, thus suggesting another avenue by which machine learning models can contribute to producing reliable financial forecasts.
The above studies show that the machine learning methods are increasingly becoming vital tools in the empirical modeling of financial markets.
This study contributes to the topic by drawing on recent advances in generative adversarial networks, namely the Wasserstein GAN with gradient penalty (WGAN-GP). Following the studies of Arjovsky et al. (2017) and Gulrajani et al. (2017), we chose WGAN-GP over other GAN variants because it is particularly well suited to creating realistic synthetic financial time series data.
With this in mind, the paper makes two relevant contributions. The first is the combination of deep learning with the advanced features of the Wasserstein GAN with gradient penalty (WGAN-GP). The second is the increased accuracy of the forecasts generated by ANN, LSTM, and BLSTM models in combination with WGAN-GP-based data augmentation1. The accuracy of these models was measured using several metrics, including the mean square error (MSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Section 2 reviews the literature, Section 3 presents the methodologies, Section 4 shows the empirical results and Section 5 reports the conclusions.

2. Literature Review

This section is structured into three main subsections, each focusing on a key component of our study: Wasserstein GANs, GARCH models, and deep learning approaches. We conclude by highlighting the research gap addressed by this study.

2.1. Wasserstein GANs in Time Series Analysis

Wasserstein generative adversarial networks (WGANs) were introduced by Arjovsky et al. (2017) to overcome the mode-collapse problem present in early GANs, a step forward in the field of generative adversarial networks. According to Gulrajani et al. (2017), training networks with the Wasserstein distance makes WGANs converge more stably, making them well suited to creating realistic synthetic time series data. Abdelfattah et al. (2023) and Liao et al. (2022) showed that WGANs strengthen the predictive ability of models under unpredictable financial market conditions.

2.2. GARCH Models and Their Integration with Machine Learning

Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models have become the standard for predicting the volatility in financial time series. With the fusion of GARCH and machine learning models such as ANN and LSTM, new opportunities for improving forecasting accuracy have emerged. In this connection, pioneering work by Hansen and Lunde (2005) and Bollerslev (1987) has revealed that integration between volatility and neural network models better captures the intricate dynamics of financial time series. Further, Degiannakis et al. (2018) and Sermpinis et al. (2021) showed that GARCH, ANN and LSTM models can work together to provide a more nuanced view of oil market volatility than single models.

2.3. Deep Learning Approaches: LSTM and BLSTM

Long short-term memory (LSTM) and bidirectional long short-term memory (BLSTM) neural networks, developed with the rise of deep learning, are efficient at capturing long-term dependencies in sequential data. In turn, hybrid models combining neural networks with generative adversarial networks, e.g., LSTM-WGAN and BLSTM-WGAN, have brought a new perspective to time series forecasting, as they integrate the synthetic data generation abilities of GANs with the sequential data processing capabilities of LSTM and BLSTM neural networks. Examples of the application of these hybrid models can be found in the work of Arbat et al. (2022) for cloud workload prediction and Xu et al. (2023) for anomaly detection.

2.4. Integration of GARCH and Deep Learning for Oil Market Analysis

Research by Malik and Ewing (2009) and Khalfaoui et al. (2015) highlights how integrating GARCH models with deep learning models, such as ANN, LSTM, and BLSTM, improves the accuracy and reliability of predictions in markets as unpredictable as crude oil.

2.5. Hybrid Models for Financial Forecasting: AEI-DNET and Beyond

There has been encouraging progress in using hybrid modeling methods for financial forecasting, which should lead to better predictions. The AEI-DNET model, presented by Albahli et al. (2022), combines the DenseNet architecture with an autoencoder to predict stock market values using technical stock indicators. With a minimum MAPE of 0.41, the AEI-DNET model has proven to be more accurate than traditional methods. This method, which combines deep learning with feature extraction techniques such as autoencoders, demonstrates the effectiveness of deep learning for financial forecasting.

2.6. Research Gap and Our Contribution

While previous studies have explored various combinations of GARCH and deep learning models for forecasting, there has been limited investigation into how these methods can be integrated with WGAN for time series prediction. This gap is significant because WGAN offers potential benefits in generating realistic synthetic data, which could enhance the robustness of forecasting models, by uncovering novel features of the data set.
The integration of WGAN with GARCH and deep learning models presents several advantages:
  • GARCH models excel at capturing volatility clustering.
  • Deep learning models (LSTM and BLSTM) can capture complex and non-linear patterns in the data.
  • WGAN can generate high-quality synthetic data, exposing previously unnoticed features.
Our study aims to close this gap by proposing a novel hybrid model that combines GARCH with deep learning models (ANN, LSTM and BLSTM) and integrates WGAN for data augmentation. We hypothesize that:
  • The proposed hybrid model will outperform individual GARCH and deep learning models in terms of forecasting accuracy.
  • The integration of WGAN will improve the model’s performance.
  • The hybrid model will demonstrate superior performance across different time horizons compared to existing models.

3. Materials and Methods

3.1. Artificial Neural Networks

Neural networks as well as hybrid models incorporating volatility models, are used in this study. The methodology used in each of these models is described below.
Artificial neural networks (ANNs) are among the modern methods able to detect complicated patterns in data. Al-Qaness et al. (2022) highlight the ability of neural networks to learn from historical data and generate highly accurate forecasts. These networks can handle different types of inputs simultaneously, proving useful in areas as diverse as biology, marketing, text mining, weather and world oil markets (see Zubaidi et al. (2020) and Arokiaprakash and Selvan (2022)).
In agreement with Kamilaris and Prenafeta-Boldú (2018), ANNs try to replicate the neural design of the human brain. They possess nodes (or neurons) connected and arranged in layers, and they learn by adjusting the strength of these connections depending on the errors they make, a process called training. Once trained, they can apply what they have learned to data outside the training set. The work of Shahid et al. (2019), Zhang and Lu (2021), and Lu (2019) shows the success of neural networks in applications related to speech and image recognition, natural language understanding, disease diagnosis, and trend prediction in monetary areas. Further, Park and Lek (2016), Aulakh et al. (2013), and Walczak and Cerpa (1999) provide a review of existing methodologies for the optimal determination of the number of inputs, hidden nodes, and hidden layers in an artificial neural network; these methods are data-driven with minimal or no human involvement.
Neural Network Architecture. The architecture of the artificial neural network consists of one input layer, two hidden layers, and one output layer.

3.1.1. Input Data

The input data, or raw data, consist of the logarithmic returns of oil prices p_t at time t, given by
R_t = 100 \times \ln\left( \frac{p_t}{p_{t-1}} \right), \quad t = 1, 2, \ldots, n,
where p_0 is the known initial price.
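This return transformation can be sketched in a few lines of Python; the price series below is purely illustrative, not the Brent data used in the paper:

```python
import numpy as np

# Illustrative price series p_0, ..., p_n (hypothetical values)
prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1])

# R_t = 100 * ln(p_t / p_{t-1}), t = 1, ..., n
returns = 100.0 * np.log(prices[1:] / prices[:-1])

print(returns.shape)  # one fewer observation than prices
```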

3.1.2. Output Data

The target variable of this study is the volatility of R_t, which is given by
V_t = \mathrm{Sd}(R_t),
where Sd is the standard deviation. The empirical literature on neural networks, including studies by Zhang (2003) and Tu (1996), has shown that normalizing the training data improves the convergence rate of the network. There are different methodologies to normalize variables; in this work, we use the MinMax scaler function to normalize the volatility. This form of normalization scales the volatility values between 0 and 1. The function is given by
V_t^s = \frac{V_t - V_{\min}}{V_{\max} - V_{\min}},
where V_{\max} and V_{\min} are the maximum and minimum volatilities, respectively.
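The MinMax scaling step can be written directly; the volatility values below are illustrative (scikit-learn's MinMaxScaler performs the same transformation):

```python
import numpy as np

# Illustrative volatility series (hypothetical values)
V = np.array([0.8, 1.2, 2.5, 1.7, 3.0])

# MinMax scaling: V_s = (V - V_min) / (V_max - V_min), mapping onto [0, 1]
V_s = (V - V.min()) / (V.max() - V.min())

print(V_s.min(), V_s.max())  # 0.0 1.0
```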

3.1.3. Loss Function

The loss function quantifies the error between the forecast generated by the neural network and the output data. This function is given by
L = \frac{1}{n} \sum_{t=1}^{n} \left( V_t^s - \hat{V}_t^s \right)^2,
where V_t^s is the actual value of the volatility and \hat{V}_t^s is the forecast generated by the neural network.

3.2. Long Short-Term Memory

Long short-term memory (LSTM) neural networks were first introduced by Hochreiter and Schmidhuber (1997). They are designed to avoid the problems associated with simple recurrent neural networks, providing improved results.
All recurrent neural networks contain a series of repeating modules; in traditional recurrent neural networks, these modules take the form of a single layer of repeated neurons. LSTM networks also contain such a chain, but the repeating module has a different shape, containing four layers instead of one.

3.3. GARCH Model

The Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model is widely used for capturing volatility clustering in financial time series data. It models the conditional variance of a time series as a function of its past values and past squared errors (see Bollerslev (1987), Chaudhuri and Ghosh (2016), Engle (1982), and Gray (1996)). Let R_t be the mean-corrected return and let
R_t = V_t \, \epsilon_t,
where \epsilon_t is an independent and identically distributed (i.i.d.) error term with zero mean and unit variance.
The conditional variance V_t^2 is then modeled as
V_t^2 = \omega + \sum_{i=1}^{p} \alpha_i R_{t-i}^2 + \sum_{j=1}^{q} \beta_j V_{t-j}^2
subject to the following constraints:
  • \omega > 0 is a positive constant term,
  • \alpha_i \geq 0 and \beta_j \geq 0 are suitable coefficients, such that
    \sum_{i=1}^{p} \alpha_i + \sum_{j=1}^{q} \beta_j < 1.
The special case p = q = 1, i.e., GARCH(1,1), is well known to effectively capture the volatility dynamics of many daily financial asset returns. The parameters of the GARCH(1,1) model are estimated by maximizing the log-likelihood.
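A minimal numpy sketch of the GARCH(1,1) variance recursion follows; the parameter values are hypothetical (they satisfy the constraints above, but are not the estimates reported later in the paper):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion V_t^2 = omega + alpha*R_{t-1}^2 + beta*V_{t-1}^2."""
    var = np.empty_like(returns)
    var[0] = returns.var()  # common initialization: unconditional sample variance
    for t in range(1, len(returns)):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    return var

# Simulated returns and hypothetical parameters with omega > 0 and alpha + beta < 1
rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 500)
v2 = garch11_variance(r, omega=0.05, alpha=0.08, beta=0.90)
```

In practice the parameters are estimated by maximum likelihood rather than fixed; the Python `arch` package's `arch_model(returns, vol='Garch', p=1, q=1)` is a common choice for this.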

3.4. Hybrid Models

In this study, we propose several hybrid models that integrate GARCH with deep learning models, accommodating both linear and non-linear components to improve the accuracy of volatility forecasting.

GARCH-ANN, GARCH-LSTM-ANN and GARCH-BLSTM-ANN

The hybrid models are built under the following methodological scheme. For the GARCH-ANN model, in the first stage the GARCH model defined in (5) and (6) is fitted to the oil price data to obtain the conditional volatility estimate, which is then used as the input to the ANN. The GARCH-LSTM-ANN model is built by adding an LSTM layer to the architecture of the GARCH-ANN model, and the GARCH-BLSTM-ANN model by adding a BLSTM layer instead. All models are trained end-to-end using a suitable loss function that compares the value \hat{V}_t^s predicted by the network with the actual value V_t^s.
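The first stage of this pipeline can be sketched in numpy; all values here are hypothetical (simulated returns and assumed GARCH parameters), and the sketch only shows how the GARCH volatility and lagged returns would be stacked into the neural network's input matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 1.0, 200)  # simulated stand-in for oil returns

# Stage 1 (assumed parameters): GARCH(1,1) conditional variance recursion
var = np.empty_like(returns)
var[0] = returns.var()
for t in range(1, len(returns)):
    var[t] = 0.05 + 0.08 * returns[t - 1] ** 2 + 0.90 * var[t - 1]
garch_vol = np.sqrt(var)

# Stage 2: stack lagged returns and the GARCH volatility as input features;
# the ANN / LSTM / BLSTM layers would then be fitted on top of (X, y).
X = np.column_stack([returns[:-1], garch_vol[:-1]])
y = garch_vol[1:]  # next-step volatility target
print(X.shape, y.shape)
```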

3.5. WGAN with Gradient Penalty (WGAN-GP) and Its Integration into Hybrid Models

Generative adversarial networks (GANs) are deep learning frameworks consisting of two networks: a generator and a discriminator. According to Saxena and Cao (2021), Pan et al. (2019), and Chen (2021), these networks compete with each other: while the generator produces new data samples from an input data set, the discriminator attempts to predict whether the generated data belong to the original data set.
Arjovsky et al. (2017) introduced Wasserstein generative adversarial networks (WGANs) to improve traditional GANs by providing more stable training and ensuring that the loss function correlates better with the quality of the generated samples. Building upon this, Gulrajani et al. (2017) and Petzka et al. (2017) proposed the Wasserstein generative adversarial network with gradient penalty (WGAN-GP). This variant introduces an additional regularization term, forcing the norm of the gradient of the discriminator to be close to one. The gradient penalty addresses the problem of mode collapse in initial WGANs, ensuring more stable network training.
In our study, we integrate WGAN-GP with deep learning models for volatility prediction of both oil prices and climate data. Our goal is to improve the model’s capability in predicting volatility by stabilizing the training phase of the network and capturing the underlying distribution more effectively. The integration process is as follows:
  • We first train a WGAN-GP model on our historical data, which includes both oil price and climate (temperature) data from Sydney. The WGAN-GP architecture is implemented as:
        class WGAN_GP:
            def __init__(self, data_shape, noise_dim, critic_steps=5, lambda_gp=10):
                self.data_shape = data_shape      # shape of the real data samples
                self.noise_dim = noise_dim        # dimension of the latent noise vector
                self.critic_steps = critic_steps  # critic updates per generator update
                self.lambda_gp = lambda_gp        # gradient-penalty coefficient
                self.critic = self.build_critic()
                self.generator = self.build_generator()
  • The generator learns to create synthetic data for both oil prices and temperature, while the critic (discriminator) learns to distinguish between real and generated data.
  • After training, we use the WGAN-GP generator to augment our original datasets:
        # Sample latent noise and generate synthetic observations
        noise = tf.random.normal(shape=[num_samples, noise_dim])
        synthetic_data_scaled = wgan_gp_generator.predict(noise)
        # Undo the MinMax scaling and append the synthetic rows to the real data
        synthetic_data = scaler.inverse_transform(synthetic_data_scaled)
        concatenated_data = np.concatenate((data, synthetic_data))
        df = pd.DataFrame(concatenated_data, columns=df_column_names)
  • These augmented datasets are then used to train our hybrid models (GARCH-ANN, GARCH-LSTM-ANN, GARCH-BLSTM-ANN) for both oil price and temperature volatility forecasting.
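The gradient-penalty term itself can be illustrated in numpy for the special case of a linear critic f(x) = w·x, whose gradient with respect to x is w in closed form; this is an illustrative sketch of the penalty computed on real–fake interpolates, not the paper's TensorFlow implementation (which uses automatic differentiation on a nonlinear critic):

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.array([0.6, 0.8, 0.5])            # hypothetical linear critic f(x) = w @ x
real = rng.normal(0.0, 1.0, (64, 3))     # stand-in for real samples
fake = rng.normal(0.5, 1.0, (64, 3))     # stand-in for generated samples

# Interpolate between real and fake samples, as in WGAN-GP
eps = rng.uniform(0.0, 1.0, (64, 1))
x_hat = eps * real + (1.0 - eps) * fake

# For a linear critic, grad_x f(x_hat) = w at every interpolate
grad = np.broadcast_to(w, x_hat.shape)
grad_norm = np.linalg.norm(grad, axis=1)

# Penalty pushes the gradient norm toward 1 (lambda_gp = 10 as in the class above)
lambda_gp = 10.0
gp = lambda_gp * np.mean((grad_norm - 1.0) ** 2)
```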
Figure 1 illustrates the architecture of our hybrid models, showing how WGAN-GP interacts with the GARCH and neural network components for both oil price and temperature data.
Figure 1 illustrates the overall workflow of the data analysis process used in this study. The process begins with raw data, which undergoes data preprocessing before being split into training and testing sets. An optional step of WGAN-GP data augmentation is applied to the training data to enhance the dataset by generating synthetic samples, improving the robustness of the models. The training data is then used to develop various models, including GARCH, ANN, LSTM, BLSTM, and hybrid models such as GARCH-LSTM-ANN. The test data, untouched by augmentation, is utilized for model evaluation. The results are subsequently analyzed using evaluation metrics, leading to a performance comparison across all models. This structured approach ensures a comprehensive assessment of each model’s capabilities, ultimately guiding the selection of the most effective predictive model for the dataset.
The data flow through our hybrid models follows these steps:
  • Historical oil price and Sydney temperature data are preprocessed and scaled.
  • WGAN-GP generates synthetic data to augment both original datasets.
  • The augmented datasets are used to train separate GARCH(1,1) models to capture volatility clustering in oil return prices and temperature.
  • The GARCH predictions for each dataset are then fed into the neural network components (ANN, LSTM, or BLSTM) along with the original features.
  • The neural network components learn to refine the GARCH predictions, capturing additional non-linear patterns in both oil price and temperature data.
  • The final outputs provide volatility forecasts for both oil prices and temperature, combining the strengths of GARCH, neural networks, and the data augmentation capabilities of WGAN-GP.
This integrated approach leverages the strengths of traditional econometric models, deep learning techniques, and generative models, potentially leading to more accurate and robust volatility forecasts for both financial (oil prices) and environmental (temperature) time series. The detailed pseudocode for the key components of the methodology is provided in Appendix A.

4. Empirical Results for Two Data Sets: Oil Return Price & Temperature

This section presents an analysis of the two datasets used in this study: Brent oil return prices and temperature data from Sydney, Australia. We examine the characteristics of each dataset to inform the modeling approach and provide context for interpreting forecasting results. The performance of these models was evaluated using the Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE) and R 2 score.
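These evaluation metrics have simple closed forms and can be computed directly; the prediction vectors below are illustrative:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

err = y_true - y_pred
mse = np.mean(err ** 2)                     # Mean Squared Error
mae = np.mean(np.abs(err))                  # Mean Absolute Error
rmse = np.sqrt(mse)                         # Root Mean Squared Error
mape = 100.0 * np.mean(np.abs(err / y_true))  # Mean Absolute Percentage Error
# R^2: 1 minus residual sum of squares over total sum of squares
r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
```

Equivalent implementations are available in scikit-learn's `sklearn.metrics` module.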

4.1. Brent Oil Returns Prices

Brent oil prices with daily frequency were obtained from the Investing website2, a public domain data warehouse. The time interval spans from 5 June 2012 to 27 June 2022, comprising 2643 observations. The target variable, volatility, was derived from daily Brent oil price returns. We allocated 80% of the data for training and the remaining 20% for testing.
Figure 2 illustrates Brent oil prices from 2012 to 2022, revealing two significant periods of price decline. The 2016 decline can be attributed to an oversupply of crude oil, a slowdown in global economic growth, and the strength of the U.S. dollar. The 2020 drop coincides with the COVID-19 pandemic, which led to reduced oil demand due to quarantine measures, travel restrictions, and economic slowdown.
Figure 3 illustrates oil price returns from 2012 to 2022, revealing two significant periods of heightened volatility. The 2015–2016 period shows increased fluctuations, attributable to an oversupply of crude oil, a slowdown in global economic growth and the strength of the U.S. dollar. The 2020 spike coincides with the COVID-19 pandemic which led to extreme price movements due to sudden demand shocks, quarantine measures, travel restrictions and widespread economic uncertainty.
Figure 4 and Figure 5 show the ACF and PACF plots for the oil price data, respectively. These plots indicate [interpretation of ACF and PACF, e.g., the presence of autocorrelation and the potential order of autoregressive processes].
Table 1 presents the results of the Augmented Dickey-Fuller (ADF) test for the oil price returns data. The low p-value (0.00) suggests that we reject the null hypothesis of a unit root, indicating that the series is stationary. This stationarity is crucial for the modeling approach as it allows us to proceed without differencing the data.
Table 2 provides descriptive statistics for the oil price data. The positive skewness (0.74) indicates a right-skewed distribution, suggesting more frequent smaller price values with occasional high peaks. The negative kurtosis (−0.84) suggests a flatter distribution compared to a normal distribution, indicating fewer extreme outliers.
Figure 6 displays the Brent price returns over the study period. Notable is the high clustering volatility during 2015–2017 and in the years 2020 and 2022, coinciding with major market events and the COVID-19 pandemic.
Table 3 provides descriptive statistics for the oil price returns. The mean return is close to zero (0.000088), indicating that the average change in price over time is negligible. The standard deviation (0.029249) reflects the volatility of the log returns. The skewness (0.0658) is slightly positive, suggesting a near-symmetric distribution, while the high kurtosis (21.4472) indicates the presence of extreme outliers characteristic of financial return series.
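For reference, the skewness and kurtosis reported in these tables are standardized third and fourth moments; a numpy sketch on an illustrative heavy-tailed sample (not the actual oil or temperature data):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_t(df=4, size=10000)  # heavy-tailed illustrative sample

m = x.mean()
s = x.std()
skewness = np.mean((x - m) ** 3) / s ** 3
excess_kurtosis = np.mean((x - m) ** 4) / s ** 4 - 3.0  # 0 for a normal distribution

# A heavy-tailed, return-like series shows positive excess kurtosis
print(round(excess_kurtosis, 2))
```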

4.2. Temperature Data

We analyzed 3992 temperature readings from Sydney, Australia, covering the period from January 2013 to December 2023. The data were obtained from the Wunderground website3. Similar to the oil price data, we used an 80-20 split for training and testing sets.
Figure 7 illustrates the temperature progression over 11 years in Sydney, Australia, highlighting the strong seasonality present in the data.
Figure 8 and Figure 9 present the ACF and PACF plots for the temperature data. These plots indicate [interpretation of ACF and PACF, e.g., the presence of seasonal patterns and potential autoregressive or moving average components].
Figure 10 shows the annualized temperature data, which helps visualize long-term trends and year-to-year variations in Sydney’s climate.
Table 4 presents the descriptive statistics for the temperature data. The mean temperature over the period is 65.83 °F, with a standard deviation of 7.74 °F. The slight negative skewness (−0.17) suggests a distribution slightly skewed towards lower temperatures. The positive kurtosis (1.28) indicates a distribution with slightly heavier tails than a normal distribution, suggesting more extreme temperature events.
Table 5 shows the ADF test results for the temperature data. The low p-value (0.001550) indicates that we reject the null hypothesis of a unit root, suggesting that the temperature series is stationary.

4.3. Comparative Analysis of Datasets

Comparing the oil price and temperature datasets, we observe several key differences:
1.
Volatility: Oil prices exhibit higher volatility, particularly during economic or global events, while temperature data shows more consistent seasonal patterns.
2.
Seasonality: Temperature data displays strong annual seasonality, whereas oil prices do not show clear seasonal patterns.
3.
External Influences: Oil prices are heavily influenced by global economic and political events, while temperature is primarily affected by natural climate patterns and long-term climate change trends.
4.
Data Frequency: Our oil price data is daily, while temperature data is recorded at a higher frequency, potentially capturing more short-term variations.
These differences in data characteristics suggest that our models may perform differently for each dataset. We expect that models capable of capturing long-term dependencies (like LSTM and BLSTM) might perform well for both datasets, but for different reasons—capturing market trends in oil prices and seasonal patterns in temperature data.

4.4. Implications for Modeling

Based on our analysis of both datasets, we anticipate that:
1.
GARCH models may be particularly effective for oil price returns volatility due to the presence of volatility clustering.
2.
LSTM and BLSTM models might excel in capturing the seasonal patterns in temperature data and the complex, non-linear relationships in oil price returns data.
3.
The integration of WGAN-GP could potentially improve model performance by generating synthetic data that captures the underlying distributions of both datasets, which could be particularly beneficial for the highly volatile oil price data.
4.
Hybrid models combining GARCH with neural network approaches might offer superior performance by leveraging the strengths of both traditional econometric and machine learning techniques.
The following sections will present the results of our various models, including standalone and hybrid approaches, applied to these datasets. We will evaluate how well each model captures the unique characteristics of oil price and temperature data, and assess the impact of incorporating WGAN-GP into our forecasting framework.

5. Model Fitting

5.1. Oil Prices Returns Data

Based on the analysis of this data set the GARCH(1,1) model seems to be the best fit.
Table 6 shows the results of the GARCH(1,1) model estimation for Brent oil return price changes.
All coefficients are statistically significant4 and satisfy the sign restrictions of Equation (6). Figure 11a,b present the normalized residuals and annualized conditional volatility of the estimated GARCH(1,1) model, respectively. Figure 11a,b show a very high degree of volatility clustering during the COVID-19 pandemic.

Hybrid Model for Oil Data

We have fitted several hybrid models, with and without WGAN.
The estimated hybrid models for oil returns and their performance in forecasting volatility are summarized in Table 7, which reports the MSE, MAE, RMSE, MAPE and R 2 score metrics evaluated on the test set.
The results in Table 7 show that the hybrid models whose linear component is modeled through a GARCH model are the best fitting (i.e., GARCH-ANN, GARCH-LSTM-ANN, and GARCH-BLSTM-ANN).
The performance of the models in volatility forecasting is summarized in Table 7, where the values of MSE, MAE, MAPE, and RMSE are reported for the test data set. The results demonstrate that the proposed GARCH-BLSTM-ANN model outperforms the alternative models on the test data.
Table 8 shows the performance on the test set of hybrid models incorporating generative Wasserstein generative adversarial neural networks with gradient penalty.
It can be observed that, except for the MSE metric, the remaining metrics indicate that including the WGAN-GP in the ANN, LSTM-ANN, and BLSTM-ANN models improves the accuracy of oil price volatility forecasts relative to the counterpart hybrid models incorporating a GARCH component.
Overall, it appears that the proposed hybrid models with WGAN, especially those incorporating GARCH (GARCH-LSTM-ANN and GARCH-BLSTM-ANN), predict volatility more accurately than their counterparts without WGAN.
We also investigated several hybrid models incorporating WGAN. An extended analysis was carried out to integrate the WGAN gradient penalty into the deep learning models to evaluate its impact on forecasting accuracy. The introduction of the WGAN gradient penalty into the ANN, LSTM-ANN, and BLSTM-ANN models substantially altered their performance metrics.
While the integration of the WGAN gradient penalty improved the R 2 score across all models, the metrics make it evident that the models still face challenges on the test data. The models enhanced with the WGAN gradient penalty offer promising performance metrics, yet they appear to occupy a middle ground between the traditional models and the GARCH-hybrid models. This highlights the value of WGAN in refining volatility predictions, but also showcases the strength of models that incorporate GARCH in capturing the intricacies of financial time-series data.

5.2. Temperature Data

We have fitted several hybrid models, with and without WGAN.

Hybrid Models for Temperature Data

We have considered several combinations of hybrid and NN models. The performance of the estimated models for the temperature variable is shown in Table 9 and Table 10, which report the MSE, MAE, RMSE, MAPE, and R 2 score metrics evaluated on the test set.
Based on the analysis of this temperature data set, the GARCH(1,1) model appears to be the most suitable fit.
Table 11 presents the results of the GARCH(1,1) model estimation for the temperature data.
Models incorporating WGAN-GP, as seen in Table 10, showed a marked improvement in their predictive ability compared to their counterparts in Table 9. In other words, including the WGAN gradient penalty in models such as ANN, LSTM-ANN, and BLSTM-ANN improves the accuracy of ambient temperature forecasts in Sydney, Australia.

6. Conclusions and Future Work

We explored a methodological approach that allowed us to evaluate the predictive power of deep learning models combined with the sophisticated capabilities of WGAN models with gradient penalty. This was carried out in two domains: the financial sector, with an emphasis on Brent oil price volatility, and the weather domain, with an emphasis on temperature variations in Sydney, Australia.
The metrics through which the predictive ability of the models was evaluated were as follows: Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and the R² score.
In the context of forecasting Brent crude oil price volatility, the first finding highlighted the superiority of the hybrid models with a GARCH component (i.e., GARCH-ANN, GARCH-LSTM-ANN, and GARCH-BLSTM-ANN) over the standalone GARCH, ANN, LSTM-ANN, and BLSTM-ANN models. As a second finding, the inclusion of the WGAN model with gradient penalty in the ANN, LSTM-ANN, and BLSTM-ANN models produced an improvement in accuracy far superior to that of the individual models and of the hybrid models that included the GARCH component. A similar pattern of improved forecasts from including WGAN-GP in the hybrid models was also observed for the climate variable temperature.

6.1. Summary of Findings

We have used the metrics Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and the R² score to compare the predictive ability of the models.
In the context of forecasting Brent oil return price volatility, the first finding is the superiority of the hybrid models with a GARCH component (i.e., GARCH-ANN, GARCH-LSTM-ANN, and GARCH-BLSTM-ANN). The second finding is that including the WGAN model with gradient penalty in the ANN, LSTM-ANN, and BLSTM-ANN models produced an improvement in accuracy far superior to that of the other models. A similar improvement in forecasts from including WGAN-GP in the hybrid models was observed for the climate variable temperature.

6.2. Implications and Contributions

Our research makes several important contributions:
  • We investigated the effectiveness of combining traditional econometric models (GARCH) with advanced deep learning techniques (ANN, LSTM, BLSTM) and generative models (WGAN-GP) for volatility forecasting.
  • We provide empirical evidence for the cross-domain applicability of our hybrid models, showcasing their potential in both financial and environmental forecasting tasks.
  • Our approach addresses common challenges in time series forecasting, such as capturing complex non-linear patterns and dealing with limited historical data through synthetic data generation.

6.3. Limitations and Future Work

While our results are promising, we acknowledge several limitations that pave the way for future research:
  • The study focused on specific datasets (Brent oil prices and Sydney temperatures). Future work could extend this approach to a broader range of financial instruments and climate variables.
  • We used fixed architectures for our neural networks. Further research could explore optimal architecture selection through techniques like neural architecture search.
  • The impact of different WGAN-GP hyperparameters on forecasting performance was not extensively explored and could be a fruitful area for future investigation.
Future research directions could include:
  • Investigating the performance of our hybrid models in multi-step forecasting scenarios.
  • Exploring the integration of other advanced GAN variants or novel deep learning architectures into our hybrid framework.
  • Developing interpretability techniques to better understand the decision-making process of these complex hybrid models.
Overall, the empirical evidence from this study highlights the promising potential of hybrid models incorporating generative adversarial neural networks with gradient penalty. These findings underscore the versatility of hybrid models with GARCH and WGAN-GP components, pointing to interesting avenues for future research to refine these approaches in various data domains.

Author Contributions

Conceptualization, A.H.A.G., D.E.A. and S.P.; methodology, A.H.A.G.; software, A.H.A.G.; validation, A.H.A.G., S.P. and D.E.A.; formal analysis, A.H.A.G.; investigation, A.H.A.G.; resources, D.E.A.; data curation, A.H.A.G.; writing—original draft preparation, A.H.A.G.; writing—review and editing, S.P. and D.E.A.; visualization, A.H.A.G.; supervision, S.P. and D.E.A.; project administration, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on request.

Acknowledgments

The authors thank the editor and anonymous reviewers for their fruitful comments and valuable suggestions to improve the quality and readability of this version of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Pseudocode for Key Components of the Methodology

Full data will be available on request.

Appendix A.1. Data Preprocessing

Algorithm A1 PreprocessData(data)
1: Input: Raw time series data (e.g., price or temperature)
2: Output: Processed data ready for model input
3: Remove any missing values from the data
4: Calculate log prices: log_price = ln(price)
5: Calculate log returns: log_return = diff(log_price)
6: Calculate volatility:
7: for each time step t do
8:     volatility[t] = std(log_return[t−4:t]) ∗ sqrt(252)
9: end for
10: Normalize the data using MinMaxScaler
11: Split the data into training (80%) and testing (20%) sets
12: Return: processed_data, train_data, test_data
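The steps of Algorithm A1 can be sketched in Python. This is a minimal illustration assuming NumPy; the rolling window, annualization factor, and 80/20 split come from the algorithm, while the function name and the synthetic price series are illustrative.

```python
import numpy as np

def preprocess(prices, window=4, train_frac=0.8):
    """Sketch of Algorithm A1: log returns, rolling annualized
    volatility, min-max scaling, and a chronological train/test split."""
    prices = np.asarray(prices, dtype=float)
    prices = prices[~np.isnan(prices)]              # remove missing values
    log_price = np.log(prices)                      # log_price = ln(price)
    log_return = np.diff(log_price)                 # log_return = diff(log_price)
    # volatility[t] = std(log_return[t-window:t]) * sqrt(252)
    vol = np.array([log_return[t - window:t].std() * np.sqrt(252)
                    for t in range(window, len(log_return) + 1)])
    # MinMax normalization to [0, 1] (the role MinMaxScaler plays in step 10)
    scaled = (vol - vol.min()) / (vol.max() - vol.min())
    split = int(train_frac * len(scaled))
    return scaled[:split], scaled[split:]

# Illustrative random-walk "prices" just to exercise the pipeline
prices = 100 + np.cumsum(np.random.default_rng(1).normal(size=300))
train, test = preprocess(prices)
```

The chronological (not shuffled) split preserves the temporal ordering that the later sequence models rely on.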

Appendix A.2. GARCH Model

Algorithm A2 GARCH(returns)
1: Input: Log returns
2: Output: Volatility forecast
3: Specify a GARCH(1,1) model
4: Estimate parameters using maximum likelihood estimation
5: Generate the volatility forecast
6: Return: volatility_forecast
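The core of Algorithm A2 is the standard GARCH(1,1) conditional-variance recursion. A NumPy sketch is given below; the parameter values are illustrative placeholders, not the MLE estimates reported in Table 6.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional-variance recursion:
    sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1].
    The paper obtains (omega, alpha, beta) by maximum likelihood."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()                  # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustrative simulated returns and hand-picked parameters
r = np.random.default_rng(2).normal(scale=0.02, size=500)
sig2 = garch11_variance(r, omega=1e-5, alpha=0.1, beta=0.85)
```

With omega > 0 and 0 ≤ alpha + beta < 1, the recursion stays positive and mean-reverts to the unconditional variance omega / (1 − alpha − beta).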

Appendix A.3. LSTM Model

Algorithm A3 LSTM(data, epochs=100, batch_size=32)
1: Input: Processed time series data
2: Output: Trained LSTM model
3: Define the LSTM architecture:
4: model = Sequential([
5:     LSTM(32, return_sequences=True, input_shape=(sequence_length, features)),
6:     LSTM(64, return_sequences=False),
7:     Dense(32, activation='relu'),
8:     Dense(1)
9: ])
10: Compile the model with the Adam optimizer and MSE loss function
11: Train the model:
12: model.fit(train_data, train_labels, epochs=epochs, batch_size=batch_size)
13: Return: trained_model
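The LSTM stack in Algorithm A3 expects inputs of shape (sequence_length, features). A small NumPy helper, illustrative rather than the authors' code, shows how a univariate series is windowed into that shape, with the value one step ahead as the target.

```python
import numpy as np

def make_sequences(series, seq_len=10):
    """Turn a 1-D series into (samples, seq_len, 1) input windows and
    next-step targets, matching input_shape=(sequence_length, features)
    with features = 1 in Algorithm A3."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    y = np.array([series[i + seq_len] for i in range(len(series) - seq_len)])
    return X[..., np.newaxis], y     # add the trailing feature dimension

series = np.arange(20, dtype=float)  # toy series 0, 1, ..., 19
X, y = make_sequences(series, seq_len=5)
print(X.shape, y.shape)              # (15, 5, 1) (15,)
```

Each of the 15 samples is the 5 preceding observations, and its label is the observation that immediately follows the window.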

Appendix A.4. WGAN-GP

Algorithm A4 WGAN_GP(data, epochs=10000, n_critic=5, lambda_gp=10)
1: Input: Training data
2: Output: Generator for synthetic data
3: Define the generator and critic networks
4: for each epoch in epochs do
5:     for each iteration in n_critic do
6:         Generate random noise
7:         Generate fake samples using the generator
8:         Train the critic on real and fake samples
9:         Calculate the gradient penalty
10:        Update the critic weights
11:    end for
12:    Generate fake samples
13:    Update the generator weights
14: end for
15: Return: trained_generator
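The gradient penalty in step 9 follows Gulrajani et al. (2017): the critic's gradient norm, evaluated at random interpolates between real and fake samples, is pushed toward 1. The toy NumPy sketch below uses a linear critic so the gradient is available in closed form; all names and values are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(real, fake, w, lam=10.0):
    """WGAN-GP term: lam * E[(||grad_xhat D(xhat)||_2 - 1)^2], with
    xhat = eps*real + (1-eps)*fake. For the linear critic D(x) = w.x
    the gradient w.r.t. xhat is w itself for every interpolate."""
    eps = rng.uniform(size=(real.shape[0], 1))
    xhat = eps * real + (1.0 - eps) * fake       # random interpolates
    grad = np.tile(w, (xhat.shape[0], 1))        # dD/dxhat = w (linear critic)
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(8, 3))
fake = rng.normal(size=(8, 3))
w = np.array([0.6, 0.8, 0.0])                    # ||w|| = 1, so penalty ~ 0
gp = gradient_penalty(real, fake, w)
```

Because the critic here is linear, the penalty depends only on ||w||; in a real WGAN-GP the gradient is obtained by automatic differentiation through the critic network.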

Appendix A.5. Hybrid Model Training

Algorithm A5 TrainHybridModel(data, garch, lstm, wgan_gp)
1: Input: Processed data, GARCH model, LSTM model, WGAN-GP generator
2: Output: Trained hybrid model
3: Generate the GARCH volatility forecast
4: Generate synthetic data using WGAN-GP
5: Combine real data, the GARCH forecast, and the synthetic data
6: Train the LSTM on the combined data
7: Fine-tune the hybrid model
8: Return: hybrid_model
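Step 5 of Algorithm A5 merges three sources into one training set. The NumPy sketch below shows one possible layout: the GARCH forecast as an extra feature column aligned with the real series, and WGAN-GP synthetic samples appended as extra rows. The exact feature arrangement used in the paper is not specified, so the function and its layout are purely illustrative.

```python
import numpy as np

def combine_features(real, garch_forecast, synthetic):
    """Illustrative combination for step 5 of Algorithm A5:
    column-stack the real series with its GARCH forecast, then
    row-stack the WGAN-GP synthetic samples as data augmentation."""
    real = np.asarray(real, dtype=float).reshape(-1, 1)
    garch = np.asarray(garch_forecast, dtype=float).reshape(-1, 1)
    X = np.hstack([real, garch])                         # align forecast with data
    synth = np.asarray(synthetic, dtype=float).reshape(-1, 2)
    return np.vstack([X, synth])                         # augment the rows

X = combine_features(np.ones(4), np.zeros(4), np.ones((3, 2)))
print(X.shape)   # (7, 2)
```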

Appendix A.6. Model Evaluation

Algorithm A6 EvaluateModels(models, test_data)
1: Input: Trained models, test data
2: Output: Performance metrics
3: for each model in models do
4:     predictions = model.predict(test_data)
5:     Calculate performance metrics: MSE, MAE, RMSE, MAPE, R²
6: end for
7: Return: performance_metrics
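The five metrics computed in Algorithm A6 can be written directly in NumPy. This is a sketch with standard definitions; note that MAPE is undefined when a true value is zero, which is consistent with the inf% entry in Table 9.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """MSE, MAE, RMSE, MAPE, and R² as reported in Tables 7-10."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / y_true))  # undefined if any y_true == 0
    r2 = 1.0 - mse / np.var(y_true)               # 1 - SSE/SST
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

m = evaluate([1.0, 2.0, 4.0], [1.0, 2.0, 4.0])    # perfect prediction
```

A perfect forecast gives MSE = MAE = RMSE = MAPE = 0 and R² = 1, the upper bound the WGAN-GP hybrids approach in Tables 8 and 10.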

Notes

1. Data augmentation is an approach used to address the problem of limited data availability in the context of machine learning models.
2.
3.
4. Except for the unconditional mean μ.

References

  1. Abdelfattah, T. M., F. Ahmed, A. Maher, and A. Youssef. 2023. Generating radar signals using one-dimensional gan-based model for target classification in radar systems. Journal of Physics: Conference Series 2616: 012036. [Google Scholar] [CrossRef]
  2. Albahli, Saleh, Tahira Nazir, Awais Mehmood, Aun Irtaza, Ali Alkhalifah, and Waleed Albattah. 2022. Aei-dnet: A novel densenet model with an autoencoder for the stock market predictions using stock technical indicators. Electronics 11: 611. [Google Scholar] [CrossRef]
  3. Allen, D., L. Mushunje, and M. Peiris. 2023. Gans through the looking glass: How real is the fake financial data created by generative adversarial neural nets? Paper presented at the 25th International Congress on Modelling and Simulation (MODSIM2023), Darwin, Australia, July 9–13. [Google Scholar]
  4. Al-Qaness, Mohammed A. A., Ahmed A. Ewees, Laith Abualigah, Ayman Mutahar AlRassas, Hung Vo Thanh, and Mohamed Abd Elaziz. 2022. Evaluating the applications of dendritic neuron model with metaheuristic optimization algorithms for crude-oil-production forecasting. Entropy 24: 1674. [Google Scholar] [CrossRef] [PubMed]
  5. Arbat, Shivani, Vinodh Kumaran Jayakumar, Jaewoo Lee, Wei Wang, and In Kee Kim. 2022. Wasserstein adversarial transformer for cloud workload prediction. Proceedings of the AAAI Conference on Artificial Intelligence 36: 12433–39. [Google Scholar] [CrossRef]
  6. Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. Wasserstein generative adversarial networks. Paper presented at the 34th International Conference on Machine Learning, Sydney, Australia, August 6–11; pp. 214–23. [Google Scholar]
  7. Arokiaprakash, A., and S. Senthil Selvan. 2022. Application of random forest and multi-layer perceptron anns in estimating the axial compression capacity of concrete-filled steel tubes. Iranian Journal of Science and Technology, Transactions of Civil Engineering 46: 4111–30. [Google Scholar] [CrossRef]
  8. Aulakh, Jaspreet, Anita Regmi, Joan R. Fulton, and Corinne E. Alexander. 2013. Estimating Post-Harvest Food Losses: Developing a Consistent Global Estimation Framework. Washington, DC: Agricultural and Applied Economics Association. [Google Scholar]
  9. Bollerslev, Tim. 1987. A conditionally heteroskedastic time series model for speculative prices and rates of return. The Review of Economics and Statistics 69: 542–47. [Google Scholar] [CrossRef]
  10. Chaudhuri, Tamal Datta, and Indranil Ghosh. 2016. Artificial neural network and time series modeling based approach to forecasting the exchange rate in a multivariate framework. arXiv arXiv:1607.02093. [Google Scholar]
  11. Chen, Haiyang. 2021. Challenges and corresponding solutions of generative adversarial networks (GANs): A survey study. Journal of Physics: Conference Series 1827: 012066. [Google Scholar] [CrossRef]
  12. Chung, Hyejung, and Kyung-shik Shin. 2018. Genetic algorithm-optimized long short-term memory network for stock market prediction. Sustainability 10: 3765. [Google Scholar] [CrossRef]
  13. Degiannakis, Stavros, George Filis, and Vipin Arora. 2018. Oil prices and stock markets: A review of the theory and empirical evidence. The Energy Journal 39: 85–130. [Google Scholar] [CrossRef]
  14. Elder, John, and Apostolos Serletis. 2010. Oil price uncertainty. Journal of Money, Credit and Banking 42: 1137–59. [Google Scholar] [CrossRef]
  15. Engle, Robert F. 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation. Econometrica: Journal of the Econometric Society 50: 987–1007. [Google Scholar] [CrossRef]
  16. Fang, Zheng, David L. Dowe, Shelton Peiris, and Dedi Rosadi. 2021. Minimum message length in hybrid arma and lstm model forecasting. Entropy 23: 1601. [Google Scholar] [CrossRef] [PubMed]
  17. Golbayani, Parisa, Ionuţ Florescu, and Rupak Chatterjee. 2020. A comparative study of forecasting corporate credit ratings using neural networks, support vector machines, and decision trees. The North American Journal of Economics and Finance 54: 101251. [Google Scholar] [CrossRef]
  18. Gray, Stephen F. 1996. Modeling the conditional distribution of interest rates as a regime-switching process. Journal of Financial Economics 42: 27–62. [Google Scholar] [CrossRef]
  19. Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. 2017. Improved training of wasserstein gans. Advances in Neural Information Processing Systems 30: 5767–77. [Google Scholar]
  20. Hansen, Peter R., and Asger Lunde. 2005. A forecast comparison of volatility models: Does anything beat a garch (1, 1)? Journal of Applied Econometrics 20: 873–89. [Google Scholar] [CrossRef]
  21. He, Kaijian, Qian Yang, Lei Ji, Jingcheng Pan, and Yingchao Zou. 2023. Financial time series forecasting with the deep learning ensemble model. Mathematics 11: 1054. [Google Scholar] [CrossRef]
  22. Henriques, Irene, and Perry Sadorsky. 2011. The effect of oil price volatility on strategic investment. Energy Economics 33: 79–87. [Google Scholar] [CrossRef]
  23. Hochreiter, Sepp, and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9: 1735–80. [Google Scholar] [CrossRef]
  24. Hua, Yuxiu, Zhifeng Zhao, Rongpeng Li, Xianfu Chen, Zhiming Liu, and Honggang Zhang. 2019. Deep learning with long short-term memory for time series prediction. IEEE Communications Magazine 57: 114–19. [Google Scholar] [CrossRef]
  25. Kamilaris, Andreas, and Francesc X. Prenafeta-Boldú. 2018. A review of the use of convolutional neural networks in agriculture. The Journal of Agricultural Science 156: 312–22. [Google Scholar] [CrossRef]
  26. Kaymak, Öznur Öztunç, and Yiğit Kaymak. 2022. Prediction of crude oil prices in COVID-19 outbreak using real data. Chaos, Solitons & Fractals 158: 111990. [Google Scholar]
  27. Khalfaoui, Rabeh, Mohamed Boutahar, and Heni Boubaker. 2015. Analyzing volatility spillovers and hedging between oil and stock markets: Evidence from wavelet analysis. Energy Economics 49: 540–49. [Google Scholar] [CrossRef]
  28. Kilian, Lutz, and Cheolbeom Park. 2009. The impact of oil price shocks on the us stock market. International Economic Review 50: 1267–87. [Google Scholar] [CrossRef]
  29. Kim, Jong-Min, Dong H. Kim, and Hojin Jung. 2021. Applications of machine learning for corporate bond yield spread forecasting. The North American Journal of Economics and Finance 58: 101540. [Google Scholar] [CrossRef]
  30. Li, Xuerong, Wei Shang, and Shouyang Wang. 2019. Text-based crude oil price forecasting: A deep learning approach. International Journal of Forecasting 35: 1548–60. [Google Scholar] [CrossRef]
  31. Liao, Qingyao, Yuan Lu, Yinghao Luo, and Shuyu Yang. 2022. Application of wgan in financial time series generation compared with rnn. Paper presented at the 2nd International Conference on Artificial Intelligence, Automation, and High-Performance Computing (AIAHPC 2022), Zhuhai, China, February 25–27, vol. 12348, pp. 1056–71. [Google Scholar]
  32. López-Monroy, A. Pastor, and Jesús S García-Salinas. 2022. Neural networks and deep learning. In Biosignal Processing and Classification Using Computational Learning and Intelligence. Amsterdam: Elsevier, pp. 177–96. [Google Scholar]
  33. Lu, Yang. 2019. Artificial intelligence: A survey on evolution, models, applications and future trends. Journal of Management Analytics 6: 1–29. [Google Scholar] [CrossRef]
  34. Makridakis, Spyros, Robin M. Hogarth, and Anil Gaba. 2009. Forecasting and uncertainty in the economic and business world. International Journal of Forecasting 25: 794–812. [Google Scholar] [CrossRef]
  35. Malik, Farooq, and Bradley T. Ewing. 2009. Volatility transmission between oil prices and equity sector returns. International Review of Financial Analysis 18: 95–100. [Google Scholar] [CrossRef]
  36. Pan, Zhaoqing, Weijie Yu, Xiaokai Yi, Asifullah Khan, Feng Yuan, and Yuhui Zheng. 2019. Recent progress on generative adversarial networks (gans): A survey. IEEE Access 7: 36322–33. [Google Scholar] [CrossRef]
  37. Park, Y.-S., and S. Lek. 2016. Artificial neural networks: Multilayer perceptron for ecological modeling. In Developments in Environmental Modelling. Amsterdam: Elsevier, vol. 28, pp. 123–40. [Google Scholar]
  38. Petzka, Henning, Asja Fischer, and Denis Lukovnicov. 2017. On the regularization of wasserstein gans. arXiv arXiv:1709.08894. [Google Scholar]
  39. Sadorsky, Perry. 2022. Forecasting solar stock prices using tree-based machine learning classification: How important are silver prices? The North American Journal of Economics and Finance 61: 101705. [Google Scholar] [CrossRef]
  40. Saxena, Divya, and Jiannong Cao. 2021. Generative adversarial networks (GANs) challenges, solutions, and future directions. ACM Computing Surveys (CSUR) 54: 1–42. [Google Scholar] [CrossRef]
  41. Sermpinis, Georgios, Andreas Karathanasopoulos, Rafael Rosillo, and David de la Fuente. 2021. Neural networks in financial trading. Annals of Operations Research 297: 293–308. [Google Scholar] [CrossRef]
  42. Shahid, Nida, Tim Rappon, and Whitney Berta. 2019. Applications of artificial neural networks in health care organizational decision-making: A scoping review. PLoS ONE 14: e0212356. [Google Scholar] [CrossRef]
  43. Tu, Jack V. 1996. Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. Journal of Clinical Epidemiology 49: 1225–31. [Google Scholar] [CrossRef]
  44. Walczak, Steven, and Narciso Cerpa. 1999. Heuristic principles for the design of artificial neural networks. Information and Software Technology 41: 107–17. [Google Scholar] [CrossRef]
  45. Xu, Hao, Zihan Sun, Yuan Cao, and Hazrat Bilal. 2023. A data-driven approach for intrusion and anomaly detection using automated machine learning for the internet of things. Soft Computing 27: 14469–81. [Google Scholar] [CrossRef]
  46. Zhang, Caiming, and Yang Lu. 2021. Study on artificial intelligence: The state of the art and future prospects. Journal of Industrial Information Integration 23: 100224. [Google Scholar] [CrossRef]
  47. Zhang, G. Peter. 2003. Time series forecasting using a hybrid arima and neural network model. Neurocomputing 50: 159–75. [Google Scholar] [CrossRef]
  48. Zubaidi, Salah L., Sandra Ortega-Martorell, Patryk Kot, Rafid M. Alkhaddar, Mawada Abdellatif, Sadik K. Gharghan, Maytham S. Ahmed, and Khalid Hashim. 2020. A method for predicting long-term municipal water demands under climate change. Water Resources Management 34: 1265–79. [Google Scholar] [CrossRef]
Figure 1. Flowchart for the Data Analysis Process.
Figure 2. Daily Brent Oil Price (2012–2022).
Figure 3. Oil Price Returns from 2012 to 2022.
Figure 4. Autocorrelation Function (ACF) for Oil Price Return Data.
Figure 5. Partial Autocorrelation Function (PACF) for Oil Price Data.
Figure 6. Oil Price Returns.
Figure 7. Temperature Over Time in Sydney, Australia (2013–2023).
Figure 8. Autocorrelation Function (ACF) for Temperature Data.
Figure 9. Partial Autocorrelation Function (PACF) for Temperature Data.
Figure 10. Annualized Temperature Data.
Figure 11. Standardized residuals and annualized conditional volatility.
Table 1. ADF Test Results for Oil Price Data.
Test    Value    p-Value
ADF     −8.38    0.00
Table 2. Descriptive Statistics for Oil Prices.
Statistic                 Value
Count                     2638.00
Mean                      64.48
Standard Deviation        21.81
Minimum                   26.21
25% Quantile              49.90
50% Quantile (Median)     54.74
75% Quantile              86.06
Maximum                   122.11
Skewness                  0.74
Kurtosis                  −0.84
Table 3. Descriptive Statistics for Log Returns of Oil Prices.
Statistic                 Value
Count                     2638.00
Mean                      0.000088
Standard Deviation        0.029249
Minimum                   −0.278506
25% Quantile              −0.008450
50% Quantile (Median)     0.000596
75% Quantile              0.008899
Maximum                   0.287098
Skewness                  0.0658
Kurtosis                  21.4472
Table 4. Descriptive Statistics of Temperature.
Statistic                  Value
Count                      3992
Mean                       65.83 °F
Standard Deviation         7.74 °F
Minimum                    0.00 °F
25th Percentile            59.50 °F
50th Percentile (Median)   66.10 °F
75th Percentile            71.70 °F
Maximum                    91.40 °F
Skewness                   −0.17
Kurtosis                   1.28
Table 5. ADF Test Results for Temperature Data.
Statistic        Value
ADF Statistic    −3.974596
p-value          0.001550
Table 6. GARCH Model Results.
Parameter    Coefficient    Standard Error    t-Statistic    p-Value
μ            0.0723         0.000             1.327          0.185
ω            2.1831         0.905             2.413          0.016
α₁           0.2325         0.000             3.123          0.000
β₁           0.5168         0.149             3.460          0.000
Table 7. Performance of the models on the test oil return data, without WGAN-GP.
Model               R²       MSE      MAE      RMSE     MAPE (%)
GARCH               0.074    0.836    0.681    0.914    40.797
ANN                 0.237    0.683    0.567    0.826    23.158
LSTM-ANN            0.241    0.673    0.560    0.820    23.913
BLSTM-ANN           0.231    0.687    0.565    0.829    23.931
GARCH-ANN           0.734    0.203    0.295    0.450    14.497
GARCH-LSTM-ANN      0.725    0.206    0.293    0.453    13.872
GARCH-BLSTM-ANN     0.731    0.204    0.294    0.452    23.453
Table 8. Prediction performances for the oil return data with WGAN Gradient Penalty.
Model              R²       MSE      MAE      RMSE     MAPE
ANN-WGAN           0.980    0.004    0.049    0.065    0.317%
LSTM-ANN-WGAN      0.977    0.050    0.050    0.069    0.263%
BLSTM-ANN-WGAN     0.977    0.005    0.054    0.070    0.284%
Table 9. Prediction performances for the temperature data without WGAN Gradient Penalty.
Model               R²        MSE       MAE      RMSE     MAPE
ANN                 0.9966    0.229     0.026    0.479    inf%
LSTM                0.78      10.39     2.47     3.22     3.77%
BLSTM               0.76      11.49     2.57     3.39     4.03%
LSTM-ANN            0.99      0.35      0.55     0.59     0.82%
BLSTM-ANN           0.9993    0.032     0.141    0.178    0.229%
GARCH-ANN           0.7657    11.303    2.571    3.362    3.981%
GARCH-LSTM-ANN      0.7698    11.105    2.563    3.332    3.952%
GARCH-BLSTM-ANN     0.7685    11.169    2.565    3.342    3.968%
Table 10. Prediction performances for the temperature data with WGAN Gradient Penalty.
Model                    R²       MSE            MAE      RMSE     MAPE
ANN-WGAN                 0.999    0.000          0.013    0.016    0.018%
LSTM-ANN-WGAN            0.999    0.006          0.066    0.075    0.091%
BLSTM-ANN-WGAN           0.999    0.007          0.081    0.086    0.109%
GARCH-ANN-WGAN           0.999    1.318 × 10⁻⁷   0.000    0.011    16,864.385%
GARCH-LSTM-ANN-WGAN      0.999    5.876 × 10⁻⁶   0.002    0.002    1.225%
GARCH-BLSTM-ANN-WGAN     0.999    0.006          0.059    0.075    0.082%
Table 11. GARCH Model Results for the Temperature.
Parameter    Coefficient    Standard Error    t-Statistic    p-Value
μ            65.7437        0.599             109.842        0.000
ω            10.5580        3.586             2.944          0.003
α₁           0.6724         0.105             6.394          0.000
β₁           0.2030         0.168             1.207          0.227