Article

Deep Learning-Based Causal Inference Architecture and Algorithm between Stock Closing Price and Relevant Factors

School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(11), 2056; https://doi.org/10.3390/electronics13112056
Submission received: 15 April 2024 / Revised: 18 May 2024 / Accepted: 21 May 2024 / Published: 24 May 2024

Abstract

Numerous studies are based on the correlation among stock factors, which affects the measurement value and interpretability of such studies. Research on the causality among stock factors primarily relies on statistical models and machine learning algorithms, thereby failing to fully exploit the formidable computational capabilities of deep learning models. Moreover, the inference of causal relationships largely depends on the Granger causality test, which is not suitable for non-stationary and non-linear stock factors. Also, most existing studies do not consider the impact of confounding variables or further validation of causal relationships. In response to the current research deficiencies, this paper introduces a deep learning-based algorithm aimed at inferring causal relationships between stock closing prices and relevant factors. To achieve this, causal diagrams from the structural causal model (SCM) were integrated into the analysis of stock data. Subsequently, a sliding window strategy combined with Gated Recurrent Units (GRUs) was employed to predict the potential values of closing prices, and a grouped architecture was constructed inspired by the Potential Outcomes Framework (POF) for controlling confounding variables. The architecture was employed to infer causal relationships between closing price and relevant factors through the non-linear Granger causality test. Finally, comparative experimental results demonstrate a marked enhancement in the accuracy and performance of closing price predictions when causal factors were incorporated into the prediction model. This finding not only validates the correctness of the causal inference, but also strengthens the reliability and validity of the proposed methodology. Consequently, this study has significant practical implications for the analysis of causality in financial time series data and the prediction of stock prices.

1. Introduction

As a typical complex system, the stock market exhibits non-linearity, non-stationarity, and multi-component behavior [1,2,3]. In complex real-world scenarios, accurately identifying the specific factors that influence stock market fluctuations becomes challenging. Many traditional approaches are no longer applicable for predictive research in this domain. Therefore, it is necessary to employ more sophisticated methods to uncover the complex features embedded within time series data.
Currently, methods such as logistic regression, gradient boosting models, and deep learning primarily aim to identify correlation information between variables by fitting observed data. Subsequently, variables with high correlations to the target variable are selected as input variables for prediction [4,5,6,7]. However, focusing solely on correlations and not considering causality when choosing predictors can affect the accuracy of predictions [8]. Correlation may arise from a causal relationship, but it is not equivalent to causation and is further influenced by confounding variables. Confounding variables affect both the treatment and target variables simultaneously, producing correlations between them [9].
In most cases, the Granger causality test is used to infer causal relationships between financial time series [10,11]. Traditional methods for inferring Granger causality mainly include Vector Autoregression (VAR) [12], the Vector Error Correction Model (VECM) [13], and their respective variants [14,15].
Expanding on the ideas discussed above, Xu et al. [16] presented a novel causal decomposition approach and further applied it to investigate information flow between two financial time series on different time scales. The causal decomposition method has three main steps: decomposition, reconstruction, and causality testing. By tracking the driving factors of causal relationships from the perspective of information frequency, the causal decomposition method re-evaluates the causal relationship between stock prices and trading volume from the time-frequency perspective.
Although causal inference methods based on statistical modeling have been extensively studied and have yielded fruitful results, these methods mainly focus on evaluating the interaction between two variables, overlooking the impact of confounding variables [17,18,19,20,21,22,23,24]. Constructing more intricate causal networks, based on the examination and analysis of pairwise causal relationships, still presents significant challenges. This necessitates the exploration of novel methods and theories to tackle issues such as indirect dependence resulting from front-door paths and common driving factors caused by backdoor paths.
Moreover, it has been found that most economic time series are non-stationary according to various unit root test methods, such as the Augmented Dickey-Fuller (ADF) test [25]. By applying preprocessing techniques, like differencing or logarithmic transformation, these time series can be transformed into stationary data. VAR and the VECM are usually more effective when the input data are stationary [26]. Therefore, for the inference of Granger causality in non-stationary time series, the inputs need to be preprocessed to obtain a stationary series in order to avoid forecast distortion. Assuming that the time series becomes stationary after a difference order of 1 or 2, static causal relationships can be inferred. However, economic data often exhibit dynamic properties, and the preprocessing steps may overlook the dynamic causal relationships within the time series. Causal relationships may not only exist between the current and previous period of data. The oversimplification of preprocessing methods can lead to disregarding dynamic causality and losing valuable information from the original data.
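As an illustration of the preprocessing step described above, the following sketch applies the ADF test and first-order differencing. It assumes the statsmodels library; the toy series is illustrative and not from the paper's data.

```python
# A minimal sketch of the stationarity check and differencing described above,
# assuming the statsmodels library; the toy series is illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def is_stationary(series: pd.Series, alpha: float = 0.05) -> bool:
    """ADF test; the null hypothesis is that the series contains a unit root."""
    p_value = adfuller(series.dropna())[1]
    return p_value < alpha  # rejecting the null suggests stationarity

# A random walk is non-stationary, while its first difference is stationary.
walk = pd.Series(np.cumsum(np.random.default_rng(0).normal(size=300)))
print(is_stationary(walk), is_stationary(walk.diff()))
```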
Machine learning (ML) models can capture non-linear and complex relationships better than traditional statistical models [27]. Research has been conducted on causal inference utilizing ML models to overcome limitations in conventional approaches. The use of ML models offers a more accurate and comprehensive ability to infer causal relationships. These models can handle complex datasets that involve a substantial number of variables and aim to identify and infer causal relationships within them. This would help to reveal potential causal paths in causal networks [28].
Leng et al. [29] proposed an independent component analysis (ICA) framework, inspired by the decision tree (DT) algorithm in machine learning, for measuring causal relationships by calculating feature importance. The core idea of the framework is to convert time series data into a causal network representation and to make causal inferences based on feature importance. The ICA framework is designed at the network level, serving as a connection and link between traditional mutual causality detection methods and causal network reconstruction.
While the application of machine learning in causal analysis brings forth new perspectives and approaches, it also possesses certain shortcomings. These limitations encompass sensitivity to data bias and causal confounding, as well as restrictions in handling complex non-linear relationships [30,31]. In comparison, deep learning models exhibit more robust expression and pattern learning capabilities, as they can acquire abstract computational methods from intricate and high-dimensional data [32]. For example, Chong et al. [33] explored a deep learning-based model to evaluate the efficacy of three unsupervised feature extraction methods to predict future market behavior. Similarly, other studies have made significant advancements in the financial domain through the utilization of deep learning models.
Indeed, a dual-stage attention-based recurrent neural network (RNN) model proposed by Qin et al. [34] has shown promising results in predicting stock datasets. By adaptively extracting relevant input features for prediction, the model can effectively capture important information and make accurate predictions. Zahra et al. [35] combined the convolutional neural network (CNN) and long short-term memory (LSTM) with a fundamental analysis. By extracting and synthesizing features at multiple levels and dimensions, the model can capture both local and global patterns and improve prediction accuracy. Rahman et al. [36] applied the GRU model to predict stock data of Coca-Cola and reduced modeling errors. The GRU model is effective in overcoming the problem of vanishing gradients, which can occur in deep learning models and affect their performance. Compared to traditional econometric and ML approaches, the deep learning-based models demonstrate better prediction performance. This highlights the efficiency and effectiveness of deep learning architectures in handling financial time series data.
Wei [37] proposed an interpretable deep learning architecture, known as deep learning inference (DLI), to investigate Granger causality. The main contribution of DLI is to uncover the Granger causality between Bitcoin price and the S&P index, enabling more accurate prediction of Bitcoin prices in relation to the S&P index. However, this method only provides a visual representation of the predicted results for Bitcoin prices and lacks evaluation metrics to quantify the results. Its inference of a Granger causality between the two variables is solely based on the enhanced prediction performance of Bitcoin prices after incorporating historical data from the S&P index. Studies inferring the causal relationship of individual stock-related factors based on deep learning models are gradually stimulating the interest of researchers.
Tank et al. [38] introduced the neural Granger test, enhancing the detection of Granger causality by incorporating non-linear interactions. The researchers proposed a suite of non-linear architectures, wherein each time series is modeled using either Multi-Layer Perceptron (MLP) or RNN. Inputs to this non-linear framework consist of past lags from all series, while the outputs predict the future values of each series. Additionally, a group lasso penalty is applied to effectively reduce the input weights to zero, refining the predictive accuracy of the model.
In order to better comprehend and explicate causal relationships in data, Donald Rubin put forward the POF. The POF seeks to reveal dynamic causal relationships through randomized trials and natural experiments [39,40]. Additionally, Judea Pearl introduced the causal diagrams model as a formalized approach for researchers to describe and infer causal relationships. Combining the potential outcomes framework and causal diagrams can lead to a better understanding and explanation of causal relationships in observed data for causal inference.
Although there has been significant progress in the investigation of causality within stock data, the Granger causality test, designed for linear and stationary data, exhibits biases when applied to non-linear and non-stationary stock data. Moreover, the majority of causal inference methodologies neglect the interference caused by confounding variables, as well as further validation of the results. To address the aforementioned issues, this paper proposes a deep learning-based causal inference architecture and algorithm inspired by the POF, which also integrates causal diagrams and the non-linear Granger test. The framework and algorithm are specifically designed to infer the causal relationships between individual stock closing prices and their relevant factors.
The innovation of this paper lies in the utilization of causal diagrams and the establishment of a grouped architecture through deep learning networks. The primary contributions of the proposed methods are summarized as follows:
  • To better understand the impact of confounding variables on causal relationships, causal diagrams are employed in the stock data analysis to explore the relationships among confounding variables, treatment variables, and target variables. The application of front-door and backdoor adjustments allows for the control of confounding variables and the accurate inference of the relationship between closing prices and relevant factors.
  • To leverage the computational power of deep learning networks and address the deficiencies of the Granger test, a sliding window strategy is incorporated into the GRU model to achieve precise estimation of closing prices. The enhanced capability of the GRU serves to expand the applicability of the Granger test beyond the realm of linear stationary data, facilitating the direct assessment of causal linkages within non-linear time series data.
  • To control for confounding variables, a grouped architecture structure is built using GRUs combined with a sliding window strategy, inspired by the POF. Additionally, the non-linear Granger test is utilized to infer the causal relationships between individual stock closing prices and relevant factors, thus implementing a deep learning-based causal inference framework and algorithm.
  • To further validate the accuracy of the inferred causal relationships, different sets of input variables, such as individual closing prices, all related factors, and causal factors, are used when predicting stock closing prices. The results show that including causal factors as input variables significantly enhances prediction accuracy. These findings provide additional validation of the effectiveness and reliability of the proposed algorithm.
The remainder of the paper is organized as follows: Section 2 introduces causal diagrams and the Granger test and presents the deep learning-based architecture and algorithm for causal inference between stock closing prices and relevant factors; the dataset used and the evaluation metrics are also presented in Section 2. The experimental results are presented in Section 3. Section 4 discusses the experimental results to validate the effectiveness of the algorithm. Conclusions are provided in Section 5.

2. Materials and Methods

2.1. Causal Diagrams

Causal diagrams, which utilize directed acyclic graphs (DAGs), were introduced by Judea Pearl to describe the causal relationships between variables. Causal diagrams are helpful in eliminating estimation bias through conditional distributions [41]. The fundamental idea behind this approach is to estimate and test distributions while minimizing the bias introduced by other variables. Due to the presence of confounding variables, three distinct paths arise in causal inference: causal paths, backdoor paths, and front-door paths. Figure 1 illustrates these three paths. Here, $Z$ refers to the confounding variable, $X$ represents the treatment variable, and $Y$ represents the target variable.
A set of variables $Z$ on a backdoor path satisfies the backdoor criterion if:
  • $Z$ does not contain any descendant nodes of $X$.
  • $Z$ blocks every path between $X$ and $Y$ that contains an arrow pointing into $X$.
A set of variables $Z$ on a front-door path satisfies the front-door criterion if:
  • $Z$ intercepts all directed paths from $X$ to $Y$.
  • There is no backdoor path from $X$ to $Z$.
  • $X$ blocks all backdoor paths from $Z$ to $Y$.
The significance of the backdoor criterion and the front-door criterion lies in their ability to estimate certain causal effects using observed data, even when some variables are unobservable. These two criteria are helpful in identifying confounding variables and in designing experimental studies.
The forthcoming experimental design will employ backdoor adjustment and front-door adjustment to truncate the backdoor paths and front-door paths, respectively, based on the backdoor criterion and the front-door criterion. The influence of confounding variables on the causal paths will be eliminated by doing so, allowing the model to correctly identify and assess the causal relationships and effects among stock-related factors.
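For reference, when a set $Z$ satisfies these criteria, the corresponding adjustment formulas from Pearl's do-calculus identify the causal effect from observational data. These are standard textbook results, restated here for context rather than taken from this paper:

Backdoor adjustment: $P(Y = y \mid do(X = x)) = \sum_{z} P(Y = y \mid X = x, Z = z)\, P(Z = z)$

Front-door adjustment: $P(Y = y \mid do(X = x)) = \sum_{z} P(Z = z \mid X = x) \sum_{x'} P(Y = y \mid X = x', Z = z)\, P(X = x')$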

2.2. Granger Causality Test

The Granger causality test uses statistical techniques to analyze the causality of economic variables [42]. The existence and direction of causal relationships between variables are determined by assessing the significance of the lagged terms of the economic variables, i.e., by testing whether the past values of one variable help to explain the current value of the other. The Granger causality test commonly employs a distributed lag model to infer whether the previous level of variable $X$ impacts the subsequent level of variable $Y$, which is typically represented as follows:

$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \cdots + \alpha_p Y_{t-p} + \beta_1 X_{t-1} + \cdots + \beta_p X_{t-p} + \varepsilon_t$ (1)

where $X_{t-i}$ represents the distributed lag terms of $X$, used to examine whether $X$ has an impact on the current level of $Y$; the coefficients $\beta_i$ reflect the magnitude of this impact, indicating the existence of causality. $Y_{t-i}$ are the distributed lag terms of $Y$, and the coefficients $\alpha_i$ represent their impact. $\varepsilon_t$ denotes the error term. To infer the causality of $X$ on $Y$ is to test the following hypothesis:
$H_0: \beta_1 = \beta_2 = \cdots = \beta_p = 0$ (2)
This hypothesis is generally tested by constructing the F-test statistic of Equation (3), which is defined as

$F = \dfrac{(RSS_0 - RSS_1)/p}{RSS_1/(T - 2p - 1)}$ (3)

where $RSS_0$ represents the sum of squared errors under the null hypothesis $H_0$, $RSS_1$ is the sum of squared errors under the alternative hypothesis $H_1$ in Equation (4), $p$ denotes the lag length, and $T$ is the sample size. This statistic follows the F-distribution under the null hypothesis that $X$ is not the cause of $Y$. By referring to the F-distribution table, one can determine the statistical significance of the test at a specific confidence level. If the null hypothesis is rejected, this indicates that there is a causal relationship from variable $X$ to variable $Y$.
The alternative hypothesis $H_1$ is that the lag coefficients of $X$ are not jointly zero, denoted as

$H_1: \text{not all } \beta_i = 0, \quad i = 1, \ldots, p$ (4)
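A minimal sketch of this test follows, assuming the two residual sums of squares have already been computed; SciPy is used to look up the critical value, and all names and numbers are illustrative rather than the paper's.

```python
# A sketch of the F-test in Equations (2)-(4), assuming the restricted (H0)
# and unrestricted (H1) residual sums of squares are already available;
# the numeric values are illustrative.
from scipy.stats import f as f_dist

def granger_f_test(rss0: float, rss1: float, p: int, T: int, alpha: float = 0.05):
    """Return the F statistic, the critical value, and the test decision."""
    dof = T - 2 * p - 1
    F = ((rss0 - rss1) / p) / (rss1 / dof)    # Equation (3)
    F_crit = f_dist.ppf(1.0 - alpha, p, dof)  # critical value F_a
    return F, F_crit, F > F_crit              # True -> X Granger-causes Y

print(granger_f_test(rss0=152.3, rss1=120.7, p=5, T=1215))
```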

2.3. Temporal Causal Network

The stock closing price and its relevant factors are time series data. Figure 2 depicts the causal relationships among these temporal data. In this figure, $X_i$ refers to the treatment variable, $Z_i$ represents the potential confounding variables, and $Y$ denotes the target variable. The arrows in the figure indicate the direction of influence between the variables.
Specific techniques must be employed for these confounding variables that vary over time. Before using these methods, certain general premise assumptions must be fulfilled [43]:
  • Sequential Ignorability Assumption: for each time point, after controlling for the history of the confounding factors, the outcome at each time point is affected only by its own treatment status and not by the treatment status at other time points; i.e., the effect of receiving the treatment on a single individual is independent of whether other individuals receive the treatment. The assumption can be expressed as

    $Y_{\mathrm{potential}}\left(\bar{X}_{t-1}, T_0, \bar{Z}_t\right) = Y_{\mathrm{potential}}\left(\bar{X}_{t-1}, T_1, \bar{Z}_t\right)$ (5)

    Here, for time point $t$, the values of the confounding variables at each time point up to and including $t$ are denoted $\bar{Z}_t$; the string of historical values of the treatment variable prior to $t$, excluding $t$, is written $\bar{X}_{t-1}$; the potential values of $Y$ are represented as $Y_{\mathrm{potential}}$; $T_0$ indicates that $Y$ does not receive the treatment, and $T_1$ indicates that it does. If this assumption is not satisfied, the causal relationships at each time point are in fact confounded, and the causal effect at each time point cannot be accurately estimated.
  • Consistency Assumption: this assumption requires that the observed value of $Y$ under a specific sequence of treatment-variable values is equal to its potential value under that sequence.
  • Positive Value Assumption: this assumption requires that, after controlling for the series of confounding variables $\bar{Z}_t$ up to and including time point $t$ and the sequence of treatment-variable values $\bar{X}_{t-1}$ before time point $t$, the probability of an individual receiving the treatment intervention at time point $t$ lies strictly between 0 and 1.
Introducing a time dimension in deep learning models allows for the creation of temporal structures that better handle time-varying confounding variables. For instance, models such as RNNs or LSTM are used to capture temporal dependencies. Predicting stock closing prices with deep learning models yields their values in the potential states, satisfying the consistency assumption. In addition, neural networks can satisfy the positivity assumption through activation functions such as the Sigmoid, Tanh, and ReLU functions.
However, experimental design is still required to satisfy the sequential ignorability assumption. After fulfilling the three main assumptions, the temporal causal network can be abstracted as a neural network, where the arrows in the neural network are determined by their corresponding weights. If the weight is zero, it implies that the corresponding path does not exist.

2.4. Deep Learning-Based Causal Inference Network Architecture and Algorithms

The causal inference architecture based on deep learning is illustrated in Figure 3. This architecture employs GRU networks to capture temporal dependencies in sequence data and extract feature representations. Both LSTM and GRUs address the issues of gradient vanishing, gradient explosion, and long-term dependencies in a traditional RNN. However, compared to LSTM, GRUs possess a more concise structure, which makes them computationally faster and suitable for handling large-scale datasets. Meanwhile, GRUs exhibit higher efficiency in memory utilization, reducing the burden of storage and computation.
A sliding window strategy with a window size of 5 is used in GRUs. This strategy allows the segmentation and processing of time series data in fixed window lengths. By sliding the window, continuous subsequences can be obtained and utilized for further analysis and modeling. This approach proves to be highly effective in capturing local patterns and dynamic features within sequences, thereby enhancing the performance and effectiveness of the model.
The fully connected layer converts the feature extraction and representation from the previous layer into the final output result. By learning the weights of each connection, the fully connected layer can adjust these weights during training to minimize the loss function, allowing the model to make accurate predictions.
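The following sketch illustrates this prediction branch in TensorFlow/Keras: sliding windows of length 5 feed a GRU layer and a fully connected output. The window size and the Adam learning rate follow the text; the 64 GRU units and the 12-feature input are illustrative assumptions.

```python
# A minimal sketch of the prediction branch: sliding windows of length 5
# feeding a GRU and a fully connected output layer. The GRU width and the
# feature count are assumptions, not values stated in the paper.
import numpy as np
import tensorflow as tf

WINDOW = 5  # sliding-window size, i.e., the lag length p

def make_windows(features: np.ndarray, target: np.ndarray):
    """Slice a (T, n_features) array into overlapping windows of length WINDOW."""
    X = np.stack([features[t:t + WINDOW] for t in range(len(features) - WINDOW)])
    y = target[WINDOW:]  # each window predicts the value immediately after it
    return X, y

n_features = 12
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, n_features)),
    tf.keras.layers.GRU(64),   # temporal feature extraction
    tf.keras.layers.Dense(1),  # fully connected layer producing the closing price
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
```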
To satisfy the assumption of sequential ignorability and eliminate the interference brought by confounding variables in analyzing causal relationships, backdoor adjustment and front-door adjustment were conducted. The input data were divided into two groups: an experimental group and a control group. When one test factor is selected as the treatment variable, the other test factors are the confounding variables, and the stock closing price serves as the target variable.
Control group: the historical information of confounding variables Z and the target variable Y is utilized to predict the target variable Y.
Experimental group: The historical information of the treatment variable X, confounding variables Z, and the target variable Y is utilized to predict the target variable Y.
The distribution of confounding variables Z remains unchanged between the control group and the experimental group, with the target variable consistently being the closing price. According to the backdoor criterion and the front-door criterion, it can be inferred that all the backdoor paths and front-door paths between X and Y are cut off. This experimental design also satisfies the MB-by-MB (Markov Blanket by Markov Blanket) algorithm in the local learning of causal networks [44], and the optimal stepwise intervention design in the active learning of causal networks [45].
The corresponding network model for causal inference architecture is shown in Figure 4. The innovation of this model is to build a grouped architecture using two GRU networks, which achieves the control of confounding variables and the accurate prediction of the closing price of the target variable under different circumstances. It is coupled with a sliding window strategy, so that it satisfies the assumption of sequential ignorability.
Specifically, when a factor is selected as the treatment variable, the other factors are potential confounding variables, and the closing price of the individual stock is the target variable. The data underwent initial processing and grouping to obtain the experimental and control groups. The two groups of data were then used as inputs, and the potential values of the closing price under the different situations were obtained through the sliding window and GRU calculations. Finally, the causal relationships between the closing prices of individual stocks and the relevant factors were inferred by the non-linear Granger test.
Closing prices were predicted using data from the experimental group and control group. Since the computational process of the GRU network is non-linear, the formula of the lagged distribution model under the H 1 assumption is rewritten as
$Y_t = \alpha_0 + f\left(\sum_{i=1}^{p} \beta_i X_{t-i},\ \sum_{d=1}^{m}\sum_{i=1}^{p} \gamma_i^d Z_{t-i}^d,\ \sum_{i=1}^{p} \alpha_i Y_{t-i}\right) + \varepsilon_t$ (6)

where $\alpha_0$ represents the baseline value; $f$ denotes the composite function; $p$ indicates the lag length; $\alpha_i$, $\beta_i$, and $\gamma_i^d$ are the coefficients of the corresponding distributed lag terms; $X_{t-i}$, $Z_{t-i}^d$, and $Y_{t-i}$ are the distributed lags of $X$, $Z$, and $Y$, respectively; and $\varepsilon_t$ stands for the error term. The output under the $H_0$ assumption is obtained as:

$\tilde{Y}_t = \alpha_0 + f\left(\sum_{d=1}^{m}\sum_{i=1}^{p} \gamma_i^d Z_{t-i}^d,\ \sum_{i=1}^{p} \alpha_i Y_{t-i}\right)$ (7)
Subsequently, a Granger causality test is conducted by calculating the F-value. The formula is as follows:
$F = \dfrac{\left[\sum_{t=1}^{T}\left(\tilde{Y}_t - Y_t\right)^2 - \sum_{t=1}^{T}\left(\hat{Y}_t - Y_t\right)^2\right]/p}{\sum_{t=1}^{T}\left(\hat{Y}_t - Y_t\right)^2/(T - 2p - 1)}$ (8)

where $Y_t$ is the true value of the closing price, $\hat{Y}_t$ is the prediction of the experimental group (the model under $H_1$), and $\tilde{Y}_t$ is the prediction of the control group (the model under $H_0$). If $F > F_a$ (where $F_a$ is obtained by querying the F-distribution table using the values of $T$ and $p$), the null hypothesis $H_0$ is rejected. Thus, it is inferred that there is a causal relationship between the treatment variable and the target variable, i.e., the treatment variable is a cause of the target variable.
Algorithm 1 shows the algorithm for the causal inference network architecture based on deep learning.
Algorithm 1. Causal Inference Algorithm Based on Deep Learning.
Input: the experimental-group dataset with $T$ samples and $N$ features, $D_I = \{(W_t, Y_t)\}$; the control-group dataset with $T$ samples and $M$ features, $D_{II} = \{(H_t, \tilde{Y}_t)\}$, where $W_t = (X_t^i, Z_t^d, Y_{t-1}, C_t)$, $H_t = (Z_t^d, Y_{t-1}, \tilde{C}_t)$, $Y_t = c_{t+1}$, $\tilde{Y}_t = \tilde{c}_t$, $X_t^i = (x_{t-p+1}^i, x_{t-p+2}^i, \ldots, x_t^i)$ for $i = 1, \ldots, N$, $Z_t^d = (z_{t-p+1}^d, z_{t-p+2}^d, \ldots, z_t^d)$ for $d = 1, \ldots, M$, $Y_{t-1} = (y_{t-p+1}, y_{t-p+2}, \ldots, y_{t-1})$, $C_t = (c_{t-p+1}, c_{t-p+2}, \ldots, c_t)$, $\tilde{C}_t = (\tilde{c}_{t-p+1}, \tilde{c}_{t-p+2}, \ldots, \tilde{c}_t)$; the lag length $p$; the number of training cycles $E$
Output: Granger causality of the closing price
1:  initialize the parameters of the GRU models
2:  result_list = []                 # stores the causal-factor decision for each feature
3:  for e in (1, E) do
4:      for i in (1, N) do
5:          Y_list = []              # experimental-group predictions for feature i
6:          Ỹ_list = []              # control-group predictions for feature i
7:          for t in (1, T) do
8:              Ŷ_t = GRU(X_t^i, Z_t^d, Y_{t-1}, C_t)
9:              Ỹ_t = GRU(Z_t^d, Y_{t-1}, C̃_t)
10:             Y_list.append(Ŷ_t)
11:             Ỹ_list.append(Ỹ_t)
12:         end for
13:         F_value = F_calculate(Y_list, Ỹ_list)
14:         F_a = F_find(T, p)
15:         causal_factor = Granger_test(F_value, F_a)
16:         result_list.append(causal_factor)
17:     end for
18: end for
19: return result_list
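For concreteness, a Python skeleton of Algorithm 1 might look as follows. It assumes two trained Keras models for the experimental and control groups, a `build_inputs` helper that produces the grouped, windowed inputs, and the hypothetical `granger_f_test` helper sketched in Section 2.2; all function and variable names here are ours, not the authors'.

```python
# A skeleton of Algorithm 1. `gru_exp` / `gru_ctrl` are assumed to be trained
# Keras models for the experimental and control groups; `granger_f_test` is
# the helper sketched in Section 2.2. All names are illustrative.
import numpy as np

def infer_causal_factors(factors, build_inputs, gru_exp, gru_ctrl,
                         y_true: np.ndarray, T: int, p: int, alpha: float = 0.05):
    causal_factors = []
    for name in factors:                            # loop over candidate treatment variables
        X_exp, X_ctrl = build_inputs(name)          # experimental / control group inputs
        y_hat = gru_exp.predict(X_exp).ravel()      # H1: treatment + confounders + history
        y_tilde = gru_ctrl.predict(X_ctrl).ravel()  # H0: confounders + history only
        rss1 = float(np.sum((y_hat - y_true) ** 2))    # RSS under H1
        rss0 = float(np.sum((y_tilde - y_true) ** 2))  # RSS under H0
        _, _, rejected = granger_f_test(rss0, rss1, p, T, alpha)
        if rejected:                                # H0 rejected -> Granger-causal factor
            causal_factors.append(name)
    return causal_factors
```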

2.5. Dataset

The time series data of each stock were obtained from BaoStock, and 12 factors were considered when evaluating the causal relationships of a stock's closing price. These factors include the opening price, highest price, lowest price, trading volume, trading amount, turnover rate, percentage change, price-earnings (P/E) ratio, price-to-book (P/B) ratio, price-to-sales (P/S) ratio, price-to-cash-flow (P/CF) ratio, and the Shanghai Stock Exchange (SSE) Composite Index (SHCOMP).
The SHCOMP is a comprehensive stock index that reflects the performance of the overall A-share market. If the SHCOMP undergoes significant fluctuations, the majority of stock prices will be affected. Based on historical data and market performance, certain industries are more sensitive to changes in the SHCOMP, including the following:
  • Financial industry: Due to the significant impact of government policies and regulations on the financial market, the volatility of financial stocks is more pronounced relative to other industries. China Taibao (sh.601601) was chosen as a representative. It is a Chinese insurance company with a substantial market capitalization and a particular degree of influence.
  • Real estate industry: The real estate market carries a large contribution and weighting in the Shanghai stock market, and the relaxation or tightening of property-market policies has a considerable impact on the volatility of stock prices. Poly Real Estate (sh.600048), a renowned real estate developer in China involved in diverse sectors such as residential, commercial real estate, and office buildings, has been selected as a representative.
  • Energy and raw materials industry: The profitability of these industries is affected by factors such as changes in the global supply of and demand for raw materials, international oil prices, and the policy environment, among others. China Petroleum & Chemical Corporation (sh.601857) has been selected. It is one of the largest oil and gas producers in China, with abundant energy resources and a significant market share.
There are also some industry stocks that are relatively less affected. Based on historical data and market performance, the following are some of the industries that are less affected by the volatility of the SHCOMP:
  • Public utility companies: Public utility companies typically exhibit a relatively stable earnings model and decent cash flow. As a result, they may be relatively less affected by the volatility of the SHCOMP. China Guodian (sh.601985) is chosen because it is one of the largest power companies in China.
  • Food & beverage industry: As a general trend, food and beverage enterprises tend to have a stable income and profit model, with a relatively stable market and less volatility in comparison to other industries. Yingjia Gongjiu (sh.603198), a well-known Chinese Baijiu brand, was chosen as a representative.
  • Banking industry: Although the financial industry as a whole tends to be volatile, the stock prices of the banking industry are relatively less affected by the SHCOMP because banks have substantial cash flows and stable asset-liability structures. Shanghai Pudong Development Bank (sh.600000) was chosen due to its relatively stable business model and income.
To ensure sufficient data support and incorporate diverse market conditions, the date range of the selected stocks is from 1 July 2017 to 1 July 2022. This approach also aims to minimize the disturbance of structural changes, allowing for a comprehensive observation and analysis of long-term trends and cyclical fluctuations in the stock market, ultimately enhancing the reliability and generalization of the findings.

2.6. Evaluation Parameter

(1) Root mean square error (RMSE)
The RMSE is the square root of the mean of the squared differences between predicted values and true values, which is defined as

$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}$ (9)

where $\hat{y}_i$ denotes the predicted value, $y_i$ represents the true value, and $N$ is the number of observations. The squared deviations are highly sensitive to errors that are either significantly larger or smaller than average. Consequently, the resulting error measure provides a reliable assessment of the predictive performance. A smaller RMSE value denotes superior prediction performance, while a larger RMSE value indicates a greater divergence from the true results.
(2) Mean absolute error (MAE)
The MAE is the average of the absolute differences between the predicted and true values of the model. As a result, it intuitively captures the discrepancy between predicted and true values. The MAE is determined by

$\mathrm{MAE} = \dfrac{1}{N}\sum_{i=1}^{N}\left|\hat{y}_i - y_i\right|$ (10)

A smaller MAE value indicates a smaller difference between predicted values and true values, suggesting that the prediction results are closer to the true values.
(3) Mean absolute percentage error (MAPE)
The MAPE diminishes the influence of magnitude in comparison to the previous two metrics, making it well suited for assessing the efficacy of a model in predicting various stocks. It is formulated as follows:

$\mathrm{MAPE} = \dfrac{100}{N}\sum_{i=1}^{N}\left|\dfrac{\hat{y}_i - y_i}{y_i}\right|$ (11)

The MAPE is also an error metric, with smaller values indicating a better performance of the predictive model.
(4) The R2 coefficient of determination
The R2 coefficient of determination is utilized to evaluate the fitting degree of a network model, which can be expressed as

$R^2 = \dfrac{\sum_{i=1}^{N}\left(\hat{y}_i - \bar{y}\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}$ (12)

where $\bar{y}$ is the average of the true values. The R2 coefficient of determination serves as a measure to evaluate the fitting and predictive capabilities of the model. A higher value signifies greater predictive ability and improved accuracy.
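The four metrics translate directly into NumPy. In the sketch below, `y_hat` denotes predictions and `y` the true values; R2 is implemented in the explained-variance form of Equation (12) as given above, which differs from the residual-based form (1 − RSS/TSS) used by some libraries.

```python
# Direct NumPy transcriptions of Equations (9)-(12); y_hat are predictions,
# y are true values (both 1-D arrays of equal length).
import numpy as np

def rmse(y_hat: np.ndarray, y: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))        # Equation (9)

def mae(y_hat: np.ndarray, y: np.ndarray) -> float:
    return float(np.mean(np.abs(y_hat - y)))                # Equation (10)

def mape(y_hat: np.ndarray, y: np.ndarray) -> float:
    return float(100.0 * np.mean(np.abs((y_hat - y) / y)))  # Equation (11)

def r2(y_hat: np.ndarray, y: np.ndarray) -> float:
    # Equation (12), explained-variance form
    return float(np.sum((y_hat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2))
```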

3. Results

3.1. Experimental Hardware and Software Environment

The experiment was conducted using the Python-based TensorFlow framework to build the deep learning network. The central processor was an Intel® Core™ i7-9750H, and the graphics card was an Nvidia GeForce GTX 1650 with 4 GB of memory. The learning rate of the Adam optimizer was set to 0.001, the batch size was 64, and the total number of training epochs was 80. To examine causality accurately, the causal inference experiments used a fixed seed to initialize the weights. The software used for data analysis was PyCharm version 2022.3.2.
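One common way to fix the weight-initialization seed in TensorFlow 2.x is sketched below; the seed value itself is arbitrary and not stated in the paper.

```python
# Fixing the seeds so Keras weight initialization is reproducible;
# the value 42 is an arbitrary placeholder.
import random
import numpy as np
import tensorflow as tf

SEED = 42
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)  # makes Keras weight initializers deterministic
```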

3.2. Data Preprocessing and Normalization

After obtaining the stock data through BaoStock, we checked whether there were any missing values and, if so, filled them with the data of the previous day. After the missing values were processed, the data were normalized using Equation (13), defined as follows:

$x_{\mathrm{norm}} = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$ (13)

where $x$ is the value to be normalized, and $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the feature $X$, respectively. The output $x_{\mathrm{norm}}$ is the min-max normalization of $x$. This formula scales each value in the data to the interval [0, 1], eliminating the order-of-magnitude effect between features while preserving the relative size relationships within the data. The sample size for each statistical analysis was 1215.
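A compact sketch of this preprocessing pipeline follows, assuming the raw BaoStock data are held in a pandas DataFrame of numeric factor columns; the column handling is illustrative.

```python
# A sketch of the preprocessing described above, assuming `df` holds the
# numeric factor columns fetched from BaoStock.
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.ffill()  # fill each missing value with the previous day's value
    # Equation (13): column-wise min-max normalization onto [0, 1]
    return (df - df.min()) / (df.max() - df.min())
```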

3.3. Causal Inference Experiment

The length of the dataset is 1215, denoted as the sample size $T$. The GRU model adopts a sliding window strategy with a window size of 5, so the lag length $p$ is 5. By referring to the F-distribution table, the critical value $F_a$ corresponding to $T = 1215$ and $p = 5$ was determined to be 3.501. If the calculated F value surpasses 3.501, the factor passes the Granger test. To enhance the reliability of the results, the average of the results from ten experiments was calculated, as shown in Table 1, with a 95% confidence interval.

3.4. Prediction Comparison Experiment

(1) Comparison of different input variables
The baseline RNN, LSTM, and GRU models, which integrated all potentially relevant factors, were compared with the RNN, LSTM, and GRU models that incorporated only causal factors on the different datasets. The analysis also included a GRU model that used closing price data exclusively. Figure 5 and Figure 6 illustrate the prediction results of the experiments for China Taibao and Shanghai Pudong Development Bank (SPDB), respectively. Figure 7 and Figure 8 show scatter plots for the experiments conducted on China Taibao and SPDB, respectively. Visual presentations of the remaining datasets, as well as scatter plots, can be viewed in the Supplementary Materials.
Table 2 presents the RMSE, MAE, MAPE, and R2 for the seven kinds of models. To account for the varying scale ranges of the target variables in different datasets, the evaluation metrics are calculated using standardized data. This standardization eliminates scale differences, allowing for more intuitive comparisons of errors. We compared the performance of the RNN, LSTM, and GRU models in the stock price prediction task under the same input variables, which supports the algorithm's choice of the GRU model for calculating the potential values of the target variable. Under the same network structure, we compared the performance of models with different input variables to verify the validity and accuracy of the causal inference results.
Figure 9 visualizes the comparative analysis of the GRU model experiment results across different input variables in different datasets.
In order to further verify the generality of the model, data for selected stocks in the Shenzhen Stock Exchange Composite Index (SZCI) from 1 July 2017 to 1 July 2022 were used; details and the experimental results are given in Appendix A.

4. Discussion

The causal inference experiment employs the Granger causality test to determine the causality of factors in industries that are highly and less influenced by the SHCOMP. The results show that, in highly influenced industries, the causal factors included the opening price, highest price, lowest price, trading volume, trading amount, turnover rate, percentage change, P/E ratio, P/B ratio, and the related index itself. In contrast, in less influenced industries, the causal relationship between the remaining factors and the closing price is more significant, with the exception of the related index. These findings suggest that, in highly influenced industries, individual stock closing prices are more significantly affected by index factors, whereas the causal relationships among individual stock factors are more pronounced in less influenced industries.
The performance of the various models was compared using identical input variables, and Table 3 shows the percentage improvement in performance of the optimal model compared to the other models. From the data presented in the table, it is evident that the GRU model outperformed the RNN and LSTM models in predictive accuracy. This outcome confirms that the choice of the proposed algorithm and framework to use GRUs to compute the potential value of the target variable is reasonable and appropriate.
Table 4 shows the percentage improvement in predictive performance of the models with causal factors as input compared to the baseline models with all potential factors as input. The results show that the enhanced models with causal-factor input outperformed the corresponding benchmark models with all-potential-factor input in terms of predictive performance. For example, on the dataset of stock sh.601601, the inclusion of causal factors resulted in performance improvements for the RNN, LSTM, and GRU models compared to their corresponding baseline models. For the RNN model, there was an enhancement of 12.78% in RMSE, 13.07% in MAE, 5.15% in MAPE, and 1.76% in R2. Regarding the LSTM model, RMSE saw a 14.99% enhancement, MAE improved by 16.91%, MAPE by 2.42%, and R2 by 1.03%. For the GRU model, RMSE was enhanced by 17.70%, MAE by 18.01%, MAPE by 15.65%, and R2 by 1.16%.
In addition, the table also compares the performance of the GRU model predicted using only closing-price data with that of the GRU model predicted using causal factors. The results show that the model using causal factors as input variables performed best. For example, on the dataset of stock sh.600000, the GRU model using causal factors showed a significant performance improvement compared to the GRU model using only the closing price: 23.86% on RMSE, 29.25% on MAE, 33.71% on MAPE, and 1.93% on R2. Furthermore, relative to the GRU model using all potential factors, the causal-factor model improved by 15.08% on RMSE, 20.06% on MAE, 26.95% on MAPE, and 1.03% on R2.
The utilization of causal factors not only reduces the amount of noisy information that the model has to deal with, but also focuses the model on the most relevant information, thus improving its predictive performance. The GRU prediction model that took the causal factors inferred by the proposed method as inputs outperformed the model using only the closing price in both prediction accuracy and overall performance, and the benchmark model fed with all potential factors was likewise inferior to the model fed with causal factors. These results further verify the correctness of the causal inference and enhance the reliability and validity of the method proposed in this paper.

5. Conclusions

In this study, a causal inference method combining the GRU model and the Granger causality test was applied to realize causality analysis based on stock data. By introducing Granger causality tests, we could identify important factors and determine the degree of influence of index factors on individual stock closing prices. Additionally, the experimental results of incorporating causal factors into the prediction model further validated the correctness of the causal inference.
In summary, the deep learning-based causal inference architecture and algorithm proposed in this paper show promising results in analyzing causal relationships in stock data. Future research can further explore the application of causal inference methods to other financial time series data, further optimize the model performance, and extend the scope of application.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/electronics13112056/s1. Figures S1, S3, S5 and S7 illustrate the prediction results of the three experiments in different datasets. Figures S2, S4, S6 and S8 show scatter plots for the three experiments conducted in different datasets.

Author Contributions

Conceptualization, W.X. and C.C.; methodology, W.X.; software, W.X.; validation, W.X. and C.C.; formal analysis, W.X. and L.X.; investigation: C.C.; resources: L.X.; data curation, W.X. and C.C.; writing—original draft preparation, W.X.; writing—review and editing, W.X. and C.C.; visualization, W.X.; supervision, L.X.; project administration, W.X. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available on BaoStock at www.baostock.com (accessed on 1 July 2022). BaoStock is a free and open-source securities data platform. It provides a large amount of accurate and complete historical securities market data, financial data of listed companies, and so on.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The SZCI impacts industries and stock prices similarly to the SHCOMP. Historical data show that specific industries are more influenced by the SZCI:
  • Technology industry: Zhong Xing Telecommunication Equipment Corporation (ZTE) (sz.000063) is a leading communications equipment and solutions provider in China, and its position and influence in the technology sector are such that its share price is usually more significantly affected by changes in the SZCI.
  • Pharmaceutical industry: Aier Ophthalmology (sz.300015) is one of China's well-known eye-care medical enterprises; its investment and business in the pharmaceutical industry cover a variety of areas, such as ophthalmic diagnosis and treatment and eye surgery.
  • New energy industry: BYD (sz.002594) is one of the leading new energy vehicle manufacturers in China, with strong technical strength and market share in the field of electric vehicles, and its share price is often affected by changes in the SZCI.
There are also some industries whose stock prices are relatively less affected by changes in the SZCI. Listed below are a few industries that may be less affected by its volatility:
  • Public service industry: This includes companies in areas such as urban infrastructure and domestic waste treatment. These companies are largely controlled by the government, and their business is stable and relatively unaffected by industry cyclical factors. China General Nuclear (CGN) Power Corporation (sz.000881) is highly influenced by government policy support and the stability of market demand, and its share price is relatively stable and less affected by fluctuations in the SZCI.
  • Traditional manufacturing industry: This includes enterprises in machinery, petrochemicals, iron and steel, and related sectors. These companies' operating businesses are relatively stable, their profit cycles are more pronounced, and they are not greatly affected by fluctuations of the SZCI. Gree Electric Appliances (sz.000651) has a strong market share and brand influence in traditional manufacturing.
  • FMCG industry: Wuliangye (sz.000858) is one of the leading liquor producers, with a stable market share and brand influence in the FMCG industry, and its share price is relatively stable.
Table A1 shows the results of the proposed causal inference method for inferring the causal relationship between correlation factors and stock prices in different datasets.
Table A1. F-test values for each factor in the Shenzhen Stock Exchange datasets (bolded values are those that pass the Granger test).

| Factor | sz.000063 | sz.300015 | sz.002594 | sz.000881 | sz.000651 | sz.000858 |
|---|---|---|---|---|---|---|
| Opening Price | −2.015 | −5.897 | −36.804 | **7.329** | **11.129** | −1.516 |
| Highest Price | −7.171 | −8.988 | −34.948 | **17.960** | **34.905** | **12.911** |
| Lowest Price | −15.864 | **5.957** | −17.460 | **10.573** | −65.159 | −0.072 |
| Trading Volume | −1.259 | −5.498 | −17.341 | **11.056** | −39.112 | 1.412 |
| Trading Amount | −9.922 | **8.055** | **27.314** | −5.350 | **5.500** | 2.697 |
| Turnover Rate | −8.442 | **4.091** | −24.906 | −15.264 | −24.564 | −7.416 |
| Percentage Change | −39.072 | −53.889 | −24.057 | −46.024 | **18.947** | −76.086 |
| P/E Ratio | −15.240 | −11.801 | −3.728 | −2.322 | −20.583 | **6.873** |
| P/B Ratio | −21.259 | −3.086 | −54.369 | **3.817** | −14.869 | **6.517** |
| P/S Ratio | −30.907 | 0.966 | 0.392 | **4.395** | **47.681** | **12.986** |
| P/CF Ratio | **10.837** | **4.083** | −8.094 | −49.676 | −35.685 | **10.580** |
| SHCOMP | **11.328** | **3.911** | **34.236** | −4.662 | −37.955 | −1.149 |
Table A2 shows the results of the experiments comparing different prediction models in different datasets.
Table A2. Comparison of evaluation metrics for different models on the Shenzhen Stock Exchange datasets (↑ indicates that larger values are better, ↓ indicates that smaller values are better; the best result among the comparative experiments is in bold).

| Model | Stock | RMSE↓ | MAE↓ | MAPE↓ | R2↑ |
|---|---|---|---|---|---|
| Close Price + GRU | sz.000063 | 0.0463 | 0.0365 | 31.1451 | 0.9661 |
| | sz.300015 | 0.0426 | 0.0321 | 13.3153 | 0.9670 |
| | sz.002594 | 0.0900 | 0.0759 | 22.9663 | 0.8705 |
| | sz.000881 | 0.0775 | 0.0562 | 12.9659 | 0.8366 |
| | sz.000651 | 0.0672 | 0.0584 | 21.5701 | 0.9292 |
| | sz.000858 | 0.0661 | 0.0530 | 21.3026 | 0.9261 |
| Potential Factors + RNN | sz.000063 | 0.0565 | 0.0462 | 35.5297 | 0.9497 |
| | sz.300015 | 0.0455 | 0.0430 | 15.1571 | 0.9672 |
| | sz.002594 | 0.1086 | 0.0887 | 32.3819 | 0.7870 |
| | sz.000881 | 0.0964 | 0.0705 | 16.7319 | 0.7473 |
| | sz.000651 | 0.0654 | 0.0559 | 29.6114 | 0.9347 |
| | sz.000858 | 0.0734 | 0.0582 | 19.4550 | 0.9088 |
| Potential Factors + LSTM | sz.000063 | 0.0538 | 0.0431 | 35.0179 | 0.9543 |
| | sz.300015 | 0.0429 | 0.0329 | 13.7151 | 0.9666 |
| | sz.002594 | 0.0861 | 0.0701 | 27.8958 | 0.8661 |
| | sz.000881 | 0.0756 | 0.0578 | 13.5359 | 0.8443 |
| | sz.000651 | 0.0369 | 0.0287 | 14.5130 | 0.9792 |
| | sz.000858 | 0.0572 | 0.0450 | 16.3179 | 0.9446 |
| Potential Factors + GRU | sz.000063 | 0.0420 | 0.0302 | 18.8082 | 0.9722 |
| | sz.300015 | 0.0411 | 0.0311 | 11.8589 | 0.9712 |
| | sz.002594 | 0.0719 | 0.0583 | 22.5159 | 0.9065 |
| | sz.000881 | 0.0747 | 0.0542 | 11.7194 | 0.8480 |
| | sz.000651 | 0.0344 | 0.0264 | 14.0027 | 0.9819 |
| | sz.000858 | 0.0547 | 0.0431 | 15.8308 | 0.9494 |
| Causal Factors + RNN | sz.000063 | 0.0480 | 0.0367 | 21.2982 | 0.9637 |
| | sz.300015 | 0.0396 | 0.0352 | 13.3126 | 0.9715 |
| | sz.002594 | 0.0822 | 0.0633 | 25.4203 | 0.8780 |
| | sz.000881 | 0.0900 | 0.0670 | 14.7805 | 0.7794 |
| | sz.000651 | 0.0386 | 0.0306 | 16.1498 | 0.9773 |
| | sz.000858 | 0.0666 | 0.0525 | 19.8041 | 0.9250 |
| Causal Factors + LSTM | sz.000063 | 0.0381 | 0.0279 | 18.3914 | 0.9771 |
| | sz.300015 | 0.0410 | 0.0311 | 11.2038 | 0.9695 |
| | sz.002594 | 0.0754 | 0.0615 | 23.0459 | 0.8972 |
| | sz.000881 | 0.0749 | 0.0558 | 13.1500 | 0.8474 |
| | sz.000651 | 0.0347 | 0.0255 | 11.7519 | 0.9816 |
| | sz.000858 | 0.0543 | **0.0418** | **15.7239** | 0.9501 |
| Causal Factors + GRU | sz.000063 | **0.0369** | **0.0266** | **17.8913** | **0.9785** |
| | sz.300015 | **0.0379** | **0.0310** | **11.1971** | **0.9794** |
| | sz.002594 | **0.0680** | **0.0526** | **21.9571** | **0.9163** |
| | sz.000881 | **0.0495** | **0.0393** | **11.0762** | **0.9442** |
| | sz.000651 | **0.0321** | **0.0235** | **11.2748** | **0.9843** |
| | sz.000858 | **0.0538** | 0.0419 | 16.4409 | **0.9509** |
Figure A1, Figure A3, Figure A5, Figure A7, Figure A9, and Figure A11 illustrate the experimental predictions in different data sets, respectively. Figure A2, Figure A4, Figure A6, Figure A8, Figure A10, and Figure A12 show scatter plots for the experimental predictions in different data sets, respectively.
Figure A1. Visualization of prediction results for different models in the ZTE (sz.000063) dataset.
Figure A2. Scatter plot of prediction results for different models in the ZTE (sz.000063) dataset.
Figure A3. Visualization of prediction results for different models in the Aier Ophthalmology (sz.300015) dataset.
Figure A4. Scatter plot of prediction results for different models in the Aier Ophthalmology (sz.300015) dataset.
Figure A5. Visualization of prediction results for different models in the BYD (sz.002594) dataset.
Figure A6. Scatter plot of prediction results for different models in the BYD (sz.002594) dataset.
Figure A7. Visualization of prediction results for different models in the CGN Power Corporation (sz.000881) dataset.
Figure A8. Scatter plot of prediction results for different models in the CGN Power Corporation (sz.000881) dataset.
Figure A9. Visualization of prediction results for different models in the Gree Electric Appliances (sz.000651) dataset.
Figure A10. Scatter plot of prediction results for different models in the Gree Electric Appliances (sz.000651) dataset.
Figure A11. Visualization of prediction results for different models in the Wuliangye (sz.000858) dataset.
Figure A12. Scatter plot of prediction results for different models in the Wuliangye (sz.000858) dataset.

References

1. Wang, Y.J.; Feng, Q.Y.; Chai, L.H. Structural evolutions of stock markets controlled by generalized entropy principles of complex systems. Int. J. Mod. Phys. B 2010, 24, 5949–5971.
2. Tiberiu, A.C.; Kumar, T.A.; Phouphet, K. Nonlinearities and Chaos: A New Analysis of CEE Stock Markets. Mathematics 2021, 9, 707.
3. Olgun, H.; Ozdemir, Z.A. Linkages between the Center and Periphery Stock Prices: Evidence from the Vector ARFIMA Model. Econ. Model. 2007, 25, 512–519.
4. Moews, B.; Herrmann, J.M.; Ibikunle, G. Lagged Correlation-Based Deep Learning for Directional Trend Change Prediction in Financial Time Series. Expert Syst. Appl. 2018, 120, 197–206.
5. Zhao, R. Inferring Private Information from Online News and Searches: Correlation and Prediction in Chinese Stock Market. Phys. A Stat. Mech. Its Appl. 2019, 528, 121450.
6. Liang, M.; Wang, X.; Wu, S. Improving Stock Trend Prediction through Financial Time Series Classification and Temporal Correlation Analysis Based on Aligning Change Point. Soft Comput. 2022, 27, 3655–3672.
7. Ankit, T.; Dhaval, P.; Preet, S. Pearson Correlation Coefficient-Based Performance Enhancement of Vanilla Neural Network for Stock Trend Prediction. Neural Comput. Appl. 2021, 33, 16985–17000.
8. Liu, J.; Li, H.; Hai, M.; Zhang, Y. A Study of Factors Influencing Financial Stock Prices Based on Causal Inference. Procedia Comput. Sci. 2023, 221, 861–869.
9. Shivam, S.; Gyaneshwar, S.K. Effects of Temperature Rise on Clean Energy-Based Capital Market Investments: Neural Network-Based Granger Causality Analysis. Sustainability 2022, 14, 11163.
10. Wang, J.; Yuan, Y.; Tian, G.; Zheng, Y.; Wang, Z. Micro Input Factors Affecting Industrial Energy Efficiency in Hebei Province Based on Granger Causal Relation Test Model. J. Comput. Methods Sci. Eng. 2022, 22, 1069–1080.
11. Bachar, M.; Varsakelis, N.C. Causality between International Trade and International Patenting: A Combination of Network Analysis and Granger Causality. Atl. Econ. J. 2022, 50, 9–26.
12. Hou, X.; Li, S.; Li, W.; Wang, Q. Bank Diversification and Liquidity Creation: Panel Granger-Causality Evidence from China. Econ. Model. 2018, 71, 87–98.
13. Meng, X.; Han, J. Roads, Economy, Population Density, and CO2: A City-Scaled Causality Analysis. Resour. Conserv. Recycl. 2018, 128, 508–515.
14. Zhao, Y.; Billings, S.A.; Wei, H.; He, F.; Sarrigiannis, P.G. A New NARX-Based Granger Linear and Nonlinear Casual Influence Detection Method with Applications to EEG Data. J. Neurosci. Methods 2013, 212, 79–86.
15. Chang, T.; Gatwabuyege, F.; Gupta, R.; Inglesi-Lotz, R.; Manjezi, N.C.; Simo-Kengne, B.D. Causal Relationship between Nuclear Energy Consumption and Economic Growth in G6 Countries: Evidence from Panel Granger Causality Tests. Prog. Nucl. Energy 2014, 77, 187–193.
16. Xu, C.; Zhao, X.; Wang, Y. Causal Decomposition on Multiple Time Scales: Evidence from Stock Price-Volume Time Series. Chaos Solitons Fractals 2022, 159, 112137.
17. Baños, R.; Manzano-Agugliaro, F.; Montoya, F.G.; Gil, C.; Alcayde, A.; Gómez, J. Optimization Methods Applied to Renewable and Sustainable Energy: A Review. Renew. Sustain. Energy Rev. 2010, 15, 1753–1766.
18. Raul, V.; Michael, W.; Michael, L.; Gordon, P. Transfer Entropy—A Model-Free Measure of Effective Connectivity for the Neurosciences. J. Comput. Neurosci. 2011, 30, 45–67.
19. Brockmann, D.; Helbing, D. The Hidden Geometry of Complex, Network-Driven Contagion Phenomena. Science 2013, 342, 1337–1342.
20. Deyle, E.R.; Fogarty, M.; Hsieh, C.-H.; Kaufman, L.; MacCall, A.D.; Munch, S.B.; Perretti, C.T.; Ye, H.; Sugihara, G. Predicting Climate Effects on Pacific Sardine. Proc. Natl. Acad. Sci. USA 2013, 110, 6430–6435.
21. Tsonis, A.A.; Deyle, E.R.; May, R.M.; Sugihara, G.; Swanson, K.; Verbeten, J.D.; Wang, G. Dynamical Evidence for Causality between Galactic Cosmic Rays and Interannual Variation in Global Temperature. Proc. Natl. Acad. Sci. USA 2015, 112, 3253–3256.
22. Hirata, Y.; Amigó, J.M.; Matsuzaka, Y.; Yokota, R.; Mushiake, H.; Aihara, K. Detecting Causality by Combined Use of Multiple Methods: Climate and Brain Examples. PLoS ONE 2017, 11, e0158572.
23. Joskow, P.L.; Rose, N.L. Chapter 25 The Effects of Economic Regulation. In Handbook of Industrial Organization; Elsevier: Amsterdam, The Netherlands, 1989; Volume 2, pp. 1449–1506.
24. Van Nes, E.H.; Scheffer, M.; Brovkin, V.; Lenton, T.M.; Ye, H.; Deyle, E.; Sugihara, G. Causal Feedbacks in Climate Change. Nat. Clim. Change 2015, 5, 445–448.
25. Sharma, R.K.; Sharma, A. Forecasting Monthly Gold Prices Using ARIMA Model: Evidence from Indian Gold Market. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 1373–1376.
26. Dickey, D.A.; Fuller, W.A. Distribution of the Estimators for Autoregressive Time Series with a Unit Root. J. Am. Stat. Assoc. 1979, 74, 427–431.
27. Elham, R.S.; Reza, P.H.; Ali, A.; Farshad, S.S.; Clague, J.J. Comparison of Statistical and Machine Learning Approaches in Land Subsidence Modelling. Geocarto Int. 2022, 37, 6165–6185.
28. Sun, Z.; Dong, W.; Shi, H.; Ma, H.; Cheng, L.; Huang, Z. Comparing Machine Learning Models and Statistical Models for Predicting Heart Failure Events: A Systematic Review and Meta-Analysis. Front. Cardiovasc. Med. 2022, 9, 812276.
29. Leng, S.; Xu, Z.; Ma, H. Reconstructing Directional Causal Networks with Random Forest: Causality Meeting Machine Learning. Chaos 2019, 29, 093130.
30. Ni, W.J.; Shen, Q.L.; Zeng, Q.T.; Wang, H.Q.; Cui, X.Q.; Liu, T. Data-Driven Seeing Prediction for Optics Telescope: From Statistical Modeling, Machine Learning to Deep Learning Techniques. Res. Astron. Astrophys. 2022, 22, 125003.
31. Ahmed, A.; Ahmed, F.; Ahmad, S.; Mahmoud, B.; Esraa, E. Marine Data Prediction: An Evaluation of Machine Learning, Deep Learning, and Statistical Predictive Models. Comput. Intell. Neurosci. 2021, 2021, 8551167.
32. Sedai, A.; Dhakal, R.; Gautam, S.; Dhamala, A.; Bilbao, A.; Wang, Q.; Wigington, A.; Pol, S. Performance Analysis of Statistical, Machine Learning and Deep Learning Models in Long-Term Forecasting of Solar Power Production. Forecasting 2023, 5, 256–284.
33. Chong, E.; Han, C.; Park, F.C. Deep Learning Networks for Stock Market Analysis and Prediction: Methodology, Data Representations, and Case Studies. Expert Syst. Appl. 2017, 83, 187–205.
34. Qin, Y.; Song, D.; Chen, H.; Cheng, W.; Jiang, G.; Cottrell, G.W. A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction. arXiv 2017, arXiv:1704.02971.
35. Zahra, N.; Narges, H. Combining LSTM and CNN Methods and Fundamental Analysis for Stock Price Trend Prediction. Multimed. Tools Appl. 2022, 82, 17769–17799.
36. Rahman, M.O.; Hossain, S.; Junaid, T.-S.; Forhad, S.A.; Hossen, M.K. Predicting Prices of Stock Market Using Gated Recurrent Units (GRUs) Neural Networks. Int. J. Comput. Sci. Netw. Secur. 2019, 19, 213–222.
37. Wei, P. DLI: A Deep Learning-Based Granger Causality Inference. Complexity 2020, 2020, 5960171.
38. Tank, A.; Covert, I.; Foti, N.; Shojaie, A.; Fox, E.B. Neural Granger Causality. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4267–4279.
39. Rubin, D.B. Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies. J. Educ. Psychol. 1974, 66, 688–701.
40. Rosenbaum, P.R.; Rubin, D.B. The Central Role of the Propensity Score in Observational Studies for Causal Effects. Biometrika 1983, 70, 41–55.
41. Pearl, J. Causal Diagrams for Empirical Research. Biometrika 1995, 82, 669–688.
42. Granger, C.W.J. Investigating Causal Relations by Econometric Models and Cross-Spectral Methods. Econometrica 1969, 37, 424–438.
43. Imbens, G.W.; Rubin, D.B. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction; Cambridge University Press: Cambridge, UK, 2015; ISBN 978-0-521-88588-1.
44. Wang, C.; Zhou, Y.; Zhao, Q.; Geng, Z. Discovering and Orienting the Edges Connected to a Target Variable in a DAG via a Sequential Local Learning Approach. Comput. Stat. Data Anal. 2014, 77, 252–266.
45. He, Y.-B.; Geng, Z. Active Learning of Causal Networks with Intervention Experiments and Optimal Designs. J. Mach. Learn. Res. 2008, 9, 2523–2547.
Figure 1. Paths in a causal diagram. (a) Causal path: a direct causal relationship runs from the treatment variable X to the target variable Y, possibly in the presence of a confounding variable Z. (b) Backdoor path: the confounding variable Z affects both the treatment variable X and the target variable Y, opening a non-causal route between them rather than lying on the path from X to Y. (c) Front-door path: the variable Z lies on the path from X to Y, directly mediating the effect of X on Y.
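The three path structures in Figure 1 can be reproduced as small directed graphs. The following is a minimal sketch of ours (not code from the paper), assuming the networkx package; it builds each structure and enumerates the routes between X and Y, making visible the extra non-causal route that a backdoor path opens.

```python
# Illustrative sketch of the path types in Figure 1; node names follow the caption.
import networkx as nx

# (a) Causal path: X directly causes Y.
causal = nx.DiGraph([("X", "Y")])

# (b) Backdoor path: Z is a common cause of both X and Y.
backdoor = nx.DiGraph([("Z", "X"), ("Z", "Y"), ("X", "Y")])

# (c) Front-door path: Z mediates the effect of X on Y.
frontdoor = nx.DiGraph([("X", "Z"), ("Z", "Y")])

# Enumerate all undirected routes from X to Y; only the backdoor graph exposes
# the additional non-causal route X <- Z -> Y created by confounding.
for name, g in [("causal", causal), ("backdoor", backdoor), ("frontdoor", frontdoor)]:
    routes = list(nx.all_simple_paths(g.to_undirected(), "X", "Y"))
    print(name, routes)
```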
Figure 2. Temporal dynamics of the treatment variable, confounding variable, and target variable in a causal network.
Figure 3. Causal inference network architecture based on deep learning.
Figure 4. Causal inference network model based on deep learning.
Figure 5. Visualization of prediction results for different models on the China Taibao (sh.601601) dataset.
Figure 6. Visualization of prediction results for different models on the SPDB (sh.600000) dataset.
Figure 7. Scatter plot of prediction results for different models on the China Taibao (sh.601601) dataset.
Figure 8. Scatter plot of prediction results for different models on the SPDB (sh.600000) dataset.
Figure 9. Visualization of comparative experimental results of GRU models with different input variables across the datasets. (a) RMSE metric results of the various methods. (b) MAE metric results of the various methods. (c) MAPE metric results of the various methods. (d) R2 metric results of the various methods.
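For reference, the four metrics plotted in Figure 9 (and reported in Table 2) can be computed as follows. This is a minimal sketch assuming NumPy arrays of actual and predicted closing prices; the function and variable names are our own.

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """RMSE, MAE, MAPE (percent), and R2 for a set of predictions."""
    err = y_true - y_pred
    rmse = float(np.sqrt(np.mean(err ** 2)))           # lower is better
    mae = float(np.mean(np.abs(err)))                  # lower is better
    mape = float(np.mean(np.abs(err / y_true)) * 100)  # percent; lower is better
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                         # higher is better
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}

# Example with synthetic values (not the paper's data):
rng = np.random.default_rng(0)
y = rng.uniform(10, 20, size=100)
print(evaluate(y, y + rng.normal(0, 0.5, size=100)))
```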
Table 1. F-test values for each factor in the Shanghai Stock Exchange datasets (bolded values are those that pass the Granger test).

| Factor | sh.601601 | sh.600048 | sh.601857 | sh.601985 | sh.603198 | sh.600000 |
|---|---|---|---|---|---|---|
| Opening Price | 9.752 | −0.228 | −8.304 | −31.640 | 37.710 | 16.920 |
| Highest Price | 9.046 | 0.837 | 0.215 | −32.979 | 20.381 | −19.716 |
| Lowest Price | −4.792 | 12.579 | −20.316 | −30.295 | −6.734 | 6.556 |
| Trading Volume | −2.404 | 8.502 | −26.561 | 3.524 | −32.866 | −0.444 |
| Trading Amount | −3.935 | −3.709 | 16.142 | 17.580 | 35.538 | 4.309 |
| Turnover Rate | 5.042 | −0.577 | −3.418 | 6.490 | 40.980 | 8.669 |
| Percentage Change | 6.792 | −3.110 | −8.044 | −53.835 | −2.449 | 2.660 |
| P/E Ratio | −0.894 | 7.611 | −32.997 | 9.709 | 31.841 | −20.402 |
| P/B Ratio | 8.782 | 8.408 | 13.859 | 68.545 | −55.818 | −22.093 |
| P/S Ratio | −19.365 | 6.201 | 14.168 | 14.391 | 35.568 | −41.630 |
| P/CF Ratio | 10.574 | 8.397 | −8.795 | 3.898 | 16.867 | −32.186 |
| SHCOMP | 14.464 | 13.915 | 25.187 | 9.314 | 4.797 | 8.986 |
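As a point of reference for Table 1, a Granger-style F-statistic compares the residual sum of squares of a restricted model (closing price predicted from its own lags) with that of an unrestricted model that also sees the candidate factor's lags. The sketch below implements the classical linear baseline; the paper's non-linear variant replaces the linear regressions with GRU predictors, and the function name and lag handling here are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def granger_f(y: np.ndarray, x: np.ndarray, lag: int) -> float:
    """Linear Granger F-statistic for 'x Granger-causes y' at a given lag order."""
    rows = len(y) - lag
    Y = y[lag:]
    # Restricted design: lags of y only; unrestricted adds lags of x.
    H_y = np.column_stack([y[lag - k - 1:len(y) - k - 1] for k in range(lag)])
    H_xy = np.column_stack([H_y] + [x[lag - k - 1:len(x) - k - 1] for k in range(lag)])

    def rss(H: np.ndarray) -> float:
        H1 = np.column_stack([np.ones(rows), H])        # add intercept
        beta, *_ = np.linalg.lstsq(H1, Y, rcond=None)
        r = Y - H1 @ beta
        return float(r @ r)

    rss_r, rss_u = rss(H_y), rss(H_xy)
    # F = ((RSS_r - RSS_u) / lag) / (RSS_u / (n - 2*lag - 1))
    return ((rss_r - rss_u) / lag) / (rss_u / (rows - 2 * lag - 1))

# Toy usage: y depends on lagged x, so the statistic should be large.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(y, x, lag=2))
```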
Table 2. Comparison of evaluation metrics for different models on the Shanghai Stock Exchange datasets (↑ indicates that larger values are better, ↓ indicates that smaller values are better; the best result among comparative experiments is in bold).

| Model | Stock | RMSE ↓ | MAE ↓ | MAPE ↓ | R2 ↑ |
|---|---|---|---|---|---|
| Close Price + GRU | sh.601601 | 0.0596 | 0.0453 | 14.96 | 0.9544 |
| | sh.600048 | 0.0514 | 0.0392 | 41.43 | 0.9493 |
| | sh.601857 | 0.1033 | 0.0773 | 21.96 | 0.7554 |
| | sh.601985 | 0.0648 | 0.0488 | 12.27 | 0.9356 |
| | sh.603198 | 0.0522 | 0.0381 | 18.69 | 0.9599 |
| | sh.600000 | 0.0503 | 0.0400 | 17.50 | 0.9561 |
| Potential Factors + RNN | sh.601601 | 0.0720 | 0.0551 | 17.65 | 0.9324 |
| | sh.600048 | 0.0660 | 0.0510 | 44.65 | 0.9349 |
| | sh.601857 | 0.1133 | 0.0856 | 23.63 | 0.7301 |
| | sh.601985 | 0.0782 | 0.0593 | 15.48 | 0.9181 |
| | sh.603198 | 0.0631 | 0.0468 | 21.57 | 0.9453 |
| | sh.600000 | 0.0572 | 0.0452 | 18.99 | 0.9357 |
| Potential Factors + LSTM | sh.601601 | 0.0607 | 0.0485 | 14.89 | 0.9476 |
| | sh.600048 | 0.0534 | 0.0425 | 42.78 | 0.9483 |
| | sh.601857 | 0.0839 | 0.0673 | 19.36 | 0.8552 |
| | sh.601985 | 0.0659 | 0.0504 | 11.98 | 0.9361 |
| | sh.603198 | 0.0548 | 0.0419 | 19.91 | 0.9591 |
| | sh.600000 | 0.0492 | 0.0377 | 16.53 | 0.9628 |
| Potential Factors + GRU | sh.601601 | 0.0599 | 0.0472 | 14.02 | 0.9497 |
| | sh.600048 | 0.0522 | 0.0412 | 41.96 | 0.9501 |
| | sh.601857 | 0.0806 | 0.0640 | 18.91 | 0.8617 |
| | sh.601985 | 0.0633 | 0.0481 | 11.54 | 0.9402 |
| | sh.603198 | 0.0530 | 0.0412 | 19.47 | 0.9610 |
| | sh.600000 | 0.0475 | 0.0361 | 16.21 | 0.9651 |
| Causal Factors + RNN | sh.601601 | 0.0628 | 0.0479 | 16.74 | 0.9488 |
| | sh.600048 | 0.0568 | 0.0437 | 43.47 | 0.9437 |
| | sh.601857 | 0.1100 | 0.0830 | 22.82 | 0.7431 |
| | sh.601985 | 0.0714 | 0.0537 | 14.19 | 0.9273 |
| | sh.603198 | 0.0589 | 0.0431 | 20.36 | 0.9554 |
| | sh.600000 | 0.0542 | 0.0425 | 17.67 | 0.9449 |
| Causal Factors + LSTM | sh.601601 | 0.0516 | 0.0403 | 14.53 | 0.9574 |
| | sh.600048 | 0.0504 | 0.0395 | 41.57 | 0.9518 |
| | sh.601857 | 0.0758 | 0.0613 | 17.42 | 0.8713 |
| | sh.601985 | 0.0601 | 0.0461 | 11.16 | 0.9436 |
| | sh.603198 | 0.0495 | 0.0371 | 18.39 | 0.9649 |
| | sh.600000 | 0.0417 | 0.0328 | 13.47 | 0.9699 |
| Causal Factors + GRU | sh.601601 | **0.0493** | **0.0387** | **12.67** | **0.9608** |
| | sh.600048 | **0.0487** | **0.0377** | **34.60** | **0.9544** |
| | sh.601857 | **0.0667** | **0.0510** | **14.92** | **0.8980** |
| | sh.601985 | **0.0597** | **0.0445** | **10.64** | **0.9452** |
| | sh.603198 | **0.0470** | **0.0354** | **18.07** | **0.9674** |
| | sh.600000 | **0.0383** | **0.0283** | **11.60** | **0.9746** |
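The model families compared in Table 2 differ mainly in their input channels: univariate closing price, all potential factors, or only the factors identified as causal. A minimal PyTorch sketch of a GRU regressor of this kind is shown below; the class name, hidden size, and window length are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    """One-step-ahead closing-price regressor over a sliding window of inputs."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, n_features); predict from the last hidden state.
        out, _ = self.gru(x)
        return self.head(out[:, -1, :]).squeeze(-1)

# "Close Price + GRU" corresponds to n_features=1; "Causal Factors + GRU"
# adds the factors that passed the causality test as extra input channels.
model = GRURegressor(n_features=4)
pred = model(torch.randn(8, 30, 4))  # a batch of 8 windows of length 30
```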
Table 3. Percentage performance improvement of the optimal model over the other models.

| Input Variables | Optimal Model | Comparative Model | Stock | RMSE/% | MAE/% | MAPE/% | R2/% |
|---|---|---|---|---|---|---|---|
| Potential Factors | GRU | RNN | sh.601601 | 16.81 | 14.34 | 14.9 | 1.86 |
| | | | sh.600048 | 20.91 | 19.22 | 6.02 | 1.63 |
| | | | sh.601857 | 28.86 | 25.23 | 19.97 | 18.02 |
| | | | sh.601985 | 19.05 | 18.89 | 22.22 | 2.41 |
| | | | sh.603198 | 16.01 | 11.97 | 9.74 | 1.66 |
| | | | sh.600000 | 16.96 | 20.13 | 14.64 | 3.14 |
| | | LSTM | sh.601601 | 1.32 | 2.68 | 5.84 | 0.22 |
| | | | sh.600048 | 2.25 | 3.06 | 1.92 | 0.19 |
| | | | sh.601857 | 3.93 | 4.90 | 2.32 | 0.76 |
| | | | sh.601985 | 3.95 | 4.56 | 3.67 | 0.44 |
| | | | sh.603198 | 3.28 | 1.67 | 2.21 | 0.20 |
| | | | sh.600000 | 3.46 | 4.24 | 1.94 | 0.24 |
| Causal Factors | GRU | RNN | sh.601601 | 21.50 | 19.21 | 24.31 | 1.26 |
| | | | sh.600048 | 14.26 | 13.73 | 20.40 | 1.13 |
| | | | sh.601857 | 39.36 | 38.55 | 34.62 | 20.85 |
| | | | sh.601985 | 16.39 | 17.13 | 25.02 | 1.93 |
| | | | sh.603198 | 20.20 | 17.87 | 11.25 | 1.26 |
| | | | sh.600000 | 29.34 | 33.41 | 34.35 | 3.14 |
| | | LSTM | sh.601601 | 4.46 | 3.97 | 12.80 | 0.36 |
| | | | sh.600048 | 3.37 | 4.56 | 16.77 | 0.27 |
| | | | sh.601857 | 12.01 | 16.80 | 14.35 | 3.06 |
| | | | sh.601985 | 0.67 | 3.47 | 4.66 | 0.17 |
| | | | sh.603198 | 5.05 | 4.58 | 1.74 | 0.26 |
| | | | sh.600000 | 8.15 | 13.72 | 13.88 | 0.48 |
Table 4. Percentage increase in model performance when causal variables are input versus other input variables.

| Optimal Model | Comparative Model | Stock | RMSE/% | MAE/% | MAPE/% | R2/% |
|---|---|---|---|---|---|---|
| Causal Factors + RNN | Potential Factors + RNN | sh.601601 | 12.78 | 13.07 | 5.16 | 1.76 |
| | | sh.600048 | 13.94 | 14.31 | 2.64 | 0.94 |
| | | sh.601857 | 2.91 | 3.04 | 3.43 | 1.78 |
| | | sh.601985 | 8.70 | 9.44 | 8.33 | 1.00 |
| | | sh.603198 | 6.66 | 7.91 | 5.61 | 1.07 |
| | | sh.600000 | 5.24 | 5.97 | 6.95 | 0.98 |
| Causal Factors + LSTM | Potential Factors + LSTM | sh.601601 | 14.99 | 16.91 | 2.42 | 1.03 |
| | | sh.600048 | 5.62 | 7.06 | 2.83 | 0.37 |
| | | sh.601857 | 9.65 | 8.92 | 10.02 | 1.88 |
| | | sh.601985 | 8.80 | 8.53 | 6.84 | 0.80 |
| | | sh.603198 | 9.67 | 11.46 | 7.63 | 0.60 |
| | | sh.600000 | 15.24 | 13.00 | 18.51 | 0.74 |
| Causal Factors + GRU | Close Price + GRU | sh.601601 | 17.28 | 14.57 | 15.31 | 0.67 |
| | | sh.600048 | 5.25 | 3.83 | 16.49 | 0.54 |
| | | sh.601857 | 35.43 | 34.02 | 32.06 | 18.88 |
| | | sh.601985 | 7.87 | 8.87 | 13.28 | 1.03 |
| | | sh.603198 | 9.96 | 7.09 | 3.32 | 0.78 |
| | | sh.600000 | 23.86 | 29.25 | 33.71 | 1.93 |
| | Potential Factors + GRU | sh.601601 | 17.70 | 18.01 | 9.63 | 1.17 |
| | | sh.600048 | 6.70 | 8.50 | 17.54 | 0.45 |
| | | sh.601857 | 17.25 | 20.31 | 21.10 | 4.21 |
| | | sh.601985 | 5.69 | 7.48 | 7.80 | 0.53 |
| | | sh.603198 | 11.32 | 14.08 | 7.19 | 0.67 |
| | | sh.600000 | 19.37 | 21.61 | 28.44 | 0.98 |
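The percentages in Tables 3 and 4 are relative improvements derived from Table 2: for the error metrics (RMSE, MAE, MAPE), the reduction relative to the comparative model; for R2, the gain relative to the comparative model. The short sketch below (helper name ours) reproduces, for example, the 17.28% RMSE entry for Causal Factors + GRU versus Close Price + GRU on sh.601601.

```python
def improvement(worse: float, better: float, higher_is_better: bool = False) -> float:
    """Relative improvement, in percent, of the better model over the comparison model."""
    if higher_is_better:                      # R2
        return (better - worse) / worse * 100
    return (worse - better) / worse * 100     # RMSE, MAE, MAPE

# Causal Factors + GRU vs. Close Price + GRU on sh.601601, from Table 2:
print(round(improvement(0.0596, 0.0493), 2))                          # -> 17.28 (RMSE)
print(round(improvement(0.9544, 0.9608, higher_is_better=True), 2))   # -> 0.67 (R2)
```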