  • Article
  • Open Access

25 September 2023

Forecasting of NIFTY 50 Index Price by Using Backward Elimination with an LSTM Model

1 School of Business, Woxsen University, Hyderabad 502345, India
2 Faculty of Business Administration, Beirut Arab University, Riad El Solh, Beirut 11072809, Lebanon
3 Department of Management and Humanities, University Technology PETRONAS, Seri Iskandar 32610, Malaysia
4 College of Business Administration, University of Business and Technology, 10000 Prishtina, Kosovo

Abstract

Predicting trends in the stock market is becoming complex and uncertain. In response, various artificial intelligence solutions have emerged. A significant solution for predicting the trends of a stock's volatile and chaotic nature is drawn from deep learning. The present study's objective is to compare and predict the closing price of the NIFTY 50 index through two significant deep learning methods, long short-term memory (LSTM) and backward elimination LSTM (BE-LSTM), using 15 years of daily data obtained from Bloomberg. This study has considered the variables of date, high, open, low, close, volume, as well as the 14-period relative strength index (RSI), to predict the closing price. The results of the comparative study show that backward elimination LSTM performs better than the LSTM model for predicting the NIFTY 50 index price for the next 30 days, with an accuracy of 95%. In conclusion, the proposed model has significantly improved the prediction of the NIFTY 50 index price.

1. Introduction

The recent advancements in smart tools can predict security prices using technical and fundamental analysis (Maniatopoulos et al. 2023), and can also use derivatives data analysis, including open interest and the put-call ratio. The significant development of emerging technology in Fintech has acted as a beacon in finance (Weng et al. 2018; Gao et al. 2022). Investor confidence and investment quality are both enhanced by the tremendous research opportunities available in this area (Mondal et al. 2021). Research in this area is more often conducted by corporate entities, who use asset classes to forecast asset prices on back-tested data (Cui et al. 2023). While employing these techniques has helped predict future stock prices, achieving maximum accuracy in the prediction is still a challenge. This is because index or stock momentum depends on various factors such as news flow, global and domestic market sentiment, geopolitical scenarios and tensions, FII and DII flows, domestic growth-stimulating factors, regulatory body decisions and policy, and central government and central bank policy. However, the use of the NIFTY 50 price helps market participants make better judgments and improve strategies in the futures and options (F&O) segment or in the cash market (Jain et al. 2018; Vineela and Madhav 2020). The NIFTY 50 is an index of 50 listed companies whose stocks serve as the underlying assets of derivatives on the index (Mondal et al. 2021). The highly volatile and chaotic nature of the stock market creates variation and makes it unpredictable in terms of return generation, closing price, and the influence of price action factors (Sheth and Shah 2023). The performance and return generated by the NIFTY 50 are directly proportional to the performance and return of the underlying stocks, weighted by the weightage assigned to each underlying stock in the NIFTY 50 index (Mondal et al. 2021).
Monitoring the NIFTY 50 index enables traders and investors to manage the risk and reward ratio and point risk in the available market by calculating the ATR (average true range).
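The ATR mentioned here is a standard technical indicator rather than something the paper defines; as a hedged illustration (column names and the 14-bar default are assumptions), the usual calculation smooths the true range over a rolling window:

```python
import pandas as pd

def average_true_range(df, period=14):
    """Average true range (ATR): rolling mean of the true range, where
    true range = max(high - low, |high - prev_close|, |low - prev_close|)."""
    prev_close = df["close"].shift(1)
    true_range = pd.concat(
        [
            df["high"] - df["low"],
            (df["high"] - prev_close).abs(),
            (df["low"] - prev_close).abs(),
        ],
        axis=1,
    ).max(axis=1)
    return true_range.rolling(period).mean()
```

With a 14-bar window the first 13 values are undefined; a shorter `period` is useful only for quick checks.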
In recent years, there has been growing interest in research employing artificial intelligence-based techniques for stock market prediction using NIFTY 50 data. Several machine learning models, including logistic regression (LR), support vector machines (SVM), and random forests, have been used for solving specific difficulties in time series forecasting (Abraham et al. 2022; Jin and Kwon 2021; Mehtab and Sen 2020; Parmar et al. 2018; Vijh et al. 2020). However, predicting the real-time market requires models to detect hidden data patterns in order to analyze such time-series data. While machine learning aids in discovering hidden patterns, it is not effective for all time-series data (Idrees et al. 2019; Thakkar and Chaudhari 2021). The literature has also explored neural network methods, but a simple neural network seems unable to predict market trends, and it can even degrade the model's accuracy. A possible solution is the use of deep neural networks (Olorunnimbe and Viktor 2023), which examine data attributes and take historical data and fluctuations into account to solve this problem. Deep neural networks (DNN), convolutional neural networks (CNN), and long short-term memory networks (LSTM) are three deep models that have been used efficiently in the literature to predict stock prices (Ananthi and Vijayakumar 2021; Chen et al. 2021; Dash et al. 2019; El-Chaarani 2019). Among these methods, LSTM has been employed in deep learning models for stock price prediction, and it has produced better results (Liu et al. 2021; Mehtab et al. 2020; Nelson et al. 2017; Polamuri et al. 2021; Rezaei et al. 2021; Shen and Shafiq 2020). Although these approaches are acknowledged to be highly useful in data investigation, accurate prediction remains challenging when the time series data is highly unstable and stochastic.
The current study suggests a more accurate method to predict the NIFTY 50 price for the next 30 days by utilizing LSTM and LSTM with backward elimination. A comparison has been made between these two models for predicting the closing price of the NIFTY 50 index, and the results are presented in this paper. To predict the closing price, we have considered specific variables such as date, high, open, low, close, volume, and 14-period relative strength index (RSI) values. The subsequent sections of the paper are organized as follows. Section 2 provides a concise overview of the existing research pertaining to the application of deep learning techniques in the prediction of the NIFTY 50. The proposed methodology is explained in Section 3. The discussion regarding the experimental data is presented in Section 4, while Section 5 provides the concluding remarks of the study, including an examination of its limitations and suggestions for future research.

3. Methodology

The proposed work is a new learning-based approach for NIFTY 50 price forecasting. Backward elimination with LSTM (BE-LSTM) is the primary mechanism used in this study. To understand the suggested method, it is crucial to first comprehend what backward elimination (BE) and LSTM are and how they perform. Hence, a brief description of these methods is provided in the subsequent sections.

3.1. Data Collection

This study aims to predict the closing price of the NIFTY 50 index from historical data. The data selected for the study are taken from the Bloomberg database, starting from 11 February 2005 and ending on 5 March 2021. The dataset spans approximately 15 years and covers both bull and bear phases of the Indian equity market, allowing better analysis and prediction. The study required a historical dataset from a reliable source, with input data relevant and appropriate for price prediction. The present study used Bloomberg, one of the most trusted data sources in finance, which provides historical security data in the required form. The study used 15 years of technical analysis data, including HOLC, i.e., the high, open, low, and close of daily trade. The data points considered also include the NIFTY 50 daily volume and the 14-period RSI as an indicator. The above data were used to train the model to predict the closing price of the NIFTY 50.
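For illustration, the 14-period RSI used as an input can be computed from closing prices. The sketch below is the simple-moving-average variant of RSI (Wilder's original formulation uses exponential smoothing instead); it is not the authors' code:

```python
import pandas as pd

def rsi(close, period=14):
    """14-period RSI from a closing-price series, using simple averages
    of gains and losses over the window (SMA variant, not Wilder's)."""
    delta = close.diff()
    gains = delta.clip(lower=0)      # upward moves only
    losses = -delta.clip(upper=0)    # downward moves, as positive numbers
    avg_gain = gains.rolling(period).mean()
    avg_loss = losses.rolling(period).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```

A steadily rising series saturates at RSI = 100, while balanced up and down moves give RSI = 50.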

3.2. Data Pre-Processing

Data pre-processing is essential for fitting the data to the trained model in order to obtain the NIFTY 50 price prediction. The process involves removing duplicate records and handling missing data. The NIFTY 50 dataset of 15 years was split into 80% for training the model and 20% for testing. The model then segregates the data into training and validation sets. Backward elimination is a feature selection technique used to build a predictive model. Its primary use is to eliminate features that have no correlation with the dependent variable or the predicted output. The process of backward elimination is explained in Table 2.
Table 2. Algorithm—backward elimination process.
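The elimination loop of Table 2 can be sketched as follows. This is a hedged, numpy-only illustration rather than the authors' implementation: it fits ordinary least squares, approximates two-sided p-values with a normal distribution, and repeatedly drops the column with the largest p-value until every survivor falls below the paper's 0.01 significance level.

```python
import math
import numpy as np

def ols_pvalues(X, y):
    """Fit OLS and return two-sided p-values per coefficient,
    using a normal approximation to the t distribution."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = (resid @ resid) / (n - k)                    # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    z = np.abs(beta / se)
    # two-sided p = 2 * (1 - Phi(|z|)) = 1 - erf(|z| / sqrt(2))
    return np.array([1 - math.erf(v / math.sqrt(2)) for v in z])

def backward_elimination(X, y, alpha=0.01):
    """Repeatedly drop the column with the highest p-value until all
    remaining p-values are below `alpha` (the paper's level)."""
    X = np.column_stack([np.ones(len(X)), X])   # constant/intercept column
    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        p = ols_pvalues(X[:, cols], y)
        worst = int(np.argmax(p))
        if p[worst] <= alpha:
            break
        del cols[worst]
    return cols   # surviving column indices (0 = intercept, i = feature i-1)
```

On synthetic data with one informative and one irrelevant regressor, the loop keeps the informative column and discards the irrelevant one.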
The probability value is defined as the p-value. It represents the smallest significance level at which the null hypothesis would be rejected. If the value of p is less than 0.05, the evidence for the alternative hypothesis becomes stronger.
Test statistic: Z = (S − S0) / √(S0(1 − S0)/n)
where S is the sample proportion, S0 is the proportion of the assumed population in the null hypothesis, and n is the sample size. The p-value level can be obtained from the obtained Z value.
Step-by-step breakdown of the process.
  • Step 1. Formulate the Hypotheses:
  • Null hypothesis (H0): The proportions are equal; S = S0.
  • Alternative hypothesis (H1): The proportions are not equal; S ≠ S0.
  • Step 2. Calculate the Sample Proportions:
Calculate the sample proportions S and S0:
  • S is the proportion of success in the sample.
  • S0 is the hypothesized proportion of success (given in the null hypothesis).
  • Step 3. Calculate the Standard Error:
The standard error (SE) of the difference between two proportions can be calculated as:
SE = √(S(1 − S)/n + S0(1 − S0)/n0)
where n is the sample size, and n0 is the reference sample size.
  • Step 4. Calculate the Z-score:
The Z-score measures, in units of standard error, how far the observed sample proportion lies from the hypothesized proportion under the null hypothesis. It is calculated as: Z = (S − S0)/SE.
  • Step 5. Determine the Critical Value or p-value:
Depending on the selected significance level (α), determine the critical value by referencing the standard normal distribution table. Alternatively, you can compute the p-value linked to the Z-score using the standard normal distribution.
  • Step 6. Make a Decision:
If using critical values, compare the calculated Z-score to the critical value. If using p-values, compare the p-value to your chosen significance level (α). If the p-value is less than α, reject the null hypothesis. If the p-value is greater than or equal to α, fail to reject the null hypothesis.
  • Step 7. Interpretation:
If the null hypothesis is rejected, this suggests that there is a significant difference between the proportions S and S0. If the null hypothesis is not rejected, there is not enough evidence to conclude that the proportions are significantly different.
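The seven steps above can be run end to end with hypothetical numbers (S = 0.57, S0 = 0.50, n = 200 are invented purely for the example):

```python
import math

def one_proportion_ztest(s, s0, n):
    """Z statistic and two-sided p-value for H0: S = S0, using
    Z = (S - S0) / sqrt(S0 (1 - S0) / n) and a standard normal reference."""
    se = math.sqrt(s0 * (1 - s0) / n)
    z = (s - s0) / se
    p = 1 - math.erf(abs(z) / math.sqrt(2))   # = 2 * (1 - Phi(|z|))
    return z, p
```

For S = 0.57, S0 = 0.50, n = 200 this gives Z ≈ 1.98 and p just under 0.05, so at α = 0.05 the null hypothesis is rejected.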

3.3. LSTM Model

Hochreiter and Schmidhuber designed long short-term memory (LSTM) to overcome speed and stability problems in recurrent neural networks (RNNs). An LSTM can retain information from the beginning of a sequence and utilize it to make future predictions. In our network, the vector length assigned to each node is 64, and there is just one hidden layer. The data dimensions determine the number of nodes in the input layer. The input layer's nodes are linked to the hidden layer's nodes through synapses. Each weight is a coefficient in the relationship between an input node and a hidden node, acting as a signal decision maker (Ribeiro et al. 2021; Selvamuthu et al. 2019). The modification of weights is a normal part of the learning process; the network assigns suitable weights to each synapse as learning completes. The nodes of the hidden layer, with activation functions such as the sigmoid, ReLU, or tangent hyperbolic (tanh) function, determine whether each node should be activated. When the softmax function is used at the output, this transformation provides the lowest error value when comparing the trained model and the test data. The output layer comprises the values received after this transformation (Xie et al. 2021). If the results obtained are not optimal, the back propagation (BP) procedure can be used: it updates the weights of the hidden layers by sending information back from the output, reducing the error over the given set of epochs (Nelson et al. 2017; Mehtab et al. 2020; Liu et al. 2021).
This approach may be repeated to improve forecasts and minimize prediction errors; the model is considered trained once this procedure is completed. Recurrent neural networks (RNNs) are neural networks that anticipate future values based on previous observation sequences. This type of network makes use of previously learned data to estimate future trends, so earlier stages of the data must be memorized to anticipate future values. In this case, the hidden layer serves as a repository for the primary information in the sequentially acquired data. The term "recurrent" describes the process of forecasting future data using previous portions of sequential data.
RNN cannot store memory for long (Shen and Shafiq 2020). The usage of long short-term memory (LSTM) proved to be very useful in foreseeing cases with long-time data based on “memory line”. In LSTM, the earlier memorization stage can be performed through gates by incorporating memory lines. Each node is substituted with LSTM cells in hidden layers. Each cell is equipped with a forget gate (et), an input gate (jt), and an output gate (mt). The functions of the gates are as follows: the forget gate is used to eradicate the data from the cell state, the input gate is used to add data to the cell state, and the output gate holds the output of the LSTM cell, as shown in Figure 1.
Figure 1. A sample representation of the LSTM model (Sezer et al. 2020).
The goal is to control the state of each cell. The forget gate (et) can output a number between 0 and 1. When the output is 1, it signals to hold the data, whereas a 0 signals to ignore the data, and et represents the vector values ranging from 0 to 1, corresponding to each number in the cell, At−1.
et = σ(Pe[bt−1,We] + qe)
In Equation (2), Pe represents the weight matrix associated with the forget gate, and σ is the sigmoid function. The memory (input) gate (jt) chooses the data to be stored in the cell: a sigmoid input layer determines the values to be changed, after which a tanh layer adds a new candidate to the state. The output gate (mt) determines the output of each cell. The output value is based on the state of the cell, along with the filtered and most recent data.
jt = σ(Pj[bt−1,Wj] + qj)
mt = σ(Pm[bt−1,Wm] + qm)
bt = mt tanh(At)
where We, Wj, and Wm are weight matrices, qe, qj, and qm are bias vectors, bt is the memory cell output at time t, and et corresponds to the forget gate value. Pj represents the weight matrix associated with the input gate, and Pm represents the weight matrix associated with the output gate. At represents the current cell state, jt represents the input gate value, and mt represents the output gate value.
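A single step of the gate equations above can be sketched in numpy using the paper's symbols (e, j, m for the gates, A for the cell state, b for the cell output). Note that the candidate-state weights Pa and qa are not named in the text and are introduced here as an assumption for completeness:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(W_t, b_prev, A_prev, p):
    """One LSTM step in the paper's notation: e_t = forget gate,
    j_t = input gate, m_t = output gate, A_t = cell state, b_t = output."""
    z = np.concatenate([b_prev, W_t])            # concatenation [b_{t-1}, W_t]
    e = sigmoid(p["Pe"] @ z + p["qe"])           # forget gate, Equation (2)
    j = sigmoid(p["Pj"] @ z + p["qj"])           # input gate
    m = sigmoid(p["Pm"] @ z + p["qm"])           # output gate
    A_cand = np.tanh(p["Pa"] @ z + p["qa"])      # candidate cell state (assumed)
    A = e * A_prev + j * A_cand                  # updated cell state
    b = m * np.tanh(A)                           # b_t = m_t * tanh(A_t)
    return b, A
```

Because the output gate is a sigmoid and tanh is bounded, each component of b stays strictly inside (−1, 1).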

3.4. Backward Elimination with LSTM (BE-LSTM)

LSTMs are incredibly effective in solving sequence prediction problems because they can retain old data. Hence, LSTM can be a good choice for our prediction problem, as the historical price is vital in determining its future price. In related research, the LSTM model was employed for predicting the stock price. Figure 2 represents the processing stages in developing an LSTM-based stock prediction model, and the algorithm for designing the LSTM model is given in Table 3.
Figure 2. LSTM model without feature selection.
Table 3. Algorithm—building the LSTM model.
In the proposed method, backward elimination is used as the feature selection method, and it is performed after the data pre-processing stage (Figure 3). This is done to determine which of the independent variables (date, open, high, low, close, volume, value, trades, RSI, and RSI average) are highly correlated with the dependent variable. The selected variables are taken as inputs and sliced into training and test sets. Finally, they are entered into the LSTM model for prediction. A brief description of the BE-LSTM algorithm is given in Table 4. The backward elimination method is expected to decrease computational complexity and increase accuracy.
Figure 3. BE—LSTM model.
Table 4. Algorithm—building the BE-LSTM model.
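Two plumbing steps implied by the pipeline of Table 4, slicing the selected features into look-back windows for the LSTM and splitting them chronologically 80/20, can be sketched as follows (a hedged illustration; the function names are assumptions, and the model itself is described in Tables 3 and 4):

```python
import numpy as np

def make_windows(features, target, lookback):
    """Slice a multivariate series into (look-back window, next value)
    pairs suitable for LSTM training."""
    X, y = [], []
    for t in range(lookback, len(features)):
        X.append(features[t - lookback:t])
        y.append(target[t])
    return np.array(X), np.array(y)

def chrono_split(X, y, train_frac=0.8):
    """Chronological 80/20 split: no shuffling, so the test set never
    leaks future information into training."""
    cut = int(len(X) * train_frac)
    return X[:cut], X[cut:], y[:cut], y[cut:]
```

Keeping the split chronological matters for financial series: a random split would let the model train on observations that postdate its test windows.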

3.5. Evaluation Metrics

Mean square error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE) are used to assess the performance of the proposed LSTM- and BE-LSTM-based models (Rezaei et al. 2021; Polamuri et al. 2021). The formulas for these metrics are as follows:
MSE = (1/m) Σᵢ₌₁ᵐ (bᵢ − b̂ᵢ)²
RMSE = √[(1/m) Σᵢ₌₁ᵐ (bᵢ − b̂ᵢ)²]
MAPE = (1/m) Σᵢ₌₁ᵐ |(bᵢ − b̂ᵢ)/bᵢ| × 100
where bᵢ is the actual value, b̂ᵢ is the predicted value, and m is the number of observations.
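The three metrics can be written directly from the formulas (b = actual values, b̂ = predictions); this is a generic sketch, not the authors' code:

```python
import numpy as np

def mse(y, y_hat):
    """Mean square error."""
    return np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)

def rmse(y, y_hat):
    """Root mean square error: square root of the MSE."""
    return np.sqrt(mse(y, y_hat))

def mape(y, y_hat):
    """Mean absolute percentage error, in percent."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return np.mean(np.abs((y - y_hat) / y)) * 100
```

For example, actuals [100, 200] against predictions [110, 190] give MSE = 100, RMSE = 10, and MAPE = 7.5%.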
  • Accuracy
Accuracy serves as a metric that offers a broad overview of a model’s performance across all classes. It proves particularly valuable when all classes have equal significance. This metric is computed by determining the proportion of correct predictions in relation to the total number of predictions made.
Accuracy = (True positives + True negatives) / (True positives + True negatives + False positives + False negatives)
Calculating accuracy using scikit-learn, based on the previously computed confusion matrix, is performed as follows: we store the result in the variable ‘acc’ by dividing the sum of true positives and true negatives by the sum of all the values within the matrix.
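The computation described above, the confusion-matrix diagonal divided by the matrix total, can be sketched without scikit-learn, assuming scikit-learn's [[TN, FP], [FN, TP]] layout for a binary problem:

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Accuracy from a 2x2 confusion matrix laid out as
    [[TN, FP], [FN, TP]]: correct predictions sit on the diagonal."""
    cm = np.asarray(cm)
    return np.trace(cm) / cm.sum()
```

For a matrix [[50, 5], [10, 35]], the 85 correct predictions out of 100 give an accuracy of 0.85.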
  • Precision
Precision is computed by taking the proportion of correctly classified positive samples relative to the total number of samples classified as positive, whether correctly or incorrectly. Precision serves as a metric for gauging the model’s accuracy when it comes to classifying a sample as positive.
Precision = True positives / (True positives + False positives)
When the model generates numerous incorrect positive classifications, or only a few correct positive classifications, this elevates the denominator and results in a lower precision score. Conversely, precision is higher under the following conditions:
  • The model produces a substantial number of correct positive classifications, thus maximising the true positives.
  • The model minimizes the number of incorrect positive classifications, thereby reducing false positives.
  • Recall
Recall is determined by the ratio of correctly classified positive samples to the total number of positive samples. It quantifies the model’s capacity to identify positive samples. A higher recall score signifies a greater ability to detect positive samples.
Recall = True positives / (True positives + False negatives)
Recall focuses exclusively on the classification of positive samples and, unlike precision, is independent of how negative samples are classified. If the model categorizes all positive samples as positive, even while incorrectly labelling all negative samples as positive, the recall will still register at 100%.
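Precision and recall can be read off the same confusion-matrix layout; the second case in the usage note reproduces the 100% recall pathology just described:

```python
import numpy as np

def precision_recall(cm):
    """Precision and recall from a 2x2 confusion matrix laid out as
    [[TN, FP], [FN, TP]] (scikit-learn convention)."""
    (tn, fp), (fn, tp) = np.asarray(cm)
    precision = tp / (tp + fp)   # of predicted positives, how many are right
    recall = tp / (tp + fn)      # of actual positives, how many are found
    return precision, recall
```

A model that labels everything positive, e.g. [[0, 20], [0, 30]], still scores recall = 1.0 despite misclassifying every negative sample, which is why precision is reported alongside it.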

4. Results and Discussion

The historical data of the NIFTY 50 were extracted from Yahoo Finance. The data cover the period from 20 January 2005 to 5 March 2021 and consist of 3986 data points and 8 attributes: date, open, high, low, volume, value, trades, and RSI average (details are shown in Table 5). By utilizing backward elimination (BE), this study has identified which independent variables are significantly correlated with the dependent variable.
Table 5. Detail of attributes.
Backward elimination attempts to remove the less important variables from the model. It entails repeatedly fitting the model, determining each variable's importance, and eliminating the least relevant variable.
By including a constant term (a column of 1s) in the dataset, we enable the model to include an intercept that reflects the projected value of y when all independent variables are set to zero. When the independent variables have no influence, the baseline level of y is captured by this intercept term.
When a variable is removed from the model, its relevance is effectively determined by observing how the overall model performance (often evaluated by a metric such as the p-value, AIC, or R-squared) changes; this is why adding the constant term is very important during backward elimination. Without the constant term, eliminating a variable can lead to a model that assumes the dependent variable starts at zero in the absence of all other factors, which may not hold in many real-world cases.
Initially, we need to confirm all the independent variables in the backward elimination algorithm, as shown in Table 6.
Table 6. Backward elimination (Step 1).
The variable x contains all 3986 rows and 9 columns of the data (attribute indices 0-8). In Table 6, the variable x5 has the highest p-value of 0.478 compared to the other variables, and it is also higher than the defined significance level of 0.01. Thus, x5 is eliminated. In Step 2, the backward elimination method is repeated with the remaining variables, and the results are shown in Table 7.
Table 7. Backward elimination (Step 2).
In Table 7, x contains all the rows, and the columns are [0,1,2,3,4,6,7,8]. After recomputing the p-values in the backward elimination, x5 (i.e., the 6th column) again showed the highest p-value, 0.020, compared to the other variables, and it is above the significance level of 0.01. Thus, x5 is eliminated, and the process is repeated with the remaining variables.
In Table 8, x contains all the rows, and the columns are [0,1,2,3,4,7,8]. After recomputing the p-values in the backward elimination, all the remaining variables' p-values are less than the significance level, so the backward elimination process stops. The more highly correlated features identified using the backward elimination method are shown in Table 9.
Table 8. Backward elimination (Step 3).
Table 9. More correlated features were identified using backward elimination.
In the current study, the selected variables are then fed to the LSTM model, and its stock prediction accuracy is calculated. To validate and compare the proposed model's effectiveness, its accuracy is compared with the output of the LSTM model without the backward elimination method, i.e., with all the variables fed into the LSTM model as input. While designing the LSTM model, two hidden layers were utilized with the ReLU activation function. The benefit of utilizing the ReLU function is that it does not trigger all the neurons at once; hence, it takes less time to process. In the first hidden layer, 64 nodes are used, and in the second hidden layer, 32 nodes are used. The total number of trainable parameters in this model is 330,369, as shown in Figure 4.
Figure 4. LSTM trained model.
To fit the model, we have considered 1030 and 50 epochs, with a batch size of 16. We observed the model performance by varying the epochs, and the performance measures (MSE, RMSE, and MAPE) were calculated in each case. The classification results are shown in Table 10. When comparing the performance of the LSTM model before and after employing the backward elimination method, it was observed that backward elimination improved the classification performance significantly. Moreover, the accuracy of our proposed method has also been compared with the accuracy of several methods reported in the previous literature (Table 11), in which several classification models have been used. Ariyo et al. (2014) used the ARIMA model and achieved an accuracy of 90%, a precision of 91%, and a recall of 92%. Khaidem et al. (2016) utilized the random forest algorithm and achieved an accuracy of 83%, a precision of 82%, and a recall of 81%. Asghar et al. (2019) built a multiple regression model and achieved an accuracy of 94%, a precision of 95%, and a recall of 93%. Finally, Shen and Shafiq (2020) utilized FE + RFE + PCA + LSTM and achieved an accuracy of 93%, a precision of 96%, and a recall of 96%. Notably, the proposed method achieved the best performance, with accuracy, precision, and recall scores of 95%, 97%, and 96%, respectively.
Table 10. Performance of backward elimination LSTM (BE-LSTM) compared with that of LSTM.
Table 11. Comparison of the proposed solution with those in related works.
Figure 5 shows the closing price of the NIFTY 50 index, and Figure 6 shows that in the 50-epoch model, the training loss is 1.5% and the validation loss is 2.5%. Hence, the BE-LSTM model's performance is suitable for forecasting the future price of the NIFTY 50. Figure 6 also indicates that, with the data taken as a look-back period over n = 3986 points, the outcome retained a training loss of 1.5% and a validation loss of 2.5%. The LSTM and BE-LSTM testing mechanism is a reliable model for forecasting security prices. The historical data from the NIFTY 50 index provided realistic output after testing on both models, indicating a more reliable prediction using the BE-LSTM, with a standard deviation occurrence of 5% in the given sample, treated as real independent data.
Figure 5. Closing price of the NIFTY 50 index.
Figure 6. Training and validation loss of both LSTM and BE-LSTM.
The input data, including high, open, low, and close, with relative strength index (RSI) and trades, trained the model to predict the future price over the next 30 days for the NIFTY 50 index. The model’s accuracy suggests that it has achieved a good outcome for predicting the closing price of the NIFTY 50 index. The BE-LSTM is more accurate than the LSTM for price prediction.
The standard deviation of the outcome, compared with the input data and the validation, indicates the number of errors in the model. We considered 3986 data points in the model analysis to train the LSTM and the backward elimination with LSTM. The significant output reflects the average epochs of around 5%, which are considerable remarks for building confidence in the tested model. From Figure 6, we observed that the proposed BE-LSTM performs well compared to the LSTM model. This is because BE-LSTM helps in eliminating the irrelevant features or input nodes, reducing complexity and enhancing efficiency, yielding faster training times and reduced memory requirements, features which are lacking in LSTM. Secondly, as irrelevant nodes are eliminated, the remaining features become more important for capturing the temporal dependencies of the sequence. This helps to provide insights into which features contribute significantly to the model’s predictions. By understanding the influential features, we can make more informed decisions, identify critical factors, and improve the overall interpretability of the model.
Finally, Figure 7 compares three closing prices from 5 March 2021 to 31 March 2021. It clearly depicts that BE-LSTM performs well compared to the conventional LSTM model.
Figure 7. The closing price for the next 30 days (differences between the original close value, the LSTM, and the BE-LSTM).

5. Conclusions and Future Scope

The emerging technology in the financial field, along with its combination with artificial intelligence, is an evolving area of research. This paper proposes a more suitable AI-based method rather than the traditional approach (fundamental analysis, technical analysis, and data analysis) for predicting the NIFTY 50 index price for the next 30 days using the BE-LSTM model.
This work on determining the NIFTY 50 index price showcases the comparison of LSTM and BE-LSTM on an identical dataset. The BE-LSTM, whose results are much closer to the original close price, is gaining favor in the area of stock price prediction, while the LSTM showed a deviation from the actual price in its predicted output. The results suggest that the BE-LSTM model showed improved accuracy compared to the LSTM method. In the future, the backward elimination method can be employed with other deep learning methods, such as GANs with varied hyperparameters, to investigate alternative algorithmic improvements.
The financial industry is now inclining towards the adoption of technology in various areas, including portfolio management, wealth management, equity analysis, and derivative research. The brokerage houses, as well as fund management and portfolio management services, have struggled to analyze asset prices. This study will help those involved in the finance industry, along with policy makers, to use emerging technology like artificial intelligence in finance. It will also aid the policy makers in analyzing the market sentiment and trends using appropriate algorithmic trading, employing predictive models to create investor awareness and enhance the number of market participants. It is crucial for regulators and policy makers to understand the volatility of the stock market in order to steer the economy toward development, to ensure the smooth operation of the stock exchange, and to encourage more investors—particularly retail investors—to engage in the market. As a result, stronger investor protection measures, as well as more investor education initiatives, will be adopted.
In addition, investors want to generate a significant return on a less risky investment. Therefore, before making an investment decision, Indian investors are required to carefully study and analyze the stock market volatility using publicly accessible information, as well as many other impacts, as this analysis is essential for determining the effectiveness and volatility of stock markets. This study will help investors manage risk by identifying potential market downturns through artificial intelligence, enabling the adjustment of portfolios and the minimization of loss.

Author Contributions

Conceptualization: all authors; methodology: all authors; software: all authors; validation: all authors; formal analysis: all authors; investigation: all authors; resources: all authors; data curation: all authors; writing—original draft preparation: all authors; writing—review and editing: all authors; visualization: all authors; supervision: S.H.J.; project administration: S.H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are available from El-Chaarani upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abraham, Rebecca, Mahmoud El Samad, Amer M. Bakhach, Hani El-Chaarani, Ahmad Sardouk, Sam El Nemar, and Dalia Jaber. 2022. Forecasting a Stock Trend Using Genetic Algorithm and Random Forest. Journal of Risk and Financial Management 15: 188. [Google Scholar] [CrossRef]
  2. Ananthi, M., and K. Vijayakumar. 2021. Stock market analysis using candlestick regression and market trend prediction (CKRM). Journal of Ambient Intelligence and Humanized Computing 12: 4819–26. [Google Scholar] [CrossRef]
  3. Ariyo, Adebiyi A., Adewumi O. Adewumi, and Charles K. Ayo. 2014. Stock price prediction using the ARIMA model. Paper presented at 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, Cambridge, UK, March 26–28. [Google Scholar]
  4. Asghar, Muhammad Zubair, Fazal Rahman, Fazal Masud Kundi, and Shakeel Ahmad. 2019. Development of stock market trend prediction system using multiple regression. Computational and Mathematical Organization Theory 25: 271–301. [Google Scholar] [CrossRef]
  5. Bathla, Gourav, Rinkle Rani, and Himanshu Aggarwal. 2023. Stocks of year 2020: Prediction of high variations in stock prices using LSTM. Multimedia Tools and Applications 7: 9727–43. [Google Scholar] [CrossRef]
  6. Chen, Wei, Haoyu Zhang, Mukesh Kumar Mehlawat, and Lifen Jia. 2021. Mean-variance portfolio optimization using machine learning-based stock price prediction. Applied Soft Computing 100: 106943. [Google Scholar] [CrossRef]
  7. Cui, Tianxiang, Shusheng Ding, Huan Jin, and Yongmin Zhang. 2023. Portfolio constructions in cryptocurrency market: A CVaR-based deep reinforcement learning approach. Economic Modelling 119: 106078. [Google Scholar] [CrossRef]
  8. Dash, Rajashree, Sidharth Samal, Rasmita Dash, and Rasmita Rautray. 2019. An integrated TOPSIS crow search based classifier ensemble: In application to stock index price movement prediction. Applied Soft Computing 85: 105784. [Google Scholar] [CrossRef]
  9. El-Chaarani, Hani. 2019. The Impact of Oil Prices on Stocks Markets: New Evidence During and After the Arab Spring in Gulf Cooperation Council Economies. International Journal of Energy Economics and Policy 9: 214–23. [Google Scholar] [CrossRef]
  10. Gao, Ruize, Xin Zhang, Hongwu Zhang, Quanwu Zhao, and Yu Wang. 2022. Forecasting the overnight return direction of stock market index combining global market indices: A multiple-branch deep learning approach. Expert Systems with Applications 194: 116506. [Google Scholar] [CrossRef]
  11. Idrees, Sheikh Mohammad, M. Afshar Alam, and Parul Agarwal. 2019. A prediction approach for stock market volatility based on time series data. IEEE Access 7: 17287–98. [Google Scholar] [CrossRef]
  12. Virtanen, Ilkka, and Paavo Yli-Olli. 1987. Forecasting stock market prices in a thin security market. Omega 15: 145–55. [Google Scholar]
  13. Jain, Vikalp Ravi, Manisha Gupta, and Raj Mohan Singh. 2018. Analysis and prediction of individual stock prices of financial sector companies in NIFTY 50. International Journal of Information Engineering and Electronic Business 2: 33–41. [Google Scholar] [CrossRef]
  14. Jiang, Weiwei. 2021. Applications of deep learning in stock market prediction: Recent progress. Expert Systems with Applications 184: 115537. [Google Scholar] [CrossRef]
  15. Jin, Guangxun, and Ohbyung Kwon. 2021. Impact of chart image characteristics on stock price prediction with a convolutional neural network. PLoS ONE 16: e0253121. [Google Scholar] [CrossRef]
  16. Jing, Nan, Zhao Wu, and Hefei Wang. 2021. A hybrid model integrating deep learning with investor sentiment analysis for stock price prediction. Expert Systems with Applications 178: 115019. [Google Scholar] [CrossRef]
  17. Khaidem, Luckyson, Snehanshu Saha, and Sudeepa Roy Dey. 2016. Predicting the direction of stock market prices using random forest. arXiv arXiv:1605.00003. [Google Scholar]
  18. Kurani, Akshit, Pavan Doshi, Aarya Vakharia, and Manan Shah. 2023. A comprehensive comparative study of artificial neural network (ANN) and support vector machines (SVM) on stock forecasting. Annals of Data Science 10: 183–208. [Google Scholar] [CrossRef]
  19. Liu, Keyan, Jianan Zhou, and Dayong Dong. 2021. Improving stock price prediction using the long short-term memory model combined with online social networks. Journal of Behavioral and Experimental Finance 30: 100507. [Google Scholar] [CrossRef]
  20. Long, Wen, Zhichen Lu, and Lingxiao Cui. 2019. Deep learning-based feature engineering for stock price movement prediction. Knowledge-Based Systems 164: 163–73. [Google Scholar] [CrossRef]
  21. Mahajan, Vanshu, Sunil Thakan, and Aashish Malik. 2022. Modeling and forecasting the volatility of NIFTY 50 using GARCH and RNN models. Economies 5: 102. [Google Scholar] [CrossRef]
  22. Mahboob, Khalid, Muhammad Huzaifa Shahbaz, Fayyaz Ali, and Rohail Qamar. 2023. Predicting the Karachi Stock Price Index with an Enhanced Multi-Layered Sequential Stacked Long-Short-Term Memory Model. VFAST Transactions on Software Engineering 2: 249–55. [Google Scholar]
  23. Maniatopoulos, Andreas-Antonios, Alexandros Gazis, and Nikolaos Mitianoudis. 2023. Technical analysis forecasting and evaluation of stock markets: The probabilistic recovery neural network approach. International Journal of Economics and Business Research 1: 64–100. [Google Scholar] [CrossRef]
  24. Mehtab, Sidra, and Jaydip Sen. 2020. Stock price prediction using convolutional neural networks on a multivariate time series. arXiv arXiv:2001.09769. [Google Scholar]
  25. Mehtab, Sidra, Jaydip Sen, and Abhishek Dutta. 2020. Stock price prediction using machine learning and LSTM-based deep learning models. In Symposium on Machine Learning and Metaheuristics Algorithms, and Applications. Singapore: Springer, pp. 88–106. [Google Scholar]
  26. Mondal, Bhaskar, Om Patra, Ashutosh Satapathy, and Soumya Ranjan Behera. 2021. A Comparative Study on Financial Market Forecasting Using AI: A Case Study on NIFTY. In Emerging Technologies in Data Mining and Information Security. Singapore: Springer, vol. 1286, pp. 95–103. [Google Scholar]
  27. Nelson, David M. Q., Adriano C. M. Pereira, and Renato A. de Oliveira. 2017. Stock market’s price movement prediction with LSTM neural networks. Paper presented at 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, May 14–19; New York: IEEE, pp. 1419–26. [Google Scholar]
  28. Olorunnimbe, Kenniy, and Herna Viktor. 2023. Deep learning in the stock market—A systematic survey of practice, backtesting, and applications. Artificial Intelligence Review 56: 2057–109. [Google Scholar] [CrossRef]
  29. Östermark, Ralf. 1989. Predictability of Finnish and Swedish stock returns. Omega 17: 223–36. [Google Scholar] [CrossRef]
  30. Oukhouya, Hassan, and Khalid El Himdi. 2023. Comparing Machine Learning Methods—SVR, XGBoost, LSTM, and MLP—For Forecasting the Moroccan Stock Market. Computer Sciences and Mathematics Forum 1: 39. [Google Scholar]
  31. Parmar, Ishita, Navanshu Agarwal, Sheirsh Saxena, Ridam Arora, Shikhin Gupta, Himanshu Dhiman, and Lokesh Chouhan. 2018. Stock market prediction using machine learning. Paper presented at 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, December 15–17; New York: IEEE, pp. 574–76. [Google Scholar]
  32. Polamuri, Subba Rao, Kudipudi Srinivas, and A. Krishna Mohan. 2021. Multi-Model Generative Adversarial Network Hybrid Prediction Algorithm (MMGAN-HPA) for stock market prices prediction. Journal of King Saud University-Computer and Information Sciences 9: 7433–44. [Google Scholar] [CrossRef]
  33. Rezaei, Hadi, Hamidreza Faaljou, and Gholamreza Mansourfar. 2021. Stock price prediction using deep learning and frequency decomposition. Expert Systems with Applications 169: 114332. [Google Scholar] [CrossRef]
  34. Ribeiro, Gabriel Trierweiler, André Alves Portela Santos, Viviana Cocco Mariani, and Leandro dos Santos Coelho. 2021. Novel hybrid model based on echo state neural network applied to the prediction of stock price return volatility. Expert Systems with Applications 184: 115490. [Google Scholar] [CrossRef]
  35. Sarode, Sumeet, Harsha G. Tolani, Prateek Kak, and C. S. Lifna. 2019. Stock price prediction using machine learning techniques. Paper presented at 2019 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, February 21–22; New York: IEEE, pp. 177–81. [Google Scholar]
  36. Selvamuthu, Dharmaraja, Vineet Kumar, and Abhishek Mishra. 2019. Indian stock market prediction using artificial neural networks on tick data. Financial Innovation 5: 16. [Google Scholar] [CrossRef]
  37. Sezer, Omer Berat, Mehmet Ugur Gudelek, and Ahmet Murat Ozbayoglu. 2020. Financial time series forecasting with deep learning: A systematic literature review: 2005–19. Applied Soft Computing 90: 106181. [Google Scholar] [CrossRef]
  38. Sharma, Dhruv, Amisha, Pradeepta Kumar Sarangi, and Ashok Kumar Sahoo. 2023. Analyzing the Effectiveness of Machine Learning Models in Nifty50 Next Day Prediction: A Comparative Analysis. Paper presented at 2023 3rd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Noida, India, May 12–13; New York: IEEE, pp. 245–50. [Google Scholar]
  39. Shen, Jingyi, and M. Omair Shafiq. 2020. Short-term stock market price trend prediction using a comprehensive deep learning system. Journal of Big Data 7: 1–33. [Google Scholar] [CrossRef]
  40. Sheth, Dhruhi, and Manan Shah. 2023. Predicting stock market using machine learning: Best and accurate way to know future stock prices. International Journal of System Assurance Engineering and Management 14: 1–18. [Google Scholar] [CrossRef]
  41. Sisodia, Pushpendra Singh, Anish Gupta, Yogesh Kumar, and Gaurav Kumar Ameta. 2022. Stock market analysis and prediction for NIFTY50 using LSTM Deep Learning Approach. Paper presented at 2022 2nd International Conference on Innovative Practices in Technology and Management (ICIPTM), Pradesh, India, February 23–25; New York: IEEE, vol. 2, pp. 156–61. [Google Scholar]
  42. Thakkar, Ankit, and Kinjal Chaudhari. 2021. Fusion in stock market prediction: A decade survey on the necessity, recent developments, and potential future directions. Information Fusion 65: 95–107. [Google Scholar] [CrossRef]
  43. Vaisla, Kunwar Singh, and Ashutosh Kumar Bhatt. 2010. An analysis of the performance of artificial neural network technique for stock market forecasting. International Journal on Computer Science and Engineering 2: 2104–9. [Google Scholar]
  44. Vijh, Mehar, Deeksha Chandola, Vinay Anand Tikkiwal, and Arun Kumar. 2020. Stock closing price prediction using machine learning techniques. Procedia Computer Science 167: 599–606. [Google Scholar] [CrossRef]
  45. Vineela, P. Jaswanthi, and V. Venu Madhav. 2020. A Study on Price Movement of Selected Stocks in NSE (NIFTY 50) Using LSTM Model. Journal of Critical Reviews 7: 1403–13. [Google Scholar]
  46. Weng, Bin, Lin Lu, Xing Wang, Fadel M. Megahed, and Waldyn Martinez. 2018. Predicting short-term stock prices using ensemble methods and online data sources. Expert Systems with Applications 112: 258–73. [Google Scholar] [CrossRef]
  47. Xie, Chen, Deepu Rajan, and Quek Chai. 2021. An interpretable Neural Fuzzy Hammerstein-Wiener network for stock price prediction. Information Sciences 577: 324–35. [Google Scholar] [CrossRef]
  48. Zaheer, Shahzad, Nadeem Anjum, Saddam Hussain, Abeer D. Algarni, Jawaid Iqbal, Sami Bourouis, and Syed Sajid Ullah. 2023. A Multi Parameter Forecasting for Stock Time Series Data Using LSTM and Deep Learning Model. Mathematics 11: 590. [Google Scholar] [CrossRef]
  49. Zhang, Jing, Shicheng Cui, Yan Xu, Qianmu Li, and Tao Li. 2018. A novel data-driven stock price trend prediction system. Expert Systems with Applications 97: 60–69. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
