Application of Gated Recurrent Unit (GRU) Neural Network for Smart Batch Production Prediction
Abstract
1. Introduction
2. Data Preprocessing
2.1. Data Set
2.2. Workflow
3. Methodology
3.1. RNN, LSTM, and GRU
3.2. “SingleStep”, “Iterative”, and “PartIterative” Prediction Schemes
3.3. The Architecture of the GRU Model
3.4. Random Forest
3.5. Evaluation Indices
4. Results and Discussion
4.1. Verification of the GRU Model with the SingleStep Approach
4.2. Application of the GRU Model with the Iterative Approach
4.3. GRU_RF Model for Improving “Iterative” Accuracy
- Step 1. Training the GRU model. After data preprocessing (as described in Figure 2), the data set is split into three parts: the GRU model is trained on the training wells and optimized on the validation wells.
- Step 2. Preliminary forecasting with the GRU model under the Iterative prediction scheme. Based on the historical production data in the history window (HW), the time-series production forecast is obtained by continuous iteration.
- Step 3. Training the RF model. To correct the GRU model's prediction errors, which arise from iteration and from the diversity of formation and fracturing conditions, the RF model is trained with the errors between the GRU-predicted production and the true production as its output. Notably, the RF model is fed with the same training wells as the GRU model.
- Step 4. Second forecasting with the RF model. Given production, operations, formation, and fracturing parameters as inputs, the correction errors are predicted by the RF model.
- Step 5. Forecasting test wells with the GRU_RF model. The goal is to apply the trained model to the whole well block to forecast the long-term production of new wells. Hence, given the HW as input, the predicted production (Step 2) and the predicted errors (Step 4) are obtained directly, without retraining the model, as shown by the dark red flow line in Figure 16. The GRU_RF predictions are obtained by subtracting the RF-predicted errors from the GRU-predicted production.
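The five steps above can be sketched end to end. This is a minimal toy illustration, not the authors' implementation: a mean-of-window lambda stands in for the trained GRU, the synthetic decline curve and the two RF features (iteration index, cumulative HW production) are hypothetical, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def iterative_forecast(one_step_model, history, n_steps, window=14):
    """Step 2: roll a one-step predictor forward, feeding each new
    prediction back into the input window (the Iterative scheme)."""
    buf = list(history[-window:])
    preds = []
    for _ in range(n_steps):
        nxt = one_step_model(np.array(buf[-window:]))
        preds.append(nxt)
        buf.append(nxt)
    return np.array(preds)

# Toy stand-in for the trained GRU (Step 1): predicts the window mean.
gru_like = lambda w: float(w.mean())

# Synthetic declining production curve with noise (placeholder data).
rng = np.random.default_rng(0)
true_q = 100.0 * np.exp(-0.01 * np.arange(120)) + rng.normal(0.0, 1.0, 120)

# Step 2: iterative forecast of days 14..119 from the 14-day HW.
gru_pred = iterative_forecast(gru_like, true_q[:14], 106)

# Step 3: train the RF on the GRU's prediction errors; here the
# features are just the iteration index and cumulative HW production.
X = np.column_stack([np.arange(106), np.full(106, true_q[:14].sum())])
errors = gru_pred - true_q[14:]
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, errors)

# Steps 4-5: GRU_RF prediction = GRU prediction - RF-predicted error.
corrected = gru_pred - rf.predict(X)
```

Because the RF only has to learn the residual pattern, the correction can recover the decline shape that pure iteration smears out.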
4.4. Continuous Learning
5. Conclusions
- (1).
- The proposed GRU_RF model provided a promising method for predicting the long-term production of newly developed wells from their initial production, as evidenced by six test wells. Its generalization ability was satisfactory when applied to wells not included in the training set.
- (2).
- The model incorporating SI and CS provided a way to explore the effects of variable external features. The four-feature model outperformed the conventional one-feature model and accurately captured the complex production variation driven by multiple factors.
- (3).
- In addition to production and oilfield operations, formation and fracturing parameters significantly affected the production prediction of multiple wells within one well block, as illustrated by the RF feature importance ranking. It was essential to combine static features with dynamic features in the time-series prediction.
- (4).
- The proposed model outperformed traditional DCA, conventional time-series methods, simple RNN, LSTM, and GRU. Given limited production data, the model enabled batch processing and offered a continuous learning scheme for real-time prediction.
Supplementary Materials
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
| Features | Mean | Standard Deviation | Range |
|---|---|---|---|
| Oil production rate (qo) (m³) | 39.58 | 14.64 | 0–144.375 |
| Tubing pressure (TP) (MPa) | 16.36 | 8.57 | 0–38 |
| Shut-in period (SI) (h) | 0.33 | 2.68 | 0–24 |
| Choke size (CS) (mm) | 3.16 | 0.90 | 0–10 |
| Parameters | Value |
|---|---|
| Number of neurons | 32 for the GRU layer; 1 for the dense layer |
| HW | 14 days |
| Index of shuffle | 100 |
| Epochs | 50 |
| Batch size | 50 |
| Loss function | MSE |
| Optimizer | Adam |
| Activation function | ReLU |
| L2 regularization parameter | 0.01 |
| Dropout rate | 0.1 |
| Learning rate | 0.001 |
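The hyperparameters in the table above can be assembled into a Keras model. This is a sketch, not the authors' released code: the placement of the dropout layer and the four input features (qo, TP, SI, CS) are assumptions drawn from the data-set table.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

HW = 14          # history window: 14 days of past data per sample
N_FEATURES = 4   # assumed inputs: qo, TP, SI, CS

model = keras.Sequential([
    keras.Input(shape=(HW, N_FEATURES)),
    layers.GRU(32, activation="relu",
               kernel_regularizer=regularizers.l2(0.01)),
    layers.Dropout(0.1),          # placement assumed: after the GRU layer
    layers.Dense(1),              # one-step-ahead production rate
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
# Training per the table: model.fit(X, y, epochs=50, batch_size=50)
```

Each training sample is a (14, 4) window of daily features; the dense layer emits the next day's production rate.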
| Category | Input Features | Symbol |
|---|---|---|
| Production | Predicted production by GRU | Predicted qT |
| | Predicted production of the previous 7 days by GRU | Predicted qT−1, qT−2, qT−3, qT−4, qT−5, qT−6, qT−7 |
| | Cumulative production of the first 14 days | Q14 days |
| | Cumulative production of the first 30 days | Q30 days |
| | Number of iterations (production time) | Niter |
| Operations | Choke size | CS |
| | Shut-in time | SI |
| | Change of choke size | CST − CST−1 |
| | Change of shut-in time | SIT − SIT−1 |
| Fracturing | Total sand per well | Total sand |
| | Total liquid per well | Total liquid |
| | Well lateral length | Length |
| Formation | Minimal horizontal principal stress | MHPS |
| | Oil saturation | Soil |
| | Porosity | Poro |
| | Well vertical depth | TVD |
| Output | Errors between the predicted and true production rate | Error |
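With the inputs and output listed above, fitting the RF and ranking feature importance reduces to a few lines of scikit-learn. This is a sketch on placeholder random data (the real training wells are not reproduced here); the short column names for the change features (dCS, dSI) are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# The 22 input features from the table (dCS/dSI abbreviate the changes).
feature_names = (
    ["Predicted qT"]
    + [f"Predicted qT-{i}" for i in range(1, 8)]
    + ["Q14 days", "Q30 days", "Niter",
       "CS", "SI", "dCS", "dSI",
       "Total sand", "Total liquid", "Length",
       "MHPS", "Soil", "Poro", "TVD"]
)

# Placeholder data standing in for the real training wells.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, len(feature_names))),
                 columns=feature_names)
y = rng.normal(size=200)  # Error = predicted - true production rate

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importance = (pd.Series(rf.feature_importances_, index=feature_names)
              .sort_values(ascending=False))
```

`feature_importances_` is normalized to sum to one, which is what makes the ranking in conclusion (3) directly comparable across dynamic and static features.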
Share and Cite
Li, X.; Ma, X.; Xiao, F.; Wang, F.; Zhang, S. Application of Gated Recurrent Unit (GRU) Neural Network for Smart Batch Production Prediction. Energies 2020, 13, 6121. https://doi.org/10.3390/en13226121