Residual Life Prediction for Induction Furnace by Sequential Encoder with s-Convolutional LSTM
Abstract
1. Introduction
2. Related Work
- (1) Predictive maintenance for mechanical components: This is the most common topic in predictive maintenance. Research has been conducted on bearings [13], engines [14], turbines [15], fans [16], centrifugal pumps [17], gearboxes [18], and milling machines [19]. Mechanical-component faults usually appear as vibration, sound, or abnormal sensor-data patterns observed over a long time window. Methodologies such as logistic regression [13], support vector machines [17,18], artificial neural networks [14,15], and convolutional neural networks [19,20] have been used for the predictive maintenance of mechanical components.
- (2) Predictive maintenance for systematic components: A system is a combination of subsystems or components, so multiple components or attributes operate simultaneously. Data such as historical operations [21,22], process data [23], and sensor data [24,25] are analyzed in merged form. These multivariate characteristics have led to the use of random forests [25] and DNNs [23,24]. Table 2 summarizes the predictive maintenance studies.
- (1) Channel configuration: This affects the dimensionality of the input data. To obtain a higher dimensionality, our approach divides the sensor-data attributes into multiple channels of the convolutional layer, whereas previous studies configured a single channel.
- (2) Data conversion: The form of the input data determines the neural network structure. In Wen’s study, sensor data were converted into an image, whereas in Zhao’s approach, machine movement and sensor data were imported without conversion. In our study, the data are converted into a 2D representation that reflects the operation sequence.
- (3) Target: Similar papers have handled mechanical components, whereas our study targets induction furnaces operated in a noisy environment. This distinguishes our approach from those of previous studies.
3. Proposed Method
- (1) Data preprocessing: Active operation data are extracted from the raw data, and noise is removed from the sensor data. The raw data are split into individual operations.
- (2) Sequential encoder: Each individual operation is converted into two matrices, an adjacency matrix and a feature matrix. The operation sequence data populate the adjacency matrix, and the sensor data populate the feature matrices, with each feature matrix reflecting a single sensor attribute. The sequential feature matrices are derived by dividing the feature matrix by the adjacency matrix element-wise.
- (3) Prediction model (s-convLSTM): The sequential feature matrices are distributed over n channels and fed into the prediction model, which consists of a convolutional layer and an LSTM layer [27]. After prediction, the model exports the residual life of the induction furnace. In a field application, the manager can use this prognosis to plan an optimal maintenance schedule and keep track of the system’s health; a sketch of such a model is given after this list. A further explanation of the methodology is as follows.
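The exact layer configuration is detailed in Section 3.3. As a minimal sketch of the pipeline described in item (3), and assuming illustrative values for the sequence length T, the matrix size D, the number of sensor channels N, and the filter/unit counts (none of which are taken from this paper), the conv + LSTM structure can be assembled in Keras as follows:

```python
# Minimal sketch of an s-convLSTM-style pipeline: a convolution applied to each
# D x D sequential feature matrix, followed by an LSTM over the operation
# sequence and a regression head for residual life. All sizes are assumptions.
import tensorflow as tf

T, D, N = 16, 8, 13  # sequence length, matrix size, sensor channels (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, D, D, N)),
    # Convolution over every time step of the sequence of feature matrices.
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv2D(32, kernel_size=3, padding="same", activation="relu")),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten()),
    # LSTM summarizes the operation sequence into a single hidden state.
    tf.keras.layers.LSTM(64),
    # Regression output: residual life of the induction furnace.
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Training would then use mini-batches of shape (batch, T, D, D, N) produced by the sequential encoder of Section 3.2.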
3.1. Data Preprocessing
- (1) Parameter selection: Valid furnace parameters were selected by considering the patterns and correlations of the data. The raw data scheme is described in Table 3.
- (2) Sequence indexing: The operation sequence, which is recorded in text format in the raw data, was indexed and converted into numerical form for further processing. Table 4 lists the index of the event sequence.
- (3) Operation extraction: Individual operations were extracted from the raw data using the frequency and the event sequence. The event sequence served as the separator between the active and standby states, while the frequency pattern reflected the operating state of the furnace and made it possible to distinguish maintenance from operation. The extraction algorithm starts with event indexing. After extraction, noise was removed from each operation; noise removal is performed after separation because removing it earlier would destroy the delimiting points. After the preprocessing step, individual noise-free operations are prepared (a sketch of this step follows the list).
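A minimal sketch of this preprocessing step is given below. It assumes the raw data are loaded as a pandas DataFrame containing the D23 (frequency) and D29 (event sequence) columns of Table 3; the event-to-index mapping comes from Table 4, while the active-state choice, the zero-frequency threshold, the rolling-median filter, and the window size are illustrative assumptions rather than the paper’s exact rules:

```python
import pandas as pd

# Event-sequence indexing from Table 4 (text label -> numerical index).
EVENT_INDEX = {
    "PowerDownSeq": 1, "SelfTestSeq": 2, "InterlockSeq": 3, "FaultSeq": 4,
    "PrechgSeq": 5, "AcbWaitSeq": 6, "StandbySeq": 7, "HeatingSeq": 8,
}
ACTIVE_EVENTS = {8}  # HeatingSeq taken as the active state (assumption)


def split_operations(raw: pd.DataFrame) -> list:
    """Split raw records into individual active operations.

    The event sequence (D29) separates active from standby periods, and a
    non-zero inverter frequency (D23) marks the furnace as actually operating
    (the zero threshold is an assumption, not the paper's exact rule).
    """
    df = raw.copy()
    df["event_idx"] = df["D29"].map(EVENT_INDEX)
    active = df["event_idx"].isin(ACTIVE_EVENTS) & (df["D23"] > 0)
    # Consecutive active rows form one operation; every gap starts a new one.
    run_id = (active != active.shift()).cumsum()
    return [grp.drop(columns="event_idx")
            for _, grp in df[active].groupby(run_id[active])]


def remove_noise(op: pd.DataFrame, sensor_cols, window: int = 5) -> pd.DataFrame:
    """Smooth the sensor attributes of one operation with a rolling median;
    the paper's exact noise-removal filter is not reproduced here."""
    out = op.copy()
    out[sensor_cols] = out[sensor_cols].rolling(window, min_periods=1).median()
    return out
```

For example, `ops = [remove_noise(op, sensor_cols=["D03", "D23"]) for op in split_operations(raw_df)]` yields noise-free individual operations that are then passed to the sequential encoder.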
3.2. Sequential Encoder
Algorithm 1. SequentialEncoder
Input: dataset O{E, S_N} (E: operation sequence data; S: sensor data; I: number of operation records; N: number of sensor attributes); matrix Z[D][D] (null matrix of size D × D)
Output: matrix SF_N[D][D] (sequential feature matrix with N channels)
FOR n := 1 TO N DO
    SF_n := Z; ADZ_n := Z; FEA_n := Z
    FOR i := 1 TO I − 1 DO
        IF E[i] = E[i+1] THEN
            SP := E[i]
            ADZ_n[SP][SP] := ADZ_n[SP][SP] + 1
            FEA_n[SP][SP] := FEA_n[SP][SP] + S_n[i]
        ELSE
            SP := E[i]; EP := E[i+1]
            ADZ_n[SP][EP] := ADZ_n[SP][EP] + 1
            FEA_n[SP][EP] := FEA_n[SP][EP] + S_n[i]
        END IF
    END FOR
    FOR k := 1 TO D DO
        FOR l := 1 TO D DO
            IF ADZ_n[k][l] ≠ 0 THEN
                SF_n[k][l] := FEA_n[k][l] / ADZ_n[k][l]
            END IF
        END FOR
    END FOR
END FOR
RETURN SF_N
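The same encoding can be written compactly in NumPy. The sketch below assumes the event sequence E is already indexed with the integers of Table 4, the sensor data S hold one column per attribute, and D equals the number of event indices; folding the per-channel loop of Algorithm 1 into array operations is an implementation choice, not part of the original pseudocode:

```python
import numpy as np

def sequential_encoder(E: np.ndarray, S: np.ndarray, D: int) -> np.ndarray:
    """Encode one operation into D x D sequential feature matrices.

    E : (I,) integer event indices (1..D); S : (I, N) sensor values.
    Returns an array of shape (D, D, N), one channel per sensor attribute.
    """
    I, N = S.shape
    adz = np.zeros((D, D, N))  # adjacency matrix: transition counts
    fea = np.zeros((D, D, N))  # feature matrix: summed sensor values
    for i in range(I - 1):
        sp, ep = E[i] - 1, E[i + 1] - 1  # 0-based source/target state
        adz[sp, ep, :] += 1
        fea[sp, ep, :] += S[i, :]
    # Element-wise division; cells without any transition stay zero.
    return np.divide(fea, adz, out=np.zeros_like(fea), where=adz != 0)
```

Stacking the outputs of T consecutive operations then gives the (T, D, D, N) tensor consumed by the prediction model.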
3.3. Prediction Model and Neural Network Layer
3.3.1. LSTM Layer
3.3.2. Convolutional Layer
3.3.3. Prediction Model
4. Experiment
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Jenkins, B.; Mullinger, P. Industrial and Process Furnaces: Principles, Design and Operation, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2014; pp. 5–6.
2. Karandaev, A.; Evdokimov, S.; Saribaev, A.A.; Lednov, R. Requirements to the monitoring system of ultra-high power electric arc furnace transformer performance. Russ. Internet J. Ind. Eng. 2016, 2, 58–68.
3. Chen, L. A multiple linear regression prediction of concrete compressive strength based on physical properties of electric arc furnace oxidizing slag. Int. J. Appl. Sci. Eng. 2010, 7, 153–158.
4. Carvalho, T.P.; Soares, F.A.; Vita, R.; Francisco, R.D.P.; Basto, J.P.; Alcalá, S.G. A systematic literature review of machine learning methods applied to predictive maintenance. Comput. Ind. Eng. 2019, 137, 106024.
5. Gao, Z.; Cecati, C.; Ding, S.X. A survey of fault diagnosis and fault-tolerant techniques—Part I: Fault diagnosis with model-based and signal-based approaches. IEEE Trans. Ind. Electron. 2015, 62, 3757–3767.
6. Nandi, S.; Toliyat, H.A. Fault diagnosis of electrical machines—A review. In Proceedings of the IEEE International Electric Machines and Drives Conference (IEMDC’99), Seattle, WA, USA, 9–12 May 1999; pp. 219–221.
7. Chong, U.P. Signal model-based fault detection and diagnosis for induction motors using features of vibration signal in two-dimension domain. Stroj. Vestn. 2011, 57, 655–666.
8. Choi, Y.; Kwun, H.; Kim, D.; Lee, E.; Bae, H. Method of predictive maintenance for induction furnace based on neural network. In Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea, 19–22 February 2020.
9. Susto, G.A.; Beghi, A.; De Luca, C. A predictive maintenance system for epitaxy processes based on filtering and prediction techniques. IEEE Trans. Semicond. Manuf. 2012, 25, 638–649.
10. Clifton, R.H. Principles of Planned Maintenance; Arnold (Taylor & Francis): London, UK, 1974.
11. Carnero, M.C. An evaluation system of the setting up of predictive maintenance programmes. Reliab. Eng. Syst. Saf. 2006, 91, 945–963.
12. Edwards, D.J.; Holt, G.D.; Harris, F.C. Predictive maintenance techniques and their relevance to construction plant. J. Qual. Maint. Eng. 1998, 4, 25–37.
13. Pandya, D.H.; Upadhyay, S.H.; Harsha, S.P. Fault diagnosis of rolling element bearing by using multinomial logistic regression and wavelet packet transform. Soft Comput. 2014, 18, 255–266.
14. Ahmed, R.; El Sayed, M.; Gadsden, S.A.; Tjong, J.; Habibi, S. Automotive internal-combustion-engine fault detection and classification using artificial neural network techniques. IEEE Trans. Veh. Technol. 2015, 64, 21–33.
15. Biswal, S.; Jithin, D.G.; Sabareesh, G.R. Fault size estimation using vibration signatures in a wind turbine test-rig. Procedia Eng. 2016, 144, 305–311.
16. Balabanov, T.; Hadjiski, M.; Koprinkova-Hristova, P.; Beloreshki, S.; Doukovska, L. Neural network model of mill-fan system elements vibration for predictive maintenance. In Proceedings of the 2011 International Symposium on Innovations in Intelligent Systems and Applications, Istanbul, Turkey, 15–18 June 2011; pp. 410–414.
17. Rapur, J.S.; Tiwari, R. On-line time domain vibration and current signals based multi-fault diagnosis of centrifugal pumps using support vector machines. J. Nondestruct. Eval. 2019, 38, 6.
18. Zhong, J.; Yang, Z.; Wong, S.F. Machine condition monitoring and fault diagnosis based on support vector machine. In Proceedings of the 2010 IEEE International Conference on Industrial Engineering and Engineering Management, Macao, China, 7–10 December 2010; pp. 2228–2233.
19. Zhao, R.; Yan, R.; Wang, J.; Mao, K. Learning to monitor machine health with convolutional bi-directional LSTM networks. Sensors 2017, 17, 273.
20. Wen, L.; Li, X.; Gao, L.; Zhang, Y. A new convolutional neural network-based data-driven fault diagnosis method. IEEE Trans. Ind. Electron. 2018, 65, 5990–5998.
21. Li, H.; Parikh, D.; He, Q.; Qian, B.; Li, Z.; Fang, D.; Hampapur, A. Improving rail network velocity: A machine learning approach to predictive maintenance. Transp. Res. Part C Emerg. Technol. 2014, 45, 17–26.
22. Susto, G.A.; Schirru, A.; Pampuri, S.; McLoone, S.; Beghi, A. Machine learning for predictive maintenance: A multiple classifier approach. IEEE Trans. Ind. Inf. 2015, 11, 812–820.
23. Hu, H.; Tang, B.; Gong, X.; Wei, W.; Wang, H. Intelligent fault diagnosis of the high-speed train with big data based on deep neural networks. IEEE Trans. Ind. Inf. 2017, 13, 2106–2116.
24. Miao, H.; Li, B.; Sun, C.; Liu, J. Joint learning of degradation assessment and RUL prediction for aeroengines via dual-task deep LSTM networks. IEEE Trans. Ind. Inf. 2019, 15, 5023–5032.
25. Kulkarni, K.; Devi, U.; Sirighee, A.; Hazra, J.; Rao, P. Predictive maintenance for supermarket refrigeration systems using only case temperature data. In Proceedings of the 2018 Annual American Control Conference (ACC), Milwaukee, WI, USA, 27–29 June 2018; pp. 4640–4645.
26. Christer, A.H.; Wang, W.; Sharp, J.M. A state space condition monitoring model for furnace erosion prediction and replacement. Eur. J. Oper. Res. 1997, 101, 1–14.
27. Donahue, J.; Anne Hendricks, L.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–10 June 2015.
28. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555.
29. MacKay, D.J.C. Information Theory, Inference and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003.
30. Hochreiter, S. The vanishing gradient problem during learning recurrent neural nets and problem solutions. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 1998, 6, 107–116.
31. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551.
32. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
33. Liu, C.L.; Hsaio, W.H.; Tu, Y.C. Time series classification with multivariate convolutional neural network. IEEE Trans. Ind. Electron. 2018, 66, 4788–4797.
34. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. arXiv 2015, arXiv:1506.04214.
35. Hogg, R.V.; McKean, J.; Craig, A.T. Introduction to Mathematical Statistics, 6th ed.; Pearson Education: Upper Saddle River, NJ, USA, 2005; p. 119.
Heat Source | Description | Operation | Raw Materials | Form Factor | Example
---|---|---|---|---|---
Chemical | Melts metal by the combustion process | Continuous | Ores, coke, flux | Vertical shaft | Blast furnace
Electrical | Melts metal by electricity | Discrete | Scrapped metal | Large cylinder | EAF, induction furnace
Object Type | Object | Type of Input Data | Methodology | Reference
---|---|---|---|---
Mechanical component | Bearing | Vibration | Logistic regression | Pandya et al., 2014 [13]
 | Engine | Vibration | ANN | Ahmed et al., 2015 [14]
 | Turbine | Vibration | ANN | Biswal et al., 2016 [15]
 | Fan | Vibration | Echo state network | Balabanov et al., 2011 [16]
 | Pump | Vibration, motor line data | SVM | Rapur et al., 2019 [17]
 | Gearbox | Vibration | SVM | Zhong et al., 2010 [18]
 | CNC milling machine | Force, vibration, directions | CNN + Bi-LSTM | Zhao et al., 2017 [19]
 | Motor bearing, pump | Grey imaged signal | CNN | Wen et al., 2018 [20]
Systematic component | Rail network | Historical data, maintenance record | SVM | Li et al., 2014 [21]
 | High-speed train system | Fault information of bogies | DNN | Hu et al., 2017 [23]
 | Aeroengine | Trajectory, operation, fault, life span | LSTM | Miao et al., 2019 [24]
 | Semiconductor manufacturing | Maintenance record | SVM, k-NN | Susto et al., 2015 [22]
 | Refrigeration system | Temperature, defrost state | Random forest | Kulkarni et al., 2018 [25]
Column Name | Parameter | Description | Unit
---|---|---|---
D03 | Total Power (Current) | Converter power usage | A
D04 | Delta Input Voltage (AC) | Delta input voltage | V
D05 | Star Input Voltage (AC) | Star input voltage | V
D06 | Delta Input Current | Delta input current | A
D07 | Star Input Current | Star input current | A
D08 | Converter Voltage (DC) | Converter voltage | V
D12 | Commanded Inverter Power | Commanded inverter power | kW
D22 | Actual Inverter Power | Actual inverter power | kW
D23 | Frequency | Inverter frequency | Hz
D24 | Inverter Input Voltage (DC) | Inverter input voltage | V
D25 | Inverter Input Current | Inverter input current | A
D26 | Inverter Output Voltage (AC) | Inverter output voltage | V
D27 | Inverter Output Current | Inverter output current | A
D29 | Event Sequence | Furnace event sequence | N/A
D30 | Last Heating Time | Last heating time | min
DT | Date Time | Recorded date and time | N/A
Seq_index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
Event seq | PowerDownSeq | SelfTestSeq | InterlockSeq | FaultSeq | PrechgSeq | AcbWaitSeq | StandbySeq | HeatingSeq
Index | MLP | LSTM | s-ConvLSTM |
---|---|---|---|
1 | 0.828057831 | 0.206931 | 0.156686 |
2 | 0.63402467 | 0.207138 | 0.190867 |
3 | 0.734765453 | 0.212748 | 0.120577 |
4 | 0.696998674 | 0.213947 | 0.194754 |
5 | 0.873012705 | 0.212426 | 0.173549 |
6 | 0.714811902 | 0.212015 | 0.177297 |
7 | 0.709106769 | 0.210401 | 0.200828 |
8 | 0.812086075 | 0.21038 | 0.20231 |
9 | 0.570165822 | 0.211098 | 0.199597 |
10 | 0.683984788 | 0.208985 | 0.166552 |
Mean | 0.725701469 | 0.210607 | 0.178302 |
Stdev | 0.086761715 | 0.002226 | 0.024345 |
Index | MLP | LSTM | s-ConvLSTM |
---|---|---|---|
1 | 0.725906 | 0.708037 | 0.921909 |
2 | 0.817845 | 0.702439 | 0.882984 |
3 | 0.825011 | 0.684612 | 0.909343 |
4 | 0.800784 | 0.691447 | 0.904265 |
5 | 0.718954 | 0.701471 | 0.867883 |
6 | 0.779492 | 0.678868 | 0.932762 |
7 | 0.790403 | 0.701468 | 0.931963 |
8 | 0.7198 | 0.695576 | 0.801254 |
9 | 0.82586 | 0.69687 | 0.899124 |
10 | 0.769335 | 0.699486 | 0.870275 |
Mean | 0.777339 | 0.696027 | 0.892176 |
Stdev | 0.040523 | 0.008386 | 0.037449 |