Article

Assessment of Failure Occurrence Rate for Concrete Machine Foundations Used in Gas and Oil Industry by Machine Learning

by Patryk Ziolkowski 1,*, Sebastian Demczynski 2 and Maciej Niedostatkiewicz 1
1 Gdansk University of Technology, Faculty of Civil and Environmental Engineering, Gabriela Narutowicza 11/12, 80-233 Gdansk, Poland
2 Institute of Fluid-Flow Machinery Polish Academy of Sciences, Generala Jozefa Fiszera 14, 80-231 Gdansk, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(16), 3267; https://doi.org/10.3390/app9163267
Submission received: 8 July 2019 / Revised: 2 August 2019 / Accepted: 6 August 2019 / Published: 9 August 2019
(This article belongs to the Section Civil Engineering)

Abstract

Concrete machine foundations are structures that transfer loads from machines in operation to the ground. The design of such foundations requires a careful analysis of the static and dynamic effects caused by machine operation. There are also other substantial differences between ordinary concrete foundations and machine foundations, the main one being that machine foundations are separated from the building structure. Appropriate quality and the preservation of operational parameters of machine foundations are essential, especially in the gas and oil industry, where every disruption in the technological process is costly. First and foremost, there are direct repair costs from damage to foundations, but there are also indirect costs associated with blockages of the production process. Foundation repairs can temporarily take a given part of the refining process out of operation. Thanks to cooperation with our partner, we obtained data on 510 concrete machine foundations from a refinery. Our database included many parameters, such as concrete cover thickness, machine gravity center distortion, the angular frequency of vertical self-excited vibrations, the angular frequency of horizontal self-excited vibrations, amplitudes of oscillation, foundation area, foundation volume, and information on occurring failures. Concrete machine foundation failure is not yet fully understood. In our study, we assessed what affects the failure occurrence rate of concrete machine foundations and to what extent. We wanted to find out whether there are correlations between the foundation failure occurrence rate and the mentioned parameters. To achieve this goal, we utilized state-of-the-art machine learning techniques.

1. Introduction

Refineries are oil and gas processing plants that produce fuels, oils, lubricants, asphalt, and other crude oil-based products. Refineries often occupy large areas of land with hundreds of machine installations located on concrete foundations. Failures of concrete foundations for industrial machines are highly problematic and expensive to repair. Repairing a damaged foundation may even require a temporary shutdown of the machinery, which may result in direct and indirect losses that are challenging to estimate. Machine foundations differ significantly from the foundations of buildings and engineering structures. These foundations are specially designed to transfer static and dynamic loads from machines in operation to the ground and are mounted separately from the existing structure [1]. Separating these foundations from the structure of the building in which they are located is of vital importance and is done primarily to protect the building against the effects of dynamic loads. These foundations are most often made of reinforced concrete. A machine foundation is designed for the specific machine mounted on it, which makes each foundation unique. The problem of failures occurring in concrete machine foundations is challenging to explore, and there is no single universal answer, first and foremost because the machines mounted on the foundations have diverse functions, support various stages of technological processes, and operate in different ways [2].
In our paper, we would like to present applied predictive analytics in civil engineering based on machine learning. In the study, we try to answer the question of what affects the failure occurrence rate of concrete machine foundations for one of the refineries with which we cooperate. In addition, we would like to show the solution we adopted for this classification problem, which can be used in further studies on the subject. The topic of predictive analytics in civil engineering is present in scientific discourse; however, an analysis of the failure occurrence rate of concrete foundations for machines in the refining industry has not been addressed in the literature. The subject is therefore new and pioneering. The analysis we prepared involved the use of machine learning techniques. First, we gathered a comprehensive database of different parameters, from the geometry of the foundations to the operational parameters of the machines. We used this wide-ranging database to investigate whether some of these parameters influence the failure occurrence rate of concrete machine foundations and, if so, to what extent. As mentioned, we decided to use machine learning techniques to solve the considered problem because, in our opinion, they cope better with multilevel dependencies between many parameters than well-established statistical methods do. We want to point out that the adopted approach is experimental and that the conclusions drawn are not final and might be revised in further studies.
The subject of concrete foundations for industrial machines is a separate field within concrete structures that requires careful consideration. The design of machine foundations requires design assumptions that include the machine type, rotation speed, power, and weight [3]. The essential matter to consider is the type of machine that we plan to mount on the foundation. Machines are characterized by different work parameters that can influence the supporting structure in various ways. According to European norm PN-EN 60034-1:2009, "Rotating Electrical Machines—Part 1: Nominal data and parameters" [4], machine operation can be characterized as one of ten duty modes. The most popular material for machine foundations is concrete. To secure the foundation and improve its reliability parameters, active and passive vibration isolation is used: active vibration isolation reduces the transmission of vibrations from the machine to the ground, and passive isolation reduces the transmission of vibrations from the ground to the machine [5].
Machine learning is a field of science that focuses on creating methods and algorithms for predicting relationships between variables or between variables and results. This area has developed very intensively, especially in the last decade. Machine learning has many potential applications across scientific fields, from phase identification in structural steel [6] to the assessment of property value [7,8]; it is also used in speech and handwriting recognition software, in cybersecurity, and in banking, decision-making, and medicine. The number of potential applications for this technology is vast and increases every year. The general term machine learning covers many different computational methods, one of the most popular being the artificial neural network (ANN) and the related deep neural network (DNN), i.e., an ANN with many hidden layers [9]. Other widely used methods include support vector machines, Bayesian networks, and genetic algorithms.
In our study, we decided to utilize an ANN. An ANN is a powerful computing tool that can solve complex problems by imitating the behavior of the human brain. The primary computing unit of an ANN is an artificial neuron. A single artificial neuron has a very limited ability to solve problems; however, many artificial neurons connected and arranged in layers can give an excellent approximation of the problem under consideration. For an ANN to be practical, it should be fed with many examples [10]. Namely, we have to present several input parameters with predefined or assigned weights, along with output solutions. During the learning process, we check whether the answer is consistent with the result assigned to it, and the network seeks the right result by changing the weights of individual parameters. The first ANNs required the adjustment of parameter weights using potentiometers; more recently, the weighting process has been automated by introducing a learning rule. The architecture of an ANN consists of layers. We can distinguish three types of layers: the input layer and the output layer, which are associated with input parameters and output targets, and the hidden layers between them. The hidden layers perform computations on the input parameters and their weights, to which activation functions are later applied to calculate a result. An ANN can be organized in many different ways. In the available literature, an ANN can be described in terms of three categories [10,11]: learning paradigms, learning algorithms, and topologies, as in Figure 1.
The first category refers to learning methods, which we can divide into supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, we strive for a defined result, while in unsupervised learning, we try to detect unknown relationships between variables. The second category, learning algorithms, refers to how the data are processed to solve the problem. Data are transferred between layers differently, which affects both the calculation time and the quality of the obtained results. A basic feedforward ANN processes data in only one direction, whereas a recurrent ANN contains feedback loops and calculates its results after reaching equilibrium [10,11]. Within this category, we can mention the following learning algorithms: conjugate gradient, quasi-Newton, and steepest descent. The third category is the network topology. There are single-layer networks, multilayer networks, and networks with one recurrent layer. The more layers there are, the more capable the network is of solving complex problems and finding nonlinear dependencies; the more neurons there are in a layer, the more input data the layer can handle [12,13].
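Returning to the artificial neuron described above, the following minimal sketch (our own illustrative Python example with arbitrary weights and inputs, not code from the study) shows how one neuron, and a small layer of neurons, turns inputs and weights into an output through a logistic activation:

```python
# Minimal sketch of an artificial neuron and a small layer (illustrative only;
# all values are arbitrary and hypothetical, not taken from the study).
import numpy as np

def sigmoid(z):
    """Logistic activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs plus bias, then activation."""
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.2, -1.3, 0.7])        # example input parameters
w = np.array([0.5, -0.1, 0.8])        # weights assigned to the inputs
print(neuron(x, w, b=0.1))            # output of a single neuron

# A layer is simply several neurons applied to the same input vector.
W = np.array([[0.5, -0.1, 0.8],
              [0.2,  0.4, -0.3],
              [-0.7, 0.1, 0.6],
              [0.3, -0.5, 0.2]])      # 4 neurons, 3 inputs each
b = np.zeros(4)
print(sigmoid(W @ x + b))             # outputs of a 4-neuron hidden layer
```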
Machine learning predictive analytics for structural concrete engineering is a broad subject, but we can divide the scientific trends into two main groups that focus on the material properties of concrete and on structural health monitoring, respectively. The first research trend concerns the prediction of concrete compressive strength based on a machine learning analysis of concrete mix recipes and strength tests. This trend was started in 1998 by Yeh [14], who tried to predict the compressive strength of high-performance concrete by using linear regression and an ANN. In that research, a large amount of data was collected for analysis, but the specificity of concrete maturation was not taken into account: the database included concrete test samples that had not yet reached their full compressive strength, which, in our opinion, may have led to corrupted results. Lee [15] presented a method in which he improved the accuracy of prediction by introducing the weighting of input neurons and using a parameter condensation technique to select the number of input neurons. His neural network had a modular structure built from five networks. The author claimed that the methods he used allowed for good optimization of ANN performance. Gupta et al. [16] approached the topic differently and proposed a neural expert system, which they used to predict the compressive strength of high-performance concrete. The system used a multilayered neural network architecture trained with generalized backpropagation for interval training patterns. The authors argued that the solution allowed for learning from example inferences. However, their method allowed for the learning of irrelevant input variables that could distort the output. Another element that, in our opinion, is debatable is the fact that the input variables had different metrics: apart from the composition of the concrete mix, the ANN analyzed variables such as curing time. We think that the input parameters adopted for the analysis could misrepresent the concrete compressive strength results. A similar topic is described in the work of Dac-Khuong Bui et al. [17], who focused entirely on the practical application of neural expert systems. An application of a DNN architecture to predict the compressive strength of concrete was considered by Fangming Deng et al. [18]. They studied a dataset of recycled concrete recipes and used a network with five input variables and Softmax regression to find the best prediction model. Regarding the analysis itself, the authors used so-called deep features, meaning that the DNN was trained not directly on the data but on ratios of the input variables. A similar approach was presented by Ziolkowski and Niedostatkiewicz [19], who used feature scaling in their analysis. Fangming Deng et al. [18] claimed that their method achieved higher generalization ability, efficiency, and precision in comparison with a plain ANN. However, they used convolutional networks that were computationally expensive, which is why their database was small, with 74 records. Such a small dataset does not allow their method to be used on a larger scale and may have resulted in underfitting, meaning that the network did not fit the data well enough, which could limit its efficiency. In contrast, Hosein Naderpour et al. [20] compared the ANN and DNN approaches and reported a similar degree of precision.
The second trend is focused on structural health monitoring, which consists of constant observation of a structure to ensure its safe exploitation and reliability over time. Such monitoring systems especially concern public facilities and elements of critical infrastructure. One of the monitoring techniques that acquires data for machine learning analysis is dynamic response measurement. It involves monitoring damage-sensitive features over continuous intervals of time, and it is mainly used in dams, long-span bridges, and tall buildings. Hao and Xia [21] used a genetic algorithm to evaluate a one-span steel portal frame with a cantilever beam. They studied three criteria: frequency changes, mode shape changes, and a combination of the first two. Fang et al. [22] used a backpropagation neural network to perform structural damage detection, with frequency response functions serving as the input. They carried out their analysis on a cantilevered beam and achieved good accuracy in assessing damage conditions by incorporating a tunable steepest descent algorithm into their neural network. Salazar et al. [23] presented an empirical comparison of machine learning techniques for dam behavior modeling, where they proved that machine learning techniques noticeably improved the prediction accuracy of dam behavior. In their work, they presented several details that could improve the obtained results, such as the observation that removing the first years of a dam's lifecycle could give better estimates. We think that this may be related to the maturation of concrete, which we also pointed out as an essential factor in the work of Yeh [14]. Taffese and Sistonen [24] conducted a detailed review of state-of-the-art machine learning applications for durability and service life assessments of reinforced concrete structures. There are also several other state-of-the-art applications of machine learning in civil engineering. Bayar and Bilir [25] proposed the estimation of crack propagation in concrete by machine learning. They were able to determine crack morphology and estimate crack propagation from photographs of crack propagation over time. However, they did not perform adequate validation to determine the repeatability and reliability of their method. Nevertheless, the method itself seems promising.

2. Materials and Methods

2.1. Essentials

In our research analysis, we wanted to determine whether, and which, design parameters of the foundations and operating parameters of the machines mounted on them affect the failure occurrence rate. For this purpose, we collected data on 510 machine foundations and used these data to train an ANN. The ANN estimated the correlations between individual parameters and the failure rate of concrete foundations; we checked whether a correlation occurred and tried to find out whether it was robust or weak. We dealt with a classification problem, which is the problem of determining whether a new observation belongs to a particular category or not; in our case, whether a failure of the concrete machine foundation occurred. We adopted an approach that involved the calculation of a correlation matrix and logistic correlation, principal components analysis, the optimization of the ANN architecture, a loss index assessment, training, model selection, a receiver operating characteristic (ROC) curve analysis, and predictive model evaluation. Figure 2 presents a flowchart of the adopted approach.

2.2. The Database of Concrete Machine Foundations

To train the ANN, we created a database containing 510 records of concrete machine foundations, acquired thanks to the refinery with which we cooperate. Each foundation is different and was designed to transfer loads from the machine mounted on it. Machines located in the refinery and servicing the oil and gas refining process are incredibly diverse, and hence there is a large variety of foundations. We systematized the data obtained from the foundation measurements into various categories of variables, such as the physical dimensions of the foundations, the operating parameters of the machine mounted on the foundation, the history of the machine, and foundation diagnostics. In our analysis, we wanted to discover which parameters affect the failure of concrete foundations. After extensive consultations with experts, we selected a few that were the most promising. Table 1 shows the adopted input parameters.
Our analysis required us to divide the considered parameters into two groups. The first group consisted of parameters used to feed the ANN, and the second contained the targets for which we looked for correlations. The parameters and their respective descriptions are given in Table 1. In Table 2, we present value ranges for the variables, along with their average values.

2.3. Application and Analysis

We allocated our variables into three subsets: the training, selection, and testing datasets. Overall, our dataset had 510 records, of which 306 records formed the training subset, 102 records the selection subset, and 102 records the testing subset. The training subset was 60% of the whole dataset, the selection subset 20%, and the testing subset 20%. The training and testing datasets allowed us to feed the ANN with data and evaluate the model's performance. In Figure 3, we show scatter plots that represent the input variables in relation to the failure occurrence rate. Our dataset consisted of records corresponding to the individual foundations, with several parameters and information on whether a failure occurred: [1] means that the failure occurred, and [0] means that the failure did not happen.
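The 60/20/20 partition can be reproduced, for example, with scikit-learn. The sketch below is a hypothetical illustration rather than the authors' actual code; the file name is invented and the column name is taken from Table 1:

```python
# Hypothetical sketch of the 60/20/20 split into training, selection, and
# testing subsets (scikit-learn; not the tool used in the study).
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("foundations.csv")                    # 510 records, one per foundation
X = df.drop(columns=["foundations_failure_occurrence"])
y = df["foundations_failure_occurrence"]               # 1 = failure occurred, 0 = no failure

# First take 60% for training, then split the remaining 40% in half (20% / 20%).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=42)
X_sel, X_test, y_sel, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)

print(len(X_train), len(X_sel), len(X_test))           # 306 / 102 / 102
```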
For the needs of our analysis, we computed a correlation matrix and performed a logistic correlation analysis. We also performed a principal components analysis to identify the underlying structure in the dataset before building a predictive model. In this procedure, we performed an orthogonal transformation to change a set of potentially correlated variables into a set of values of linearly uncorrelated variables. Put simply, we tried to find the directions in which there was the most variance, i.e., where the data were most dispersed. The result of the analysis was the determination of the principal components. In Figure 4, we present the cumulative sum of explained variance for the eight principal components.
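A hypothetical sketch of this step, using scikit-learn's PCA on standardized inputs and reusing the subsets from the previous sketch, is shown below; it is not the software used in the study, but it produces the same kind of cumulative explained variance curve as Figure 4:

```python
# Sketch of the principal components analysis step (assumes X_train from the
# previous sketch; not the authors' implementation).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X_scaled = StandardScaler().fit_transform(X_train)   # standardize the eight inputs
pca = PCA(n_components=8).fit(X_scaled)

cumulative = np.cumsum(pca.explained_variance_ratio_)
print(cumulative)   # in the study: ~0.6 after one component, ~0.9 after three
```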
The vertical axis represents the cumulative explained variance, while the horizontal axis corresponds to the individual principal components. The first principal component accounted for 60% of the variance, and the first three components together covered 90% of the total explained variance. The remaining principal components contributed progressively declining shares of variance, which we consider negligible. To find an optimal neural network architecture, we first adopted an initial architecture and then tried to optimize it. The ANN that we propose has eight input and scaling neurons, eight principal components, three hidden layers, four perceptron neurons, and one probabilistic and output neuron. Figure 5 presents the adopted, optimal ANN architecture.
We used the Broyden–Fletcher–Goldfarb–Shanno algorithm to find a suitable training rate [26,27,28,29,30,31] and the Brent method [32,33,34,35] to obtain the step for the quasi-Newton training direction. We evaluated the influence of each input variable by selectively eliminating training inputs and examining the output results. The higher the nominal input contribution value was, the more substantial this variable was; the lower the value, the lower its contribution. We found that the most influential variables were the foundation volume, the machine gravity center distortion, the angular frequency of vertical self-excited vibrations of the machine, and the concrete cover thickness. Figure 6 shows an evaluation of the input contributions.
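As a rough, runnable stand-in for the training and input-contribution steps, the sketch below chains scaling, PCA, and a small perceptron layer and trains it with scikit-learn's L-BFGS solver, a limited-memory quasi-Newton relative of the Broyden–Fletcher–Goldfarb–Shanno algorithm; it does not reproduce the exact BFGS-with-Brent-line-search procedure used in the study, and it approximates the input contribution analysis with permutation importance rather than selective input elimination:

```python
# Approximate stand-in for the training and input-contribution analysis
# (scikit-learn; assumes X_train, y_train, X_sel, y_sel from the earlier sketch).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

model = make_pipeline(
    StandardScaler(),                                  # scaling layer
    PCA(n_components=8),                               # principal components layer
    MLPClassifier(hidden_layer_sizes=(4,),             # four perceptron neurons
                  solver="lbfgs", max_iter=2000, random_state=42))
model.fit(X_train, y_train)

# Input contribution, approximated here by permutation importance on the
# selection subset (the study selectively eliminates training inputs instead).
result = permutation_importance(model, X_sel, y_sel, n_repeats=20, random_state=42)
for name, score in sorted(zip(X_sel.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```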
We performed input selection with a growing inputs algorithm [36,37,38,39]. To find the optimal number of artificial neurons, we used an order selection algorithm [40,41]. To evaluate the adopted approach and study the loss, we prepared a receiver operating characteristic (ROC) curve. This method is a graphical illustration of how well the classifier discriminates between the two different classes. This discrimination capacity is measured by calculating the area under the curve (AUC) [42,43,44]. Figure 7 presents the receiver operating characteristic curve.
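A minimal sketch of the ROC and AUC computation on the testing subset, reusing the hypothetical model from the previous sketch, could look as follows:

```python
# Sketch of the ROC / AUC evaluation (assumes model, X_test, y_test from above).
from sklearn.metrics import roc_curve, roc_auc_score

proba = model.predict_proba(X_test)[:, 1]          # predicted probability of failure
fpr, tpr, thresholds = roc_curve(y_test, proba)    # one (fpr, tpr) point per threshold
auc = roc_auc_score(y_test, proba)
print(f"AUC = {auc:.3f}")                          # closer to 1.0 means better discrimination
```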
An ROC curve represents the sensitivity and specificity calculated for a group of different binary classifiers obtained by varying the threshold value. The threshold value increases along with the output probability of the testing instances; the diagonal of the chart corresponds to a random classifier. The vertical axis corresponds to sensitivity (true positive rate), while the horizontal axis refers to 1 − specificity (false positive rate). The more the chart deviates toward the upper left corner, the larger the AUC and the better the model performs. In Figure 7, we can see that the graph is not entirely in the upper left corner but is tilted toward it. On the basis of the ROC curve models described in the literature, we can say that our model, with a sensitivity of 0.87, was good (B class) [45].
We used a lift curve as an additional method to evaluate the performance of the predictive model. With this method, we tried to find the ratio between the positive prediction rate with and without the adopted model. Figure 8 shows the lift chart.
The vertical axis represents the ratio between the results predicted with and without the model, while the horizontal axis corresponds to the percentage of considered instances. The gray line represents a random model. The analysis of the chart led us to the conclusion that the overall shape of the chart was correct.
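The lift numbers behind such a chart can be obtained by sorting the testing instances by predicted failure probability and dividing the positive rate within the top fraction of instances by the overall positive rate; the sketch below is our own construction, consistent with the description above but not the software used in the study:

```python
# Sketch of lift computation (assumes proba and y_test from the previous sketch).
import numpy as np

order = np.argsort(-proba)                  # highest predicted probability first
y_sorted = np.asarray(y_test)[order]
base_rate = y_sorted.mean()                 # positive rate without any model

for frac in (0.1, 0.2, 0.3, 0.5, 1.0):
    k = max(1, int(frac * len(y_sorted)))
    lift = y_sorted[:k].mean() / base_rate  # ratio "with model" / "without model"
    print(f"top {int(frac * 100):3d}% of instances: lift = {lift:.2f}")
```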
However, it is puzzling that not every lift number followed a decreasing pattern, which may indicate that the model did not order some part of the population adequately. Finally, we tried to answer how the determined parameters could affect the failure occurrence rate. For this purpose, we decided to use output charts, as shown in Figure 9, which represent the variability of a given input parameter in relation to the foundation failure occurrence rate while the remaining variables are fixed. The output charts are a graphical illustration of how the trained ANN responds to each input. We present charts only for parameters whose input contribution exceeded 1.0.
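An output chart of this kind can be approximated by sweeping a single input over its observed range while fixing the remaining inputs at their mean values; the sketch below (reusing the hypothetical model and the codenames from Table 1) illustrates the idea:

```python
# Sketch of an output chart: vary one input, hold the others at their means
# (assumes model and X_train from the earlier sketches; "fund_volume" is the
# codename of the foundation volume input in Table 1).
import numpy as np
import pandas as pd

def output_curve(model, X, column, n_points=50):
    grid = np.linspace(X[column].min(), X[column].max(), n_points)
    base = X.mean()                               # fix the remaining variables
    rows = pd.DataFrame([base] * n_points)
    rows[column] = grid
    return grid, model.predict_proba(rows[X.columns])[:, 1]

grid, failure_prob = output_curve(model, X_train, "fund_volume")
print(list(zip(grid[:5], failure_prob[:5])))      # predicted failure probability vs. volume
```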

2.4. Results and Discussion

We drew several conclusions. The higher the volume of the foundation was, the lower the foundation failure rate was. Interestingly, we did not find a correlation with the foundation surface area, which we had initially assumed. The gravity center distortion of the machine mounted on the foundation gave a relatively high contribution to the results, but the output chart shows that this correlation was almost flat, so it is hard to determine the actual dependency. We found that the angular frequency of the horizontal self-excited vibrations of the machine mounted on the foundation gave a significant contribution to the results: the higher the value of the angular frequency of horizontal self-excited vibrations was, the higher the foundation failure occurrence rate was. The cover thickness gave a considerable contribution, but the actual correlation was flat, with a slight trend toward an increased failure rate with increasing cover thickness. In our database, the values of concrete cover thickness varied between 2 and 5 cm. However, we must point out that the dependencies presented in Figure 9 did not correspond to the combined correlation of variables, but showed only the trend of a given variable with respect to the target variable. It should also be noted that the input parameters gave different contributions to the results, as presented in Figure 6. Based on the performed study, we can assume that it is better to increase the volume of a foundation rather than the surface area to decrease the chance of a failure occurrence. Gravity center distortion did play a significant role in the considered case. It is better to keep the angular frequency of horizontal self-excited vibrations of the machine mounted on the foundation low to decrease the chance of a failure occurrence. It seems that cover thickness did not correlate with an increased failure rate, or perhaps most of the cover thicknesses in the foundations were well designed, so this did not influence the results. Our analysis requires further research and will be developed by us in the future.

3. Summary and Conclusions

The issue of the reliability of concrete foundations for machines is essential, especially in the gas and oil industry. The repair of machine foundations generates both direct costs, related to the renovation or construction of a new foundation, and indirect costs, related to the disruption of the production process. We undertook the task of determining which design parameters of the foundations and operational parameters of the machines mounted on them affect the failure occurrence rate of concrete machine foundations. In our research, we used an extensive data resource acquired thanks to a partner company from the gas and oil industry. In the analysis, we used state-of-the-art machine learning techniques to build an optimal artificial neural network model. The dataset we obtained consisted of 510 records and was divided into several subsets to feed the adopted ANN model: 306 training records (60%), 102 selection records (20%), and 102 testing records (20%). The adopted ANN model had eight input and scaling neurons, eight principal components, three hidden layers, four perceptron neurons, and one probabilistic and output neuron. We performed feature scaling. We calculated a suitable training rate using the Broyden–Fletcher–Goldfarb–Shanno algorithm and the step for the quasi-Newton training direction using the Brent method. We performed input selection with a growing inputs algorithm and found the optimal number of artificial neurons through an order selection algorithm. We performed an input contribution analysis and found that the failure occurrence rate was most influenced by the foundation volume, the machine gravity center distortion, the angular frequency of vertical self-excited vibrations of the machine, and the concrete cover thickness. For evaluation, we prepared a receiver operating characteristic (ROC) curve, obtaining a B-class model with a sensitivity of 0.87. We also prepared a lift chart, which showed some minor issues that could indicate that the model did not order some part of the population correctly. Eventually, we drew output charts to examine the nature of the observed correlations for the four input variables with an input contribution value higher than 1.0. We found that the failure occurrence rate of concrete machine foundations increased with smaller foundation volumes and a higher angular frequency of the horizontal self-excited vibrations of the machine mounted on the foundation. It should be noted that the presented machine learning approach may not adequately reflect all relationships between the variables, and we will be developing it further. In future research, we would primarily like to enlarge our database and analyze more variables that may affect the failure occurrence rate of foundations. In addition, we would like to increase the complexity of our machine learning model and introduce convolutional networks.

Author Contributions

Conceptualization, P.Z., S.D.; methodology, P.Z., S.D.; software, P.Z., S.D.; validation, P.Z., S.D.; formal analysis, P.Z., S.D.; investigation, P.Z., S.D.; resources, P.Z., S.D., and M.N.; data curation, P.Z., S.D.; writing—original draft preparation, P.Z., S.D.; writing—review and editing, P.Z., S.D.; visualization, P.Z., S.D.; supervision, P.Z., S.D.; project administration, P.Z., S.D.; funding acquisition, S.D. and M.N.

Funding

This research received no external funding.

Acknowledgments

The authors wish to acknowledge Grupa LOTOS S.A., which provided the data used in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Srinivasulu, P.; Vaidyanathan, C.V. Handbook of Machine Foundations; Tata McGraw-Hill Education: New York, NY, USA, 1977; ISBN 0070966117. [Google Scholar]
  2. Reynolds, C.E.; Steedman, J.C.; Threlfall, A.J. Reinforced Concrete Designer’s Handbook; CRC Press: Boca Raton, FL, USA, 2007; ISBN 0203087755. [Google Scholar]
  3. Králik, J., Jr. Probability and sensitivity analysis of machine foundation and soil interaction. Appl. Comput. Mech. 2009, 3, 87–100. [Google Scholar]
  4. Council of Standards Australia. Rotating Electrical Machines; Part 1: Nominal data and parameters; PN-EN 60034-1:2009; Standards Australia: Sydney, Australia, 2009. [Google Scholar]
  5. Sitharam, T.G. Advanced Foundation Engineering; Taylor & Francis: Milton Park, UK; Abingdon-on-Thames, UK, 2007; ISBN 9788123915074. [Google Scholar]
  6. Naik, L.D.; Sajid, U.H.; Kiran, R. Texture-Based Metallurgical Phase Identification in Structural Steels: A Supervised Machine Learning Approach. Metals 2019, 9, 546. [Google Scholar] [CrossRef]
  7. Renigier-Bilozor, M.; Wisniewski, R.; Bilozor, A. Rating Attributes Toolkit for the Residential Property Market. Int. J. Strateg. Prop. Manag. 2017, 21, 307–317. [Google Scholar] [CrossRef]
  8. Biłozor, A.; Renigier-Biłozor, M. Procedure of Assessing Usefulness of the Land in the Process of Optimal Investment Location for Multi-family Housing Function. Procedia Eng. 2016, 161, 1868–1873. [Google Scholar] [CrossRef] [Green Version]
  9. Ryu, S.; Noh, J.; Kim, H. Deep neural network based demand side short term load forecasting. In Proceedings of the 2016 IEEE International Conference on Smart Grid Communication, Sydney, Australia, 6–9 November 2016. [Google Scholar] [CrossRef]
  10. Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2016; ISBN 0128043571. [Google Scholar]
  11. Neal, R.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2007; Volume 49, ISBN 1493938436. [Google Scholar]
  12. Yang, Z.R.; Yang, Z. Artificial Neural Networks. In Comprehensive Biomedical Physics; Araghinejad, S., Ed.; Springer: Dordrecht, The Netherlands, 2014; Volume 6, pp. 1–17. ISBN 9780444536327. [Google Scholar]
  13. Silva, I.N.; Hernane Spatti, D.; Andrade Flauzino, R.; Liboni, L.H.B.; Reis Alves, S.F. Artificial Neural Networks; Springer: Cham, Switzerland, 2017; ISBN 9783319431611. [Google Scholar]
  14. Yeh, I.-C. Modeling of Strength of High-Performance Concrete Using Artificial Neural Networks. Cem. Concr. Res. 1998, 28, 1797–1808. [Google Scholar] [CrossRef]
  15. Lee, S. Prediction of concrete strength using artificial neural networks. Eng. Struct. 2003, 25, 849–857. [Google Scholar] [CrossRef]
  16. Gupta, R.; Kewalramani, M.A.; Goel, A. Prediction of Concrete Strength Using Neural-Expert System. J. Mater. Civ. Eng. 2006, 18, 462–466. [Google Scholar] [CrossRef]
  17. Bui, D.K.; Nguyen, T.; Chou, J.S.; Nguyen-Xuan, H.; Ngo, T.D. A modified firefly algorithm-artificial neural network expert system for predicting compressive and tensile strength of high-performance concrete. Constr. Build. Mater. 2018, 180, 320–333. [Google Scholar] [CrossRef]
  18. Deng, F.; He, Y.; Zhou, S.; Yu, Y.; Cheng, H.; Wu, X. Compressive strength prediction of recycled concrete based on deep learning. Constr. Build. Mater. 2018, 175, 562–569. [Google Scholar] [CrossRef]
  19. Ziolkowski, P.; Niedostatkiewicz, M. Machine Learning Techniques in Concrete Mix Design. Materials 2019, 12, 1256. [Google Scholar] [CrossRef]
  20. Naderpour, H.; Rafiean, A.H.; Fakharian, P. Compressive strength prediction of environmentally friendly concrete using artificial neural networks. J. Build. Eng. 2018, 16, 213–219. [Google Scholar] [CrossRef]
  21. Hao, H.; Xia, Y. Vibration-based Damage Detection of Structures by Genetic Algorithm. J. Comput. Civ. Eng. 2002, 16, 222–229. [Google Scholar] [CrossRef]
  22. Fang, X.; Luo, H.; Tang, J. Structural damage detection using neural network with learning rate improvement. Comput. Struct. 2005, 83, 2150–2161. [Google Scholar] [CrossRef]
  23. Salazar, F.; Toledo, M.A.; Oñate, E.; Morán, R. An empirical comparison of machine learning techniques for dam behaviour modelling. Struct. Saf. 2015, 56, 9–17. [Google Scholar] [CrossRef] [Green Version]
  24. Taffese, W.Z.; Sistonen, E. Machine learning for durability and service-life assessment of reinforced concrete structures: Recent advances and future directions. Autom. Constr. 2017, 77, 1–14. [Google Scholar] [CrossRef]
  25. Bayar, G.; Bilir, T. A novel study for the estimation of crack propagation in concrete using machine learning algorithms. Constr. Build. Mater. 2019, 215, 670–685. [Google Scholar] [CrossRef]
  26. Abdi, F.; Shakeri, F. A globally convergent BFGS method for pseudo-monotone variational inequality problems. Optim. Methods Softw. 2017, 34, 25–36. [Google Scholar] [CrossRef]
  27. Andrei, N. An adaptive scaled BFGS method for unconstrained optimization. Numer. Algorithms 2018, 77, 413–432. [Google Scholar] [CrossRef]
  28. Battiti, R.; Masulli, F. BFGS optimization for faster and automated supervised learning. In International Neural Network Conference; Springer: Dordrecht, Netherlands, 1990; pp. 757–760. [Google Scholar]
  29. Berahas, A.S.; Nocedal, J.; Takác, M. A multi-batch l-bfgs method for machine learning. In Advances in Neural Information Processing Systems; The Neural Information Processing Systems Foundation: Barcelona, Spain, 2016; pp. 1055–1063. [Google Scholar]
  30. Hagan, M.T.; Menhaj, M.B. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 1994, 5, 989–993. [Google Scholar] [CrossRef] [PubMed]
  31. Li, D.-H.; Fukushima, M. A modified BFGS method and its global convergence in nonconvex minimization. J. Comput. Appl. Math. 2001, 129, 15–35. [Google Scholar] [CrossRef] [Green Version]
  32. Grabowska, K.; Szczuko, P. Ship resistance prediction with Artificial Neural Networks. In Proceedings of the Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 23–25 September 2015; pp. 168–173. [Google Scholar]
  33. Le, D.; Huang, W.; Johnson, E. Neural network modeling of monthly salinity variations in oyster reef in Apalachicola Bay in response to freshwater inflow and winds. Neural Comput. Appl. 2018, 1–11. [Google Scholar] [CrossRef]
  34. Luenberger, D.G. Introduction to Linear and Nonlinear Programming; Addison-Wesley: Boston, MA, USA, 1973. [Google Scholar]
  35. Yildizel, S.A.; Arslan, Y. Flexural strength estimation of basalt fiber reinforced fly-ash added gypsum based composites. J. Eng. Res. Appl. Sci. 2018, 7, 829–834. [Google Scholar]
  36. Mehnert, A.; Jackway, P. An improved seeded region growing algorithm. Pattern Recognit. Lett. 1997, 18, 1065–1071. [Google Scholar] [CrossRef]
  37. Huang, G.-B.; Saratchandran, P.; Sundararajan, N. An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. IEEE Trans. Syst. Man Cybern. Part B 2004, 34, 2284–2292. [Google Scholar] [CrossRef]
  38. Arora, P.; Varshney, S. Analysis of k-means and k-medoids algorithm for big data. Procedia Comput. Sci. 2016, 78, 507–512. [Google Scholar] [CrossRef]
  39. Dariane, A.B.; Azimi, S. Forecasting streamflow by combination of a genetic input selection algorithm and wavelet transforms using ANFIS models. Hydrol. Sci. J. 2016, 61, 585–600. [Google Scholar] [CrossRef]
  40. Tabakhi, S.; Moradi, P.; Akhlaghian, F. An unsupervised feature selection algorithm based on ant colony optimization. Eng. Appl. Artif. Intell. 2014, 32, 112–123. [Google Scholar] [CrossRef]
  41. Song, Y.; Liang, J.; Lu, J.; Zhao, X. An efficient instance selection algorithm for k nearest neighbor regression. Neurocomputing 2017, 251, 26–34. [Google Scholar] [CrossRef]
  42. Yu, H.; Reiner, P.D.; Xie, T.; Bartczak, T.; Wilamowski, B.M. An incremental design of radial basis function networks. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1793–1803. [Google Scholar] [CrossRef]
  43. Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
  44. Gurbuzbalaban, M.; Ozdaglar, A.; Parrilo, P.A. On the convergence rate of incremental aggregated gradient algorithms. SIAM J. Optim. 2017, 27, 1035–1048. [Google Scholar] [CrossRef]
  45. Metz, C.E. Basic Principles of ROC Analysis. In Seminars in Nuclear Medicine; Elsevier: Amsterdam, The Netherlands, 1978; Volume 8, p. 283. [Google Scholar]
Figure 1. The organization of an artificial neural network (ANN) in machine learning.
Figure 2. Block diagram of the adopted approach.
Figure 3. The scatter plots: targets versus input variables. The vertical axis is the foundation failure occurrence rate (FFOR—foundation failure occurrence rate; YES—failure occurred; NO—failure did not occur). The horizontal axis is the considered parameter, and the units depend on the parameter. (A) Machine failure occurrence (-); (B) concrete cover thickness (cm); (C) machine gravity center distortion (mm); (D) angular frequency of vertical self-excited vibrations (s−1); (E) angular frequency of horizontal self-excited vibrations (s−1); (F) amplitudes of oscillation (μm); (G) foundation area (m2); (H) foundation volume (m3).
Figure 4. Cumulative explained variance for the principal components (CEV—cumulative explained variance).
Figure 5. The architecture of the ANN. The figure shows the network architecture, which includes the following parts: input neurons ([…]), scaling neurons (green), principal components (blue), perceptron neurons (red), probabilistic neuron (yellow), and output neuron ([FFO]). Abbreviations: [MF]—machine failure occurrence; [CT]—cover thickness; [GC]—machine gravity center distortion; [VA]—angular frequency of vertical self-excited vibrations; [VB]—angular frequency of horizontal self-excited vibrations; [VC]—amplitudes of oscillation; [FA]—foundation area; [FV]—foundation volume; [FFO]—foundation failure occurrence.
Figure 6. Input contribution analysis.
Figure 7. Receiver operating characteristic (ROC) curve graph.
Figure 8. Lift chart.
Figure 9. Output charts: the diagrams show the variations in output variables for a single input variable, while the others are fixed. The vertical axis is the foundation failure occurrence rate [FFOR—foundation failure occurrence rate; HIGHER—failure occurrence rate is higher; LOWER—failure occurrence rate is lower]. The horizontal axis represents an input parameter with a contribution input value higher than 1.0. The horizontal axis units depend on the parameter. (A) Foundation volume (m3); (B) machine gravity center distortion (mm); (C) angular frequency of horizontal self-excited vibrations (s−1); (D) cover thickness (cm).
Table 1. The parameters we adopted for our dataset.
Ordinal Number | Parameter | Codename | Type | Description
1 | Foundation failure occurrence | foundations_failure_occurrence | target | Foundation failure: 1 = yes, 0 = no
2 | Machine failure occurrence | machine_failure_occurrence | input | Machine failure: 1 = yes, 0 = no
3 | Concrete cover thickness | cover_thickness | input | Concrete cover thickness (cm)
4 | Machine gravity center distortion | gravity_center_distortion | input | Deviation of the foundation with the machine from the calculated center of gravity "e" (mm)
5 | Angular frequency of vertical self-excited vibrations | vibrations_ar | input | Angular frequency of vertical self-excited vibrations ωx (s−1)
6 | Angular frequency of horizontal self-excited vibrations | vibrations_as | input | Angular frequency of horizontal self-excited vibrations ωy (s−1)
7 | Amplitudes of oscillation | vibrations_at | input | Amplitudes of oscillation in the x–y plane (μm), at the horizontal upper surface of the block
8 | Foundation area | fund_area | input | Foundation area (m2)
9 | Foundation volume | fund_volume | input | Foundation volume (m3)
Table 2. Ranges of database input and target features.
Feature | Minimum | Maximum | Average
Foundation failure occurrence (-) | 0.00 | 1.00 | -
Machine failure occurrence (-) | 0.00 | 1.00 | -
Concrete cover thickness (cm) | 2.00 | 5.00 | 3.89
Machine gravity center distortion (mm) | 6.00 | 744.00 | 98.16
Angular frequency of vertical self-excited vibrations (s−1) | 73.13 | 147.72 | 118.96
Angular frequency of horizontal self-excited vibrations (s−1) | 61.18 | 123.59 | 99.53
Amplitudes of oscillation (μm) | 0.04 | 36.51 | 4.87
Foundation area (m2) | 0.60 | 25.20 | 2.76
Foundation volume (m3) | 1.02 | 156.24 | 6.85
