Article

Rock Fragmentation Prediction Using an Artificial Neural Network and Support Vector Regression Hybrid Approach

Richard Amoako, Ankit Jha and Shuo Zhong
1 Department of Mining Engineering and Management, South Dakota School of Mines and Technology, Rapid City, SD 57701, USA
2 Strayos Inc., Buffalo, NY 14203, USA
* Author to whom correspondence should be addressed.
Mining 2022, 2(2), 233-247; https://doi.org/10.3390/mining2020013
Submission received: 2 March 2022 / Revised: 19 April 2022 / Accepted: 20 April 2022 / Published: 24 April 2022
(This article belongs to the Special Issue Envisioning the Future of Mining)

Abstract

While empirical rock fragmentation models are easy to parameterize for blast design, they are usually prone to errors, resulting in less accurate fragment size prediction. Among other shortfalls, these models may be unable to accurately account for the nonlinear relationship that exists between fragmentation input and output parameters. Machine learning (ML) algorithms are potentially able to better account for the nonlinear relationship. To this end, we assess the potential of the multilayered artificial neural network (ANN) and support vector regression (SVR) ML techniques in rock fragmentation prediction. Using geometric, explosives, and rock parameters, we build ANN and SVR models to predict mean rock fragment size. Both models yield satisfactory results and show higher performance when compared with the conventional Kuznetsov model. We further demonstrate an automated means of analyzing a varied number of hidden layers for an ANN using Bayesian optimization in the Keras Python library.

1. Introduction

Rock fragmentation is the process by which rock is broken down into smaller size distributions by mechanical tools or by blasting. The resulting fragment size distribution may be characterized by a histogram showing the percentage of sizes of particles, or as a cumulative size distribution curve [1]. The primary means of rock fragmentation in mining is blasting. A good blast produces a size distribution that is well suited to the mining system it feeds, maximizes saleable fractions, and enhances the value of saleable material [2]. Efficient blasting saves significant amounts of money that would otherwise be spent on secondary blasting [3]. It also yields significant savings on the costs of downstream comminution processes, i.e., crushing and grinding.
The results of a blast depend on several parameters, which are broadly categorized as controllable and uncontrollable [4,5]. Controllable parameters can be varied by the blasting engineer to adjust the outcome of blasting operations. Controllable parameters can be grouped into geometric, explosives, and time parameters. Geometric parameters include drill hole diameter, hole depth, charge length, spacing, burden, and stemming height. Explosives parameters include the type of explosive, explosive strength and energy, powder factor, and priming systems. Time parameters include delay timing and initiation sequence. A blasting engineer’s ability to change these controllable parameters dynamically in response to as-drilled information is critical to achieving good fragmentation [3]. The uncontrollable parameters constitute the geological and geotechnical properties of the rock mass. These parameters are inherent, and thus, cannot be varied to adjust blasting outcomes. They include rock strength, rock-specific gravity, joint spacing and condition, presence and depth of water, and compressional stress wave velocity [6]. Though these parameters cannot be varied by the blasting engineer, adequately accounting for them in a blast design helps to achieve good fragmentation. Figure 1 is a bench blast profile showing a variety of design parameters.
Several studies have sought to predict fragment size distribution based on the parameters used in blast design. Accurate prediction would give blasting engineers control over the outcome of blasting operations; engineers would know which controllable parameters to modify, and to what extent. An accurate prediction model leads to good post-blast results, with enhanced loader and excavator productivity and numerous downstream benefits. The prediction exercise, however, proves challenging because numerous parameters influence fragmentation, and because the rock mass may be heterogeneous and/or anisotropic in its structures of weakness. For this reason, it is impossible to develop a predictive tool based solely on theoretical and mechanistic reasoning [5]. Researchers have thus mostly resorted to empirical techniques for predicting the outcome of fragmentation, with the Kuz–Ram model being the most widely used. The empirical models are favored and widely used in daily blasting operations because they are easily parameterized. A major shortfall of the empirical methods, however, is that certain significant parameters are not accounted for, which leads to less accurate results. Cunningham [2] notes that essential parameters omitted by empirical techniques include rock properties and structure (e.g., joint spacing and condition), detonation behavior, and mode of decking. Other omitted parameters include blast dimensions and edge effects from the borders of the blast. Over the years, researchers have modified existing models and formulated new ones in an attempt to improve prediction accuracy. While this has contributed to significant improvement, none of the ensuing models incorporates all the important parameters, and accuracy is still of concern. In some instances, highly simplified or inappropriate procedures were used for estimating the properties of structural weakness in the rock mass [5]. Furthermore, the relationship between fragmentation input and output parameters is highly nonlinear, and empirical models may not be well suited to such modeling.
In response, researchers have in recent years sought to implement machine learning (ML) techniques for fragmentation prediction. The objective is to capture as much of the inherent nonlinearity as possible using limited input parameters and thereby improve accuracy. Kulatilake et al. [5] and Shi et al. [9] have exploited the potential of the artificial neural network (ANN) and support vector regression (SVR), respectively, for this purpose, and have achieved satisfactory results. ANN and SVR are machine learning techniques proven to possess strong nonlinearity-recognition properties. However, the ANN models in the rock fragmentation literature have been limited to a single hidden layer and do not exploit the potential of the multilayered network (an ANN with more than one hidden layer), which could lead to higher accuracy. In this research, we implement SVR and a variety of multilayered ANNs for predicting mean fragment size.
Machine learning (ML) is a branch of artificial intelligence (AI) that allows computer systems to improve their performance at a task through experience (learning) for the purpose of predicting future outcomes [7,8]. It is a multidisciplinary field that relies significantly on specialized subject areas such as probability and statistics, and control theory. ML techniques are broadly classified as supervised and unsupervised learning. Supervised learning is concerned with predicting an outcome given a set of input data. It does so by making use of the already established relationship between representative sets of input and output data that were used for model training. Unsupervised learning is concerned with data segmentation based on pattern recognition. Unsupervised ML techniques can infer patterns from data without reference to known outcomes. They are useful for discovering the underlying structure of a given data set. The rock fragmentation problem is a regression problem that is suited to tools of supervised machine learning such as multivariate regression analysis, artificial neural network (ANN), and support vector regression (SVR). The last two comprise algorithms that are more robust to nonlinear relationships between input and output data [5,9]. They are thus considered in this study since rock fragmentation input and output parameters are nonlinearly related.

2. Preliminary Background

We provide a fundamental explanation of the machine learning techniques used in this study. The section describes the architecture of the artificial neural network and support vector regression.

2.1. Artificial Neural Network (ANN)

Artificial neural network (ANN) is a machine learning technique that is inspired by the way the biological neural system works, such as how the brain processes information [7,8,10]. Information processing in ANN involves many highly interconnected processing elements known as neurons that work together to solve specific problems. The learning process involves adjustments to the synaptic connections existing between the neurons [7,11]. In the biological neural system, a neuron consists of a cell body, known as soma, an axon, and dendrites. The axon sends signals, and the dendrites receive these signals. A synapse connects an axon to a dendrite. Depending on the signal it receives, a synapse might increase or decrease electrical potential. An ANN consists of a number of neurons similar to human biological neurons. These neurons are known as units and are connected by weighted links that transmit signals from one neuron to the other [7,12]. The output signal is transmitted through the neuron’s outgoing connection, which is analogous to the axon in the biological neuron. The outgoing connection splits into a number of branches that transmit the same signal. The outgoing branches terminate at the incoming connections (analogous to dendrites) of other neurons in the network [7].
An ANN has three types of neurons, and these are known as input, hidden, and output neurons. They are stacked in layers, and receive input from preceding neurons or external sources, and use this to compute an output signal using an activation function. The activation function is a mathematical formula for determining the output of a neuron based on the neuron’s weighted inputs. The output signal is then propagated to succeeding neurons. While this is ongoing, the ANN adjusts its weights in order to record an acceptable minimal error between input variables and the final output variable(s) [13]. The complexity of the ANN architecture makes it well suited for solving both linear and nonlinear problems [10]. Advancement in computational power has enhanced its use in the fields of engineering, industrial process control, medicine, risk management, marketing, finance, communication, and transportation.
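To make these ideas concrete, the following minimal sketch builds a small fully connected network in Keras (the library used later in this study). The hidden-layer size, activation function, and optimizer here are arbitrary choices for illustration and are not the configuration reported in Section 4.

```python
# Minimal illustrative sketch (not the final model of this study): a small fully
# connected network with one hidden layer and a nonlinear activation function.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(7,)),              # seven input neurons, one per input parameter
    layers.Dense(16, activation="relu"),  # hidden neurons apply a nonlinear activation
    layers.Dense(1),                      # single output neuron: the predicted value
])
model.compile(optimizer="adam", loss="mse")  # weights are adjusted to minimize the error
model.summary()                              # lists layers, neurons, and trainable weights
```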

2.2. Support Vector Regression (SVR)

Support vector regression (SVR) is a type of supervised machine learning that is based on statistical learning theory [14]. Just like the ANN, SVR is efficient at modeling nonlinearly related variables and does well at solving both classification and regression problems. It works by nonlinearly mapping, i.e., transforming, a given data set into a higher dimensional feature space, and then solving a linear regression problem in this feature space [9,15]. That is, it seeks to predict a single output variable (ŷ) as a function of n input variables (x) using a function f(x) that has at most ε deviation from the actual values (y) for all the training data [16]. Equation (1) expresses this function in its simplest form as a linear relationship [9]:
$f(x) = b + w\,\varphi(x)$ (1)
In Equation (1), the function φ(x) denotes the high-dimensional, kernel-induced feature space. Kernel refers to the mathematical function used in the data transformation process. Different kernels are available for use in SVR analysis. They include the linear, polynomial, radial basis function (rbf), and sigmoid kernels. Parameter w in Equation (1) is a weight vector, and b is a bias term. Both w and b are calculated by minimizing a regularized cost function. Figure 2 is a graphical representation of the SVR concept. The ±ε deviation from the actual values (y) can be described as a tube that contains the sample data within a certain limit ε [16]. This implies that the function f(x) is constrained by the ±ε limits to form a tube that represents the data set with the expected deviations.
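As a minimal illustration of this concept, the sketch below fits an ε-insensitive SVR with an rbf kernel to synthetic, nonlinearly related data using Scikit-learn; the data and hyper-parameter values are arbitrary.

```python
# Illustrative only: an epsilon-insensitive SVR with an rbf kernel fitted to
# synthetic, nonlinearly related data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 3))                            # 50 samples, 3 input variables
y = np.sin(2 * np.pi * X).sum(axis=1) + 0.05 * rng.normal(size=50)  # nonlinear target with noise

svr = SVR(kernel="rbf", C=1.0, epsilon=0.05)   # epsilon sets the width of the tube
svr.fit(X, y)
print(svr.predict(X[:3]))                      # predictions for the first three samples
```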

3. Literature Review

The ability to accurately predict fragment size distribution from a given blast design will give blasting engineers control over the outcome of blasting operations. Engineers will be able to identify which controllable parameters to modify, and to what extent the modification should be. To this end, several studies have sought to predict fragment size distribution based on the parameters used in blast design. These studies have resulted in empirical prediction models, with the Kuz–Ram being the commonest model in use. Others include the CZM, two-component model (TCM), Kuznetsov–Cunningham–Ouchterlony (KCO), SveDeFo, and Larson’s equation [4,18]. The reliance on empirical models stems from the complexity that comes with the attempt to develop explicit theoretical and mechanistic equations to predict the outcome of fragmentation [2,4,5]. This complexity is primarily attributed to the fact that there are so many parameters that affect a blast, coupled with geological heterogeneity [5,9].
The Kuz–Ram model is essentially a three-part model consisting of a modified version of the Kuznetsov equation, the Rosin–Rammler equation, and the Cunningham uniformity index. The parameters defined by these equations constitute the output of the prediction model [4]. The Kuznetsov equation is for predicting the mean fragment size (X50), and the original version is given by Kuznetsov [19] as:
$X_{50} = A\left(\frac{V}{Q}\right)^{0.8} Q^{0.167}$ (2)
In Equation (2), X50 is the mean fragment size (cm); A is the rock factor (7 for medium hard rocks, 10 for hard but highly fissured rocks, 13 for very hard, weakly fissured rocks); V is the rock volume (m³); and Q is the weight of TNT (kg) equivalent in energy to the explosive charge in one borehole. A shortfall of the equation is that the rock mass categories it defines are very wide, and thus need more precision [5]. Cunningham [20,21] provides a modified version of the equation as follows:
$X_{50} = A\,K^{-0.8}\,Q^{1/6}\left(\frac{115}{RWS}\right)^{19/20}$ (3)
In Equation (3), A is the rock factor, which varies between 0.8 and 22 depending on hardness and structure; K is the powder factor, defined as the weight of explosive, in kg, per cubic meter of rock; Q is the mass, in kg, of the explosive in the hole; and RWS is the weight strength relative to ANFO (115 is the RWS of TNT).
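Equation (3) translates directly into code. The short sketch below is an illustrative implementation, with arbitrary example values chosen only to show the calculation.

```python
# Illustrative implementation of the modified Kuznetsov equation (Equation (3)).
def kuznetsov_x50(A, K, Q, rws):
    """Mean fragment size X50 (cm) from rock factor A, powder factor K (kg/m^3),
    explosive mass per hole Q (kg), and relative weight strength RWS (ANFO = 100)."""
    return A * K ** -0.8 * Q ** (1.0 / 6.0) * (115.0 / rws) ** (19.0 / 20.0)

# Arbitrary example: rock factor 7, powder factor 0.5 kg/m^3,
# 120 kg of ANFO-equivalent explosive per hole.
print(kuznetsov_x50(A=7, K=0.5, Q=120, rws=100))
```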
The role of the Rosin–Rammler equation is to estimate the complete fragmentation distribution. For a given mesh size or screen opening, X, this equation estimates the percentage of fragments retained. It is given as [22]:
$R_x = \exp\left[-\left(\frac{X}{X_c}\right)^{n}\right]$ (4)
where Rx is the proportion of fragments larger than the mesh size X (cm), and Xc is the characteristic fragment size (cm). The characteristic size is one through which 63.2% of the materials pass. If the characteristic size and the uniformity index are known, a size distribution curve can be plotted for the rock fragments [18]. The curve is plotted as percentage passing vs. mesh size. The former is obtained by subtracting Rx from one. Equation (4) can be rewritten to make direct use of the mean fragment size, X50, as follows [20,21]:
$R_x = \exp\left[-0.693\left(\frac{X}{X_{50}}\right)^{n}\right]$ (5)
From Equations (4) and (5), the characteristic size can be deduced as:
$X_c = \frac{X_{50}}{0.693^{1/n}}$ (6)
The third part of the Kuz–Ram model is the uniformity index, developed by Cunningham through several investigations which involved consideration of the effects of blast geometry, hole diameter, burden, spacing, hole length, and drilling accuracy [4]. This equation is given as [20,21]:
$n = \left(2.2 - 14\frac{B}{d}\right)\sqrt{\frac{1 + S/B}{2}}\left(1 - \frac{W}{B}\right)\left(\left|\frac{BCL - CCL}{L}\right| + 0.1\right)^{0.1}\frac{L}{H}$ (7)
where B is the burden (m); S is the spacing (m); d is the hole diameter (mm); W is the standard deviation of drilling precision (m); L is the charge length (m); BCL is the bottom charge length (m); CCL is the column charge length (m); and H is the bench height (m). Equation (7) is multiplied by 1.1 when using a staggered pattern. The value of n is essential in determining the shape of the size distribution curve, and is usually between 0.7 and 2. High values indicate uniform sizing, while low values indicate a wide range of sizes, including both oversize and fines [18,23]. Equations (3), (5), and (7) are what constitute the typical Kuz–Ram model.
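Once X50 and the uniformity index n are known, Equations (5) and (6) give the complete size distribution. The sketch below, using arbitrary example values, computes the characteristic size and the percentage passing over a few mesh sizes.

```python
# Illustrative sketch of the Rosin-Rammler part of the Kuz-Ram model
# (Equations (5) and (6)); all numerical values are examples only.
import numpy as np

def percent_passing(mesh, x50, n):
    """Percentage of fragments passing a mesh size (same units as x50)."""
    r_x = np.exp(-0.693 * (mesh / x50) ** n)   # proportion retained, Equation (5)
    return 100.0 * (1.0 - r_x)

x50, n = 0.30, 1.4                             # example mean size (m) and uniformity index
xc = x50 / 0.693 ** (1.0 / n)                  # characteristic size, Equation (6)
print(f"characteristic size Xc = {xc:.2f} m")
for mesh in (0.1, 0.3, 0.5, 1.0):
    print(f"{mesh:.1f} m mesh: {percent_passing(mesh, x50, n):.1f}% passing")
```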
Cunningham [2] makes modifications in the model twenty years on, mainly as a result of the introduction of electronic delay detonators. This leads to what is now known in the literature as the modified Kuz–Ram model. The adjustments by Cunningham incorporate the effects of inter-hole delay and timing scatter. The changes also incorporate correction factors for the rock factor and uniformity index. These changes lead to the modification of Equations (3) and (7) as follows [2]:
$X_{50} = A\,A_T\,K^{-0.8}\,Q^{1/6}\left(\frac{115}{RWS}\right)^{19/20} C(A)$ (8)
$n = n_s \sqrt{\left(2 - \frac{30B}{d}\right)\sqrt{\frac{1 + S/B}{2}}\left(1 - \frac{W}{B}\right)\left(\frac{L}{H}\right)^{0.3}}\;C(n)$ (9)
where AT is a timing factor for the effect of inter-hole delay, C(A) is a correction factor for the rock factor, ns is the uniformity factor for the effect of timing scatter, and C(n) is a correction factor for the uniformity index. Thus, the modified Kuz–Ram model comprises Equations (5), (8) and (9).
A major shortfall of the Kuz–Ram model is the underestimation of fines. Extensions to the model have, thus, emerged with the objective of improving the prediction of fines. The CZM and TCM are such models [18]. Kanchibotla, Valery, and Morrell [24] address the issue of fines via the CZM model, which provides fragment distribution based on the coarse and fine parts of the muck pile. The authors note that during blasting, two different mechanisms control rock fragmentation, i.e., tensile fracturing and compressive-shear fracturing. Tensile fracturing produces coarse fragments, while compressive fracturing produces the fines. The model predicts the coarser part of the size distribution using the Kuz–Ram model. The size distribution of the finer part is predicted by modifying the values of n and Xc in the Rosin–Rammler equation. Djordjevic [25] develops a two-component model (TCM) based on the same mechanisms of failure captured by Kanchibotla et al. [24] in their work. The model utilizes experimentally determined parameters from small-scale blasting, and parameters of the Kuz–Ram model, to obtain an improved prediction of fragment size distribution.
Ouchterlony [26] develops the KCO model which ties in the Kuz–Ram, CZM, and TCM models. The KCO model replaces the original Rosin–Rammler equation with the Swebrec function to predict rock fragment size distribution. The replacement stems from the author’s recognition that the Rosin–Rammler curve has limited ability to follow the various distributions from blasting. The Swebrec function proves to be more adaptable and is able to predict fines better. The model is given by Equations (10) and (11) as follows [26]:
$P(x) = \frac{1}{1 + f(x)}$ (10)
$f(x) = \left[\frac{\ln(X_{max}/x)}{\ln(X_{max}/X_{50})}\right]^{b}$ (11)
where P(x) is the percentage of fragments passing a given mesh size x; Xmax is the upper limit of fragment size; X50 is the mean fragment size; and b is the curve undulation parameter. Just like the Rosin–Rammler model, the Swebrec function has the mean fragment size (X50) as its central parameter, but it introduces an upper limit to fragment size (Xmax). While the aforementioned extensions to the Kuz–Ram model improve the distribution of fines, they introduce yet another factor into a predictive model that is already somewhat extended [2].
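For completeness, a direct reading of Equations (10) and (11) is sketched below; the parameter values are arbitrary and purely illustrative.

```python
# Illustrative sketch of the Swebrec-based KCO model (Equations (10) and (11)).
import numpy as np

def swebrec_passing(x, x50, xmax, b):
    """Fraction of fragments passing mesh size x, for 0 < x <= xmax."""
    f = (np.log(xmax / x) / np.log(xmax / x50)) ** b   # Equation (11)
    return 1.0 / (1.0 + f)                             # Equation (10)

print(swebrec_passing(x=0.2, x50=0.3, xmax=1.5, b=2.0))  # arbitrary example values
```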
With the advancement in computational power, attention is being drawn to the use of machine learning (ML) in rock fragmentation prediction. Over the last decade, researchers have used multivariate regression (MVR) analysis, artificial neural network (ANN), and support vector regression (SVR) to predict fragment size distribution. In their work, Hudaverdi, Kulatilake, and Kuzu [27] use MVR analysis to develop prediction equations for the estimation of the mean particle size of muck piles. They develop two different equations based on rock stiffness. The equations incorporate blast design parameters (i.e., burden, spacing, bench height, stemming, and hole diameter) expressed as ratios, explosives parameters (i.e., powder factor), and rock mass properties (i.e., elastic modulus and in situ block size). Comparative analysis involving results of the prediction equations, Kuznetsov empirical equation, and the actual values prove the capability of the proposed models in offering satisfactory results. The authors make use of a diverse database (the largest ever used in research at the time) representing blasts conducted in different parts of the world. This makes their prediction models robust to a wide range of blast design parameters and rock conditions.
Building upon the work of Hudaverdi et al. [27], Kulatilake et al. [5] developed MVR and ANN models for the same set of data used in the former authors’ work. The authors train a single hidden layer neural network model to predict the mean particle size for each of two groups of data, as distinguished by the rock stiffness. The authors perform extensive analysis to determine the optimum number of neurons for the hidden layer. Comparative analysis reveals that the MVR and ANN models perform better than the conventional Kuznetsov model. Shi et al. [9] build upon the work of Kulatilake et al. [5] by exploiting the potential of using support vector regression (SVR) for predicting rock fragmentation. Using the same data set as the previous authors, Shi et al. [9] develop an SVR model for predicting mean fragment size. They compare the results of the SVR model with those of ANN, MVR, Kuznetsov, and the actual values. The comparison shows that SVR is capable of providing acceptable prediction accuracy.
The effectiveness of prediction models is assessed via comparative analysis involving post-blast measurement. Post-blast measurement techniques have been developed over the years for determining the true fragment size after a blast is completed. An accurate predictive model will record insignificant deviation from the true fragment size. The available techniques for measuring fragmentation output can be classified as direct and indirect [3]. The direct methods include sieve analysis, boulder count, and direct measuring of fragments. The most accurate method of determining fragmentation is to sieve the whole muck pile. However, because muck piles are large, the use of sieving and the other direct methods can be tedious, time-consuming, and costly [5]. Thus, they are not practicable for determining muck pile fragment distribution. They can, however, be used for smaller amounts of fragment material, and for very special purposes [3].
The indirect methods of fragment size measurement include digital image processing, and measurement of parameters, which can be correlated to the degree of fragmentation [3]. Digital image processing involves the use of sophisticated software and hardware for measuring fragment size. It is the latest fragmentation analysis tool and has largely replaced the conventional methods. The use of this tool comprises the following steps: image capturing of muck pile, image scaling, image filtering, image segmentation, binary image manipulation, measurement, and stereometric interpretation [5]. Though quick and cost-effective, this tool has some challenges. Non-uniform lighting, shadows, and a large range of fragment sizes can make fragment delineation very difficult. Another challenge is the overestimation of fines since the computer treats all undigitized voids between the fragments as fines. Thus, to obtain accurate estimation, a correction must be applied. Additionally, the wide variations in size may require different scales of calibration [5,28].

4. Data and Methodology

This section discusses the data and methods employed in this study. The data set comprises 102 blasts. Using this data set, we develop multilayered artificial neural network and support vector regression models that satisfactorily predict mean rock fragment size.

4.1. Data Source and Description

The data set used in this work is obtained from the blast database compiled by Hudaverdi et al. [27], and subsequently used by Kulatilake et al. [5] and Shi et al. [9]. The compilation consists of blast data from various mines around the world. The data, therefore, represents a diverse range of blast design parameters and rock formations. Having such a diverse range of data is good for the purpose of this study, i.e., training machine learning models for prediction. The implication here is that the predictive ability of the ensuing models would span a wide variety of rock formations. The compilation by Hudaverdi et al. [27] represents one of the largest and most diverse blast data collections in the literature, and thus fits the purpose of this study.
Table 1 shows a sample of the data. A summary of the individual research projects from which Hudaverdi et al. [27] compiled the data is provided hereafter. Blasts with labels “Rc”, “En”, and “Ru” are from research by Hamdi, Du Mouza, and Fleurisson [29], and Aler, Du Mouza, and Arnould [30] at the Enusa and Reocin mines in Spain. The Enusa Mine is an open-pit uranium mine in a schistose formation that is moderately to heavily folded. The Reocin Mine is an open pit and underground zinc mine. Blasts designated “Mg” are from a study by Hudaverdi [31] at the Murgul Copper Mine, an open-pit mine in northeastern Turkey. Those designated “Mr” are from a study by Ouchterlony et al. [28] at the Mrica Quarry in Indonesia. The rock formation is mainly andesite. Blasts with the “Sm” label are from an open-pit coal mine in Soma Basin, in Western Turkey [32]. Blasts labeled “Db” are from the Dongri-Buzurg open-pit manganese mine in Central India. The rock formation is generally micaceous schist and muscovite schist [33]. Blasts labeled “Ad” and “Oz” are, respectively, from the Akdaglar and Ozmert quarries of the Cendere basin in northern Istanbul. The rock formation at both quarries is sandstone [27].
Table 1. Sample blast data [5,9,27,28,29,30,31,32,33].
ID | S/B | H/B | B/D | T/B | Pf (kg/m³) | Xb (m) | E (GPa) | X50 (m)
En1 | 1.24 | 1.33 | 27.27 | 0.78 | 0.48 | 0.58 | 60 | 0.37
En2 | 1.24 | 1.33 | 27.27 | 0.78 | 0.48 | 0.58 | 60 | 0.37
En3 | 1.24 | 1.33 | 27.27 | 0.78 | 0.48 | 1.08 | 60 | 0.33
Rc1 | 1.17 | 1.5 | 26.2 | 1.08 | 0.33 | 0.68 | 45 | 0.46
Rc2 | 1.17 | 1.5 | 26.2 | 1.12 | 0.3 | 0.68 | 45 | 0.48
Rc3 | 1.17 | 1.58 | 26.2 | 1.22 | 0.28 | 0.68 | 45 | 0.48
Mg1 | 1 | 2.67 | 27.27 | 0.89 | 0.75 | 0.83 | 50 | 0.23
Mg2 | 1 | 2.67 | 27.27 | 0.89 | 0.75 | 0.78 | 50 | 0.25
Mg3 | 1 | 2.4 | 30.3 | 0.8 | 0.61 | 1.02 | 50 | 0.27
Ru1 | 1.13 | 5 | 39.47 | 1.93 | 0.31 | 2 | 45 | 0.64
Ru2 | 1.2 | 6 | 32.89 | 3.67 | 0.3 | 2 | 45 | 0.54
Ru3 | 1.2 | 6 | 32.89 | 3.7 | 0.3 | 2 | 45 | 0.51
Mr1 | 1.2 | 6 | 32.89 | 0.8 | 0.49 | 1.67 | 32 | 0.17
Mr2 | 1.2 | 6 | 32.89 | 0.8 | 0.51 | 1.67 | 32 | 0.17
Mr3 | 1.2 | 6 | 32.89 | 0.8 | 0.49 | 1.67 | 32 | 0.13
Db1 | 1.25 | 3.5 | 20 | 1.75 | 0.73 | 1 | 9.57 | 0.44
Db2 | 1.25 | 5.1 | 20 | 1.75 | 0.7 | 1 | 9.57 | 0.76
Db3 | 1.38 | 3 | 20 | 1.75 | 0.62 | 1 | 9.57 | 0.35
Sm1 | 1.25 | 2.5 | 28.57 | 0.83 | 0.42 | 0.5 | 13.25 | 0.15
Sm2 | 1.25 | 2.5 | 28.57 | 0.83 | 0.42 | 0.5 | 13.25 | 0.19
Sm3 | 1.25 | 2.5 | 28.57 | 0.83 | 0.42 | 0.5 | 13.25 | 0.23
Ad1 | 1.2 | 4.4 | 28.09 | 1.2 | 0.58 | 0.77 | 16.9 | 0.15
Ad2 | 1.2 | 4.8 | 28.09 | 1.2 | 0.66 | 0.56 | 16.9 | 0.17
Ad3 | 1.2 | 4.8 | 28.09 | 1.2 | 0.72 | 0.29 | 16.9 | 0.14
Oz1 | 1 | 2.83 | 33.71 | 1 | 0.48 | 0.45 | 15 | 0.27
Oz2 | 1.2 | 2.4 | 28.09 | 1 | 0.53 | 0.86 | 15 | 0.14
Oz3 | 1.2 | 2.4 | 28.09 | 1 | 0.53 | 0.44 | 15 | 0.14
The data set features blast design parameters that can be categorized as geometric, explosives, and rock parameters. The geometric parameters include burden, B (m), spacing, S (m), stemming, T (m), hole depth, H (m), and hole diameter, D (m). These are represented in the data set as ratios and include hole depth to burden (H/B), spacing to burden (S/B), burden to hole diameter (B/D), and stemming to burden (T/B) ratios. The powder factor, Pf (kg/m³), represents the explosives parameter and shows the distribution of explosives in the rock. The elastic modulus, E (GPa), and the in situ block size, Xb (m), represent the rock parameters. Specifically, the in situ block size represents the rock mass structure, while the elastic modulus represents the intact rock properties [27]. In effect, a total of seven rock fragment size prediction parameters are in the data set, and these constitute the input parameters (independent variables) for the SVR and ANN models. The data set also features a post-blast parameter, X50 (m), which is the actual mean fragment size. This is the output parameter (dependent variable) to be predicted by the models. Table 2 shows the summary statistics of the seven input parameters and the mean fragment size for the entire data set.

4.2. Model Development

Support vector regression (SVR) and artificial neural network (ANN) models are built for a total of 102 blasts. We split the data into training and test sets comprising 90 and 12 blasts, respectively. The test set comprises blasts for which Kuznetsov predictions are available alongside the actual fragment sizes, for the purpose of comparative assessment of results. The data set is scaled to the range 0–1 since the parameters have different orders of magnitude. The scaling is performed using the MinMaxScaler function of the Scikit-learn Python library [34]. The SVR and ANN models are built using the Scikit-learn and Keras Python libraries, respectively [34,35].
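A sketch of this pre-processing step is given below. It assumes the seven input parameters and the mean fragment size are held in a pandas DataFrame named df with the column names shown (an assumption for illustration). The random split shown is also only illustrative; the actual 90/12 split in this study is the one described above.

```python
# Illustrative pre-processing sketch; `df` and its column names are assumptions.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = df[["S/B", "H/B", "B/D", "T/B", "Pf", "Xb", "E"]].values
y = df["X50"].values

# Hold out 12 blasts for testing (illustrative random split).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=12, random_state=0)

scaler = MinMaxScaler(feature_range=(0, 1))
X_train_scaled = scaler.fit_transform(X_train)   # fit the 0-1 scaling on the training data
X_test_scaled = scaler.transform(X_test)         # apply the same scaling to the test data
```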

4.2.1. SVR Modeling

Using Scikit-learn, we develop and train a support vector regression model for prediction. The modeling process involves iterating over several combinations of the following support vector hyper-parameters: regularization (C), epsilon (ε), and kernel (k). Four kernels are considered for modeling, i.e., radial basis function (rbf), polynomial (poly), sigmoid, and linear. Twenty-five different values of C are considered in the interval [1, 10], and twenty-seven different values of ε are considered in the interval [1 × 10⁻⁶, 0.3]. This yields a total of 2700 combinations of hyper-parameters, each representing a unique SVR model. The process of searching for the optimal combination of these hyper-parameters (adjustable parameters that control the support vector model) is known as hyper-parameter tuning. To aid with this process, the GridSearchCV function in Scikit-learn is used [34]. It involves building SVR models using each of these hyper-parameter combinations and subsequently using cross-validation to assess model performance. We adopt the five-fold cross-validation technique. This means that for each hyper-parameter combination, the data are split into five folds. The hyper-parameter combination undergoes five runs of model training, and during each run, a distinct fold (one-fifth of the training data) is set aside for validation purposes. The final score assigned to the hyper-parameter combination is the average validation score from the five runs. This process is repeated for all other hyper-parameter combinations. We retrieve the best performing combination of hyper-parameters, which is C = 5.25, ε = 0.04, and kernel = rbf. The final SVR model is thus built using these hyper-parameters.
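The search described above can be expressed with GridSearchCV roughly as follows. The grid values only approximate the ranges reported in the text, and X_train_scaled and y_train are assumed to come from the pre-processing sketch in Section 4.2.

```python
# Illustrative sketch of the SVR hyper-parameter search with five-fold
# cross-validation; the grids only approximate the ranges described above.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

param_grid = {
    "kernel": ["rbf", "poly", "sigmoid", "linear"],
    "C": np.linspace(1, 10, 25),            # 25 values of the regularization parameter
    "epsilon": np.linspace(1e-6, 0.3, 27),  # 27 values of epsilon
}
search = GridSearchCV(
    SVR(),
    param_grid,
    cv=5,                              # five-fold cross-validation
    scoring="neg_mean_squared_error",  # scikit-learn maximizes scores, so MSE is negated
)
search.fit(X_train_scaled, y_train)
print(search.best_params_)             # best kernel, C, and epsilon found by the search
final_svr = search.best_estimator_     # refit on the full training set
```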
In this study, retrieval of the best performing combination is based on the mean squared error (MSE) scoring metric. The MSE is a statistical metric that provides a means of assessing performance between two or more models. For each model, the MSE measures the average squared difference between the actual and predicted values. A perfect model would yield an MSE of zero, signifying that the actual values are perfectly predicted by the model, i.e., there is no error in prediction. In machine learning, the best-performing model among alternatives will be the one with MSE closest to zero. We show the MSE values for selected hyper-parameter combinations for the training and test data in Figure 3. From the figure, we observe that models with rbf kernels have better generalization abilities in respect of unseen, real-world data, i.e., data not included in the training process. This is represented by the test data. The best-performing model retrieved from the hyper-parameter tuning is of the rbf kernel type. It yields the lowest MSE value for the test data.

4.2.2. ANN Modeling

Using Keras, we develop a variety of multilayered ANNs with up to four hidden layers for prediction. In each instance, hyper-parameter tuning is performed to obtain an optimal number of neurons (units) for the hidden layers under consideration. In all cases, the input and output layers have fixed neurons, being seven and one, respectively. These represent the seven input parameters, and the output parameter (X50), which we seek to predict. Figure 4 is a schematic representing the general architecture of the ANNs used in this study.
For each instance of hidden layers, hyper-parameter tuning is performed using the Bayesian optimization object in Keras [35]. The process involves iterating over several combinations of neurons for a given instance of hidden layers and returning the combination that yields the best performance. This process can be very cumbersome and time-consuming when carried out manually. The use of Bayesian optimization saves time by automating the search process for the best combination of neurons for a given number of hidden layers. During the search process, 20% of the training data is set aside for validation purposes using the MSE scoring metric. The remaining data are used for training, and this involves running 1500 epochs to yield an acceptable reduction in prediction error.
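A sketch of this automated search is given below. It assumes the KerasTuner package (keras_tuner), which provides the BayesianOptimization tuner used with Keras models; the number of hidden layers, neuron ranges, activation function, and trial budget are illustrative assumptions rather than the exact settings of this study, and X_train_scaled and y_train are assumed from the earlier pre-processing sketch.

```python
# Illustrative sketch of the automated neuron search for one hidden-layer instance,
# assuming the keras_tuner package and a four-hidden-layer candidate architecture.
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    model = keras.Sequential()
    model.add(keras.Input(shape=(7,)))                     # seven input parameters
    for i in range(4):                                     # four hidden layers in this example
        units = hp.Int(f"units_{i}", min_value=5, max_value=200, step=5)
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(1))                             # single output: predicted X50
    model.compile(optimizer="adam", loss="mse")
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_loss", max_trials=30)
tuner.search(X_train_scaled, y_train, validation_split=0.2, epochs=1500, verbose=0)
best_model = tuner.get_best_models(num_models=1)[0]
```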
Table 3 shows the results for the various hidden layers considered. For each instance of hidden layers, the table shows the optimal number of neurons returned via hyper-parameter tuning. The neural network with four hidden layers is selected as the final ANN model. This is based on the test scores, which represent the ability of the models to generalize to unseen, real-world data. The four-hidden-layer architecture has the lowest test score.
In the second configuration of hidden layers, the batch normalization (BN) technique serves to control model overfitting, so as to improve model generalization in respect of unseen, real-world data. Batch normalization applies a transformation that maintains the mean output close to zero and the output standard deviation close to 1, thereby standardizing the inputs to a given layer [35]. We show the performance of selected hyper-parameter combinations for the various hidden layer instances in Figure 5. The figure shows how the final ANN model (M8) compares with other models from the hyper-parameter tuning exercise. Model M5 has the worst generalization performance while model M8 has the best generalization performance.
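A sketch of how a batch normalization layer would sit between the two hidden layers of the 25-BN-45 configuration in Table 3 is shown below; the activation and training settings are illustrative assumptions.

```python
# Illustrative sketch of the two-hidden-layer configuration with batch
# normalization between the layers (the 25-BN-45 arrangement in Table 3).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(7,)),
    layers.Dense(25, activation="relu"),
    layers.BatchNormalization(),   # standardizes the signals fed to the next layer
    layers.Dense(45, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```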

5. Results and Discussion

Through hyper-parameter tuning, we obtain the final SVR and ANN models. For the purpose of assessing model generalization, we subject these models to testing. The test data set comprises 12 blasts; these are not used for training. The performance of the model on this data shows how well it will perform when deployed in the real world. Table 4 shows the performance of the final models on the training and test sets using the mean squared error (MSE) as a scoring metric.
For the purpose of comparative assessment, the Kuznetsov empirical technique, i.e., Equation (3), is used to predict the mean rock fragment size for the test data. Test results obtained for the ANN and SVR models are compared with those for the Kuznetsov technique and the actual values. Table 5 and Figure 6 show the results for all three modeling techniques. It is observed that the ANN model records the least error, while the Kuznetsov model records the highest error. The coefficient of determination (r²) measures the proportion of the variation in the dependent variable (mean fragment size) that is accounted for by its relationship with the independent variables. It ranges between zero and one, and a model with r² closer to one is considered reliable in predicting the dependent variable. The ANN and SVR models record r² values of 0.87 and 0.81, respectively, on the test data, compared with 0.58 for the Kuznetsov model. The foregoing indicates that the ANN and SVR models are better able to model the relationship between the dependent and independent variables than the Kuznetsov empirical model. They show superior performance to the Kuznetsov as a result of their inherent ability to model complex, nonlinear relationships, such as exist between rock fragment size and blast design parameters.
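The metrics reported in Tables 4 and 5 can be computed from the model predictions with Scikit-learn. The sketch below assumes y_test holds the actual mean fragment sizes of the 12 test blasts and that ann_pred, svr_pred, and kuz_pred are hypothetical arrays holding the corresponding model predictions.

```python
# Illustrative metric computation; y_test, ann_pred, svr_pred, and kuz_pred are
# assumed arrays of actual and predicted mean fragment sizes for the test blasts.
from sklearn.metrics import mean_squared_error, r2_score

for name, y_pred in [("ANN", ann_pred), ("SVR", svr_pred), ("Kuznetsov", kuz_pred)]:
    mse = mean_squared_error(y_test, y_pred)
    r2 = r2_score(y_test, y_pred)
    print(f"{name}: MSE = {mse:.4f}, r2 = {r2:.2f}")
```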

6. Conclusions and Future Work

The paper successfully demonstrates the potential of achieving higher accuracy in mean rock fragment size prediction using multilayered artificial neural network (ANN) and support vector regression (SVR). Using varied blast data sets from different parts of the world, we obtain training and test sets comprising 90 and 12 blasts, respectively, for building multilayered ANN and SVR models. Both models perform satisfactorily and better than the conventional Kuznetsov empirical model. The paper further demonstrates the possibility of analyzing a varied number of hidden layers for a neural network in a less cumbersome way using Keras. Keras makes it less time-consuming to consider the performance of a wide variety of hidden layers and neurons via the Bayesian optimization feature. Thus, multilayered ANN analysis of rock fragmentation, which is typically time-consuming, can be carried out in a relatively shorter time. The end goal here is that blasting engineers would be able to fully exploit the potential of the multilayered ANN architecture for improved performance without having to do manual hyper-parameter tuning. The trained ANN and SVR models could be incorporated into existing fragmentation analysis software to give blasting engineers more accurate options for mean rock fragment size estimation. This incorporation would make it possible for blasting engineers to have access to results from both empirical and machine learning techniques. Blasting engineers would then be able to conduct post-blast analysis to verify the improved accuracy offered by the machine learning techniques. Commercial fragmentation software providers could adopt this integrated approach to gradually build client confidence in the use of machine learning techniques over time.
In the future, we seek to improve model performance via data augmentation. We intend to do this using the variational autoencoder (VAE) technique. A VAE is a deep learning technique that fits a probability distribution to a given data set and then samples from the distribution to create new, unseen samples. Thus, the VAE offers a means of augmenting the data set used in this study to improve model training, and thereby enhance pattern recognition and prediction. We also seek to build additional rock fragmentation models using other machine learning techniques. The final phase of this project will involve developing robust machine learning-based fragmentation software that will predict not only the mean fragment size but the entire fragment size distribution.

Author Contributions

Conceptualization, R.A., A.J. and S.Z.; Formal analysis, R.A.; Methodology, R.A.; Supervision, A.J. and S.Z.; Visualization, R.A. and A.J.; Writing—Original draft, R.A.; Writing—Review & editing, A.J. and S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available in Hudaverdi et al. [27] (pp. 1322, 1331).

Acknowledgments

We acknowledge the guidance of Ravi Sahu and Oktai Radzhabov of Strayos Inc., Buffalo, NY, USA.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rustan, A. (Ed.) Rock Blasting Terms and Symbols: A Dictionary of Symbols and Terms in Rock Blasting and Related Areas Like Drilling, Mining and Rock Mechanics; A. A. Balkema: Rotterdam, Holland, 1998. [Google Scholar]
  2. Cunningham, C.V.B. The Kuz-Ram fragmentation model—20 years on. Bright. Conf. Proc. 2005, 4, 201–210. [Google Scholar] [CrossRef]
  3. Roy, M.P.; Paswan, R.K.; Sarim, M.D.; Kumar, S.; Jha, R.R.; Singh, P.K. Rock fragmentation by blasting: A review. J. Mines Met. Fuels 2016, 64, 424–431. [Google Scholar]
  4. Adebola, J.M.; Ogbodo, D.A.; Peter, E.O. Rock fragmentation prediction using Kuz-Ram model. J. Environ. Earth Sci. 2016, 6, 110–115. [Google Scholar]
  5. Kulatilake, P.H.S.W.; Qiong, W.; Hudaverdi, T.; Kuzu, C. Mean particle size prediction in rock blast fragmentation using neural networks. Eng. Geol. 2010, 114, 298–311. [Google Scholar] [CrossRef]
  6. Jha, A.; Rajagopal, S.; Sahu, R.; Tukkaraja, P. Detection of geological discontinuities using aerial image analysis and machine learning. In Proceedings of the 46th Annual Conference on Explosives and Blasting Technique, Denver, CO, USA, 26–29 January 2020; ISEE: Cleveland, OH, USA, 2020; pp. 1–11. [Google Scholar]
  7. Grosan, C.; Abraham, A. Intelligent Systems: A Modern Approach, 1st ed.; Springer: New York, NY, USA, 2011. [Google Scholar] [CrossRef]
  8. Dumakor-Dupey, N.K.; Sampurna, A.; Jha, A. Advances in blast-induced impact prediction—A review of machine learning applications. Minerals 2021, 11, 601. [Google Scholar] [CrossRef]
  9. Shi, X.Z.; Zhou, J.; Wu, B.B.; Huang, D.; Wei, W. Support vector machines approach to mean particle size of rock fragmentation due to bench blasting prediction. Trans. Nonferrous Met. Soc. China 2012, 22, 432–441. [Google Scholar] [CrossRef]
  10. Amoako, R.; Brickey, A. Activity-based respirable dust prediction in underground mines using artificial neural network. In Mine Ventilation–Proceedings of the 18th North American Mine Ventilation Symposium; Tukkaraja, P., Ed.; CRC Press: Boca Raton, FL, USA, 2021; pp. 410–418. [Google Scholar] [CrossRef]
  11. Ertel, W. Introduction to Artificial Intelligence (Undergraduate Topics in Computer Science), 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  12. Dixon, D.W.; Ozveren, C.S.; Sapulek, A.T. The application of neural networks to underground methane prediction. In Proceedings of the 7th US Mine Ventilation Symposium, Lexington, KY, USA, 5–7 June 1995; pp. 49–54. [Google Scholar]
  13. Krose, B.; van der Smagt, P. An Introduction to Neural Networks, 8th ed.; University of Amsterdam: Amsterdam, The Netherlands, 1996. [Google Scholar]
  14. Zhou, J.; Li, X.; Shi, X. Long-term prediction model of rockburst in underground openings using heuristic algorithms and support vector machines. Saf. Sci. 2012, 50, 629–644. [Google Scholar] [CrossRef]
  15. Shuvo, M.M.H.; Ahmed, N.; Nouduri, K.; Palaniappan, K. A Hybrid approach for human activity recognition with support vector machine and 1D convolutional neural network. In Proceedings of the 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 13–15 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–5. [Google Scholar]
  16. Hamed, Y.; Ibrahim Alzahrani, A.; Shafie, A.; Mustaffa, Z.; Che Ismail, M.; Kok Eng, K. Two steps hybrid calibration algorithm of support vector regression and K-nearest neighbors. Alex. Eng. J. 2020, 59, 1181–1190. [Google Scholar] [CrossRef]
  17. Nandi, S.; Badhe, Y.; Lonari, J.; Sridevi, U.; Rao, B.S.; Tambe, S.S. Hybrid process modeling and optimization strategies integrating neural networks/support vector regression and genetic algorithms: Study of benzene isopropylation on Hbeta catalyst. Chem. Eng. J. 2004, 97, 115–129. [Google Scholar] [CrossRef]
  18. Gheibie, S.; Aghababaei, H.; Hoseinie, S.H.; Pourrahimian, Y. Modified Kuz-Ram fragmentation model and its use at the Sungun Copper Mine. Int. J. Rock Mech. Min. Sci. 2009, 46, 967–973. [Google Scholar] [CrossRef]
  19. Kuznetsov, V.M. The mean diameter of the fragments formed by blasting rock. Sov. Min. Sci. 1973, 9, 144–148. [Google Scholar] [CrossRef]
  20. Cunningham, C.V.B. The Kuz–Ram model for prediction of fragmentation from blasting. In Proceedings of the First International Symposium on Rock Fragmentation by Blasting; Luleå, Sweden, 23–26 August 1983; Holmberg, R., Rustan, A., Eds.; Lulea University of Technology: Luleå, Sweden, 1983; pp. 439–454. [Google Scholar]
  21. Cunningham, C.V.B. Fragmentation estimations and the Kuz–Ram model—Four years on. In Proceedings of the Second International Symposium on Rock Fragmentation by Blasting, Keystone, CO, USA, 23–26 August 1987; pp. 475–487. [Google Scholar]
  22. Rosin, P.; Rammler, E. The laws governing the fineness of powdered coal. J. Inst. Fuel 1933, 7, 29–36. [Google Scholar]
  23. Clark, G.B. Principles of Rock Fragmentation; John Wiley & Sons: New York, NY, USA, 1987. [Google Scholar]
  24. Kanchibotla, S.S.; Valery, W.; Morrell, S. Modelling fines in blast fragmentation and its impact on crushing and grinding. In Explo ‘99–A Conference on Rock Breaking; The Australasian Institute of Mining and Metallurgy: Kalgoorlie, Australia, 1999; pp. 137–144. [Google Scholar]
  25. Djordjevic, N. Two-component model of blast fragmentation. In Proceedings of the 6th International Symposium on Rock Fragmentation by Blasting, Johannesburg, South Africa, 8–12 August 1999; pp. 213–219. [Google Scholar]
  26. Ouchterlony, F. The Swebrec© function: Linking fragmentation by blasting and crushing. Inst. Min. Metall. Trans. Sect. A Min. Technol. 2005, 114, 29–44. [Google Scholar] [CrossRef] [Green Version]
  27. Hudaverdi, T.; Kulatilake, P.H.S.W.; Kuzu, C. Prediction of blast fragmentation using multivariate analysis procedures. Int. J. Numer. Anal. Methods Geomech. 2010, 35, 1318–1333. [Google Scholar] [CrossRef]
  28. Ouchterlony, F.; Niklasson, B.; Abrahamsson, S. Fragmentation monitoring of production blasts at Mrica. In Proceedings of the International Symposium on Rock Fragmentation by Blasting, Brisbane, Australia, 26–31 August 1990; McKenzie, C., Ed.; Australasian Institute of Mining and Metallurgy: Carlton, Australia, 1990; pp. 283–289. [Google Scholar]
  29. Hamdi, E.; Du Mouza, J.; Fleurisson, J.A. Evaluation of the part of blasting energy used for rock mass fragmentation. Fragblast 2001, 5, 180–193. [Google Scholar] [CrossRef]
  30. Aler, J.; Du Mouza, J.; Arnould, M. Measurement of the fragmentation efficiency of rock mass blasting and its mining applications. Int. J. Rock Mech. Min. Sci. Geomech. 1996, 33, 125–139. [Google Scholar] [CrossRef]
  31. Hudaverdi, T. The Investigation of the Optimum Parameters in Large Scale Blasting at KBI Black Sea Copper Works—Murgul Open-Pit Mine. Master’s Thesis, Istanbul Technical University, Istanbul, Turkey, 2004. [Google Scholar]
  32. Ozcelik, Y. Effect of discontinuities on fragment size distribution in open-pit blasting—A case study. Trans. Inst. Min. Metall. Sect. A Min. Ind. 1998, 108, 146–150. [Google Scholar]
  33. Jhanwar, J.C.; Jethwa, J.L.; Reddy, A.H. Influence of air-deck blasting on fragmentation in jointed rocks in an open-pit manganese mine. Eng. Geol. 2000, 57, 13–29. [Google Scholar] [CrossRef]
  34. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar] [CrossRef]
  35. Chollet, F. Keras. Github Repos. 2015. Available online: https://github.com/fchollet/keras (accessed on 15 June 2020).
Figure 1. Blast design terminology [5].
Figure 2. Graphical representation of support vector regression [17].
Figure 3. MSE plot for selected SVR hyper-parameter combinations.
Figure 4. ANN architecture for rock fragmentation prediction.
Figure 5. MSE plot for selected ANN hyper-parameter combinations.
Figure 6. MSE plot for test data.
Table 2. Summary statistics.
Variable | Minimum | Maximum | Mean | Standard Deviation
Input: S/B | 1 | 1.75 | 1.20 | 0.11
H/B | 1.33 | 6.82 | 3.46 | 1.60
B/D | 17.98 | 39.47 | 27.23 | 4.91
T/B | 0.5 | 4.67 | 1.27 | 0.69
Pf (kg/m³) | 0.22 | 1.26 | 0.55 | 0.24
Xb (m) | 0.29 | 2.35 | 1.16 | 0.48
E (GPa) | 9.57 | 60 | 30.18 | 17.52
Output: X50 (m) | 0.12 | 0.96 | 0.31 | 0.18
Table 3. Optimal neurons for hidden layers.
Number of Hidden Layers | Optimal Neurons for Hidden Layers | MSE for Test Data | Selected Model
1 | 90 | 0.0059 |
2 | 25-BN-45 | 0.0039 |
3 | 60-195-190 | 0.0040 |
4 | 115-40-180-35 | 0.0031 | Yes
Table 4. Model performance.
Model | Training MSE | Test MSE
SVR (C = 5.25, ε = 0.04, kernel = rbf) | 0.0026 | 0.0044
ANN (115-40-180-35) | 0.0028 | 0.0031
Table 5. Results for test data.
Blast Number | Actual Mean Fragment Size (m) | ANN Prediction (m) | SVR Prediction (m) | Kuznetsov Prediction (m)
1 | 0.47 | 0.44 | 0.38 | 0.48
2 | 0.64 | 0.68 | 0.64 | 0.71
3 | 0.44 | 0.38 | 0.41 | 0.42
4 | 0.25 | 0.25 | 0.25 | 0.33
5 | 0.20 | 0.15 | 0.14 | 0.27
6 | 0.35 | 0.21 | 0.52 | 0.09
7 | 0.18 | 0.19 | 0.19 | 0.38
8 | 0.23 | 0.17 | 0.18 | 0.22
9 | 0.17 | 0.17 | 0.19 | 0.25
10 | 0.21 | 0.21 | 0.20 | 0.12
11 | 0.20 | 0.21 | 0.19 | 0.13
12 | 0.17 | 0.24 | 0.26 | 0.23
Coefficient of determination (r²) | | 0.87 | 0.81 | 0.58