Article

Soft-Computing Techniques for Predicting Seismic Bearing Capacity of Strip Footings in Slopes

by
Divesh Ranjan Kumar
1,
Pijush Samui
1,
Warit Wipulanusat
2,*,
Suraparb Keawsawasvong
2,
Kongtawan Sangjinda
2 and
Wittaya Jitchaijaroen
2
1
Department of Civil Engineering, National Institute of Technology Patna, Patna 800005, India
2
Department of Civil Engineering, Faculty of Engineering, Thammasat School of Engineering, Thammasat University, Pathumthani 12120, Thailand
*
Author to whom correspondence should be addressed.
Buildings 2023, 13(6), 1371; https://doi.org/10.3390/buildings13061371
Submission received: 25 March 2023 / Revised: 18 May 2023 / Accepted: 22 May 2023 / Published: 24 May 2023
(This article belongs to the Section Building Structures)

Abstract: In this study, various machine learning algorithms, including the minimax probability machine regression (MPMR), functional network (FN), convolutional neural network (CNN), recurrent neural network (RNN), and group method of data handling (GMDH) models, are proposed for the estimation of the seismic bearing capacity factor (Nc) of strip footings on sloping ground under seismic events. To train and test the proposed machine learning models, a total of 1296 samples were numerically obtained by performing lower-bound (LB) and upper-bound (UB) finite element limit analysis (FELA) to evaluate the seismic bearing capacity factor (Nc) of strip footings. Sensitivity analysis was performed on all dimensionless input parameters (i.e., the slope inclination (β), normalized depth (D/B), normalized distance (L/B), normalized slope height (H/B), strength ratio (cu/γB), and horizontal seismic acceleration (kh)) to determine their influence on the dimensionless output parameter (i.e., the seismic bearing capacity factor (Nc)). To assess the performance of the proposed models, various performance parameters—namely the coefficient of determination (R2), variance account factor (VAF), performance index (PI), Willmott's index of agreement (WI), mean absolute error (MAE), weighted mean absolute percentage error (WMAPE), mean bias error (MBE), and root-mean-square error (RMSE)—were calculated. The predictive performance of all proposed models for bearing capacity factor (Nc) prediction was compared using the testing dataset, and it was found that the MPMR model achieved the highest R2 values of 1.000 and 0.957 and the lowest RMSE values of 0.000 and 0.038 in the training and testing phases, respectively. The parametric analyses, rank analyses, REC curves, and the AIC showed that the proposed models are quite effective and reliable for the estimation of the bearing capacity factor (Nc).

1. Introduction

One of the most common geotechnical issues with strip footings on slopes is their stability. Many scholars have investigated this problem using a variety of techniques, such as semi-empirical methods (e.g., Hansen [1]; Satvati et al. [2]; Khalvati et al. [3]); finite element methods (e.g., Georgiadis [4,5]); limit equilibrium techniques (e.g., Meyerhof [6]); and limit analysis (e.g., Davis and Booker [7]; Kusakabe et al. [8]; Shiau et al. [9]; Georgiadis [10]). However, these works did not consider the effect of seismic body forces on the overall stability of footings on slopes. The seismic bearing capacity problem is nevertheless essential for the design of footings in earthquake zones. The pseudo-static technique is a convenient way to capture this seismic effect: the horizontal and vertical seismic coefficients (kh and kv) are defined as fractions of the gravitational acceleration, and their use has become a common and extensively applied method for assessing the stability of footings in earthquake zones.
Previous studies have determined the seismic bearing capacity of footings on slopes by considering pseudo-static seismic forces. Many methods have been used to compute seismic bearing capacity solutions, such as the limit equilibrium method (e.g., Budhu and Al-Karni [11]; Kumar and Mohan Rao [12]) and the limit analysis method (e.g., Farzaneh et al. [13]; Kumar and Ghosh [14]; Yamamoto [15]; Georgiadis and Chrysouli [16]). One of the numerical techniques that can provide rigorous plastic solutions to stability problems is the finite element limit analysis (FELA) method, which was successfully employed by Kumar and Chakraborty [17] to provide the seismic bearing capacity factors for strip footings on cohesionless slopes. In addition, Chakraborty and Kumar [18] and Chakraborty and Mahesh [19] also used the FELA technique to determine the seismic bearing capacities of strip footings on sloping ground under seismic events. Recently, Luo et al. [20] and Lai et al. [21] also applied the FELA method to investigate the seismic bearing capacity of strip footings on cohesive soils.
Nevertheless, the MPMR, FN, CNN, RNN, and GMDH models have not yet been applied, based on the findings of the authors’ literature review, to estimate the bearing capacity factor (Nc) of strip footings on sloping ground under seismic events; meanwhile, these methods have been widely used to predict the nonlinear behavior of engineering problems. For instance, the MPMR model was used to estimate the rock strain, the axial capacity of bored piles, the uplift capacity of a suction caisson, etc., and the results showed that the predictive performance of the MPMR model was high [22,23,24]. The FN and GMDH methods were applied to estimate the axial capacity of bored piles and the settlement of pile groups in clay [23,25]. Many of the applications of the CNN and RNN methods, which were applied to predict the accurate and desired target values, have been identified in the literature [26,27,28,29].
Taking these considerations into account, this study implements five advanced machine learning models to evaluate the seismic bearing capacity of strip footings on sloping ground under seismic events. Strip footings in clay near a slope with inclination (β) and height (H) are taken as the problem statement, as shown in Figure 1. Machine learning models—namely the minimax probability machine regression (MPMR), functional network (FN), convolutional neural network (CNN), recurrent neural network (RNN), and group method of data handling (GMDH) models—were constructed, analyzed, and discussed using artificial datasets generated from the FELA method. The prediction performance of the proposed models, i.e., MPMR, FN, CNN, RNN, and GMDH, was thoroughly examined in terms of eight statistical parameters, score analyses, sensitivity analyses, and regression error characteristic (REC) curves to identify the best-performing models. The authors propose models that are quick and easy for geotechnical practitioners to use; they are software-based and require only a basic understanding of computer programming to determine the seismic bearing capacity of strip footings on sloping ground under seismic events. Determining this capacity remains a challenging task for geotechnical engineers, and the proposed advanced ML models therefore have many practical implications for the seismic design of soil structures in geotechnical engineering.

2. Data Collection

Figure 1 depicts the problem definition for a strip footing on a slope, where β denotes the slope inclination; H denotes the slope height; B denotes the width of the footing; D denotes the depth of the footing; and L denotes the distance from the top of the slope to the edge of the footing. The soil is cohesive with a unit weight (γ) and undrained shear strength (cu). This study employs the pseudo-static approach, which is a simplified method used in earthquake engineering to estimate the seismic forces acting on structures or underground works. In this approach, the earthquake-induced ground motion is simplified to a static force that acts on the structure. The static force is calculated by multiplying the seismic coefficient (a factor that depends on the seismic hazard and the characteristics of the soil) by the weight of the structure. The resulting force is then applied to the structure as a static load. By applying this approach, we assumed that both the footing and slope are subjected to horizontal seismic acceleration. The horizontal body force is set to khγ, where kh is the horizontal seismic coefficient. The footing is set to be subjected to qu in the vertical direction, and khqu in the horizontal direction, where qu is the ultimate bearing capacity. Note that, in this study, the vertical seismic coefficient (kv) is neglected since it has little effect on the stability of structures or underground works. In most earthquakes, the vertical ground motion is usually smaller than the horizontal motion, and its frequency content is different from that of the horizontal motion. Therefore, it is commonly assumed that the vertical seismic coefficient can be neglected without a significant loss of accuracy in estimating the seismic forces acting on the structure or underground work. More details on this problem can be found in Lai et al. [21].
Based on Lai et al. [21], the seismic bearing capacity (Nc) can be expressed as a function of six dimensionless parameters as follows:
$$N_c = \frac{q_u}{c_u} = f\left(\beta, \frac{H}{B}, \frac{L}{B}, \frac{D}{B}, \frac{c_u}{\gamma B}, k_h\right) \tag{1}$$
Note that the seismic bearing capacity factor (Nc) in this study is the ultimate vertical bearing capacity of the footing divided by the undrained shear strength of soil, which is similar to the classic Terzaghi’s bearing capacity factors. The other dimensionless parameters are D/B, the normalized depth; L/B, the normalized distance; H/B, the normalized slope height; and cu/γB, the strength ratio. The selected ranges of these six dimensionless inputs are shown in Table 1 and are according to Lai et al. [21].
According to Lai et al. [21], FELA techniques were employed to determine the lower-bound (LB) and upper-bound (UB) solutions of Nc, where OptumG2 [30], a FELA software package, was used to obtain all numerical results. Note that an automatically adaptive mesh refinement technique [31], which enhances the precision of the upper- and lower-bound solutions, was also utilized in the study by Lai et al. [21]. The mesh refinement was set to automatically increase the mesh from 5000 to 10,000 elements over 5 adaptive meshing iterations, following previous studies (e.g., Keawsawasvong and Ukritchon [32,33]; Shiau et al. [34]; Keawsawasvong et al. [35]). Note that the difference between the UB and LB solutions was within 3% for all numerical results. An example of the OptumG2 model of the footing on a slope is shown in Figure 2, where the potential slip surface can be clearly determined by using the FELA method with the automatically adaptive mesh refinement technique.

2.1. Statistical Analysis of the Dataset

Based on the above dataset, various statistical descriptors, such as the range, mean, standard deviation (STDEV), skewness, and kurtosis of the input and output parameters, are presented in Table 2. As per the statistical description presented in Table 2, the slope inclination (β) varies from 15° to 60°; the normalized slope height (H/B) ranges from 1 to 4; the normalized depth (D/B) ranges from 0 to 2; the horizontal seismic acceleration (kh) ranges from 0.1 to 0.3; the strength ratio (cu/γB) ranges from 1.5 to 5; the normalized distance (L/B) ranges from 0 to 4; and the seismic bearing capacity factor (Nc) ranges from 0 to 8.48. The skewness values for the normalized slope height (H/B), the strength ratio (cu/γB), and the normalized distance (L/B) are higher than for the other variables, so these variables deviate more from their mean values. The kurtosis value for all variables is negative, meaning that their distributions have a lower peak than a symmetric normal distribution (i.e., they are platykurtic).
Notably, certain seismic bearing capacity factors can depend on each other. Hence, the correlation heatmap is derived for each input and output variable and is shown in Figure 3. When the input variables’ correlation coefficient has a high positive or negative value, it can be difficult to ascertain the impact of these factors on the output. It can be concluded that the normalized depth (D/B), normalized distance (L/B), and strength ratio (cu/γB) have a positive correlation coefficient with the seismic bearing capacity (Nc); furthermore, the normalized slope height (H/B), slope inclination (β), and horizontal seismic acceleration (kh) have a significant negative correlation with the seismic bearing capacity (Nc), which means that each input variable has quantified association strength with the output variable.
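The statistical summary in Table 2 and the correlation heatmap in Figure 3 can be reproduced with a few lines of Python; the sketch below is illustrative only, and the file name and column labels are assumptions rather than the authors' actual script.

```python
# Minimal sketch (not the authors' original code); file name and column
# labels are assumed for illustration.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

cols = ["beta", "H/B", "D/B", "kh", "cu/gB", "L/B", "Nc"]
df = pd.read_csv("fela_dataset.csv", names=cols)  # assumed layout: 7 columns, no header row

# Descriptive statistics corresponding to Table 2
summary = df.describe().T[["min", "max", "mean", "std"]]
summary["skewness"] = df.skew()
summary["kurtosis"] = df.kurt()
print(summary)

# Correlation heatmap corresponding to Figure 3
sns.heatmap(df.corr(), annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation of inputs and Nc")
plt.show()
```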
In this study, five advanced computational models were used to evaluate the seismic bearing capacity (Nc) based on influential variables, such as slope inclination (β); normalized depth (D/B); normalized distance (L/B); normalized slope height (H/B); strength ratio (cu/γB); and the horizontal seismic acceleration (kh). To reduce the dimensional effect and to improve the accuracy of the proposed models, we first normalized the variables between 0 and 1 using the min-max approach (with Equation (2)) because the scales of the variables utilized in the model’s construction are not the same [36,37].
$$D_N = \frac{D_{act} - D_{min}}{D_{max} - D_{min}} \tag{2}$$
where $D_N$ and $D_{act}$ denote the normalized and actual values of a variable, respectively, and $D_{min}$ and $D_{max}$ denote the minimum and maximum values of that variable, respectively. In this study, the total of 1296 samples was randomly divided into two parts: a training dataset and a testing dataset. The training dataset, which includes 70% of the whole dataset (i.e., 907 samples), was used to train the models, and the testing dataset, which contains the remaining 30% (i.e., 389 samples), was used to test them. A model trained on the training data can overfit, which may lead to 'memorization' instead of 'generalization'; among the various methods available, the dropout method was used to avoid this overfitting problem.
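A minimal sketch of this preprocessing step is given below, assuming the inputs and outputs are held in NumPy arrays; the random placeholder data and the random seed are assumptions, not part of the original workflow.

```python
# Sketch of Equation (2) and the 70/30 split; placeholder data stand in for
# the 1296 FELA samples.
import numpy as np
from sklearn.model_selection import train_test_split

def min_max_normalize(X):
    """Scale each column of X to [0, 1] following Equation (2)."""
    X = np.asarray(X, dtype=float)
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    return (X - X_min) / (X_max - X_min)

X = np.random.rand(1296, 6)   # placeholder for [beta, H/B, L/B, D/B, cu/gB, kh]
y = np.random.rand(1296)      # placeholder for the corresponding Nc values

X_scaled = min_max_normalize(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.30, random_state=42)  # about 907 training and 389 testing samples
```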

2.2. Performance Evaluation Indicators

To evaluate the models' performance, the following evaluation indicators are widely used: (1) the coefficient of determination (R2); (2) the variance account factor (VAF); (3) the performance index (PI); (4) Willmott's index of agreement (WI); (5) the mean absolute error (MAE); (6) the weighted mean absolute percentage error (WMAPE); (7) the mean bias error (MBE); and (8) the root-mean-square error (RMSE). Equations (3) to (10) define these indices [38,39,40].
$$R^2 = \frac{\sum_{i=1}^{n}\left(d_i - d_{avg}\right)^2 - \sum_{i=1}^{n}\left(d_i - y_i\right)^2}{\sum_{i=1}^{n}\left(d_i - d_{avg}\right)^2} \tag{3}$$
$$VAF = \left(1 - \frac{\operatorname{var}\left(d_i - y_i\right)}{\operatorname{var}\left(d_i\right)}\right) \times 100 \tag{4}$$
$$PI = adj.R^2 + 0.01 \times VAF - RMSE \tag{5}$$
$$WI = 1 - \frac{\sum_{i=1}^{n}\left(d_i - y_i\right)^2}{\sum_{i=1}^{n}\left(\left|y_i - d_{avg}\right| + \left|d_i - d_{avg}\right|\right)^2} \tag{6}$$
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - d_i\right| \tag{7}$$
$$WMAPE = \frac{\sum_{i=1}^{n}\left|\frac{d_i - y_i}{d_i}\right| \times d_i}{\sum_{i=1}^{n} d_i} \tag{8}$$
$$MBE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - d_i\right) \tag{9}$$
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(d_i - y_i\right)^2} \tag{10}$$
where $d_i$ and $y_i$ denote the actual and predicted $i$th values of the seismic bearing capacity factor, respectively; n denotes the total number of samples used in the training or testing phase; and $d_{avg}$ denotes the average value of the actual seismic bearing capacity factor. Some of these statistical indices, including R2, VAF, PI, and WI, are recognized as accuracy parameters, whereas others, such as RMSE, WMAPE, MAE, and MBE, are classified as error parameters. A perfect model would have predicted values identical or very close to the ideal values of these indices (see Table 3).
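For illustration, the eight indicators can be computed as in the sketch below; treating the adjusted R2 in Equation (5) as equal to R2 is an assumption made here for simplicity.

```python
# Sketch of Equations (3)-(10); d = actual values, y = predicted values.
import numpy as np

def evaluation_indicators(d, y):
    d, y = np.asarray(d, float), np.asarray(y, float)
    n, d_avg = len(d), d.mean()
    ss_tot = np.sum((d - d_avg) ** 2)
    ss_res = np.sum((d - y) ** 2)
    r2 = (ss_tot - ss_res) / ss_tot                       # Equation (3)
    vaf = (1 - np.var(d - y) / np.var(d)) * 100           # Equation (4)
    rmse = np.sqrt(ss_res / n)                            # Equation (10)
    pi = r2 + 0.01 * vaf - rmse                           # Equation (5), with adj. R2 taken as R2
    wi = 1 - ss_res / np.sum((np.abs(y - d_avg) + np.abs(d - d_avg)) ** 2)  # Equation (6)
    mae = np.mean(np.abs(y - d))                          # Equation (7)
    wmape = np.sum(np.abs(d - y)) / np.sum(np.abs(d))     # Equation (8), simplified form for d > 0
    mbe = np.mean(y - d)                                  # Equation (9)
    return {"R2": r2, "VAF": vaf, "PI": pi, "WI": wi,
            "MAE": mae, "WMAPE": wmape, "MBE": mbe, "RMSE": rmse}
```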

3. Methodology of Soft-Computing Techniques

3.1. Minimax Probability Machine Regression (MPMR)

The minimax probability machine regression (MPMR) technique was suggested by Lanckriet et al. [41] as a powerful probabilistic machine learning approach. The minimax probability machine was first formulated for the linear classification problem, in which the minimum probability of correctly classifying future data is maximized; a nonlinear version of this theorem was subsequently obtained with the help of Mercer kernel functions. The MPMR algorithm transfers this idea to a nonlinear regression framework: it maximizes the minimum probability that the regression output for future data falls within the upper and lower bounds placed around the true regression function. Following the minimax probability machine classification (MPMC) technique, the following expression is used for regression:
$$\sup_{\mathbf{x} \sim (\boldsymbol{\mu}, \mathbf{z})} \Pr\left\{\mathbf{a}^{T}\mathbf{x} \leq b\right\} \tag{11}$$
where $\mathbf{x} \sim (\boldsymbol{\mu}, \mathbf{z})$ is a random vector representing the class of distributions having the mean $\boldsymbol{\mu} \in \mathbb{R}^{n}$ and covariance matrix $\mathbf{z} \in \mathbb{R}^{n \times n}$, sup denotes the supremum over this class, and $\mathbf{a}$ and $b$ are constants.
Strohmann and Grudic [42] created the MPMR formulation by using MPMC as a binary classifier that separates the data into two sets of points: one set is produced by shifting all the regression data along the output-variable axis by $+\varepsilon$, and the other is obtained by shifting all the regression data by $-\varepsilon$. The MPMR regression surface is then interpreted as the classification boundary between these two shifted data sets, which lies within $\pm\varepsilon$ of both. This paper uses the MPMR algorithm to estimate the undrained seismic bearing capacity factor. Assume a set of training data is generated by an unknown regression function $f(\mathbf{x})$, where $f: \mathbb{R}^{d} \rightarrow \mathbb{R}$. The minimum probability can then be estimated directly with the help of the following regression equation, which is based on the kernel formulation:
$$y = \sum_{i=1}^{n} \alpha_i K\left(\mathbf{x}_i, \mathbf{x}\right) + b \tag{12}$$
where $K(\mathbf{x}_i, \mathbf{x})$ represents the kernel function, y represents the output of the MPMR algorithm, and n represents the total number of training samples. $\alpha_i$ and b are the model parameters determined from the underlying MPMC solution. As the kernel function, the radial basis function (RBF) is employed, i.e., $K(\mathbf{x}_i, \mathbf{x}) = \exp\left(-(\mathbf{x}_i - \mathbf{x})(\mathbf{x}_i - \mathbf{x})^{T}/2\sigma^{2}\right)$, in which σ is the width of the RBF. In this study, the slope inclination (β), normalized depth (D/B), normalized distance (L/B), normalized slope height (H/B), strength ratio (cu/γB), and horizontal seismic acceleration (kh) are used as inputs of the MPMR model, and the seismic bearing capacity factor (Nc) is used as its output; thus, $y = N_c$ and $\mathbf{x} = (\beta, H/B, L/B, D/B, c_u/\gamma B, k_h)$. Other kernel functions could also be investigated for their potential in developing a regression model. (Please refer to Strohmann and Grudic's [42] work for comprehensive methodology details.) The MPMR model was constructed using MATLAB software.
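The prediction step of Equation (12) can be illustrated with the short sketch below. For brevity, the coefficients α and b are obtained here from a regularized kernel least-squares solve, which is a stand-in for (not a reproduction of) the MPMC-based fitting procedure of Strohmann and Grudic [42]; the regularization constant lam is an assumption.

```python
# Illustrative sketch of Equation (12): y(x) = sum_i alpha_i * K(x_i, x) + b.
# The fit below is a regularized least-squares surrogate, not the original
# MPMC-based MPMR solver.
import numpy as np

def rbf_kernel(A, B, sigma=0.3):
    """RBF kernel K(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all row pairs."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def fit_surrogate(X_train, y_train, sigma=0.3, lam=1e-3):
    K = rbf_kernel(X_train, X_train, sigma)
    G = np.hstack([K + lam * np.eye(len(K)), np.ones((len(K), 1))])
    coef, *_ = np.linalg.lstsq(G, y_train, rcond=None)   # solves K*alpha + b ~= y
    return coef[:-1], coef[-1]                           # alpha, b

def predict(X_new, X_train, alpha, b, sigma=0.3):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```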

3.2. Functional Network (FN)

Castillo et al. [43] introduced the functional network (FN) as a generalization of the standard neural network. An FN uses both data and domain knowledge to estimate the unknown neuron functions, which is considered the main advantage of FNs over the ANN algorithm. Although the initial topology of an FN may be complex, it can usually be reduced to a simpler equivalent form. FNs thereby alleviate the "black box" issue of neural networks by combining domain knowledge with data knowledge to infer the problem's topology: the data are used to estimate the unknown neuron functions, while domain knowledge is used to ascertain the network's topology. In an FN, the neuron functions may have several arguments and may be vector-valued, whereas the ANN algorithm uses sigmoidal functions. The unknown functions are learned and estimated through parametric and structural learning; in contrast, artificial neural networks have predefined neural functions. Compared to ANNs, the intermediate layers of an FN allow many neuron outputs to be coupled to the same unit. Functional networks can be trained by either structural learning or parametric learning. In structural learning, the network's initial topology is constructed using the designer's knowledge of the problem, and functional equations can then be used to further reduce the complexity of the problem. Parametric learning, on the other hand, relies on a combination of functional families to estimate each neuron function, with the parameters in question calculated from the data at hand. Three distinct kinds of components make up a functional network: data stores (input, output, and processing layers), processing units, and directed link sets. Equation (13) is used to approximate a neuron function:
$$f_i(\mathbf{X}) = \sum_{j=1}^{n} a_{ij}\,\phi_{ij}(\mathbf{X}) \tag{13}$$
where $\mathbf{X}$ represents the input vector and $\phi_{ij}$ represents the shape functions; these can be polynomial functions, such as $(1, x, x^2, x^3, \ldots, x^n)$; trigonometric functions, such as $(\sin x, \cos x, \tan x, \sin 2x)$; exponential functions, such as $(e^{x}, e^{2x}, \ldots, e^{nx})$; or any other acceptable functions. Associated optimization functions are used to obtain a system of linear or nonlinear algebraic equations. Working with a functional network necessitates prior knowledge of the functional equation; Cauchy's functional equation appears more frequently than any other type in this class of functional equations. The degree/order of the function and the type of basic function (exponential, polynomial, sine, cosine, or tangent) determine the FN's effectiveness. To construct the FN model, the tan basic function (BF) was adopted. The basic architecture of the FN, which is presented in Figure 4, was used to predict the bearing capacity factor (Nc) of the strip footing.
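The parametric-learning step of Equation (13) amounts to a linear least-squares fit once the basis functions are chosen; the sketch below illustrates this for a single neuron function, with the particular polynomial/trigonometric basis being an illustrative assumption rather than the configuration used in the paper.

```python
# Sketch of Equation (13): f(x) ~= sum_j a_j * phi_j(x), coefficients a_j
# estimated by least squares (the parametric learning step of an FN).
import numpy as np

basis = [lambda x: np.ones_like(x),   # 1
         lambda x: x,                 # x
         lambda x: x ** 2,            # x^2
         np.tan,                      # tan(x), the basic function adopted for the FN model
         np.sin]                      # an extra illustrative term

def fit_neuron_function(x, y):
    """Return the coefficients a_j of the chosen shape functions."""
    Phi = np.column_stack([phi(x) for phi in basis])
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return a

def evaluate_neuron(x, a):
    return np.column_stack([phi(x) for phi in basis]) @ a
```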

3.3. Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a deep, feed-forward NN with global sliding, local connections, and weight sharing; as such, it can solve the problem of excessive parameters and prolonged training time as the number of hidden layers increases, thus making the network highly applicable and generalizable [44]. CNN architectures consist primarily of one-dimensional CNNs (1D-CNNs) and higher-dimensional CNNs. The 1D-CNN is frequently employed for time series and natural language processing, whereas 2D-CNNs and 3D-CNNs have been used for image and video processing (as in Wang et al. [45]). It should be noted that a 2D-CNN was employed for this prediction problem. It comprises one input layer, two convolutional layers, two pooling layers, one fully connected layer, and an output layer.
Convolutional layers compute the convolution of the input matrix with a set of filters. Filters, sometimes known as kernels, are used to identify and extract features from the incoming data. Pooling layers reduce the spatial dimensions of the incoming data. A fully connected layer then follows in the CNN structure and is made up of a number of hidden layers, which further combine the features that have been extracted. Once the whole model structure is built, the main work focuses on training the model. The objective of the training is to determine the optimal values of all parameters, including the weights and biases, so as to minimize the loss function, which quantifies the degree to which the predicted value deviates from the measured value. The basic structure of the CNN model is presented in Figure 5.
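As an illustration of such a structure, the Keras sketch below builds a small convolutional regressor for the six dimensionless inputs using the hyperparameters of Table 4. Because the paper does not state how the tabular inputs were arranged for the 2D convolutions, 1D convolutions over the feature axis are used here; the filter counts and kernel sizes are likewise assumptions.

```python
# Minimal sketch of a CNN regressor for the six inputs (not the authors' exact
# architecture): two convolutional layers, two pooling layers, one fully
# connected layer, and an output layer, trained with the Table 4 settings.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(n_features=6):
    model = models.Sequential([
        layers.Conv1D(32, kernel_size=2, padding="same", activation="relu",
                      input_shape=(n_features, 1)),      # treat the 6 inputs as a short sequence
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=2, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),             # fully connected layer
        layers.Dense(1)                                  # predicted Nc
    ])
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

# model = build_cnn()
# model.fit(X_train[..., None], y_train, epochs=500, batch_size=150,
#           validation_data=(X_test[..., None], y_test))
```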

3.4. Recurrent Neural Networks (RNN)

Recurrent neural networks (RNNs) are dynamic neural networks that are used to solve time series problems [46]. RNNs are supervised machine learning models that take sequence data as their input. They are distinguished from other machine-learning (ML) model architectures by their use of recurrent connections, which means that the output of a cell depends on the output of the previous time step; in other words, the network memorizes the information contained in previous outputs. In contrast to a standard neural network, an RNN possesses a recursive loop, as is demonstrated in Figure 6.
The parameters of RNNs are trained by employing backpropagation through time (BPTT). BPTT propagates the difference between the ground truth and the output at time t backward to time t − 1; likewise, an error at time t − 1 is propagated to time t − 2, and the training thus proceeds backward through time. The fundamental equations of a simple RNN are given in Equations (14) and (15).
$$S_t = f\left(U x_t + W S_{t-1} + b_h\right) \tag{14}$$
$$y_t = f\left(V S_t + b_o\right) \tag{15}$$
where the variables $x_t$, $S_t$, and $y_t$ represent the input, the hidden state, and the output at time t, respectively. W, U, and V are shared parameters: U is the weight matrix applied to the input, W is the weight matrix applied to the previous hidden state, and V is the weight matrix of the output layer. Here, f(·) is an activation function, and $b_h$ and $b_o$ are the biases of the hidden and output layers, respectively.
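The recurrence itself is compact, as the NumPy sketch below shows; the weight shapes, the random initialization, and the choice of tanh for f with a linear output are assumptions made purely to demonstrate the forward pass of Equations (14) and (15).

```python
# Forward pass of a simple RNN cell per Equations (14) and (15); weights are
# random placeholders, not trained values.
import numpy as np

def rnn_forward(x_seq, U, W, V, b_h, b_o):
    """x_seq: (T, n_in) sequence of inputs; returns the output y_t at each step."""
    S = np.zeros(W.shape[0])                  # initial hidden state S_0
    outputs = []
    for x_t in x_seq:
        S = np.tanh(U @ x_t + W @ S + b_h)    # Equation (14), with f = tanh (assumed)
        outputs.append(V @ S + b_o)           # Equation (15), linear output for regression (assumed)
    return np.array(outputs)

rng = np.random.default_rng(0)                # example sizes: 6 inputs, 8 hidden units, 1 output
U, W, V = rng.normal(size=(8, 6)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))
y_hat = rnn_forward(rng.normal(size=(5, 6)), U, W, V, np.zeros(8), np.zeros(1))
```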
The RNN that is typically used is both straightforward and effective. In practice, however, it can be challenging to train the model for problems that involve a significant time lag between the target and the antecedent-related events [47]. In this context, a simple RNN cannot retain a good memory if the time interval is large and suffers from the vanishing gradient problem during BPTT.

3.5. Group Method of Data Handling (GMDH)

Ivakhnenko developed GMDH in 1971 [48] as an inductive learning algorithm, and it has been extensively applied in the field of civil engineering to analyze complex and nonlinear problems. The GMDH network is often referred to as a polynomial neural network due to its feed-forward neural network structure [49]. Unlike other networks, the GMDH network continuously changes its structure throughout the training process. The identification problem is essentially defined as finding a function $\hat{f}$ that can be used in place of the actual function f to predict the output $\hat{y}$ for a given input vector $X = (x_1, x_2, x_3, \ldots, x_n)$ as closely as possible to its actual output y. Consequently, for m observations of multi-input, single-output data pairs, the relationship can be written as
$$y_i = f\left(x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in}\right), \quad i = 1, 2, \ldots, m \tag{16}$$
A GMDH-type neural network can then be trained to predict the output value $\hat{y}_i$ for any given input vector $X = (x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in})$, i.e.,
$$\hat{y}_i = \hat{f}\left(x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in}\right), \quad i = 1, 2, \ldots, m \tag{17}$$
To minimize the square of the difference between the actual and predicted output, that is, to find a GMDH-type neural network, we used
$$\sum_{i=1}^{m}\left[\hat{f}\left(x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in}\right) - y_i\right]^2 \rightarrow \min \tag{18}$$
A complex discrete form of the Volterra functional series was used in the form of
$$y = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij} x_i x_j + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} a_{ijk} x_i x_j x_k + \cdots \tag{19}$$
which is known as the Kolmogorov–Gabor polynomial [50] and can be used to express general relationships between the input and output variables.
This comprehensive mathematical description can be expressed as a system of partial quadratic polynomials with just two variables (neurons), taking the form of
$$\hat{y} = G\left(x_i, x_j\right) = a_0 + a_1 x_i + a_2 x_j + a_3 x_i x_j + a_4 x_i^2 + a_5 x_j^2 \tag{20}$$
Regression techniques were used to determine the coefficients $a_i$ in Equation (20) so as to minimize the difference between the calculated output $\hat{y}$ and the actual output y for each pair of input variables $x_i$ and $x_j$. This is conducted to determine the coefficients of each quadratic function $G_i$ so that its output fits the entire set of input–output data pairs as closely as possible:
$$E = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - G_i\right)^2 \rightarrow \min \tag{21}$$
The GMDH algorithm's basic form involves selecting all possible combinations of two independent variables from the total of n input variables to build regression polynomials in the form of Equation (20) that best fit the dependent data $(y_i,\ i = 1, 2, \ldots, m)$ in the least-squares sense. As a result, $\binom{n}{2} = \frac{n(n-1)}{2}$ neurons will be built from the observations $\{(y_i, x_{ip}, x_{iq});\ i = 1, 2, \ldots, m\}$ for the different $p, q \in \{1, 2, \ldots, n\}$ in the first hidden layer of the feed-forward network. The m data triples $\{(y_i, x_{ip}, x_{iq});\ i = 1, 2, \ldots, m\}$ for a given $p, q \in \{1, 2, \ldots, n\}$ can therefore be arranged in matrix form as follows:
$$\begin{bmatrix} x_{1p} & x_{1q} & y_1 \\ x_{2p} & x_{2q} & y_2 \\ \vdots & \vdots & \vdots \\ x_{mp} & x_{mq} & y_m \end{bmatrix}$$
For each row of m data triples, the quadratic subexpression in the form of Equation (20) can be used to easily generate the following matrix equation:
$$\mathbf{A}\mathbf{a} = \mathbf{Y}$$
where a represents the quadratic polynomial’s unknown coefficient vector in Equation (20),
$$\mathbf{a} = \left\{a_0, a_1, a_2, a_3, a_4, a_5\right\} \quad \text{and} \quad \mathbf{Y} = \left\{y_1, y_2, y_3, \ldots, y_m\right\}^{T}$$
and Y represents the vector of observed output values. Matrix A then takes the following form:
$$\mathbf{A} = \begin{bmatrix} 1 & x_{1p} & x_{1q} & x_{1p}x_{1q} & x_{1p}^2 & x_{1q}^2 \\ 1 & x_{2p} & x_{2q} & x_{2p}x_{2q} & x_{2p}^2 & x_{2q}^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_{mp} & x_{mq} & x_{mp}x_{mq} & x_{mp}^2 & x_{mq}^2 \end{bmatrix}$$
Using the least-squares method with multiple regression analysis, the normal equations are solved as follows:
$$\mathbf{a} = \left(\mathbf{A}^{T}\mathbf{A}\right)^{-1}\mathbf{A}^{T}\mathbf{Y}$$
which, for the entire set of m data triples, determines the vector containing the optimal coefficients of the quadratic Equation (20). Depending on the network’s connectivity structure, this process is repeated for each neuron in the subsequent hidden layer. Such a solution, however, is rather prone to round-off errors and, more critically, to the singularity of these equations when it is obtained directly from solving normal equations.
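A single GMDH building block, i.e., the least-squares fit of Equation (20) for one input pair, can be sketched as follows; np.linalg.lstsq is used instead of explicitly forming the normal equations, which sidesteps the round-off and singularity issue noted above.

```python
# Sketch of one GMDH neuron: fit the quadratic polynomial of Equation (20)
# for an input pair (x_i, x_j) in the least-squares sense.
import numpy as np

def fit_quadratic_neuron(xi, xj, y):
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2])
    a, *_ = np.linalg.lstsq(A, y, rcond=None)   # coefficients a0..a5
    return a

def neuron_output(xi, xj, a):
    return (a[0] + a[1] * xi + a[2] * xj
            + a[3] * xi * xj + a[4] * xi ** 2 + a[5] * xj ** 2)

# A full GMDH network repeats this fit for every pair of the n inputs
# (n*(n-1)/2 neurons per layer) and passes the best-performing neurons on
# to the next layer until the external criterion stops improving.
```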

4. Results and Discussion

4.1. Tuning Hyperparameters of the Proposed Models

For the MPMR technique, the design values of the error-insensitive zone (e) and the width (s) of the radial basis function were 0.005 and 0.3, respectively. For the FN, the cos function was used with a four-degree polynomial. The GMDH model was developed with eight layers of three neurons each and a = 0.6. In constructing the CNN and RNN models, the hyperparameters were tuned to obtain the best results; Table 4 displays the optimal values of the deterministic parameters of the CNN and RNN models. The procedure for predicting the seismic bearing capacity factor (Nc) of the strip footing is presented as a flow chart in Figure 7.

4.2. Performance Evaluations of the Proposed Models

Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the scatter plots of the actual seismic bearing capacity factor (Nc) obtained from the FELA solutions against the model-predicted seismic bearing capacity factor (Nc) for both the training and testing phases for the MPMR, FN, CNN, RNN, and GMDH models, respectively; these were constructed to provide a more in-depth look at the performance. The line y = x represents the ideal model, for which the actual output value equals the model-predicted output value. Figure 8 demonstrates clearly that all of the data points cluster closely around the line y = x, suggesting that the MPMR model provides the best fit. Most data points lie between the dotted lines, which indicate a ±20% deviation of the predicted output from the regression line y = x. Both the training and testing phases show significantly less scatter for the MPMR model, followed by the CNN, RNN, FN, and GMDH models.

4.3. Performance Parameters

To assess the performance of the proposed models, certain statistical parameters, such as R2, VAF, PI, WI, MAE, WMAPE, MBE, and RMSE, were evaluated, the results of which are presented in Table 5. The results in Table 5 give quantitative information on the performance of each algorithm for both the training and testing phases, and also show the ranks of the better-performing models. The models that achieve statistical parameter values close to their respective ideal values (which are presented in Table 3) are considered the most efficient. Generally, the models that attain higher values of the accuracy parameters and lower values of the error parameters are considered the best. The proposed MPMR model attained the maximum accuracy (R2 = 1) and the least error (RMSE = 0.00), followed by the CNN (R2 = 0.9945, RMSE = 0.0140), RNN (R2 = 0.8791, RMSE = 0.0655), FN (R2 = 0.8231, RMSE = 0.0785), and GMDH (R2 = 0.7220, RMSE = 0.0985) models during the training phase. Furthermore, the CNN model attained the maximum accuracy (R2 = 0.9754) and the least error (RMSE = 0.0297), followed by the MPMR, RNN, FN, and GMDH models during the testing phase. Overall, the MPMR model outperformed the CNN, RNN, FN, and GMDH models based on the other index results (which are presented in Table 5).

4.4. Rank Analysis

A rank analysis is the most straightforward and extensively used technique for evaluating model performance and comparing robustness. The maximum score depends on the number of models considered in the analysis (i.e., five). In this analysis, scores were assigned based on the statistical parameter values for the training and testing phases separately: for each statistical parameter, the model that produces the best outcome is assigned the highest possible score (i.e., five), and the model that produces the worst outcome is assigned the lowest score. If two models produce the same statistical result, their scores can be identical. The training and testing scores are then added together using Equation (25) to obtain a total score. The model that attains the highest total score is ranked first, and the model with the lowest total score is ranked fifth. From this analysis, it can be concluded that the MPMR model attained the highest total score (72), followed by the CNN (70), RNN (44), FN (35), and GMDH (19) models (as presented in Table 6). Thus, the MPMR model gives the most accurate result, followed by the CNN, RNN, FN, and GMDH models when calculating the seismic bearing capacity factor (Nc).
$$\text{Total score} = \sum_{i=1}^{m} S_i + \sum_{j=1}^{n} S_j \tag{25}$$
where $S_i$ and $S_j$ represent the scores of the individual statistical parameters for the training and testing phases, respectively, and m and n represent the numbers of statistical parameters used for the rank analysis in each phase.
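The scoring scheme can be sketched as follows, using the ideal values of Table 3 as the reference; the handling of ties is omitted here, which is an assumption relative to the description above.

```python
# Sketch of the rank analysis: for each statistical parameter, the model whose
# value is closest to the Table 3 ideal scores 5 and the farthest scores 1;
# training and testing scores are summed as in Equation (25).
import numpy as np

models = ["MPMR", "FN", "CNN", "RNN", "GMDH"]
ideal = {"R2": 1, "VAF": 100, "PI": 2, "WI": 1, "MAE": 0, "WMAPE": 0, "MBE": 0, "RMSE": 0}

def scores_for_metric(values, metric):
    """Assign scores 1..5 (5 = closest to the ideal value) across the five models."""
    deviation = np.abs(np.asarray(values, float) - ideal[metric])
    order = np.argsort(deviation)[::-1]            # farthest from ideal first
    scores = np.empty(len(values), dtype=int)
    scores[order] = np.arange(1, len(values) + 1)  # farthest gets 1, closest gets 5
    return scores

def total_scores(train_metrics, test_metrics):
    """Each argument maps a metric name to its five values in the model order above."""
    total = np.zeros(len(models), dtype=int)
    for phase in (train_metrics, test_metrics):
        for metric, values in phase.items():
            total += scores_for_metric(values, metric)
    return dict(zip(models, total))
```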

4.5. Sensitivity Analysis

Sensitivity analysis aims to ascertain the impact on the model’s target variables (i.e., Nc) regarding the changes to the model’s input variables, such as β, D/B, L/B, H/B, cu/γB, and kh. It is a method that is used for determining the results of a choice when only some of the possible outcomes are known—in other words, an analyst can learn how a shift in a single variable affects a result if they construct a model with that collection of variables. In this analysis, the impact of the input variables on the output variable is determined using the cosine amplitude technique [51]. The data prepared to perform the study were stored in a data array form (V) as follows.
$$V = \left\{v_1, v_2, v_3, \ldots, v_n\right\} \tag{26}$$
where V represents the data array of length n, and $v_i$ represents a vector of dimension m, which is expressed as follows in Equation (27):
$$v_i = \left\{v_{i1}, v_{i2}, v_{i3}, \ldots, v_{im}\right\} \tag{27}$$
The strength of the relation, $C_{ij}$, between the datasets $v_i$ and $v_j$ was calculated using Equation (28):
$$C_{ij} = \frac{\sum_{k=1}^{m} v_{ik} v_{jk}}{\sqrt{\sum_{k=1}^{m} v_{ik}^{2} \sum_{k=1}^{m} v_{jk}^{2}}} \tag{28}$$
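Equation (28) is straightforward to evaluate for each input column against the output, as in the sketch below (applied here to the normalized data, which is an assumption about the exact preprocessing used):

```python
# Sketch of Equation (28): cosine-amplitude strength of relation between each
# input column of X and the output vector y.
import numpy as np

def cosine_amplitude(X, y):
    """X: (m, n) inputs; y: (m,) output. Returns C_ij for each of the n inputs."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    numerator = X.T @ y                                         # sum_k v_ik * v_jk
    denominator = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum())
    return numerator / denominator

# Example usage with the normalized dataset:
# strengths = cosine_amplitude(X_scaled, y)   # one value per input parameter
```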
The relative importance between the input parameters and the seismic bearing capacity (Nc) of the footing are presented with a pie chart in Figure 13.
From the obtained sensitivity analysis results, it can be concluded that the horizontal seismic acceleration (kh) has the greatest influence on the seismic bearing capacity factor (Nc), with a value of 0.89, followed by the slope inclination (β) with a value of 0.88 and the strength ratio (cu/γB) with a value of 0.86. The other parameters, the normalized slope height (H/B), normalized depth (D/B), and normalized distance (L/B), have a comparatively smaller effect on the seismic bearing capacity factor (Nc), with values of 0.85, 0.75, and 0.73, respectively. Overall, all six parameters strongly influence the seismic bearing capacity factor (Nc); hence, the effect of all input parameters was considered when predicting the output. Additionally, sensitivity analysis can serve as a guide for prioritizing which input parameters to use when developing a model.

4.6. Regression Error Characteristic (REC) Curve

A receiver operating characteristic (ROC) curve is a graphical representation of a classifier's performance in a binary classification problem. Although ROC curves only apply to classification problems, regression error characteristic (REC) curves can be used to visualize the performance of regression models. The REC curve plots the percentage of points predicted within a given tolerance against the absolute error tolerance, so the x- and y-axes represent the error tolerance and the accuracy, respectively. The curve thus obtained is an approximation of the cumulative distribution function of the error. The prediction error is estimated by the area over the REC curve (AOC); models perform better when their AOC value is lower. Therefore, REC curves offer a visual representation of model performance that is both fast and reliable.
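An REC curve and its AOC can be computed as in the sketch below; the tolerance grid and the trapezoidal integration are implementation choices made here, so the resulting AOC values need not match Table 7 exactly.

```python
# Sketch of an REC curve: accuracy(e) is the fraction of samples whose absolute
# error does not exceed the tolerance e; the AOC is the area above the curve.
import numpy as np
import matplotlib.pyplot as plt

def rec_curve(d, y, n_points=200):
    errors = np.abs(np.asarray(d, float) - np.asarray(y, float))
    tolerance = np.linspace(0.0, errors.max(), n_points)
    accuracy = np.array([(errors <= e).mean() for e in tolerance])
    aoc = np.trapz(1.0 - accuracy, tolerance)   # area over the REC curve
    return tolerance, accuracy, aoc

# tol, acc, aoc = rec_curve(y_test, y_pred)
# plt.plot(tol, acc); plt.xlabel("Absolute error tolerance"); plt.ylabel("Accuracy"); plt.show()
```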
Figure 14 and Figure 15 represent the REC curves of all proposed models for both the training and testing phases, respectively.
Table 7 presents the AOC values of all the proposed models for both the training and testing phases. From the results presented in Figure 14 and Figure 15, it can be concluded that MPMR is the most accurate model and GMDH is the least accurate model in terms of prediction accuracy. The obtained AOC values are shown in Table 7: MPMR has the smallest AOC value (0.000025), followed by the CNN (0.0084), RNN (0.0381), FN (0.0503), and GMDH (0.0658) models in the training phase, while CNN has the lowest AOC value (0.0164), followed by the MPMR (0.0211), RNN (0.0362), FN (0.0471), and GMDH (0.0644) models in the testing phase. Finally, it can be concluded that both the MPMR and CNN models give the most accurate results in the training and testing phases when compared to the RNN, FN, and GMDH models.

4.7. Akaike Information Criterion (AIC)

Akaike [52] established the Akaike information criterion (AIC) to determine whether trained models are generalizable. The Akaike information criterion (AIC) is used to evaluate the relative quality of statistical models for a given dataset. The AIC estimates the generalization potential of each model concerning other models when given a set of models for a particular dataset. As a result, the AIC offers a model selection method. The AIC value for each model is calculated using Equation (29).
$$AIC = n \times \ln\left(RMSE^{2}\right) + 2k \tag{29}$$
where n represents the number of datasets used to train the model and k represents the total number of input parameters used to train the model. For the best performing model, the AIC value should be the lowest [53,54]. In this study, as presented in Table 8, the MPMR model attained the lowest AIC value (−18,953.65 for training and −2518.42 for testing) compared to the other models. Thus, it can be concluded that the MPMR model has a greater generalization potential, followed by the CNN, RNN, FN, and GMDH models. The comparison of the AIC value for all the models is presented in Figure 16.
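Equation (29) reduces to a one-line computation, shown below with the CNN training values from Tables 4, 5 and 8 as a check (the small discrepancy is due to rounding of the reported RMSE):

```python
# Sketch of Equation (29): AIC = n * ln(RMSE^2) + 2k.
import numpy as np

def aic(rmse, n, k):
    return n * np.log(rmse ** 2) + 2 * k

# Example: the CNN training phase with RMSE = 0.0140, n = 907 samples, and k = 6 inputs
# gives aic(0.0140, 907, 6) ~= -7731, in line with the -7728.65 reported in Table 8.
```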

5. Conclusions

This paper presents five advanced machine learning algorithms, i.e., the MPMR, FN, CNN, RNN, and GMDH models, for the estimation of the seismic bearing capacity factor (Nc) of strip footings resting on undrained cohesive slopes. The models were constructed by first training and then testing all of them on a set of 1296 FELA solutions. The dataset of 1296 FELA solutions with six dimensionless variables, i.e., β, H/B, L/B, D/B, cu/γB, and kh, was taken as the input, and the seismic bearing capacity factor (Nc) was considered as the output during the construction of the models. A sensitivity analysis was performed to determine the influence of the dimensionless input parameters on the seismic bearing capacity factor. The proposed models' efficiency and performance were then analyzed using statistical indicators such as R2, VAF, PI, WI, MAE, WMAPE, MBE, and RMSE. Subsequently, rank analyses, REC curves, and the AIC were used to compare the overall performance of the various proposed models. Based on the obtained results, it is evident that the proposed MPMR model attained the highest prediction accuracy in predicting the Nc of strip footings. Through this study, the following conclusions can be drawn: (1) the proposed advanced machine learning models are valuable tools for estimating—with less computational effort and greater precision—the seismic bearing capacity factor of strip footings; (2) the MPMR model has a particularly high potential for predicting the desired Nc value of strip footings, as was evident from the values obtained from the performance parameters, rank analyses, REC curves, and the AIC; (3) the proposed models can be easily implemented for practical application, as well as for numerous applications in seismic research; and (4) the models have a particularly low computational cost of approximately 30 s. Overall, the MPMR model was found to be the best-performing model, followed by the CNN, RNN, FN, and GMDH models. The comparative results indicate that all the models proposed in this study have good accuracy and ability to estimate the seismic bearing capacity factor of strip footings. However, they have certain limitations that need to be explored in the future, which are identified as follows: (i) the proposed models should be trained on larger datasets to predict the desired target value of the seismic bearing capacity factor (Nc) more accurately; (ii) the models created for estimating the seismic bearing capacity factor (Nc) of strip footings are valid only for the defined ranges of the dimensionless input parameters; (iii) the performance of the proposed models should be compared with several standard machine learning models and with metaheuristic optimization algorithms on the testing dataset; and (iv) the proposed models do not apply to soils with multiple layers. Layered effects can be studied in greater depth with additional research.

Author Contributions

Conceptualization, P.S., W.W. and S.K.; Methodology, S.K.; Software, D.R.K. and K.S.; Formal analysis, D.R.K. and S.K.; Resources, W.W.; Data curation, K.S. and W.J.; Writing—original draft, D.R.K. and W.J.; Writing—review & editing, P.S., W.W. and S.K.; Supervision, P.S. and S.K.; Funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Thammasat University Research Unit in Data Science and Digital Transformation.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding authors on reasonable request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Hansen, B.J. A General Formula for Bearing Capacity; Bulletin No. 11; Danish Geotechnical Institute: Lyngby, Denmark, 1961. [Google Scholar]
  2. Satvati, S.; Alimohammadi, H.; Rowshanzamir, M.; Hejazi, S.M. Bearing Capacity of Shallow Footings Reinforced with Braid and Geogrid Adjacent to Soil Slope. Int. J. Geosynth. Ground Eng. 2020, 6, 41. [Google Scholar] [CrossRef]
  3. Khalvati Fahliani, H.; Arvin, M.R.; Hataf, N.; Khademhosseini, A. Experimental Model Studies on Strip Footings Resting on Geocell-Reinforced Sand Slopes. Int. J. Geosynth. Ground Eng. 2021, 7, 24. [Google Scholar] [CrossRef]
  4. Georgiadis, K. The Influence of Load Inclination on the Undrained Bearing Capacity of Strip Footings on Slopes. Comput. Geotech. 2010, 37, 311–322. [Google Scholar] [CrossRef]
  5. Georgiadis, K. Undrained Bearing Capacity of Strip Footings on Slopes. J. Geotech. Geoenviron. Eng. 2010, 136, 677–685. [Google Scholar] [CrossRef]
  6. Meyerhof, G.G. The Ultimate Bearing Capacity of Foundations on Slopes. In Proceedings of the 4th International Conference on Soil Mechanics and Foundation Engineering, London, UK, 12–24 August 1957; Volume 1, pp. 384–386. [Google Scholar]
  7. Davis, E.H.; Booker, J.R. Some Adaptations of Classical Plasticity Theory for Soil Stability Problems. In Proceedings of the Symposium on the Role of Plasticity in Soil Mechanics, Cambridge, UK, 13–15 September 1973; p. 24. [Google Scholar]
  8. Kusakabe, O.; Kimura, T.; Yamaguchi, H. Bearing Capacity of Slopes under Strip Loads on the Top Surfaces. Soils Found. 1981, 21, 29–40. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Shiau, J.S.; Merifield, R.S.; Lyamin, A.V.; Sloan, S.W. Undrained Stability of Footings on Slopes. Int. J. Geomech. 2011, 11, 381–390. [Google Scholar] [CrossRef] [Green Version]
  10. Georgiadis, K. An Upper-Bound Solution for the Undrained Bearing Capacity of Strip Footings at the Top of a Slope. Geotechnique 2010, 60, 801–806. [Google Scholar] [CrossRef]
  11. Budhu, M.; Al-Karni, A. Seismic Bearing Capacity of Soils. Geotechnique 1994, 44, 185–187. [Google Scholar] [CrossRef]
  12. Kumar, J.; Mohan Rao, V.B.K. Seismic Bearing Capacity of Foundations on Slopes. Geotechnique 2003, 53, 347–361. [Google Scholar] [CrossRef]
  13. Farzaneh, O.; Mofidi, J.; Askari, F. Seismic Bearing Capacity of Strip Footings near Cohesive Slopes Using Lower Bound Limit Analysis. In Proceedings of the 18th International Conference on Soil Mechanics and Geotechnical Engineering: Challenges and Innovations in Geotechnics, ICSMGE 2013, Paris, France, 2–6 September 2013; Volume 2, pp. 1467–1470. [Google Scholar]
  14. Kumar, J.; Ghosh, P. Seismic Bearing Capacity for Embedded Footings on Sloping Ground. Geotechnique 2006, 56, 133–140. [Google Scholar] [CrossRef]
  15. Yamamoto, K. Seismic Bearing Capacity of Shallow Foundations near Slopes Using the Upper-Bound Method. Int. J. Geotech. Eng. 2010, 4, 255–267. [Google Scholar] [CrossRef]
  16. Georgiadis, K.; Chrysouli, E. Seismic Bearing Capacity of Strip Footings on Clay Slopes. In Proceedings of the 15th European Conference on Soil Mechanics and Geotechnical Engineering; IOS Press: Amsterdam, The Netherlands, 2011; pp. 723–728. [Google Scholar]
  17. Kumar, J.; Chakraborty, D. Seismic Bearing Capacity of Foundations on Cohesionless Slopes. J. Geotech. Geoenviron. Eng. 2013, 139, 1986–1993. [Google Scholar] [CrossRef]
  18. Chakraborty, D.; Kumar, J. Seismic Bearing Capacity of Shallow Embedded Foundations on a Sloping Ground Surface. Int. J. Geomech. 2015, 15, 4014035. [Google Scholar] [CrossRef]
  19. Chakraborty, D.; Mahesh, Y. Seismic Bearing Capacity Factors for Strip Footings on an Embankment by Using Lower-Bound Limit Analysis. Int. J. Geomech. 2016, 16, 6015008. [Google Scholar] [CrossRef]
  20. Luo, W.; Zhao, M.; Xiao, Y.; Zhang, R.; Peng, W. Seismic Bearing Capacity of Strip Footings on Cohesive Soil Slopes by Using Adaptive Finite Element Limit Analysis. Adv. Civ. Eng. 2019, 2019, 4548202. [Google Scholar] [CrossRef] [Green Version]
  21. Lai, V.Q.; Lai, F.; Yang, D.; Shiau, J.; Yodsomjai, W.; Keawsawasvong, S. Determining Seismic Bearing Capacity of Footings Embedded in Cohesive Soil Slopes Using Multivariate Adaptive Regression Splines. Int. J. Geosynth. Ground Eng. 2022, 8, 46. [Google Scholar] [CrossRef]
  22. Thangavel, P.; Samui, P. Determination of the Size of Rock Fragments Using RVM, GPR, and MPMR. Soils Rocks 2022, 45, e2022008122. [Google Scholar] [CrossRef]
  23. Mohanty, R.; Suman, S.; Das, S.K. Modeling the Axial Capacity of Bored Piles Using Multi-Objective Feature Selection, Functional Network and Multivariate Adaptive Regression Spline, 1st ed.; Elsevier Inc.: Amsterdam, The Netherlands, 2017; ISBN 9780128113196. [Google Scholar]
  24. Das, S.K.; Suman, S. Prediction of Lateral Load Capacity of Pile in Clay Using Multivariate Adaptive Regression Spline and Functional Network. Arab. J. Sci. Eng. 2015, 40, 1565–1578. [Google Scholar] [CrossRef]
  25. Kumar, M.; Samui, P. Reliability Analysis of Pile Foundation Using GMDH, GP and MARS. In Proceedings of the CIGOS 2021, Emerging Technologies and Applications for Green Infrastructure. Proceedings of the 6th International Conference on Geotechnics, Civil Engineering and Structures, Ha Long, Vietnam, 28–29 October 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1151–1159. [Google Scholar]
  26. Dey, P.; Chaulya, S.K.; Kumar, S. Hybrid CNN-LSTM and IoT-Based Coal Mine Hazards Monitoring and Prediction System. Process Saf. Environ. Prot. 2021, 152, 249–263. [Google Scholar] [CrossRef]
  27. Tiwari, S.K.; Kumaraswamidhas, L.A.; Gautam, C.; Garg, N. An Auto-Encoder Based LSTM Model for Prediction of Ambient Noise Levels. Appl. Acoust. 2022, 195, 108849. [Google Scholar] [CrossRef]
  28. Tiwari, S.K.; Kumaraswamidhas, L.A.; Prince; Kamal, M.; Ur Rehman, M. A Hybrid Deep Leaning Model for Prediction and Parametric Sensitivity Analysis of Noise Annoyance. Environ. Sci. Pollut. Res. 2023, 30, 49666–49684. [Google Scholar] [CrossRef]
  29. Chen, L.; Chen, W.; Wang, L.; Zhai, C.; Hu, X.; Sun, L.; Tian, Y.; Huang, X.; Jiang, L. Convolutional Neural Networks (CNNs)-Based Multi-Category Damage Detection and Recognition of High-Speed Rail (HSR) Reinforced Concrete (RC) Bridges Using Test Images. Eng. Struct. 2023, 276, 115306. [Google Scholar] [CrossRef]
  30. Optum Computational Engineering: Copenhagen, Denmark. Available online: https://optumce.com/ (accessed on 21 May 2023).
  31. Ciria, H.; Peraire, J.; Bonet, J. Mesh Adaptive Computation of Upper and Lower Bounds in Limit Analysis. Int. J. Numer. Methods Eng. 2008, 75, 899–944. [Google Scholar] [CrossRef]
  32. Keawsawasvong, S.; Ukritchon, B. Undrained Stability of a Spherical Cavity in Cohesive Soils Using Finite Element Limit Analysis. J. Rock Mech. Geotech. Eng. 2019, 11, 1274–1285. [Google Scholar] [CrossRef]
  33. Keawsawasvong, S.; Ukritchon, B. Undrained lateral capacity of I-shaped concrete piles. Songklanakarin J. Sci. Technol. 2017, 39, 751–758. [Google Scholar]
  34. Shiau, J.; Chudal, B.; Mahalingasivam, K.; Keawsawasvong, S. Pipeline Burst-Related Ground Stability in Blowout Condition. Transp. Geotech. 2021, 29, 100587. [Google Scholar] [CrossRef]
  35. Keawsawasvong, S.; Thongchom, C.; Likitlersuang, S. Bearing Capacity of Strip Footing on Hoek-Brown Rock Mass Subjected to Eccentric and Inclined Loading. Transp. Infrastruct. Geotechnol. 2021, 8, 189–202. [Google Scholar] [CrossRef]
  36. Kumar, M.; Biswas, R.; Kumar, D.R.; Pradeep, T.; Samui, P. Metaheuristic Models for the Prediction of Bearing Capacity of Pile Foundation. Geomech. Eng. 2022, 31, 129–147. [Google Scholar] [CrossRef]
  37. Kumar, D.R.; Samui, P.; Burman, A. Prediction of Probability of Liquefaction Using Soft Computing Techniques. J. Inst. Eng. Ser. A 2022, 103, 1195–1208. [Google Scholar] [CrossRef]
  38. Naser, M.Z.; Alavi, A.H. Error Metrics and Performance Fitness Indicators for Artificial Intelligence and Machine Learning in Engineering and Sciences. Archit. Struct. Constr. 2021, 1–19. [Google Scholar] [CrossRef]
  39. Chai, T.; Draxler, R.R. Root Mean Square Error (RMSE) or Mean Absolute Error (MAE)?—Arguments against Avoiding RMSE in the Literature. Geosci. Model Dev. 2014, 7, 1247–1250. [Google Scholar] [CrossRef] [Green Version]
  40. Kumar, D.R.; Samui, P.; Burman, A. Prediction of Probability of Liquefaction Using Hybrid ANN with Optimization Techniques. Arab. J. Geosci. 2022, 15, 1587. [Google Scholar] [CrossRef]
  41. Lanckriet, G.R.G.; El Ghaoui, L.; Bhattacharyya, C.; Jordan, M.I. A Robust Minimax Approach to Classification. J. Mach. Learn. Res. 2002, 3, 555–582. [Google Scholar]
  42. Strohmann, T.; Grudic, G. A Formulation for Minimax Probability Machine Regression. Adv. Neural Inf. Process. Syst. 2002, 15, 785–792. [Google Scholar]
  43. Castillo, E.; Cobo, A.; Gómez-Nesterkin, R.; Hadi, A.S. A General Framework for Functional Networks. Netw. Int. J. 2000, 35, 70–82. [Google Scholar] [CrossRef]
  44. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  45. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph Cnn for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef] [Green Version]
  46. Elman, J.L. Finding Structure in Time. Cogn. Sci. 1990, 14, 179–211. [Google Scholar] [CrossRef]
  47. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [Google Scholar] [CrossRef]
  48. Ivakhnenko, A.G.; Ivakhnenko, G.A. The Review of Problems Solvable by Algorithms of the Group Method of Data Handling (GMDH). Pattern Recognit. Image Anal. 1995, 5, 527–535. [Google Scholar]
  49. Farlow, S.J. Self-Organizing Method in Modeling: GMDH; Type Algorithm; CRC Press: Boca Raton, FL, USA, 1984. [Google Scholar]
  50. Mueller, J.-A.; Lemke, F. Self-Organising Data Mining; Extracting Knowledge from Data; Libri GmbH: Hamburg, Germany, 2000. [Google Scholar]
  51. Biswas, R.; Bardhan, A.; Samui, P.; Rai, B.; Nayak, S.; Armaghani, D.J. Efficient Soft Computing Techniques for the Prediction of Compressive Strength of Geopolymer Concrete. Comput. Concr. 2021, 28, 221–232. [Google Scholar] [CrossRef]
  52. Akaike, H. A New Look at the Statistical Model Identification. IEEE Trans. Automat. Contr. 1974, 19, 716–723. [Google Scholar] [CrossRef]
  53. Pradeep, T.; Samui, P. Prediction of Rock Strain Using Hybrid Approach of Ann and Optimization Algorithms. Geotech. Geol. Eng. 2022, 40, 4617–4643. [Google Scholar] [CrossRef]
  54. Guven, A.; Kişi, Ö. Estimation of Suspended Sediment Yield in Natural Rivers Using Machine-Coded Linear Genetic Programming. Water Resour. Manag. 2011, 25, 691–704. [Google Scholar] [CrossRef]
Figure 1. The problem definition of a strip footing on a slope.
Figure 2. Numerical model, boundary condition, and failure mechanism.
Figure 3. Correlation heatmap matrix.
Figure 4. Basic Architecture of Functional Network.
Figure 5. Basic structure of a CNN model.
Figure 6. Basic Architecture of a Simple RNN model.
Figure 7. The flowchart of the methodology in predicting the seismic bearing capacity factor (Nc) of the strip footings.
Figure 8. Scatter plot of the actual and predicted seismic bearing capacities (Nc) for the MPMR model.
Figure 9. Scatter plot of the actual and predicted seismic bearing capacities (Nc) for the FN model.
Figure 10. Scatter plot of the actual and predicted seismic bearing capacities (Nc) for the CNN model.
Figure 11. Scatter plot of the actual and predicted seismic bearing capacities (Nc) for the RNN model.
Figure 12. Scatter plot of the actual and predicted seismic bearing capacities (Nc) for the GMDH model.
Figure 13. Illustration of the sensitivity analysis result.
Figure 14. Illustration of the REC curve for the training phase.
Figure 15. Illustration of the REC curve for the testing phase.
Figure 16. A comparison of the AIC value for all models.
Table 1. List of parametric values used in this study.
Input Parameters | Selected Values
β | 15°, 30°, 45°, 60°
H/B | 1, 2, 4
L/B | 0, 1, 2, 4
D/B | 0, 1, 2
cu/γB | 1.5, 2.5, 5
kh | 0.1, 0.2, 0.3
Table 2. Statistical description of the input and output data.
Statistics | β | H/B | D/B | kh | cu/γB | L/B | Nc
Max. | 60 | 4 | 2 | 0.3 | 5 | 4 | 8.48
Min. | 15 | 1 | 0 | 0.1 | 1.5 | 0 | 0
St. dev. | 16.78 | 1.25 | 0.82 | 0.08 | 1.47 | 1.48 | 1.58
Mean | 37.5 | 2.3 | 1 | 0.2 | 3 | 1.75 | 5.2
Skewness | 0.000 | 0.382 | 0.000 | 0.000 | 0.471 | 0.435 | −0.256
Kurtosis | −1.361 | −1.501 | −1.501 | −1.501 | −1.501 | −1.154 | −0.769
Table 3. Ideal values of the statistical parameters.
Statistical Parameters | R2 | WMAPE | RMSE | VAF | PI | WI | MAE | MBE
Ideal Values | 1 | 0 | 0 | 100 | 2 | 1 | 0 | 0
Table 4. Details of the hyperparametric configurations for the CNN and RNN models.
Hyperparameters | CNN | RNN
Number of hidden layers | 3 | 3
Batch size | 150 | 150
Activation function | ReLU | ReLU
Dense layer | 64 | 64
Number of epochs | 500 | 500
Loss function | mean_squared_error | mean_squared_error
Optimizer | adam | adam
Table 5. The statistical parameter values.
Model | Phase | R2 | VAF | PI | WI | MAE | WMAPE | MBE | RMSE
MPMR | Train | 1 | 100 | 2 | 1 | 0 | 0 | 0 | 0
MPMR | Test | 0.9577 | 95.6775 | 1.8751 | 0.9884 | 0.0214 | 0.0347 | 0.0036 | 0.0387
FN | Train | 0.8231 | 82.3142 | 1.5666 | 0.9496 | 0.0508 | 0.0841 | 0.0000 | 0.0785
FN | Test | 0.8605 | 86.0316 | 1.6493 | 0.9606 | 0.0480 | 0.0776 | 0.0044 | 0.0694
CNN | Train | 0.9945 | 99.4461 | 1.9749 | 0.9986 | 0.0085 | 0.0141 | 0.0018 | 0.0140
CNN | Test | 0.9754 | 97.4407 | 1.9197 | 0.9937 | 0.0167 | 0.0270 | 0.0022 | 0.0297
RNN | Train | 0.8791 | 87.8739 | 1.6916 | 0.9658 | 0.0385 | 0.0638 | 0.0076 | 0.0655
RNN | Test | 0.9143 | 91.3719 | 1.7705 | 0.9749 | 0.0371 | 0.0600 | 0.0142 | 0.0562
GMDH | Train | 0.7220 | 72.1942 | 1.3436 | 0.9153 | 0.0663 | 0.1098 | 0.0009 | 0.0985
GMDH | Test | 0.7444 | 74.4356 | 1.3909 | 0.9228 | 0.0654 | 0.1057 | 0.0054 | 0.0938
Table 6. Rank analysis of all proposed models based on statistical parameters.
Parameters | MPMR (TR/TS) | FN (TR/TS) | CNN (TR/TS) | RNN (TR/TS) | GMDH (TR/TS)
R2 score | 5/4 | 2/2 | 4/5 | 3/3 | 1/1
RMSE score | 5/4 | 2/2 | 4/5 | 3/3 | 1/1
PI score | 5/4 | 2/2 | 4/5 | 3/3 | 1/1
WI score | 5/4 | 2/2 | 4/5 | 3/3 | 1/1
MAE score | 5/4 | 2/2 | 4/5 | 3/3 | 1/1
WMAPE score | 5/4 | 2/2 | 4/5 | 3/3 | 1/1
MBE score | 5/4 | 4/3 | 2/5 | 1/1 | 3/2
VAF score | 5/4 | 2/2 | 4/5 | 3/3 | 1/1
Sub total | 40/32 | 18/17 | 30/40 | 22/22 | 10/9
Total score | 72 | 35 | 70 | 44 | 19
Rank | 1 | 4 | 2 | 3 | 5
Table 7. The AOC value for all proposed methods.
Phase | MPMR | FN | CNN | RNN | GMDH | Ideal Value
Training | 2.51 × 10−5 | 0.0503 | 0.0084 | 0.0381 | 0.0658 | 0
Testing | 0.0211 | 0.0471 | 0.0164 | 0.0362 | 0.0644 | 0
Table 8. The AIC value for all models.
Model | MPMR | FN | CNN | RNN | GMDH | Ideal Value
Training | −18,953.65 | −4603.04 | −7728.65 | −4933.07 | −4192.56 | Lowest value
Testing | −2518.42 | −2063.89 | −2723.42 | −2227.19 | −1829.08 | Lowest value
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
