Article

Seismic Response Prediction of Rigid Rocking Structures Using Explainable LightGBM Models

by Ioannis Karampinis 1, Kosmas E. Bantilas 2, Ioannis E. Kavvadias 2, Lazaros Iliadis 1,* and Anaxagoras Elenas 2
1 Lab of Mathematics and Informatics (ISCE), Department of Civil Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
2 Institute of Structural Statics and Dynamics, Department of Civil Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(14), 2280; https://doi.org/10.3390/math12142280
Submission received: 2 July 2024 / Revised: 18 July 2024 / Accepted: 18 July 2024 / Published: 21 July 2024

Abstract

This study emphasizes the explainability of machine learning (ML) models in predicting the seismic response of rigid rocking structures, specifically using the LightGBM algorithm. By employing SHapley Additive exPlanations (SHAP), partial dependence plots (PDP), and accumulated local effects (ALE), a comprehensive feature importance analysis has been performed. This revealed that ground motion parameters, particularly peak ground acceleration (PGA), are critical for predicting small rotations, while structural parameters like slenderness and frequency are more significant for larger rotations. Utilizing an extensive dataset generated from nonlinear time history analyses, the trained LightGBM model demonstrated high accuracy in estimating the maximum rotation angle of rigid blocks under natural ground motions. The study also examined the sensitivity of model performance to lower bound thresholds of the target variable, revealing that reduced feature sets can maintain predictive performance effectively. These findings advance ML-based modeling of seismic rocking responses, providing interpretable and accurate models that enhance our understanding of rocking structures’ dynamic behavior, which is crucial for designing resilient structures and improving seismic risk assessments. Future research will focus on incorporating additional parameters and exploring advanced ML techniques to further refine these models.

1. Introduction

Field studies conducted after the 1960 Chile earthquakes led to widespread agreement that the rocking response of structures during strong earthquake excitations improves their seismic performance [1]. Housner’s pioneering work [1] pointed out the fundamental aspects of rocking block dynamics, emphasizing that the uplifting of freestanding structures during earthquakes is contingent upon a minimum ground acceleration threshold determined by the system’s geometry, while scale-size effects govern stability during the rocking response. Unlike structural systems with fixed bases, which primarily rely on dissipating energy through damage at specific locations (plastic hinges), the rocking response of freestanding structures leads to reduced structural damage owing to negative stiffness after uplift [2]. Aside from large-scale modern or monumental structures [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21], non-structural building components, such as furniture and electrical or mechanical equipment, can also undergo uplifting and rocking during earthquakes [22,23,24,25,26].
To fully harness the benefits of the rocking response, ensuring the system’s stability and preventing overturning during its dynamic response is crucial. However, attaining this goal is challenging due to the inherent non-linearity of rocking motion stemming from the negative stiffness characteristic of the rocking oscillator [2]. Numerous analytical and experimental studies have demonstrated that slight adjustments in the parameters of the rocking oscillator or the attributes of the ground motion record lead to vastly different responses [27,28,29,30,31,32,33,34]. As a result, it is essential to assess the seismic response of rocking systems within a probabilistic framework, which includes conducting numerous time history analyses and selecting appropriate seismic intensity measures (IMs) to derive probabilistic seismic demand models.
Over the past two decades, substantial research efforts have been dedicated to soft computing methods to evaluate the seismic response of structures, addressing the time-consuming and resource-intensive nature of nonlinear time history analysis. To this end, most machine learning (ML) applications in this domain primarily concentrate on forecasting seismic demands or earthquake-induced damage for different types of structures [35,36,37,38,39,40,41]. Furthermore, the effect of ground motion parameters on structures’ complex, nonlinear, dynamic response has also been investigated, harnessing the recently developed ML interpretability techniques [42,43,44,45,46].
ML-based methodologies have supported advancements in many areas ranging from real-world applications to scientific investigations [47]. However, despite the increasing attention given to rocking structures over the past two decades, there is a notable scarcity of ML implementations within this class of structural systems. Regarding the stability of freestanding blocks, Achmet et al. [48] and Gerolymos et al. [49] trained ML algorithms to classify the rocking response under pulse excitations and natural ground motions, i.e., whether or not a rocking system overturns. Pan et al. [50] and Shen and Málaga-Chuquitaype [51] utilized machine learning algorithms to simulate the time history response of rigid blocks undergoing rocking motion. Moreover, Karampinis et al. [52] integrated regression and classification ML algorithms to forecast both the maximum rocking rotations in the case of safe rocking and the structural collapse of rigid blocks, respectively. Recently, Banimahd et al. [53] explored the significant predictors of the seismic response of freestanding rigid blocks using artificial neural networks (ANN).
Given the sensitivity of rocking response to slight changes in both rocking block and ground motion characteristics, the present study aims to develop interpretable ML models to investigate the effect of each input parameter on the outcome. To this end, nonlinear time history analysis was performed on 1100 rigid blocks subjected to 15,000 ground motions, generating a training dataset that enables the models to capture and learn intricate patterns. Emphasis is given to explainability; thus, following the training of machine learning (ML) models, a variety of ML explainability methodologies were implemented using different combinations of thresholds for the input parameters. These combinations were designed to cover a wide range of the rocking block’s response. To this end, SHapley Additive exPlanations (SHAP) plots were developed to assess the individual contributions of features to model predictions.
Additionally, accumulated local effects (ALE) plots are provided to examine the accumulated effect of each feature, across its range, on the predictions, while the relationship between a feature and the predicted outcome is also discussed using partial dependence plots (PDP). By integrating the abovementioned techniques, the present study goes beyond the current state of the art in ML-based modeling of rocking systems, offering transparent and understandable models that are crucial for interpreting complex rocking phenomena. Importantly, the aforementioned threshold combinations, in conjunction with the explainability techniques, allow a detailed examination of the rocking response, focusing individually on small, moderate, and large rotation angles, as well as on small, medium, and large oscillators, and an investigation of the effect of each parameter separately for each category.
The structure of the paper is as follows: Section 2 provides an overview of the fundamental principles underlying the rocking block problem, including the features that make up the dataset. Section 3 introduces the machine learning algorithms used for classification and regression purposes. Lastly, Section 4 delves into the primary outcomes of the methodology, employing established metrics such as SHAP, ALE, and PDP for result interpretation.

2. Dataset

The seismic response of a large number of rigid blocks was numerically evaluated to construct the database. To this end, Housner’s model [1] was adopted, where the rocking oscillator is characterized by a rectangular block of height 2H, width 2B, semi-diagonal length R, slenderness α, mass m_c, and moment of inertia about the center of mass I_cm = (1/3) m_c R². The model is illustrated in Figure 1. Until uplift occurs, the rocking oscillator is considered inactive. After uplift, the rocking response is governed by the following second-order, nonlinear differential equation [1]:
$$\ddot{\theta} = -p^{2}\left[\sin(\pm\alpha - \theta) + \frac{a_g}{g}\cos(\pm\alpha - \theta)\right], \tag{1}$$
where θ denotes the rocking rotation, the upper (+) and lower (−) signs specify the rocking response around the right (θ > 0) and left (θ < 0) pivot points, a_g represents the ground acceleration, g is the acceleration of gravity, α is the slenderness, and p is the frequency parameter of the block, which is defined by Equation (2):
$$p = \sqrt{\frac{3g}{4R}}, \tag{2}$$
where R denotes the semi-diagonal length of the rigid block. The following equation gives the minimum ground acceleration threshold that triggers uplift:
$$|a_g| \geq g \tan(\alpha). \tag{3}$$
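To make Equations (1)–(3) concrete, the following minimal Python sketch integrates the rocking equation for a single block. It is not the paper's MATLAB implementation: the one-cycle sine pulse used as ground acceleration, the block parameters, the use of Housner's classical restitution coefficient at impacts, and the replacement of the uplift-initiation phase by a small initial tilt are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of Housner's rocking equation, Equation (1),
# integrated with SciPy's explicit Runge-Kutta solver. Assumptions: a hypothetical
# one-cycle sine pulse as ground acceleration, Housner's classical restitution
# coefficient at impacts, and a small initial tilt instead of an explicit
# uplift-initiation phase (Equation (3)).
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81            # gravitational acceleration (m/s^2)
p = 2.0             # frequency parameter (1/s), illustrative value
alpha = 0.15        # slenderness angle (rad), illustrative value
r = (1.0 - 1.5 * np.sin(alpha) ** 2) ** 2   # restitution coefficient (assumption)

def a_g(t):
    """Hypothetical ground acceleration: one-cycle sine pulse of 0.4 g."""
    return 0.4 * g * np.sin(2.0 * np.pi * t) * (t < 1.0)

def rhs(t, y):
    theta, omega = y
    s = np.sign(theta) if theta != 0.0 else 1.0   # selects the active pivot (+/- in Eq. (1))
    theta_dd = -p ** 2 * (np.sin(s * alpha - theta)
                          + a_g(t) / g * np.cos(s * alpha - theta))
    return [omega, theta_dd]

def impact(t, y):                 # event: block passes through the vertical position
    return y[0]
impact.terminal = True

def max_normalised_rotation(t_end=10.0):
    """Piecewise integration between impacts; returns max |theta| / alpha."""
    t0, y0, theta_max = 0.0, [1e-6, 0.0], 0.0
    while t0 < t_end:
        sol = solve_ivp(rhs, (t0, t_end), y0, events=impact,
                        max_step=5e-3, rtol=1e-8, atol=1e-10)
        theta_max = max(theta_max, float(np.max(np.abs(sol.y[0]))))
        if sol.status != 1:       # reached t_end without a further impact
            break
        v = r * sol.y[1, -1]      # reduce angular velocity at impact
        if abs(v) < 1e-8:         # motion has effectively died out
            break
        t0 = sol.t_events[0][0]
        y0 = [np.sign(v) * 1e-10, v]   # restart just past the vertical position
    return theta_max / alpha

print(f"max |theta|/alpha = {max_normalised_rotation():.4f}")
```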
To develop interpretable ML models for the prediction of the maximum rocking response of freestanding rigid blocks, a total of 1100 rocking blocks were examined. Equation (1) highlights the significance of the frequency parameter (p) and the slenderness parameter (α) in influencing the rocking response. Consequently, these two parameters serve as input features characterizing the analyzed blocks. The examined rocking blocks were generated by combining 100 evenly spaced values of the frequency parameter (p), ranging from 0.7 s⁻¹ to 7 s⁻¹, and 11 evenly spaced values of the slenderness parameter (tan(α)), spanning from 0.05 to 0.30. The selected range for the frequency parameter corresponds to blocks with semi-diagonals ranging from 0.15 m to 15 m. Hence, the dataset encompasses the dynamic response of structures varying from small-sized building contents to rocking bridge piers.
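As an illustration of the parameter grid described above (the variable names and the printout are ours, not the paper's), the 1100 blocks can be generated as follows; the implied semi-diagonals follow from Equation (2).

```python
# Illustrative sketch of the 100 x 11 grid of rocking blocks described above.
import numpy as np

g = 9.81
p_values = np.linspace(0.7, 7.0, 100)            # frequency parameter (1/s)
tan_alpha_values = np.linspace(0.05, 0.30, 11)   # slenderness tan(alpha)

P, TA = np.meshgrid(p_values, tan_alpha_values, indexing="ij")
blocks = np.column_stack([P.ravel(), TA.ravel()])    # shape: (1100, 2)

# Semi-diagonal implied by Equation (2): R = 3g / (4 p^2)
R = 3.0 * g / (4.0 * blocks[:, 0] ** 2)
print(blocks.shape, R.min(), R.max())   # (1100, 2), roughly 0.15 m to 15 m
```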
The analyzed blocks undergo testing with 5000 natural ground motion acceleration records to construct the dataset. Given that slender rocking blocks are prone to overturning, ground motion selection is based on a minimum peak ground acceleration (PGA) value of 0.05 g to induce uplift without overturning. Additionally, each record was scaled using three scale factors (0.5, 1, and 2) to capture the full range of potential responses of the blocks under examination. The ground motion records encompass a wide variety of seismic data, spanning a broad spectrum of seismic events. With V_s,30 values ranging from 100 m/s to 2000 m/s, corresponding to soil classes A to D, the dataset accommodates diverse soil conditions. Furthermore, the rupture distance R_rup ranges from 0.07 km to 185 km, and the magnitude M_w spans from 3.2 to 7.9. Moreover, the database includes records for all faulting types, providing a comprehensive overview of seismic activity.
Seismic acceleration signals are complex and require a significant amount of data for thorough characterization. To address this challenge, the selection of proper intensity measures (IMs) to simplify the representation of strong ground motion signals is essential. To this end, the 18 IMs reported in [52] are adopted in the present study as input features to characterize the amplitude, the frequency content, and the duration of the ground motion signal.
Fifteen thousand time history analyses were conducted for each block to create the dataset. Only responses where rocking motion was initiated and the block did not overturn were retained, resulting in a total of 3,256,553 responses, for each of which the maximum seismic rocking rotation was recorded. The resulting dataset comprises 20 features: the 18 IMs that characterize the ground motion signals, along with the frequency parameter (p) and the slenderness parameter (tan(α)) that characterize the rocking blocks. The performance of the analyzed freestanding blocks is assessed based on the maximum tilt rotation |θ_max|, normalized by the critical overturning rotation (α). This engineering demand parameter (EDP) is the target variable of the regression model. It was obtained by numerically solving Housner’s equation of motion (Equation (1)) in the Matlab environment [54] using the explicit Runge–Kutta method (ode45). Figure 2 presents a summary of the distribution of this target variable. It is evident that the majority of instances exhibited a small maximum rotation, with the distribution closely resembling a gamma distribution. To explore the sensitivity of the models to small rotations, three different lower-bound thresholds of the target variable were considered.
Specifically, the whole dataset was used to train the reference model, while three reduced datasets with lower bounds of the target variable set at 0.001, 0.01, and 0.1 were used to train the corresponding models. Values lower than 0.001 and 0.01 practically correspond to non-rocking and slightly rocking responses, respectively, and can therefore be omitted [30]. Within these intervals, the response is governed by PGA; since a major portion of the dataset lies within them, these responses were excluded from the training dataset to investigate the potential bias they introduce in the model. Finally, values greater than 0.1 correspond to a substantial rocking response, where the identification of the significant features is of major importance.
The substantial volume of response data facilitates training a robust model within the specified range of interest. Given that different rocking structures can be represented by a rigid block with an equivalent frequency parameter and slenderness, the trained model can also predict their maximum response and interpret the effects of input features on the rocking behavior. Since the rocking response is governed by size-scale effects [2], the dataset is split according to the frequency parameter p into three subsets that correspond to small (3.8 ≤ p ≤ 7), medium (1.56 ≤ p < 3.8), and large (0.7 ≤ p < 1.56) rocking oscillators. Specifically, the small, medium, and large oscillators correspond to rigid bodies with semi-diagonals lower than 0.50 m, 3.00 m, and 15.00 m, respectively (Equation (2)).
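A sketch of how the training subsets used in the following sections can be formed is given below; the dataframe and its column names ("p", "theta_ratio") are hypothetical placeholders for the response database described above.

```python
# Sketch: forming the 16 training subsets (4 ranges of p x 4 lower bounds of
# theta/alpha). The dataframe and its column names are hypothetical.
import pandas as pd

p_ranges = {
    "unrestricted": (0.0, float("inf")),
    "large":  (0.7, 1.56),    # semi-diagonals up to 15.00 m
    "medium": (1.56, 3.8),    # semi-diagonals up to 3.00 m
    "small":  (3.8, 7.0001),  # semi-diagonals up to 0.50 m (upper bound inclusive)
}
theta_lower_bounds = [0.0, 1e-3, 1e-2, 0.1]

def make_subsets(df: pd.DataFrame) -> dict:
    subsets = {}
    for name, (lo, hi) in p_ranges.items():
        for bound in theta_lower_bounds:
            mask = (df["p"] >= lo) & (df["p"] < hi) & (df["theta_ratio"] > bound)
            subsets[(name, bound)] = df.loc[mask]
    return subsets
```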

3. Machine Learning Algorithms

3.1. Predictive Model

There are many widely employed algorithms in the field of Machine Learning (ML) pertaining to the regression problem [55]. However, the large number of input vectors in the examined dataset imposes inherent restrictions on computational capacity. Thus, efficient algorithms should be implemented for this task. In a previous study by Karampinis et al. [52] in which the same dataset was considered, a variety of different models were employed, including artificial neural networks (ANNs), regression trees, random forests, and gradient boosting regressors. The best-performing one was shown to be gradient boosting. Thus, in principle, this is the algorithm that should be employed when explainability is the goal, as is the case in the present study. To this end, a modification of this algorithm, namely LightGBM, was employed, which offered significant computational benefits compared to the previously employed algorithm [56].
LightGBM is a tree-based boosting ensemble model. Its fundamental component is the decision tree (DT). A DT is composed of two parts, namely, the nodes and the branches. The top node is called the root, while the bottom layer of nodes is called the leaves. Starting from the root, each node is associated with a specific feature X_i and a binary decision rule pertaining to that feature. This binary rule splits the node into two branches, and the process terminates at the leaves [57]. LightGBM combines many such decision trees in a process known as boosting ensembling. In this framework, the predictions and errors of each tree are iteratively used to train the next one, so that each iteration of the algorithm corrects the errors of the previous one. However, this is a time-consuming and computationally intensive process, especially for large datasets with continuous features, such as the one employed in the present study. The reason is that, at each binary split, each tree needs to consider all the features and all the distinct values of each feature to find the optimal splitting rule. To alleviate this, LightGBM bins each feature into discrete intervals before training begins [56].
There are a number of parameters that the user needs to predefine which are not updated during training. These are the so-called hyperparameters of the model and include characteristics such as the maximum number of trees, the maximum allowed depth of each tree, or the minimum number of data points that must remain on each leaf. These hyperparameters can affect the performance of the model, and thus, dedicated algorithms have been introduced in the literature for the task of hyperparameter optimization [58]. However, the very large size of our dataset leads to inherent constraints on both time and computational resources. Due to these, the hyperparameter optimization was performed using a trial and error approach. To this end, a so-called 70–15–15 split was implemented, wherein 70% of the dataset was used for training the ML models, 15% was used as a validation set, i.e., to measure the performance and optimize the hyperparameters, and finally, the remaining 15% was used as a test set to measure the performance of the models on truly unseen data and thus, quantify their generalization ability.
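The sketch below shows the 70–15–15 split and a LightGBM regressor trained with early stopping on the validation set. The synthetic data and the hyperparameter values are placeholders, not the tuned values used in the paper.

```python
# Sketch of the 70-15-15 split and LightGBM training. X and y below are
# synthetic placeholders for the (IMs, p, tan(alpha)) features and the target.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((10_000, 20))                                  # placeholder feature matrix
y = X @ rng.random(20) + 0.1 * rng.standard_normal(10_000)    # placeholder target

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

model = lgb.LGBMRegressor(
    n_estimators=2000,        # maximum number of trees (illustrative)
    learning_rate=0.05,
    num_leaves=63,
    min_child_samples=50,     # minimum number of data points per leaf
)
model.fit(
    X_train, y_train,
    eval_set=[(X_val, y_val)],
    callbacks=[lgb.early_stopping(stopping_rounds=50), lgb.log_evaluation(0)],
)
print("test R^2:", r2_score(y_test, model.predict(X_test)))
```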

3.2. Interpretation Methods—SHAP, PDP, and ALE

Despite their well-documented power, complex ML models often suffer from an inability to provide direct insight into how they arrive at their predictions. Thus, researchers and practitioners cannot readily gauge each input parameter’s qualitative and quantitative effect on the model. Furthermore, understanding how the predictions are made can increase trust in them [59]. To this end, various dedicated methodologies have been introduced in the literature [60]. In this Section, three commonly employed algorithms, namely SHapley Additive exPlanations (SHAP), the partial dependence plot (PDP), and accumulated local effects (ALE), are presented; they are employed in the sequel of our numerical experimentations.

3.2.1. SHapley Additive exPlanations (SHAP)

SHAP values are a game-theoretic formulation recently introduced by Lundberg and Lee [61]. They are based on the so-called Shapley values from cooperative game theory [62]. In this framework, the complex ML model f is approximated in the neighborhood of each data point x by a simpler, inherently explainable linear model g, i.e.,
$$g(\hat{x}) = \phi_0 + \sum_{i=1}^{n} \phi_i \hat{x}_i, \tag{4}$$
where n is the number of input features, x̂ ∈ {0, 1}ⁿ is a binary vector, and φ_i, i = 1, 2, …, n, denote the SHAP values. The binary vector x̂ is related to x via a mapping h_x. Its ith value denotes whether or not the corresponding feature was included in the computations. Following the above, the SHAP values are given by the formula [61]
$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\left[f(S \cup \{i\}) - f(S)\right]. \tag{5}$$
In Equation (5), N = {1, 2, …, n} is the set of all the input features, and S ⊆ N \ {i} is a subset of N called a “coalition”. Thus, the above equation is a weighted average, over all possible feature coalitions, of the difference in the model predictions at point x with and without the inclusion of the feature of interest. Intuitively, this quantifies the contribution of the feature to the model in the neighborhood of x. This can be appropriately averaged over the whole dataset to produce a global estimate of the magnitude of each feature’s contribution to the model’s outputs.
As can be readily observed from Equation (5), the computation of SHAP values is of combinatorial complexity and thus, can be computationally intractable for general models, leading to approximations. However, an advantage of a tree-based model such as LightGBM, which was employed in the present study, is that it allows for efficient computations, leading to exact SHAP values [63,64].
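A sketch of computing exact tree SHAP values for a trained LightGBM model follows; it continues the previous sketch, and the feature labels are hypothetical placeholders.

```python
# Sketch: exact SHAP values for the LightGBM model via the tree explainer,
# continuing the previous sketch. Feature labels are hypothetical.
import numpy as np
import shap

feature_names = [f"IM_{i}" for i in range(18)] + ["p", "tan_alpha"]

explainer = shap.TreeExplainer(model)         # exact, polynomial-time for tree ensembles
shap_values = explainer.shap_values(X_test)   # shape: (n_samples, n_features)

# Global importance: normalized mean absolute SHAP value per feature,
# analogous to the "normalized mean absolute SHAP" plots discussed below.
mean_abs = np.abs(shap_values).mean(axis=0)
normalized_importance = mean_abs / mean_abs.sum()

# Per-sample contributions (beeswarm summary plot)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)
```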

3.2.2. Partial Dependence Plot (PDP)

As was already mentioned, LightGBM is a (gradient-based) boosting ensemble algorithm. This class of algorithms was introduced in 2001 by Jerome Friedman [65]. In the same paper, he introduced the partial dependence plot (PDP) to interpret this newly proposed model. PDP marginalizes the effect of a set S of features of interest by averaging over C, the complement of S, as follows:
$$\mathrm{PD}(X_S) = \mathbb{E}_{X_C}\!\left[\hat{f}(X_S, X_C)\right] = \int_{X_C} \hat{f}(X_S, X_C)\, P(X_C)\, dX_C, \tag{6}$$
where f̂ is the trained ML model, X_S are the values of the features of interest at a specific input point X, X_C are the corresponding values of all the other features in the dataset, and P(X_C) is the marginal probability distribution of X_C, i.e.,
$$P(X_C) = \int_{X_S} \tilde{P}(X)\, dX_S, \tag{7}$$
where P̃(X) is the joint probability distribution of all the features in the input dataset.
The partial dependence function given by Equation (6) can be computed for an arbitrary number of features. However, to visualize the results, S usually comprises one or two features of interest. In a numerical setting, the integral in Equation (6) is approximated by assigning equal probability to all data points, leading to
$$\widehat{\mathrm{PD}}(X_S) = \frac{1}{N} \sum_{i=1}^{N} \hat{f}\!\left(X_S, X_C^{(i)}\right), \tag{8}$$
where N denotes the total number of input data points.
While PDP is a widely employed and well-established algorithm, it produces exact results only when the features are completely independent from one another. This is because the integration in Equation (6) and the corresponding summation in Equation (8) hold the values of X_S constant while averaging over X_C. However, if the features are dependent, X_C depends on X_S. By ignoring this dependency, the integration/summation assigns equal weight to pairs (X_S, X_C) that are potentially unlikely to occur in the dataset.
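A minimal numerical sketch of Equation (8) for a single feature of interest is shown below; in practice, routines such as scikit-learn's partial dependence utilities offer an equivalent, more general implementation.

```python
# Sketch of the PDP estimate of Equation (8): fix the feature of interest on a
# grid and average the model predictions over the observed values of the rest.
import numpy as np

def partial_dependence_1d(model, X, feature_idx, grid_size=30):
    grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), grid_size)
    pd_values = np.empty(grid_size)
    for k, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature_idx] = value               # X_S held constant
        pd_values[k] = model.predict(X_mod).mean()  # average over X_C (Equation (8))
    return grid, pd_values
```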

3.2.3. Accumulated Local Effects (ALE)

Accumulated local effects (ALE) was recently introduced by Apley and Zhu in 2020 [66] and aims to address the aforementioned PDP drawback. For a single feature of interest X_j, with value x_S at the input point X, ALE takes the form
$$\mathrm{ALE}(x_S) = \int_{\min X_j}^{x_S} \mathbb{E}_{X_C \mid z_S}\!\left[\left.\frac{\partial \hat{f}(X_j, X_C)}{\partial X_j}\right|_{X_j = z_S}\right] dz_S - c = \int_{\min X_j}^{x_S} \int_{X_C} \frac{\partial \hat{f}(X_j, X_C)}{\partial X_j}\, P(X_C \mid z_S)\, dX_C\, dz_S - c, \tag{9}$$
where min X_j is the minimum value of the feature of interest, P(X_C | z_S) is the conditional probability distribution of X_C given the value z_S of the feature of interest, and c is a constant introduced to center the value of ALE, i.e., so that
$$\mathbb{E}_{x_S}\!\left[\mathrm{ALE}(x_S)\right] = 0. \tag{10}$$
By using the conditional instead of the marginal probability distribution, ALE aims to address the issue of averaging over unlikely pairs (X_S, X_C) discussed in the previous section. Furthermore, the differentiation of f̂ (computed as a finite difference in a numerical setting) aims to isolate the effect of x_S from X_C. Finally, the centering of the ALE values readily allows for a qualitative model interpretation: when ALE > 0 (respectively, ALE < 0), the feature has a positive (respectively, negative) contribution compared to the average effect. However, the quantitative interpretation of ALE is not straightforward, contrary to PDP, which directly approximates the variations in the model output.
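The following is a minimal first-order ALE sketch consistent with Equations (9) and (10): quantile bins, finite-difference local effects within each bin, accumulation, and centering. Dedicated packages provide more complete implementations; this is only an illustration.

```python
# Sketch of a first-order ALE estimate (Equations (9) and (10)) for one feature.
import numpy as np

def ale_1d(model, X, feature_idx, n_bins=20):
    x = X[:, feature_idx]
    # Quantile bin edges, so each bin contains roughly the same number of points
    edges = np.unique(np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)))
    bin_idx = np.clip(np.digitize(x, edges[1:-1]), 0, len(edges) - 2)

    local_effects = np.zeros(len(edges) - 1)
    for b in range(len(edges) - 1):
        mask = bin_idx == b
        if not mask.any():
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature_idx] = edges[b]       # lower edge of the bin
        X_hi[:, feature_idx] = edges[b + 1]   # upper edge of the bin
        # Finite-difference local effect, averaged over the points in the bin
        local_effects[b] = (model.predict(X_hi) - model.predict(X_lo)).mean()

    ale = np.concatenate([[0.0], np.cumsum(local_effects)])  # accumulate over bins
    # Center so that the data-weighted mean of ALE is zero (Equation (10))
    counts = np.bincount(bin_idx, minlength=len(edges) - 1)
    ale -= np.sum(0.5 * (ale[:-1] + ale[1:]) * counts) / counts.sum()
    return edges, ale   # ALE values at the bin edges
```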

4. Results

In total, 16 models were trained using the whole dataset and the subsets generated based on the restrictions of θ/α and p. The performance of the models in terms of the coefficient of determination (R²) is listed in Table 1. The coefficient of determination is evaluated on the test dataset, which was taken as 15% of the original dataset, as mentioned before. It is interpreted as the proportion of the variance in the examined set that is explained by the predictions of the trained ML model.
For a given response level, the frequency parameter only slightly affects the performance of the models. On the other hand, the performance depends strongly on the lower bound of the target values. Specifically, as the minimum rotation threshold increases, the coefficient of determination decreases. This can be attributed to the concentration of small rocking rotations, for which the oscillators barely uplift; as a reminder, uplift initiation is a deterministic procedure (Equation (3)). Furthermore, it is known that small changes in the input parameters can lead to very different rocking responses, a phenomenon that is more pronounced for larger rotation angles. This also explains the reduced model accuracy in this rotation range.
SHAP values of the features of the four models trained with subsets of different lower bounds of the target variable are presented in Figure 3. It should be noted that these SHAP values pertain to the logit of the rocking response, i.e.,
$$\ln\!\left(\frac{\theta}{1 - \theta}\right). \tag{11}$$
Using a logarithmic transformation is common; however, in the considered scenario, the implementation of a logit instead of a natural logarithm is preferable. This is because the logit, as can be seen from the above equation, is symmetric in the (0, 1) range. As is well known, the following limits hold:
$$\lim_{\theta \to 0^+} \ln\!\left(\frac{\theta}{1 - \theta}\right) = -\infty, \qquad \lim_{\theta \to 1^-} \ln\!\left(\frac{\theta}{1 - \theta}\right) = +\infty.$$
Thus, SHAP values are allowed to be unbounded both for very small and for very large rotation angles. In turn, this will allow disproportionally important features to manifest in both these ranges. On the other hand, if the natural logarithm is employed, as is the most common approach, then disproportionally important features can only appear for small rotation angles and not large ones.
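A short sketch of the transformation in Equation (11) and its inverse, as one might apply it to the normalized rotation before training, is given below; the small clipping constant is our own safeguard, not part of the paper's procedure.

```python
# Sketch of the logit transform of Equation (11) and its inverse. The clipping
# constant is an assumption used to guard values at the interval ends.
import numpy as np

def to_logit(theta_ratio, eps=1e-12):
    t = np.clip(theta_ratio, eps, 1.0 - eps)   # keep strictly inside (0, 1)
    return np.log(t / (1.0 - t))

def from_logit(z):
    return 1.0 / (1.0 + np.exp(-z))            # inverse logit (sigmoid)
```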
The SHAP plots imply that the lower bound significantly affects which features are of major importance. Specifically, when the whole dataset is considered (θ > 0), the concentration of slightly rocking responses reinforces the significance of PGA. It can be readily observed from the top left subfigure that the SHAP values corresponding to PGA can take very large negative values, dominating the rest of the features. However, these very large values are not manifested even for a very low threshold of θ > 0.001. Evidently, no single feature exhibits very large SHAP values, either positive or negative, for larger thresholds. This is the case even though, as mentioned, the logit transformation shown in Equation (11) is employed. Thus, it can be inferred that, for larger angles, there is no single feature that is as dominant as PGA is in the small-angle range. Furthermore, as the lower bound of the subset increases, the structural characteristics (α and p) emerge as the most important features. Simultaneously, the effect of PGA decreases, while parameters that characterize the frequency content of the ground motion signal (T_m) substantially affect the rocking response. Parameters such as PGD and CAV, known for their low correlation with the rocking response [28], are among the features of minor importance.
The normalized mean absolute SHAP value plots of the 16 models are depicted in Figure 4. The effect of the lower bound of the target values on the feature importance is effectively the same, with only minor variations, regardless of the restrictions on the frequency parameter p. Among the examined features, only the importance of PGA noticeably decreases as the lower bound of the rocking response increases. Additionally, a well-known property of the rocking response is outlined. Specifically, the effect of PGA on the rocking response tends to decrease as the size of the freestanding block increases [30]. Moreover, the importance of the frequency parameter p is minor when datasets restricted to blocks of similar size are considered, and it vanishes when examining small blocks. Finally, the normalized mean absolute SHAP values of the remaining IMs do not present a distinct trend among the different subsets.
Given the strong correlation among many of the adopted IMs [35], the ML models have also been trained using a reduced number of features that characterize the seismic signal (IMs). To this end, the intercorrelation between the selected IMs was calculated, and one IM from each pair with a correlation coefficient greater than 0.75 was excluded. Based on the adopted criterion, a balanced set of features that correspond to well-known amplitude-, frequency-, energy-, and duration-based IMs [31] was selected. Accordingly, datasets consisting of 9 IMs instead of 18 were used to train ML models capable of highlighting the effect of the IMs on the rocking response.
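A sketch of the correlation screening described above follows. The greedy pairwise rule is a simplification of the paper's procedure, which additionally balanced amplitude-, frequency-, energy-, and duration-based IMs; the dataframe of IM columns is a hypothetical placeholder.

```python
# Sketch of correlation-based IM screening: drop one IM from every pair whose
# absolute Pearson correlation exceeds 0.75. The greedy rule is a simplification.
import pandas as pd

def reduce_ims(im_df: pd.DataFrame, threshold: float = 0.75) -> list:
    corr = im_df.corr().abs()
    keep = list(im_df.columns)
    for i, a in enumerate(im_df.columns):
        for b in im_df.columns[i + 1:]:
            if a in keep and b in keep and corr.loc[a, b] > threshold:
                keep.remove(b)    # exclude the second IM of the highly correlated pair
    return keep
```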
The performance of the models with a reduced number of features characterizing the ground motions, in terms of R², is listed in Table 2. Despite the reduced feature set, their performance is comparable to the models trained with twice as many IMs as input features. Quantitatively, the difference in performance between the respective p–θ combinations is very small and, in most cases, less than 0.01 in terms of R². For example, for the completely unrestricted case, i.e., unrestricted p and θ, the model using the full feature set achieved an R² of 0.887, while the model using the reduced feature set achieved an R² of 0.891. These remarks stem from the intercorrelation of the IMs. Overall, the largest difference in performance was found to be 0.021 in terms of R², which is deemed very low. Qualitatively, the results also exhibited similar trends. On the one hand, the restrictions on p only very slightly affected the observed performance. On the other hand, similar to Table 1, a reduction in performance is observed when large rotations are considered.
The SHAP values of the features of the four models trained with subsets of different lower bounds of the target variable are presented in Figure 5. The feature importance obtained when considering the whole dataset (θ > 0) leads to misleading observations, since the dataset is dominated by slightly uplifting cases (θ < 0.001) in which the freestanding block practically does not rock. As a result, parameters with a known weak correlation, such as PGD, emerge as features of major importance. However, once these cases are neglected, reasonable features emerge as important. Moreover, the well-known scale-size effect can be observed. Specifically, as the size of the freestanding block increases (small values of p), the response becomes more stable, and vice versa. Indeed, small values of p negatively impact the response (small rocking rotations), while larger values of p result in positive SHAP values. The same remarks regarding the scale-size effect can also be derived considering the frequency content of the seismic signal (T_m). Finally, Figure 3, in conjunction with Figure 5, demonstrates the benefit of presenting the SHAP values on the logit of the response.
The normalized mean absolute SHAP value plots of the 16 models are depicted in Figure 6. The effect of the lower bound of the target values on the feature importance is practically identical regardless of the restrictions on the frequency parameter p. Moreover, the effect of PGA on the rocking response tends to decrease as the size of the freestanding block increases. Simultaneously, the effect of energy- and frequency-based IMs increases. Additionally, the frequency parameter p becomes more important as the rocking rotations increase. Finally, when examining the dataset with a reduced number of features, the significance of each parameter becomes more evident, highlighting how it shifts across different subsets. Thus, even though the tree-based models used a reduced set of features, which led them to learn a different sequence of splits and different decision paths, it can generally be observed that the overall trends and the most significant features remained unaltered.
For the PDP and ALE, θ was explained directly. For each feature, the PDP shows how the predicted angle varies, on average, with the feature value. ALE, on the other hand, is centered so that its average is zero; it therefore indicates whether, at a given feature value, the feature’s effect on the predicted angle is positive or negative compared to the average effect.
Figure 7 summarizes both the PDP and ALE results of the model trained with 20 input features, with no restrictions imposed on the lower bound of the rocking rotations or the frequency parameter of the freestanding blocks. As Equations (6)–(9) demonstrate, the PDP and ALE approaches have common elements but also some differences. Most notably, the two quantities do not have the same scale, which is why each of the two is shown with its own y-axis in each subplot. The most dominant effect is presented by the structural features α and p. As expected, the slenderness negatively impacts the target variable across its entire range. On the other hand, the frequency parameter presents the reverse effect. A common trend among all the input features that characterize the ground motions is that they affect the response up to a certain IM level; beyond that level, the effect is negligible. The most impactful IMs are PGA, PGV, L_ca, L_cv, and I_A.
It can also be observed that the curves pertaining to some of the features, such as CAV, CAD, PGD, and T_m, are close to flat. For these features, this is true for both types of examined curves, i.e., PDP and ALE. This indicates that the effect these features had on the model’s predictions was very small. This is in agreement with the overall feature importance, as quantified by the normalized SHAP values presented in Figure 4. Finally, it should be noted that, as can be observed from Equation (6), PDP provides a quantification of the effect of each individual parameter in addition to the qualitative behavior displayed in its respective plots. On the other hand, as can be observed from Equation (9), the quantitative interpretation of ALE is not straightforward. However, in general, both curves are in agreement about the qualitative effect of each parameter on the predictions of the ML model.
Figure 8 illustrates the combined PDPs between pairs of input features and the target variable. This allows us to examine how feature combinations affect the rocking response. Thus, the effect of feature interactions can be observed, complementing the main effects displayed in Figure 7. Figure 8a highlights the effect of both structural features on the rocking response, i.e., in general, large blocks (small values of p) with small slenderness (large values of α) develop lower values of rocking rotations. Additionally, the uplift criterion can be observed in Figure 8b. Specifically, a distinct region is delineated by a constant ratio α/PGA = 1, where the rocking rotations are negligible (θ ≈ 0). Given a certain value of PGA or PGV, the resistance of rocking blocks to mobilizing their moment of inertia increases as the frequency parameter decreases, resulting in lower rocking rotations [31]. That behavior is pronounced in Figure 8c,d.

5. Summary and Conclusions

This study demonstrates the effective application of machine learning techniques, particularly the LightGBM algorithm, to predict the seismic response of rigid rocking structures. By generating a comprehensive dataset from extensive nonlinear time history analyses, we were able to train and evaluate models that accurately predict the maximum rotation angle of rigid blocks subjected to various ground motions. The dataset covers an extensive range of input parameters, which describe the dynamic response of structures ranging from small-sized building contents to rocking bridge piers [52]. This range defines the applicability of the present study, and potential avenues for future research could include refining the grid of the studied input features.
Our findings highlight that the LightGBM model performs well, especially for small to moderate rotation angles, with high accuracy and reliability. The performance of the two sets of models, i.e., those trained using the full and the reduced feature sets, respectively, was comparable, as shown in Table 1 and Table 2. Furthermore, the models exhibited similar qualitative trends with regard to the different θ/α and p restrictions. In both cases, p only very slightly affected the performance of the ML model. On the other hand, θ/α had a more noticeable effect on the performance, since a drop of approximately 0.1 was observed in the coefficient of determination R² for the high-rotation range.
The use of SHapley Additive exPlanations (SHAP), partial dependence plots (PDP), and accumulated local effects (ALE) provided valuable insights into feature importance, revealing that ground motion parameters like peak ground acceleration (PGA) are crucial for predicting small rotations. Structural parameters such as slenderness and the frequency parameter became more significant for larger rotations. The study also shows that model performance is sensitive to the lower bound thresholds of the target variable, with higher thresholds leading to decreased accuracy. This underscores the importance of tailored models for different ranges of structural responses. Additionally, reduced feature sets, obtained by excluding strongly intercorrelated ground motion parameters, proved effective in maintaining high predictive performance, demonstrating the feasibility of simplified models in practical applications.
The findings of this study can be applied in the design and assessment of rigid rocking structures by prioritizing the most significant intensity measures (IMs) and structural parameters. Explainable machine learning models, as employed in the present study, offer the benefit of high accuracy and, thus, reliability in identifying and prioritizing the most influential factors, leading to more informed design decisions and optimized structural performance. Additionally, these models facilitate faster analysis and can be continuously updated with new data, ensuring that engineering practices remain current and effective in mitigating seismic risks. Finally, these models can be extended to various structural types across different engineering contexts, while integrating real-time seismic data and structural health monitoring systems can enable prompt assessment during seismic events.

Author Contributions

Conceptualization, I.K., K.E.B. and I.E.K.; methodology, I.K., K.E.B., I.E.K., L.I. and A.E.; software, I.K., K.E.B., I.E.K., L.I. and A.E.; validation, I.K., K.E.B. and I.E.K.; formal analysis, I.K. and K.E.B.; investigation, I.K., K.E.B., I.E.K., L.I. and A.E.; resources, I.E.K., L.I. and A.E.; data curation, I.K., K.E.B., I.E.K., L.I. and A.E.; writing—original draft preparation, I.K., K.E.B. and I.E.K.; writing—review and editing, A.E. and L.I.; visualization, I.K. and K.E.B.; supervision L.I. and A.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ML    Machine Learning
SHAP  SHapley Additive exPlanations
PDP   Partial Dependence Plots
ALE   Accumulated Local Effects
PGA   Peak Ground Acceleration
IM    Intensity Measure
ANN   Artificial Neural Network
EDP   Engineering Demand Parameter

References

  1. Housner, G.W. The behavior of inverted pendulum structures during earthquakes. Bull. Seismol. Soc. Am. 1963, 53, 403–417. [Google Scholar] [CrossRef]
  2. Makris, N. A half-century of rocking isolation. Earthq. Struct. 2014, 7, 1187–1221. [Google Scholar] [CrossRef]
  3. Gelagoti, F.; Kourkoulis, R.; Anastasopoulos, I.; Gazetas, G. Rocking isolation of low-rise frame structures founded on isolated footings. Earthq. Eng. Struct. Dyn. 2012, 41, 1177–1197. [Google Scholar] [CrossRef]
  4. Agalianos, A.; Psychari, A.; Vassiliou, M.F.; Stojadinovic, B.; Anastasopoulos, I. Comparative assessment of two rocking isolation techniques for a motorway overpass bridge. Front. Built Environ. 2017, 3, 47. [Google Scholar] [CrossRef]
  5. Giouvanidis, A.I.; Dimitrakopoulos, E.G. Seismic performance of rocking frames with flag-shaped hysteretic behavior. J. Eng. Mech. 2017, 143, 04017008. [Google Scholar] [CrossRef]
  6. Li, S.; Hu, Y.; Lu, Z.; Song, B.; Huang, G. Seismic Isolation of Fragile Pole-Type Structures by Rocking with Base Restraints. Buildings 2024, 14, 1176. [Google Scholar] [CrossRef]
  7. He, X.; Unjoh, S.; Yamazaki, S.; Noro, T. Development of a bidirectional rocking isolation bearing system (Bi-RIBS) to control excessive seismic response of bridge structures. Earthq. Eng. Struct. Dyn. 2023, 52, 3074–3096. [Google Scholar] [CrossRef]
  8. Wada, A.; Qu, Z.; Motoyui, S.; Sakata, H. Seismic retrofit of existing SRC frames using rocking walls and steel dampers. Front. Archit. Civ. Eng. China 2011, 5, 259–266. [Google Scholar] [CrossRef]
  9. Ríos-García, G.; Benavent-Climent, A. New rocking column with control of negative stiffness displacement range and its application to RC frames. Eng. Struct. 2020, 206, 110133. [Google Scholar] [CrossRef]
  10. Bachmann, J.; Vassiliou, M.F.; Stojadinović, B. Dynamics of rocking podium structures. Earthq. Eng. Struct. Dyn. 2017, 46, 2499–2517. [Google Scholar] [CrossRef]
  11. Bantilas, K.E.; Kavvadias, I.E.; Vasiliadis, L.K. Seismic response of elastic multidegree of freedom oscillators placed on the top of rocking storey. Earthq. Eng. Struct. Dyn. 2021, 50, 1315–1333. [Google Scholar] [CrossRef]
  12. Bantilas, K.E.; Kavvadias, I.E.; Vasiliadis, L.K. Analytical investigation of the seismic response of elastic oscillators placed on the top of rocking storey. Bull. Earthq. Eng. 2021, 19, 1249–1270. [Google Scholar] [CrossRef]
  13. Bantilas, K.E.; Kavvadias, I.E.; Elenas, A. Analytical modeling and seismic performance of a novel energy dissipative kinematic isolation for building structures. Eng. Struct. 2023, 294, 116777. [Google Scholar] [CrossRef]
  14. Jaimes, M.A.; Trejo, S.; Juarez, V.; Garcia-Soto, A.D. Seismic response of structures with a rocking seismic isolation system at their base under narrow-band earthquake loading. Earthq. Struct. 2023, 25, 269–282. [Google Scholar]
  15. Reggiani Manzo, N.; Vassiliou, M.F.; Mouzakis, H.; Badogiannis, E. Shaking table tests of a resilient bridge system with precast reinforced concrete columns equipped with springs. Earthq. Eng. Struct. Dyn. 2022, 51, 213–239. [Google Scholar] [CrossRef]
  16. Makris, N.; Vassiliou, M.F. Planar rocking response and stability analysis of an array of free-standing columns capped with a freely supported rigid beam. Earthq. Eng. Struct. Dyn. 2013, 42, 431–449. [Google Scholar] [CrossRef]
  17. Banović, I.; Radnić, J.; Grgić, N.; Buzov, A. Performance of geotechnical seismic isolation using stone pebble-geogrid layer: Experimental investigation. Soil Dyn. Earthq. Eng. 2023, 171, 107941. [Google Scholar] [CrossRef]
  18. Ko, K.W.; Ha, J.G.; Kim, D.S. Analytical evaluation and experimental validation on dynamic rocking behavior for shallow foundation considering structural response. Earthq. Eng. Eng. Vib. 2022, 21, 37–51. [Google Scholar] [CrossRef]
  19. Li, S.; Tsang, H.H.; Lam, N. Seismic protection by rocking with superelastic tendon restraint. Earthq. Eng. Struct. Dyn. 2022, 51, 1718–1737. [Google Scholar] [CrossRef]
  20. Dasiou, M.E.; Lachanas, C.G.; Melissianos, V.E.; Vamvatsikos, D. Seismic performance of the temple of Aphaia in Aegina island, Greece. Earthq. Eng. Struct. Dyn. 2024, 53, 573–592. [Google Scholar] [CrossRef]
  21. Buzov, A.; Radnić, J.; Grgić, N.; Baloević, G. Effect of the joint type on the bearing capacity of a multi–drum column under static load. Int. J. Archit. Herit. 2018, 12, 137–152. [Google Scholar] [CrossRef]
  22. Kazantzi, A.; Lachanas, C.; Vamvatsikos, D. Seismic response distribution expressions for rocking building contents under ordinary ground motions. Bull. Earthq. Eng. 2022, 20, 6659–6682. [Google Scholar] [CrossRef]
  23. Liu, P.; Zhang, Y.M.; Chen, H.T.; Yang, W.G. Experimental study on rocking blocks subjected to bidirectional ground and floor motions via shaking table tests. Earthq. Eng. Struct. Dyn. 2023, 52, 3171–3192. [Google Scholar] [CrossRef]
  24. Fragiadakis, M.; Diamantopoulos, S. Fragility and risk assessment of freestanding building contents. Earthq. Eng. Struct. Dyn. 2020, 49, 1028–1048. [Google Scholar] [CrossRef]
  25. Huang, B.; Günay, S.; Lu, W. Seismic assessment of freestanding ceramic vase with shaking table testing and performance-based earthquake engineering. J. Earthq. Eng. 2022, 26, 7956–7978. [Google Scholar] [CrossRef]
  26. Yu, P.; Zhai, C.; Liu, J.; Wang, X. Shake table tests for the seismic performance assessment of desktop medical laboratory equipment considering the effect of adjacent walls and restrainers. In Structures; Elsevier: Amsterdam, The Netherlands, 2023; Volume 50, pp. 1922–1933. [Google Scholar]
  27. Bachmann, J.; Strand, M.; Vassiliou, M.F.; Broccardo, M.; Stojadinović, B. Is rocking motion predictable? Earthq. Eng. Struct. Dyn. 2018, 47, 535–552. [Google Scholar] [CrossRef]
  28. Giouvanidis, A.I.; Dimitrakopoulos, E.G. Rocking amplification and strong-motion duration. Earthq. Eng. Struct. Dyn. 2018, 47, 2094–2116. [Google Scholar] [CrossRef]
  29. Lachanas, C.G.; Vamvatsikos, D.; Dimitrakopoulos, E.G. Statistical property parameterization of simple rocking block response. Earthq. Eng. Struct. Dyn. 2023, 52, 394–414. [Google Scholar] [CrossRef]
  30. Sieber, M.; Vassiliou, M.F.; Anastasopoulos, I. Intensity measures, fragility analysis and dimensionality reduction of rocking under far-field ground motions. Earthq. Eng. Struct. Dyn. 2022, 51, 3639–3657. [Google Scholar] [CrossRef]
  31. Kavvadias, I.E.; Vasiliadis, L.K.; Elenas, A. Seismic response parametric study of ancient rocking columns. Int. J. Archit. Herit. 2017, 11, 791–804. [Google Scholar] [CrossRef]
  32. Dimitrakopoulos, E.G.; Paraskeva, T.S. Dimensionless fragility curves for rocking response to near-fault excitations. Earthq. Eng. Struct. Dyn. 2015, 44, 2015–2033. [Google Scholar] [CrossRef]
  33. Solarino, F.; Giresini, L. Fragility curves and seismic demand hazard analysis of rocking walls restrained with elasto-plastic ties. Earthq. Eng. Struct. Dyn. 2021, 50, 3602–3622. [Google Scholar] [CrossRef]
  34. Kavvadias, I.E.; Papachatzakis, G.A.; Bantilas, K.E.; Vasiliadis, L.K.; Elenas, A. Rocking spectrum intensity measures for seismic assessment of rocking rigid blocks. Soil Dyn. Earthq. Eng. 2017, 101, 116–124. [Google Scholar] [CrossRef]
  35. Lazaridis, P.C.; Kavvadias, I.E.; Demertzis, K.; Iliadis, L.; Vasiliadis, L.K. Structural damage prediction of a reinforced concrete frame under single and multiple seismic events using machine learning algorithms. Appl. Sci. 2022, 12, 3845. [Google Scholar] [CrossRef]
  36. Wang, S.; Cheng, X.; Li, Y.; Song, X.; Guo, R.; Zhang, H.; Liang, Z. Rapid visual simulation of the progressive collapse of regular reinforced concrete frame structures based on machine learning and physics engine. Eng. Struct. 2023, 286, 116129. [Google Scholar] [CrossRef]
  37. Kumari, V.; Harirchian, E.; Lahmer, T.; Rasulzade, S. Evaluation of machine learning and web-based process for damage score estimation of existing buildings. Buildings 2022, 12, 578. [Google Scholar] [CrossRef]
  38. Zahra, F.; Macedo, J.; Málaga-Chuquitaype, C. Hybrid data-driven hazard-consistent drift models for SMRF. Earthq. Eng. Struct. Dyn. 2023, 52, 1112–1135. [Google Scholar] [CrossRef]
  39. Nguyen, H.D.; Dao, N.D.; Shin, M. Prediction of seismic drift responses of planar steel moment frames using artificial neural network and extreme gradient boosting. Eng. Struct. 2021, 242, 112518. [Google Scholar] [CrossRef]
  40. Kazemi, F.; Asgarkhani, N.; Jankowski, R. Predicting seismic response of SMRFs founded on different soil types using machine learning techniques. Eng. Struct. 2023, 274, 114953. [Google Scholar] [CrossRef]
  41. Zhou, W.; Xiong, L.; Jiang, L.; Wu, L.; Xiang, P.; Jiang, L. Optimal combinations of parameters for seismic response prediction of high-speed railway bridges using machine learnings. In Structures; Elsevier: Amsterdam, The Netherlands, 2023; Volume 57, p. 105089. [Google Scholar]
  42. Lazaridis, P.C.; Kavvadias, I.E.; Demertzis, K.; Iliadis, L.; Vasiliadis, L.K. Interpretable Machine Learning for Assessing the Cumulative Damage of a Reinforced Concrete Frame Induced by Seismic Sequences. Sustainability 2023, 15, 12768. [Google Scholar] [CrossRef]
  43. Feng, D.C.; Wang, W.J.; Mangalathu, S.; Taciroglu, E. Interpretable XGBoost-SHAP machine-learning model for shear strength prediction of squat RC walls. J. Struct. Eng. 2021, 147, 04021173. [Google Scholar] [CrossRef]
  44. Junda, E.; Málaga-Chuquitaype, C.; Chawgien, K. Interpretable machine learning models for the estimation of seismic drifts in CLT buildings. J. Build. Eng. 2023, 70, 106365. [Google Scholar] [CrossRef]
  45. Lai, D.; Demartino, C.; Xiao, Y. Interpretable machine-learning models for maximum displacements of RC beams under impact loading predictions. Eng. Struct. 2023, 281, 115723. [Google Scholar] [CrossRef]
  46. Lei, X.; Siringoringo, D.M.; Dong, Y.; Sun, Z. Interpretable machine learning methods for clarification of load-displacement effects on cable-stayed bridge. Measurement 2023, 220, 113390. [Google Scholar] [CrossRef]
  47. Prakash, S.B.; Chandan, K.; Karthik, K.; Devanathan, S.; Kumar, R.V.; Nagaraja, K.; Prasannakumara, B. Investigation of the thermal analysis of a wavy fin with radiation impact: An application of extreme learning machine. Phys. Scr. 2023, 99, 015225. [Google Scholar] [CrossRef]
  48. Achmet, Z.; Diamantopoulos, S.; Fragiadakis, M. Rapid seismic response prediction of rocking blocks using machine learning. Bull. Earthq. Eng. 2024, 22, 3471–3489. [Google Scholar] [CrossRef]
  49. Gerolymos, N.; Apostolou, M.; Gazetas, G. Neural network analysis of overturning response under near-fault type excitation. Earthq. Eng. Eng. Vib. 2005, 4, 213–228. [Google Scholar] [CrossRef]
  50. Pan, X.; Wen, Z.; Yang, T. Dynamic analysis of nonlinear civil engineering structures using artificial neural network with adaptive training. arXiv 2021, arXiv:2111.13759. [Google Scholar]
  51. Shen, S.; Málaga-Chuquitaype, C. Physics-informed artificial intelligence models for the seismic response prediction of rocking structures. Data-Centric Eng. 2024, 5, e1. [Google Scholar] [CrossRef]
  52. Karampinis, I.; Bantilas, K.E.; Kavvadias, I.E.; Iliadis, L.; Elenas, A. Machine Learning Algorithms for the Prediction of the Seismic Response of Rigid Rocking Blocks. Appl. Sci. 2023, 14, 341. [Google Scholar] [CrossRef]
  53. Banimahd, S.A.; Giouvanidis, A.I.; Karimzadeh, S.; Lourenço, P.B. A multi-level approach to predict the seismic response of rigid rocking structures using artificial neural networks. Earthq. Eng. Struct. Dyn. 2024, 53, 2185–2208. [Google Scholar] [CrossRef]
  54. MATLAB, version 9.13.0 (R2022b); The MathWorks Inc.: Portola Valley, CA, USA, 2022.
  55. Fernández-Delgado, M.; Sirsat, M.S.; Cernadas, E.; Alawadi, S.; Barro, S.; Febrero-Bande, M. An extensive experimental survey of regression methods. Neural Netw. 2019, 111, 11–34. [Google Scholar] [CrossRef] [PubMed]
  56. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  57. Loh, W.Y. Classification and regression trees. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2011, 1, 14–23. [Google Scholar] [CrossRef]
  58. Yang, L.; Shami, A. On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  59. Mangalathu, S.; Karthikeyan, K.; Feng, D.C.; Jeon, J.S. Machine-learning interpretability techniques for seismic performance assessment of infrastructure systems. Eng. Struct. 2022, 250, 112883. [Google Scholar] [CrossRef]
  60. Molnar, C. Interpretable Machine Learning; Lulu. com: Morrisville, NC, USA, 2020. [Google Scholar]
  61. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  62. Shapley, L. A Value for n-Person Games; Technical Report; RAND Corporation: Santa Monica, CA, USA, 1952. [Google Scholar]
  63. Lundberg, S.M.; Erion, G.G.; Lee, S.I. Consistent individualized feature attribution for tree ensembles. arXiv 2018, arXiv:1802.03888. [Google Scholar]
  64. Janzing, D.; Minorics, L.; Blöbaum, P. Feature relevance quantification in explainable AI: A causal problem. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; pp. 2907–2916. [Google Scholar]
  65. Friedman, J.H. Greedy function approximation: A gradient boosting machine. In Annals of Statistics; Institute of Mathematical Statistics: Beachwood, OH, USA, 2001; pp. 1189–1232. [Google Scholar]
  66. Apley, D.W.; Zhu, J. Visualizing the effects of predictor variables in black box supervised learning models. J. R. Stat. Soc. Ser. B Stat. Methodol. 2020, 82, 1059–1086. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of rigid rocking oscillator.
Figure 2. Distributions of the target variable.
Figure 3. SHAP summary plots.
Figure 4. Normalized mean absolute SHAP values.
Figure 5. SHAP summary plots using the reduced set of features.
Figure 6. Normalized mean absolute SHAP values using the reduced set of features.
Figure 7. PDP and ALE values for the total dataset.
Figure 8. PDP values for the total dataset. (a) Combined effect of α and p. (b) Combined effect of α and PGA. (c) Combined effect of p and PGA. (d) Combined effect of p and PGV.
Table 1. R² metric for all p–θ combinations using the full feature sets.

            Unrestricted p   0.7 ≤ p < 1.56   1.56 ≤ p < 3.8   3.8 ≤ p ≤ 7
θ > 0            0.887            0.899            0.881           0.891
θ > 10⁻³         0.872            0.883            0.869           0.867
θ > 10⁻²         0.866            0.868            0.847           0.832
θ > 0.1          0.786            0.779            0.780           0.794
Table 2. R² metric for all p–θ combinations using the reduced feature sets.

            Unrestricted p   0.7 ≤ p < 1.56   1.56 ≤ p < 3.8   3.8 ≤ p ≤ 7
θ > 0            0.891            0.887            0.881           0.891
θ > 10⁻³         0.878            0.881            0.857           0.880
θ > 10⁻²         0.860            0.866            0.850           0.839
θ > 0.1          0.784            0.758            0.780           0.781
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
