Article

Reliability Analysis of Complex Structures Under Multi-Failure Mode Utilizing an Adaptive AdaBoost Algorithm

School of Mechanics, Civil Engineering and Architecture, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(22), 10098; https://doi.org/10.3390/app142210098
Submission received: 9 October 2024 / Revised: 28 October 2024 / Accepted: 31 October 2024 / Published: 5 November 2024
(This article belongs to the Special Issue Uncertainty and Reliability Analysis for Engineering Systems)

Abstract

A reliability analysis can become intricate when it involves nonlinear implicit models of complex structures. To improve the accuracy and efficiency of such analyses, this paper presents a surrogate model based on an adaptive AdaBoost algorithm. The model employs an adaptive method to determine the optimal training sample set, ensuring that it is distributed as evenly as possible on both sides of the failure curve and fully captures the information the curve represents. Exploiting the ensemble and iterative characteristics of the AdaBoost algorithm, a simple binary classifier is then applied iteratively to build a high-precision surrogate model for failure discrimination of complex structures under multiple failure modes. Finally, Monte Carlo simulation is employed to assess the failure probability. The accuracy and stability of the proposed method's iterative convergence are validated through three numerical examples. The results show that the proposed method is both precise and efficient and is capable of addressing the reliability evaluation of complex structures under multi-failure mode. The proposed method thus supports the engineering application of mechanical structures and facilitates the use of complex mechanical designs.

1. Introduction

The aim of structural reliability analysis is to quantify the failure probability that arises from a multitude of uncertain factors, such as geometric size errors, variations in material properties, and externally applied loads [1,2]. To improve system reliability and simplify the reliability-based design problems widely encountered in engineering practice [3,4], many scholars have studied the reliability of multi-failure mode systems [5,6,7,8,9,10]. For example, Marozaua et al. [7] conducted a reliability assessment for specific safety assessment problems, analyzed the underlying causes of micro-electro-mechanical system (MEMS) device failure, and carried out failure tests on a typical MEMS device package structure. Wang et al. [8] introduced a probabilistic analysis framework for evaluating the reliability of turbine discs while accounting for the interdependencies among various failure modes. Meng et al. [9] proposed a hybrid reliability-based topology optimization method to handle epistemic and aleatory uncertainties. Yang et al. [10] proposed a reliability analysis method that integrates the Markov chain Monte Carlo method with random forest algorithms. Monjardin-Quevedo et al. [11] evaluated the seismic reliability of steel special moment frames (SMFs) with deep columns using the first-order reliability method, the response surface method, and an advanced probabilistic scheme, verifying the efficiency and accuracy of the approach against conventional Monte Carlo simulation. Guzman-Acevedo et al. [12] assessed the structural reliability of the Usumacinta Bridge in Mexico with a probabilistic method in which Sentinel-1 InSAR time series of semi-static displacements were used to obtain the probability density functions required for the reliability evaluation.
Common structural reliability analysis methods include digital simulation methods [13,14], approximate analytical methods [15,16], and surrogate model methods. The Monte Carlo method [17] is a relatively straightforward digital simulation method; however, the large number of samples required to obtain an accurate solution makes it inefficient. Another digital simulation method, importance sampling [18], uses the design point of the limit state equation as its sampling center, which improves sampling efficiency to a certain extent; however, this design point must be determined in advance, which limits the applicability of the method to complex structures. Approximate analytical methods, such as the first-order second-moment method [19], require minimal computational effort but yield large relative errors; consequently, they are only suitable for reliability problems with an explicit limit state equation. In view of the nonlinear implicit characteristics of complex mechanical structures, surrogate modeling methods such as the Kriging algorithm [20], neural network algorithms [21], and support vector machine algorithms [22] are frequently used to assess the reliability of these structures. However, these algorithms are simple extensions of a single traditional classification algorithm to specific data and inevitably inherit the inherent shortcomings of the original algorithm [23]. At the same time, they rely on relatively idealized assumptions about probability distributions and data types when dealing with data uncertainty, which hampers their widespread applicability in practical projects [24].
Traditional reliability analysis methods suffer from accuracy problems, large errors, and low computational efficiency, which makes them difficult to apply effectively in practical engineering. To address these challenges, this paper proposes a new reliability analysis method for complex structural systems with multiple failure modes based on an adaptive AdaBoost model, which enhances both the accuracy and the efficiency of reliability assessment for complex structures. Adaptive iteration continuously optimizes the sampling center through proactive sampling; during this process, the effectiveness of the samples is quantified by the ratio of the number of samples falling into the failure region to the total number of samples. The AdaBoost algorithm is a strong ensemble learning method that produces low-bias predictions and is not prone to overfitting during training [25]. The method uses a training dataset with different weights to construct different weak base classifiers. Each data sample is assigned a weight that indicates its importance as a training sample, and all samples have the same weight in the first iteration. In subsequent iterations, the weights of misclassified samples are increased (or, equivalently, the weights of correctly classified samples are proportionately decreased), causing the new classifier to pay increasing attention to misclassified samples that tend to gather near the classification boundary, thereby ultimately reducing the error rate [26,27,28]. Many scholars have carried out in-depth research on the AdaBoost algorithm, evaluating its application and proposing practical improvements [29,30,31,32,33,34]. For example, Liu et al. [31] presented an approach to naval gun hydraulic system fault diagnosis based on a BP-AdaBoost model. Zhou et al. [32] proposed a polynomial chaos expansion surrogate modeling method utilizing AdaBoost for uncertainty quantification. Luo [33] integrated an enhanced AdaBoost algorithm with a backpropagation neural network model, refined the model with an evolutionary optimization algorithm, and proposed a mechanical structure reliability calculation method grounded in this framework. Du et al. [34] proposed a real BP-AdaBoost algorithm based on weighted information entropy to address the low prediction accuracy and poor reliability of single-neural-network software reliability models. Meng et al. [35,36,37] proposed a hybrid adaptive Kriging model and a water-cycle algorithm based on reliability assessment learning and optimization strategies, and applied them to an offshore wind turbine monopole and offshore wind power towers. Asgarkhani et al. [38] proposed machine learning (ML) algorithms that provide prediction models for the seismic response, seismic performance curves, and earthquake failure probability curves of buckling-restrained braced frames (BRBFs). Kazemi et al. [39] addressed the seismic probability and risk assessment of reinforced concrete shear walls (RCSWs) by introducing a stacked machine learning (stacked ML) model based on Bayesian optimization (BO), genetic algorithm (GA), particle swarm optimization (PSO), and gradient-based optimization (GBO) algorithms; the resulting IDA (MIDA) curves and seismic probability curves exhibit good curve-fitting ability.
The remainder of this paper is organized as follows. Section 2 explores the characteristics of complex structures under multiple failure modes, the adaptive sample selection methodology, and the principle underlying the iterative AdaBoost algorithm, and then outlines a procedure for assessing the reliability of mechanical structures based on the AdaBoost surrogate model. Section 3 validates the effectiveness and precision of the AdaBoost surrogate model in evaluating multi-failure mode complex structures through three illustrative examples and discusses the computational results. Section 4 summarizes the advantages and limitations of the AdaBoost surrogate model and establishes the applicable scope of the method.

2. Reliability Analysis Based on the AdaBoost Surrogate Model

2.1. Reliability Modeling of Complex Structures under Multi-Failure Mode

There are two primary reliability models for complex structures exhibiting multi-failure mode: the series system and the parallel system. Nevertheless, in the realm of practical engineering applications, complex structural systems frequently exhibit multiple failure modes, with the overall failure mode typically arising from the interactions among these failures. Particularly within intricate systems, the interrelationships between failure modes become progressively more complex, as each mode engages in reciprocal interactions with others.
Suppose that a problem contains $m$ failure modes. The corresponding limit state equations are denoted by $g^{(k)}(\mathbf{x}) = 0\ (k = 1, 2, \ldots, m)$, and the corresponding failure domains are denoted by $D_f^{(k)} = \{\mathbf{x} : g^{(k)}(\mathbf{x}) \le 0\}\ (k = 1, 2, \ldots, m)$, where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ represents an $n$-dimensional random variable.
When m failure modes are present in series, the relationship between the system’s failure domain, D f , and the failure domain, D f ( k ) , of each individual mode can be expressed as follows:
$D_f = \bigcup_{k=1}^{m} D_f^{(k)}$
When m failure modes occur simultaneously, it is a parallel situation, and the relationship between the system’s failure domain, D f , and failure domain, D f ( k ) , of each individual mode can be expressed as follows:
$D_f = \bigcap_{k=1}^{m} D_f^{(k)}$
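For illustration, the series and parallel failure-domain definitions above can be expressed as a single system failure indicator. The following Python sketch is a minimal illustration; the function name and the placeholder limit state callables are assumptions for demonstration, not part of the original formulation:

```python
import numpy as np

def system_fails(x, limit_states, mode="series"):
    """Return True if the system fails at input x.

    limit_states: list of callables g_k(x); mode k fails when g_k(x) <= 0.
    mode="series":   the system fails if ANY mode fails (union of failure domains).
    mode="parallel": the system fails only if ALL modes fail (intersection).
    """
    g = np.array([g_k(x) for g_k in limit_states])
    return bool(np.any(g <= 0)) if mode == "series" else bool(np.all(g <= 0))
```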

2.2. Adaptive AdaBoost Algorithm

The adaptive AdaBoost algorithm initially employs an adaptive iteration method to obtain the optimal training sample set, which is as evenly distributed as possible on both sides of the failure curve, fully encompassing the information that describes the failure curve. By using the integration and iteration properties of the AdaBoost algorithm, a high-precision surrogate model for failure discrimination in complex structures under multi-failure modes can be achieved through simple binary classifier iteration (the dichotomy method).

2.2.1. Adaptive Sampling

The idea of adaptive sampling is to determine the optimal design point through the iteration process and pre-sample the sample center in each iteration to obtain an optimal training sample set that is as evenly distributed as possible on both sides of the failure curve and fully contains the information of the failure curve. To swiftly ascertain the optimal design point, this study applied the extended sample variance and adaptive iteration method. The detailed steps of the proposed adaptive sampling method are outlined as follows:
(1)
It is presumed that the variables are mutually independent and follow a normal distribution, $x_i \sim N(\mu_i, \sigma_i)\ (i = 1, 2, \ldots, n)$.
(2)
Define the expansion coefficient $f$, and let $x_i \sim N(\mu_i, f\sigma_i)\ (i = 1, 2, \ldots, n)$.
(3)
Employ Latin hypercube sampling to generate 200 sample points, substitute them into the system model for failure discrimination, and calculate the failure probability $\eta_j\ (j = 1, 2, \ldots, m)$. Among the failure points, select the point $\bar{x}^{(j)}$ with the largest joint probability density; this point is taken as the sampling center for subsequent sampling.
(4)
Define a sampling efficiency index, $\varepsilon_j = |0.5 - \eta_j|$. The smaller the value of $\varepsilon_j$, the closer the sampling center is to the optimal design point and the closer the failure probability is to 50%.
(5)
When $\varepsilon_j \le 0.1$, the loop ends. The sampling center at this point is taken as the new sampling center, and resampling is conducted using $x_i \sim N(\mu_i, f\sigma_i)\ (i = 1, 2, \ldots, n)$.
The detailed flowchart illustrating the expanded sample variance and the adaptive iteration method is presented in Figure 1.
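The procedure can also be summarized in code. The sketch below is schematic, assuming independent normal inputs as in step (1); the expansion coefficient f, the 200-point Latin hypercube sample per iteration, and the 0.1 threshold follow the text, while the function names (adaptive_center, g_system) and the maximum iteration count are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal, qmc

def adaptive_center(mu, sigma, g_system, f=2.0, n_per_iter=200, tol=0.1, max_iter=50):
    """Schematic adaptive search for the optimal sampling center (steps 1-5).

    mu, sigma : means and standard deviations of the independent normal inputs
    g_system  : callable returning True if a sample lies in the system failure domain
    f         : expansion coefficient applied to the standard deviations
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    center = mu.copy()
    joint_pdf = multivariate_normal(mean=mu, cov=np.diag(sigma**2))
    for _ in range(max_iter):
        # Latin hypercube sample around the current center with expanded variance
        u = qmc.LatinHypercube(d=len(mu)).random(n_per_iter)
        x = norm.ppf(u, loc=center, scale=f * sigma)
        fails = np.array([g_system(xi) for xi in x])
        eta = fails.mean()                      # failure fraction of this batch
        if abs(0.5 - eta) <= tol:               # sampling efficiency index epsilon_j
            return center                       # close enough to the optimal design point
        if fails.any():
            # new center: failure point with the largest joint pdf of the original inputs
            center = x[fails][np.argmax(joint_pdf.pdf(x[fails]))]
    return center
```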

2.2.2. AdaBoost Algorithm

First, the AdaBoost algorithm builds the first simple classifier, $h_1$, based on the original data distribution. Then, through boosting, $T$ interdependent classifiers are produced in sequence. Finally, these different classifiers are combined to form a stronger final classifier (the strong classifier).
In theory, as long as each weak classifier performs better than random guessing (i.e., its classification accuracy exceeds 0.5), the error rate of the strong classifier converges to zero as the number of weak classifiers approaches infinity. Within the AdaBoost framework, different training sets are generated by adjusting the weight of each sample. Initially, all samples carry the same weight, and the basic classifier $h_1(x)$ is trained under this sample distribution. The weights of the samples misclassified by $h_1(x)$ are then increased, while the weights of correctly classified samples are decreased, so as to highlight the incorrect samples. Meanwhile, $h_1(x)$ is assigned a weight according to its error, representing its importance: the lower the error rate, the greater the corresponding weight. The basic classifier is trained again using the new sample weights to obtain the basic classifier $h_2(x)$ and its weight. Proceeding in this way, after $T$ rounds, $T$ fundamental classifiers and their associated weights are obtained. Finally, the $T$ basic classifiers are combined according to their weights to obtain the final strong classifier.
Let us assume that the training set is denoted as $T = \{E_{11}, E_{12}, G_{12}, G_{13}, X_c, X_t, Y_c, Y_t, S_{12}, S_{13}, d\}$, and that the weight distribution of the training set for the $k$-th weak learner is $D(k) = (w_{k1}, w_{k2}, \ldots, w_{km})$, with $w_{1i} = \frac{1}{m}$, $i = 1, 2, \ldots, m$.
Let us take a binary classification problem as an example. If the output is $\{-1, +1\}$, the weighted error rate of the $k$-th weak classifier, $G_k(x)$, on the training set is $e_k = P\left(G_k(x_i) \neq y_i\right) = \sum_{i=1}^{m} w_{ki} I\left(G_k(x_i) \neq y_i\right)$.
The weight coefficient of the $k$-th weak classifier, $G_k(x)$, is $\alpha_k = \frac{1}{2}\log\frac{1 - e_k}{e_k}$. An increase in the classification error rate $e_k$ is associated with a decrease in the corresponding weight coefficient $\alpha_k$ of the classifier. Assuming that the sample weight distribution for the $k$-th weak classifier is $D(k) = (w_{k1}, w_{k2}, \ldots, w_{km})$, the sample weight distribution for the $(k+1)$-th weak classifier is $w_{k+1,i} = \frac{w_{ki}}{Z_k}\exp\left(-\alpha_k y_i G_k(x_i)\right)$, where $Z_k$ is a normalization factor, $Z_k = \sum_{i=1}^{m} w_{ki}\exp\left(-\alpha_k y_i G_k(x_i)\right)$. It is apparent from the formula for $w_{k+1,i}$ that if the $i$-th sample is misclassified, $y_i G_k(x_i) < 0$, which increases the weight of that sample for the $(k+1)$-th weak classifier; if the classification is correct, its weight for the $(k+1)$-th weak classifier decreases.
The weighted voting method is adopted for AdaBoost classification, and the final strong classifier can be expressed as $f(x) = \mathrm{sign}\left(\sum_{k=1}^{K} \alpha_k G_k(x)\right)$.
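The update rules above translate directly into code. The following sketch is one possible implementation, assuming labels in {-1, +1} and, as an illustrative choice not prescribed by the paper, depth-one decision trees (stumps) from scikit-learn as the weak classifiers:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, n_rounds=200):
    """Train an AdaBoost ensemble for labels y in {-1, +1}."""
    m = len(y)
    w = np.full(m, 1.0 / m)                          # initial weights w_1i = 1/m
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        e_k = np.clip(np.sum(w * (pred != y)), 1e-12, 1 - 1e-12)   # weighted error
        alpha = 0.5 * np.log((1.0 - e_k) / e_k)      # classifier weight alpha_k
        w = w * np.exp(-alpha * y * pred)            # raise weights of misclassified samples
        w /= w.sum()                                 # normalization factor Z_k
        learners.append(stump)
        alphas.append(alpha)
    return learners, np.array(alphas)

def adaboost_predict(learners, alphas, X):
    """Strong classifier: sign of the alpha-weighted vote of the weak classifiers."""
    votes = sum(a * h.predict(X) for a, h in zip(alphas, learners))
    return np.sign(votes)
```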

2.3. Reliability Analysis Model Based on the Adaptive AdaBoost Algorithm

The basic idea of a reliability analysis based on the adaptive AdaBoost algorithm is to employ the adaptive iterative method of Section 2.2.1 to find the optimal training sample set. A high-precision surrogate model for failure discrimination of complex structures under multiple failure modes is then obtained through simple binary classifier iteration, exploiting the ensemble and iteration characteristics of the AdaBoost algorithm discussed in Section 2.2.2. Finally, the Monte Carlo method is used to determine the failure probability. The detailed analytical procedure is as follows:
(1)
Use the adaptive method to find the optimal sample center.
(2)
According to the sample center, use Latin hypercube sampling to generate training samples.
(3)
Initialize the weight distribution of the training data, assigning the same weight to each training sample: $D_1 = (w_{11}, w_{12}, \ldots, w_{1i}, \ldots, w_{1N})$, $w_{1i} = \frac{1}{N}$, $i = 1, 2, \ldots, N$.
(4)
In the $m$-th iteration, obtain $G_m(x)$, the classifier of the current round; $e_m$, the corresponding classification error; and $\alpha_m$, its combination coefficient, as detailed below:
Obtain the basic classifier $G_m(x): x_i \to \{-1, +1\}$ by learning the training dataset with weight distribution $D_m$.
Calculate the classification error rate of e m for the training dataset:
$e_m = P\left(G_m(x_i) \neq y_i\right) = \sum_{i=1}^{N} w_{mi} I\left(G_m(x_i) \neq y_i\right)$
Calculate the coefficient for G m ( x ) , where α m represents the importance of G m ( x ) in the final classifier:
$\alpha_m = \frac{1}{2} \log \frac{1 - e_m}{e_m}$
It is evident from the preceding equation that when $e_m \le \frac{1}{2}$, $\alpha_m \ge 0$, and that $\alpha_m$ increases as $e_m$ decreases, thereby indicating that basic classifiers with lower classification error rates have a greater impact on the final strong classifier.
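For instance, a weak classifier with $e_m = 0.1$ receives the weight $\alpha_m = \frac{1}{2}\log\frac{0.9}{0.1} \approx 1.10$, whereas one with $e_m = 0.4$ receives only $\alpha_m = \frac{1}{2}\log\frac{0.6}{0.4} \approx 0.20$, so the more accurate classifier contributes far more to the weighted vote.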
Update the weight distribution of the training dataset to obtain a new weight distribution for the subsequent iteration:
$D_{m+1} = (w_{m+1,1}, w_{m+1,2}, \ldots, w_{m+1,i}, \ldots, w_{m+1,N})$
$w_{m+1,i} = \frac{w_{mi}}{Z_m} \exp\left(-\alpha_m y_i G_m(x_i)\right), \quad i = 1, 2, \ldots, N$
where $Z_m = \sum_{i=1}^{N} w_{mi} \exp\left(-\alpha_m y_i G_m(x_i)\right)$.
As a result, the weights of samples misclassified by the fundamental classifier, G m ( x ) , are augmented, whereas the weights of accurately classified samples are diminished. In this manner, the AdaBoost algorithm can concentrate on samples that are more challenging to differentiate.
(5)
Integrate the weak classifiers:
$f(x) = \sum_{m=1}^{M} \alpha_m G_m(x)$
The ultimate strong classifier is denoted by
$G(x) = \mathrm{sign}\left(f(x)\right) = \mathrm{sign}\left(\sum_{m=1}^{M} \alpha_m G_m(x)\right)$
(6)
Use the Monte Carlo method to calculate the failure probability based on the final strong classifier.
Figure 2 illustrates the flowchart of steps 1–6 for the reliability analysis of complex structures utilizing the adaptive AdaBoost algorithm proposed in this paper.
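Putting steps (1) through (6) together, a schematic driver could look like the following; it reuses the adaptive_center, adaboost_train, and adaboost_predict sketches given earlier (illustrative names), and the training and Monte Carlo sample sizes are assumptions rather than values fixed by the method:

```python
import numpy as np
from scipy.stats import norm, qmc

def estimate_failure_probability(mu, sigma, g_system, f=2.0, n_train=500, n_mc=10**6):
    """Schematic reliability analysis with the adaptive AdaBoost surrogate (steps 1-6)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    # (1) adaptively locate the optimal sampling center
    center = adaptive_center(mu, sigma, g_system, f=f)
    # (2) Latin hypercube training samples around that center
    u = qmc.LatinHypercube(d=len(mu)).random(n_train)
    X_train = norm.ppf(u, loc=center, scale=f * sigma)
    # label with the true (expensive) model: -1 = failed, +1 = safe
    y_train = np.where([g_system(x) for x in X_train], -1, 1)
    # (3)-(5) train the AdaBoost surrogate on the labelled samples
    learners, alphas = adaboost_train(X_train, y_train)
    # (6) Monte Carlo on the cheap surrogate under the original input distributions
    X_mc = np.random.normal(mu, sigma, size=(n_mc, len(mu)))
    return float(np.mean(adaboost_predict(learners, alphas, X_mc) == -1))
```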

3. Examples

To verify the accuracy and efficiency of the proposed adaptive AdaBoost surrogate model, this section provides examples of mechanical structure reliability analysis under different failure functions, with the iteration number set to 200.

3.1. Parallel System

Consider a parallel system distinguished by two distinct failure modes, with the corresponding function articulated as follows.
$y_1 = x_1^2 - x_2 + 2$
$y_2 = x_1^2 + x_2 - 6$
where $x_1$ and $x_2$ denote normally distributed random variables, specifically $x_1 \sim N(4, 1^2)$ and $x_2 \sim N(3, 0.5^2)$, respectively. The failure conditions of the system are defined by $y_1 < 0$ and $y_2 < 0$.
The proposed adaptive AdaBoost surrogate model is employed to evaluate the failure probability of the parallel system. The variation in error with respect to the number of iterations during the training process (resubstitution loss) is shown in Figure 3.
It can be discerned from Figure 3 that the adaptive AdaBoost surrogate model effectively simulates the failure surface of parallel systems. As the number of iterations escalates, the error of the agent model steadily diminishes and ultimately converges to zero.
The trained model is used to evaluate the failure probability, which is then compared with the result obtained using the conventional Monte Carlo method. The results are presented in Table 1: the failure probability derived from the adaptive AdaBoost method is 3.7 × 10^−5, with a relative error of 7.5% compared to the conventional Monte Carlo method. This underscores the effectiveness of the proposed adaptive AdaBoost model. It is noteworthy that the conventional Monte Carlo method required 10^6 samples, whereas the adaptive AdaBoost surrogate model required only 500 samples, leading to a substantial reduction in computation time and a marked improvement in computational efficiency.

3.2. Series System

Consider a series structure system comprising two failure modes, with the corresponding function denoted as follows.
$y_1 = 2x_1 + 2x_2 - 4.5x_4 - 100$
$y_2 = x_1 + x_2 + 2x_3 - 4.5x_4 + 380$
where $x_1$, $x_2$, $x_3$, and $x_4$ are normal random variables, specifically $x_1 \sim N(200, 30^2)$, $x_2 \sim N(200, 30^2)$, $x_3 \sim N(200, 30^2)$, and $x_4 \sim N(80, 15^2)$, respectively. The failure conditions of the system are defined by $y_1 < 0$ or $y_2 < 0$.
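As a rough cross-check on this example (using the limit state expressions as reconstructed above, so the exact figure should be read with that caveat), a brute-force Monte Carlo estimate of the series-system failure probability can be written as:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10**6
x1 = rng.normal(200.0, 30.0, n)
x2 = rng.normal(200.0, 30.0, n)
x3 = rng.normal(200.0, 30.0, n)
x4 = rng.normal(80.0, 15.0, n)

y1 = 2 * x1 + 2 * x2 - 4.5 * x4 - 100
y2 = x1 + x2 + 2 * x3 - 4.5 * x4 + 380

# series system: failure if either mode fails
pf = np.mean((y1 < 0) | (y2 < 0))
print(pf)   # Table 2 reports 9.1e-4 for the 10^6-sample Monte Carlo reference
```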
The adaptive AdaBoost surrogate model is employed to assess the failure probability of the series system. The variation in error with respect to the number of iterations throughout the training process (resubstitution loss) is illustrated in Figure 4.
As depicted in Figure 4, it is evident that with an increasing number of iterations, the error of the adaptive AdaBoost surrogate model gradually decreases and converges to zero, indicating its efficacy in failure discrimination for the series system.
The trained model is used to evaluate the failure probability, and the results are compared with those derived from the conventional Monte Carlo method. The results are presented in Table 2; the relative error between the result obtained using the adaptive AdaBoost surrogate model and that obtained using the conventional Monte Carlo method is 5.49%, demonstrating the precision and efficacy of the proposed method. In contrast to the traditional Monte Carlo method, which required 10^6 samples, the adaptive AdaBoost surrogate model required only 300 samples, so the sampling efficiency is significantly improved and the computing time greatly shortened.

3.3. Engineering Example

Figure 5 shows an I-beam system with eight random input variables, $X = \{d, b, t_w, t_f, L, a, P, S\}$ [40]. Their distributions are presented in Table 3. The response function of the I-beam system is given by
$Y = g(X) = S - \sigma_{\max} = S - \dfrac{P a (L - a) d}{2 L I}$
where $S$ is the strength, $\sigma_{\max}$ is the maximum stress, and $I = \dfrac{b d^3 - (b - t_w)(d - 2 t_f)^3}{12}$.
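In code form, the response function and moment of inertia above can be evaluated as follows (a minimal sketch; the input values used in the example call are the mean values listed in Table 3):

```python
def i_beam_limit_state(d, b, t_w, t_f, L, a, P, S):
    """g(X) = S - sigma_max for the I-beam; failure corresponds to g(X) < 0."""
    I = (b * d**3 - (b - t_w) * (d - 2 * t_f)**3) / 12.0   # second moment of area
    sigma_max = P * a * (L - a) * d / (2.0 * L * I)        # maximum bending stress
    return S - sigma_max

# evaluation at the mean values of Table 3
g_mean = i_beam_limit_state(d=2.3, b=2.3, t_w=0.16, t_f=0.26,
                            L=120.0, a=72.0, P=6070.0, S=170000.0)
print(g_mean)   # positive at the mean point, i.e., the mean design is safe
```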
The adaptive AdaBoost surrogate model is employed to obtain the failure probability of the I-beam system. The variation in the error with respect to the number of iterations during the training process (resubstitution loss) is shown in Figure 6.
As illustrated in Figure 6, with an increasing number of iterations, the error of the adaptive AdaBoost surrogate model gradually decreases and converges to zero; this suggests that the surrogate model possesses a robust fitting capability.
The trained model is employed to ascertain the failure probability, which is then compared with that determined using the conventional Monte Carlo method. The results are presented in Table 4; the relative error between the adaptive AdaBoost surrogate model and the conventional Monte Carlo method is 1.59%, satisfying the accuracy requirements and demonstrating the effectiveness of the adaptive AdaBoost surrogate model. In comparison to the traditional Monte Carlo method, which required 10^6 samples, the adaptive AdaBoost surrogate model required only 70 samples, indicating considerably improved sampling efficiency and reduced calculation time.

3.4. Analysis and Discussion

The examples above illustrate that, as the number of iterations increases, the training error of the adaptive AdaBoost surrogate model converges to a minimum for both parallel and series systems. This indicates that the AdaBoost surrogate model exhibits excellent fitting performance and versatility for both system types. Furthermore, the failure probabilities obtained with the adaptive AdaBoost surrogate model show small relative errors with respect to Monte Carlo simulation, while the computational efficiency is significantly higher than that of the traditional method. Therefore, the structural reliability computed by the proposed method is accurate, its calculation efficiency is significantly enhanced, and it is suitable for real engineering structures.

4. Conclusions

An adaptive AdaBoost algorithm is proposed to evaluate the reliability of multi-failure mode structures. The method is particularly effective for complex mechanical structures whose limit state functions are difficult to express explicitly. The optimal training sample set is obtained by the adaptive method, and the AdaBoost algorithm is then trained on this refined sample set. The resulting surrogate model not only meets the accuracy requirements but also reliably discriminates whether the target structure fails under the given parameters.
(1)
Compared with the traditional Monte Carlo method, this model significantly improves the computational efficiency and can accurately calculate the failure probability in a shorter time.
(2)
The method has good universality. Compared with general surrogate models, the adaptive AdaBoost algorithm proposed in this paper offers strong applicability, low dependence on the operator's engineering experience, and high precision.
(3)
However, the studies conducted to date have some limitations. The proposed adaptive AdaBoost algorithm has not yet been applied to the multi-failure mode reliability analysis of more complex structures, and increasingly complex research scenarios will place higher requirements and challenges on the proposed algorithm.
Nevertheless, the method proposed in this paper is of great significance for the exploration of complex mechanical structures, helping to improve the application level of mechanical structures and promote their wide application.

Author Contributions

Conceptualization, F.Z. and Z.Q.; Methodology, F.Z., Y.T., M.W. and X.X.; Software, Z.Q.; Formal analysis, F.Z., Z.Q., Y.T., M.W. and X.X.; Writing—original draft, Z.Q., Y.T. and M.W.; Funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Fundamental Research Funds for the Central Universities (NWPU-310202006zy007).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yazdi, M.; Golilarz, N.A.; Nedjati, A.; Adesina, K.A. An improved lasso regression model for evaluating the efficiency of intervention actions in a system reliability analysis. Neural Comput. Appl. 2021, 33, 7913–7928. [Google Scholar] [CrossRef]
  2. Yan, W.; Deng, L.; Zhang, F.; Li, T.; Li, S. Probabilistic machine learning approach to bridge fatigue failure analysis due to vehicular overloading. Eng. Struct. 2019, 193, 91–99. [Google Scholar] [CrossRef]
  3. Yang, J.; Chen, C.; Ma, A. A Fast Product of Conditional Reduction Method for System Failure Probability Sensitivity Evaluation. CMES-Comput. Model. Eng. Sci. 2020, 125, 1159–1171. [Google Scholar] [CrossRef]
  4. Zhu, S.-P.; Keshtegar, B.; Trung, N.-T.; Yaseen, Z.M.; Bui, D.T. Reliability-based structural design optimization: Hybridized conjugate mean value approach. Eng. Comput. 2021, 37, 381–394. [Google Scholar] [CrossRef]
  5. Xu, J.-G.; Cai, Z.-K.; Feng, D.-C. Life-cycle seismic performance assessment of aging RC bridges considering multi-failure modes of bridge columns. Eng. Struct. 2021, 244, 112818. [Google Scholar] [CrossRef]
  6. Zhang, F.; Xu, X.; Cheng, L.; Tan, S.; Wang, W.; Wu, M. Mechanism reliability and sensitivity analysis method using truncated and correlated normal variables. Saf. Sci. 2020, 125, 104615. [Google Scholar] [CrossRef]
  7. Marozaua, I.; Auchlina, M.; Pejchal, V.; Souchon, F.; Vogel, D.; Lahti, M.; Saillen, N.; Sereda, O. Reliability assessment and failure mode analysis of MEMS accelerometers for space applications. Microelectron. Reliab. 2018, 88–90, 846–854. [Google Scholar] [CrossRef]
  8. Wang, R.; Liu, X.; Hu, D.; Mao, J. Reliability assessment for system-level turbine disc structure using LRPIM-based surrogate model considering multi-failure modes correlation. Aerosp. Sci. Technol. 2019, 95, 105422. [Google Scholar] [CrossRef]
  9. Meng, Z.; Pang, Y.; Pu, Y.; Wang, X. New hybrid reliability-based topology optimization method combining fuzzy and probabilistic models for handling epistemic and aleatory uncertainties. Comput. Methods Appl. Mech. Eng. 2020, 363, 112886. [Google Scholar] [CrossRef]
  10. Yang, F.; Ren, J. Reliability Analysis Based on Optimization Random Forest Model and MCMC. CMES-Comput. Model. Eng. Sci. 2020, 125, 801–814. [Google Scholar] [CrossRef]
  11. Monjardin-Quevedo, J.G.; Reyes-Salazar, A.; Tolentino, D.; Gaxiola-Camacho, O.D.; Vazquez-Becerra, G.E.; Gaxiola-Camacho, J.R. Seismic reliability of steel SMFs with deep columns based on PBSD philosophy. Structures 2022, 42, 1–15. [Google Scholar] [CrossRef]
  12. Guzman-Acevedo, G.M.; Quintana-Rodriguez, J.A.; Gaxiola-Camacho, J.R.; Vazquez-Becerra, G.E.; Torres-Moreno, V.; Monjardin-Quevedo, J.G. The Structural Reliability of the Usumacinta Bridge Using InSAR Time Series of Semi-Static Displacements. Infrastructures 2023, 8, 173. [Google Scholar] [CrossRef]
  13. Wang, Z. Markov chain Monte Carlo sampling using a reservoir method. Comput. Stat. Data Anal. 2019, 139, 64–74. [Google Scholar] [CrossRef]
  14. Liu, X.-X.; Elishakoff, I. A combined Importance Sampling and active learning Kriging reliability method for small failure probability with random and correlated interval variables. Struct. Saf. 2020, 82, 101875. [Google Scholar] [CrossRef]
  15. Lu, H.; Cao, S.; Zhu, Z.; Zhang, Y. An improved high order moment-based saddlepoint approximation method for reliability analysis. Appl. Math. Model. 2020, 82, 836–847. [Google Scholar] [CrossRef]
  16. Meng, Z.; Li, G.; Yang, D.; Zhan, L. A new directional stability transformation method of chaos control for first order reliability analysis. Struct. Multidiscip. Optim. 2017, 55, 601–612. [Google Scholar] [CrossRef]
  17. Chen, H.-N.; Mao, Z.-L. Study on the Failure Probability of Occupant Evacuation with the Method of Monte Carlo Sampling. Procedia Eng. 2018, 211, 55–62. [Google Scholar] [CrossRef]
  18. Xiao, S.; Oladyshkin, S.; Nowak, W. Reliability analysis with stratified importance sampling based on adaptive Kriging. Reliab. Eng. Syst. Saf. 2020, 197, 106852. [Google Scholar] [CrossRef]
  19. Yang, Z.; Ching, J. A novel reliability-based design method based on quantile-based first-order second-moment. Appl. Math. Model. 2020, 88, 461–473. [Google Scholar] [CrossRef]
  20. Zhang, X.; Pandey, M.D.; Yu, R.; Wu, Z. HALK: A hybrid active-learning Kriging approach and its applications for structural reliability analysis. Eng. Comput. 2022, 38, 3039–3055. [Google Scholar] [CrossRef]
  21. Nezhad, H.B.; Miri, M.; Ghasemi, M.R. New neural network-based response surface method for reliability analysis of structures. Neural Comput. Appl. 2019, 31, 777–791. [Google Scholar] [CrossRef]
  22. Ni, T.; Zhai, J. A matrix-free smoothing algorithm for large-scale support vector machines. Inf. Sci. 2016, 358–359, 29–43. [Google Scholar] [CrossRef]
  23. Kotsiantis, S.B. Supervised Machine Learning: A Review of Classification Techniques. Informatica 2007, 31, 249–268. [Google Scholar]
  24. Liu, H.; Zhang, X.; Zhang, X. PwAdaBoost: Possible world based AdaBoost algorithm for classifying uncertain data. Knowl.-Based Syst. 2019, 186, 104930. [Google Scholar] [CrossRef]
  25. Huang, X.; Li, Z.; Jin, Y.; Zhang, W. Fair-AdaBoost: Extending AdaBoost method to achieve fair classification. Expert Syst. Appl. 2022, 202, 117240. [Google Scholar] [CrossRef]
  26. Ravikumar, S.; Sekar, S.; Jeyalakshmi, S.; Narayanan, S.; Vivekanandan, G.; Sundarakannan, N. An optimized AdaBoost Multi-class support vector machine for driver behavior monitoring in the advanced driver assistance systems. Expert Syst. Appl. 2023, 212, 118618. [Google Scholar]
  27. Jiang, H.; Zheng, W.; Luo, L.; Dong, Y. A two-stage minimax concave penalty based method in pruned AdaBoost ensemble. Appl. Soft Comput. J. 2019, 83, 105674. [Google Scholar] [CrossRef]
  28. Zhou, Y.; Mazzuchi, T.A.; Sarkani, S. M-AdaBoost-A based ensemble system for network intrusion detection. Expert Syst. Appl. 2020, 162, 113864. [Google Scholar] [CrossRef]
  29. Yu, Q.; Zhou, Y. Traffic safety analysis on mixed traffic flows at signalized intersection based on Haar-AdaBoost algorithm and machine learning. Saf. Sci. 2019, 120, 248–543. [Google Scholar] [CrossRef]
  30. Wang, W.; Sun, D. The improved AdaBoost algorithms for imbalanced data classification. Inf. Sci. 2021, 563, 358–374. [Google Scholar] [CrossRef]
  31. Liu, X.; Hu, Y.; Xu, Z.; Ren, Y.; Gao, T. Fault diagnosis for hydraulic system of naval gun based on BP-AdaBoost model. In Proceedings of the 2017 Second International Conference on Reliability Systems Engineering (ICRSE), Beijing, China, 10–12 July 2017. [Google Scholar]
  32. Zhou, Y.; Lu, Z.; Cheng, K. AdaBoost-based ensemble of polynomial chaos expansion with adaptive sampling. Comput. Methods Appl. Mech. Eng. 2022, 388, 114238. [Google Scholar] [CrossRef]
  33. Luo, P. Reliability analysis of mechanical structure based on improved BP-AdaBoost algorithm. Intern. Combust. Engine Parts 2019, 15, 41–42. [Google Scholar]
  34. Du, R.C.; Hua, J.X.; Zhai, X.Y.; Li, Z.P. Research on Software Reliability Prediction Based on Improved Real AdaBoost. J. Air Force Eng. Univ. (Nat. Sci. Ed.) 2018, 19, 91–96. [Google Scholar]
  35. Meng, D.; Yang, S.; De Jesus, A.M.P.; Fazeres-Ferradosa, T.; Zhu, S.-P. A novel hybrid adaptive Kriging and water cycle algorithm for reliability-based design and optimization strategy: Application in offshore wind turbine monopole. Comput. Methods Appl. Mech. Eng. 2023, 412, 116083. [Google Scholar] [CrossRef]
  36. Meng, D.; Yang, S.; de Jesus, A.M.P.; Zhu, S.-P. A novel Kriging-model-assisted reliability-based multidisciplinary design optimization strategy and its application in the offshore wind turbine tower. Renew. Energy 2023, 203, 407–420. [Google Scholar] [CrossRef]
  37. Yang, S.; Meng, D.; Wang, H.; Yang, C. A novel learning function for adaptive surrogate-model-based reliability evaluation. Philos. Trans. R. Soc. A-Math. Phys. Eng. Sci. 2024, 382, 20220395. [Google Scholar] [CrossRef]
  38. Asgarkhani, N.; Kazemi, F.; Jakubczyk-Gałczyńska, A.; Mohebi, B.; Jankowski, R. Seismic response and performance prediction of steel buckling-restrained braced frames using machine-learning methods. Eng. Appl. Artif. Intell. 2024, 128, 107388. [Google Scholar] [CrossRef]
  39. Kazemi, F.; Asgarkhani, N.; Jankowski, R. Optimization-based stacked machine-learning method for seismic probability and risk assessment of reinforced concrete shear walls. Expert Syst. Appl. 2024, 255 Pt D, 124897. [Google Scholar] [CrossRef]
  40. Li, G.; Lu, Z.; Song, S. A new method to measure the importance of fundamental variables to failure probability. Mech. Eng. 2010, 32, 71–75. [Google Scholar]
Figure 1. Flowchart of expanded sample variance and adaptive iteration method.
Figure 2. Flowchart of the proposed method for reliability analysis of complex structures using the adaptive AdaBoost algorithm.
Figure 3. Variation in the parallel system error with the number of iterations during training.
Figure 4. Variation in series system error with the number of iterations during training.
Figure 5. Schematic diagram of the I-beam system.
Figure 6. Variation in system error with the number of iterations in the training process.
Table 1. Calculated failure probability of the parallel system.

Method | Sampling Frequency | Failure Probability | Relative Error
Monte Carlo method | 10^6 | 4 × 10^−5 | /
Adaptive AdaBoost method | 500 | 3.7 × 10^−5 | 7.5%
Table 2. Calculated failure probability of the series system.

Method | Sampling Frequency | Failure Probability | Relative Error
Monte Carlo method | 10^6 | 9.1 × 10^−4 | /
Adaptive AdaBoost method | 300 | 9.6 × 10^−4 | 5.49%
Table 3. Distribution of random variables of the I-beam system.

Variable | Mean Value | Standard Deviation | Distribution Type
d / in | 2.3 | 1/24 | Normal
b / in | 2.3 | 1/24 | Normal
t_w / in | 0.16 | 1/24 | Normal
t_f / in | 0.26 | 1/24 | Normal
L / in | 120 | 6 | Normal
a / in | 72 | 6 | Normal
P / N | 6070 | 200 | Normal
S / kPa | 170,000 | 4760 | Normal
Table 4. Calculated failure probability of the I-beam system.

Method | Sampling Frequency | Failure Probability | Relative Error
Monte Carlo method | 10^6 | 0.2074 | /
Adaptive AdaBoost method | 70 | 0.2041 | 1.59%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
