Article

An Efficient Reliability Method with Multiple Shape Parameters Based on Radial Basis Function

1 Research Center of Applied Mechanics, School of Electro-Mechanical Engineering, Xidian University, Xi’an 710071, China
2 School of Electro-Mechanical Engineering, Guangdong University of Petrochemical Technology, Maoming 525000, China
3 Shaanxi Key Laboratory of Space Extreme Detection, Xidian University, Xi’an 710071, China
4 Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9689; https://doi.org/10.3390/app12199689
Submission received: 31 August 2022 / Revised: 19 September 2022 / Accepted: 23 September 2022 / Published: 27 September 2022

Abstract
Structural reliability analysis has an inherent contradiction between efficiency and accuracy. A metamodel can significantly reduce the computational cost of reliability analysis by replacing the performance function with a simpler approximation. Therefore, it is crucial to build a metamodel that achieves accurate estimation for reliability analysis with the minimum number of simulations. Aiming at this, an effective adaptive metamodel based on the combination of the radial basis function (RBF) model and Monte Carlo simulation (MCS) is proposed. Different shape parameters are first used to generate the weighted prediction variance, and the search for new training samples is guided by an active learning function that achieves a tradeoff of (1) being close enough to the limit state function (LSF) to have a high reliability sensitivity; (2) keeping enough distance between the existing samples to avoid a clustering problem; and (3) being in the sensitive region to ensure the effectiveness of the information obtained. The performance of the proposed method for nonlinear, non-convex, and high-dimensional reliability analysis is validated by three numerical cases. The results indicate the high efficiency and accuracy of the proposed method.

1. Introduction

The reliability analysis of structures is of great importance for modern equipment and systems [1,2,3,4,5]. Generally, the reliability state of a structure is described by its performance function, i.e., the structure is safe when its performance function is greater than 0, fails when its performance function is less than 0, and is in its limit state when its performance function is equal to 0 [6,7,8,9]. In practical engineering, the structural performance function is frequently very complex [10,11]. To obtain a simpler alternative for the limit state function (LSF), traditional reliability analysis methods carry out a first-order or second-order Taylor expansion at the most probable point (MPP), such as the first-order reliability method (FORM) [6,12] and the second-order reliability method (SORM) [13,14]. Nevertheless, large prediction errors occur when the performance function is highly nonlinear, as the failure probabilities obtained by FORM or SORM are sensitive to the nonlinearity of the structural performance function.
The Monte Carlo simulation (MCS) [15,16] is frequently used in structural reliability analysis, because of its conceptual simplicity and irrelevance to the abovementioned nonlinearity. However, MCS is limited by the simulation (finite element analysis (FEA) in general) time, because a sufficiently large sample size is essential for the accuracy of MCS [16]. The surrogate model is a simpler approximation of the structure’s response. Therefore, MCS with surrogate models can avoid the massive computational cost caused by multiple FE simulations [17]. To further reduce the computational cost and ensure prediction accuracy, MCS is often combined with adaptive models. Different from the general metamodel, the adaptive model finds its training samples iteration by iteration. Generally, the framework of reliability analysis based on MCS with an adaptive model consists of four steps:
(1)
Initialize the design of experiment (DoE);
(2)
Generate the metamodel according to DoE;
(3)
Perform MCS based on the metamodel to estimate the failure probability;
(4)
Check the stopping criterion: if satisfied, end the iteration, and output the failure probability; otherwise, add new samples into DoE through the learning function and go back to (2).
Currently, there are mainly four available surrogate techniques, namely response surface methodology (RSM) [18], artificial neural networks (ANN) [19], support vector machines (SVM) [20], and the Kriging model [21]. Among them, the Kriging model is frequently employed in the above framework, because the prediction variance provided by the Kriging model can represent the local uncertainty of the prediction to some degree. Many Kriging-based adaptive models have been proposed to address the reliability analysis problem. Specifically, Echard et al. proposed AK-MCS + EFF and AK-MCS + U, in which the learning functions U(X) and EFF(X) aim to find sample points that are simultaneously close to the LSF and have a high local uncertainty [21]. Furthermore, by searching the line between the best sample point found by U(X) and its nearest sample point with a different sign in the MC population, Zheng et al. [22] obtained a better new sample than that found by U(X). Besides the prediction’s value and variance, the location of the new sample is also an important factor to consider. To make the sample points prone to being located in the sensitive region, the learning function C(X) proposed by Xiao et al. takes into account the distance from the candidate sample point to the mean point, i.e., sample points with a smaller such distance are more likely to be selected for sequential sampling [23].
Unlike with the Kriging model, structural reliability research based on the RBF model is relatively scarce, because the RBF model does not provide a prediction variance. In fact, the RBF model has many merits, such as applicability to high-dimensional problems, an exponential convergence speed, and a high prediction accuracy [24]. Hence, to compensate for this limitation of the RBF model, multiple predictions by RBF can be performed to obtain a prediction variance. Representative studies are as follows: Shi et al. [25] presented two adaptive models named CVRBF-MCS and ARBFM-MCS for structural reliability analysis, in which different RBF models are launched for multiple predictions by applying different subsets of the DoE or different kernel functions. Furthermore, by conducting multiple predictions, Hong et al. used efficient global optimization (EGO) as the learning function for sequential sampling based on the RBF model and achieved satisfactory results for some benchmark problems [26]. In addition, for sequential sampling based on the RBF model, using the MPP of the current surrogate model as the newly added sample is also an effective approach. Chau et al. [27] used a performance measure approach (PMA) to find the MPP of the RBF model as the new sample point for sequential sampling. To improve efficiency, the adaptive RBF model presented by Wang can find 2m (m is the dimension of the random input) additional sample points in each iteration by using points near the MPP [28]. Differing from Chau’s and Wang’s methods, the RBF-GA proposed by Jing et al. used MCS rather than the reliability index to calculate the structural failure probability, and the potential MPP was found using the genetic algorithm (GA) [29].
However, because of the “most probable” characteristic of the MPP, sequential sampling methods must utilize specialized distance control strategies to avoid the clustering problem, when taking MPP as the new sample for sequential sampling.
Although the above-mentioned RBF-based adaptive models have proven to be promising for structural reliability analysis, they do not consider the effect of shape parameters on the search for new samples for sequential sampling. In fact, predictions using the RBF model with different shape parameters can vary widely [24]. To address this problem, an adaptive model based on a combination of the RBF model and MCS is proposed. For the proposed model (MSRBF-MCS), different shape parameters are used to launch multiple predictions. The effect of the shape parameters on sequential sampling is considered by calculating the weighted variance of the multiple predictions. Furthermore, the 2-norm of the MC population in the standard normal space is used in the proposed learning function to ensure new samples are in the sensitive region (see the sensitive region in Figure 1). Moreover, to mitigate the clustering problem of new samples, a distance control strategy that encourages a larger minimum inter-sample distance is adopted. Finally, the proposed method is verified with benchmark problems and a practical engineering problem with good results.
The remainder of this paper is organized as follows. Section 2 reviews the relevant basic theory of the RBF model. The proposed adaptive metamodel is presented in Section 3. In Section 4, the proposed model is validated by benchmark examples and a practical engineering example. Finally, concluding remarks are given in Section 5.

2. Radial Basis Function Model and K-Fold Cross-Validation: A Reminder

2.1. RBF Model

The RBF model estimates the performance function by a linear combination of radially symmetric functions of the Euclidean distance. Assuming the n-dimensional vector $X = [x_1, x_2, \ldots, x_n]$ is the input of the performance function $g(X)$, the RBF surrogate model can be built from $m$ sample pairs $(X_i, g(X_i)),\ i = 1, 2, \ldots, m$:
$$\hat{g}(X) = \sum_{i=1}^{m} \omega_i\, \phi(r) \quad (1)$$
where $\hat{g}(X)$ is the RBF surrogate model; $r = \|X - X_i\|$ is the Euclidean distance between the input vector $X$ and the sample point $X_i$; and $\phi(\cdot)$ is a kernel function. Frequently used kernel functions with shape parameters [29] are shown in Table 1. $\omega_i\ (i = 1, 2, \ldots, m)$ is the unknown weighting coefficient of $\phi(r)$, which can be obtained by solving the following equations:
$$\begin{cases} \omega_1 \phi(\|X_1 - X_1\|) + \omega_2 \phi(\|X_1 - X_2\|) + \cdots + \omega_m \phi(\|X_1 - X_m\|) = g(X_1) \\ \omega_1 \phi(\|X_2 - X_1\|) + \omega_2 \phi(\|X_2 - X_2\|) + \cdots + \omega_m \phi(\|X_2 - X_m\|) = g(X_2) \\ \quad\vdots \\ \omega_1 \phi(\|X_m - X_1\|) + \omega_2 \phi(\|X_m - X_2\|) + \cdots + \omega_m \phi(\|X_m - X_m\|) = g(X_m) \end{cases} \quad (2)$$
Denote the vector $[g(X_1), g(X_2), \ldots, g(X_m)]$ as $G_m$ and the weighting coefficient vector $[\omega_1, \omega_2, \ldots, \omega_m]$ as $\Omega_m$. Then, Equation (2) can be rewritten as:
$$G_m^{T} = \Phi\, \Omega_m^{T} \quad (3)$$
where:
$$\Phi = \begin{bmatrix} \phi(\|X_1 - X_1\|) & \phi(\|X_1 - X_2\|) & \cdots & \phi(\|X_1 - X_m\|) \\ \phi(\|X_2 - X_1\|) & \phi(\|X_2 - X_2\|) & \cdots & \phi(\|X_2 - X_m\|) \\ \vdots & \vdots & \ddots & \vdots \\ \phi(\|X_m - X_1\|) & \phi(\|X_m - X_2\|) & \cdots & \phi(\|X_m - X_m\|) \end{bmatrix} \quad (4)$$
According to the definition of the radial basis function $\phi(\cdot)$, the inverse matrix $\Phi^{-1}$ must exist. Then, $\Omega_m$ can be calculated by:
$$\Omega_m^{T} = \Phi^{-1} G_m^{T} \quad (5)$$
Finally, substituting Equation (5) into Equation (1), the RBF surrogate model g ^ ( X ) can be calculated.
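As an illustration, Equations (1)–(5) can be sketched in a few lines of NumPy. This is a minimal sketch rather than the authors' code, assuming the multiquadric kernel $\phi(r) = \sqrt{r^2 + c^2}$ from Table 1; the helper names are hypothetical:

```python
import numpy as np

def fit_rbf(X, g, c):
    """Solve Phi @ Omega = G (Equations (3)-(5)) for the RBF weights.

    X : (m, n) training inputs, g : (m,) responses, c : shape parameter.
    """
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise ||Xi - Xj||
    phi = np.sqrt(r**2 + c**2)                                  # multiquadric kernel
    return np.linalg.solve(phi, g)                              # Omega = Phi^{-1} G

def predict_rbf(X_new, X, omega, c):
    """Evaluate g_hat(X) = sum_i omega_i * phi(||X - Xi||) (Equation (1))."""
    r = np.linalg.norm(X_new[:, None, :] - X[None, :, :], axis=-1)
    return np.sqrt(r**2 + c**2) @ omega
```

Because the RBF model is an interpolant, it reproduces the training responses exactly at the sample points, which is the "unbiased at the known sample points" property used later in Section 3.4.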

2.2. K-Fold Cross-Validation

K-fold cross-validation is often employed to measure the global prediction error of a surrogate model [30]. By definition, the m samples are divided equally into k subsets; k − 1 subsets are used as the training set to build the surrogate model $\hat{g}_i(X)$, and the remaining subset is used to validate its prediction accuracy:
$$mse_i = \sum_{i_{sub}=1}^{n_t} \left( g(X_{i_{sub}}) - \hat{g}_i(X_{i_{sub}}) \right)^2 \quad (6)$$
where $n_t$ is the number of samples in the test subset; $(X_{i_{sub}}, g(X_{i_{sub}}))$ is the $i_{sub}$-th sample pair in the subset ($i_{sub} = 1, 2, \ldots, n_t$); and $mse_i$ is the mean squared error (MSE). Then, the global error PRESS of a surrogate model can be expressed as follows:
$$PRESS = \sum_{i=1}^{k} mse_i^{2} \quad (7)$$
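The k-fold PRESS computation can be sketched as below. It is a schematic reading of Equations (6) and (7), assuming the multiquadric kernel; the function name and the random fold assignment are illustrative assumptions:

```python
import numpy as np

def kfold_press(X, g, c, k=4, seed=0):
    """Global error PRESS of an RBF model with shape parameter c (Eqs. (6)-(7))."""
    def phi(r):                                    # multiquadric kernel (assumed)
        return np.sqrt(r**2 + c**2)

    def dist(A, B):                                # pairwise Euclidean distances
        return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

    m = len(X)
    folds = np.array_split(np.random.default_rng(seed).permutation(m), k)
    mses = []
    for test in folds:
        train = np.setdiff1d(np.arange(m), test)
        omega = np.linalg.solve(phi(dist(X[train], X[train])), g[train])
        pred = phi(dist(X[test], X[train])) @ omega
        mses.append(np.sum((g[test] - pred) ** 2))  # mse_i of Equation (6)
    return float(np.sum(np.square(mses)))           # Equation (7)
```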

3. The Proposed Approach: MSRBF-MCS

In Kriging model-based sequential sampling, researchers always aim to find the candidate sample points that are closest to the LSF and have a high local uncertainty at the same time [25]. However, the prediction based on the RBF model does not offer a prediction variance. Hence, it is necessary to launch multiple predictions to obtain a prediction variance [25,26]. To the best of our knowledge, launching multiple predictions by creating multiple RBF models with different shape parameters c has not yet been presented. By setting different values of c, different RBF models are created. The global uncertainty of each RBF model is calculated via cross-validation, and the weighting coefficients of each model are obtained accordingly. Then, the weighted prediction mean, variance, and the distance-related factors are used to formulate the learning function.

3.1. Multiple Predictions Launched by Different Shape Parameters

The value of shape parameter c has a significant influence on the surrogate model [13]. Hence, RBF models with widely varying prediction results can be obtained by setting different values of c:
$$\mathbf{c} = [c_1, c_2, \ldots, c_{n_c}] \quad (8)$$
where $\mathbf{c}$ is the vector of shape parameters and $n_c$ is the number of shape parameters. Assuming that there are $n_{ini}$ sample pairs in the initial DoE, $n_c$ RBF models for multiple predictions can then be generated according to $\mathbf{c}$ and the DoE.
In order to reduce the global prediction error of the surrogate model, the shape parameter c needs to be optimized with the objective of minimum PRESS [25,26,27]. Similarly, the following steps are designed to reduce the global prediction error of the metamodel proposed in this paper. First, a k-fold cross-validation of each RBF model is performed to calculate the PRESSi of the i-th RBF model g ^ i ( X ) . Second, the weight coefficient q i of g ^ i ( X ) is calculated accordingly:
$$q_i = \frac{PRESS_i^{-1}}{\sum_{i=1}^{n_c} PRESS_i^{-1}}, \quad i = 1, 2, \ldots, n_c \quad (9)$$
Apparently, q i satisfies the following:
$$\sum_{i=1}^{n_c} q_i = 1 \quad (10)$$
Finally, the weighted mean and variance can be calculated as follows:
$$mean_w = \sum_{i=1}^{n_c} q_i\, \hat{g}_i(X) \quad (11)$$
$$sigma_w = \sum_{i=1}^{n_c} \left( \hat{g}_i(X) - mean_w \right)^2 \quad (12)$$
$mean_w$ is regarded as the predicted value of the metamodel at sample point $X$:
$$\hat{g}(X) = mean_w \quad (13)$$
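The weighting scheme of Equations (9)–(12) amounts to the following sketch, where `preds` holds one row of predictions per shape parameter (the function name is a hypothetical placeholder):

```python
import numpy as np

def combine_predictions(preds, press_values):
    """PRESS-weighted mean and variance of n_c RBF predictions (Eqs. (9)-(12)).

    preds        : (n_c, N) predictions of the n_c models at N points.
    press_values : (n_c,) PRESS of each model from k-fold cross-validation.
    """
    P = np.asarray(preds, dtype=float)
    q = 1.0 / np.asarray(press_values, dtype=float)   # inverse-PRESS weights
    q /= q.sum()                                      # Equation (9); weights sum to 1
    mean_w = q @ P                                    # Equation (11)
    sigma_w = np.sum((P - mean_w) ** 2, axis=0)       # Equation (12)
    return mean_w, sigma_w, q
```

With the PRESS values of the demonstration in Section 3.4, $PRESS = [2.01, 1.82, 1.64, 1.47]$, this reproduces the reported weights $q \approx [0.21, 0.24, 0.26, 0.29]$.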

3.2. The Proposed Learning Function

In order to improve the efficiency and effectiveness of the adaptive model for acquiring LSF information, the location of its new training sample points should meet the following three requirements:
(1)
Maintain sufficient distance from the existing training sample points (i.e., to alleviate the clustering problem);
(2)
Be in the sensitive region;
(3)
Be near enough to the LSF, i.e., g ^ ( X ) is close enough to zero.
To achieve the above purposes, the sample points are first transformed from the original space (X-space) to the equivalent standard normal space (U-space) with the Nataf transform [31]. This is because U-space offers the following advantages: (1) the Euclidean distance from a sample point to the origin is directly related to the probability of that point, and (2) the effect of order-of-magnitude differences between the different dimensions of a sample on the distance is eliminated. Second, for requirement (1) above, a penalty function based on the distance between the sample points is proposed. Assuming the minimum pairwise distance between the sample points of the DoE in U-space is $d_{DoE}$ and the minimum distance from a candidate sample point to the DoE in U-space is $d_U$, the penalty function can be expressed as follows:
$$d_{penalty}(X) = \frac{d_{DoE}}{d_U}, \quad \begin{cases} d_U = \mathrm{norm}(U - U_{nearest}) \\ d_{DoE} = \min\left( \mathrm{norm}(U_i - U_j) \right), & i, j = 1, 2, \ldots, n_{DoE}\ \text{and}\ i \neq j \end{cases} \quad (14)$$
where $U$ is the Nataf transform of $X$; $U_{nearest}$ is the sample point of the DoE in U-space whose Euclidean distance to $U$ is the minimum; $U_i$ and $U_j$ are the $i$-th and $j$-th sample points of the DoE in U-space; $n_{DoE}$ is the number of sample points in the DoE; and $\mathrm{norm}(\cdot)$ is the two-norm function. The candidate sample point with a smaller learning-function value is considered to be better. Hence, $d_{penalty} > 1$ (i.e., $d_U < d_{DoE}$) acts as a penalty, and $d_{penalty} < 1$ (i.e., $d_U > d_{DoE}$) acts as a reward. By employing $d_{penalty}(X)$, the learning function will tend to select points far away from the DoE.
However, an excessively large $d_U$ may result in new training samples lying in an insensitive region. To avoid losing the validity of the LSF information due to the low probability of a new training sample point, $p(X)$ is employed to satisfy requirement (2) above. Denote $p(X)$ as follows:
$$p(X) = \mathrm{norm}(U) \quad (15)$$
Then, a distance control function d c ( X ) that can adjust the weight between the above requirement (1) and requirement (2) can be expressed as:
$$d_c(X) = p(X)\, d_{penalty}(X)^{2\alpha} \quad (16)$$
where the adjustment coefficient $\alpha$ is employed to adjust the influence of $d_{penalty}(X)$. It is important to underline that the input variables must be normalized; otherwise, when there are order-of-magnitude differences between the different dimensions of $X$, the distance control function $d_c(X)$ may not work as intended.
Finally, synthesizing the three requirements above, an active learning function $L(X)$, designed with reference to U(X) [22], is proposed as follows:
$$L(X) = \frac{|mean_w|}{sigma_w}\, d_c(X) \quad (17)$$
The optimization problem of finding a sample point with a minimum $L(X)$ can be solved by direct MCS, which has the advantage of saving computational cost, because MCS must be performed anyway to predict the failure probability. An alternative is to search the variable space with metaheuristic algorithms such as particle swarm optimization (PSO) [32] or the genetic algorithm [33]. By searching the variable space, it is possible to find better sample points to add than those in the MC population; the disadvantage is that it requires more computational resources. In this paper, we chose direct MCS to solve the optimization problem.
$d_U$, $d_{DoE}$, and $U_{nearest}$ are shown in Figure 1, in which the diamonds represent the sample points of the DoE in U-space, the triangles represent the standardized candidate sample points $U$, and the area surrounded by the dotted line is the sensitive region.
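Equations (14)–(17) can be sketched in vectorized form as below. This is a schematic reading, not the authors' code; the function name is hypothetical, and the small epsilon guards against division by zero are an implementation convenience rather than part of the formulation:

```python
import numpy as np

def learning_L(U_cand, U_doe, mean_w, sigma_w, alpha=0.5):
    """L(X) of Equation (17) for every candidate; smaller values are better.

    U_cand : (N, n) candidate MC points in U-space.
    U_doe  : (n_DoE, n) current DoE in U-space.
    mean_w, sigma_w : (N,) weighted prediction and variance at the candidates.
    """
    # d_U: distance from each candidate to its nearest DoE point
    d_u = np.linalg.norm(U_cand[:, None, :] - U_doe[None, :, :], axis=-1).min(axis=1)
    # d_DoE: minimum pairwise distance within the DoE
    D = np.linalg.norm(U_doe[:, None, :] - U_doe[None, :, :], axis=-1)
    d_doe = D[np.triu_indices(len(U_doe), k=1)].min()
    d_penalty = d_doe / np.maximum(d_u, 1e-12)        # Equation (14)
    p = np.linalg.norm(U_cand, axis=1)                # Equation (15)
    d_c = p * d_penalty ** (2 * alpha)                # Equation (16)
    return np.abs(mean_w) / np.maximum(sigma_w, 1e-12) * d_c  # Equation (17)
```

A candidate close to the LSF (small $|mean_w|$), with high uncertainty (large $sigma_w$), near the origin (small $p$), and far from the DoE (small $d_{penalty}$) therefore receives a small $L(X)$, which is exactly the tradeoff of the three requirements above.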

3.3. Sequential Sampling and the Stopping Criterion

As the sequential sampling proceeds, the learning function keeps adding new samples to the DoE, and the surrogate model is renewed according to the enriched DoE. The iterations stop when the stopping criterion is satisfied. Generally, the relative error of the predicted failure probability is used as the stopping criterion [23,28,34]:
$$\frac{\left| \hat{p}_f^{k} - \hat{p}_f^{k-1} \right|}{\hat{p}_f^{k}} < \varepsilon \quad (18)$$
where $\varepsilon$ is a user-defined constant, and $\hat{p}_f^{k}$ and $\hat{p}_f^{k-1}$ are the failure probabilities predicted in the $k$-th and $(k-1)$-th iterations. $\hat{p}_f$ can be calculated as follows:
$$\hat{p}_f = \frac{1}{n_{mc}} \sum_{i=1}^{n_{mc}} I\left( \hat{g}(X_i) < 0 \right) \quad (19)$$
In Equation (19), $n_{mc}$ is the size of the MC population, and $I(\cdot)$ is an indicator function: when the expression inside is true, $I(\cdot)$ equals 1; otherwise, it equals 0. In addition, when performing MCS for small failure probability events, the coefficient of variation (COV) of MCS needs to meet the following requirement [21]:
$$COV = \sqrt{\frac{1 - \hat{p}_f}{n_{mc}\, \hat{p}_f}}, \quad COV < 5\% \quad (20)$$
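Equations (19) and (20) amount to the following sketch. The limit state used below, $g(x) = x + 1.2816$, is a hypothetical example chosen so that the exact failure probability is $\Phi(-1.2816) \approx 0.1$:

```python
import numpy as np

def mcs_estimate(g_hat, X_mc):
    """Failure probability (Eq. (19)) and its COV (Eq. (20)) from a MC population."""
    p_f = np.mean(g_hat(X_mc) < 0.0)              # fraction of samples with g_hat < 0
    cov = np.sqrt((1.0 - p_f) / (len(X_mc) * p_f))
    return p_f, cov

# Hypothetical 1-D example: failure when x < -1.2816, so p_f ~ 0.1
rng = np.random.default_rng(0)
X_mc = rng.normal(size=(10**5, 1))
p_f, cov = mcs_estimate(lambda x: x[:, 0] + 1.2816, X_mc)
```

With $n_{mc} = 10^5$ and $\hat{p}_f \approx 0.1$, Equation (20) gives $COV \approx 0.95\%$, comfortably below the 5% threshold.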

3.4. Reliability Analysis Procedure Based on MSRBF-MCS

In this section, based on the learning function in Equation (17), MSRBF-MCS is proposed to carry out probabilistic reliability analysis. In order to present the framework of the method clearly, a simple system with sufficient nonlinearity is taken as a demonstration of the main steps of MSRBF-MCS. Assume that the system’s performance function and the distribution of the input variables are as follows:
$$g(x_1, x_2) = \cos(x_1^2) - 2x_2 + 5, \quad x_1, x_2 \sim N(0, 1) \quad (21)$$
Then, the steps of the MSRBF-MCS approach and the solving process of this demo are as follows:
(1)
Generate the initial DoE with Latin hypercube sampling (LHS). In the demo, $n_{ini} = 8$.
(2)
Initialize the number of iterations k = 0, the kernel function type, the shape parameters, the adjustment coefficient $\alpha$, and the stopping criterion constant $\varepsilon$. For this demo, $\mathbf{c} = [0.2, 0.4, 0.6, 0.8]$, $\alpha = 0.5$, $\varepsilon = 1 \times 10^{-5}$, and the kernel function type is “Multiquadric”.
(3)
Build RBF models with different shape parameters for multiple predictions, calculate the weight coefficients $q_i$ by cross-validation, obtain the metamodel’s prediction (i.e., $mean_w$) and its variance (i.e., $sigma_w$), and then formulate the learning function $L(X)$.
(4)
Set k = k + 1 and perform MCS to obtain the predicted failure probability $\hat{p}_f^{k}$ of the $k$-th iteration. For MCS in the demo, the initial population size is set as $n_{mc} = 1 \times 10^{5}$.
(5)
Check whether $k \geq 2$ and the stopping criterion (Equation (18)) are satisfied simultaneously. If satisfied, proceed to the next step; otherwise, pick the sample point with the minimum $L(X)$ from the MC population, add it to the DoE, and go back to step 3.
(6)
Check the COV. If $COV < 5\%$, proceed to the next step; otherwise, enlarge the MC population ($n_{mc} = n_{mc} \times 10$), re-estimate $\hat{p}_f^{k}$, and repeat this step.
(7)
End the iteration and output $\hat{p}_f^{k}$ as the predicted failure probability.
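The seven steps above can be condensed into the following self-contained sketch. It is a simplified reading under stated assumptions, not the authors' implementation: the inputs are taken as standard normal (so the Nataf transform is the identity), the initial DoE uses plain random sampling instead of LHS, the COV enlargement of step 6 is omitted, and a looser tolerance than the demo's $\varepsilon = 10^{-5}$ is used so the sketch stops quickly:

```python
import numpy as np

def msrbf_mcs(g, n_dim, cs=(0.2, 0.4, 0.6, 0.8), alpha=0.5, n_ini=8,
              n_mc=10**5, eps=1e-3, k=4, max_add=40, seed=0):
    """Condensed MSRBF-MCS sketch (steps 1-7) for standard normal inputs."""
    rng = np.random.default_rng(seed)
    dist = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    phi = lambda r, c: np.sqrt(r * r + c * c)          # multiquadric kernel

    X = rng.normal(size=(n_ini, n_dim)) * 1.5          # step 1: stand-in for LHS
    y = np.array([g(x) for x in X])
    U = rng.normal(size=(n_mc, n_dim))                 # MC population in U-space
    pf_prev = None
    for _ in range(max_add):
        R_uu, R_xx = dist(U, X), dist(X, X)
        press, preds = [], []
        for c in cs:                                   # step 3: one model per shape parameter
            omega = np.linalg.solve(phi(R_xx, c), y)
            preds.append(phi(R_uu, c) @ omega)
            mse = []
            for t in np.array_split(rng.permutation(len(X)), k):  # k-fold PRESS
                tr = np.setdiff1d(np.arange(len(X)), t)
                w = np.linalg.solve(phi(dist(X[tr], X[tr]), c), y[tr])
                mse.append(np.sum((y[t] - phi(dist(X[t], X[tr]), c) @ w) ** 2))
            press.append(np.sum(np.square(mse)) + 1e-12)
        q = 1.0 / np.asarray(press); q /= q.sum()      # Eq. (9)
        P = np.asarray(preds)
        mean_w = q @ P                                 # Eq. (11)
        sigma_w = np.sum((P - mean_w) ** 2, axis=0)    # Eq. (12)
        pf = np.mean(mean_w < 0)                       # step 4: Eq. (19)
        if pf_prev is not None and pf > 0 and abs(pf - pf_prev) / pf < eps:
            break                                      # steps 5-7 (COV check omitted)
        pf_prev = pf
        d_u = R_uu.min(axis=1)                         # step 5: learning function, Eq. (17)
        d_doe = R_xx[np.triu_indices(len(X), 1)].min()
        L = (np.abs(mean_w) / np.maximum(sigma_w, 1e-12)
             * np.linalg.norm(U, axis=1)
             * (d_doe / np.maximum(d_u, 1e-12)) ** (2 * alpha))
        best = int(np.argmin(L))                       # enrich the DoE and iterate
        X = np.vstack([X, U[best]])
        y = np.append(y, g(U[best]))
    return pf
```

Note that the distance term itself protects against clustering: a candidate coinciding with a DoE point has $d_U \approx 0$, so its $d_{penalty}$ and hence $L(X)$ blow up and it is never selected.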
All steps of MSRBF-MCS are also shown in Figure 2. For the demonstration, steps 1–3 of the solving process are shown in Figure 3. In Figure 3a, the hollow dots generated by LHS are the initial DoE; the curves c = 0.2, c = 0.4, c = 0.6, and c = 0.8 are the LSFs of the corresponding RBF models; and the curve $mean_w$ is the LSF of their weighted combination. In order to formulate $mean_w$, cross-validation of the RBF models is performed, giving $PRESS = [2.01, 1.82, 1.64, 1.47]$ and $q = [0.21, 0.24, 0.26, 0.29]$. Figure 3b shows the contour map of the weighted prediction variance $sigma_w$. As the predictions of the RBF model are unbiased at the known sample points, $sigma_w$ increases significantly as the distance from the DoE grows. This indicates that, if only $sigma_w$ were examined, the learning function would tend to select sample points far away from the DoE, which may result in newly added sample points not being located in a sensitive region.
The sequential sampling process (steps 3–5) of MSRBF-MCS is shown in Figure 4 and Figure 5. In Figure 4, the triangular dots represent newly added sample points. Denoting $n_{add}$ as the number of newly added sample points, Figure 4a–c shows the LSFs of the corresponding surrogate models for $n_{add} = 4, 8,$ and $12$, respectively. Figure 4d shows the convergence curve of the sequential sampling. The iteration ends when $n_{add} = 12$, at which point the number of performance function calls is $n_{call} = 20$ ($n_{call} = n_{add} + n_{ini}$). From this, the fast convergence of the proposed approach can be seen. The $\hat{p}_f$ by MSRBF-MCS and $\hat{p}_{f\_mc}$ by direct MCS are 0.2340 and 0.2323, respectively. Accordingly, the prediction error $e_{pr}$ can be expressed as follows:
$$e_{pr} = \frac{\left| \hat{p}_{f\_mc} - \hat{p}_f \right|}{\hat{p}_{f\_mc}} \quad (22)$$
In this demonstrated example, $e_{pr} = 0.8\%$. Then, the coefficient of variation should be checked to see whether the size of the MC population is large enough. According to Equation (20), $COV = 0.57\%$ (smaller than 5%). This completes the whole procedure.
Furthermore, the adjustment coefficient $\alpha$ can influence the selection of a new point for the sequential sampling. It is therefore necessary to investigate the effect of $\alpha$ on the performance of the proposed reliability analysis approach. The demonstrated example is solved 10 times by MSRBF-MCS for $\alpha = 0.1, 0.3, 0.5, 0.7,$ and $0.9$. The results of the above simulations are shown in Figure 5. In Figure 5a, the mean of $n_{call}$ is the greatest when $\alpha = 0.9$, while the prediction error $e_{pr} = 0.5\%$ is simultaneously the smallest.

4. Case Study

In this section, three reliability analysis problems, namely a two-dimensional benchmark problem (a series system with four branches), a high-dimensional benchmark problem (the dynamic response of a nonlinear oscillator), and a practical engineering problem (the displacement response of an airfoil), are used to verify the feasibility and effectiveness of the proposed MSRBF-MCS.
In order to fully explore the proposed approach, the performance of the proposed method with different types of kernel functions (Gaussian (G), Multiquadric (M), and Inverse multiquadric (IM)) is first investigated. The kernel function with the best performance is then chosen to examine the influence of the adjustment coefficient α . In addition, to further verify the effectiveness and feasibility of the proposed approach, the results of the presented model are compared with those of the other models proposed in recent literature.

4.1. A Series System with Four Branches

In example 1, a series system with four branches is used to validate the proposed model. The input variables $x_1$ and $x_2$ obey a standard normal distribution, and the performance function of example 1 is as follows:
$$g(x_1, x_2) = \min \begin{cases} 3 + 0.1 (x_1 - x_2)^2 - (x_1 + x_2)/\sqrt{2} \\ 3 + 0.1 (x_1 - x_2)^2 + (x_1 + x_2)/\sqrt{2} \\ (x_1 - x_2) + 6/\sqrt{2} \\ (x_2 - x_1) + 6/\sqrt{2} \end{cases} \quad (23)$$
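For reference, this well-known benchmark and its direct MCS estimate can be written as below; in the literature, the reference failure probability of this system is commonly reported on the order of $4 \times 10^{-3}$:

```python
import numpy as np

def g_series(x1, x2):
    """Four-branch series system performance function of Equation (23)."""
    s = np.sqrt(2.0)
    return np.minimum.reduce([
        3 + 0.1 * (x1 - x2) ** 2 - (x1 + x2) / s,
        3 + 0.1 * (x1 - x2) ** 2 + (x1 + x2) / s,
        (x1 - x2) + 6 / s,
        (x2 - x1) + 6 / s,
    ])

# Direct MCS reference with 10^6 standard normal samples
x = np.random.default_rng(0).normal(size=(10**6, 2))
p_f_mc = np.mean(g_series(x[:, 0], x[:, 1]) < 0)
```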
First, set $\alpha = 1$, $\varepsilon = 10^{-5}$, $n_{mc} = 10^{6}$, and the initial DoE to eight fixed sample points. Then, launch MSRBF-MCS with different types of kernel functions for example 1. Figure 6 shows the results of the above simulation, in which (a), (c), and (e) show the LSFs and the newly added sample points (triangular dots) of MSRBF-MCS + G, MSRBF-MCS + M, and MSRBF-MCS + IM, respectively, and (b), (d), and (f) show their convergence curves. As can be seen in Figure 6, the shape of the LSF by MSRBF-MCS + M is the most similar to that of the true LSF. In contrast, the shapes of the LSFs by MSRBF-MCS + G and MSRBF-MCS + IM are distorted. In particular, the area enclosed by the LSF of MSRBF-MCS + IM is significantly larger than that of the true LSF, which is why the failure probability by MSRBF-MCS + IM is obviously smaller than the failure probability by direct MCS. In addition, compared with the convergence curves of MSRBF-MCS + G and MSRBF-MCS + IM, the convergence curve of MSRBF-MCS + M has smaller fluctuations and a similarly high prediction accuracy. Therefore, it can be concluded that MSRBF-MCS + M has the best performance. The effect of different $\alpha$ on the performance of MSRBF-MCS + M is then examined by setting $\alpha = 0.1, 0.3, 0.5, 0.7,$ and $0.9$, respectively.
In fact, many scholars have used this example to validate their proposed reliability models; among them, the recent metamodels ARBFM-MCS [25], CVRBF-MCS [25], RBF-GA [29], and LIF [35] have achieved good results. A comparison between the proposed metamodel and the above metamodels is listed in Table 2. From Table 2, one can infer that, in this example, there is a significant positive correlation between the prediction accuracy of MSRBF-MCS + M and the value of $\alpha$. The prediction error of the algorithm is consistently greater than 4% when $\alpha$ is 0.1, 0.3, or 0.5, while it drops to within 2% when $\alpha$ is 0.7, 0.9, or 1. This indicates that enhancing the effect of the penalty function $d_{penalty}(X)$ to some degree may help to improve the performance of the method. MSRBF-MCS + M performs best when $\alpha = 1$, with a mean of $n_{call} = 36.3$ (smaller than the results of ARBFM-MCS and CVRBF-MCS, but larger than that of RBF-GA) and a prediction error of $e_{pr} = 1.43\%$ (second only to the result of CVRBF-MCS). Although the method in this paper does not provide the best results from all perspectives, it achieves a good balance between efficiency and prediction accuracy.

4.2. Dynamic Response of a Nonlinear Oscillator

Example 2 is the dynamic response of an oscillator (a high-dimensional nonlinear problem), as shown in Figure 7. Its performance function is as follows:
$$g(m, c_1, c_2, r, F_1, t_1) = 3r - \left| \frac{2F_1}{m\omega_0^2} \sin\left( \frac{\omega_0 t_1}{2} \right) \right|, \quad \omega_0 = \sqrt{\frac{c_1 + c_2}{m}} \quad (24)$$
where $m, c_1, c_2, r, F_1,$ and $t_1$ are six independent normally distributed random variables; their mean values and standard deviations are shown in Table 3.
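Since Table 3 is not reproduced here, the sketch below uses the distribution parameters commonly quoted for this benchmark in the literature; these values are an assumption, not taken from the paper: $m \sim N(1, 0.05)$, $c_1 \sim N(1, 0.1)$, $c_2 \sim N(0.1, 0.01)$, $r \sim N(0.5, 0.05)$, $F_1 \sim N(1, 0.2)$, $t_1 \sim N(1, 0.2)$.

```python
import numpy as np

def g_oscillator(m, c1, c2, r, F1, t1):
    """Performance function of Equation (24) for the nonlinear oscillator."""
    w0 = np.sqrt((c1 + c2) / m)
    return 3 * r - np.abs(2 * F1 / (m * w0**2) * np.sin(w0 * t1 / 2))

# Direct MCS with the assumed literature values (see the note above)
rng = np.random.default_rng(0)
n = 10**6
means = [1.0, 1.0, 0.1, 0.5, 1.0, 1.0]
stds = [0.05, 0.1, 0.01, 0.05, 0.2, 0.2]
samples = [rng.normal(mu, sd, n) for mu, sd in zip(means, stds)]
p_f_mc = np.mean(g_oscillator(*samples) < 0)
```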
The parameters for MSRBF-MCS are set as $\varepsilon = 10^{-5}$, $n_{ini} = 12$, and $n_{mc} = 10^{6}$. Then, example 2 is solved using MSRBF-MCS + G ($\alpha = 1$), MSRBF-MCS + M ($\alpha = 0.1, 0.4, 0.7,$ and $1$), and MSRBF-MCS + IM ($\alpha = 1$). In addition, the results of example 2 from the proposed model are compared with those from some promising models, such as C(X) [23], ARBFM-MCS, CVRBF-MCS [25], and RBF-GA [29]. As one can see from Table 4, MSRBF-MCS + M ($\alpha = 1$) provides the best overall outcome ($n_{call} = 32.2$, $e_{pr} = 0.07\%$) among the presented models: its accuracy is slightly lower than that of CVRBF-MCS (whose $e_{pr} = 0$), but it requires a smaller $n_{call}$. In terms of convergence curves (as shown in Figure 8), all the presented models converge rapidly with an acceptable prediction accuracy.

4.3. Displacement Response of an Airfoil

Example 3 is a practical engineering problem in which the proposed method is used for the reliability analysis of an airfoil. As shown in Figure 9, the airfoil is 1350 mm in length. The components of the airfoil include the main beam (made of carbon fiber), the ribs (made of epoxy resin), the front and rear walls (made of epoxy resin), and the skin (made of glass fiber). The left end of the airfoil is fixed, and a uniform load is applied to its bottom. Its displacement response in the vertical direction is then obtained by finite element analysis, as shown in Figure 10. Generally, when the displacement of the airfoil in the vertical direction is greater than one-tenth of its length (135 mm in this case), the airfoil can be considered to have failed [36]. The normal random variables in example 3 are independent of each other, and their distributions are shown in Table 5. In order to examine the performance of the proposed method under different failure probabilities, the coefficient of variation of the uniform load is set as 0.05, 0.1, and 0.15.
The parameters for MSRBF-MCS are set as $\varepsilon = 10^{-5}$, $n_{ini} = 12$, and $n_{mc} = 1.5 \times 10^{4}$. Then, MSRBF-MCS + M ($\alpha = 0.5$) and RBF-LHS are employed to solve example 3. RBF-LHS is a one-shot surrogate model whose DoE is generated by one-time Latin hypercube sampling (LHS) [18]. The comparison between the two methods is shown in Table 6. As one can see in Table 6, the results from the proposed method are much more accurate and efficient than those from RBF-LHS.

5. Conclusions

In this paper, an adaptive reliability analysis method called MSRBF-MCS is proposed. MSRBF-MCS utilizes multiple shape parameters to generate different RBF models for multiple predictions. The weighted mean and variance of the multiple predictions are then used as the estimation of the surrogate model and the measure of local uncertainty, respectively. Additionally, a distance control function $d_c(X)$ that helps avoid both the clustering problem and the problem of sample points not being located in a sensitive region is presented. The performance of MSRBF-MCS is further investigated with three case studies. The results indicate the following:
(1)
Compared with MSRBF-MCS + G and MSRBF-MCS + IM, the performance of MSRBF-MCS + M is often more stable and accurate;
(2)
The proposed method has good stability, i.e., for different adjustment coefficients, kernel functions, or systems with different failure probabilities, the proposed method tends to provide acceptable results;
(3)
Compared with models in the literature, the model proposed in this paper has the characteristics of a high prediction accuracy and efficiency.

Author Contributions

Funding acquisition, Y.G.; Methodology, P.Y.; Writing—original draft, W.D.; Writing—review & editing, J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (Grant No. 11572233), the Pre-Research Foundation (Grant No. 61400020106), and the Fundamental Research Funds for the Central Universities.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhou, S.; Zhang, J.; Zhang, Q.; Huang, Y.; Wen, M. Uncertainty Theory-Based Structural Reliability Analysis and Design Optimization under Epistemic Uncertainty. Appl. Sci. 2022, 12, 2846.
2. Li, H.; Díaz, H.; Soares, C.G. A failure analysis of floating offshore wind turbines using AHP-FMEA methodology. Ocean Eng. 2021, 234, 109261.
3. Li, H.; Soares, C.G.; Huang, H.Z. Reliability analysis of a floating offshore wind turbine using Bayesian Networks. Ocean Eng. 2020, 217, 107827.
4. Ma, J.; Yue, P.; Du, W.Y.; Dai, C.P.; Wriggers, P. Reliability-based combined high and low cycle fatigue analysis of turbine blade using adaptive least squares support vector machines. Struct. Eng. Mech. 2022, 3, 293–304.
5. Ditlevsen, O.; Madsen, H.O. Structural Reliability Methods; Wiley: Chichester, UK, 1996.
6. Gao, H.F.; Zio, E.; Wang, A.; Bai, G.C.; Fei, C.W. Probabilistic-based combined high and low cycle fatigue assessment for turbine blades using a substructure-based kriging surrogate model. Aerosp. Sci. Technol. 2020, 104, 105957.
7. Yue, P.; Ma, J.; Zhou, C.H.; Zu, J.W.; Shi, B.Q. Dynamic fatigue reliability analysis of turbine blades under the combined high and low cycle loadings. Int. J. Damage Mech. 2021, 30, 825–844.
8. Yuan, R.; Li, H.; Gong, Z.; Tang, M.; Li, W. An enhanced Monte Carlo simulation-based design and optimization method and its application in the speed reducer design. Adv. Mech. Eng. 2017, 9, 1687814017728648.
9. Yue, P.; Ma, J.; Huang, H.; Shi, Y.; Zu, W.J. Threshold damage-based fatigue life prediction of turbine blades under combined high and low cycle fatigue. Int. J. Fatigue 2021, 150, 106323.
10. Bichon, B.J.; Eldred, M.S.; Swiler, L.P.; Mahadevan, S.; McFarland, J.M. Efficient Global Reliability Analysis for Nonlinear Implicit Performance Functions. AIAA J. 2008, 46, 2459–2468.
11. Shayanfar, M.A.; Barkhordari, M.A.; Roudak, M.A. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method. Commun. Nonlinear Sci. Numer. Simul. 2017, 47, 223–237.
12. Maincon, P. A first order reliability method for series systems. Struct. Saf. 2000, 22, 5–26.
13. Wei, Y.; Bai, G.; Song, L.K. A novel reliability analysis approach with collaborative active learning strategy-based augmented RBF metamodel. IEEE Access 2020, 8, 199603–199617.
14. Du, W.; Ma, J.; Dai, C.; Yue, P.; Zu, J.W. A New Approach for Fatigue Reliability Analysis of Thin-Walled Structures with DC-ILSSVR. Materials 2021, 14, 3967.
15. Luo, C.Q.; Behrooz, K.; Shun, P.Z.; Osman, T.; Niu, X.P. Hybrid enhanced Monte Carlo simulation coupled with advanced machine learning approach for accurate and efficient structural reliability analysis. Comput. Methods Appl. Mech. Eng. 2022, 388, 114218.
16. Chiacchio, F.; Iacono, A.; Compagno, L.; D'Urso, D. A general framework for dependability modelling coupling discrete-event and time-driven simulation. Reliab. Eng. Syst. Saf. 2020, 199, 106904.
17. Du, W.; Luo, Y.; Wang, Y.; Ma, L. A general framework for fatigue reliability analysis of a high temperature component. Qual. Reliab. Eng. Int. 2019, 35, 292–303.
18. Zhang, C.Y.; Sun, T.; Ai-Hua, W.; Jing, H.Z.; Liu, B.S.; Li, C.W. Reliability analysis of blade fatigue life based on fuzzy intelligent multiple extremum response surface method. Filomat 2018, 32, 1897–1907.
19. Vanini, Z.S.; Khorasani, K.; Meskin, N. Fault detection and isolation of a dual spool gas turbine engine using dynamic neural networks and multiple model approach. Inf. Sci. 2014, 259, 234–251.
20. Tao, T.; Zio, E.; Zhao, W. A novel support vector regression method for online reliability prediction under multi-state varying operating conditions. Reliab. Eng. Syst. Saf. 2018, 177, 35–49.
21. Echard, B.; Gayton, N.; Lemaire, M. AK-MCS: An active learning reliability method combining Kriging and Monte Carlo Simulation. Struct. Saf. 2011, 33, 145–154.
22. Zheng, P.; Wang, C.; Zong, Z.; Wang, L. A new active learning method based on the learning function U of the AK-MCS reliability analysis method. Eng. Struct. 2017, 148, 185–194.
23. Xiao, N.; Zuo, M.; Guo, W. Efficient reliability analysis based on adaptive sequential sampling design and cross-validation. Appl. Math. Model. 2017, 58, 404–420.
24. Chen, W.; Fu, Z.J.; Chen, C.S. Recent advances in radial basis function collocation methods. In Springer Briefs in Applied Sciences and Technology; Springer: Heidelberg, Germany, 2014.
25. Shi, L.; Sun, B.; Ibrahim, D.S. An active learning reliability method with multiple kernel functions based on radial basis function. Struct. Multidiscip. Optim. 2019, 60, 211–229.
26. Hong, L.; Li, H.; Peng, K. A combined radial basis function and adaptive sequential sampling method for structural reliability analysis. Appl. Math. Model. 2021, 90, 375–393.
27. Chau, M.; Han, X.; Bai, Y.; Tran, T.N.; Truong, V.H. An efficient PMA-based reliability analysis technique using radial basis function. Eng. Comput. 2013, 31, 1098–1115.
28. Wang, Q.; Fang, H. Reliability analysis of tunnels using an adaptive RBF and a first-order reliability method. Comput. Geotech. 2018, 98, 144–152.
29. Jing, Z.; Chen, J.; Li, X. RBF-GA: An adaptive radial basis function metamodeling with genetic algorithm for structural reliability analysis. Reliab. Eng. Syst. Saf. 2019, 189, 42–57.
30. Li, H.; Soares, C.G. Assessment of failure rates and reliability of floating offshore wind turbines. Reliab. Eng. Syst. Saf. 2022, 108777.
31. Wen, C.; Jia, F.Z.; Xin, W. The Radial Basis Function Methods in Science and Engineering Mathematics; Science Press: Beijing, China, 2014.
32. Dubourg, V.; Sudret, B.; Bourinet, J.M. Reliability-based design optimization using kriging surrogates and subset simulation. Struct. Multidiscip. Optim. 2011, 44, 673–690.
33. Ishaque, K.; Salam, Z.; Amjad, M.; Mekhilef, S. An improved particle swarm optimization (PSO)-based MPPT for PV with reduced steady state oscillation. IEEE Trans. Power Electron. 2012, 27, 3627–3638.
34. MathWorks. MATLAB User's Guide; The MathWorks: Natick, MA, USA, 1991.
35. Yun, W.; Lu, Z.; Jiang, X. A modified importance sampling method for structural reliability and its global reliability sensitivity analysis. Struct. Multidiscip. Optim. 2018, 57, 1625–1641.
36. Tang, D.; Dowell, E.H. Limit-cycle hysteresis response for a high-aspect ratio wing model. J. Aircr. 2002, 39, 885–888.
Figure 1. Illustration of the distance control.
Figure 2. The flowchart of MSRBF-MCS.
Figure 3. Steps 1–3 of the demonstrated problem. (a) LSFs of the RBF models corresponding to c = 0.2, c = 0.4, c = 0.6, and c = 0.8, and of their weighted mean mean_w. (b) Contour map of the weighted prediction variance.
Figure 4. Steps 5–9 of the demonstrated problem. (a) n_add = 4. (b) n_add = 8. (c) n_add = 12. (d) Convergence curve.
Figure 5. The mean n c a l l and e p r of the predictions with different α . (a) mean n c a l l with different α . (b) mean e p r with different α .
Figure 6. LSFs and convergence curves of example 1. (a) LSF of MSRBF-MCS + G. (b) Convergence curve of MSRBF-MCS + G. (c) LSF of MSRBF-MCS + M. (d) Convergence curve of MSRBF-MCS + M. (e) LSF of MSRBF-MCS + IM. (f) Convergence curve of MSRBF-MCS + IM.
Figure 7. Dynamic response of a nonlinear oscillator.
Figure 8. Convergence curves for different α in example 2.
Figure 9. The structure of the airfoil.
Figure 10. Displacement contour map of the aircraft wing.
Table 1. Common types of kernel functions.

| Type Name | Expression |
|---|---|
| Gaussian | ϕ(r) = exp(−(r/c)²) |
| Multiquadric | ϕ(r) = (1 + (r/c)²)^(1/2) |
| Inverse multiquadric | ϕ(r) = (1 + (r/c)²)^(−1/2) |

Note: c is the shape parameter, with value range (0, 1).
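The three kernels of Table 1 can be written directly as functions of the distance r and shape parameter c:

```python
import math

# The three kernel functions of Table 1; c is the shape parameter
# (0 < c < 1 in the paper's setting) and r the distance between points.
def gaussian(r, c):
    return math.exp(-(r / c) ** 2)

def multiquadric(r, c):
    return math.sqrt(1.0 + (r / c) ** 2)

def inverse_multiquadric(r, c):
    return (1.0 + (r / c) ** 2) ** -0.5
```

All three equal 1 at r = 0; the Gaussian and inverse multiquadric kernels decay with distance, while the multiquadric grows, which is one reason mixing them yields usefully different RBF models.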
Table 2. The results for example 1.

| Model Description | n_call | e_pr (%) | P̂_f (×10^−3) |
|---|---|---|---|
| MCS | 10^6 | / | 4.416 |
| ARBFM-MCS (α = 0, G + M) | 60.1 | 1.13 | 4.466 |
| CVRBF-MCS (α = 0) | 65.4 | 1.44 | 4.480 |
| RBF-GA | 33.3 | 1.10 | 4.367 |
| LIF | 40 | 1.72 | 4.340 |
| Proposed: MSRBF + G, α = 1 | 44.2 | 2.12 | 4.509 |
| Proposed: MSRBF + M, α = 1 | 36.3 | 1.43 | 4.479 |
| Proposed: MSRBF + IM, α = 1 | 36.5 | 19.47 | 3.556 |
| Proposed: MSRBF + M, α = 0.1 | 30.9 | 7.35 | 4.091 |
| Proposed: MSRBF + M, α = 0.3 | 40.8 | 3.23 | 4.273 |
| Proposed: MSRBF + M, α = 0.5 | 35.8 | 4.01 | 4.238 |
| Proposed: MSRBF + M, α = 0.7 | 39.2 | 1.90 | 4.332 |
| Proposed: MSRBF + M, α = 0.9 | 35.3 | 1.57 | 4.347 |

Note: The results of the model in this paper are the mean values of 10 simulations.
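The tabulated e_pr values are consistent with the relative deviation of each estimate from the MCS reference, i.e. (assuming this definition, which is not stated in this excerpt) e_pr = |P̂_f − P_f^MCS| / P_f^MCS × 100%:

```python
def e_pr(pf_hat, pf_mcs):
    """Relative error (%) of a surrogate failure-probability estimate
    against the MCS reference value."""
    return abs(pf_hat - pf_mcs) / pf_mcs * 100.0

# e.g. the MSRBF + M (alpha = 1) row of Table 2:
# P_f estimate 4.479e-3 against the MCS value 4.416e-3
err = e_pr(4.479e-3, 4.416e-3)  # ~1.43, matching the table
```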
Table 3. The mean values and standard deviations for example 2.

| Random Variables | Mean Value | Standard Deviation |
|---|---|---|
| m | 1 | 0.05 |
| c_1 | 1 | 0.10 |
| c_2 | 0.1 | 0.01 |
| r | 0.5 | 0.05 |
| F_1 | 1 | 0.20 |
| t_1 | 1 | 0.20 |
Table 4. The results for example 2.

| Model Description | n_call | e_pr (%) | P̂_f (×10^−2) |
|---|---|---|---|
| MCS | 10^6 | / | 2.856 |
| ARBFM-MCS (α = 0, G + M + IM) | 33.5 | 0.24 | 2.850 |
| CVRBF-MCS (α = 0) | 36.1 | 0 | 2.856 |
| RBF-GA | 20.0 | 1.22 | 2.891 |
| C(X) (α = 0.3) | 55.9 | 0.63 | 2.877 |
| Proposed: MSRBF + G, α = 1 | 40.4 | 1.65 | 2.081 |
| Proposed: MSRBF + IM, α = 1 | 43.5 | 0.18 | 2.851 |
| Proposed: MSRBF + M, α = 0.1 | 27.9 | 1.43 | 2.815 |
| Proposed: MSRBF + M, α = 0.4 | 43.0 | 0.74 | 2.835 |
| Proposed: MSRBF + M, α = 0.7 | 40.2 | 0.07 | 2.854 |
| Proposed: MSRBF + M, α = 1 | 32.2 | 0.07 | 2.854 |

Note: The results of the model in this paper are the mean values of 10 simulations.
Table 5. The mean values and coefficients of variation for example 3.

| Random Variables | Mean Value | Coefficient of Variation |
|---|---|---|
| Uniform load q (MPa) | 0.49 | 0.05, 0.1, 0.15 |
| Elastic modulus of carbon fiber E_c (MPa) | 2.1 × 10^5 | 0.05 |
| Elastic modulus of epoxy resin E_e (MPa) | 7.3 × 10^4 | 0.05 |
| Elastic modulus of glass fiber E_g (MPa) | 8.7 × 10^4 | 0.05 |
| Poisson's ratio of carbon fiber μ_c | 0.307 | 0.05 |
| Poisson's ratio of epoxy resin μ_e | 0.22 | 0.05 |
| Poisson's ratio of glass fiber μ_g | 0.22 | 0.05 |
Table 6. The results for example 3.

| CoV of q | Model Description | n_call | e_pr (%) | P̂_f (×10^−2) |
|---|---|---|---|---|
| 0.05 | MCS | 1.5 × 10^4 | / | 3.128 |
| | RBF-LHS | 500 | / | 0 |
| | MSRBF + M, α = 0.5 | 89 | 0.12 | 3.124 |
| 0.10 | MCS | 1.5 × 10^4 | / | 17.589 |
| | RBF-LHS | 500 | 35.5 | 27.292 |
| | MSRBF + M, α = 0.5 | 78 | 0.39 | 27.401 |
| 0.15 | MCS | 1.5 × 10^4 | / | 26.737 |
| | RBF-LHS | 500 | 50.3 | 40.113 |
| | MSRBF + M, α = 0.5 | 73 | 0.5 | 26.872 |

Note: The results of the model in this paper are the mean values of 10 simulations.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Du, W.; Ma, J.; Yue, P.; Gong, Y. An Efficient Reliability Method with Multiple Shape Parameters Based on Radial Basis Function. Appl. Sci. 2022, 12, 9689. https://doi.org/10.3390/app12199689


