1. Introduction
The reliability analysis of structures is of great importance for modern equipment and systems [1,2,3,4,5]. Generally, the reliability state of a structure is described by its performance function: the structure is safe when its performance function is greater than 0, fails when its performance function is less than 0, and is in its limit state when its performance function is equal to 0 [6,7,8,9]. In practical engineering, the structural performance function is frequently very complex [10,11]. To obtain a simpler alternative to the limit state function (LSF), the traditional reliability analysis methods carry out a first-order or second-order Taylor expansion at the most probable point (MPP); examples are the first-order reliability method (FORM) [6,12] and the second-order reliability method (SORM) [13,14]. Nevertheless, large prediction errors occur when the performance function is highly nonlinear, because the failure probabilities obtained by FORM or SORM are sensitive to the nonlinearity of the structural performance function.
Monte Carlo simulation (MCS) [15,16] is frequently used in structural reliability analysis because of its conceptual simplicity and its insensitivity to the above-mentioned nonlinearity. However, MCS is limited by the simulation time (finite element analysis (FEA) in general), because a sufficiently large sample size is essential for the accuracy of MCS [16]. A surrogate model is a simpler approximation of the structure's response; therefore, MCS with surrogate models can avoid the massive computational cost caused by repeated FE simulations [17]. To further reduce the computational cost while ensuring prediction accuracy, MCS is often combined with adaptive models. Unlike a general metamodel, an adaptive model selects its training samples iteration by iteration. Generally, the framework of reliability analysis based on MCS with an adaptive model consists of four steps (a minimal sketch follows the list):
- (1) Initialize the design of experiment (DoE);
- (2) Generate the metamodel according to the DoE;
- (3) Perform MCS based on the metamodel to estimate the failure probability;
- (4) Check the stopping criterion: if satisfied, end the iteration and output the failure probability; otherwise, add new samples to the DoE through the learning function and go back to (2).
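A minimal sketch of this loop is given below, assuming generic `performance_function`, `fit_surrogate`, and `learning_function` callables and standard normal inputs; all names are illustrative rather than taken from the paper or any specific library.

```python
import numpy as np

def adaptive_mcs(performance_function, fit_surrogate, learning_function,
                 x_doe, n_mc=100_000, eps=1e-3, max_iter=100, seed=0):
    """Generic adaptive-surrogate MCS loop (steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    y_doe = np.array([performance_function(x) for x in x_doe])   # step (1)
    x_mc = rng.standard_normal((n_mc, x_doe.shape[1]))           # MC population
    pf_prev = None
    for _ in range(max_iter):
        model = fit_surrogate(x_doe, y_doe)                      # step (2)
        pf = np.mean(model(x_mc) <= 0.0)                         # step (3)
        if pf_prev is not None and pf > 0 and abs(pf - pf_prev) / pf < eps:
            return pf                                            # step (4): stop
        pf_prev = pf
        x_new = x_mc[np.argmin(learning_function(model, x_mc))]  # enrich DoE
        x_doe = np.vstack([x_doe, x_new])
        y_doe = np.append(y_doe, performance_function(x_new))
    return pf
```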
Currently, there are mainly four available surrogate techniques, namely response surface methodology (RSM) [18], artificial neural networks (ANN) [19], support vector machines (SVM) [20], and the Kriging model [21]. Among them, the Kriging model is the one most frequently employed in the above framework, because the prediction variance provided by the Kriging model can represent the local uncertainty of the prediction to some degree. Many Kriging-based adaptive models have been proposed to address the reliability analysis problem. Specifically, Echard et al. proposed AK-MCS+EFF and AK-MCS+U, in which the learning functions U(X) and EFF(X) aim to find sample points that are simultaneously close to the LSF and have a high local uncertainty [21]. Furthermore, by searching the line between the best sample point found by U(X) and its nearest sample point with a different sign in the MC population, Zheng et al. [22] obtained a better new sample than that found using U(X) alone. Besides the prediction's value and variance, the location of the new sample is also an important factor to consider. To make the sample points more likely to be located in the sensitive region, the learning function C(X) proposed by Xiao et al. takes the distance from the candidate sample point to the mean point into account, i.e., sample points with a smaller such distance are more likely to be selected for sequential sampling [23].
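As an illustration of such a criterion, the sketch below evaluates a U-type learning function over a candidate set; `mu` and `sigma` stand in for the Kriging prediction mean and standard deviation and are placeholder values, not results from the cited papers.

```python
import numpy as np

def u_function(mu, sigma):
    """U(x) = |mu(x)| / sigma(x); small U means close to the LSF and uncertain."""
    return np.abs(mu) / np.maximum(sigma, 1e-12)  # guard against zero variance

# Placeholder predictions over three candidates; candidate 1 is selected
# because it is both near the LSF (small |mu|) and uncertain (large sigma).
mu = np.array([0.8, -0.1, 2.5])
sigma = np.array([0.4, 0.3, 0.2])
best = int(np.argmin(u_function(mu, sigma)))  # -> 1
```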
Compared with the Kriging model, structural reliability research based on the RBF model is relatively scarce, because the RBF model does not provide a prediction variance. In fact, the RBF model has many merits, such as applicability to high-dimensional problems, an exponential convergence speed, and high prediction accuracy [24]. Hence, to compensate for the above-mentioned limitation of the RBF model, multiple predictions by RBF are performed to obtain a prediction variance. Some related studies are as follows: Shi et al. [25] presented two adaptive models named CVRBF-MCS and ARBFM-MCS for structural reliability analysis, in which different RBF models are launched for multiple predictions by applying different subsets of the DoE or different kernel functions. Furthermore, by conducting multiple predictions, Hong et al. used efficient global optimization (EGO) as the learning function for sequential sampling based on the RBF model and achieved satisfactory results on some benchmark problems [26]. In addition, for sequential sampling based on the RBF model, using the MPP of the current surrogate model as the newly added sample is also an effective approach. Chau et al. [27] conducted a performance measure approach (PMA) to find the MPP of the RBF model as the new sample point of the sequential sampling. To improve efficiency, the adaptive RBF model presented by Wang can find 2m (m is the dimension of the random input) additional sample points in each iteration by using points near the MPP [28]. Differing from Chau's and Wang's methods, the RBF-GA proposed by Jing et al. used MCS rather than the reliability index to calculate the structural failure probability, and the potential MPP was found using the genetic algorithm (GA) [29]. However, because of the "most probable" characteristic of the MPP, sequential sampling methods that take the MPP as the new sample must utilize specialized distance control strategies to avoid the clustering problem.
Although the above-mentioned RBF-based adaptive models have proven promising for structural reliability analysis, they do not consider the effect of shape parameters on the search for new samples during sequential sampling. In fact, predictions using RBF models with different shape parameters can vary widely [24]. To address this problem, an adaptive model based on a combination of the RBF model and MCS is proposed. For the proposed model (MSRBF-MCS), different shape parameters are used to launch multiple predictions, and the effect of the shape parameters on sequential sampling is considered by calculating the weighted variance of these multiple predictions. Furthermore, the 2-norm of the MC population in the standard normal space is incorporated into the proposed learning function to ensure that new samples lie in the sensitive region (see the sensitive region in Figure 1). Moreover, to mitigate the clustering problem of new samples, a distance control strategy that encourages a larger minimum inter-sample distance is adopted. Finally, the proposed method is verified with three benchmark problems and a practical engineering problem, with good results.
The remainder of this paper is organized as follows. Section 2 reviews the relevant basic theory of the RBF model. The proposed adaptive metamodel is presented in Section 3. In Section 4, the proposed model is validated with three academic examples and one practical example. Finally, concluding remarks are given in Section 5.
3. The Proposed Approach: MSRBF-MCS
In Kriging model-based sequential sampling, researchers aim to find candidate sample points that are close to the LSF and have a high local uncertainty at the same time [25]. However, prediction based on the RBF model does not offer a prediction variance; hence, it is necessary to launch multiple predictions to obtain one [25,26]. To the best of our knowledge, the idea of launching multiple predictions by creating multiple RBF models with different shape parameters c has not been presented yet. By setting different values of c, different RBF models are created. The global uncertainty of each RBF model is calculated via cross-validation, and the weighting coefficient of each model is obtained accordingly. Then, the weighted prediction mean, the weighted variance, and the distance-related factors are used to formulate the learning function.
3.1. Multiple Predictions Launched by Different Shape Parameters
The value of the shape parameter c has a significant influence on the surrogate model [13]. Hence, RBF models with widely varying prediction results can be obtained by setting different values of c:

c = (c_1, c_2, ..., c_{n_c}),

where c is the vector of shape parameters and n_c is the number of shape parameters. Assuming that there are n_0 sample pairs in the initial DoE, n_c RBF models for multiple predictions can be generated according to c and the DoE.
In order to reduce the global prediction error of the surrogate model, the shape parameter c needs to be optimized with the objective of minimizing PRESS [25,26,27]. Similarly, the following steps are designed to reduce the global prediction error of the metamodel proposed in this paper. First, a k-fold cross-validation of each RBF model is performed to calculate the PRESS_i of the i-th RBF model M_i. Second, the weight coefficient w_i of M_i is calculated accordingly, taking the weights inversely proportional to the PRESS values:

w_i = (1/PRESS_i) / Σ_{j=1}^{n_c} (1/PRESS_j).

Apparently, w_i satisfies the following:

Σ_{i=1}^{n_c} w_i = 1, 0 ≤ w_i ≤ 1.

Finally, the weighted mean and variance can be calculated as follows:

μ̂(x) = Σ_{i=1}^{n_c} w_i ĝ_i(x),  σ̂²(x) = Σ_{i=1}^{n_c} w_i (ĝ_i(x) − μ̂(x))²,

where ĝ_i(x) is the prediction of M_i at sample point x. The weighted mean μ̂(x) is regarded as the predicted value of the metamodel at sample point x:

ĝ(x) = μ̂(x).
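The sketch below illustrates this procedure with SciPy's `RBFInterpolator`, whose `epsilon` argument plays the role of the shape parameter c. Leave-one-out residuals stand in for the k-fold PRESS, and the inverse-PRESS weighting mirrors the normalization above; both simplifications are assumptions of this sketch, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def press(x, y, c):
    """Leave-one-out PRESS of a multiquadric RBF with shape parameter c."""
    errs = []
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        model = RBFInterpolator(x[keep], y[keep], kernel='multiquadric', epsilon=c)
        errs.append((model(x[i:i + 1])[0] - y[i]) ** 2)
    return np.sum(errs)

def weighted_prediction(x_doe, y_doe, x_query, shape_params=(0.5, 1.0, 2.0, 4.0)):
    """Weighted mean and variance over RBF models with different shape parameters."""
    models = [RBFInterpolator(x_doe, y_doe, kernel='multiquadric', epsilon=c)
              for c in shape_params]
    inv_press = np.array([1.0 / press(x_doe, y_doe, c) for c in shape_params])
    w = inv_press / inv_press.sum()                 # weights sum to one
    preds = np.stack([m(x_query) for m in models])  # shape (n_c, n_query)
    mean = w @ preds                                # weighted mean g_hat
    var = w @ (preds - mean) ** 2                   # weighted variance
    return mean, var
```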
3.2. The Proposed Learning Function
In order to improve the efficiency and effectiveness of the adaptive model in acquiring LSF information, the locations of its new training sample points should meet the following three requirements:
- (1) Maintain a sufficient distance from the existing training sample points (i.e., alleviate the clustering problem);
- (2) Lie in the sensitive region;
- (3) Lie near the LSF, i.e., the predicted value ĝ(x) should be close to zero.
To achieve the above purpose, first, the sample points are transformed from the original space (X-space) to the equivalent standard normal space (U-space) with the Nataf transform [31]. This is because the following facilities are available in U-space: (1) the Euclidean distance from a sample point to the origin is directly related to the probability of that point, and (2) the effect of order-of-magnitude differences between the different dimensions of a sample on the distance is eliminated. Second, for requirement (1), a penalty function based on the distance between the sample points is proposed. Assume the minimum pairwise distance between the sample points of the DoE in U-space is

d_DoE = min_{i≠j} ‖U_i − U_j‖,

and the minimum distance from a candidate sample point to the DoE in U-space is

d(U) = min_i ‖U − U_i‖ = ‖U − U*‖;

then the penalty function can be expressed as the ratio of these two distances,

P₁(U) = d_DoE / d(U),

where U is the Nataf transform of X; U* is the sample point of the DoE in U-space whose Euclidean distance to U is the minimum; U_i and U_j are the i-th and j-th sample points of the DoE in U-space; n is the number of sample points in the DoE; and ‖·‖ is the two-norm function. A candidate sample point with a smaller learning function value is considered better. Hence, a large P₁(U) (a candidate close to the DoE) acts as a penalty, and a small P₁(U) (a candidate far from the DoE) acts as a reward. By employing P₁(U), the learning function will tend to select points far away from the DoE.
However, an excessively large d(U) may result in new training samples lying in an insensitive region. To avoid losing valid LSF information due to the low probability of a new training sample point, the 2-norm of the candidate point is employed to satisfy requirement (2). Denote it as follows:

r(U) = ‖U‖.

Then, a distance control function D(U) that adjusts the weight between requirement (1) and requirement (2) can be expressed by combining P₁(U) and r(U), where the adjustment coefficient γ is employed to adjust the influence of r(U). It is important to underline that the input variables must be normalized; otherwise, when there are order-of-magnitude differences between different dimensions of X, the distance control function D(U) may not work as intended.
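A sketch of these distance-related quantities is given below; the multiplicative combination of P₁(U) and r(U) with exponent γ is an assumed form for the distance control function, since only its ingredients (d_DoE, d(U), ‖U‖, and γ) are fixed by the text.

```python
import numpy as np
from scipy.spatial.distance import cdist

def distance_control(u_cand, u_doe, gamma=1.0):
    """d_DoE, d(U), P1(U), and an assumed combination D(U) = P1(U) * r(U)**gamma."""
    pair = cdist(u_doe, u_doe)
    d_doe = pair[pair > 0].min()            # minimum pairwise DoE distance
    d_u = cdist(u_cand, u_doe).min(axis=1)  # distance to the nearest DoE point
    p1 = d_doe / np.maximum(d_u, 1e-12)     # large near the DoE (penalty)
    r = np.linalg.norm(u_cand, axis=1)      # 2-norm: large far from the origin
    return p1 * r ** gamma                  # assumed multiplicative form
```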
Finally, synthesizing the three requirements above, an active learning function (Equation (17)) with reference to U(X) [22] is proposed; it combines the weighted prediction mean and variance with the distance control function D(U). The optimization problem of finding the sample point with the minimum learning function value can be solved by direct MCS, which has the advantage of saving computational cost, because MCS already needs to be performed to predict the failure probability. An alternative is to search the variable space with metaheuristic algorithms such as particle swarm optimization (PSO) [32] or the genetic algorithm (GA) [33]. By searching the variable space, it is possible to find better sample points than those in the MC population; the disadvantage of this alternative is that it requires more computational resources. In this paper, direct MCS is used to solve the optimization problem.
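The direct-MCS selection can be sketched as follows, reusing the weighted prediction and distance control from the previous sketches; the U-type score scaled by the distance control term is an assumed stand-in for Equation (17), whose exact expression is not reproduced here.

```python
import numpy as np

def next_sample(u_mc, mean, var, d_ctrl):
    """Pick the MC-population point that minimizes the (assumed) learning function.

    u_mc:   MC population in U-space, shape (n_mc, dim)
    mean:   weighted RBF prediction mean over u_mc
    var:    weighted RBF prediction variance over u_mc
    d_ctrl: distance control values from `distance_control`
    """
    score = np.abs(mean) / np.sqrt(np.maximum(var, 1e-12)) * d_ctrl
    return u_mc[np.argmin(score)]
```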
P₁(U), r(U), and D(U) are illustrated in Figure 1, in which the diamonds represent the sample points of the DoE in U-space, the triangles represent the standardized candidate sample points, and the area surrounded by the dotted line is the sensitive region.
3.3. The Stopping Criterion
As the sequential sampling proceeds, the learning function keeps adding new samples to the DoE, and the surrogate model is renewed according to the enriched DoE. The iterations stop when the stopping criterion is satisfied. Generally, the relative error of the predicted failure probability is used as the stopping criterion (Equation (18)) [23,28,34]:

|P̂_f^(k) − P̂_f^(k−1)| / P̂_f^(k) ≤ ε,

where ε is a user-defined constant, and P̂_f^(k) and P̂_f^(k−1) are the failure probabilities predicted in the k-th and (k−1)-th iterations. P̂_f can be calculated as follows:

P̂_f = (1/N_MC) Σ_{i=1}^{N_MC} I(ĝ(x_i) ≤ 0).

In Equation (19), N_MC is the size of the MC population, and I(·) is an indicator function: when the expression inside it is true, I equals 1; otherwise, I equals 0. In addition, when performing MCS for small failure probability events, the coefficient of variation (COV) of MCS needs to meet the following requirement [21]:

COV = sqrt((1 − P̂_f) / (P̂_f · N_MC)) ≤ 5%.
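In code, the estimator in Equation (19) and the COV check can be written as follows; the COV expression is the standard binomial formula implied by the 5% requirement.

```python
import numpy as np

def pf_and_cov(g_hat):
    """Failure probability (Equation (19)) and its COV over an MC population.

    g_hat: surrogate predictions over the MC population.
    """
    n_mc = g_hat.size
    pf = np.mean(g_hat <= 0.0)                                    # indicator average
    cov = np.sqrt((1.0 - pf) / (pf * n_mc)) if pf > 0 else np.inf
    return pf, cov

# Usage: if cov > 0.05, enlarge the MC population and re-estimate.
```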
3.4. Reliability Analysis Procedure Based on MSRBF-MCS
In this section, based on the learning function in Equation (17), MSRBF-MCS is proposed to carry out the probabilistic reliability analysis. In order to present the framework of the method clearly, a simple bivariate system with sufficient nonlinearity, whose performance function and input variable distributions are specified, is taken as a demonstration of the main steps of MSRBF-MCS. The steps of the MSRBF-MCS approach and the solving process of this demo are as follows (a condensed driver sketch follows the list):
- (1) Generate the initial DoE with Latin hypercube sampling (LHS). In the demo, the initial DoE size is 8.
- (2) Initialize the number of iterations k = 0, the kernel function type, the shape parameters, the adjustment coefficient, and the stopping criterion constant. For this demo, the kernel function type is "Multiquadric".
- (3) Build RBF models with different shape parameters for multiple predictions, calculate the weight coefficients by cross-validation, obtain the metamodel's prediction and its variance, and then formulate the learning function.
- (4) Set k = k + 1 and perform MCS to obtain the predicted failure probability of the k-th iteration, using an initial MC population of fixed size.
- (5) Check whether the stopping criterion (Equation (18)) is satisfied. If satisfied, proceed to the next step; otherwise, pick the sample point from the MC population with the minimum learning function value, add it to the DoE, and go back to step 3.
- (6) Check the COV (Equation (20)). If the requirement is met, proceed to the next step; otherwise, enlarge the MC population and repeat this check.
- (7) End the iteration and output the predicted failure probability.
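A condensed driver tying the previous sketches together is shown below; it assumes standard normal inputs (so X-space and U-space coincide), an LHS window of [−3, 3] per dimension, and a user-supplied performance function `g`, since the demo's actual function and constants are not reproduced here.

```python
import numpy as np
from scipy.stats import qmc

def msrbf_mcs(g, dim=2, n0=8, n_mc=100_000, eps=1e-3, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lhs = qmc.LatinHypercube(d=dim, seed=seed).random(n0)
    x_doe = lhs * 6.0 - 3.0                                  # step 1 (assumed window)
    y_doe = np.array([g(x) for x in x_doe])
    u_mc = rng.standard_normal((n_mc, dim))                  # MC population
    pf_prev = None
    for k in range(200):                                     # steps 3-5
        mean, var = weighted_prediction(x_doe, y_doe, u_mc)
        pf = np.mean(mean <= 0.0)
        if pf_prev is not None and pf > 0 and abs(pf - pf_prev) / pf < eps:
            break                                            # stopping criterion met
        pf_prev = pf
        x_new = next_sample(u_mc, mean, var, distance_control(u_mc, x_doe, gamma))
        x_doe = np.vstack([x_doe, x_new])
        y_doe = np.append(y_doe, g(x_new))
    return pf                                                # COV loop (step 6) omitted
```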
All steps of MSRBF-MCS are also shown in Figure 2. In the demonstration, steps 1 to 3 of the solving process are shown in Figure 3. In Figure 3a, the hollow dots are the initial DoE generated by LHS; four of the curves are the LSFs of the RBF models built with the different shape parameters, and the remaining curve is the LSF of their weighted combination. In order to formulate the weighted combination, cross-validation of the RBF models is performed, and the corresponding weight coefficients are obtained. Figure 3b shows the contour map of the weighted prediction variance σ̂²(x). As the RBF models interpolate the known sample points exactly, σ̂²(x) increases significantly as the distance from the DoE grows. This indicates that, if only σ̂²(x) were examined, the learning function would tend to select sample points far away from the DoE, which may result in newly added sample points not being located in the sensitive region.
The sequential sampling process of MSRBF-MCS is shown in Figure 4 and Figure 5. In Figure 4, the triangular dots represent newly added sample points. Denoting the number of newly added sample points as N_add, Figure 4a–c shows the LSFs of the corresponding surrogate models at three stages of enrichment. Figure 4d shows the convergence curve of the sequential sampling; the iteration ends when the stopping criterion is met, and the total number of performance function calls equals the initial DoE size plus N_add. From this, the fast convergence of the proposed approach can be seen. The failure probability P̂_f by MSRBF-MCS and P_f by direct MCS are 0.2340 and 0.2323, respectively. Accordingly, the prediction error can be expressed as follows:

ε_r = |P̂_f − P_f| / P_f × 100%.

In this demonstrated example, ε_r = |0.2340 − 0.2323| / 0.2323 ≈ 0.73%. Then, the coefficient of variation should be checked to see whether the size of the MC population is large enough; according to Equation (20), the COV satisfies the requirement (smaller than 5%). This is the end of the whole procedure.
Furthermore, the adjustment coefficient γ influences the selection of new points during the sequential sampling. It is therefore necessary to investigate the effect of γ on the performance of the proposed reliability analysis approach. The demonstrated example was solved 10 times by MSRBF-MCS for each of several values of γ, and the results of these simulations are shown in Figure 5. In Figure 5a, the mean number of performance function calls is the greatest at one of the tested values of γ, for which the prediction error is simultaneously the smallest.