Article

A Novel Sampling Method Based on Normal Search Particle Swarm Optimization for Active Learning Reliability Analysis

1
College of Civil Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China
2
Shaanxi Key Lab of Geotechnical and Underground Space Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China
3
School of Civil Engineering, Qinghai University, Xining 810016, China
4
Qinghai Provincial Key Laboratory of Energy-Saving Building Materials and Engineering Safety, Xining 810016, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(10), 6323; https://doi.org/10.3390/app13106323
Submission received: 18 April 2023 / Revised: 15 May 2023 / Accepted: 17 May 2023 / Published: 22 May 2023
(This article belongs to the Section Civil Engineering)

Featured Application

The present study can provide an efficient sampling method for candidate points in the iteration process of active learning reliability analysis.

Abstract

In active learning reliability methods, a high-precision approximation of the limit state function (LSF) is the key to accurately calculating the failure probability (Pf). However, existing sampling methods cannot guarantee that candidate samples actively approach the LSF, which lowers the accuracy and stability of the results and causes excess computational effort. In this paper, a novel candidate sample generation algorithm is proposed, by which a group of evenly distributed candidate points on the predicted LSF of the performance function (either the real one or a surrogate model) can be obtained. In the proposed method, determination of the LSF is treated as an optimization problem in which the absolute value of the performance function is taken as the objective function. A normal search particle swarm optimization (NSPSO) was then designed to deal with such problems; it consists of a normal search pattern and a multi-strategy framework that ensures the uniform distribution and diversity of the solutions intended to cover the optimal region. Four explicit performance functions and two engineering cases were employed to verify the effectiveness and accuracy of the NSPSO sampling method, with four state-of-the-art multi-modal optimization algorithms used as competitive methods. Analysis results show that the proposed method outperformed all competitive methods and can provide candidate samples that are evenly distributed on the LSF.

1. Introduction

Reliability analysis has been receiving significant attention in engineering practice because of its important role in safety design. The purpose of reliability analysis is to quantify and evaluate the uncertainty of the safety of a system [1]. Among different methods, the commonly used first-order reliability method (FORM) [2] and second-order reliability method (SORM) [3] are only applicable to relatively simple engineering problems. Monte Carlo simulation (MCS)-based methods [4,5,6] provide an unbiased estimate of the failure probability (Pf) and are not limited by the implicit form or complexity of the performance function. However, MCS methods often require extensive evaluations of the performance function (usually more than 10^5 times [7]). For complex and highly nonlinear engineering problems, the cost of numerical evaluation of the performance function may reach an unacceptable level. Therefore, it is necessary to achieve accurate reliability analysis at a much lower computational cost [8].
Accordingly, methods incorporating a surrogate model with MCS were proposed [9]. The core of these methods is the design of experiments (DOEs) that helps to establish the surrogate model characterizing the performance function. MCS is then performed on the surrogate model, instead of the actual performance function, to calculate the probability of failure, so the computational cost is reduced to the number of training samples obtained from the DOEs. The key to guaranteeing the accuracy of the surrogate MCS method lies in the goodness-of-fit of the surrogate model, which fundamentally depends on the selection of the training samples. In the early stage of DOEs research, a sampling method called one-shot design (OSD) was often used. In OSD, to improve the convenience of sample acquisition, a large number of training samples are obtained at once in the design space according to a certain sampling method (such as Latin hypercube sampling (LHS)) [10,11,12]. New random samples may be added depending on the MCS calculation results. This kind of sampling method is simple and straightforward but aimless. Since Pf is calculated as Nz<0/NMCS, i.e., the ratio of the number of MCS realizations for which the surrogate model value is smaller than 0 to the total number of realizations, it is not necessary for the surrogate model to be accurate everywhere; an accurately predicted limit state function (LSF) is enough. Therefore, the ultimate objective of the established surrogate model is to locate the LSF that divides the variable space into safe and unsafe subareas [7]. In this case, different positions in the variable space should be valued differently: sampled points closer to the LSF indicate higher potential contributions. The OSD method is therefore inefficient because its sampling process is entirely random. Sequential design of experiments (SDOEs) was then invented to improve calculation efficiency. In SDOEs, training samples are selected based on their potential value. A large sampling population is first initialized, and a small number of samples (for example, ten) are selected to establish the initial surrogate model. Then, an active learning function (ALF) is used to add new sampled points from the large sample population iteratively. During the iteration process, Pf is calculated based on the continuously updated surrogate model through MCS until the iterative termination condition is met. Usually, the purpose of the ALF is to select the points that are close to the LSF or carry greater uncertainty, which is the key to guaranteeing the accuracy and efficiency of the SDOEs method. The Kriging model is able to provide both the predicted value and the prediction variance; therefore, the SDOEs-based Kriging–MCS method is often used in reliability analysis [1,8,13]. Using the jackknife variance method, Xiao et al. [14] successfully widened the application range of the SDOEs method by enabling surrogate models other than the Kriging model to estimate the variance at non-sampled points. Furthermore, the application of different models such as radial basis function neural networks [15], support vector regression models [16,17,18], and Bayesian networks [19] has also been investigated. Among them, the SDOEs method that uses a support vector machine (SVM) as the surrogate model has attracted much attention, mainly because the calculation of Pf can be regarded as a binary classification problem in which the LSF serves as the classification boundary, while SVM excels at classifying sample data.
Accordingly, in their studies, Pan et al. [20,21] introduced a pool-based ALF and built the surrogate model based on an adaptive SVM model.
Despite these substantial investigations, SDOEs still suffer from a shortcoming: the candidate samples are randomly generated in each iteration. This makes the results rely heavily on the initial sample population [14]. Different initial populations may lead to different sampling times and different results, which restricts the accuracy and stability of the method. To overcome these problems, a novel optimization-based method is proposed in this study to produce candidate samples with high potential value during the iteration of SDOEs. Since the LSF is a sub-area of the variable space where the value of the performance function equals zero, by taking the absolute value of the performance function and using it as the objective function, the process of finding the LSF can be treated as a minimization problem whose solution is a continuous region. Usually, based on the number of solutions, optimization problems are categorized as unimodal optimization problems (UMOPs) and multimodal optimization problems (MMOPs); optimization problems that have a continuous region as the optimal solution have barely been investigated. In our study, we name this kind of problem a regional-modal optimization problem (RMOP) [22]. Furthermore, an innovative particle swarm optimization using a normal search pattern, called normal search particle swarm optimization (NSPSO), is introduced for the sampling process to reach an accurate and comprehensive approximation of the LSF. The proposed algorithm replaces the point search pattern normally used in optimization algorithms with a normal search pattern that accounts for the continuity of the solution. Further, a multi-strategy framework with three components is introduced to improve the performance of NSPSO, so as to increase the precision and integrity of the found LSF. Because finding the LSF is regarded as an optimization process, the sampling process is purposeful, and the sampled points can approach the LSF with higher accuracy.
The remainder of the paper is organized as follows. In Section 2, problem definition and the related theories are stated. In Section 3, NSPSO-based sampling method for approximating the LSF is described. Experiments and results analysis are demonstrated in Section 4 and Section 5. In Section 6, our conclusions are described.

2. Problem Description and General Idea of NSPSO for Sampling

2.1. Problem Description

The performance function Z is a function of the state variables X = {x1, x2, …, xn}T of a system. State variables can be the strength of a component in an engineering structure, the applied load, or any other quantifiable indicators that influence the safety of the system. For instance, the performance function of a slope reliability analysis equals FS(X) − 1, where FS is the factor of safety of the slope obtained through slope stability analysis; in this case, the state variables could be the strength parameters of each soil layer. As shown in Equation (1), Z < 0 denotes failure of the slope, Z > 0 denotes that the slope is stable, and Z = 0 means the slope is in a limit state. The region where Z = 0 is called the LSF. The LSF divides the variable space into a safe region and an unsafe region (as shown in Figure 1). As described above, finding the LSF is the core of MCS-based reliability analysis.
$$Z = G(\mathbf{X}) = G(x_1, x_2, \ldots, x_n) \begin{cases} < 0, & \text{failure} \\ = 0, & \text{limit state} \\ > 0, & \text{safety} \end{cases} \tag{1}$$
For MCS-based reliability analysis, Pf is calculated by Equation (2):
$$P_f = \int I[G(\mathbf{X}) < 0] \, f(\mathbf{X}) \, d\mathbf{X} \approx \frac{1}{N_{\mathrm{MCS}}} \sum_{i=1}^{N_{\mathrm{MCS}}} I[G(\mathbf{X}_i) < 0] \tag{2}$$
where Pf is the probability of failure; I[·] is an indicator function that equals 1 when the condition inside the brackets is true and 0 otherwise; G(X) is the performance function; f(X) is the joint probability density function of all state variables; and NMCS is the number of MCS realizations.
In Equation (2), NMCS should be large enough to ensure that the coefficient of variation of Pf, COV(Pf) in Equation (3), is smaller than 10% [7].
$$\mathrm{COV}(P_f) = \sqrt{\frac{1 - P_f}{N_{\mathrm{MCS}} \, P_f}} \tag{3}$$
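To make Equations (2) and (3) concrete, the following minimal Python sketch estimates Pf and its COV by crude MCS. The performance function, variable distributions, and sample size here are illustrative assumptions only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def g(x):
    # Hypothetical performance function for illustration only:
    # a linear resistance-load margin Z = 7 - x1 - x2.
    return 7.0 - x[:, 0] - x[:, 1]

n_mcs = 10**5                                        # realizations in Equation (2)
x = rng.normal(loc=2.0, scale=1.0, size=(n_mcs, 2))  # assumed state variables
pf = np.mean(g(x) < 0)                               # Pf ~ (1/N) * sum I[G(X) < 0]
cov_pf = np.sqrt((1.0 - pf) / (n_mcs * pf))          # Equation (3)
print(f"Pf = {pf:.4f}, COV(Pf) = {cov_pf:.3f}")      # COV should stay below 10%
```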
In practice, the performance function of a system is mostly implicit. Complex and time-consuming numerical calculations, such as limit equilibrium methods, FEM-based strength reduction methods, or slip surface stress methods, are required. To address this issue, surrogate models were introduced to approximate the actual performance function in Equation (2), as shown in Equation (4).
$$P_f \approx \frac{1}{N_{\mathrm{MCS}}} \sum_{i=1}^{N_{\mathrm{MCS}}} I[G(\mathbf{X}_i) < 0] \approx \frac{1}{N_{\mathrm{MCS}}} \sum_{i=1}^{N_{\mathrm{MCS}}} I[R(\mathbf{X}_i) < 0] \tag{4}$$
Here, R(X) is the surrogate model of G(X).
As shown in Equation (4), good approximation quality of R(X) is the premise of an accurate calculation of Pf, and the key to the quality of R(X) lies in the DOEs. Further, according to Equation (2) and Figure 1, a good approximation of the LSF is sufficient for an accurate calculation of Pf; in this sense, sampling points near the LSF is of the highest priority. SDOEs choose points that have higher uncertainty and are near the LSF. However, the candidate sampling points are fixed and depend on the probability model of the state variables. Therefore, SDOEs still suffer from the shortcomings described in the Introduction. How to design a sampling method that makes full use of the computational effort is still an open problem.
Based on the above analysis, sampling enough points on or near the LSF is the key to an accurate calculation of Pf. Moreover, the distribution uniformity of the sampled points is crucial for calculation efficiency. Therefore, the DOE task can be translated into finding enough uniformly distributed points on or near the LSF. According to Equation (1), the LSF is the assemblage of all points that make Z = 0, so the problem can finally be described as a minimization problem, as shown in Figure 2 and Equation (5):
$$\text{Minimize} \; |G(\mathbf{x})| \quad \text{s.t.} \; \mathbf{x} \in \Omega \tag{5}$$
where x is the state variable vector, Ω is the optimization space of state variables.
The solution of this optimization problem is the LSF, which differs from regular optimization problems such as UMOPs and MMOPs in that the LSF is a continuous region instead of finitely many discrete points. In our study, this kind of optimization problem is referred to as a regional-modal optimization problem (RMOP). An RMOP can be mathematically defined as:
$$\text{Minimize} \; f(\mathbf{X}) \quad \text{s.t.} \; \mathbf{X} \in \Omega \tag{6}$$

$$\begin{cases} g_1(X_1, X_2, \ldots, X_d) = 0 \\ g_2(X_1, X_2, \ldots, X_d) = 0 \\ \quad \vdots \\ g_j(X_1, X_2, \ldots, X_d) = 0 \end{cases}, \quad j \le d - 1 \tag{7}$$
here, f(·) is the objective function; X is a d-dimensional decision variable vector; Ω is [L1, S1] × [L2, S2] × ⋯ × [Ld, Sd], in which Ld and Sd are the lower and upper boundaries of Xd, respectively. In Equation (7), the functions g1 to gj define the solution of Equation (6). For most reliability problems, j = d − 1; therefore, only this situation is discussed in this study.

2.2. General Idea of Normal Search Particle Swarm Optimization for Sampling

Most existing algorithms are designed for problems that have one or several discrete optimal solutions, and few are specially designed for RMOPs. However, using the idea of discretized approximation, existing swarm intelligence (SI) methods can be modified into an algorithm suitable for RMOPs.
Swarm intelligence (SI) [23] is widely used in optimization because of its high efficiency and lack of restrictions on the form of the objective function. Among different SI methods, niching PSO [23,24] possesses the advantages of fast convergence, good diversity, high accuracy, and strong robustness. Therefore, to deal with RMOPs and thereby address the DOEs, NSPSO was developed in our study on the basis of distance-based niching PSO. To reflect the continuity of the solution of RMOPs, a normal search pattern was designed to replace the point-based one. Moreover, a multi-strategy framework was designed to improve the uniformity and diversity of the particles covering the optimal region. By using NSPSO in the candidate sampling process of DOEs, the final result of the iterative optimization is a set of discrete points uniformly distributed on the LSF. This result represents the LSF better than existing methods and can be used as the candidate samples for active learning.

3. The Proposed NSPSO Sampling Algorithm

The proposed NSPSO is constructed on distance-based niching PSO and contains four parts: the normal search pattern, and a multi-strategy framework comprising dynamic particle repulsion (to increase the diversity and coverage of the particles), particle memory (to improve the convergence of the algorithm), and density-based elite breeding (to increase computational efficiency). Detailed descriptions of the four components follow.

3.1. Normal Search Pattern

Most existing optimization algorithms for UMOPs or MMOPs tend to guide particles to certain locations according to the global and local optimal locations (Figure 3(left)). Such a feature lets particles converge to one or several locations [25,26]. However, for RMOPs such as the LSF sampling process described above, the continuity of the solutions will be ignored by such a search pattern, leading to a poor approximation of the LSF. To deal with this issue, the normal search pattern was designed. Take a 2-D problem as an example: the two blue particles in Figure 3(right) represent the best locations of a niche, and nearby particles are guided perpendicularly toward their connecting line so that the continuity and local geometric characteristics can be considered. Generally, particles are guided along the normal direction of the hyperplane formed by several best locations of a niche. For a d-dimensional RMOP, the normal search pattern records n distance-based neighbors [27] for each particle and selects the best d particles to form a (d − 1)-dimensional hyperplane. By using the normal search pattern, the continuity of the solution can be fully utilized.
However, this also raises an instability problem, as shown in Figure 4. Particle A is guided toward the connecting line of two neighboring optimal locations, but its projection point on the line is not an optimal position. Therefore, the particle may oscillate between the optimal position and the projection point. This issue is serious when the optimal region is highly nonlinear or the number of particles is small.
To address this issue, we added a compensation term to the normal search pattern (the second term in Equation (8)). To sum up, the proposed normal search pattern can be defined as Equations (8)–(10).
$$N_i(t) = n_i(t) - \frac{1}{\delta^{\,f(x_i(t)) - f(p_g(t))}} \, n_{p_i}(t) \tag{8}$$

$$n_i(t) = \alpha^{o}_{i,\,Q_i^{ns}}(t) \; d^{\,i}_{Q_i^{ns}}(t) \tag{9}$$

$$n_{p_i}(t) = \alpha^{o}_{i,\,Q_i^{ns}}(t) \; d^{\,p_i}_{Q_i^{ns}}(t) \tag{10}$$
where Ni(t) is the normal search vector exerted on particle i at iteration t; ni(t) is the uncorrected normal search vector; α^o_{i,Q_i^ns}(t) and d^i_{Q_i^ns}(t) are, respectively, the unit vector and the minimum distance from particle i to the hyperplane Q_i^ns formed by the best locations of the neighbors of particle i; ns is the niche population size; d denotes the problem dimension. The second term is the compensation term intended to offset ni(t) when particle i is near the optimal location; pi is the personal best location of particle i so far; δ > 1 is a parameter that controls the strength of the compensation term; and d^{p_i}_{Q_i^ns}(t) is the minimum distance between pi and Q_i^ns.
As can be seen from Equation (8), the implementation of the normal search can be divided into two parts. The first term is the search vector of the particle along the normal direction of the corresponding hyperplane. The second term is the normal search correction based on an adaptive activation strategy: its freezing coefficient δ^{−(f(x_i(t)) − f(p_g(t)))} determines whether the correction is activated. Apparently, the second term is effective only when the current particle is near the optimum, so that the optimal region can capture this particle and properly handle the problem shown in Figure 4.
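A minimal Python sketch of the normal search computation in Equations (8)-(10) is given below. The hyperplane is assumed to be spanned by the d best neighbor locations, and the freezing coefficient delta**-(f(x_i) - f(p_g)) follows the description above; names and the exact compensation form are reconstructions, not the paper's verbatim implementation.

```python
import numpy as np

def hyperplane_foot(point, anchors):
    """Foot of the perpendicular from `point` to the hyperplane through the
    d anchor points (the best locations of the particle's neighbors)."""
    p0 = anchors[0]
    basis = (anchors[1:] - p0).T                 # d x (d-1) spanning directions
    coef, *_ = np.linalg.lstsq(basis, point - p0, rcond=None)
    return p0 + basis @ coef

def normal_search(x_i, p_i, anchors, f_xi, f_pg, delta=2.0):
    """Sketch of Equations (8)-(10): n_i points from particle i to the
    hyperplane (unit direction times distance); the second term, frozen by
    delta**-(f(x_i) - f(p_g)), offsets it near the optimum (Figure 4)."""
    n_i = hyperplane_foot(x_i, anchors) - x_i    # Equation (9)
    n_pi = hyperplane_foot(p_i, anchors) - p_i   # Equation (10)
    return n_i - delta ** -(f_xi - f_pg) * n_pi  # Equation (8), reconstructed
```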

3.2. Dynamic Particle Repulsion Strategy

When solving RMOPs such as finding the LSF, it is particularly important to ensure that the optimal solution set of the algorithm is uniformly distributed on the LSF at a certain density. To this end, a dynamic repulsion strategy is proposed to control the density of solutions. When the distance between two particles is smaller than a predefined repel distance drep, this strategy gives both particles opposite repulsive velocity increments to maintain a certain distance, improving the diversity and distribution uniformity of the particles. The repel velocity increment for each particle is shown in Equations (11) and (12).
$$r_i(t) = \begin{cases} 0, & \text{if } d^{\,i}_{nbest_1}(t) \ge d_{rep} \\[4pt] \beta^{o}_{nbest_1,\,i}(t) \left( d_{rep} - d^{\,i}_{nbest_1}(t) \right), & \text{if } d^{\,i}_{nbest_1}(t) < d_{rep} \end{cases} \tag{11}$$
where
$$\beta^{o}_{nbest_1,\,i}(t) = \frac{x_i(t) - x^{i}_{nbest_1}(t)}{\left\| x_i(t) - x^{i}_{nbest_1}(t) \right\|} \tag{12}$$
Here, ri(t) is the repulsion vector for particle i at iteration t; x^i_{nbest_1}(t) is the position of the closest neighbor of particle i at iteration t; d^i_{nbest_1}(t) is the distance from x^i_{nbest_1}(t) to xi(t); drep is the repulsion distance used to keep a certain distance between particles; β^o_{nbest_1,i}(t) is the unit vector from x^i_{nbest_1}(t) to xi(t); and ‖·‖ is the L2 norm operator.
Dynamic repulsion also has a side effect, in the same way as normal search. Supposing the optimal region of an RMOP is a curve inside the variable plane, as shown in Figure 5, the dynamic repulsion strategy may unexpectedly repel marginal particles from the optimal region, causing a convergence problem. In Figure 5, the left figure shows the expected result, while the right one shows the erroneous result caused by this problem.
A compensation term was therefore added to the dynamic particle repulsion strategy.
$$re_i(t) = \min_{j = 1, 2, \ldots, d} \left\{ e^{-\left( f(x^{i}_{nbest_j}(t)) - f(p_g(t)) \right)} \right\} \left( 1 - e^{-\left( f(x_i(t)) - f(p_g(t)) \right)} \right) \left( P_c(t) - P_i(t) \right) \tag{13}$$
where x^i_{nbest_j}(t) are the neighbors that form the hyperplane for particle i, and Pc(t) is the arithmetic average of the coordinates of x^i_{nbest_j}(t).
Apparently, this term is active only when the current particle is not optimal, and its best neighbors are optimal. Therefore, a compensation velocity increment will be added to a particle when it is repelled from the optimal region.
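A minimal sketch of Equations (11)-(13) follows. The gating exponentials are written so that they match the behavior described above (active only when the neighbors are optimal and the particle is not); taking P_i(t) as the particle's current position is an assumption.

```python
import numpy as np

def repulsion_increment(x_i, x_nb1, d_rep):
    """Equations (11)-(12): push particle i away from its closest neighbour
    whenever their distance falls below the repel distance d_rep."""
    diff = x_i - x_nb1
    dist = np.linalg.norm(diff)
    if dist >= d_rep or dist == 0.0:
        return np.zeros_like(x_i)
    return diff / dist * (d_rep - dist)          # unit vector times shortfall

def repulsion_compensation(x_i, f_xi, f_pg, nb_best, f_nb, p_i):
    """Sketch of Equation (13): pull a non-optimal particle toward the centroid
    P_c of its hyperplane neighbours so that marginal particles are not pushed
    off the optimal region (Figure 5). P_i is assumed to be the position."""
    gate_nb = np.min(np.exp(-(f_nb - f_pg)))     # ~1 when neighbours are optimal
    gate_i = 1.0 - np.exp(-(f_xi - f_pg))        # ~1 when particle is not optimal
    return gate_nb * gate_i * (nb_best.mean(axis=0) - p_i)
```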

3.3. Particle Memory Strategy

According to Equation (8), the personal best locations of particles are extremely important for normal search, since the guiding hyperplane is formed by the neighbors' best locations of each particle. Normally, a particle's personal best location is determined over the whole iteration history and, metaphorically speaking, the particle never forgets. However, outdated information may misguide the particles in RMOPs: in NSPSO, particles are guided by the hyperplane formed by their neighbors' best locations, and if the best location of a particle is too far from its current location, the formed hyperplane may fail to reflect the local features of the optimal region near the particle. Therefore, a particle memory strategy (Equation (14)) is introduced to address this problem. This strategy lets particles record only the information of the last k iterations, ensuring that the recorded information stays up to date.
$$p_i(t) = \begin{cases} \underset{\tau = 1, 2, \ldots, t}{\arg\min} \; f(x_i(\tau)), & \text{if } t < k \\[6pt] \underset{\tau = t-k, t-k+1, \ldots, t}{\arg\min} \; f(x_i(\tau)), & \text{if } t \ge k \end{cases} \tag{14}$$
where xi(τ) is the location of particle i at iteration τ, and k is the memory duration.
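A minimal sketch of the particle memory of Equation (14), using a fixed-length deque so only the last k iterations can define the personal best; all names are illustrative.

```python
from collections import deque
import numpy as np

class ParticleMemory:
    """Equation (14): the personal best is taken over the last k iterations
    only, so outdated locations cannot define the guiding hyperplane."""
    def __init__(self, k=25):
        self.history = deque(maxlen=k)            # (position, fitness) pairs

    def update(self, x, f):
        self.history.append((np.asarray(x, dtype=float).copy(), float(f)))

    def personal_best(self):
        return min(self.history, key=lambda h: h[1])[0]
```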

3.4. Density-Based Elite Breeding Strategy

Computational cost is a crucial factor in reliability analysis; therefore, controlling the overall number of evaluations is the key to the efficiency of the method. Since an RMOP is characterized by a continuous solution, one discovered optimal particle certainly indicates an optimal region nearby. By generating new particles near discovered optimal locations and continuing the iteration, local details can be refined. Further, by initializing a small number of scout particles at the beginning and adding more population near the best particles that were isolated during the iteration process, the computational efficiency can be greatly improved. On this basis, this paper proposes an optimization strategy called density-based elite breeding. The implementation steps are as follows:
(1)
Conditional judgment for particle breeding
Two conditions must be met for particle breeding:
  • A certain proportion of the particle population should reach the optima. A particle is considered optimal here when the difference between its function value and the global minimum is within a predetermined tolerance error.
  • The population size of the particles should be less than the maximum allowable population size NPend.
(2)
Density calculation
When the above two conditions are met, the density of the optimal particles is calculated first to determine the breeding candidates. The density of a particle is defined as the number of other particles within a certain distance Rcut. Apparently, a particle with low density has high breeding value. When the density of an optimal particle is lower than the breeding density limit Md, it is selected as a breeding candidate, and the breeding procedure is implemented for the 20% of candidate particles with the lowest density.
(3)
Reproduction procedure.
Randomly generate nb new particles within a radius Rcut of each selected candidate.
Through the density-based elite breeding strategy, distribution uniformity and computational efficiency are improved.
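The steps above can be sketched as follows. R_cut, M_d, n_b, and the 20% breeding fraction follow the text; the 50% optimal-proportion trigger is an assumed value for the unspecified 'certain proportion' in condition (1), and sampling offspring in a cube of half-width R_cut approximates the radius rule.

```python
import numpy as np

def elite_breeding(positions, fitness, f_best, eps_f=0.05, r_cut=0.5,
                   m_d=3, n_b=1, np_end=100, rng=None):
    """Hedged sketch of Section 3.4: breed low-density optimal particles."""
    rng = rng or np.random.default_rng()
    positions = np.asarray(positions, dtype=float)
    if len(positions) >= np_end:                  # condition 2: room to grow
        return positions
    optimal = np.abs(fitness - f_best) <= eps_f   # tolerance-error optima
    if optimal.mean() < 0.5:                      # condition 1 (assumed 50%)
        return positions
    # Density = number of other particles within R_cut of each particle.
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    density = (dists < r_cut).sum(axis=1) - 1
    candidates = np.where(optimal & (density < m_d))[0]
    # Breed the 20% of candidate particles with the lowest density.
    chosen = candidates[np.argsort(density[candidates])][:max(1, len(candidates) // 5)]
    offspring = [positions[i] + rng.uniform(-r_cut, r_cut, positions.shape[1])
                 for i in chosen for _ in range(n_b)]
    return np.vstack([positions] + offspring) if len(offspring) else positions
```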

3.5. NSPSO Sampling Algorithm for Reliability Analysis

To summarize, NSPSO is proposed to solve RMOPs and thereby address the candidate sample generation problem in DOEs. Figure 6 shows the flow chart of the candidate sample generation process using NSPSO.
The implementation steps of NSPSO for the candidate sample generation process are as follows:
Step 1: Initialize the population in state variable space and set all relevant parameters.
Step 2: Calculate the fitness value for each particle. In the engineering case part of this study, the fitness value in reliability analysis was calculated as |FS(x) − 1|, in which FS was computed through the slip surface stress method as described in the literature [28]. The global minimum pg(t) is also recorded in this step.
Step 3: Identify neighbor list for each particle according to ns.
Step 4: Use Equation (8) to calculate the normal search vector for each particle.
Step 5: Use Equation (11) to add repulsion vector to each particle.
Step 6: Fetch the historical data of each particle and identify the personal best location, as well as the corresponding function value, using the particle memory according to Equation (14).
Step 7: Update velocity vector and location vector for each particle using Equations (15)–(17):
$$v_i(t+1) = \begin{cases} \chi \left( v_i(t) + \varphi_1 r_1 \left( p_i(t) - x_i(t) \right)_N + \varphi_2 r_2 N_i(t) - r_i(t-1) \right) + r_i(t) + \alpha \, re_i(t), & \text{if } N_i(t) \ne 0 \\[4pt] \chi \left( v_i(t) - r_i(t-1) \right) + r_i(t), & \text{if } N_i(t) = 0 \end{cases} \tag{15}$$
where
$$(\cdot)_N = d^{\,N}_{\cdot} \; N^{o}_i(t) \tag{16}$$
$$x_i(t+1) = x_i(t) + v_i(t+1) \tag{17}$$
where 0.16199 < χ < 0.78540 is the constriction coefficient; φ1 = φ2 = 2.05 [29]; r1 and r2 are uniform random numbers in [0, 1]; α is between 0.2 and 0.5; (·)N is the projection of a vector onto the normal search vector Ni(t); d^N_· is the corresponding projected length; and N^o_i(t) is the unit vector of Ni(t).
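A sketch of the velocity and position update of Equations (15)-(17) is given below; the projection operator implements Equation (16), chi defaults to the standard constriction value of Clerc and Kennedy [29], and all names and defaults are illustrative assumptions.

```python
import numpy as np

def project_onto(v, n):
    """Equation (16): projection of vector v onto the normal search vector n."""
    n_unit = n / np.linalg.norm(n)
    return (v @ n_unit) * n_unit

def velocity_update(v, x, p, N, r_prev, r, re,
                    chi=0.7298, phi1=2.05, phi2=2.05, alpha=0.3, rng=None):
    """Sketch of Equations (15) and (17). alpha in [0.2, 0.5] weights the
    repulsion compensation term re; r1, r2 are uniform random numbers."""
    rng = rng or np.random.default_rng()
    if np.linalg.norm(N) > 0:
        cognitive = phi1 * rng.random() * project_onto(p - x, N)
        v_new = chi * (v + cognitive + phi2 * rng.random() * N - r_prev) \
                + r + alpha * re
    else:
        v_new = chi * (v - r_prev) + r
    return v_new, x + v_new                      # Equation (17)
```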
Step 8: Execute boundary check for both velocity and location [30] according to Equations (18) and (19), respectively:
$$v_i(t+1) = \begin{cases} v_i(t+1), & \text{if } \left\| v_i(t+1) \right\| \le v_{max} \\[4pt] \frac{v_i(t+1)}{\left\| v_i(t+1) \right\|} \times v_{max}, & \text{if } \left\| v_i(t+1) \right\| > v_{max} \end{cases} \tag{18}$$
where vmax is a predefined value of velocity boundary.
$$x_i(t+1)' = \begin{cases} x_{max} - \left( x_i(t+1) - x_{max} \right), & \text{if } x_i(t+1) > x_{max} \\[4pt] x_{min} + \left( x_{min} - x_i(t+1) \right), & \text{if } x_i(t+1) < x_{min} \end{cases} \tag{19}$$
where xi(t + 1)′ is the revised location after location boundary check; xmax and xmin are the predefined upper and lower bounds for all particles in the variable space, respectively.
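A minimal sketch of the boundary handling in Equations (18) and (19): the velocity is rescaled by its norm, and out-of-bound positions are mirrored back at the violated boundary.

```python
import numpy as np

def clip_velocity(v, v_max):
    """Equation (18): rescale the velocity when its norm exceeds v_max."""
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v / speed * v_max

def reflect_position(x, x_min, x_max):
    """Equation (19): mirror out-of-bound coordinates back into the space."""
    x = np.where(x > x_max, x_max - (x - x_max), x)
    x = np.where(x < x_min, x_min + (x_min - x), x)
    return x
```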
Step 9: Calculate particle densities and perform the elite breeding strategy as described in Section 3.4.
Step 10: Stop the iteration if the convergence criterion is met; otherwise, repeat Steps 2–9. The convergence criterion is: the function values of 95 percent of all particles are smaller than a predefined acceptable error, and the densities of all particles are bigger than Md.
Step 11: Archive the particle positions and use them as candidate samples in the DOEs.
Pseudocode for the NSPSO sampling algorithm is shown in Algorithm 1.
Algorithm 1 Pseudocode for the NSPSO sampling algorithm
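Since the published pseudocode of Algorithm 1 is provided as an image, a compact Python sketch of the Steps 1-11 loop is given below. It assumes the helper sketches from the previous subsections (normal_search, repulsion_increment, repulsion_compensation, ParticleMemory, velocity_update, clip_velocity, reflect_position, elite_breeding) are in scope; all default values are illustrative, and the convergence test is simplified from Step 10.

```python
import numpy as np

def nspso_sampling(f, lo, hi, np_init=20, np_end=100, eps_f=0.05,
                   max_fe=10_000, d_rep=0.24, ns=6, k=25, rng=None):
    """Hedged sketch of the NSPSO candidate sample generation loop."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = len(lo)
    x = rng.uniform(lo, hi, (np_init, d))            # Step 1: initialization
    v = np.zeros_like(x)
    r_prev = np.zeros_like(x)
    memory = [ParticleMemory(k) for _ in range(np_init)]
    v_max = 0.1 * np.linalg.norm(hi - lo)            # assumed velocity bound
    fe = 0
    while fe < max_fe:
        fit = np.array([f(xi) for xi in x])          # Step 2: fitness |G(x)|
        fe += len(x)
        f_pg = fit.min()                             # global best value
        for i, m in enumerate(memory):
            m.update(x[i], fit[i])                   # Step 6: particle memory
        new_x = x.copy()
        for i in range(len(x)):
            nb = np.argsort(np.linalg.norm(x - x[i], axis=1))[1:ns + 1]  # Step 3
            anchors = np.array([memory[j].personal_best() for j in nb[:d]])
            p_i = memory[i].personal_best()
            N = normal_search(x[i], p_i, anchors, fit[i], f_pg)          # Step 4
            r = repulsion_increment(x[i], x[nb[0]], d_rep)               # Step 5
            re = repulsion_compensation(x[i], fit[i], f_pg,
                                        x[nb[:d]], fit[nb[:d]], x[i])
            v[i], new_x[i] = velocity_update(v[i], x[i], p_i, N,
                                             r_prev[i], r, re)           # Step 7
            v[i] = clip_velocity(v[i], v_max)                            # Step 8
            new_x[i] = reflect_position(new_x[i], lo, hi)
            r_prev[i] = r
        if (np.abs(fit - f_pg) <= eps_f).mean() >= 0.95:  # Step 10 (simplified)
            x = new_x
            break
        x = elite_breeding(new_x, fit, f_pg, eps_f, np_end=np_end, rng=rng)  # Step 9
        grown = len(x) - len(v)
        if grown > 0:                                # extend state for offspring
            v = np.vstack([v, np.zeros((grown, d))])
            r_prev = np.vstack([r_prev, np.zeros((grown, d))])
            memory += [ParticleMemory(k) for _ in range(grown)]
    return x                                         # Step 11: candidate samples
```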

4. Numerical Examples

In the present study, four explicit performance functions and two practical engineering cases were employed to evaluate the effectiveness and robustness of the proposed candidate sample generation algorithm. The four explicit performance functions, including two two-dimensional and two three-dimensional problems, are used to observe the characteristics of the proposed method through visualization of the whole sampling process and the results of the algorithm. The two practical engineering cases include a two-layered saturated clay slope with two state variables and a two-layered slope with three state variables.

4.1. Comparative Analysis Settings

The proposed NSPSO is designed to deal with RMOPs such as finding the LSF, which can also be seen as a special multimodal optimization problem. Therefore, four algorithms, namely NMMSO [31], RS-CMSA-ESII [25], Multi_AMO [32], and DP-MSCC-ES [33], are chosen as competitors for the comparative study. Notably, RS-CMSA-ESII won the championship in the GECCO 2020 Competition on Niching Methods for Multimodal Optimization, and NMMSO won the CEC'2015 competition on multimodal optimization. The parameter settings of each comparison algorithm follow the corresponding recommendations. The main parameter settings of NSPSO are as follows:
  • ns = d + 4;
  • k = 25;
  • $d_{rep} = \frac{\sum_{i=1}^{d} \left( x_i^{max} - x_i^{min} \right)}{N_{P_{end}}}$, where $x_i^{min}$ and $x_i^{max}$ are the lower and upper limits of the i-th variable, respectively.
  • Bp = 50%.
  • nb = 1 for two-dimensional problems and nb = 2 for three-dimensional problems.
To make the comparison fair, NPend, the maximum number of function evaluations (MFE), and the acceptable error (Ɛf) were set to be the same for each method, as shown in Table 1. Since the optimization difficulty differs with dimension, the parameters were set differently for different dimensions. All algorithms were run fifty times for each example.

4.2. Performance Indicators

4.2.1. Standardized Inverse Generation Distance (IGD_S)

The inverse generation distance (IGD) [34] is an indicator from multi-objective optimization used to estimate the diversity and convergence ability of algorithms. To calculate the IGD, a reference solution set is defined in advance; then, the minimal distance from each point in the reference set to the solution set is calculated and averaged. For RMOPs, a standardized version of the IGD is proposed, as defined in Equation (20):
$$\mathrm{IGD\_S}(O, P) = \operatorname{std}\left( d(v, O) \right)_{v \in P} \tag{20}$$
where P is the reference point set evenly distributed on the optimal region; O denotes the acquired point set; v is a point in P; d(v, O) denotes the minimum distance between v and all points in O; and std(·) is the standard deviation operator.
A small IGD_S suggests that the resulting particle set evenly covers the optimal region.
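A minimal NumPy sketch of Equation (20), where O and P are arrays of shape (n, d) and (m, d), respectively:

```python
import numpy as np

def igd_s(O, P):
    """Equation (20): standard deviation of the minimum distance from each
    reference point v in P to the acquired set O; a value near zero means
    O covers the optimal region evenly."""
    dists = np.linalg.norm(P[:, None, :] - O[None, :, :], axis=-1)  # m x n
    return float(np.std(dists.min(axis=1)))
```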

4.2.2. Effective Optimal Ratio (EOR)

Clustering trends in the optimization process may create much redundant data for DOEs. In the training process of the surrogate model, sampled points at the same or similar locations contribute little to increasing the accuracy of the model. Therefore, to quantify the effective training samples acquired from the optimization process, the effective optimal ratio (EOR) is used:
$$\mathrm{EOR} = \frac{N_{eop}}{N_{P_{end}}} \tag{21}$$
where Neop is the number of effective particles in the resulting solution set. Points that cluster together, judged by their distances to the reference set, are counted as one effective optimal point in our study.
A bigger EOR indicates better convergence and more effective candidate samples for the DOEs.
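Equation (21) can be sketched as below; treating solution points that share the same nearest reference point as one effective optimal point is one concrete reading of the clustering rule described above, not the paper's exact procedure.

```python
import numpy as np

def eor(O, P, np_end):
    """Sketch of Equation (21): N_eop counts clusters of solution points,
    here approximated by the number of distinct nearest reference points."""
    nearest = np.linalg.norm(O[:, None, :] - P[None, :, :], axis=-1).argmin(axis=1)
    return len(np.unique(nearest)) / np_end
```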

4.3. Explicit Performance Functions

Four explicit performance functions were used for comparative study. Two of them are two-dimensional and the other two are three-dimensional. Corresponding equations and state variable ranges are shown in Table 2.
Since the LSF is the region where the performance function equals zero, to transform the LSF-finding problem into an RMOP, the first step is to take the absolute value of each performance function in Table 2 while keeping all other settings fixed. The proposed NSPSO and the four competitive algorithms were then employed to solve the transformed RMOPs. The resulting solutions are visualized in Figure 7, in which the red dots are the found optimal points while the lines and surfaces are the true optimal regions (i.e., the real LSFs). The figures show the converged result of each algorithm for each test function. It can be seen that the optimal solution obtained by NSPSO covers the entire LSF evenly. With more found optimal points and better distribution uniformity, there is a bigger chance that more sampling points lie near the LSF, which indicates a more accurate LSF predicted by the surrogate model. On the other hand, due to their clustering tendency and their neglect of the continuity of the RMOP solution, all competitive algorithms performed very poorly. The reason is that the proposed NSPSO takes the continuity of the RMOP solution into consideration and employs a multi-strategy framework to ensure the uniformity and diversity of the solution. Moreover, higher dimensions make the competitive algorithms perform even worse, since an extra dimension brings more spatial complexity to RMOPs, whereas the performance of NSPSO was almost unaffected. With better training samples, the resulting surrogate model can certainly achieve a better approximation of the LSF.
For a further quantitative comparison of NSPSO and the competitive algorithms, IGD_S and EOR were calculated for all resulting data, and the Wilcoxon rank-sum test [35] was conducted with the significance level set to 0.05. The calculation results are shown in Table 3 and Table 4. The symbols "+", "−", and "~" denote that NSPSO is significantly better than, worse than, or similar to the competing algorithm, respectively. The best algorithm for each problem is highlighted in the tables.
From Table 3 and Table 4, it can be seen that the IGD_S of NSPSO is far smaller than that of the other algorithms, denoting better diversity and convergence for NSPSO. As for the EOR, the far bigger values of NSPSO denote that it can find many more effective training samples on the LSF for establishing surrogate models.
Take the first test function (Z1) as an example. Figure 8 shows the optimization history of all algorithms, i.e., the recorded 10,000 positions at which the performance function was evaluated. Among the five algorithms, NSPSO shows the clearest outline of the LSF; most sampled points are evenly distributed on and around the LSF. The movement trajectories of the sampled points show a clear trend of the particles searching for the LSF along the shortest route and then spreading along the LSF. The search route is attributed to the normal search pattern, while the spreading behavior is attributed to the breeding strategy. The repulsion strategy makes sure that the whole LSF is covered and also extends the sampling range around the LSF. In this way, the contribution of the function evaluations is maximized. In contrast, the results of the competitive algorithms either show a strong clustering trend or are simply spread evenly over the whole variable space.
Moreover, NSPSO can easily control the sampling density by setting different repel distances drep, as shown in the comparative results in Figure 9. This is a useful feature for reliability analysis: when the computational cost of each function evaluation (for example, a single slope stability analysis in slope reliability analysis) is fixed, the overall computational cost can be controlled by altering the sampling density while maximizing the calculation accuracy.

4.4. Engineering Cases

4.4.1. Case 1: Double Slope, Multilayered, Two Random Variables

The first engineering case is a two-layered cohesive slope. The safety of the slope is controlled by two state variables, i.e., the cohesions of its two layers. The slope dimensions and layer conditions are shown in Figure 10, and the state variables of case 1 are shown in Table 5. Performance function evaluation was realized by the slip surface stress method described in the literature [1,36,37]. The stress field of the slope was obtained through an FEM model with 5728 elements.
The optimization results of all algorithms are shown in Figure 11. It can be seen that the result of NSPSO has more uniform and better coverage of the LSF, while the other competitive algorithms suffer from a clustering problem to a greater or lesser extent. The found LSF points show a clear pattern: when x2 (the strength of layer 2) is large enough, the safety of the slope is completely decided by x1 (the strength of layer 1), since the slip surface does not enter layer 2; when x2 is small, the slip surface passes through both layers, so the LSF is related to both x1 and x2. To show the superiority of the proposed method, a comparative analysis was also conducted as in Section 4.3. A large number of performance function evaluations were carried out to obtain the true LSF of case 1, illustrated as solid lines in Figure 11. Apparently, NSPSO outperformed all competitive algorithms. The performance indicator results in Table 6 also indicate that NSPSO produced the best coverage and diversity.

4.4.2. Case 2: Double Slope, Multilayered, Three Random Variables

The cross section and state variables of case 2 are shown in Figure 12 and Table 7, respectively [38]. The cohesion of layer 1 and the cohesion and friction angle of layer 2 are considered random, while the bulk densities are considered constant, making the problem three-dimensional. Performance function evaluation was realized by the slip surface stress method. The FEM model contains 3428 elements, as shown in Figure 12.
Figure 13 and Table 8 show the optimization results and performance indicators of all algorithms for case 2. The difference between NSPSO and the competitive algorithms becomes even more obvious than in case 1. The resulting LSF also shows a pattern: when the two strength parameters of layer 2 are large, the LSF is controlled solely by the cohesion x1, as illustrated by the plane on the left side of the figures.
In conclusion, through the comparative analysis of the explicit performance functions and the engineering cases, it can be concluded that NSPSO shows the best capability in the sampling process for SDOEs. The proposed method can provide candidate samples that cover every detail of the LSF and thereby support accurate results. However, when the LSF is highly nonlinear, the solution may be of rather inferior quality (for example, in the four corners in Figure 9b). This limitation will hopefully be addressed in our future study with the help of a grid strategy. Nevertheless, the present study still provides a novel and useful sampling method for active learning reliability analysis.

5. Discussion

From the comparative analysis in Section 4, it can be concluded that the proposed NSPSO provides solutions that evenly and completely cover the solution of an RMOP, which makes it suitable for candidate sample generation in active learning reliability analysis. The significance test results in Table 3, Table 4, Table 6 and Table 8 show that NSPSO is statistically better than state-of-the-art algorithms. This is because the proposed NSPSO exploits the defining characteristic of RMOPs, namely that the solution is continuous. The normal search mode leads the particles toward the potential solution region, avoiding unwanted particle clustering. The dynamic particle repulsion strategy guarantees the evenness of the particle distribution. The particle memory strategy improves the convergence of the method. The density-based elite breeding strategy greatly reduces the computational cost. On the other hand, existing algorithms cannot treat RMOPs well because they are designed to deal with problems with finitely many optimal solutions. From the results of the two engineering case studies, another advantage of the proposed NSPSO can also be inferred: its results are not strongly influenced by the dimension of the problem, which is important since high-dimensional problems are quite common in practice.
The complexity of the normal search and dynamic repulsion strategies can be calculated as O(i × d × Np²), where i represents the number of iterations, d is the number of variables, and Np is the population size. The computational complexity of the particle memory strategy is O(i × d × Np). The complexity of the density-based elite breeding strategy contains two parts, the selection process and the breeding process, which can be calculated as O(j × d × Np) and O(j × d × Np × Bp × nb), respectively, in which j is the number of breeding iterations. In conclusion, the overall complexity of NSPSO is O(i × d × Np² + i × d × Np + j × d × Np + j × d × Np × Bp × nb), which can finally be simplified to O(i × d × Np²); this is bigger than that of PSO (O(i × d × Np)). However, the density-based elite breeding strategy can greatly reduce the practical cost of NSPSO by using a much smaller initial population and increasing the population size during the iteration.
It should also be noted that we chose PSO with a niching technique as the base of our algorithm because of its concise particle behavior pattern and stable performance. However, the framework of the proposed NSPSO, especially the normal search pattern, is universal and can be integrated with various meta-heuristic algorithms, for example, the Butterfly Optimization Algorithm (BOA) [39], the Trees Social Relations optimization algorithm (TSR) [40], and the Whale Optimization Algorithm (WOA) [41].
There are also two limitations of the proposed NSPSO in dealing with candidate sample generation problems in active learning reliability analysis. First, for problems with a highly nonlinear LSF, the population size of NSPSO should be large enough to cover the LSF; if the population size is small, details of the LSF may be lost. Second, the breeding strategy and repulsion adjustment procedure should be further improved in order to deal with high-dimensional and highly complex problems.

6. Conclusions

Aiming at the candidate sample generation problem in active learning reliability analysis, the present study provides a novel optimization-based algorithm that uses a normal search pattern in its search logic. Integrating the normal search pattern with particle swarm optimization (PSO) and a multi-strategy framework, the normal search PSO (NSPSO) was presented. The proposed method considers the continuity of the solution of the optimization problem and thus provides discrete solutions that can evenly cover the real solution with arbitrary density. On this basis, the sampling process was considered as an optimization process in which the objective function was set as the absolute value of the performance function. Using the proposed method, the sampling points can evenly cover the LSF, resulting in accurate and efficient candidate sampling points. With the help of four explicit performance functions, two engineering cases, and four competitive state-of-the-art multi-modal optimization algorithms, the capability and superiority of the proposed NSPSO were demonstrated. The standard deviation of the inverse generation distance and the effective optimal ratio were employed as the performance indicators for the algorithm comparison study. The comparative analysis shows that NSPSO outperformed all state-of-the-art competitive methods and can provide uniformly distributed samples on the LSF. Additionally, the performance indicator results show that with a higher problem dimension or a more complex LSF, the advantage of NSPSO in accuracy and efficiency becomes more evident. For the two engineering cases with different numbers of variables, the proposed NSPSO produced the best sampling results. The proposed algorithm can be used as an accurate and efficient candidate sample generation method for reliability analysis based on active learning.

Author Contributions

Y.-l.Y.: Conceptualization, Methodology, Formal analysis, Writing—review & editing, Visualization. C.-m.H.: Writing—review & editing, Supervision. L.L.: Investigation, Software, Writing—Original Draft. J.X.: Resources, Funding acquisition. G.W.: Validation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China [grant number 52178302], the Key Research and Development Program of Shaanxi Province [grant number 2021SF-523], and the Natural Science Basic Research Program of Shaanxi [grant number 2022JQ-375].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the first author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, L.L.; Cheng, Y.M. System Reliability Analysis of Soil Slopes Using an Advanced Kriging Metamodel and Quasi–Monte Carlo Simulation. Int. J. Geomech. 2018, 18, 6018011–6018019. [Google Scholar] [CrossRef]
  2. Ji, J. A simplified approach for modeling spatial variability of undrained shear strength in out-plane failure mode of earth embankment. Eng. Geol. 2014, 183, 315–323. [Google Scholar] [CrossRef]
  3. Low, B.K. FORM, SORM, and spatial modeling in geotechnical engineering. Struct. Saf. 2014, 49, 56–64. [Google Scholar] [CrossRef]
  4. Remmerswaal, G.; Vardon, P.J.; Hicks, M.A. Evaluating residual dyke resistance using the Random Material Point Method. Comput. Geotech. 2021, 133, 104034. [Google Scholar] [CrossRef]
  5. Zhou, X.; Sun, Z. Quantitative assessment of landslide risk using Monte Carlo material point method. Eng. Comput. 2020, 37, 1577–1596. [Google Scholar] [CrossRef]
  6. Griffiths, D.V.; Fenton, G.A. Probabilistic Slope Stability Analysis by Finite Elements. J. Geotech. Geoenviron. Eng. 2004, 130, 507–518. [Google Scholar] [CrossRef]
  7. Li, X.; Liu, Y.; Yang, Z.; Meng, Z.; Zhang, L. Efficient slope reliability analysis using adaptive classification-based sampling method. Bull. Eng. Geol. Environ. 2021, 80, 8977–8993. [Google Scholar] [CrossRef]
  8. Echard, B.; Gayton, N.; Lemaire, M. AK-MCS: An active learning reliability method combining Kriging and Monte Carlo Simulation. Struct. Saf. 2011, 33, 145–154. [Google Scholar] [CrossRef]
  9. Wong, F.S. Slope Reliability and Response Surface Method. J. Geotech. Eng. Asce. 1985, 111, 32–53. [Google Scholar] [CrossRef]
  10. Li, X.Y.; Zhang, L.M.; Jiang, S.H.; Li, D.Q.; Zhou, C.B. Assessment of slope stability in the monitoring parameter space. J. Geotech. Geoenviron. Eng. 2016, 142, 4016029. [Google Scholar] [CrossRef]
  11. Ma, J.Z.; Zhang, J.; Huang, H.W.; Zhang, L.L.; Huang, J.S. Identification of representative slip surfaces for reliability analysis of soil slopes based on shear strength reduction. Comput. Geotech. 2017, 85, 199–206. [Google Scholar] [CrossRef]
  12. Ling, Q.; Zhang, Q.; Wei, Y.; Kong, L.; Zhu, L. Slope reliability evaluation based on multi-objective grey wolf optimization-multi-kernel-based extreme learning machine agent model. Bull. Eng. Geol. Environ. 2021, 80, 2011–2024. [Google Scholar] [CrossRef]
  13. Rahimi, M.; Wang, Z.; Shafieezadeh, A.; Wood, D.; Kubatko, E.J. Exploring Passive and Active Metamodeling-Based Reliability Analysis Methods for Soil Slopes: A New Approach to Active Training. Int. J. Geomech. 2020, 20, 4020001–4020009. [Google Scholar] [CrossRef]
  14. Xiao, N.-C.; Zuo, M.J.; Zhou, C. A new adaptive sequential sampling method to construct surrogate models for efficient reliability analysis. Reliab. Eng. Syst. Saf. 2018, 169, 169. [Google Scholar] [CrossRef]
  15. Shi, L.; Sun, B.; Ibrahim, D.S. An active learning reliability method with multiple kernel functions based on radial basis function. Struct. Multidiscip. Optim. 2019, 60, 211–229. [Google Scholar] [CrossRef]
  16. Kang, F.; Xu, Q.; Li, J. Slope reliability analysis using surrogate models via new support vector machines with swarm intelligence-ScienceDirect. Appl. Math. Model. 2016, 40, 6105–6120. [Google Scholar] [CrossRef]
  17. Samui, P.; Lansivaara, T.; Kim, D. Utilization relevance vector machine for slope reliability analysis. Appl. Soft. Comput. 2011, 11, 4036–4040. [Google Scholar] [CrossRef]
  18. Kang, F.; Li, J. Artificial Bee Colony Algorithm Optimized Support Vector Regression for System Reliability Analysis of Slopes. J. Comput. Civ. Eng. 2016, 30, 04015040. [Google Scholar] [CrossRef]
  19. Li, X.; Zhang, L.; Zhang, S. Efficient Bayesian networks for slope safety evaluation with large quantity monitoring information. Geosci. Front. 2017, 9, S1411469204. [Google Scholar] [CrossRef]
  20. Pan, Q.; Dias, D. An efficient reliability method combining adaptive Support Vector Machine and Monte Carlo Simulation. Struct. Saf. 2017, 67, 85–95. [Google Scholar] [CrossRef]
  21. Zeng, P.; Zhang, T.; Li, T.; Jimenez, R.; Sun, X. Binary classification method for efficient and accurate system reliability analyses of layered soil slopes. Georisk 2020, 16, 1–17. [Google Scholar]
  22. Yuan, Y.; Hu, C.; Li, L.; Mei, Y.; Wang, X. Regional-modal optimization problems and corresponding normal search particle swarm optimization algorithm. Swarm Evol. Comput. 2023, 78, 101257. [Google Scholar] [CrossRef]
  23. Houssein, E.H.; Gad, A.G.; Hussain, K.; Suganthan, P.N. Major Advances in Particle Swarm Optimization: Theory, Analysis, and Application. Swarm Evol. Comput. 2021, 63, 100868. [Google Scholar] [CrossRef]
  24. Li, Y.; Chen, Y.; Zhong, J.; Huang, Z. Niching particle swarm optimization with equilibrium factor for multi-modal optimization. Inf. Sci. 2019, 494, 233–246. [Google Scholar] [CrossRef]
  25. Ahrari, A.; Elsayed, S.; Sarker, R.; Essam, D.; Coello, C.A.C. Static and Dynamic Multimodal Optimization by Improved Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations. IEEE Trans. Evol. Comput. 2021, 26, 527–541. [Google Scholar] [CrossRef]
  26. Lu, H.; Sun, S.; Cheng, S.; Shi, Y. An adaptive niching method based on multi-strategy fusion for multimodal optimization. Memet. Comput. 2021, 13, 341–357. [Google Scholar] [CrossRef]
  27. Qu, B.Y.; Suganthan, P.N.; Das, S. A Distance-Based Locally Informed Particle Swarm Model for Multimodal Optimization. IEEE Trans. Evol. Comput. 2013, 17, 387–402. [Google Scholar] [CrossRef]
  28. Kim, J.Y.; Lee, S.R. An improved search strategy for the critical slip surface using finite element stress fields. Comput. Geotech. 1997, 21, 295–313. [Google Scholar] [CrossRef]
  29. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
  30. Helwig, S.; Branke, J.; Mostaghim, S. Experimental Analysis of Bound Handling Techniques in Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2013, 17, 259–271. [Google Scholar] [CrossRef]
  31. Fieldsend, J.E. Running Up Those Hills: Multi-modal search with the niching migratory multi-swarm optimiser. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014. [Google Scholar]
  32. Farshi, T.R. A memetic animal migration optimizer for multimodal optimization. Evol. Syst. 2022, 13, 133–144. [Google Scholar] [CrossRef]
  33. Xu, P.; Luo, W.; Xu, J.; Qiao, Y.; Zhang, J.; Gu, N. An alternative way of evolutionary multimodal optimization: Density-based population initialization strategy. Swarm Evol. Comput. 2021, 67, 100971. [Google Scholar] [CrossRef]
  34. Coello, C.A.C.; Cortes, N.C. Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190. [Google Scholar] [CrossRef]
  35. Capitani, L.D.; Martini, D.D. Reproducibility probability estimation and testing for the Wilcoxon rank-sum test. J. Stat. Comput. Simul. 2015, 85, 468–493. [Google Scholar] [CrossRef]
  36. Low, B.K.; Zhang, J.; Tang, W.H. Efficient system reliability analysis illustrated for a retaining wall and a soil slope. Comput. Geotech. 2010, 38, 196–204. [Google Scholar] [CrossRef]
  37. Zhang, J.; Huang, H.W.; Juang, C.H.; Li, D.Q. Extension of Hassan and Wolff method for system reliability analysis of soil slopes. Eng. Geol. 2013, 160, 81–88. [Google Scholar] [CrossRef]
  38. Cho, S.E. Probabilistic stability analyses of slopes using the ANN-based response surface. Comput. Geotech. 2009, 36, 787–797. [Google Scholar] [CrossRef]
  39. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  40. Alimoradi, M.; Azgomi, H.; Asghari, A. Trees Social Relations Optimization Algorithm: A New Swarm-Based Metaheuristic Technique to Solve Continuous and Discrete Optimization Problems. Math. Comput. Simul. 2022, 194, 629–664. [Google Scholar] [CrossRef]
  41. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
Figure 1. The reliability problem.
Figure 2. Transformed minimum optimization problem.
Figure 3. Point search (left) and normal search (right).
Figure 4. Vibration problem caused by normal search.
Figure 5. The over-repel problem.
Figure 6. Flowchart of the NSPSO sampling algorithm.
Figure 7. Optimal results of all algorithms.
Figure 8. Optimization history for all algorithms: (a) NMMSO; (b) Multi_AMO; (c) DP-MSCC-ES; (d) RSCMSAESII; (e) NSPSO.
Figure 9. Results from NSPSO with different sampling densities: (a) lower density; (b) higher density.
Figure 10. Geometry of the slope and statistics of soil parameters for engineering case 1.
Figure 11. Resulting optimal points for engineering case 1: (a) NMMSO; (b) Multi_AMO; (c) DP-MSCC-ES; (d) RSCMSAESII; (e) NSPSO.
Figure 12. Geometry of the slope in case 2.
Figure 13. Resulting optimal points for engineering case 2: (a) NMMSO; (b) Multi_AMO; (c) DP-MSCC-ES; (d) RSCMSAESII; (e) NSPSO.
Table 1. Setting of basic parameters.

Parameter | 2-D | 3-D
NPend | 100 | 400
Ɛf | 0.05 | 0.05
MFE | 10,000 | 50,000
Table 2. Basic information about the test functions.

No. | Function Equation | d | State Variable Range
1 | $Z_1 = G_1(x_1, x_2) = \min\left\{ 3 + 0.1(x_1 - x_2)^2 - \frac{x_1 + x_2}{\sqrt{2}};\; 3 + 0.1(x_1 - x_2)^2 + \frac{x_1 + x_2}{\sqrt{2}};\; (x_1 - x_2) + \frac{6}{\sqrt{2}};\; (x_2 - x_1) + \frac{6}{\sqrt{2}} \right\}$ | 2 | $x_1 \in [-6, 6]$, $x_2 \in [-6, 6]$
2 | $Z_2 = G_2(x_1, x_2) = 1 - (4 - 2.1x_1^2 + x_1^4/3)\,x_1^2 + x_1 x_2 + (4x_2^2 - 4)\,x_2^2$ | 2 | $x_1 \in [-2, 2]$, $x_2 \in [-1.2, 1.2]$
3 | $Z_3 = G_3(x_1, x_2, x_3) = \left( \cos(x_2)^2 e^{x_1/3} - 10 \sin(x_1 + x_2)\, x_3 \right)^2$ | 3 | $x_1 \in [-2, 2]$, $x_2 \in [-2, 2]$, $x_3 \in [-1, 1.5]$
4 | $Z_4 = G_4(x_1, x_2, x_3) = \left( 100 - x_1^2 - x_2^2 - x_3^2 \right)^2$ | 3 | $x_1 \in [-10, 10]$, $x_2 \in [-10, 10]$, $x_3 \in [-10, 10]$
Table 3. Statistical results of IGD_S of different algorithms for four explicit performance functions.

Algorithms |  | Z1 | Z2 | Z3 | Z4
NMMSO | mean | 0.6082 | 0.0900 | 1.1815 | 0.9949
NMMSO | std | (0.1643)+ | (0.2111)+ | (0.1177)+ | (0.0625)+
Multi_AMO | mean | 0.3026 | 0.2215 | 0.5006 | 1.2302
Multi_AMO | std | (0.2141)+ | (0.4230)+ | (0.0683)+ | (0.0514)+
DP-MSCC-ES | mean | 0.9509 | 0.2657 | 1.3537 | 1.2122
DP-MSCC-ES | std | (0.2898)+ | (0.1137)+ | (0.1625)+ | (0.0583)+
RSCMSAESII | mean | 1.5406 | 0.3214 | 3.2956 | 1.9924
RSCMSAESII | std | (0.1435)+ | (0.0352)+ | (0.1013)+ | (0.0539)+
NSPSO | mean | 0.1776 | 0.0241 | 0.1961 | 0.4031
NSPSO | std | (0.2573) | (1.4025) | (0.0597) | (0.0474)
+/−/~ |  | 4/0/0 | 4/0/0 | 4/0/0 | 4/0/0
Table 4. Statistical results of EOR of different algorithms for four explicit performance functions.

Algorithms |  | Z1 | Z2 | Z3 | Z4
NMMSO | mean | 0.2795 | 0.2230 | 0.1048 | 0.1783
NMMSO | std | (0.0980)+ | (0.3901)+ | (0.0725)+ | (0.0707)+
Multi_AMO | mean | 0.6270 | 0.3919 | 0.7226 | 0.1808
Multi_AMO | std | (0.0549)+ | (0.1003)+ | (0.0352)+ | (0.0642)+
DP-MSCC-ES | mean | 0.1255 | 0.2646 | 0.0496 | 0.1888
DP-MSCC-ES | std | (0.1745)+ | (0.3114)+ | (0.0887)+ | (0.0546)+
RSCMSAESII | mean | 0.0250 | 0.0597 | 0.0075 | 0.0083
RSCMSAESII | std | (0.3040)+ | (10.5494)+ | (0.0952)+ | (0.1928)+
NSPSO | mean | 0.8985 | 0.9908 | 0.9516 | 0.9333
NSPSO | std | (0.0337) | (0.0107) | (0.0136) | (0.0135)
+/−/~ |  | 4/0/0 | 4/0/0 | 4/0/0 | 4/0/0
Table 5. Statistical information of soil parameters for engineering case 1.

Slope Layer | Parameters | Mean | COV | Distribution Type
Top layer | c1 [kPa] | 120 | 0.3 | LogNormal
Top layer | γ1 [kN/m³] | 19 | 0 | Constant
Bottom layer | c2 [kPa] | 160 | 0.3 | LogNormal
Bottom layer | γ2 [kN/m³] | 19 | 0 | Constant
Table 6. Statistical results of IGD_S and EOR of different algorithms for engineering case 1 over 20 runs.

Algorithms |  | IGD_S | EOR
NMMSO | mean | 4.14 | 0.16
NMMSO | std | (0.78)+ | (0.02)+
Multi_AMO | mean | 5.56 | 0.19
Multi_AMO | std | (1.94)+ | (0.05)+
DP-MSCC-ES | mean | 4.65 | 0.26
DP-MSCC-ES | std | (1.48)+ | (0.03)+
RSCMSAESII | mean | 7.22 | 0.07
RSCMSAESII | std | (4.79)+ | (0.02)+
NSPSO | mean | 0.46 | 1.00
NSPSO | std | (0.05) | (0.00)
+/−/~ |  | 4/0/0 | 4/0/0
Table 7. Statistical information of soil parameters for engineering case 2.

Slope Layer | Parameters | Mean | COV | Distribution Type
Top layer | c1 [kPa] | 38.31 | 0.2 | Normal
Top layer | γ1 [kN/m³] | 19 | 0 | Constant
Bottom layer | c2 [kPa] | 23.94 | 0.2 | Normal
Bottom layer | φ2 [°] | 12 | 0.1 | Normal
Bottom layer | γ2 [kN/m³] | 19 | 0 | Constant
Table 8. Statistical results of IGD_S and EOR of different algorithms for engineering case 2 over 20 runs.

Algorithms |  | IGD_S | EOR
NMMSO | mean | 6.98 | 0.11
NMMSO | std | (2.88)+ | (0.01)+
Multi_AMO | mean | 4.00 | 0.12
Multi_AMO | std | (0.81)+ | (0.02)+
DP-MSCC-ES | mean | 4.64 | 0.07
DP-MSCC-ES | std | (0.83)+ | (0.01)+
RSCMSAESII | mean | 9.66 | 0.01
RSCMSAESII | std | (2.61)+ | (0.01)+
NSPSO | mean | 0.76 | 0.96
NSPSO | std | (0.04) | (0.01)
+/−/~ |  | 4/0/0 | 4/0/0