Reliability analysis has received significant attention in engineering practice because of its important role in safety design. The purpose of reliability analysis is to quantify and evaluate the uncertainty in the safety of a system [1]. Among the available methods, the commonly used first-order reliability method (FORM) [2] and second-order reliability method (SORM) [3] are only applicable to relatively simple engineering problems. Monte Carlo simulation (MCS)-based methods [4,5,6] provide an unbiased estimate of the failure probability (P_f) and are not limited by the implicit form or complexity of the performance function. However, MCS typically requires a very large number of performance-function evaluations (usually more than 10^5 [7]). For complex and highly nonlinear engineering problems, the cost of these numerical evaluations may become unacceptable. Therefore, it is necessary to achieve accurate reliability analysis at a much lower computational cost [8].
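To make this cost concrete, a brute-force MCS estimate of P_f can be sketched as follows; the two-variable performance function, input distributions, and sample size are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np

# Hypothetical performance function g(x1, x2); failure occurs when g < 0.
# This toy example only illustrates the crude MCS estimator, not the
# engineering problems discussed in the text.
def g(x1, x2):
    return x1 ** 2 + 2.0 * x2 - 5.0

rng = np.random.default_rng(0)
n_mcs = 10 ** 6                      # crude MCS commonly needs >1e5 evaluations
x1 = rng.normal(loc=2.0, scale=0.5, size=n_mcs)
x2 = rng.normal(loc=1.0, scale=0.5, size=n_mcs)

n_fail = np.count_nonzero(g(x1, x2) < 0.0)
pf = n_fail / n_mcs                  # P_f = N_{g<0} / N_MCS
print(f"Estimated P_f = {pf:.4e} from {n_mcs} performance-function calls")
```

Every one of the n_mcs samples requires a call to the performance function, which is exactly the expense that surrogate-based methods aim to avoid.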
Accordingly, methods that combine a surrogate model with MCS were proposed [9]. The core of these methods is the design of experiments (DOEs), which provides the training samples used to establish a surrogate model that characterizes the performance function. MCS is then performed on the surrogate model, instead of the actual performance function, to calculate the probability of failure. In these methods, the computational cost is reduced to the number of training samples obtained from the DOEs. The key to guaranteeing the accuracy of the surrogate MCS method lies in the goodness-of-fit of the surrogate model, which fundamentally depends on the selection of the training samples. In the early stage of DOEs research, a sampling method called one-shot design (OSD) was often used. In OSD, to simplify sample acquisition, a large number of training samples are obtained at once in the design space according to a certain sampling method, such as Latin hypercube sampling (LHS) [10,11,12]. New random samples may then be added depending on the MCS results. This kind of sampling is simple and straightforward but aimless. Since P_f is calculated as N_{z<0}/N_MCS, i.e., the ratio of the number of MCS realizations for which the surrogate model predicts a value smaller than zero to the total number of realizations, the surrogate model does not need to be accurate everywhere; an accurately predicted limit state function (LSF) is enough. Therefore, the ultimate objective of the surrogate model is to locate the LSF that divides the variable space into safe and unsafe subareas [7]. In this case, different positions in the variable space should be valued differently: sampled points closer to the LSF have higher potential contributions. The OSD method is therefore inefficient because its sampling process is totally random.

Sequential design of experiments (SDOEs) was then introduced to improve the calculation efficiency. In SDOEs, training samples are selected according to their potential value. A large candidate sample population is first generated, and a small number of samples (for example, ten) are selected to establish the initial surrogate model. Then, an active learning function (ALF) is used to iteratively add new sampled points from the large candidate population. During the iterations, P_f is calculated by MCS on the continuously updated surrogate model until the termination condition is met (see the sketch after this paragraph). Usually, the purpose of the ALF is to select points that are close to the LSF or that have greater prediction uncertainty; this is the key to guaranteeing the accuracy and efficiency of the SDOEs method. Because the Kriging model provides both a predicted value and a prediction variance, the SDOEs-based Kriging–MCS method is often used in reliability analysis [1,8,13]. Using the jackknife variance method, Xiao et al. [14] extended the application range of the SDOEs method by enabling surrogate models other than Kriging to estimate the variance at non-sampled points. Furthermore, the application of other models, such as radial basis function neural networks [15], support vector regression models [16,17,18], and Bayesian regression networks [19], has also been investigated. Among them, the SDOEs method that uses a support vector machine (SVM) as the surrogate model has attracted much attention, mainly because the calculation of P_f can be regarded as a binary classification problem in which the LSF serves as the classification boundary, and SVM excels at classifying sample data. Accordingly, Pan et al. [20,21] introduced a pool-based ALF and built the surrogate model based on an adaptive SVM model.
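As a point of reference for the loop described above, a minimal sketch of an SDOEs-style Kriging–MCS iteration is given below. It uses scikit-learn's Gaussian process regressor as the Kriging model together with the widely used U-type learning function and a U >= 2 stopping rule; the toy performance function, candidate population size, and thresholds are illustrative assumptions rather than the exact settings of the cited studies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical (expensive in practice) performance function; failure when g < 0.
def g(x):
    return x[:, 0] ** 3 + x[:, 1] + 6.0

rng = np.random.default_rng(1)
pool = rng.normal(size=(10000, 2))                     # candidate / MCS population
used = list(rng.choice(len(pool), 10, replace=False))  # small initial DOE
X, y = pool[used], g(pool[used])

for it in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    mean, std = gp.predict(pool, return_std=True)

    pf = np.count_nonzero(mean < 0.0) / len(pool)      # P_f = N_{z<0} / N_MCS
    u = np.abs(mean) / np.maximum(std, 1e-12)          # U learning function
    u[used] = np.inf                                   # do not re-select DOE points
    best = int(np.argmin(u))                           # most ambiguous candidate

    if u[best] >= 2.0:                                 # common U >= 2 stopping rule
        break
    used.append(best)
    X = np.vstack([X, pool[best]])
    y = np.append(y, g(pool[best:best + 1]))           # one new expensive evaluation

print(f"P_f = {pf:.4e} after {len(X)} performance-function evaluations")
```

The expensive performance function is evaluated only for the initial DOE and for each point selected by the learning function, which is what reduces the cost relative to crude MCS.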
Despite these substantial investigations, SDOEs still suffer from a shortcoming: the candidate samples are randomly generated in each iteration, so the results rely heavily on the initial sample population [14]. Different initial populations may lead to different sampling times and different results, which restricts the accuracy and stability of the method. To overcome these problems, a novel optimization-based method is proposed in this study to produce candidate samples with high potential value during the iterations of SDOEs. Since the LSF is the sub-area of the variable space where the performance function equals zero, taking the absolute value of the performance function as the objective function turns the search for the LSF into a minimization problem whose solution is a continuous region. Optimization problems are usually categorized, according to the number of solutions, as unimodal optimization problems (UMOPs) or multimodal optimization problems (MMOPs); optimization problems whose optimal solution is a continuous region have barely been investigated. In our study, we name this kind of problem a regional-modal optimization problem (RMOP) [22]. Furthermore, an innovative particle swarm optimization with a normal search pattern, called normal search particle swarm optimization (NSPSO), is introduced for the sampling process to reach an accurate and comprehensive approximation of the LSF. The proposed algorithm replaces the point search pattern normally used in optimization algorithms with a normal search pattern to account for the continuity of the solution. Further, a multi-strategy framework with three components is introduced to improve the performance of NSPSO and thereby increase the precision and completeness of the found LSF. Because the search for the LSF is treated as an optimization process, the sampling process is purposive and the sampled points approach the LSF with higher accuracy.
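Stated formally, with generic symbols g for the performance function and Ω for the variable space (not necessarily the paper's notation), the RMOP reads:

```latex
\min_{\mathbf{x} \in \Omega} \; \lvert g(\mathbf{x}) \rvert ,
\qquad
\arg\min_{\mathbf{x} \in \Omega} \lvert g(\mathbf{x}) \rvert
  \;=\; \{\, \mathbf{x} \in \Omega : g(\mathbf{x}) = 0 \,\}
  \;=\; \mathrm{LSF}
```

Provided the minimum value is zero, the set of global minimizers coincides with the LSF and forms a continuous region rather than isolated points, which is what distinguishes an RMOP from UMOPs and MMOPs.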