Article

Efficient Reliability Analysis of Structures Using Symbiotic Organisms Search-Based Active Learning Support Vector Machine

Department of Civil and Construction Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
* Author to whom correspondence should be addressed.
Buildings 2022, 12(4), 455; https://doi.org/10.3390/buildings12040455
Submission received: 27 February 2022 / Revised: 25 March 2022 / Accepted: 1 April 2022 / Published: 7 April 2022
(This article belongs to the Section Building Structures)

Abstract

Reliability-based design optimization considers the uncertainties inherent in the design of resilient buildings and structures. The major computational challenge is the high expense of the double-loop approach, in which the design optimization (outer loop) repeatedly calls the reliability analysis of each structural design (inner loop). An alternative is to convert the reliability constraints into deterministic constraints by using optimality conditions, yet the approximated results are often inaccurate when the constraint functions are highly non-linear, non-continuous, or non-differentiable. To achieve better accuracy while retaining sufficient flexibility, the present study proposes a new framework to classify structural designs into feasible and infeasible designs. The proposed framework, called SOS-ASVM, integrates the symbiotic organisms search (SOS) and an active-learning support vector machine (ASVM). The ASVM is adopted as the surrogate model, while SOS is used to seek more representative samples to improve the classification accuracy of the ASVM. The SOS-ASVM was validated through comparisons with popular classification tools: the conventional support vector machine, the artificial neural network, and the Kriging model. Three practical engineering cases are used to demonstrate the performance of the SOS-ASVM: a cantilever beam, a bracket structure, and a 25-bar space truss. The comparison results confirm the superiority of the proposed framework over the other tools.

1. Introduction

Structural design must comply with safety standards while minimizing cost. Deterministic design optimization (DDO) is usually the preferred method for obtaining an optimal design. In DDO, the design variables and parameters are treated as deterministic values. However, real-world structures are always affected by uncertainties, e.g., in loading conditions, material properties, and manufacturing tolerances. DDO accounts for these uncertainties indirectly through the partial safety factors specified in design codes. Because of this simplification, the solution may lie close to the constraint boundaries, thereby increasing the probability of failure [1].
One method for addressing these problems is reliability-based design optimization (RBDO). RBDO is computationally expensive, and numerous studies have therefore attempted to improve its efficiency. One approach to increase the computational efficiency of RBDO is to use a more efficient method to calculate structural reliability. The most probable point (MPP) concept is commonly used to calculate failure probability. However, MPP-based methods such as the first-order reliability method (FORM) and the second-order reliability method (SORM) are suboptimal for practical and complex limit state functions [2]. Separately, Monte Carlo simulation (MCS) and other simulation methods are commonly used to estimate failure probability. In contrast to MPP-based methods, MCS does not depend on the shape of the limit state function, but it requires extensive computation to simulate low-probability events [3].
Alternatively, subset simulation (SS), a relatively new method, estimates failure probability by using a sequence of conditional failure probabilities [4]. This method addresses the computational inefficiency of MCSs in handling small failure probability events. However, the precision of a result is sensitive to the parameters that govern the intermediate failure levels [5]. Another approach to increase the efficiency of RBDO involves modifying the integration of reliability analysis and design optimization. Three methods for solving this problem have been reported: double-loop [6,7,8,9,10], single-loop [11,12,13,14,15], and decoupled methods [16,17,18,19,20], shown in Figure 1. The double-loop method uses two nested loops, with the outer and inner loops performing optimization and reliability evaluation, respectively. This means that every optimization candidate must be subjected to a computationally expensive reliability analysis, which increases the computational cost of this method. In the single-loop method, the computationally expensive reliability analysis is replaced with an approximation function or surrogate model. The decoupled method attempts to decouple the two loops into a series of single loops by employing a specific strategy and solving the loops sequentially until a particular stopping condition, for which some convergence criteria are usually specified, is fulfilled [21]. Although the single-loop and decoupled methods significantly increase the efficiency of solving the RBDO problem, they do not necessarily guarantee convergence to a stable result [22]. Both methods introduce a tradeoff between accuracy and efficiency.
As mentioned, the single-loop method increases the computational efficiency of RBDO by using a computationally cheaper surrogate model in place of the expensive reliability analysis. A variety of surrogate models have been proposed for this purpose, such as the artificial neural network (ANN) [23], support vector machine (SVM) [24], response surface method (RSM) [25], and Kriging interpolation [26]. Various surrogate-based RBDO techniques have been developed in recent years to lessen the computational burden of reliability analysis. Liu et al. [27] proposed a new RBDO framework in which SVM and Kriging cooperate to find the optimum design point. Hawchar et al. [28] presented a Kriging-based model that addresses time-variant RBDO problems. Zhou and Lu [29] investigated the application of sparse polynomial chaos expansion complemented with an active learning technique. Shang et al. [30] combined the radial basis function with sparse polynomial chaos expansion to enhance the capability of model prediction. Fan et al. [31] performed reliability-based design optimization on crane system designs modeled with the Kriging model.
To achieve a high level of accuracy while maintaining adequate flexibility, the present study proposes a new framework to classify structural designs into feasible/infeasible designs without running a time-consuming reliability analysis. The proposed framework, called SOS-ASVM, integrates symbiotic organisms search (SOS), an active-learning support vector machine (ASVM), and MCS. SOS is a new and powerful metaheuristic algorithm that simulates the symbiotic interaction strategies used by organisms to survive in an ecosystem. The main advantage of SOS is that it does not require tuning parameters [32]. In preliminary research, several well-known optimizers such as the genetic algorithm (GA) and particle swarm optimization (PSO) were examined, and SOS gave the best performance among the investigated optimization techniques. We used an SVM because it has good learning capacity and generalization capability, even with a small sample set [33]. In addition, we developed an active-learning strategy to boost the efficiency of the SVM model. This strategy improves accuracy by actively selecting the most informative samples rather than picking samples randomly. The classification of the training samples is performed by MCS according to the pre-specified failure probability threshold.
The remainder of this paper is organized as follows. In Section 2, we define and explain the RBDO problem. Section 3 describes the components of the proposed framework, namely SOS and the ASVM. Section 4 explains the integrated SOS-ASVM framework in detail. In Section 5, we provide several examples to verify the performance of the SOS-ASVM. Our concluding remarks are given in Section 6.

2. Reliability-Based Design Optimization

In RBDO, the variables contributing to a structure’s performance can be divided into two types: design variables and random variables. Design variables represent the elements of a structure that may be picked from continuous or discrete selections. Random variables are used to account for the uncertainties that exist in the material and the loading condition of the structure.
The RBDO problem can be formulated as follows:
$\min_{X} C(X, R) \quad \mathrm{s.t.} \quad P_f(X, R) \le \overline{P}_f$   (1)
where $C(X, R)$ is the cost function, which includes the material cost or structural weight, $X$ denotes the design variables, $R$ denotes the random variables, $P_f(X, R)$ is the system failure probability function, and $\overline{P}_f$ is the specified failure probability threshold.
In RBDO, the selection of design variables is considered infeasible when the failure probability of the system exceeds a specified threshold. Finding the failure probability of a design is a time-consuming process, and given a considerable number of design combinations for optimization, it is vital to find a more efficient method to replace reliability analysis. We thus developed a framework to replace the time-consuming reliability analyses used to determine the feasibility of a design.
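To make the feasibility check in Equation (1) concrete, the following minimal sketch (in Python, not the authors' implementation) estimates the failure probability of a fixed design by crude Monte Carlo simulation and compares it against the threshold. The function names, the sampling interface, and the default sample size are assumptions for illustration only; this is exactly the expensive per-design evaluation that the proposed framework aims to avoid calling repeatedly.

```python
import numpy as np

def is_feasible(design, sample_random_vars, g, p_f_max=1e-3, n_mc=100_000, seed=0):
    """Crude Monte Carlo check of the reliability constraint P_f(X, R) <= P_f_max.

    design             -- vector of design variables X (held fixed during the check)
    sample_random_vars -- callable returning an (n_mc, d) array of realizations of R
    g                  -- limit state function; g(design, r) < 0 denotes failure
    """
    rng = np.random.default_rng(seed)
    r = sample_random_vars(n_mc, rng)                  # realizations of the random variables
    p_f = np.mean([g(design, ri) < 0 for ri in r])     # estimated failure probability
    return p_f <= p_f_max, p_f
```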

3. The SOS-ASVM Framework

The proposed SOS-ASVM framework consists of three components: SOS, the ASVM, and MCS. SOS and the ASVM are described in the following subsections; the complete integrated SOS-ASVM framework is then explained in Section 4.

3.1. Symbiotic Organisms Search

SOS is a simple and powerful metaheuristic that employs a population-based search strategy to identify the optimal solution for a given objective function.
Similar to other population-based metaheuristic algorithms, SOS begins with an initial population called the ecosystem. In the ecosystem, a group of organisms is generated randomly in the search space. In the next step, a new generation consisting of new organisms is generated by imitating the biological interactions between two organisms in an ecosystem. The following three phases that resemble the real-world biological interaction model are used in SOS: mutualism, commensalism, and parasitism. Each organism interacts with the other organisms randomly in all of these phases. The process is repeated until the termination condition is fulfilled. The entirety of this procedure is summarized in Figure 2.

3.1.1. Mutualism Phase

The mutualism phase simulates mutualistic interactions that benefit both participants. An example of such interactions is evident in the relationship between bees and flowers. Bees benefit by gathering nectar and pollen from flowers. Flowers benefit because the pollination performed by the bees helps the flowers to reproduce.
In the mutualism phase, organism Xi is matched randomly with organism Xj. The new candidate solutions are then calculated based on the mutualistic relationship modeled using Equations (2)–(4).
$X_i^{new} = X_i + \mathrm{rand}(0, 1) \times (X_{best} - MV \times BF_1)$   (2)
$X_j^{new} = X_j + \mathrm{rand}(0, 1) \times (X_{best} - MV \times BF_2)$   (3)
$MV = \dfrac{X_i + X_j}{2}$   (4)
Here, the benefit factors BF1 and BF2 are determined randomly as either 1 or 2. These factors simulate whether an organism partially or fully benefits from an interaction. Xbest represents the best organism within the current ecosystem, and rand (0, 1) is a uniformly distributed random number between 0 and 1.

3.1.2. Commensalism Phase

The commensalism phase simulates a commensalistic relationship in which one participant is benefitted and the other is generally unaffected. An example of such a relationship is that between sharks and remora fish. The remora attaches itself to the shark and benefits by eating the scraps of food left by the shark. The shark is unaffected by the remora’s activities and receives almost no benefit from the relationship.
Similar to the mutualism phase, in the commensalism phase, two organisms are randomly selected. However, in this scenario, only organism Xi benefits from the interaction. The new candidate solution is then calculated using Equation (5):
$X_i^{new} = X_i + \mathrm{rand}(-1, 1) \times (X_{best} - X_j)$   (5)
where Xbest represents the best organism within the current ecosystem, and rand (−1, 1) is a uniformly distributed random number between −1 and 1.

3.1.3. Parasitism Phase

The parasitism phase simulates parasitic interactions in which one participant is benefitted and the other is harmed. An example of such a relationship is that between a mosquito and its host. The mosquito benefits by feeding on its host’s blood, whereas the host may be harmed by receiving a deadly disease caused by a pathogen the mosquito carries.
In SOS, organism Xi plays a role similar to that of the Anopheles mosquito. Organism Xi creates an artificial parasite by copying and mutating itself. The parasite is then matched randomly with organism Xj, which serves as the host to the newly created parasite. If the parasite has a better fitness value than the host, it replaces organism Xj in the ecosystem. However, if the fitness value of organism Xj is superior to that of the parasite, the parasite is removed from the ecosystem.
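To make the three phases concrete, the sketch below (Python, not the authors' implementation) performs one SOS generation for a generic minimization problem. The greedy acceptance rule and the random-dimension mutation used in the parasitism phase are common choices assumed here for illustration; the ecosystem, bounds, and objective function are supplied by the caller.

```python
import numpy as np

def sos_generation(eco, fit, f, lb, ub, rng):
    """One SOS generation for minimizing f; eco is an (n, d) array of organisms
    and fit holds their current objective values."""
    n, d = eco.shape

    def try_accept(i, cand):
        cand = np.clip(cand, lb, ub)
        fc = f(cand)
        if fc < fit[i]:                        # greedy replacement (keep the better organism)
            eco[i], fit[i] = cand, fc

    for i in range(n):
        best = eco[np.argmin(fit)]             # best organism in the current ecosystem

        # Mutualism phase: organisms i and j both move toward the best organism
        j = rng.choice([k for k in range(n) if k != i])
        mv = (eco[i] + eco[j]) / 2.0                                  # mutual vector, Eq. (4)
        bf1, bf2 = rng.integers(1, 3, size=2)                         # benefit factors, 1 or 2
        try_accept(i, eco[i] + rng.random(d) * (best - mv * bf1))     # Eq. (2)
        try_accept(j, eco[j] + rng.random(d) * (best - mv * bf2))     # Eq. (3)

        # Commensalism phase: only organism i benefits from organism j
        j = rng.choice([k for k in range(n) if k != i])
        try_accept(i, eco[i] + rng.uniform(-1, 1, d) * (best - eco[j]))   # Eq. (5)

        # Parasitism phase: a mutated copy of i tries to displace a random host j
        j = rng.choice([k for k in range(n) if k != i])
        parasite = eco[i].copy()
        mutate = rng.random(d) < 0.5                        # mutate a random subset of dimensions
        parasite[mutate] = rng.uniform(lb, ub, d)[mutate]
        fp = f(np.clip(parasite, lb, ub))
        if fp < fit[j]:                                     # parasite replaces the host...
            eco[j], fit[j] = np.clip(parasite, lb, ub), fp
        # ...otherwise the parasite is simply discarded

    return eco, fit
```

Repeating sos_generation until a termination condition is met (e.g., a maximum number of generations) yields the optimizer used later in Step 3 of the proposed framework.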

3.2. Support Vector Machine

An SVM is a highly efficient machine learning technique that has been used in many applications and fields for its classification and pattern-recognition abilities. Fundamentally, an SVM uses training data samples to construct a hyperplane that can separate a given set of data samples into their respective categories. This section provides a brief overview of SVMs; a more comprehensive account can be found in [34].
In the present study, an SVM is used to replace the time-consuming reliability analysis. To this end, the SVM is used to develop a hyperplane that can predict the feasibility of each design parameter vector based on the failure probability computed by means of MCS. In this case, each vector is assigned a label that indicates its feasibility.
Consider a two-class problem: a set of N training samples Xi in d-dimensional space, each with a label indicator yi equal to either 1 or −1, as illustrated in Figure 3, is used to build a separating hyperplane. The SVM finds the optimal way to assign the training samples to the two classes, where the optimal hyperplane is the one with the maximum margin. The linear hyperplane can be formulated as follows:
$w \cdot X + b = 0$   (6)
where w is the normal vector of the hyperplane and b is the bias parameter. All the training samples should satisfy the following constraints:
$y_i (w \cdot X_i + b) - 1 \ge 0$   (7)
This constraint ensures that no sample lies within the margin. The margin width can be defined as $2 / \|w\|$. Therefore, the determination of the optimal parameters w and b can be formulated as the following optimization problem:
$\min_{w, b} \dfrac{\|w\|^2}{2} \quad \mathrm{s.t.} \quad y_i (w \cdot X_i + b) - 1 \ge 0, \; i = 1, \ldots, N$   (8)
When the data are not linearly separable, the SVM is extended to deal with this problem by using the soft margin method. The inequality constraint is relaxed by introducing a slack variable ξ i that penalizes each misclassification in the training dataset. The original problem is then transformed into the following Equation (9):
$\min_{w, b} \dfrac{\|w\|^2}{2} + C \sum_{i=1}^{N} \xi_i \quad \mathrm{s.t.} \quad y_i (w \cdot X_i + b) \ge 1 - \xi_i, \; i = 1, \ldots, N$   (9)
where C is a regularization parameter. The parameter C in the soft margin method permits misclassification, and stricter separation between classes can be achieved by increasing the value of C.
The optimization problems given in Equations (8) and (9) are quadratic programming (QP) problems and can be solved using existing optimization solvers. The results obtained include the optimal w and b and the Lagrange multipliers $\alpha_i$. Nonzero Lagrange multipliers exist only at the points closest to the hyperplane; only these points contribute to the construction of the SVM hyperplane, and they are called support vectors. A new arbitrary point X can be classified using Equation (10):
$y(X) = \operatorname{sign}\left[ \sum_{i=1}^{N_{SV}} \alpha_i y_i X_i^{T} X + b \right]$   (10)
where NSV is the number of support vectors, which represents a small fraction of the total number of training samples.
The SVM can also be extended to non-linear cases in which the samples cannot be separated using the linear hyperplane. The main idea is to project the data points onto a higher-dimensional space (feature space) where they can be separated linearly. A kernel function is used to execute the transformation required to solve this problem. Many types of kernel function are available, and the Gaussian kernel used in this paper is defined as:
$K(X_i, X) = \exp\left( -\dfrac{\|X_i - X\|^2}{2\sigma^2} \right)$   (11)
where σ is the adjustable width parameter of the Gaussian kernel. With the addition of the kernel function to the SVM, Equation (10) for the classification of a new arbitrary point X can be rewritten as:
$y(X) = \operatorname{sign}\left[ \sum_{i=1}^{N_{SV}} \alpha_i y_i K(X_i, X) + b \right]$   (12)
Notably, the effectiveness of the SVM strongly depends on the selection of its hyper-parameters.
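As an illustration of Equations (6)–(12), the short sketch below trains a soft-margin SVM with a Gaussian kernel using scikit-learn. It is a generic example rather than the authors' code; the training data and the parameter values (C, gamma) are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: 40 design vectors with feasibility labels in {-1, +1}
rng = np.random.default_rng(1)
X_train = np.vstack([0.5 * rng.random((20, 2)),          # designs labeled infeasible
                     0.5 + 0.5 * rng.random((20, 2))])   # designs labeled feasible
y_train = np.array([-1] * 20 + [1] * 20)

# Soft-margin SVM with a Gaussian (RBF) kernel; C is the regularization parameter
# and gamma plays the role of the kernel width (gamma = 1 / (2 * sigma**2)).
clf = SVC(kernel="rbf", C=1e6, gamma=1.0)
clf.fit(X_train, y_train)

# Classify a new arbitrary point (Eq. (12)) and inspect its signed distance
# to the hyperplane, i.e., the value of sum(alpha_i * y_i * K(x_i, x)) + b.
x_new = np.array([[0.4, 0.7]])
print(clf.predict(x_new))
print(clf.decision_function(x_new))
```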

3.3. Active-Learning Support Vector Machine

The primary difference between an active learner and a passive learner lies in how they enrich the training set. The active learner enriches its training set by actively picking the most informative samples to continuously improve the model's performance, whereas the passive learner enriches its training set randomly. The active learner can therefore outperform its counterpart in terms of efficiency because it can build the model with as few training samples as possible. Active learning for classification was proposed by Lewis and Gale [35] for text classification; they proposed that the samples used for training should be those with the highest probability of being misclassified. Song et al. [36] proposed an active-learning SVM (ASVM) for calculating the failure probability; in that framework, the new samples are chosen from the candidate samples within the margin with the maximum distance to the nearest existing training sample. Pan and Dias [37] proposed an active learning scheme similar to the scheme developed by Song, Choi, Lee, Zhao, and Lamb [36]. However, the learning function is modified to find the sample that is closest to the SVM boundary, which need not be inside the margin, and that has the maximum distance from its nearest existing training sample.

4. The Integrated SOS-ASVM Framework

In this section, the proposed integrated SOS-ASVM framework is introduced. To continuously improve the SVM model, an active-learning strategy is implemented so that the SVM actively selects its training samples instead of picking them randomly. This active-learning strategy has been shown to be more efficient than its passive counterpart [37]. The main concept behind the ASVM is the inclusion of training samples that contain the most information about, and have the most substantial influence on, the shape of the hyperplane. In this regard, the samples near the hyperplane, as opposed to those relatively far from it, gain importance because the SVM hyperplane is influenced only by its support vectors. The density of the next set of training samples can also be considered in the ASVM because a sparse area holds more new information than a densely sampled area does. To further increase the efficiency of the framework, SOS is used to efficiently navigate the search space and identify the best sample based on these criteria. The interaction between the components of the SOS-ASVM is illustrated in Figure 4.
A flowchart describing the integrated SOS-ASVM framework is depicted in Figure 5. The entire process of the proposed framework comprises the following steps:
1.
Generate the initial training samples. The initial training samples should contain samples from both classes because the ASVM uses the available information to select the next sample. To capture the overall behavior of the search space, we used Latin hypercube design (LHD) to enforce uniformity in the samples. The initial samples are subsequently evaluated using MCS to determine the feasibility of each design through comparison with the predetermined threshold. According to our experiments, 20–50 initial training samples are suggested, depending on the complexity of the problem. The process of generating the initial samples is illustrated in Figure 6a.
2.
Construct the SVM model based on the current training samples. The construction of the SVM hyperplane is illustrated in Figure 6b.
3.
SOS is employed to find the next sample for enriching the current SVM model. As stated earlier regarding the ASVM, the best sample to add is the one carrying the most information and the strongest influence on the model. Therefore, SOS is used to find the sample closest to the hyperplane and farthest from the current training samples. The objective function of SOS can be formulated as follows:
$f(X) = \min_X \dfrac{s(X)}{d(X)}$   (13)
$s(X) = \left| \sum_{i=1}^{N_{SV}} \alpha_i y_i K(X_i, X) + b \right|$   (14)
$d(X) = \| X - X_{nearest} \|$   (15)
where $s(X)$ is a function that calculates the representative distance between point X and the SVM hyperplane, and $d(X)$ is the distance function for calculating the distance between point X and the nearest sample within the current training samples ($X_{nearest}$). After the best candidate is identified, the sample is evaluated using MCS and classified according to its feasibility. The process of finding the next optimal sample by using SOS is illustrated in Figure 6c.
4.
Enrich and reconstruct the SVM model with optimization of the hyperparameter after every nth iteration. The samples obtained in the previous step are added to the pool of training samples to update the SVM model. To improve the efficiency of the framework, the hyperparameter is updated after every nth addition of new samples. In this paper, the variable n is set to 2, but it can be adjusted to improve either the effectiveness or efficiency of the framework. The process of reconstructing the SVM hyperplane is illustrated in Figure 6d.
5.
The SOS-ASVM framework is terminated when the stopping condition is fulfilled. The stopping conditions, such as the maximum number of iterations and a convergence limit, can be defined by the user. If the stopping condition is not met, the framework returns to Step 3.
Notably, the SOS-ASVM is proposed herein to improve traditional surrogate-based reliability analyses, which deliver less satisfactory results when faced with more complex limit state functions, such as discontinuous or non-linear ones. Moreover, throughout the proposed framework, the time-consuming MCS is used only to evaluate the training samples; in Step 3, the framework uses the classification ability of the SVM model instead, which is computationally far less expensive. A schematic sketch of this active-learning loop is given below.
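The sketch outlines Steps 1–5 under several simplifying assumptions: `evaluate_feasibility` is a stand-in for the MCS-based feasibility evaluation, `sos_minimize` is any SOS-driven minimizer of the learning function (for instance, the generation sketch in Section 3.1 wrapped in a loop), the learning function is taken as the ratio s(X)/d(X) following Equations (13)–(15), the periodic hyperparameter re-tuning of Step 4 is omitted for brevity, and a fixed iteration budget stands in for the user-defined stopping condition of Step 5.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVC

def sos_asvm(evaluate_feasibility, lb, ub, sos_minimize,
             n_init=30, n_iter=100, gamma=1.0, seed=0):
    """Schematic SOS-ASVM loop (Steps 1-5).

    evaluate_feasibility(x) -- expensive MCS-based oracle returning +1 (feasible)
                               or -1 (infeasible) for a design vector x
    sos_minimize(f, lb, ub) -- SOS-driven minimizer of a scalar function f over [lb, ub]
    """
    d = len(lb)

    # Step 1: Latin hypercube design for the initial training samples
    # (the initial set is assumed to contain both classes, as required in Step 1)
    lhs = qmc.LatinHypercube(d=d, seed=seed)
    X = qmc.scale(lhs.random(n_init), lb, ub)
    y = np.array([evaluate_feasibility(x) for x in X])

    for _ in range(n_iter):
        # Steps 2 and 4: (re)construct the SVM classifier on the current samples
        clf = SVC(kernel="rbf", C=1e6, gamma=gamma).fit(X, y)

        # Step 3: learning function f(x) = s(x) / d(x), after Eqs. (13)-(15)
        def learning_fn(x):
            s = abs(clf.decision_function(x.reshape(1, -1))[0])   # closeness to hyperplane
            dist = np.min(np.linalg.norm(X - x, axis=1))          # distance to nearest sample
            return s / (dist + 1e-12)

        x_next = np.asarray(sos_minimize(learning_fn, lb, ub))

        # Evaluate the new sample with the expensive oracle and enrich the training set
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate_feasibility(x_next))

    # Step 5 is reduced here to a fixed iteration budget; return the final classifier
    return SVC(kernel="rbf", C=1e6, gamma=gamma).fit(X, y)
```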

5. Case Study

The performance of the proposed SOS-ASVM framework was tested on three practical structural examples, namely, a cantilever beam, a bracket structure, and a 25-bar space truss. A feasibility constraint was set for each problem to ensure that the probability of failure did not exceed $1 \times 10^{-3}$:
$P(G(X, R) < 0) \le 1 \times 10^{-3}$   (16)
For each of the problems, the results obtained using the SOS-ASVM were compared with those obtained with traditional surrogate models that use SVM, ANN, and Kriging models. To this end, the classification accuracies of these models on randomly generated test samples were compared.

5.1. Experimental Setup

To make a fair comparison, all the models start with the same group of random samples, called the "training" dataset. In every iteration, all the models except the SOS-ASVM add a randomly picked sample to the "training" dataset, whereas the SOS-ASVM actively picks the best sample to add. The "training" dataset is used to train each surrogate model. The models are then tested on the "test" dataset to distinguish the samples between two classes (feasible or infeasible). In this study, the "test" dataset contains $10^{5}$ randomly generated samples. The "training" and "test" datasets are generated by sampling the combinations of design variables according to their individual distributions. The classification error is calculated by the equation below:
$err = \dfrac{n_{miss}}{n_{total}} \times 100\%$   (17)
where nmiss is the number of misclassified samples in the “test” dataset and ntotal is the total number of samples in the “test” dataset.
The improvement of each surrogate model is calculated to show the level of improvement the current model achieved compared with the worst one among the SVM, ANN, and Kriging models. The improvement is calculated by the equation below:
$imp = \dfrac{|err - err_{worst}|}{err_{worst}} \times 100\%$   (18)
where err is the classification error of the current model and $err_{worst}$ is the classification error of the worst-performing model. For example, if the worst model yields 20% error and the current model yields 10%, the improvement is $|10\% - 20\%| / 20\% \times 100\% = 50\%$. Since metaheuristic algorithms have an inherent stochastic property, multiple runs of the SOS-ASVM were considered.
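The two metrics can be computed directly; a minimal sketch, reproducing the worked example above, is given below.

```python
import numpy as np

def classification_error(y_pred, y_true):
    """err = n_miss / n_total * 100%  (Eq. (17))."""
    return np.mean(np.asarray(y_pred) != np.asarray(y_true)) * 100.0

def improvement(err, err_worst):
    """imp = |err - err_worst| / err_worst * 100%  (Eq. (18))."""
    return abs(err - err_worst) / err_worst * 100.0

# Worked example from the text: err = 10%, err_worst = 20%  ->  imp = 50%
print(improvement(10.0, 20.0))   # 50.0
```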

5.2. Parameter Selection

The present SOS-ASVM adopts the Gaussian kernel according to preliminary runs. The C parameter is set to infinity to enforce a strict hyperplane and give the model more stability. Lastly, the remaining kernel width parameter γ is determined using five-fold cross-validation.
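A minimal sketch of this hyperparameter selection is given below, assuming scikit-learn's GridSearchCV with a placeholder dataset and an illustrative grid of γ values; a very large C approximates the strict (hard-margin) setting described above.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder training samples with feasibility labels in {-1, +1}
rng = np.random.default_rng(0)
X = np.vstack([0.5 * rng.random((20, 2)),          # infeasible designs
               0.5 + 0.5 * rng.random((20, 2))])   # feasible designs
y = np.array([-1] * 20 + [1] * 20)

# Five-fold cross-validation over the kernel width only; C is kept very large
# to approximate the strict separation described above.
search = GridSearchCV(SVC(kernel="rbf", C=1e9),
                      param_grid={"gamma": np.logspace(-3, 3, 13)},
                      cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_["gamma"])
```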
The SOS-ASVM is compared with several popular surrogate models: SVM, ANN, and Kriging. The parameters of these models are set as follows. For the SVM, the hyper-parameter was tuned using the same treatment as that for the proposed framework, i.e., five-fold cross-validation. A multilayer feedforward backpropagation network, one of the most well-known and widely used ANN paradigms [38], was selected as the ANN architecture. This ANN structure consists of an input layer, a hidden layer, and an output layer. There is no general rule for selecting the optimal number of neurons in the hidden layer; therefore, in this study, we selected (2n + 1) neurons for the hidden layer, where n denotes the number of neurons in the input layer [39]. A logistic transfer function is used to transfer the values of the input layer nodes to the hidden layer nodes, whereas a linear transfer function is adopted to transfer the values from the hidden layer to the output layer. Lastly, for the Kriging model, ordinary Kriging and the Gaussian correlation function were adopted in this paper.

5.3. Cantilever Beam

The first example was a simple cantilever beam under a point load [40], as illustrated in Figure 7. The assumed normally distributed random variables of the cantilever beam were as follows: concentrated load (P) = N(20, 1.2) kN, beam length (L) = N(400, 1.0) mm, and strength of material (R) = N(200, 10) MPa. The design variables, beam width (B) and beam depth (H), were selected from continuous values with a maximum of 100 mm. The probability of failure for this problem was formulated as follows:
$P_f = P(G(X, R) < 0) = P\!\left( R - \dfrac{6PL}{BH^2} < 0 \right)$   (19)
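As an illustration of how a single design (B, H) is labeled feasible or infeasible, the following sketch estimates Equation (19) by crude MCS with the stated distributions; the sample size and the example design values are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def cantilever_pf(B, H, n_mc=1_000_000, seed=0):
    """MCS estimate of P_f = P(R - 6 P L / (B H^2) < 0)  (Eq. (19)).

    B, H : beam width and depth in mm (design variables)
    Units: P in N and L in mm, so 6 P L / (B H^2) is a bending stress in MPa.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(20e3, 1.2e3, n_mc)   # concentrated load, N  (20 +/- 1.2 kN)
    L = rng.normal(400.0, 1.0, n_mc)    # beam length, mm
    R = rng.normal(200.0, 10.0, n_mc)   # material strength, MPa
    g = R - 6.0 * P * L / (B * H**2)    # limit state; g < 0 denotes failure
    return np.mean(g < 0)

# A design is labeled feasible when the estimated P_f does not exceed 1e-3
print(cantilever_pf(B=60.0, H=90.0))    # illustrative design values
```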
The performance of the SOS-ASVM was compared with that of other popular surrogate modeling methods in terms of classification accuracy and computational effort. The best result achieved by each method using 200 samples is listed in Table 1. The classification error of the proposed SOS-ASVM framework is merely 0.22%, the lowest among the four methods, yielding an average improvement of 89.460% over the classification error of ANN, the worst of the four methods. The whole process took the SOS-ASVM about 2.968 h to complete.
Figure 8 compares the classification errors of the four methods as the number of training samples increases. The proposed SOS-ASVM framework achieves a much lower classification error and converges more quickly than the other models. The classification error of ANN fluctuates significantly without any sign of convergence. SVM and Kriging are similar in terms of convergence speed, both with much better accuracy than ANN. Note that oscillation in the classification error is to be expected because the addition of an extra sample may not always improve the accuracy; however, increasing the number of training samples gradually leads to better accuracy.

5.4. Bracket Structure

The next case is a bracket structure [41], shown in Figure 9. The bracket structure is loaded with a concentrated load (P) at its right tip and by its own weight due to gravity (g). The design and random variables are presented in Table 2. Two failure events are considered for this problem: maximum stress and buckling. The system fails when either failure event occurs.
The maximum stress ($\sigma_{max}$) should not exceed the yielding stress of the material ($f_y$). The failure event can be formulated as follows:
$G_1(X, Z) = f_y - \sigma_{max}$   (20)
where:
$\sigma_{max} = \dfrac{6 M_B}{w_{CD} t^2}$   (21)
$M_B = \dfrac{P L}{3} + \dfrac{\rho g w_{CD} t L^2}{18}$   (22)
The second failure event considers the buckling effect, where the maximum axial load ($F_{max}$) in each member should not exceed the Euler critical buckling load ($F_{buckling}$) (neglecting its own weight). Therefore, the second limit state function can be expressed as:
$G_2(X, Z) = F_{buckling} - F_{max}$   (23)
where:
$F_{buckling} = \dfrac{\pi^2 E I}{L_{AB}^2} = \dfrac{9 \pi^2 E t w_{AB}^3 \sin^2\theta}{48 L^2}$   (24)
$F_{max} = \dfrac{1}{\cos\theta}\left( \dfrac{3P}{2} + \dfrac{3 \rho g w_{CD} t L}{4} \right)$   (25)
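For reference, the two limit state functions can be evaluated directly from Equations (20)–(25). In the sketch below, theta denotes the inclination of member AB shown in Figure 9 (its value is not reproduced here), and all inputs, including the realizations of the random variables, are supplied by the caller in SI units; this is an illustrative transcription, not the authors' code.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def bracket_limit_states(P, E, fy, rho, L, w_ab, w_cd, t, theta):
    """Limit state functions G1 (stress) and G2 (buckling), Eqs. (20)-(25).

    SI units assumed throughout: P in N, E and fy in Pa, rho in kg/m^3,
    lengths in m, theta (inclination of member AB) in radians.
    """
    # Maximum bending stress in beam CD (Eqs. (21)-(22))
    M_B = P * L / 3.0 + rho * G * w_cd * t * L**2 / 18.0
    sigma_max = 6.0 * M_B / (w_cd * t**2)
    g1 = fy - sigma_max                                   # Eq. (20)

    # Euler buckling of member AB (Eqs. (24)-(25))
    F_buckling = 9.0 * np.pi**2 * E * t * w_ab**3 * np.sin(theta)**2 / (48.0 * L**2)
    F_max = (3.0 * P / 2.0 + 3.0 * rho * G * w_cd * t * L / 4.0) / np.cos(theta)
    g2 = F_buckling - F_max                               # Eq. (23)

    return g1, g2   # the system fails if either value is negative
```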
The proposed SOS-ASVM framework is compared with other popular surrogate models. The best result achieved by each method is listed in Table 3. Here, the SOS-ASVM achieved a classification error of 2.01%, significantly better than all the other models. The Kriging model has the worst performance, with the highest classification error of 5.906%. Compared with the Kriging method (the worst of the four methods), the SOS-ASVM yields over 65.9% improvement in classification error. The whole process took the SOS-ASVM about 3.459 h to complete.
Figure 10 shows the classification error of each method as a function of the number of training samples used. The classification error of ANN fluctuates significantly and shows no sign of convergence. The best classification errors of the SOS-ASVM and SVM in Table 3 are quite close; however, as seen in Figure 10, the proposed SOS-ASVM framework converged much faster. Fast convergence is particularly desirable when dealing with time-consuming reliability analysis, as it reduces the number of samples needed to train the model.

5.5. 25-Bar Space Truss

The performance of the proposed SOS-ASVM framework was verified by applying it to the 25-bar space truss problem [42], as illustrated in Figure 11. The truss members were selected from the 72 choices of standard-sized hollow pipe listed in Table 4, which also lists the outside diameter (D), thickness (t), and area (A) of these pipes. The truss was subjected to two normal loads under condition 1 and two loads with an uncertain component under condition 2, as summarized in Table 5. The modulus of elasticity (E) was set to $2 \times 10^{5}$ N/mm². The 25 bars were divided into six groups, where the same hollow bar was assigned to the members of the same group. The grouping in this problem, summarized in Table 6, was intended to ease the connection between the truss bars and minimize errors during construction. The following random variables were used in this problem: loads P1 and P2, cross-sectional areas A1–A6, and yield stresses Fy1–Fy25. Details of the properties of these random variables are listed in Table 7.
In this problem, a failure event was defined as occurring when one or more truss components exceeded the allowable/yield stress, and it is expressed as the following equation:
$|\sigma_i| > F_y$   (26)
where σi denotes the axial stress of the i-th member in the structure, and Fy is the yield stress.
The effect of buckling on compressed members was also considered to be one of the failure conditions of the structure. Buckling refers to a sudden change in the shape of a structural component under compression. This deformation may cause complete loss of the member's load-carrying capacity and eventually lead to failure (collapse) of the entire system. The critical buckling stress is modeled by Equations (27) and (28):
$F_{cr} = \left( 0.658^{\lambda_c^2} \right) F_y \quad (\lambda_c \le 1.5)$   (27)
$F_{cr} = \left( \dfrac{0.877}{\lambda_c^2} \right) F_y \quad (\lambda_c > 1.5)$   (28)
where $F_{cr}$ denotes the critical buckling stress, and $\lambda_c$ denotes the slenderness ratio, which can be calculated using Equation (29):
$\lambda_c = \dfrac{K L}{\pi r} \sqrt{\dfrac{F_y}{E}}$   (29)
$r = \sqrt{\dfrac{I}{A}}$   (30)
where K is the effective length factor, L the unbraced length, r the radius of gyration, and E the modulus of elasticity. The radius of gyration is calculated using Equation (30), where I denotes the area moment of inertia, and A denotes the cross-sectional area of the member.
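A small sketch of Equations (27)–(30) is given below; the member properties in the example call (K, L, r) are illustrative values, not data taken from Table 4.

```python
import numpy as np

def critical_buckling_stress(K, L, r, Fy, E):
    """Critical buckling stress per Eqs. (27)-(29).

    K: effective length factor, L: unbraced length, r: radius of gyration,
    Fy: yield stress, E: modulus of elasticity (consistent units, e.g. N and mm).
    """
    lam = (K * L / (np.pi * r)) * np.sqrt(Fy / E)   # slenderness ratio, Eq. (29)
    if lam <= 1.5:
        return (0.658 ** lam**2) * Fy               # inelastic buckling, Eq. (27)
    return (0.877 / lam**2) * Fy                    # elastic buckling, Eq. (28)

# Example with the truss data: E = 2e5 N/mm^2, Fy = 250 N/mm^2 (0.250 GPa mean)
print(critical_buckling_stress(K=1.0, L=2000.0, r=75.0, Fy=250.0, E=2e5))
```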
Like the previous examples, the SOS-ASVM is compared with other popular surrogate models in terms of classification accuracy and computational effort. The best result achieved by each method is listed in Table 8. The performance of the SOS-ASVM is substantially better than that of the other models; it achieves a low classification error of 3.887%, representing a 52.135% improvement relative to Kriging, the worst of the four methods. The whole process took the SOS-ASVM about 2.954 h to complete.
Figure 12 shows the classification error yielded by all the methods as a function of the number of samples. As seen in Figure 12, the SOS-ASVM outperforms the other methods in terms of classification error and convergence speed. This problem addresses many practical issues, such as multiple failure conditions, non-linear limit state function, and discontinuous design variables. It highlights the advantage of the SOS-ASVM in practical situations, where the other surrogate modeling methods are less than satisfactory.

5.6. Summary

Table 9 summarizes the performance of all four methods. For each case, the methods are ranked by their classification error. The overall ranking is determined from the total rank obtained by summing the ranks across the case studies; the method with the lowest total rank is the best. In all three cases, the SOS-ASVM obtains the lowest classification error among the compared surrogate models, highlighting that the SOS-ASVM is more consistent than the other surrogate modeling methods: ANN, SVM, and Kriging.

6. Conclusions

This paper introduces a new framework, called SOS-ASVM, that combines three components (SOS, ASVM, and MCS) into one cohesive whole. The framework was developed to improve the accuracy and efficiency of surrogate-based models that replace the time-consuming process of reliability analysis. The concept of the ASVM is adopted to actively select samples and improve model performance, while SOS is employed to efficiently navigate the search space to find the best samples, which are then evaluated using MCS before being incorporated into the model.
The proposed SOS-ASVM framework was applied to three practical problems: a cantilever beam, a bracket structure, and a 25-bar space truss. The results were used to compare the SOS-ASVM with traditional surrogate-based models: the SVM, ANN, and Kriging methods. In all of the presented examples, the proposed SOS-ASVM framework was more effective and efficient in classifying the feasibility of the design solutions, providing much lower classification errors than the other methods: 0.22% for the cantilever beam, 2.01% for the bracket structure, and 3.89% for the 25-bar space truss. The difference between the SOS-ASVM and the other methods is most pronounced in the last example, the 25-bar space truss, where the SOS-ASVM achieved a 3.89% classification error compared with SVM (6.02%), ANN (7.27%), and Kriging (8.12%). This indicates that the proposed SOS-ASVM framework becomes more attractive as problem complexity increases.

Author Contributions

Conceptualization, I.-T.Y. and H.P.; methodology, I.-T.Y. and H.P.; software, H.P.; validation, I.-T.Y. and H.P.; resources, I.-T.Y.; data curation, H.P.; writing—original draft preparation, H.P.; writing—review and editing, I.-T.Y. and H.P.; visualization, H.P.; supervision, I.-T.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially funded by the Ministry of Science and Technology, Taiwan under Grant no. 110-2221-E-011-032-MY3.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohsine, A.; Elhami, A. A Robust Study of Reliability-Based Optimisation Methods under Eigen-frequency. Comput. Methods Appl. Mech. Eng. 2010, 199, 1006–1018. [Google Scholar] [CrossRef]
  2. Eamon, C.D.; Charumas, B. Reliability estimation of complex numerical problems using modified conditional expectation method. Comput. Struct. 2011, 89, 181–188. [Google Scholar] [CrossRef] [Green Version]
  3. Bichon, B.; Mahadevan, S.; Eldred, M. Reliability-Based Design Optimization Using Efficient Global Reliability Analysis. In Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Structures, Structural Dynamics, and Materials and Co-Located Conferences, Palm Springs, CA, USA, 4–7 May 2009. [Google Scholar]
  4. Au, S.-K.; Beck, J.L. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Eng. Mech. 2001, 16, 263–277. [Google Scholar] [CrossRef] [Green Version]
  5. Sen, D.; Chatterjee, A. Subset simulation with Markov chain Monte Carlo: A review. J. Struct. Eng. 2013, 40, 142–149. [Google Scholar]
  6. Hao, P.; Ma, R.; Wang, Y.; Feng, S.; Wang, B.; Li, G.; Xing, H.; Yang, F. An augmented step size adjustment method for the performance measure approach: Toward general structural reliability-based design optimization. Struct. Saf. 2019, 80, 32–45. [Google Scholar] [CrossRef]
  7. Hao, P.; Wang, Y.; Liu, X.; Wang, B.; Li, G.; Wang, L. An efficient adaptive-loop method for non-probabilistic reliability-based design optimization. Comput. Methods Appl. Mech. Eng. 2017, 324, 689–711. [Google Scholar] [CrossRef]
  8. Hao, P.; Wang, Y.; Liu, C.; Wang, B.; Wu, H. A novel non-probabilistic reliability-based design optimization algorithm using enhanced chaos control method. Comput. Methods Appl. Mech. Eng. 2017, 318, 572–593. [Google Scholar] [CrossRef]
  9. Keshtegar, B.; Hao, P.; Meng, Z. A self-adaptive modified chaos control method for reliability-based design optimization. Struct. Multidiscip. Optim. 2017, 55, 63–75. [Google Scholar] [CrossRef]
  10. Meng, Z.; Li, G.; Wang, B.P.; Hao, P. A hybrid chaos control approach of the performance measure functions for reliability-based design optimization. Comput. Struct. 2015, 146, 32–43. [Google Scholar] [CrossRef]
  11. Choi, S.-H.; Lee, G.; Lee, I. Adaptive single-loop reliability-based design optimization and post optimization using constraint boundary sampling. J. Mech. Sci. Technol. 2018, 32, 3249–3262. [Google Scholar] [CrossRef]
  12. Meng, Z.; Zhang, Z.; Zhang, D.; Yang, D. An active learning method combining Kriging and accelerated chaotic single loop approach (AK-ACSLA) for reliability-based design optimization. Comput. Methods Appl. Mech. Eng. 2019, 357, 112570. [Google Scholar] [CrossRef]
  13. Keshtegar, B.; Hao, P. Enhanced single-loop method for efficient reliability-based design optimization with complex constraints. Struct. Multidiscip. Optim. 2018, 57, 1731–1747. [Google Scholar] [CrossRef]
  14. Meng, Z.; Yang, D.; Zhou, H.; Wang, B.P. Convergence control of single loop approach for reliability-based design optimization. Struct. Multidiscip. Optim. 2018, 57, 1079–1091. [Google Scholar] [CrossRef]
  15. Meng, Z.; Keshtegar, B. Adaptive conjugate single-loop method for efficient reliability-based design and topology optimization. Comput. Methods Appl. Mech. Eng. 2019, 344, 95–119. [Google Scholar] [CrossRef]
  16. Torii, A.J.; Lopez, R.H.; Miguel, L.F.F. A general RBDO decoupling approach for different reliability analysis methods. Struct. Multidiscip. Optim. 2016, 54, 317–332. [Google Scholar] [CrossRef]
  17. Meng, Z.; Zhou, H. New target performance approach for a super parametric convex model of non-probabilistic reliability-based design optimization. Comput. Methods Appl. Mech. Eng. 2018, 339, 644–662. [Google Scholar] [CrossRef]
  18. Faes, M.G.R.; Valdebenito, M.A. Fully decoupled reliability-based design optimization of structural systems subject to uncertain loads. Comput. Methods Appl. Mech. Eng. 2020, 371, 113313. [Google Scholar] [CrossRef]
  19. Yu, S.; Wang, Z. A general decoupling approach for time-and space-variant system reliability-based design optimization. Comput. Methods Appl. Mech. Eng. 2019, 357, 112608. [Google Scholar] [CrossRef]
  20. Li, G.; Yang, H.; Zhao, G. A new efficient decoupled reliability-based design optimization method with quantiles. Struct. Multidiscip. Optim. 2020, 61, 635–647. [Google Scholar] [CrossRef]
  21. Lopez, R.; Beck, A. Reliability-Based Design Optimization Strategies Based on FORM: A Review. J. Braz. Soc. Mech. Sci. Eng. 2012, 34, 506–514. [Google Scholar] [CrossRef] [Green Version]
  22. McDonald, M.; Mahadevan, S. Design Optimization With System-Level Reliability Constraints. J. Mech. Des. 2008, 130, 21403. [Google Scholar] [CrossRef]
  23. Cardoso, J.B.; de Almeida, J.R.; Dias, J.M.; Coelho, P.G. Structural reliability analysis using Monte Carlo simulation and neural networks. Adv. Eng. Softw. 2008, 39, 505–513. [Google Scholar] [CrossRef]
  24. Bourinet, J.M.; Deheeger, F.; Lemaire, M. Assessing small failure probabilities by combined subset simulation and Support Vector Machines. Struct. Saf. 2011, 33, 343–353. [Google Scholar] [CrossRef]
  25. Winkelmann, K.; Górski, J. The use of response surface methodology for reliability estimation of composite engineering structures. J. Theor. Appl. Mech. 2014, 52, 1019–1032. [Google Scholar] [CrossRef] [Green Version]
  26. Gaspar, B.; Teixeira, A.P.; Soares, C.G. Assessment of the efficiency of Kriging surrogate models for structural reliability analysis. Probabilistic Eng. Mech. 2014, 37, 24–34. [Google Scholar] [CrossRef]
  27. Liu, X.; Wu, Y.; Wang, B.; Ding, J.; Jie, H. An adaptive local range sampling method for reliability-based design optimization using support vector machine and Kriging model. Struct. Multidiscip. Optim. 2017, 55, 2285–2304. [Google Scholar] [CrossRef]
  28. Hawchar, L.; El Soueidy, C.-P.; Schoefs, F. Global kriging surrogate modeling for general time-variant reliability-based design optimization problems. Struct. Multidiscip. Optim. 2018, 58, 955–968. [Google Scholar] [CrossRef]
  29. Zhou, Y.; Lu, Z. Active Polynomial Chaos Expansion for Reliability-Based Design Optimization. AIAA J. 2019, 57, 5431–5446. [Google Scholar] [CrossRef]
  30. Shang, X.; Ma, P.; Yang, M.; Chao, T. An efficient polynomial chaos-enhanced radial basis function approach for reliability-based design optimization. Struct. Multidiscip. Optim. 2021, 63, 789–805. [Google Scholar] [CrossRef]
  31. Fan, X.; Wang, P.; Hao, F. Reliability-based design optimization of crane bridges using Kriging-based surrogate models. Struct. Multidiscip. Optim. 2019, 59, 993–1005. [Google Scholar] [CrossRef]
  32. Cheng, M.-Y.; Prayogo, D. Symbiotic Organisms Search: A new metaheuristic optimization algorithm. Comput. Struct. 2014, 139, 98–112. [Google Scholar] [CrossRef]
  33. Li, H.-S.; Lü, Z.-z.; Yue, Z.-f. Support vector machine for structural reliability analysis. Appl. Math. Mech. 2006, 27, 1295–1303. [Google Scholar] [CrossRef]
  34. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  35. Lewis, D.; Gale, W. A Sequential Algorithm for Training Text Classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Dublin, Ireland, 3–6 July 1994; pp. 3–12. [Google Scholar] [CrossRef]
  36. Song, H.; Choi, K.; Lee, I.; Zhao, L.; Lamb, D. Adaptive Virtual Support Vector Machine for the Reliability Analysis of High-Dimensional Problems. Struct. Multidiscip. Optim. 2013, 47, 479–491. [Google Scholar] [CrossRef]
  37. Pan, Q.; Dias, D. An efficient reliability method combining adaptive Support Vector Machine and Monte Carlo Simulation. Struct. Saf. 2017, 67, 85–95. [Google Scholar] [CrossRef]
  38. Cheng, J. An artificial neural network based genetic algorithm for estimating the reliability of long span suspension bridges. Finite Elem. Anal. Des. 2010, 46, 658–667. [Google Scholar] [CrossRef]
  39. Deng, J.; Gu, D.; Li, X.; Yue, Z. Structural reliability analysis for implicit functions using artificial neural networks. Struct. Saf. 2005, 27, 25–48. [Google Scholar] [CrossRef]
  40. Jiang, X.; Li, H.; Long, H. Reliability-based robust optimization design for cantilever structure. In Proceedings of the the 2nd International Conference on Information Science and Engineering, Hangzhou, China, 4–6 December 2010; pp. 1402–1404. [Google Scholar]
  41. Dubourg, V.; Sudret, B.; Bourinet, J.-M. Reliability-based design optimization using kriging surrogates and subset simulation. Struct. Multidiscip. Optim. 2011, 44, 673–690. [Google Scholar] [CrossRef] [Green Version]
  42. Yang, I.T.; Hsieh, Y.-H.; Kuo, C.-G. Integrated multiobjective framework for reliability-based design optimization with discrete design variables. Autom. Constr. 2016, 63, 162–172. [Google Scholar] [CrossRef]
Figure 1. RBDO approaches.
Figure 2. Symbiotic organisms search flowchart.
Figure 3. Illustration of linear SVM classifier separating the two classes.
Figure 4. Interaction diagram of SOS-ASVM.
Figure 5. Flowchart of the SOS-ASVM framework.
Figure 6. SOS-ASVM workflow example.
Figure 7. Cantilever beam problem.
Figure 8. Comparison of classification error for a cantilever beam.
Figure 9. Bracket structure problem.
Figure 10. Comparison of classification error for bracket structure.
Figure 11. The 25-bar space truss problem.
Figure 12. Comparison of classification error for 25-bar space truss.
Table 1. Best classification accuracy achieved for cantilever beam.
Method   | Avg. err (%) | Std. err | imp (%)
SOS-ASVM | 0.220        | 0.011    | 89.460
ANN      | 2.087        | -        | -
Kriging  | 1.423        | -        | 31.833
SVM      | 0.758        | -        | 63.666
Table 2. Characteristics of random variables for bracket structure.
Random Variable | Description           | Distribution | Mean Value | c.o.v.
P               | Load (kN)             | Gumbel       | 100        | 0.15
E               | Young's modulus (GPa) | Gumbel       | 200        | 0.08
fy              | Yielding stress (GPa) | Normal       | 0.225      | 0.08
ρ               | Density (kg/m³)       | Normal       | 7860       | 0.1
L               | Length (m)            | Normal       | 5          | 0.05
wAB             | Width of beam AB (m)  | Normal       | 0.1–0.3    | 0.05
wCD             | Width of beam CD (m)  | Normal       | 0.1–0.3    | 0.05
t               | Height of beam (m)    | Normal       | 0.1–0.3    | 0.05
Table 3. Best classification accuracy achieved for bracket structure.
Method   | Avg. err (%) | Std. err | imp (%)
SOS-ASVM | 2.011        | 0.032    | 65.948
ANN      | 3.067        | -        | 48.062
Kriging  | 5.906        | -        | -
SVM      | 2.108        | -        | 64.310
Table 4. Choice of design variable.
Bar No.D (mm)T (mm)A (mm2)Bar No.D (mm)T (mm)A (mm2)
121.72123.837216.374602.7
227.22158.338216.385235.2
327.22.3179.939216.38.25360.9
4342.3229.140267.464927.3
542.72.3291.941267.46.65407.6
642.72.5315.742267.475726.5
748.62.3334.543267.486519.4
848.62.5362.144267.497306.1
948.62.8402.945267.49.37540.9
1048.63.2456.446318.565890.5
1160.52.3420.547318.56.96754.6
1260.53.257648318.587803.7
1360.5471049318.598750.9
1476.32.8646.550318.510.319982.2
1576.33.2734.951355.66.47021.1
1676.34908.552355.67.98629.4
1789.12.8759.153355.699799.9
1889.13.2863.654355.69.510,329.4
19101.63.2989.255355.61212,953.4
20101.641226.556406.47.99890.2
21101.651517.457406.4911,236.2
22114.33.21116.958406.49.511,845.5
23114.33.51218.359406.41214,868.5
24114.34.51552.360406.412.715,707.9
25114.35.61912.461406.41619,623.6
26139.83.61540.462457.2912,672.6
27139.841706.563457.29.513,361.7
28139.84.51912.864457.21216,783.6
29139.862522.165457.212.717,734.8
30165.24.52271.866457.21622,177.1
31165.252516.4675087.912,411.8
32165.263000.868508914,108.9
33165.27.13526.5695089.514,877.8
34216.34.52994.3705081218,698.8
35216.35.83835.67150812.719,761.6
36216.363964.1725081421,727.3
Table 5. Load conditions for 25-bar problem.
Joint | Condition 1 (in kN)   | Condition 2 (in kN)
      | Px  | Py    | Pz      | Px | Py | Pz
1     | 100 | −1000 | −1000   | 0  | 0  | 0
2     | 0   | −1000 | −1000   | 0  | 0  | 0
3     | 0   | 0     | 0       | P1 | 0  | 0
6     | 0   | 0     | 0       | P2 | 0  | 0
Table 6. Grouping of bars for 25-bar problem.
Group | Bar ID
1     | 1
2     | 2, 3, 4, 5
3     | 6, 7, 8, 9
4     | 10, 11, 12, 13
5     | 14, 15, 16, 17, 18, 19, 20, 21
6     | 22, 23, 24, 25
Table 7. Characteristics of random variables for 25-bar problem.
Random Variables | Distribution   | Mean Value             | Dispersion
P1, P2           | Extreme type I | 500 kN                 | c.o.v. = 5%
A1–A6            | Uniform        | Selection from Table 4 | ±5%
Fy1–Fy25         | Lognormal      | μ = 0.250 GPa          | c.o.v. = 5%
Table 8. Best classification accuracy achieved for 25-bar space truss.
Method   | Avg. err (%) | Std. err | imp (%)
SOS-ASVM | 3.887        | 0.108    | 52.135
ANN      | 7.267        | -        | 10.513
Kriging  | 8.121        | -        | 0
SVM      | 6.017        | -        | 25.911
Table 9. Performance ranking of all methods.
Method           | SOS-ASVM | SVM   | ANN   | Kriging
Case 1 err (%)   | 0.220    | 0.758 | 2.087 | 1.423
Case 1 rank      | 1        | 2     | 4     | 3
Case 2 err (%)   | 2.011    | 2.108 | 3.067 | 5.906
Case 2 rank      | 1        | 2     | 3     | 4
Case 3 err (%)   | 3.887    | 6.017 | 7.267 | 8.121
Case 3 rank      | 1        | 2     | 3     | 4
Total rank       | 3        | 6     | 10    | 11
Overall ranking  | 1        | 2     | 3     | 4
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
