Article

Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method

1 Aircraft Strength Research Institute of China, Xi’an 710065, China
2 College of Aerospace Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(4), 128; https://doi.org/10.3390/a10040128
Submission received: 20 August 2017 / Revised: 10 November 2017 / Accepted: 12 November 2017 / Published: 21 November 2017

Abstract:
This paper presents a global optimization method for structural design optimization, which integrates subset simulation optimization (SSO) and the dynamic augmented Lagrangian multiplier method (DALMM). The proposed method formulates the structural design optimization as a series of unconstrained optimization sub-problems using DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated by four classical truss sizing problems. The results are compared with those reported in the literature, and show a remarkable statistical performance based on 30 independent runs.

1. Introduction

In modern design practice, structural engineers are often faced with structural optimization problems, which aim to find an optimal structure with minimum weight (or cost) under multiple general constraints, e.g., displacement, stress, or buckling limits. It is theoretically straightforward to formulate the structural optimization problem as a constrained optimization problem under the framework of mathematical programming. In practice, however, solving it is very challenging, for at least three reasons: (a) the number of design variables is large; (b) the feasible region is highly irregular; and (c) the number of design constraints is large.
Flexible optimization methods, which are able to deal with multiple general constraints that may exhibit non-linear, multimodal, or even discontinuous behavior, are needed to explore complex design spaces and find the global optimal design. Optimization methods can be roughly classified into two groups: gradient-based methods and gradient-free methods. Gradient-based methods use gradient information to search for the optimal design starting from an initial point [1]. Although these methods have often been employed to solve structural optimization problems, the solutions may be poor if the optimization problem is complex, particularly when the abovementioned difficulties are involved. An alternative is the family of deterministic global optimization methods, e.g., the widely used DIRECT method [2,3], which is a branch of gradient-free methods. Recently, Kvasov and his colleagues provided a good guide to deterministic global optimization methods [4] and carried out a comprehensive comparison between deterministic and stochastic global optimization methods for one-dimensional problems [5]. One of the main disadvantages of deterministic global optimization methods is the high-dimensionality issue caused by a large number of design variables [3] or constraints.
In contrast, another branch of gradient-free methods (stochastic optimization methods), such as Genetic Algorithms (GA) [6,7,8,9,10,11,12], Simulated Annealing (SA) [13,14,15], Ant Colony Optimization (ACO) [16,17,18,19], Particle Swarm Optimizer (PSO) [17,20,21,22,23,24,25,26,27], Harmony Search (HS) [28,29,30,31], Charged System Search (CSS) [32], Big Bang-Big Crunch (BB-BC) [33], Teaching–Learning based optimization (TLBO) [34,35], Artificial Bee Colony optimization (ABC-AP) [36], Cultural Algorithm (CA) [37], Flower Pollination Algorithm (FPA) [38], Water Evaporation Optimization (WEO) [39], and hybrid methods combining two or more stochastic methods [40,41], have been developed to find the global optimal designs for both continuous and discrete structural systems. They have been attracting increasing attention for structural optimization because of their ability to overcome the drawbacks of gradient-based optimization methods and the high-dimensionality issue. A comprehensive review of stochastic optimization of skeletal structures was provided by Lamberti and Pappalettere [42]. All of the stochastic optimization methods share a common feature: they are inspired by observations of random phenomena in nature. For example, GA mimics natural genetics and the survival-of-the-fittest principle. In implementing a stochastic optimization method, random manipulation plays the key role in “jumping” out of local optima, such as the crossover and mutation in GA or the random velocity and position updates in PSO. Although these stochastic optimization methods have found many applications in structural design optimization, structural engineers are still seeking more efficient and robust methods because no single universal method is capable of handling all types of structural optimization problems.
This paper aims to propose an efficient and robust structural optimization method that combines subset simulation optimization (SSO) and the augmented Lagrangian multiplier method [43]. In our previous studies [44,45,46], a new stochastic optimization method using subset simulation—the so-called subset simulation optimization (SSO)—was developed for both unconstrained and constrained optimization problems. Compared with some well-known stochastic methods, it was found to be promising for exploiting the feasible regions and searching optima for complex optimization problems. Subset simulation was originally developed for reliability problems with small failure probabilities, and then became a well-known simulation-based method in the reliability engineering community [47,48]. By introducing artificial probabilistic assumptions on design variables, the objective function maps the multi-dimensional design variable space into a one-dimensional random variable space. Due to the monotonically non-decreasing and right-continuous characteristics of the cumulative distribution function of a real-valued random variable, the searching process for optimized design(s) in a global optimization problem is similar to the exploring process for the tail region of the response function in a reliability problem [44,45,46].
The general constraints in structural optimization problems should be carefully handled. Since most stochastic optimization methods have been developed as unconstrained optimizers, common or special constraint-handling methods are required [49]. A modified feasibility-based rule has been proposed to deal with multiple constraints in SSO [45]. However, the rule fails to directly detect active constraints. As an enhancement, the dynamic augmented Lagrangian multiplier method (DALMM) [25,26,50] is presented and integrated with SSO for the constrained optimization problem in this study, which can automatically detect active constraints. Based on DALMM, the original constrained optimization problem is transformed into a series of unconstrained optimization sub-problems, which subsequently forms a nested loop for the optimization design. The outer loop is used to update both the Lagrange multipliers and penalty parameters; the inner loop aims to solve the unconstrained optimization sub-problems. Furthermore, DALMM can guarantee that the proposed method obtains the correct solution with finite penalty parameters because it already transforms a stationary point into an optimum by adding a penalty term into the augmented Lagrangian function.
This paper begins with a brief introduction of SSO for the unconstrained optimization problem, followed by development of the proposed method for structural optimization problems in Section 3. Then, four classical truss sizing optimization problems are used to illustrate the performance of the proposed method in Section 4. Finally, major conclusions are given in Section 5.

2. Subset Simulation Optimization (SSO)

2.1. Rationale of SSO

Subset simulation (SS) is a well-known, efficient Monte Carlo technique for variance reduction in the structural reliability community. It exploits the concept of conditional probability and an advanced Markov Chain Monte Carlo technique developed by Au and Beck [48]. It should be noted that subset simulation was originally developed for solving reliability analysis problems, rather than optimization design problems. The studies carried out by Li and Au [45], Li [44], and Li and Ma [46], in addition to the current one, aim to apply subset simulation to optimization problems. Previous studies [44,45,46] have shown that optimization problems can be treated as reliability problems by regarding the design variables as random variables. Then, following an idea similar to Monte Carlo simulation for reliability problems [51], one can construct an artificial reliability problem from its associated optimization problem. As a result, SS can be extended to solve optimization problems as a stochastic search and optimization algorithm. The gist of this idea is a conceptual link between a reliability problem and an optimization problem.
Consider the following constrained optimization problem
$$
\begin{aligned}
\min\quad & W(\mathbf{x}) \\
\text{s.t.}\quad & g_i(\mathbf{x}) \le 0, \quad i = 1, \dots, n_i \\
& h_j(\mathbf{x}) = 0, \quad j = 1, \dots, n_e \\
& \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U
\end{aligned}
\tag{1}
$$
where W(x) is the objective function at hand, x is the design variable vector, g_i(x) is the ith inequality constraint, h_j(x) is the jth equality constraint, n_i is the number of inequality constraints, n_e is the number of equality constraints, and x^L and x^U are the lower and upper bounds on the design vector. Here, only continuous design variables are considered in Equation (1). The following artificial reliability problem is formulated by randomizing the design vector and applying the conceptual link between a reliability problem and an optimization problem:
$$
P_F = P(F) = P\left( W(\mathbf{x}) \le W_{\mathrm{opt}} \right)
\tag{2}
$$
where W_opt is the minimum value of the objective function, F = { W(x) ≤ W_opt } is the artificial failure event, and P_F is its corresponding failure probability. As suggested by Li and Au [45], no special treatment of the design vector is needed except for using truncated normal distributions to characterize the design vector and capture its bounds. The conversion given by Equation (2) maps the objective function W(x) into a real-valued random variable W. By definition, a random variable is a real-valued function, and its cumulative distribution function (CDF) is monotonically non-decreasing and right-continuous. Thus, the CDF value at W_opt is 0, which implies that the failure probability P_F in Equation (2) is 0 as well. However, the actual interest lies in the regions or points of x where the objective function attains this zero failure probability, rather than in P_F itself. In addition, based on the conversion in Equation (2), it is worth noting that local optima can be avoided, at least from a theoretical point of view.
The governing equation for subset simulation optimization is still given by [48]
$$
P_F = P(F) = P(F_1) \prod_{i=1}^{m-1} P(F_{i+1} \mid F_i)
\tag{3}
$$
where F_i (i = 1, …, m) are the intermediate failure events, which are nested so that F_1 ⊃ F_2 ⊃ … ⊃ F_m = F. The nesting of the intermediate events decomposes a small probability into a product of larger conditional probabilities. Then, searching for W_opt in an optimization problem is converted into exploring the failure region in a reliability problem. The key step for a successful implementation of subset simulation is to obtain conditional samples for each intermediate event in order to estimate its conditional failure probability. Because the probability density functions (PDFs) for the intermediate events are implicit, it is not practical to generate samples directly from them; instead, this is achieved using Markov Chain Monte Carlo (MCMC). A modified Metropolis–Hastings (MMH) algorithm has been developed for subset simulation, which employs component-wise proposal PDFs instead of an n-dimensional proposal PDF, so that the acceptance ratio for the next candidate sample is kept well away from zero, a problem often encountered by the original Metropolis–Hastings algorithm in high dimensions. More details about MMH are given in [48].
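A single component-wise MMH transition can be sketched as follows, in Python (the paper's implementation was in Matlab). The function name `mmh_step`, the `spread` parameter, and the assumption of a uniform target density on the design box (which reduces the component-wise acceptance test to a bounds check) are illustrative choices, not from the paper:

```python
import numpy as np

def mmh_step(x, W, thresh, lower, upper, spread=0.5, rng=None):
    """One modified Metropolis-Hastings transition (sketch).

    Each coordinate gets its own one-dimensional symmetric proposal and
    accept/reject decision; the resulting pre-candidate is then kept only
    if it stays inside the intermediate event F = {W <= thresh}.
    """
    rng = rng or np.random.default_rng(0)
    cand = np.array(x, dtype=float)
    for j in range(len(cand)):
        c = cand[j] + spread * (upper[j] - lower[j]) * rng.standard_normal()
        if lower[j] <= c <= upper[j]:  # reject components leaving the box
            cand[j] = c
    # keep the pre-candidate only if it remains inside the failure event
    return cand if W(cand) <= thresh else np.array(x, dtype=float)
```

By construction, every returned sample satisfies W ≤ thresh, so a chain of such steps explores the current intermediate event without leaving it.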

2.2. Implementation Procedure of SSO

Based on subset simulation, an optimization algorithm is proposed that generally comprises 6 steps:
  • Initialization. Define the distributional parameters for the design vector x and determine the level probability ρ and the number of samples at a simulation level (i.e., N ). Let N S = I N T [ N ρ ] , where INT[∙] is a function that rounds the number in the bracket down to the nearest integer. Set iteration counter K = 1 .
  • Monte Carlo simulation. Generate a set of random samples { x_i, i = 1, 2, …, N } according to the truncated normal distribution.
  • Selection. Calculate the objective function W(x_i) for the N random samples and sort the values in ascending order, i.e., W_(1) ≤ W_(2) ≤ … ≤ W_(N). Take the first N_S samples from the ascending sequence. Let the sample ρ-quantile of the objective function be Ŵ^(1), set Ŵ^(1) = W_(N_S), and then define the first intermediate event F_1 = { W ≤ Ŵ^(1) }.
  • Generation. Generate conditional samples using the MMH algorithm from the sample { x 1 , , x N S } , and set K = K + 1 .
  • Selection. Repeat the same implementation as in Step 3.
  • Convergence. If the convergence criterion is met, the optimization is terminated; otherwise, return to Step 4.
In this study, the stopping criterion is defined as [44,45]
$$
\left| \hat{\sigma}_k - \hat{\sigma}_{k-1} \right| \le \varepsilon
\quad \text{or} \quad
\left| \frac{\hat{\sigma}_k - \hat{\sigma}_{k-1}}{\mathbf{x}^U - \mathbf{x}^L} \right| \le \varepsilon
\tag{4}
$$
where ε is the user-specified tolerance and σ ^ k is the standard deviation estimator of the samples at the kth simulation level. Numerical studies [44,45,46] suggest that this stopping criterion is preferable to those defined only using a maximum number of function evaluations or by comparing the objective function value difference with a specified tolerance between two consecutive iterations, although the latter ones are frequently used in other stochastic optimization methods. Thus, Equation (4) is adopted in this study.
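The six steps and the stopping criterion of Equation (4) can be put together in a minimal sketch. This is not the paper's implementation (which was in Matlab): a plain Gaussian random walk stands in for the component-wise MMH sampler, the pooled spread of the population serves as the standard deviation estimator σ̂_k, and all names (`sso_minimize`, `rho`, etc.) are illustrative:

```python
import numpy as np

def sso_minimize(W, lower, upper, N=200, rho=0.1, eps=1e-4,
                 max_levels=20, rng=None):
    """Sketch of the six SSO steps for box-constrained minimization."""
    rng = rng or np.random.default_rng(0)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    d, Ns = len(lower), int(N * rho)
    # Steps 1-2: initialization and crude Monte Carlo simulation
    x = rng.uniform(lower, upper, (N, d))
    prev_sd = None
    for _ in range(max_levels):
        # Steps 3/5: selection -- keep the Ns best samples; the sample
        # rho-quantile of W defines the next intermediate event F_k
        w = np.array([W(xi) for xi in x])
        order = np.argsort(w)
        seeds, thresh = x[order[:Ns]], w[order[Ns - 1]]
        # Step 6: stopping criterion of Eq. (4) on the sample spread
        sd = float(x.std())
        if prev_sd is not None and abs(sd - prev_sd) <= eps:
            break
        prev_sd = sd
        # Step 4: generation -- a chain of length 1/rho from each seed,
        # accepting only candidates staying inside F_k = {W <= thresh}
        new = []
        for s in seeds:
            cur = s
            for _ in range(int(round(1.0 / rho))):
                cand = np.clip(
                    cur + 0.1 * (upper - lower) * rng.standard_normal(d),
                    lower, upper)
                if W(cand) <= thresh:
                    cur = cand
                new.append(cur)
        x = np.array(new)
    w = np.array([W(xi) for xi in x])
    best = int(np.argmin(w))
    return x[best], w[best]
```

On a toy problem such as the 2-D sphere function over [−5, 5]², the population collapses level by level towards the origin.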

3. The Augmented Lagrangian Subset Simulation Optimization

In this study, we propose a new SSO method for constrained optimization problems. It combines DALMM with SSO and is referred to as “augmented Lagrangian subset simulation optimization (ALSSO)”. The proposed method converts the original constrained optimization problem into a series of unconstrained optimization sub-problems sequentially, which are formulated using the augmented Lagrangian multiplier method. Since the exact values of Lagrangian multipliers and penalty parameters at the optimal solution are unknown at the beginning of the current iteration, they are adaptively updated in the sequence of unconstrained optimization sub-problems. The term “dynamic” in DALMM refers to the automatic updating of the Lagrange multipliers and penalty parameters and indicates the difference between the conventional augmented Lagrangian multiplier method and DALMM.

3.1. The Augmented Lagrangian Multiplier Method

Dealing with multiple constraints is pivotal to applying SSO to nonlinear constrained optimization. Although the basic idea of SSO differs greatly from other stochastic optimization algorithms, one can still make use of the constraint-handling strategies developed for them; substantial research has been devoted to GA, PSO, etc. In our previous studies [45,46], a modified feasibility-based rule motivated by Dong et al. [24] in PSO was proposed to handle multiple constraints in SSO; it incorporates the effect of constraints during the generation of random samples of design variables. In this way, the feasible domain is properly depicted by the population of random samples. However, this rule fails to detect the active constraints directly.
The purpose of this paper is to present an alternative strategy to deal with multiple constraints in SSO for constrained optimization problems, which makes use of the augmented Lagrangian multiplier method. Consider, for example, a nonlinear constrained optimization problem with equality and inequality constraints. It can be converted into an unconstrained optimization problem by introducing Lagrange multiplier vector λ and penalty parameter vector σ [43] through the Lagrangian multiplier method. Then, the unconstrained optimization problem is given by
$$
\min\ L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\sigma})
\quad \text{s.t.}\quad \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U
\tag{5}
$$
where L ( x , λ , σ ) is the following augmented Lagrangian function [25,26,50]:
$$
L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\sigma})
= W(\mathbf{x})
+ \sum_{i=1}^{n_e + n_i} \lambda_i \theta_i(\mathbf{x})
+ \sum_{i=1}^{n_e + n_i} \sigma_i \theta_i^2(\mathbf{x})
\tag{6}
$$
with
$$
\theta_i(\mathbf{x}) =
\begin{cases}
\max\left( g_i(\mathbf{x}),\ -\dfrac{\lambda_i}{2\sigma_i} \right), & i = 1, \dots, n_i \\[2ex]
h_i(\mathbf{x}), & i = n_i + 1, \dots, n_i + n_e.
\end{cases}
\tag{7}
$$
The advantages of the augmented Lagrangian function in Equation (6) lie in bypassing the ill-conditioning caused by the need for infinite penalty parameters and in transforming a stationary point of the ordinary Lagrangian function into a minimum point. It can easily be proved that the Karush–Kuhn–Tucker solution (x*, λ*) of the problem in Equation (6) is a solution to the problem in Equation (1) [26,43]. As a result, SSO for unconstrained optimization [44] can be applied to the problem in Equation (1) after it has been converted into the problem in Equation (6). It is also well known that if the penalty parameters exceed a certain finite positive threshold, the solution of the unconstrained problem is identical to that of the original constrained problem [43].
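To make Equations (6) and (7) concrete, the following sketch evaluates the augmented Lagrangian for given multipliers and penalty parameters. The helper names (`theta_vec`, `augmented_lagrangian`) and the toy constraint functions used below are illustrative, not from the paper:

```python
import numpy as np

def theta_vec(x, lam, sigma, g_funcs, h_funcs):
    # Eq. (7): inequalities use max(g_i, -lam_i/(2 sigma_i)), so a
    # satisfied constraint with a small multiplier drops out of the sums;
    # equalities contribute h_i directly.
    ni = len(g_funcs)
    out = np.empty(ni + len(h_funcs))
    for i, g in enumerate(g_funcs):
        out[i] = max(g(x), -lam[i] / (2.0 * sigma[i]))
    for j, h in enumerate(h_funcs):
        out[ni + j] = h(x)
    return out

def augmented_lagrangian(x, lam, sigma, W, g_funcs, h_funcs):
    # Eq. (6): L = W + sum(lam_i * theta_i) + sum(sigma_i * theta_i^2)
    t = theta_vec(x, lam, sigma, g_funcs, h_funcs)
    return W(x) + float(np.dot(lam, t) + np.dot(sigma, t * t))
```

For instance, with W(x) = x², the single inequality constraint x ≤ 1, λ = 0, and σ = 1, a violated point such as x = 2 is penalized (θ = 1), whereas a feasible point such as x = 0.5 contributes no penalty (θ = 0).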
Figure 1 shows a flowchart of the proposed method, which contains an outer loop and an inner loop. The outer loop formulates the augmented Lagrangian function, updates the Lagrange multipliers and penalty parameters, and checks convergence. In the kth outer loop, the Lagrange multipliers λ^k and penalty parameters σ^k are held fixed throughout the inner loop, in which SSO is applied to find the global minimum of the augmented Lagrangian function L(x, λ^k, σ^k) for the given λ^k and σ^k.
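A heavily simplified, self-contained sketch of this nested loop is given below. A brute-force grid search over the design box stands in for the SSO inner solver, only inequality constraints are handled, termination uses the feasibility check alone, and all names (`alsso_sketch`, `grid_step`, etc.) are illustrative:

```python
import numpy as np

def alsso_sketch(W, g_funcs, lower, upper, eps=1e-4, outer_max=50,
                 grid_step=0.05):
    """Outer/inner nested loop of the proposed method, heavily simplified:
    an exhaustive grid search replaces the SSO inner solver."""
    axes = [np.arange(lo, hi + 1e-9, grid_step)
            for lo, hi in zip(lower, upper)]
    grid = np.stack([a.ravel() for a in np.meshgrid(*axes)], axis=-1)
    m = len(g_funcs)
    lam, sigma = np.zeros(m), np.ones(m)
    g_prev = np.full(m, np.inf)
    xk = grid[0]
    for _ in range(outer_max):
        def L(x):  # augmented Lagrangian with lam, sigma held fixed
            th = np.array([max(g(x), -lam[i] / (2.0 * sigma[i]))
                           for i, g in enumerate(g_funcs)])
            return W(x) + float(lam @ th + sigma @ th ** 2)
        # inner loop (stand-in for SSO): global minimum of L on the grid
        xk = min(grid, key=L)
        g_now = np.array([max(g(xk), 0.0) for g in g_funcs])
        if np.linalg.norm(g_now) <= eps:  # feasibility check, Eq. (12)
            break
        # outer-loop updates: Eq. (8) for the multipliers,
        # Eqs. (10)-(11) for the penalty parameters
        th = np.array([max(g(xk), -lam[i] / (2.0 * sigma[i]))
                       for i, g in enumerate(g_funcs)])
        lam = lam + 2.0 * sigma * th
        grew = (g_now > g_prev) & (g_now > eps)
        sigma = np.where(grew, 2.0 * sigma, sigma)
        sigma = np.where(g_now < eps, 1.0, sigma)
        sigma = np.maximum(sigma, 0.5 * np.abs(lam) / eps)
        g_prev = g_now
    return xk, W(xk)
```

On a toy problem such as minimizing x₁² + x₂² subject to x₁ + x₂ ≥ 1 over [−2, 2]², the loop drives the iterate to the constrained optimum (0.5, 0.5) with weight 0.5, detecting the constraint as active.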

3.2. Initialization and Updating

The solution x* cannot be obtained from a single unconstrained optimization, since the correct Lagrange multipliers and penalty parameters are unknown and problem-dependent. Initial guesses for λ^0 and σ^0 are therefore required and are updated in the subsequent iterations. The initial values of the Lagrange multipliers are set to zero, as suggested by previous studies [25,26,50]. At the kth iteration, the update scheme for the Lagrange multipliers is given by
$$
\lambda_i^{k+1} = \lambda_i^k + 2 \sigma_i^k \theta_i(\mathbf{x}^k)
\tag{8}
$$
where θ_i(x^k) are calculated at the solution x = x^k of the sub-problem in Equation (6) with the given Lagrange multipliers λ^k and penalty parameters σ^k. This updating scheme is based on the first-order optimality condition of the kth sub-problem [25,26], i.e.,
$$
\left[ \frac{\partial W(\mathbf{x})}{\partial \mathbf{x}}
+ \sum_{i=1}^{n_e + n_i} \lambda_i^k \frac{\partial \theta_i(\mathbf{x})}{\partial \mathbf{x}}
+ \sum_{i=1}^{n_e + n_i} 2 \sigma_i^k \theta_i(\mathbf{x}) \frac{\partial \theta_i(\mathbf{x})}{\partial \mathbf{x}}
\right]_{\mathbf{x} = \mathbf{x}^k} \approx 0.
\tag{9}
$$
The penalty parameters are initialized to 1, i.e., σ^0 = {1, …, 1}^T. Sedlaczek and Eberhard [26] proposed a heuristic updating scheme for the penalty parameters: the penalty parameter of the ith constraint is doubled when the constraint violation increases, halved when the constraint violation decreases, and left unchanged when the constraint violation remains within the same order of magnitude as in the previous iteration. This updating scheme is applicable to both equality and inequality constraints. To determine the constraint violation, user-defined tolerances of acceptable violation for equality and inequality constraints must be specified before solving the problem. Note that this updating scheme often assigns different values to the penalty parameter of a constraint in different iterations. In the authors' experience, this variation in the penalty parameters may destabilize the convergence of the Lagrange multipliers and increase the computational effort. In this study, we therefore use a modified updating scheme that excludes the reduction of penalty parameters:
$$
\sigma_i^{k+1} =
\begin{cases}
2 \sigma_i^k, & \text{if } \tilde{g}_i(\mathbf{x}^k) > \tilde{g}_i(\mathbf{x}^{k-1}) \text{ and } \tilde{g}_i(\mathbf{x}^k) > \varepsilon \\
1, & \text{if } \tilde{g}_i(\mathbf{x}^k) < \varepsilon \\
\sigma_i^k, & \text{otherwise}
\end{cases}
\tag{10}
$$
where g̃_i(x^k) is the unified constraint violation measure: g̃_i(x^k) = |h_i(x^k)| for equality constraints and g̃_i(x^k) = g_i(x^k) for inequality constraints. Instead of changing the value of a penalty parameter in every iteration, the penalty parameters are increased only if the search process remains far from the optimum region. In this scheme, a penalty parameter stays equal to 1 while the solution x^k of the current sub-problem is feasible. One can specify different tolerances for equality and inequality constraints if necessary. In order to assign an effective change to the Lagrange multipliers, a lower limit is imposed on all penalty parameters [25,26]:
$$
\sigma_i \ge \frac{1}{2} \left| \frac{\lambda_i}{\varepsilon} \right|.
\tag{11}
$$
This special updating rule is extremely effective for early iterations, where the penalty parameters may be too small to provide a sufficient change. The automatic update of penalty parameters is an important feature of the proposed method.
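Collecting Equations (8), (10), and (11), the update step between two inner solves might look like the following sketch (vectorized over constraints; function names are illustrative, not from the paper):

```python
import numpy as np

def update_multipliers(lam, sigma, theta):
    # Eq. (8): lambda_i^{k+1} = lambda_i^k + 2 sigma_i^k theta_i(x^k)
    return lam + 2.0 * sigma * theta

def update_penalties(sigma, g_now, g_prev, lam_new, eps=1e-4):
    # Eq. (10): double sigma_i when the violation grew and is still above
    # tolerance, reset it to 1 when the constraint is satisfied,
    # otherwise leave it unchanged.
    grew = (g_now > g_prev) & (g_now > eps)
    new = np.where(grew, 2.0 * sigma, sigma)
    new = np.where(g_now < eps, 1.0, new)
    # Eq. (11): lower bound keeping the multiplier update effective
    return np.maximum(new, 0.5 * np.abs(lam_new) / eps)
```

Note how, for a small multiplier, the Eq. (11) floor is mild, while a constraint whose violation keeps growing sees its penalty parameter grow geometrically.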

3.3. Convergence Criterion for the Outer Loop

This subsection defines the convergence criterion for the outer loop. It has been suggested that the optimization be considered converged when the absolute difference between λ^{k+1} and λ^k is less than or equal to a pre-specified tolerance ε. However, when this stopping criterion was checked for active constraints, both the authors and Jansen and Perez [25] experienced instability in the convergence of the Lagrange multipliers and penalty parameters. Jansen and Perez [25] proposed a hybrid convergence criterion combining the absolute difference of the Lagrange multipliers with that of the objective function. In this study, we add a further convergence criterion to the criteria on the Lagrange multipliers and the objective function: a feasible design is always preferable, and it therefore takes priority over the other two criteria, i.e., the absolute difference criterion and the hybrid one. Suppose that x^k is the global minimum obtained from the kth sub-problem in Equation (6) using SSO; the process is terminated if
$$
\left\| \tilde{\mathbf{g}}(\mathbf{x}^k) \right\|_2 \le \varepsilon
\tag{12}
$$
where ε is the user-defined tolerance for feasibility and ‖g̃(x^k)‖₂ is the feasibility norm defined by
$$
\left\| \tilde{\mathbf{g}}(\mathbf{x}^k) \right\|_2
= \sqrt{ \sum_{i=1}^{n_e} h_i^2(\mathbf{x}^k)
+ \sum_{i=1}^{n_i} \left( \max\left\{ g_i(\mathbf{x}^k), 0 \right\} \right)^2 }.
\tag{13}
$$
Combining the above three convergence criteria, the proposed ALSSO produces comparable optimal solutions in most cases. In some cases, however, a maximum number of iterations must be included in the convergence criterion because instability of the Lagrange multipliers is encountered even when all three criteria are used. If the iterative procedure does not meet the convergence criteria within the maximum number of iterations, the solution obtained by ALSSO shall be regarded as an approximate solution to the constrained optimization problem.
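A minimal helper for the feasibility norm of Equations (12) and (13) is sketched below (the function name and toy constraints are illustrative); a design is accepted when the returned value is within the tolerance ε:

```python
import math

def feasibility_norm(x, g_funcs, h_funcs):
    # Eq. (13): root of the summed squared equality residuals plus the
    # squared positive parts of the inequality constraints
    s = sum(h(x) ** 2 for h in h_funcs)
    s += sum(max(g(x), 0.0) ** 2 for g in g_funcs)
    return math.sqrt(s)
```

Satisfied inequality constraints contribute nothing, so the norm vanishes exactly on the feasible set.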

4. Test Problems and Optimization Results

The proposed method was tested on four classical truss sizing optimization problems. The level sample size was set to N = 100, 200, and 500 with a level probability of 0.1. The maximum number of simulation levels was set to 20 for the inner loop, and the maximum number of iterations was set to 50 for the outer loop. The selection of N and the level probability has been discussed in previous studies on SSO and on SS for reliability assessment. The numbers of iterations of the inner and outer optimization loops are new parameters of the proposed framework; their values were selected based on our implementation experience and on experience with other stochastic methods combined with DALMM, e.g., GA and PSO. The stopping tolerance in SSO and all the constraint tolerances were set to a user-defined tolerance of ε = 10⁻⁴. Since the present method is stochastic, 30 independent runs with different initial populations were performed to investigate the statistical performance of the proposed method, including several measures of the quality of the optimal results, e.g., the best, mean, and worst optimal weight as well as robustness (in terms of coefficient of variation, COV) for each optimization case. It should be pointed out that the total number of samples in the SSO stage (i.e., the inner loop) varies from iteration to iteration, as does the total number of samples for each structural optimization design case.
The proposed method was implemented in Matlab R2009, and all the optimizations were performed on a desktop PC with an Intel Core2 processor and 8.0 GB RAM.

4.1. Planar 10-Bar Truss Structure

The first test problem regards the well-known 10-bar truss structure (see Figure 2), which has been used as a test problem in many previous studies. The modulus of elasticity is 10 Msi and the mass density is 0.1 lb/inch³. The truss structure is designed to resist a single loading condition with 100 kips acting on nodes 2 and 4. The allowable stresses are a tensile stress of σ^U = +25 ksi and a compressive stress of σ^L = −25 ksi. Displacement constraints are also imposed, with an allowable displacement of ±2.0 inch for all nodes. There is a total of 22 inequality constraints. The cross-sectional areas A_1, …, A_10 are the design variables, with a minimum size limit of 0.1 inch² and a maximum size limit of 35.0 inch².
The optimization results of ALSSO are compared in Table 1 with those available in the literature, including optimal design variables and minimal weight. It should be pointed out that the maximum displacements evaluated from the GA, HS, HPSACO, HPSSO, SAHS, and PSO optimum designs exceed the displacement limit of 2.0 inch. Furthermore, the GA solution also violates the stress limits. The proposed method and ALPSO converged to fully feasible designs. The present results are in good agreement with those obtained using PSO and DALMM (i.e., ALPSO) by Jansen and Perez [25]. The optimized design obtained by ALSSO is competitive with the feasible designs from ABC-AP, TLBO, MSPSO, and WEO.
The statistical performance of the proposed algorithm is evaluated in Table 2. The reported results were obtained from 30 independent optimization runs each including 100, 200, and 500 samples at each simulation level. The “SD” column refers to the standard deviation on optimized weights, while the “NSA” column refers to the number of structural analyses. The number in parentheses is the NSA required by the proposed method for the best design run. It appears that the proposed approach is very robust for this test problem by checking the SD column.
Figure 3 shows the convergence curve obtained for the last optimization run with N = 500. ALSSO automatically detects the active constraints by checking values of Lagrange multipliers. This design problem was dominated by two active displacement constraints. This information can be utilized to explain why some optimization results reported in the literature tended to violate displacement constraints.

4.2. Spatial 25-Bar Truss Structure

The second test problem regards the spatial 25-bar truss shown in Figure 4. The modulus of elasticity and material density are the same as in the previous test case. The 25 truss members are organized into 8 groups, as follows: (1) A1, (2) A2–A5, (3) A6–A9, (4) A10–A11, (5) A12–A13, (6) A14–A17, (7) A18–A21, and (8) A22–A25. The cross-sectional areas of the bars are the design variables and vary between 0.01 inch² and 3.5 inch². The weight of the structure is minimized, subject to all members satisfying the stress limits in Table 3 and nodal displacement limits of ±0.35 inch in the three directions. There is a total of 110 inequality constraints. The truss is subject to the two independent loading conditions listed in Table 4, where P_x, P_y, and P_z are the loads along the x-, y-, and z-axes, respectively.
This truss problem has previously been studied using deterministic global methods [3] and many stochastic optimization methods [21,27,28,31,34,35,36,37,38,39,41,45]. Table 5 compares the best designs found by ALSSO and the abovementioned stochastic methods. Among the deterministic global methods, the Pareto–Lipschitzian Optimization with Reduced-set (PLOR) algorithm performed better than the three DIRECT-type algorithms [3]; therefore, only the optimized result obtained by PLOR is listed in Table 5. All the published results satisfy the stress limits. To the best of our knowledge, CA [37] generated the best design for this test problem. The best designs from HM, HPSSO, improved TLBO, FPA, and WEO slightly violated the nodal displacement constraints. It can be seen from Table 5 that the proposed algorithm produced better designs than PLOR, HS, HPSO, SSO, ABC-AP, SAHS, and MSPSO, and a design comparable with that of TLBO.
Statistical data listed in Table 6 prove that the present algorithm was also very robust for this test problem.
Figure 5 shows the convergence curve obtained for the last optimization run with N = 500. For both load cases, ALSSO detected the y-displacement constraints of top nodes as active.

4.3. Spatial 72-Bar Truss Structure

The third test example regards the optimal design of the spatial 72-bar truss structure shown in Figure 6. The modulus of elasticity and material density are the same as in the previous test cases. The 72 members are organized into 16 groups, as follows: (1) A1–A4, (2) A5–A12, (3) A13–A16, (4) A17–A18, (5) A19–A22, (6) A23–A30, (7) A31–A34, (8) A35–A36, (9) A37–A40, (10) A41–A48, (11) A49–A52, (12) A53–A54, (13) A55–A58, (14) A59–A66, (15) A67–A70, and (16) A71–A72. The truss structure is subject to 160 constraints on stress and displacement. A displacement limit of ±0.25 inch is imposed on the top nodes in both the x and y directions, and all truss elements have an axial stress limit of ±25 ksi. The truss is subject to the two independent loading conditions listed in Table 7.
This truss problem has been previously studied using deterministic global methods [3], a GA-based method [9], HS [28], ACO [16], PSO [23], ALPSO [25], BB-BC [33], SAHS [27], TLBO [34], CA [37], and WEO [39]. Their best designs are compared against that obtained by the proposed method, and are shown in Table 8. The proposed method produced a new best design with a weight of 379.5922 lb. Among the deterministic global methods, the DIRECT-l algorithm found the best design for this test case (Table 8). However, the corresponding structural weight is larger than the structural weight obtained by the proposed method. It should be noted that the maximum displacements of GA, HS, ALPSO, BB-BC, and FPA exceed the x- and y-displacement limits on node 17. In particular, the optimum design found by HS also violates the compressive stress limits of bar members 55, 56, 57, and 58 under load case 2. The optimum design found by the proposed method satisfies both stress limits and displacement limits and is better than SAHS, TLBO, and CA.
Statistical data listed in Table 9 prove that the present algorithm was also very robust for this test problem.
Figure 7 shows the convergence curve obtained for the last optimization run with N = 500. ALSSO detected the x- and y-displacement constraints of node 17 under load case 1 as active.

4.4. Planar 200-Bar Truss Structure

The last test example concerns the optimal design of the planar 200-bar truss structure shown in Figure 8. The modulus of elasticity is 30 Msi and the material density is 0.283 lb/inch³. The stress in every element is limited to ±10 ksi; there are no displacement constraints. The structural elements are divided into 29 groups, as shown in Figure 8, and the minimum cross-sectional area of all design variables is 0.1 inch². The structure is designed against three independent loading conditions: (1) 1.0 kip acting in the positive x-direction at nodes 1, 6, 15, 20, 29, 34, 43, 48, 57, 62, and 71; (2) 10.0 kips acting in the negative y-direction at nodes 1–6, 8, 10, 12, 14–20, 22, 24, 26, 28–34, 36, 38, 40, 42–48, 50, 52, 54, 56–62, 64, 66, 68, and 70–75; (3) loading conditions (1) and (2) acting together.
This truss problem has been previously studied using SAHS [27], TLBO [34], ABC-AP [36], WEO [39], FPA [38], HPSACO [40], and HPSSO [41]. The optimized designs are compared in Table 10. SAHS, TLBO, HPSSO, and WEO generated feasible designs, while HPSACO, ABC-AP, FPA, and ALSSO slightly violated the constraints. TLBO produced the best design for this test problem.
Statistical data listed in Table 11 confirm that the present algorithm was also robust for this test problem. Figure 9 shows the convergence curve obtained for the last optimization run with N = 500.

5. Conclusions

This paper presented a hybrid algorithm for structural optimization, named ALSSO, which combines subset simulation optimization (SSO) with the dynamic augmented Lagrangian multiplier method (DALMM). The performance of SSO is comparable to that of other stochastic optimization methods. ALSSO employs DALMM to decompose the original constrained problem into a series of unconstrained optimization sub-problems and uses SSO to solve each of them. By adaptively updating the Lagrange multipliers and penalty parameters, the proposed method can automatically detect active constraints and provide the globally optimal solution to the problem at hand with finite penalty parameters.
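As a concrete illustration of this decomposition, the sketch below shows a generic augmented-Lagrangian outer loop for inequality constraints. It is a minimal stand-in under stated assumptions, not the authors' implementation: the inner SSO solver is replaced by plain uniform random sampling, and the update rules and parameter values (initial penalty of 10, growth factor of 2) are illustrative choices.

```python
import numpy as np

def augmented_lagrangian(f, g, x, lam, r):
    """Augmented Lagrangian for inequality constraints g(x) <= 0:
    L_A(x) = f(x) + sum_j [ lam_j * psi_j + (r_j / 2) * psi_j**2 ],
    with psi_j = max(g_j(x), -lam_j / r_j)."""
    psi = np.maximum(np.atleast_1d(g(x)), -lam / r)
    return f(x) + np.sum(lam * psi + 0.5 * r * psi ** 2)

def al_minimize(f, g, bounds, n_outer=12, n_inner=3000, seed=1):
    """DALMM-style outer loop; the inner 'solver' is uniform random
    sampling standing in for SSO (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    lam = np.zeros(len(np.atleast_1d(g(lo))))   # Lagrange multipliers
    r = np.full_like(lam, 10.0)                 # penalty parameters
    best_x, best_f = None, np.inf
    for _ in range(n_outer):
        # inner unconstrained minimization of L_A over the box
        X = rng.uniform(lo, hi, size=(n_inner, len(lo)))
        x_star = min(X, key=lambda x: augmented_lagrangian(f, g, x, lam, r))
        gx = np.atleast_1d(g(x_star))
        # multiplier update: lam <- max(0, lam + r * g(x*));
        # a near-optimal multiplier stays positive only for active constraints
        lam = np.maximum(0.0, lam + r * gx)
        # grow the penalty only where the constraint is still violated,
        # so active constraints are handled with finite penalties
        r = np.where(gx > 1e-6, 2.0 * r, r)
        # keep the best feasible candidate seen so far
        if np.all(gx <= 1e-6) and f(x_star) < best_f:
            best_x, best_f = x_star, f(x_star)
    return best_x if best_x is not None else x_star
```

For example, minimizing x1 + x2 subject to 1 − x1·x2 ≤ 0 on [0, 3]² drives the iterates toward the neighborhood of the known optimum (1, 1), with the single constraint detected as active.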
Four classical truss sizing problems were used to verify the accuracy and robustness of ALSSO. Compared with the results published in the literature, the proposed method produces solutions of equivalent quality, and it shows remarkable statistical performance over 30 independent runs. A potential drawback of ALSSO at present is that its convergence rate slows down as the search approaches the active constraints. In future work, we will introduce a local search strategy into ALSSO to further improve its efficiency and apply it to various real-life problems.

Acknowledgments

The authors are grateful for the support of the Natural Science Foundation of China (Grant No. U1533109).

Author Contributions

Feng Du and Hong-Shuang Li conceived, designed, and coded the algorithm; Qiao-Yue Dong computed and analyzed the first three examples; Feng Du computed and analyzed the fourth example; Feng Du and Hong-Shuang Li wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haftka, R.; Gurdal, Z. Elements of Structural Optimization, 3rd ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1992. [Google Scholar]
  2. Jones, D.R.; Perttunen, C.D.; Stuckman, B.E. Lipschitzian optimization without the Lipschitz constant. J. Opt. Theory Appl. 1993, 79, 157–181. [Google Scholar] [CrossRef]
  3. Mockus, J.; Paulavičius, R.; Rusakevičius, D.; Šešok, D.; Žilinskas, J. Application of Reduced-set Pareto-Lipschitzian Optimization to truss optimization. J. Glob. Opt. 2017, 67, 425–450. [Google Scholar] [CrossRef]
  4. Kvasov, D.E.; Sergeyev, Y.D. Deterministic approaches for solving practical black-box global optimization problems. Adv. Eng. Softw. 2015, 80 (Suppl. C), 58–66. [Google Scholar] [CrossRef]
  5. Kvasov, D.E.; Mukhametzhanov, M.S. Metaheuristic vs. deterministic global optimization algorithms: The univariate case. Appl. Math. Comput. 2018, 318, 245–259. [Google Scholar] [CrossRef]
  6. Adeli, H.; Kumar, S. Distributed genetic algorithm for structural optimization. J. Aerosp. Eng. 1995, 8, 156–163. [Google Scholar] [CrossRef]
  7. Kameshki, E.S.; Saka, M.P. Optimum geometry design of nonlinear braced domes using genetic algorithm. Comput. Struct. 2007, 85, 71–79. [Google Scholar] [CrossRef]
  8. Rajeev, S.; Krishnamoorthy, C.S. Discrete optimization of structures using genetic algorithms. J. Struct. Eng. 1992, 118, 1233–1250. [Google Scholar] [CrossRef]
  9. Wu, S.J.; Chow, P.T. Steady-state genetic algorithms for discrete optimization of trusses. Comput. Struct. 1995, 56, 979–991. [Google Scholar] [CrossRef]
  10. Saka, M.P. Optimum design of pitched roof steel frames with haunched rafters by genetic algorithm. Comput. Struct. 2003, 81, 1967–1978. [Google Scholar] [CrossRef]
  11. Erbatur, F.; Hasançebi, O.; Tütüncü, İ.; Kılıç, H. Optimal design of planar and space structures with genetic algorithms. Comput. Struct. 2000, 75, 209–224. [Google Scholar] [CrossRef]
  12. Galante, M. Structures optimization by a simple genetic algorithm. In Numerical Methods in Engineering and Applied Sciences; Centro Internacional de Métodos Numéricos en Ingeniería: Barcelona, Spain, 1992; pp. 862–870. [Google Scholar]
  13. Bennage, W.A.; Dhingra, A.K. Single and multiobjective structural optimization in discrete-continuous variables using simulated annealing. Int. J. Numer. Methods Eng. 1995, 38, 2753–2773. [Google Scholar] [CrossRef]
  14. Lamberti, L. An efficient simulated annealing algorithm for design optimization of truss structures. Comput. Struct. 2008, 86, 1936–1953. [Google Scholar] [CrossRef]
  15. Leite, J.P.B.; Topping, B.H.V. Parallel simulated annealing for structural optimization. Comput. Struct. 1999, 73, 545–564. [Google Scholar] [CrossRef]
  16. Camp, C.V.; Bichon, B.J. Design of Space Trusses Using Ant Colony Optimization. J. Struct. Eng. 2004, 130, 741–751. [Google Scholar] [CrossRef]
  17. Kaveh, A.; Talatahari, S. A particle swarm ant colony optimization for truss structures with discrete variables. J. Constr. Steel Res. 2009, 65, 1558–1568. [Google Scholar] [CrossRef]
  18. Kaveh, A.; Shojaee, S. Optimal design of skeletal structures using ant colony optimisation. Int. J. Numer. Methods Eng. 2007, 70, 563–581. [Google Scholar] [CrossRef]
  19. Kaveh, A.; Farahmand Azar, B.; Talatahari, S. Ant colony optimization for design of space trusses. Int. J. Space Struct. 2008, 23, 167–181. [Google Scholar] [CrossRef]
  20. Li, L.J.; Huang, Z.B.; Liu, F. A heuristic particle swarm optimization method for truss structures with discrete variables. Comput. Struct. 2009, 87, 435–443. [Google Scholar] [CrossRef]
  21. Li, L.J.; Huang, Z.B.; Liu, F.; Wu, Q.H. A heuristic particle swarm optimizer for optimization of pin connected structures. Comput. Struct. 2007, 85, 340–349. [Google Scholar] [CrossRef]
  22. Luh, G.C.; Lin, C.Y. Optimal design of truss-structures using particle swarm optimization. Comput. Struct. 2011, 89, 2221–2232. [Google Scholar] [CrossRef]
  23. Perez, R.E.; Behdinan, K. Particle swarm approach for structural design optimization. Comput. Struct. 2007, 85, 1579–1588. [Google Scholar] [CrossRef]
  24. Dong, Y.; Tang, J.; Xu, B.; Wang, D. An application of swarm optimization to nonlinear programming. Comput. Math. Appl. 2005, 49, 1655–1668. [Google Scholar] [CrossRef]
  25. Jansen, P.W.; Perez, R.E. Constrained structural design optimization via a parallel augmented Lagrangian particle swarm optimization approach. Comput. Struct. 2011, 89, 1352–1366. [Google Scholar] [CrossRef]
  26. Sedlaczek, K.; Eberhard, P. Using augmented Lagrangian particle swarm optimization for constrained problems in engineering. Struct. Multidiscip. Opt. 2006, 32, 277–286. [Google Scholar] [CrossRef]
  27. Talatahari, S.; Kheirollahi, M.; Farahmandpour, C.; Gandomi, A.H. A multi-stage particle swarm for optimum design of truss structures. Neural Comput. Appl. 2013, 23, 1297–1309. [Google Scholar] [CrossRef]
  28. Lee, K.S.; Geem, Z.W. A new structural optimization method based on the harmony search algorithm. Comput. Struct. 2004, 82, 781–798. [Google Scholar] [CrossRef]
  29. Lee, K.S.; Geem, Z.W.; Lee, S.H.; Bae, K.W. The harmony search heuristic algorithm for discrete structural optimization. Eng. Opt. 2005, 37, 663–684. [Google Scholar] [CrossRef]
  30. Saka, M. Optimum geometry design of geodesic domes using harmony search algorithm. Adv. Struct. Eng. 2007, 10, 595–606. [Google Scholar] [CrossRef]
  31. Degertekin, S.O. Improved harmony search algorithms for sizing optimization of truss structures. Comput. Struct. 2012, 92 (Suppl. C), 229–241. [Google Scholar] [CrossRef]
  32. Kaveh, A.; Talatahari, S. Optimal design of skeletal structures via the charged system search algorithm. Struct. Multidiscip. Opt. 2010, 41, 893–911. [Google Scholar] [CrossRef]
  33. Kaveh, A.; Talatahari, S. Size optimization of space trusses using Big Bang–Big Crunch algorithm. Comput. Struct. 2009, 87, 1129–1140. [Google Scholar] [CrossRef]
  34. Degertekin, S.O.; Hayalioglu, M.S. Sizing truss structures using teaching-learning-based optimization. Comput. Struct. 2013, 119 (Suppl. C), 177–188. [Google Scholar] [CrossRef]
  35. Camp, C.V.; Farshchin, M. Design of space trusses using modified teaching–learning based optimization. Eng. Struct. 2014, 62 (Suppl. C), 87–97. [Google Scholar] [CrossRef]
  36. Sonmez, M. Artificial Bee Colony algorithm for optimization of truss structures. Appl. Soft Comput. 2011, 11, 2406–2418. [Google Scholar] [CrossRef]
  37. Jalili, S.; Hosseinzadeh, Y. A Cultural Algorithm for Optimal Design of Truss Structures. Latin Am. J. Solids Struct. 2015, 12, 1721–1747. [Google Scholar] [CrossRef]
  38. Bekdaş, G.; Nigdeli, S.M.; Yang, X.-S. Sizing optimization of truss structures using flower pollination algorithm. Appl. Soft Comput. 2015, 37 (Suppl. C), 322–331. [Google Scholar] [CrossRef]
  39. Kaveh, A.; Bakhshpoori, T. A new metaheuristic for continuous structural optimization: Water evaporation optimization. Struct. Multidiscip. Opt. 2016, 54, 23–43. [Google Scholar] [CrossRef]
  40. Kaveh, A.; Talatahari, S. Particle swarm optimizer, ant colony strategy and harmony search scheme hybridized for optimization of truss structures. Comput. Struct. 2009, 87, 267–283. [Google Scholar] [CrossRef]
  41. Kaveh, A.; Bakhshpoori, T.; Afshari, E. An efficient hybrid Particle Swarm and Swallow Swarm Optimization algorithm. Comput. Struct. 2014, 143, 40–59. [Google Scholar] [CrossRef]
  42. Lamberti, L.; Pappalettere, C. Metaheuristic Design Optimization of Skeletal Structures: A Review. Comput. Technol. Rev. 2011, 4, 1–32. [Google Scholar] [CrossRef]
  43. Bertsekas, D.P. Constrained Optimization and Lagrange Multiplier Methods; Athena Scientific: Belmont, MA, USA, 1996. [Google Scholar]
  44. Li, H.S. Subset simulation for unconstrained global optimization. Appl. Math. Model. 2011, 35, 5108–5120. [Google Scholar] [CrossRef]
  45. Li, H.S.; Au, S.K. Design optimization using Subset Simulation algorithm. Struct. Saf. 2010, 32, 384–392. [Google Scholar] [CrossRef]
  46. Li, H.S.; Ma, Y.Z. Discrete optimum design for truss structures by subset simulation algorithm. J. Aerosp. Eng. 2015, 28, 04014091. [Google Scholar] [CrossRef]
  47. Au, S.K.; Ching, J.; Beck, J.L. Application of subset simulation methods to reliability benchmark problems. Struct. Saf. 2007, 29, 183–193. [Google Scholar] [CrossRef]
  48. Au, S.K.; Beck, J.L. Estimation of small failure probabilities in high dimensions by subset simulation. Probab. Eng. Mech. 2001, 16, 263–277. [Google Scholar] [CrossRef]
  49. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [Google Scholar] [CrossRef]
  50. Long, W.; Liang, X.; Huang, Y.; Chen, Y. A hybrid differential evolution augmented Lagrangian method for constrained numerical and engineering optimization. Comput.-Aided Des. 2013, 45, 1562–1574. [Google Scholar] [CrossRef]
  51. Au, S.K. Reliability-based design sensitivity by efficient simulation. Comput. Struct. 2005, 83, 1048–1061. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed method.
Figure 2. Schematic of the planar 10-bar truss structure.
Figure 3. Convergence curve obtained for the 10-bar truss problem with N = 500.
Figure 4. Schematic of the spatial 25-bar truss structure.
Figure 5. Convergence curve obtained for the 25-bar truss problem with N = 500.
Figure 6. Schematic of the spatial 72-bar truss structure.
Figure 7. Convergence curve obtained for the 72-bar truss problem with N = 500.
Figure 8. Schematic of the planar 200-bar truss structure.
Figure 9. Convergence curve obtained for the 200-bar truss problem with N = 500.
Table 1. Comparison of optimized designs for the 10-bar truss problem.
Design Variables | GA [12] | HPSO [21] | HS [28] | HPSACO [17] | PSO [23] | ALPSO [25] | HPSACO [40] | ABC-AP [36] | SAHS [31] | TLBO [34] | MSPSO [27] | HPSSO [41] | WEO [39] | ALSSO
A1 | 30.440 | 30.704 | 30.150 | 30.493 | 33.500 | 30.511 | 30.307 | 30.548 | 30.394 | 30.4286 | 30.5257 | 30.5838 | 30.5755 | 30.4397
A2 | 0.100 | 0.100 | 0.102 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1001 | 0.1 | 0.1 | 0.1004
A3 | 21.790 | 23.167 | 22.710 | 23.230 | 22.766 | 23.230 | 23.434 | 23.18 | 23.098 | 23.2436 | 23.225 | 23.15103 | 23.3368 | 23.1599
A4 | 14.260 | 15.183 | 15.270 | 15.346 | 14.417 | 15.198 | 15.505 | 15.218 | 15.491 | 15.3677 | 15.4114 | 15.20566 | 15.1497 | 15.2446
A5 | 0.100 | 0.100 | 0.102 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1001 | 0.1 | 0.1 | 0.1003
A6 | 0.451 | 0.551 | 0.544 | 0.538 | 0.100 | 0.554 | 0.5241 | 0.551 | 0.529 | 0.5751 | 0.5583 | 0.548897 | 0.5276 | 0.5455
A7 | 21.630 | 20.978 | 21.560 | 20.990 | 20.392 | 21.017 | 21.079 | 21.058 | 21.189 | 20.9665 | 20.9172 | 21.06437 | 20.9892 | 21.1123
A8 | 7.628 | 7.460 | 7.541 | 7.451 | 7.534 | 7.452 | 7.4365 | 7.463 | 7.488 | 7.4404 | 7.4395 | 7.465322 | 7.4458 | 7.4660
A9 | 0.100 | 0.100 | 0.100 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1000
A10 | 21.360 | 21.508 | 21.458 | 21.458 | 20.467 | 21.554 | 21.229 | 21.501 | 21.342 | 21.533 | 21.5098 | 21.52935 | 21.5236 | 21.5191
Weight (lb) | 4987.00 | 5060.92 | 5057.88 | 5058.43 | 5024.25 | 5060.85 | 5056.56 | 5060.88 | 5061.42 | 5060.96 | 5061.00 | 5060.86 | 5060.99 | 5060.885
Table 2. Comparison of statistical performance in the 10-bar truss problem.
Method | N | Best | Mean | Worst | SD | NSA
ALSSO | 100 | 5060.931 | 5064.559 | 5079.894 | 3.363699 | 48,064 (44,710)
ALSSO | 200 | 5060.931 | 5062.391 | 5065.7150 | 0.889472 | 94,540 (108,600)
ALSSO | 500 | 5060.885 | 5061.713 | 5062.291 | 0.360457 | 247,828 (253,400)
HPSACO [33] | | 5056.56 | 5057.66 | 5061.12 | 1.42 | 10,650
ABC-AP [36] | | 5060.88 | N/A | 5060.95 | N/A | 500,000
SAHS [31] | | 5061.42 | 5061.95 | 5063.39 | 0.71 | 7081
TLBO [34] | | 5060.96 | 5062.08 | 5063.23 | 0.79 | 16,872
MSPSO [27] | | 5061.00 | 5064.46 | 5078.00 | 5.72 | N/A
HPSSO [41] | | 5060.86 | 5062.28 | 5076.90 | 4.325 | 14,118
WEO [39] | | 5060.99 | 5062.09 | 5975.41 | 2.05 | 19,540
Table 3. Stress limits for the 25-bar truss problem.
Group | Design Variables | Compressive Stress Limit (ksi) | Tensile Stress Limit (ksi)
1 | A1 | 35.092 | 40.0
2 | A2–A5 | 11.590 | 40.0
3 | A6–A9 | 17.305 | 40.0
4 | A10–A11 | 35.092 | 40.0
5 | A12–A13 | 35.092 | 40.0
6 | A14–A17 | 6.759 | 40.0
7 | A18–A21 | 6.957 | 40.0
8 | A22–A25 | 11.802 | 40.0
Table 4. Loading conditions acting on the 25-bar truss.
Load Case | Node | Px (kips) | Py (kips) | Pz (kips)
1 | 1 | 0.0 | 20.0 | −5.0
1 | 2 | 0.0 | −20.0 | −5.0
2 | 1 | 1.0 | 10.0 | −5.0
2 | 2 | 0.0 | 10.0 | −5.0
2 | 3 | 0.5 | 0.0 | 0.0
2 | 6 | 0.5 | 0.0 | 0.0
Table 5. Comparison of optimized designs for the 25-bar truss problem.
Group | Design Variables | HS [28] | HPSO [21] | SSO [45] | PLOR [3] | ABC-AP [36] | SAHS [31] | TLBO [34] | MSPSO [27] | CA [37] | HPSSO [41] | TLBO [35] | FPA [38] | WEO [39] | ALSSO
1 | A1 | 0.047 | 0.010 | 0.010 | 0.010 | 0.011 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01001
2 | A2–A5 | 2.022 | 1.970 | 2.057 | 1.951 | 1.979 | 2.074 | 2.0712 | 1.9848 | 2.02064 | 1.9907 | 1.9878 | 1.8308 | 1.9184 | 1.983579
3 | A6–A9 | 2.950 | 3.016 | 2.892 | 3.025 | 3.003 | 2.961 | 2.957 | 2.9956 | 3.01733 | 2.9881 | 2.9914 | 3.1834 | 3.0023 | 2.998787
4 | A10–A11 | 0.010 | 0.010 | 0.010 | 0.010 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.0102 | 0.01 | 0.01 | 0.010008
5 | A12–A13 | 0.014 | 0.010 | 0.014 | 0.010 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.010005
6 | A14–A17 | 0.688 | 0.694 | 0.697 | 0.592 | 0.69 | 0.691 | 0.6891 | 0.6852 | 0.69383 | 0.6824 | 0.6828 | 0.7017 | 0.6827 | 0.683045
7 | A18–A21 | 1.657 | 1.681 | 1.666 | 1.706 | 1.679 | 1.617 | 1.6209 | 1.6778 | 1.63422 | 1.6764 | 1.6775 | 1.7266 | 1.6778 | 1.677394
8 | A22–A25 | 2.663 | 2.643 | 2.675 | 2.789 | 2.652 | 2.674 | 2.6768 | 2.6599 | 2.65277 | 2.6656 | 2.664 | 2.5713 | 2.6612 | 2.66077
Weight (lb) | | 544.38 | 545.19 | 545.37 | 546.80 | 545.193 | 545.12 | 545.09 | 545.172 | 545.05 | 545.164 | 545.175 | 545.159 | 545.166 | 545.1057
Table 6. Comparison of statistical performance in the 25-bar truss problem.
Method | N | Best | Mean | Worst | SD | NSA
ALSSO | 100 | 545.1241 | 545.2569 | 545.7793 | 0.135161 | 27,009 (23,170)
ALSSO | 200 | 545.1254 | 545.205 | 545.4292 | 0.067821 | 38,301 (32,580)
ALSSO | 500 | 545.1057 | 545.185 | 545.2819 | 0.044924 | 86,490 (90,500)
ABC-AP [36] | | 545.19 | N/A | 545.28 | N/A | 300,000
SAHS [31] | | 545.12 | 545.94 | 546.6 | 0.91 | 9051
TLBO [34] | | 545.09 | 545.41 | 546.33 | 0.42 | 15,318
MSPSO [27] | | 545.172 | 546.03 | 548.78 | 0.8 | 10,800
CA [37] | | 545.05 | 545.93 | N/A | 1.55 | 9380
HPSSO [41] | | 545.164 | 545.556 | 546.99 | 0.432 | 13,326
TLBO [35] | | 545.175 | 545.483 | N/A | 0.306 | 12,199
FPA [38] | | 545.159 | 545.73 | N/A | 0.59 | 8149
WEO [39] | | 545.166 | 545.226 | 545.592 | 0.083 | 19,750
Table 7. Loading conditions acting on the 72-bar truss.
Node | Case 1 Px (kips) | Case 1 Py (kips) | Case 1 Pz (kips) | Case 2 Px (kips) | Case 2 Py (kips) | Case 2 Pz (kips)
17 | 5.0 | 5.0 | −5.0 | 0.0 | 0.0 | −5.0
18 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | −5.0
19 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | −5.0
20 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | −5.0
Table 8. Comparison of optimized designs for the 72-bar truss problem.
Design Variables | GA [11] | ACO [16] | HS [28] | PSO [23] | ALPSO [25] | DIRECT-l [3] | BB-BC [33] | SAHS [27] | TLBO [34] | CA [37] | FPA [38] | ALSSO
A1–A4 | 1.910 | 1.948 | 1.790 | 1.743 | 1.898 | 1.699 | 1.9042 | 1.86 | 1.8807 | 1.86093 | 1.8758 | 1.900283
A5–A12 | 0.525 | 0.508 | 0.521 | 0.519 | 0.513 | 0.476 | 0.5162 | 0.521 | 0.5142 | 0.5093 | 0.516 | 0.511187
A13–A16 | 0.122 | 0.101 | 0.100 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.100084
A17–A18 | 0.103 | 0.102 | 0.100 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.100258
A19–A22 | 1.310 | 1.303 | 1.229 | 1.308 | 1.258 | 1.371 | 1.2582 | 1.293 | 1.2711 | 1.26291 | 1.2993 | 1.268814
A23–A30 | 0.498 | 0.511 | 0.522 | 0.519 | 0.513 | 0.547 | 0.5035 | 0.511 | 0.5151 | 0.50397 | 0.5246 | 0.510226
A31–A34 | 0.100 | 0.101 | 0.100 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1001 | 0.100076
A35–A36 | 0.103 | 0.100 | 0.100 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.100113
A37–A40 | 0.535 | 0.561 | 0.517 | 0.514 | 0.520 | 0.618 | 0.5178 | 0.499 | 0.5317 | 0.52316 | 0.4971 | 0.519311
A41–A48 | 0.535 | 0.492 | 0.504 | 0.546 | 0.518 | 0.476 | 0.5214 | 0.501 | 0.5134 | 0.52522 | 0.5089 | 0.516303
A49–A52 | 0.103 | 0.100 | 0.100 | 0.100 | 0.100 | 0.100 | 0.1 | 0.1 | 0.1 | 0.10001 | 0.1 | 0.100062
A53–A54 | 0.111 | 0.107 | 0.101 | 0.110 | 0.100 | 0.112 | 0.1007 | 0.1 | 0.1 | 0.10254 | 0.1 | 0.100502
A55–A58 | 0.161 | 0.156 | 0.156 | 0.162 | 0.157 | 0.153 | 0.1566 | 0.168 | 0.1565 | 0.155962 | 0.1575 | 0.156389
A59–A66 | 0.544 | 0.550 | 0.547 | 0.509 | 0.546 | 0.582 | 0.5421 | 0.584 | 0.5429 | 0.55349 | 0.5329 | 0.550278
A67–A70 | 0.379 | 0.390 | 0.442 | 0.497 | 0.405 | 0.405 | 0.4132 | 0.433 | 0.4081 | 0.42026 | 0.4089 | 0.40533
A71–A72 | 0.521 | 0.592 | 0.590 | 0.562 | 0.566 | 0.655 | 0.5756 | 0.52 | 0.5733 | 0.5615 | 0.5731 | 0.563667
Weight (lb) | 383.12 | 380.24 | 379.27 | 381.91 | 379.61 | 382.34 | 379.66 | 380.62 | 379.632 | 379.69 | 379.095 | 379.59
Table 9. Comparison of statistical performance in the 72-bar truss problem.
Method | N | Best | Mean | Worst | SD | NSA
ALSSO | 100 | 379.7376 | 380.1562 | 382.8799 | 0.60467 | 62,292 (77,020)
ALSSO | 200 | 379.6001 | 379.7373 | 380.0177 | 0.100794 | 131,819 (115,840)
ALSSO | 500 | 379.5922 | 379.7058 | 379.981 | 0.103908 | 260,928 (307,700)
BBBC [33] | | 379.66 | 381.85 | N/A | 1.201 | 13,200
SAHS [31] | | 380.62 | 382.85 | 383.89 | 1.38 | 13,742
TLBO [34] | | 379.632 | 380.20 | 380.83 | 0.41 | 21,542
CA [37] | | 379.69 | 380.86 | N/A | 1.8507 | 18,460
FPA [38] | | 379.095 | 379.534 | N/A | 0.272 | 9029
Table 10. Comparison of optimized designs for the 200-bar truss problem.
Element Group | HPSACO [40] | ABC-AP [36] | SAHB [31] | TLBO [34] | HPSSO [41] | FPA [38] | WEO [39] | ALSSO
1 | 0.1033 | 0.1039 | 0.1540 | 0.1460 | 0.1213 | 0.1425 | 0.1144 | 0.132626
2 | 0.9184 | 0.9463 | 0.9410 | 0.9410 | 0.9426 | 0.9637 | 0.9443 | 1.004183
3 | 0.1202 | 0.1037 | 0.1000 | 0.1000 | 0.1220 | 0.1005 | 0.1310 | 0.100772
4 | 0.1009 | 0.1126 | 0.1000 | 0.1010 | 0.1000 | 0.1000 | 0.1016 | 0.104438
5 | 1.8664 | 1.9520 | 1.9420 | 1.9410 | 2.0143 | 1.9514 | 2.0353 | 1.969623
6 | 0.2826 | 0.293 | 0.3010 | 0.2960 | 0.2800 | 0.2957 | 0.3126 | 0.285843
7 | 0.1000 | 0.1064 | 0.1000 | 0.1000 | 0.1589 | 0.1156 | 0.1679 | 0.145089
8 | 2.9683 | 3.1249 | 3.1080 | 3.1210 | 3.0666 | 3.1133 | 3.1541 | 3.136798
9 | 0.1000 | 0.1077 | 0.1000 | 0.1000 | 0.1002 | 0.1006 | 0.1003 | 0.120883
10 | 3.9456 | 4.1286 | 4.1060 | 4.1730 | 4.0418 | 4.1100 | 4.1005 | 4.124644
11 | 0.3742 | 0.4250 | 0.4090 | 0.4010 | 0.4142 | 0.4165 | 0.4350 | 0.438346
12 | 0.4501 | 0.1046 | 0.1910 | 0.1810 | 0.4852 | 0.1843 | 0.1148 | 0.163695
13 | 4.9603 | 5.4803 | 5.4280 | 5.4230 | 5.4196 | 5.4567 | 5.3823 | 5.514607
14 | 1.0738 | 0.1060 | 0.1000 | 0.1000 | 0.1000 | 0.1000 | 0.1607 | 0.148495
15 | 5.9785 | 6.4853 | 6.4270 | 6.4220 | 6.3749 | 6.4559 | 6.4152 | 6.415737
16 | 0.7863 | 0.5600 | 0.5810 | 0.5710 | 0.6813 | 0.5800 | 0.5629 | 0.592158
17 | 0.7374 | 0.1825 | 0.1510 | 0.1560 | 0.1576 | 0.1547 | 0.4010 | 0.186473
18 | 7.3809 | 8.0445 | 7.9730 | 7.9580 | 8.1447 | 8.0132 | 7.9735 | 8.037395
19 | 0.6674 | 0.1026 | 0.1000 | 0.1000 | 0.1000 | 0.1000 | 0.1092 | 0.130935
20 | 8.3000 | 9.0334 | 8.9740 | 8.9580 | 9.0920 | 9.0135 | 9.0155 | 9.017311
21 | 1.1967 | 0.7844 | 0.7190 | 0.7200 | 0.7462 | 0.7391 | 0.8628 | 0.780634
22 | 1.0000 | 0.7506 | 0.4220 | 0.4780 | 0.2114 | 0.7870 | 0.2220 | 0.312574
23 | 10.8262 | 11.3057 | 10.8920 | 10.8970 | 10.9587 | 11.1795 | 11.0254 | 11.03076
24 | 0.1000 | 0.2208 | 0.1000 | 0.1000 | 0.1000 | 0.1462 | 0.1397 | 0.112562
25 | 11.6976 | 12.2730 | 11.8870 | 11.8970 | 11.9832 | 12.1799 | 12.0340 | 12.00723
26 | 1.3880 | 1.4055 | 1.0400 | 1.0800 | 0.9241 | 1.3424 | 1.0043 | 1.017312
27 | 4.9523 | 5.1600 | 6.6460 | 6.4620 | 6.7676 | 5.4844 | 6.5762 | 6.458830
28 | 8.8000 | 9.9930 | 10.8040 | 10.7990 | 10.9639 | 10.1372 | 10.7265 | 10.66930
29 | 14.6645 | 14.70144 | 13.8700 | 13.9220 | 13.8186 | 14.5262 | 13.9666 | 13.96069
Best weight (lb) | 25,156.5 | 25,533.79 | 25,491.9 | 25,488.15 | 25,698.85 | 25,521.81 | 25,674.83 | 25,569.98
Table 11. Comparison of statistical performance in the 200-bar truss problem.
Method | N | Best | Mean | Worst | SD | NSA
ALSSO | 100 | 25,722.22 | 25,938.99 | 26,743.77 | 303.5379 | 89,655 (88,080)
ALSSO | 200 | 25,617.50 | 25,694.73 | 25,842.59 | 57.9310 | 181,068 (173,280)
ALSSO | 500 | 25,569.98 | 25,624.89 | 25,696.47 | 32.3777 | 453,465 (447,600)
HPSACO [40] | | 25,156.5 | 25,786.2 | 26,421.6 | 830.5 | 9800
ABC-AP [36] | | 25,533.79 | N/A | N/A | N/A | 1,450,000
SAHB [31] | | 25,491.90 | 25,610.20 | 25,799.30 | 141.85 | 14,185
TLBO [34] | | 25,488.15 | 25,533.14 | 25,563.05 | 27.44 | 28,059
HPSSO [41] | | 25,698.85 | 28,386.72 | N/A | 2403 | 14,406
FPA [38] | | 25,521.81 | 25,543.51 | N/A | 18.13 | 10,685
WEO [39] | | 25,674.83 | 26,613.45 | N/A | 702.80 | 19,410
