Article

The Successive Approximation Genetic Algorithm (SAGA) for Optimization Problems with Single Constraint

1 State Key Laboratory of Hydraulic Engineering Simulation and Safety, Tianjin University, Tianjin 300072, China
2 Department of Civil Engineering, Tianjin University, Tianjin 300072, China
3 Department of Civil Engineering, Hebei University of Engineering, Handan 056038, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1928; https://doi.org/10.3390/math11081928
Submission received: 18 March 2023 / Revised: 15 April 2023 / Accepted: 17 April 2023 / Published: 19 April 2023

Abstract

A limited feasible region restricts how freely individuals can evolve and makes constrained optimization problems harder to solve. To overcome difficulties such as having to introduce initial feasible solutions, a novel algorithm called the successive approximation genetic algorithm (SAGA) is proposed: (a) the simple genetic algorithm (SGA) is the main frame; (b) a self-adaptive penalty function is used, whose penalty factor is adjusted automatically according to the proportion of feasible and infeasible solutions; (c) a generation-by-generation approaching strategy and a three-stage evolution are introduced; and (d) dynamically enlarging and reducing the tolerance error of the constraint violation makes it much easier to generate initial feasible solutions. Ten benchmarks and an engineering problem were then adopted to evaluate SAGA in detail. It was compared with the improved dual-population genetic algorithm (IDPGA) and eight other algorithms; the results show that SAGA finds the optimum within 5 s for an equality constraint and within 1 s for an inequality constraint. The largest constraint violation is accurate to at least three decimal places for most problems. For the engineering problem, SAGA obtains a better value, 1.3398, than the eight other algorithms. In conclusion, SAGA is very suitable for solving nonlinear optimization problems with a single constraint and yields more accurate solutions, but it takes longer. In practice, SAGA searches for a better solution along the constraint boundary after several iterations and converges to an acceptable solution early in the evolution. Improving the running speed of SAGA is an important direction for future work.

1. Introduction

As novel and efficient optimization methods, heuristic algorithms have been applied widely in many kinds of optimization areas, such as the sizing, topology and geometry optimization of steel structures [1], the optimization of the seismic behavior of steel planar frames with semi-rigid connections [2] and so on [3,4,5,6,7,8]. Heuristic algorithms have remarkable advantages over mathematical programming methods; for example, derivative information about the objectives, such as the first-order Jacobian matrix or the second-order Hessian matrix, is not needed, and continuous or convex design regions and explicit constraints are not required. The genetic algorithm (GA) described by Holland [9] is a typical and the most widely used heuristic algorithm; it is composed of population initialization, fitness calculation, judgments, iterations, evaluations and updates. Moreover, other intelligent algorithms have also been proposed and utilized to solve various optimization problems, such as the simulated annealing algorithm [10], the particle swarm algorithm [11], neural networks [12], the ant colony algorithm [13], the artificial fish swarm algorithm [14], the firefly algorithm [15], cuckoo search [16], etc.
Genetic algorithms search for the optimum by imitating natural phenomena or drawing on life experience, which has been demonstrated to be very efficient and low-cost in practice. Many studies have investigated and improved genetic algorithms to solve unconstrained and constrained optimization problems and achieved the desired effect. Nonlinear constrained optimization problems are more difficult to deal with than unconstrained ones in practice, because the limited feasible region restricts how freely individuals can evolve, and inappropriate constraint handling can easily cause the algorithm to converge to an infeasible solution or to a feasible solution far from the optimum, or even to fail to converge. A general form of nonlinear constrained optimization problems is shown in Equation (1), where $g_i(x)$ are the inequality constraints, $h_i(x)$ are the equality constraints, and at least one of the objectives and constraints is nonlinear.
$$
\min f(x) \quad \text{s.t.} \quad
\begin{cases}
g_i(x) \le 0, & i = 1, 2, \ldots, m \\
h_i(x) = 0, & i = m+1, m+2, \ldots, n
\end{cases}
\tag{1}
$$
The most common way to deal with constraints is to convert them into the objective by introducing a penalty function. The converted objective is shown in Equation (2), where $G(x)$ is the objective after conversion, $C$ is a penalty factor, $\varphi(x)$ is a combined penalty item and $\varepsilon$ is a small positive number that represents the tolerance error of an equality constraint.
$$
G(x) = f(x) + C\,\varphi(x), \qquad
\varphi(x) = \sum_{i=1}^{n} \varphi_i(x), \qquad
\varphi_i(x) =
\begin{cases}
\max\{0,\ g_i(x)\}, & i = 1, 2, \ldots, m \\
\max\{0,\ |h_i(x)| - \varepsilon\}, & i = m+1, m+2, \ldots, n
\end{cases}
\tag{2}
$$
Thus, the original constrained problem is converted to an unconstrained problem, as shown in Equation (3).
$$
\min G(x)
\tag{3}
$$
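To make the conversion concrete, the following is a minimal Python sketch of Equations (2) and (3); the helper name and argument layout are our own illustration, not the paper's code:

```python
def penalized_objective(x, f, ineq, eq, C, eps=1e-3):
    """Unconstrained objective G(x) = f(x) + C * phi(x) from Equations (2) and (3).

    f    -- objective callable f(x)
    ineq -- callables g_i(x); g_i(x) <= 0 means the constraint is satisfied
    eq   -- callables h_i(x); h_i(x) = 0 means the constraint is satisfied
    C    -- penalty factor
    eps  -- tolerance error for the equality constraints
    """
    phi = sum(max(0.0, g(x)) for g in ineq)            # inequality penalty items
    phi += sum(max(0.0, abs(h(x)) - eps) for h in eq)  # equality penalty items
    return f(x) + C * phi
```

Minimizing `penalized_objective` with any unconstrained method then stands in for the original constrained problem.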
Many studies on constraint handling have been conducted in the literature. Typical penalty functions include static penalty functions [17,18,19], dynamic penalty functions [20,21], annealing penalty functions [20,22,23], self-adaptive penalty functions [24,25,26], etc.; the details are explained below. Furthermore, there are other studies on constraint handling. Deb [27] exploited the GA's population-based approach and its ability to make pair-wise comparisons in a tournament selection operator to devise a penalty function approach that requires no penalty parameter: feasible and infeasible solutions are compared carefully once sufficient feasible solutions are found, and a niching method (along with a controlled mutation operator) is used to maintain diversity among feasible solutions. Shang et al. [28] proposed a hybrid flexible tolerance genetic algorithm (FTGA) to solve nonlinear, multimodal and multi-constraint optimization problems, in which a flexible tolerance method (FTM) is organically merged into an adaptive genetic algorithm (AGA) as one of the AGA operators. Li and Du [29] proposed a boundary simulation method to address inequality constraints for GAs, which can efficiently generate a boundary point set that approximately simulates the boundary of the feasible region; they also proposed a series of genetic operators that abandon or repair infeasible individuals produced during the search. Montemurro et al. [30] proposed the Automatic Dynamic Penalization (ADP) method, which belongs to the category of exterior penalty-based strategies and exploits the information contained in the population at the current generation in order to guide the search through the whole definition domain and to properly evaluate the penalty coefficients. Liang et al. [31] proposed a genetic algorithm that searches the decision space through the arithmetic crossover of feasible and infeasible solutions and performs selection on the feasible and infeasible populations separately, according to fitness and constraint violation, respectively; the algorithm uses boundary mutation on feasible solutions and non-uniform mutation on infeasible solutions and maintains population diversity through dimension mutation. Garcia et al. [32] presented a constraint handling technique (CHT) called Multiple Constraint Ranking (MCR), which extends the rank-based approach of many CHTs by building multiple separate queues based on the values of the objective function and the violation of each constraint; it overcomes difficulties encountered by other techniques on complex problems characterized by several constraints with different orders of magnitude and/or different units. Miettinen et al. [33] studied five penalty-function-based constraint handling techniques for genetic algorithms in global optimization and found that the method of adaptive penalties was the most efficient, while the method of parameter-free penalties was the most reliable. Yuan et al. [34] combined the genetic algorithm with hill-climbing search steps to propose a new algorithm that can be widely applied to a class of global optimization problems for continuous functions with box constraints.
A repair procedure was proposed by Chootinan and Chen [35] and embedded into a simple GA as a special operator, in which the gradient information derived from the constraint set is used to systematically repair infeasible solutions. Without defining membership functions for fuzzy numbers, Lin [36] used the extension principle, interval arithmetic and alpha-cut operations for fuzzy computations, together with a penalty method for constraint violations, to investigate the possibility of applying GAs to fuzzy linear programming problems. Hassanzadeh et al. [37] considered nonlinear optimization problems constrained by a system of fuzzy relation equations and proposed a genetic algorithm with max-product composition to obtain a near-optimal solution for convex or nonconvex solution sets. Abdul-Rahman et al. [38] proposed a Binary Real-coded Genetic Algorithm (BRGA) which combines a cooperative Binary-coded GA (BGA) with a Real-coded GA (RGA), then introduced a modified dynamic penalty function into the architecture of the BRGA and extended it to constrained problems. Tsai and Fu [39] presented two genetic-algorithm-based algorithms that adopt different sampling rules and searching mechanisms to address discrete optimization-via-simulation problems with a single stochastic constraint. Zhan [40] proposed a genetic algorithm based on annealing infeasible-degree selection for constrained optimization problems, which shows good simulation results.
In order to explore the potential of genetic algorithms adequately and develop an algorithm suitable for engineering optimization, this paper proposes a novel and efficient genetic algorithm named the successive approximation genetic algorithm (SAGA) for nonlinear constrained optimization problems with a single constraint; it considers a self-adaptive penalty function, divides the solution space into feasible and infeasible solutions and introduces a generation-by-generation approaching strategy. The performance of SAGA was evaluated in detail through a series of mathematical and engineering optimization problems, and a comparison with other existing algorithms further showed that SAGA has better accuracy and robustness. Considering that existing algorithm scripts are mostly written in MATLAB while the open-source language Python is growing in popularity, all scripts in this paper were written in Python and its third-party libraries, including the main program of SAGA (Python and Numpy), the images of the objectives and contour lines or surfaces (Numpy, Matplotlib and Mayavi) and the writing of the results to text files (Python and Pandas).

2. Related Work on Typical Penalty Functions

Just a few typical penalty functions are listed here. The equality constraint $h_i(x) = 0$ is usually converted to $|h_i(x)| - \varepsilon \le 0$, where $\varepsilon$ is a small positive number. This section briefly summarizes some commonly used penalty function methods.

2.1. Static Penalty Functions

The penalty function proposed by Homaifar, Qi and Lai [17] is shown in Equation (4). The constraint violation is divided into several levels; $R_i$ is a penalty factor and $M$ is the number of constraints after converting the equality constraints to inequality constraints.
$$
G(x) = f(x) + \sum_{i=1}^{M} \left( R_i \times \max\{0,\ g_i(x)\}^2 \right)
\tag{4}
$$
Like a unilateral penalty, Hoffmeister and Sprave [18] proposed the penalty function shown in Equation (5), adopting the Heaviside function $H(x) = \begin{cases} 0, & x \le 0 \\ 1, & x > 0 \end{cases}$.
$$
G(x) = f(x) \pm \sum_{i=0}^{m} H(g_i(x))\, g_i(x)^2
\tag{5}
$$
Kuri and Quezada [19] proposed the penalty function shown in Equation (6), where $n$ is the number of constraints, $q$ is the number of constraints met and $K$ is a very large number; they suggested $10^9$.
$$
G(x) =
\begin{cases}
f(x), & \text{if feasible} \\
K - \displaystyle\sum_{i=1}^{q} \frac{K}{n}, & \text{if infeasible}
\end{cases}
\tag{6}
$$

2.2. Dynamic Penalty Functions

Joines and Houck [20] proposed the dynamic penalty function shown in Equation (7), where $C$, $\alpha$ and $\beta$ are three constants (they suggested 0.5, 1 or 2, and 1 or 2, respectively), $t$ is the current generation and $D_i(x)$ and $D_j(x)$ are the inequality and equality penalty items, respectively.
$$
\begin{aligned}
G(x) &= f(x) + (C t)^{\alpha} \times SVC(\beta, x) \\
SVC(\beta, x) &= \sum_{i=1}^{m} D_i^{\beta}(x) + \sum_{j=m+1}^{n} D_j(x) \\
D_i(x) &= \begin{cases} 0, & g_i(x) \le 0 \\ |g_i(x)|, & \text{else} \end{cases}, \quad i = 1, 2, \ldots, m \\
D_j(x) &= \begin{cases} 0, & |h_j(x)| \le \varepsilon \\ |h_j(x)|, & \text{else} \end{cases}, \quad j = m+1, m+2, \ldots, n
\end{aligned}
\tag{7}
$$
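As an illustration, Equation (7) translates directly into code; the sketch below uses the parameter values suggested by Joines and Houck and is our own paraphrase rather than their implementation:

```python
def dynamic_penalty(x, f, ineq, eq, t, C=0.5, alpha=2, beta=2, eps=1e-3):
    """Joines and Houck's dynamic penalty, Equation (7).

    The penalty weight (C * t)**alpha grows with the generation t, so
    constraint violations are punished more and more severely."""
    d_ineq = [max(0.0, g(x)) for g in ineq]                     # D_i(x)
    d_eq = [abs(h(x)) if abs(h(x)) > eps else 0.0 for h in eq]  # D_j(x)
    svc = sum(d ** beta for d in d_ineq) + sum(d_eq)
    return f(x) + (C * t) ** alpha * svc
```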

2.3. Annealing Penalty Functions

Joines and Houck [20] proposed another annealing penalty function, shown in Equation (8), where the parameters have the same meanings as above and $C$, $\alpha$ and $\beta$ are suggested to be 0.05, 1 and 1, respectively.
$$
G(x) = f(x) + e^{(C t)^{\alpha}} \times SVC(\beta, x)
\tag{8}
$$
Michalewicz and Attia [22] divided constraints into four categories: linear equality constraints, nonlinear equality constraints, linear inequality constraints and nonlinear inequality constraints; then, they proposed the annealing penalty function shown in Equation (9), where $\tau$ is a variable parameter that gradually decreases as the iterations proceed.
$$
G(x) = f(x) + \frac{1}{2\tau} \sum_{i=1}^{n} \phi_i^2(x), \qquad
\phi_i(x) =
\begin{cases}
\max\{0,\ g_i(x)\}, & i = 1, 2, \ldots, m \\
|h_i(x)|, & i = m+1, m+2, \ldots, n
\end{cases}
\tag{9}
$$
Carlson and Shonkwiler [23] proposed a similar annealing penalty function, shown in Equation (10), where $M$ is a distance parameter and $T$ is a time parameter that gradually decreases to zero.
$$
G(x) = A\, f(x), \qquad A = e^{M/T}, \qquad T = \frac{1}{t}
\tag{10}
$$

2.4. Self-Adaptive Penalty Functions

Hadj-Alouane and Bean [26] utilized the feedback of the search process to propose a new self-adaptive penalty function, shown in Equation (11), where $\alpha_1 > \alpha_2 > 1$; case 1 means that the best solutions of the previous $k$ iterations all belong to the feasible region, and case 2 means that they all belong to the infeasible region.
$$
G(x) = f(x) + \lambda(t) \left( \sum_{i=1}^{m} g_i^2(x) + \sum_{i=m+1}^{n} h_i^2(x) \right), \qquad
\lambda(t+1) =
\begin{cases}
\dfrac{1}{\alpha_1} \lambda(t), & \text{if case 1} \\
\alpha_2\, \lambda(t), & \text{if case 2} \\
\lambda(t), & \text{else}
\end{cases}
\tag{11}
$$

3. Methodology

The penalty functions listed above have achieved good results in actual applications, but they also face some difficulties; for instance, accurate parameters are difficult to select, and an initial feasible solution that satisfies many constraints may need to be introduced beforehand. The successive approximation genetic algorithm (SAGA) proposed in this paper adopts a series of simple and efficient strategies and gradually searches for the global optimum over the iterations. The main frame of SAGA is still a simple genetic algorithm (SGA). This section explains each part of SAGA in detail.

3.1. The Penalty Function of SAGA

The penalty function form in Equation (2) is still adopted for constraint handling in SAGA, and a self-adaptive penalty function is utilized in which the selection of the penalty factor $C$ is critical. $C$ should be larger early in the evolution, when there are fewer feasible solutions, and should gradually decrease as the iterations go on. The penalty factor $C$ selected in this paper follows Cai [41], as shown in Equation (12), where $\rho$ represents the proportion of feasible solutions, $N_{fea}$ and $N_{all}$ represent the number of feasible solutions and of all solutions, respectively, and $\alpha$ controls the range of the penalty factor; a large positive integer is generally preferred.
$$
C(\rho) = e^{\alpha(1-\rho)} - 1, \qquad \rho = \frac{N_{fea}}{N_{all}}
\tag{12}
$$
The penalty factor $C$ changes with $\rho$: when $\rho$ equals zero, no feasible solutions exist and $C$ reaches its maximum; when $\rho$ equals one, all solutions are feasible and $C$ reaches its minimum of zero, which means no individual needs to be penalized. One great advantage of such a penalty function is that the penalty factor is neither too large nor too small, changes dynamically with the population and does not need to be specified manually.
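Equation (12) is a one-liner in code. A minimal sketch (function name ours), using the value α = 7 adopted later in Section 4.2:

```python
import math

def penalty_factor(n_fea, n_all, alpha=7):
    """Self-adaptive penalty factor C(rho) of Equation (12).

    rho = 0 (no feasible solution) gives the maximum, e**alpha - 1;
    rho = 1 (all solutions feasible) gives C = 0, i.e., no penalty."""
    rho = n_fea / n_all
    return math.exp(alpha * (1.0 - rho)) - 1.0
```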

3.2. The Successive Approximation Strategy of SAGA

Many problems have shown that the global optimum is often located on the boundary of the constraints. Some algorithms simply discard infeasible solutions during the solution process, which seems simple but actually makes it difficult to find an exact optimal solution. Other algorithms take infeasible solutions into account in order to exploit all solutions fully, which may be complicated to operate.
It is well known that equality constraints are more difficult to handle; an optimization problem with an equality constraint is shown in Figure 1, where the area enclosed by the outer black rectangle represents the design space and the inner black ring represents the equality constraint.
Evidently, it is very difficult, even impossible, to generate a solution located exactly on the equality constraint by randomly initializing the population. For the inequality constraints $|h_i(x)| - \varepsilon \le 0$ obtained from the conversion of equality constraints, $\varepsilon$ is used to distinguish feasible and infeasible solutions. The feasible region is transformed from a line into an area, but a small positive $\varepsilon$ means this area is relatively narrow, so a larger popsize is required to create the possibility that at least one solution lies in the feasible region after random initialization. Conversely, a larger $\varepsilon$ cannot be used, in order to guarantee higher computational accuracy. Infeasible solutions are not useless, however: a better solution may be produced by combining a feasible solution with an infeasible one, two feasible solutions or two infeasible solutions.
The successive approximation strategy of SAGA uses a dynamically changing tolerance error of the constraints. In the preliminary stage of evolution, a relatively large $\varepsilon$ is given, which makes it very easy for the randomly initialized population to include several feasible solutions, and better solutions can be generated from these temporary feasible solutions. Then, $\varepsilon$ gradually decreases with the iterations until it finally reaches the required precision. The whole process is shown in Figure 1; green points and red stars represent feasible and infeasible solutions, respectively, the area between the two blue rings represents the current temporary feasible region after the conversion of the equality constraint and the gray rings represent the boundaries of the temporary feasible regions of other generations. From (b) to (d) in Figure 1, $\varepsilon$ gradually decreases, the temporary feasible region gradually shrinks and all solutions gradually approach the original equality constraint.
Considering that the maximum number of generations is relatively large, updating $\varepsilon$ after every generation would be too cumbersome; meanwhile, shrinking the temporary feasible region too rapidly may lead to the loss of feasible solutions. Therefore, $\varepsilon$ is recommended to be set to 0.8 times its previous value after every five generations.
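The schedule can be written compactly; the sketch below (names and the floor at the target precision are our own reading of Section 4.2) reproduces the 0.8-times-every-five-generations rule:

```python
def tolerance(gen, eps0=0.1, ratio=0.8, period=5, eps_final=0.001):
    """Tolerance error of the constraint violation at generation `gen`:
    eps0 shrinks by `ratio` every `period` generations, never dropping
    below the target precision eps_final."""
    return max(eps0 * ratio ** (gen // period), eps_final)
```

With the settings of Section 4.2 (100 generations), `tolerance(100)` gives 0.1 × 0.8²⁰ ≈ 0.00115, close to the final tolerance of 0.001.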

3.3. Three-Stage Evolution of SAGA

To make full use of all solutions, a new population of SAGA is composed of three parts: individuals evolving from two feasible solutions (the first part), from a feasible and an infeasible solution (the second part) and from two infeasible solutions (the third part). Meanwhile, the whole evolution process is divided into three stages: the first third, the middle third and the last third of the generations.
The number of feasible solutions produced by randomly initializing the population is very small, and in the preliminary stage of evolution feasible solutions are more likely to be produced by evolution from infeasible solutions, as can be seen in Figure 1, so the second and third parts account for a larger proportion of the evolution. As the iterations go on, the number of feasible solutions gradually increases and the algorithm becomes more likely to produce feasible solutions by evolution from feasible solutions, so the proportion of the first part in the total evolutionary population gradually increases and the proportion of the third part gradually decreases. In the later stage of evolution, there are many feasible solutions, so the proportion of the first part increases further and the proportion of the third part decreases further (it is set to zero in this paper). The proportions of each part in the different stages are set as shown in Table 1.
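A sketch of how the quotas of Table 1 translate into offspring counts; the helper names, the 0-based generation index and the rounding rule are our own illustration:

```python
# Fractions of new individuals bred from (feasible, feasible),
# (feasible, infeasible) and (infeasible, infeasible) pairs, per Table 1.
STAGE_PROPORTIONS = {1: (1/5, 2/5, 2/5),
                     2: (2/5, 2/5, 1/5),
                     3: (3/5, 2/5, 0.0)}

def stage_of(gen, max_gen):
    """First, middle or last third of the generations (Section 3.3)."""
    return min(3, 3 * gen // max_gen + 1)

def offspring_counts(gen, max_gen, popsize):
    ff, fi, _ = STAGE_PROPORTIONS[stage_of(gen, max_gen)]
    n_ff, n_fi = round(ff * popsize), round(fi * popsize)
    return n_ff, n_fi, popsize - n_ff - n_fi  # remainder to the third part
```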

3.4. The Whole Progress of SAGA

SAGA is still mainly composed of population initialization, judgments, iterations, updates and evaluations. Simple and efficient real encoding is adopted in SAGA.
When initializing the population, firstly, two empty sets (set1 and set2) are defined to store feasible and infeasible solutions, respectively. Then, the population is initialized several times; the feasible solutions obtained after each initialization are saved into set1 until the number of feasible solutions is not less than 10, and only the infeasible solutions obtained after the last initialization are saved into set2. Finally, set1 is taken as the initial feasible solution set, and (popsize − length of set1) solutions are randomly selected from set2 without repetition as the initial infeasible solution set.
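A minimal sketch of this initialization, assuming a `sample()` function that draws one random individual and an `is_feasible()` test based on the current tolerance (both placeholders of ours):

```python
import random

def initialize(popsize, init_size, sample, is_feasible, min_fea=10):
    """Population initialization of SAGA (Section 3.4)."""
    set1, set2 = [], []
    while len(set1) < min_fea:
        batch = [sample() for _ in range(init_size)]
        # Feasible solutions accumulate across draws; infeasible ones
        # are kept from the last draw only.
        set2 = [x for x in batch if not is_feasible(x)]
        set1.extend(x for x in batch if is_feasible(x))
    k = max(0, min(len(set2), popsize - len(set1)))
    return set1, random.sample(set2, k)  # selection without repetition
```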
The fitness of an individual is calculated according to Equation (13), where $c_{\max}$ is a large positive number. Roulette wheel selection is adopted in SAGA.
$$
fit(x) =
\begin{cases}
c_{\max} - G(x), & G(x) < c_{\max} \\
0, & \text{else}
\end{cases}
\tag{13}
$$
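A sketch of the fitness of Equation (13) together with roulette wheel selection (our own minimal version of the standard operator):

```python
import random

def fitness(Gx, c_max):
    """Fitness of Equation (13): larger is better, never negative."""
    return c_max - Gx if Gx < c_max else 0.0

def roulette_select(pop, fits):
    """Roulette wheel selection: pick an individual with probability
    proportional to its fitness."""
    r = random.uniform(0.0, sum(fits))
    acc = 0.0
    for ind, fit in zip(pop, fits):
        acc += fit
        if acc >= r:
            return ind
    return pop[-1]  # guard against floating-point round-off
```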
The recombination approach of Cai [41] is adopted in SAGA, as shown in Equation (14), where $x_i$ and $x_j$ are two parents, $\tilde{x}_i$ and $\tilde{x}_j$ are two offspring and $r_{11}$, $r_{12}$, $r_{21}$ and $r_{22}$ are four random numbers between zero and one.
$$
\begin{cases}
\tilde{x}_i = \dfrac{r_{11}}{r_{11} + r_{12}}\, x_i + \dfrac{r_{12}}{r_{11} + r_{12}}\, x_j \\[2ex]
\tilde{x}_j = \dfrac{r_{21}}{r_{21} + r_{22}}\, x_i + \dfrac{r_{22}}{r_{21} + r_{22}}\, x_j
\end{cases}
\tag{14}
$$
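In code, Equation (14) is a pair of random convex combinations applied component-wise; a minimal list-based sketch for illustration:

```python
import random

def recombine(xi, xj):
    """Recombination of Equation (14): each offspring is a random convex
    combination of the two parents."""
    r11, r12, r21, r22 = (random.random() for _ in range(4))
    child_i = [r11 / (r11 + r12) * a + r12 / (r11 + r12) * b
               for a, b in zip(xi, xj)]
    child_j = [r21 / (r21 + r22) * a + r22 / (r21 + r22) * b
               for a, b in zip(xi, xj)]
    return child_i, child_j
```

Because the weights in each combination sum to one, the offspring always stay inside the box spanned by the two parents.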
The mutation approach of Cai [41] is adopted in SAGA, as shown in Equation (15), where $x_k$ is a parent, $\tilde{x}_k$ is an offspring, $u_k$ and $l_k$ are the upper and lower bounds of a variable, respectively, $T$ is the maximum number of generations and $r$ is a random number between zero and one.
$$
\begin{cases}
\tilde{x}_k = x_k + \dfrac{u_k - l_k}{T}\, r, & r \le 0.5 \\[1ex]
\tilde{x}_k = x_k - \dfrac{u_k - l_k}{T}\, r, & r > 0.5
\end{cases}
\tag{15}
$$
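A matching sketch of Equation (15), assuming the same list-based representation (drawing r independently per component is our reading of the operator):

```python
import random

def mutate(xk, lower, upper, T):
    """Mutation of Equation (15): a step of (u_k - l_k) / T * r per
    component, added when r <= 0.5 and subtracted otherwise."""
    child = []
    for v, lk, uk in zip(xk, lower, upper):
        r = random.random()
        step = (uk - lk) / T * r
        child.append(v + step if r <= 0.5 else v - step)
    return child
```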
During evolution, the solutions are divided into feasible and infeasible solutions after each generation and are appended to the feasible and infeasible solution sets of the previous generation, respectively, so the lengths of set1 and set2 grow. In order to prevent many similar feasible solutions from appearing too early, which may cause the algorithm to converge prematurely to an imprecise feasible solution, three upper limits on the number of feasible solutions involved in evolution are given for the three stages, as shown in Table 2. For example, in the first stage, if the length of set1 is less than 0.4 times the popsize, all feasible solutions in set1 are involved in the next evolution, together with the (popsize − length of set1) best infeasible solutions in set2; conversely, if the length of set1 is more than 0.4 times the popsize, the (0.4 × popsize) best feasible solutions in set1 and the (popsize − 0.4 × popsize) best infeasible solutions in set2 are involved in the next evolution. The other stages work similarly. A feasible solution is judged good or bad by its fitness, and an infeasible solution by its constraint violation.
Finally, all individuals of every generation are re-divided into feasible and infeasible solutions using the target precision. If feasible solutions exist, the one with the largest fitness is selected as the best individual of the current generation; if not, the one with the least constraint violation is selected. The complete process of SAGA is shown in Figure 2.
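A sketch of this per-generation selection of the best individual (helper names ours):

```python
def best_individual(pop, G, violation, eps):
    """Best individual of a generation (Section 3.4): the fittest feasible
    solution if one exists, otherwise the least-violating one."""
    feasible = [x for x in pop if violation(x) <= eps]
    if feasible:
        # Largest fitness corresponds to smallest G(x), per Equation (13).
        return min(feasible, key=G)
    return min(pop, key=violation)
```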

4. Mathematical Tests and Results of SAGA

Ten nonlinear optimization benchmarks, including five problems with an equality constraint and five with an inequality constraint, were utilized to evaluate the performance of SAGA. All the scripts were written in Python, and all the tests were conducted on an Intel Core i7-11370H 3.30 GHz machine with 16 GB RAM, using a single thread under the Windows 10 platform.

4.1. Benchmarks

Five benchmarks with an equality constraint were marked as f1, f2, f3, f4 and f5, and five benchmarks with an inequality constraint were marked as f6, f7, f8, f9 and f10. $n$ in f5 was set as 10 in this paper. The mathematical descriptions and optimums of the ten benchmarks are presented in Table 3.
The contour lines or surfaces of the objectives and the feasible regions of the decision variables of the ten benchmarks were plotted with Matplotlib and Mayavi. In Figure 3, the optimum of every problem is marked with a red point, and the corresponding decision variables are marked in a red transparent box. For f5, $n = 3$ is used to make the problem visualizable. Note that the figure drawn for f3 shows the objective and the constraint. The contour surfaces of f9 and f10 are three-dimensional, so their feasible regions are not marked, for clearer observation; likewise, only half of the contour surfaces of f10 are drawn. Furthermore, most optimums of the ten benchmarks are located at the tangent points (tangent planes) of the contour lines (contour surfaces) and the feasible regions.

4.2. Simulation Settings

To explore the appropriate popsize for SAGA to solve the problems well, popsizes of 200, 300, 400 and 500 were used to run the scripts. In order to obtain more than 10 feasible solutions in a short time during continuous initialization, the popsize used in the initialization was set to a slightly larger number, which was 500 for most of the problems in this paper. The value of $\alpha$ in the penalty factor $C$ was set as 7. The probabilities of recombination and mutation were set as 0.65 and 0.05, respectively. The maximum number of generations was set as 100. The initial tolerance error $\varepsilon_0$ was set as 0.1; since $0.1 \times 0.8^{100/5} \approx 0.00115$, the final tolerance error $\varepsilon_u$ was set as 0.001. The stopping criterion was that the maximum number of generations was reached.
All scripts of SAGA were run 30 times consecutively and the results were recorded.

4.3. Simulation Results

The best result of the 30 runs is defined as the one with the least constraint violation, and the worst result is that with the largest constraint violation. The results and time consumption with different popsizes are shown in Table 4 and Table 5, including the best, worst, mean and standard deviation (SD) of the objectives; the minimum, maximum and mean of the constraint violation; the minimum, maximum and mean errors of the objectives; and the minimum, maximum and mean of the time consumption.
The optimum is located on the boundary of the constraint for f1 to f9 and inside the feasible region for f10. In general, for most problems, the best and worst objectives of the 30 runs change only slightly as the popsize increases. In reality, the means of the 30 runs do not mean much, because the objective of a solution with a large constraint violation may be closer to the theoretical optimum. The results also show that the optimums of the 30 runs have small standard deviations; in other words, the proposed SAGA performs with great robustness. Correspondingly, the constraint violation of a solution is more crucial, as it directly determines the quality of the solution. The largest constraint violation among the ten problems is 0.010938; the others are accurate to at least three decimal places.
Taking the tolerance error of the constraint violation and the accuracy of the objective into account at the same time, the appropriate popsize for each problem can be determined from Table 4. For the problems with an equality constraint, the suitable popsize is 300 for f1, 200 for f2 and f3, and 400 for f4 and f5. For the problems with an inequality constraint, the suitable popsize is 200 for f6 to f10, which guarantees that the constraint violation is accurate to three decimal places and the objective to two.
The errors of the objectives and the time consumption corresponding to the appropriate popsize are given in Table 5.
All errors corresponding to the appropriate popsize listed in Table 5 are very small. For the problems with an equality constraint, SAGA takes an average of 4.73, 2.31, 0.78, 2.84 and 3.27 s to search for the optimums. For the problems with an inequality constraint, SAGA takes an average of 0.73, 0.72, 0.75, 0.77 and 0.84 s. To sum up, SAGA solves the problems with an equality constraint within 5 s and those with an inequality constraint within 1 s.
For each problem with the appropriate popsize, the decision variables corresponding to the best and worst constraint violations of the 30 runs are presented in Table 6. The decision variables of both the best and the worst of the 30 best individuals are very close to the theoretical solution, which again demonstrates the effectiveness and reliability of SAGA.

4.4. Evolution Figures

In order to show the entire evolution process of SAGA more intuitively, for each problem with the appropriate popsize, the evolution of the best and worst objectives and the corresponding constraint violations of the 30 runs are plotted in Figure 4 and Figure 5. For example, SAGA-300-B in a figure means that the appropriate popsize is 300 and the evolution shown is the best of the 30 runs; similarly, W means it is the worst of the 30 runs.
For the problems with an equality constraint, the results show that the objective fluctuates in the early stages of evolution and gradually levels off in the late stages; the general tendency is that SAGA converges over the iterations. Referring to the evolution of the constraint violation, it is easy to see that the violation constantly oscillates near zero in the early stages and converges to zero at last, which indicates that SAGA is searching for better and better optimums near the boundary of the equality constraint. These results prove once again that the successive approximation strategy of SAGA is highly efficient.
For the problems with an inequality constraint, the feasible region is an area, not a line, which makes finding feasible solutions much easier, so the objective fluctuates only slightly in the early stages of evolution and gradually levels off in the late stages. Similarly, SAGA converges over the iterations in general. The plotted constraint violation is never less than zero and gradually trends towards zero amid fluctuations. In actuality, the constraint violation may also be negative during the evolution, which means the corresponding solution satisfies the inequality constraint, so such values are plotted as zero in the figures above. These results also demonstrate that the solutions become more and more accurate and that SAGA is highly efficient.
Moreover, combined with the evolution figures of the objectives and constraint violation, it is evident that SAGA converges to an acceptable solution before the maximum number of generations is reached.
Finally, as representatives of the optimization problems with an equality constraint and an inequality constraint, the scripts for f4 and f7 with a popsize of 200 were run once again and the distributions of the population during evolution were plotted to observe the convergence of SAGA. Only the distributions of the first, sixth, twelfth, twentieth, fiftieth and hundredth generations are shown in Figure 6 and Figure 7, to describe the process briefly; the orange pluses represent all individuals of the current generation, the green star represents the best individual of the current generation and the red point represents the theoretical optimum.
The results show that all individuals are at first randomly and discretely distributed over the whole design space, and they gradually move towards the boundary of the constraint over the iterations. Most individuals are near the boundary after the sixth generation; in other words, from then on SAGA searches along the boundary for a better solution that satisfies the constraint more closely. In the end, all individuals cluster near the theoretical optimum. These results again prove that SAGA solves the problems quickly and efficiently.
Unlike in Figure 6, the feasible region in Figure 7 is an area, and in the early stages most individuals are distributed inside it. The other conclusions are consistent with the above.

5. Comparison with the Improved DPGA

Next, the improved dual-population genetic algorithm (IDPGA) was utilized as a comparison for the performance of SAGA, with the computing platform the same as above. Two improvement strategies were adopted: a random dynamic immigration operator and the retention of the best individuals. Different popsizes and iteration counts were used for the equality constraint problems, while both the popsize and the number of iterations were taken as 100 for the inequality constraint problems. Finally, the tolerance error for the equality constraints was set as 0.01.

5.1. Simulation Results of IDPGA

Similarly, the results of 30 runs for the objective, constraint violation, error, time consumption and decision variables are listed in Table 7, Table 8 and Table 9.
For equality constraint problems, IDPGA behaves better with a larger popsize or more iterations. For inequality constraint problems, IDPGA reaches satisfying solutions with a small popsize and few iterations. The constraint violation of all results is less than the set value (0.01). Similarly, IDPGA also performs with great robustness. IDPGA takes less than 0.7 s for equality constraint problems and less than 0.1 s for inequality constraint problems to search for the optimums, and the decision variables are close to the theoretical solutions.

5.2. Evolution Figures of IDPGA

The evolution processes of the best and worst results with the appropriate popsize and iterations are plotted in Figure 8 and Figure 9. For example, IDPGA-B in a figure means that the evolution shown is the best of the 30 runs; similarly, W means it is the worst of the 30 runs.
For all ten constrained problems, the results show that the objective does not fluctuate in the early stages but moves in one direction and gradually converges to the optimum. Similarly, the constraint violation oscillates in the early stages and converges near zero at last.

5.3. Comparison of Two Algorithms

In order to compare the two algorithms, the best and worst results corresponding to the appropriate popsize and iterations were selected; their errors and constraint violations are shown in Figure 10. Apparently, SAGA obtains more precise solutions with smaller constraint violations than IDPGA. However, as shown above, IDPGA takes much less time to search than SAGA.

5.4. An Engineering Problem

An engineering optimization problem was solved by the two algorithms. The problem is to minimize the weight of a stepped cantilever beam under a vertical concentrated force, as shown in Figure 11; the constraint is an upper limit on the vertical displacement of the free end. The thickness of each element is assumed constant, and the decision variables are the widths of the elements. The mathematical expression is shown in Equation (16).
$$
\min f(x) = 0.06224 (x_1 + x_2 + x_3 + x_4 + x_5) \quad
\text{s.t.} \quad
\begin{cases}
g(x) = \dfrac{61}{x_1^3} + \dfrac{37}{x_2^3} + \dfrac{19}{x_3^3} + \dfrac{7}{x_4^3} + \dfrac{1}{x_5^3} - 1 \le 0 \\
0.01 \le x_i \le 100 \quad (i = 1, 2, 3, 4, 5)
\end{cases}
\tag{16}
$$
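For reference, Equation (16) in code form (function names ours); a candidate design can be checked by evaluating both callables:

```python
def beam_weight(x):
    """Objective of Equation (16): weight of the stepped cantilever beam."""
    return 0.06224 * sum(x)

def beam_constraint(x):
    """g(x) <= 0 of Equation (16): limit on the tip deflection."""
    coef = (61, 37, 19, 7, 1)
    return sum(c / xi ** 3 for c, xi in zip(coef, x)) - 1.0
```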
All results of two algorithms are listed in Table 10 and Table 11.
If the constraint is required to be strictly satisfied, IDPGA obtains a best value of 1.3440 and SAGA obtains 1.3398. In reference [42], eight algorithms achieved the best values shown in Table 12; SAGA thus obtains a better value than these eight algorithms. If the constraint is relaxed appropriately, IDPGA can reach a value of 1.3344, which can be acceptable in engineering. Furthermore, IDPGA solves the problem much faster than SAGA.

6. Conclusions

A new algorithm named the successive approximation genetic algorithm (SAGA) has been proposed by introducing several strategies. After solving a series of mathematical and engineering problems and comparing it with nine other algorithms, we showed that SAGA is highly accurate and effective in solving optimization problems with a single constraint, but at a high time cost. There are several ways in which SAGA could be improved in the future. One is to speed up its search process; we are trying to combine the advantages of SAGA and particle swarm optimization to greatly improve its solving efficiency. Another concerns Table 1: the proportion of individuals of each part in each stage is determined artificially after trial calculations. Although the results for the problems in this paper are all satisfying, these proportions undoubtedly need to be evaluated in detail in the future. Finally, most engineering problems have many constraints, so another direction is to extend SAGA to optimization problems with multiple, even many, constraints. In this case, if the equality constraints are not properly scaled, the algorithm may easily fall into a local optimum or even stagnate, which is the direction we are currently exploring.

Author Contributions

Formal analysis, X.X.; Investigation, H.L.; Methodology, Z.C.; Software, X.X.; Supervision, Z.C.; Validation, H.L.; Visualization, Z.C.; Writing—original draft, X.X.; Writing—review and editing, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research was under the support of the National Key R&D Program of China (2019YFD1101005).

Data Availability Statement

All scripts of algorithms were completed in Python and all contours were plotted with Matplotlib and Mayavi. Some or all data, models or code that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

On behalf of all authors, the corresponding author states that there are no conflicts of interest.

References

  1. Ashtari, P.; Barzegar, F. Accelerating fuzzy genetic algorithm for the optimization of steel structures. Struct. Multidiscip. Optim. 2012, 45, 275–285. [Google Scholar] [CrossRef]
  2. Oskouei, A.V.; Fard, S.S.; Aksogan, O. Using genetic algorithm for the optimization of seismic behavior of steel planar frames with semi-rigid connections. Struct. Multidiscip. Optim. 2012, 45, 287–302. [Google Scholar] [CrossRef]
  3. Kers, J.; Majak, J.; Goljandin, D.; Gregor, A.; Malmstein, M.; Vilsaar, K. Extremes of apparent and tap densities of recovered GFRP filler materials. Compos. Struct. 2010, 92, 2097–2101. [Google Scholar] [CrossRef]
  4. Majak, J.; Pohlak, M. Decomposition method for solving optimal material orientation problems. Compos. Struct. 2010, 92, 1839–1845. [Google Scholar] [CrossRef]
  5. Wang, X.; Yeo, C.S.; Buyya, R.; Su, J. Optimizing the makespan and reliability for workflow applications with reputation and a look-ahead genetic algorithm. Future Gener. Comput. Syst. 2011, 27, 1124–1134. [Google Scholar] [CrossRef]
  6. Majak, J.; Pohlak, M.; Eerme, M.; Velsker, T. Design of car frontal protection system using neural networks and genetic algorithm. Mechanika 2012, 18, 453–460. [Google Scholar] [CrossRef]
  7. Qiu, Q.; Cui, L.; Gao, H.; Yi, H. Optimal allocation of units in sequential probability series systems. Reliab. Eng. Syst. Saf. 2018, 169, 351–363. [Google Scholar] [CrossRef]
  8. Yang, A.; Qiu, Q.; Zhu, M.; Cui, L.; Chen, W.; Chen, J. Condition-based maintenance strategy for redundant systems with arbitrary structures using improved reinforcement learning. Reliab. Eng. Syst. Saf. 2022, 225, 108643. [Google Scholar] [CrossRef]
  9. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  10. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  11. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  12. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
  13. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the First European Conference on Artificial Life, Paris, France, 11–13 December 1991; pp. 134–142. [Google Scholar]
  14. Li, X.L.; Shao, Z.J.; Qian, J.X. An optimizing method based on autonomous animats: Fish-swarm algorithm. Syst. Eng.—Theory Pract. 2002, 22, 32–38. (In Chinese) [Google Scholar]
  15. Yang, X.-S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Beckington, UK, 2010. [Google Scholar]
  16. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), New Delhi, India, 9–11 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 210–214. [Google Scholar]
  17. Homaifar, A.; Qi, C.X.; Lai, S.H. Constrained Optimization via Genetic Algorithms. Simulation 1994, 62, 242–253. [Google Scholar] [CrossRef]
  18. Hoffmeister, F.A.; Sprave, J. Problem-Independent Handling of Constraints by Use of Metric Penalty Functions. In Evolutionary Programming; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
  19. Kuri, A.; Quezada, C. A universal eclectic genetic algorithm for constrained optimization. In Proceedings of the 6th European Congress on Intelligent Techniques & Soft Computing, EUFIT’98, Aachen, Germany, 7–10 September 1998. [Google Scholar]
  20. Joines, J.A.; Houck, C.R. On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA’s. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Orlando, FL, USA, 27–29 June 1994; Volume 2, pp. 579–584. [Google Scholar]
  21. Kazarlis, S.; Petridis, V. Varying Fitness Functions in Genetic Algorithms: Studying the Rate of Increase of the Dynamic Penalty Terms; Springer: Berlin/Heidelberg, Germany, 1998; pp. 211–220. [Google Scholar]
  22. Michalewicz, Z.; Attia, N. Evolutionary Optimization of Constrained Problems. In Evolutionary Optimization of Constrained Problems; Springer: Berlin/Heidelberg, Germany, 1994. [Google Scholar]
  23. Carlson, S.E.; Shonkwiler, R. Annealing a genetic algorithm over constraints. In Proceedings of the SMC’98 Conference Proceedings, 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.98CH36218), Urbana, IL, USA, 14 October 1998; Volume 4, pp. 3931–3936. [Google Scholar]
  24. Smith, A.E.; Tate, D.M. Genetic Optimization Using a Penalty Function. In Proceedings of the 5th International Conference on Genetic Algorithms, Urbana-Champaign, IL, USA, 17–21 July 1993; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1993; pp. 499–505. [Google Scholar]
  25. Coit, D.W.; Smith, A.E. Penalty guided genetic search for reliability design optimization. Comput. Ind. Eng. 1996, 30, 895–904. [Google Scholar] [CrossRef]
  26. Hadj-Alouane, A.B.; Bean, J.C. A Genetic Algorithm for the Multiple-Choice Integer Program. Oper. Res. 1997, 45, 92–101. [Google Scholar] [CrossRef]
  27. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  28. Shang, W.F.; Zhao, S.D.; Shen, Y.J. A flexible tolerance genetic algorithm for optimal problems with nonlinear equality constraints. Adv. Eng. Inform. 2009, 23, 253–264. [Google Scholar] [CrossRef]
  29. Li, X.; Du, G. Inequality constraint handling in genetic algorithms using a boundary simulation method. Comput. Oper. Res. 2012, 39, 521–540. [Google Scholar] [CrossRef]
  30. Montemurro, M.; Vincenti, A.; Vannucci, P. The Automatic Dynamic Penalisation method (ADP) for handling constraints with genetic algorithms. Comput. Methods Appl. Mech. Eng. 2013, 256, 70–87. [Google Scholar] [CrossRef]
  31. Liang, X.M.; Qin, H.Y.; Long, W. Genetic Algorithm for Solving Constrained Optimization Problem. Comput. Eng. 2010, 36, 147–149. (In Chinese) [Google Scholar]
  32. Garcia, R.D.; de Lima, B.; Lemonge, A.C.D.; Jacob, B.P. A rank-based constraint handling technique for engineering design optimization problems solved by genetic algorithms. Comput. Struct. 2017, 187, 77–87. [Google Scholar] [CrossRef]
  33. Miettinen, K.; Makela, M.M.; Toivanen, J. Numerical comparison of some penalty-based constraint handling techniques in genetic algorithms. J. Glob. Optim. 2003, 27, 427–446. [Google Scholar] [CrossRef]
  34. Yuan, Q.; He, Z.Q.; Leng, H.N. A hybrid genetic algorithm for a class of global optimization problems with box constraints. Appl. Math. Comput. 2008, 197, 924–929. [Google Scholar] [CrossRef]
  35. Chootinan, P.; Chen, A. Constraint handling in genetic algorithms using a gradient-based repair method. Comput. Oper. Res. 2006, 33, 2263–2281. [Google Scholar] [CrossRef]
  36. Lin, F.T. A genetic algorithm for linear programming with fuzzy constraints. J. Inf. Sci. Eng. 2008, 24, 801–817. [Google Scholar]
  37. Hassanzadeh, R.; Khorram, E.; Mahdavi, I.; Mahdavi-Amiri, N. A genetic algorithm for optimization problems with fuzzy relation constraints using max-product composition. Appl. Soft Comput. 2011, 11, 551–560. [Google Scholar] [CrossRef]
  38. Abdul-Rahman, O.A.; Munetomo, M.; Akama, K. An adaptive parameter binary-real coded genetic algorithm for constraint optimization problems: Performance analysis and estimation of optimal control parameters. Inf. Sci. 2013, 233, 54–86. [Google Scholar] [CrossRef]
  39. Tsai, S.C.; Fu, S.Y. Genetic-algorithm-based simulation optimization considering a single stochastic constraint. Eur. J. Oper. Res. 2014, 236, 113–125. [Google Scholar] [CrossRef]
  40. Zhan, S.C. Genetic Algorithm for Constrained Optimization Problems Which is Based on the Annealing Infeasible Degree. J. Basic Sci. Eng. 2004, 12, 299–304. (In Chinese) [Google Scholar]
  41. Cai, H.L. Research and Application of Penalty Function Method in Constrained Optimization. Master's Thesis, East China Normal University, Shanghai, China, 2015. (In Chinese). [Google Scholar]
  42. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of evolution of SAGA.
Figure 2. The flowchart of SAGA.
Figure 3. The contour lines or surfaces and feasible regions of the ten benchmarks.
Figure 4. The evolution figures for f1 to f5.
Figure 5. The evolution figures for f6 to f10.
Figure 6. The distributions of population for f4.
Figure 7. The distributions of population for f7.
Figure 8. The evolution figures for f1 to f5.
Figure 9. The evolution figures for f6 to f10.
Figure 10. Comparison of error and constraint violation.
Figure 11. The diagram of a stepped cantilever beam.
Table 1. The proportion of individuals produced by each part in different stages.

| Stage | Feasible + Feasible | Feasible + Infeasible | Infeasible + Infeasible |
|-------|---------------------|-----------------------|-------------------------|
| 1 | 1/5 | 2/5 | 2/5 |
| 2 | 2/5 | 2/5 | 1/5 |
| 3 | 3/5 | 2/5 | 0 |
Table 2. The upper limits on the number of feasible solutions involved in evolution.

| Stage | Upper Limit |
|-------|-------------|
| 1 | 0.4 × popsize |
| 2 | 0.6 × popsize |
| 3 | 0.8 × popsize |
Table 3. The mathematical expressions of ten benchmarks.

| Benchmark | Mathematical Form | Optimum |
|-----------|-------------------|---------|
| f1 | $\min f(x) = x_1^2 + x_2^2$, s.t. $h(x) = x_1 + x_2 - 2 = 0$, $-10 \le x_i \le 10\ (i = 1, 2)$ | $f(1, 1) = 2$ |
| f2 | $\min f(x) = x_1^2 + (x_2 - 1)^2$, s.t. $h(x) = x_2 - x_1^2 = 0$, $-1 \le x_i \le 1\ (i = 1, 2)$ | $f(\pm\sqrt{2}/2,\ 1/2) = 3/4$ |
| f3 | $\min f(x) = -2x_1^3 + 12x_1^2 - 16x_1$, s.t. $h(x) = x_1^2 - 4x_1 + x_2 = 0$, $0 \le x_i \le 5\ (i = 1, 2)$ | $f(2 - 2\sqrt{3}/3,\ 8/3) = -32\sqrt{3}/9$ |
| f4 | $\min f(x) = (x_1 + 4)(x_2 + 2) - 128$, s.t. $h(x) = x_1 x_2 - 128 = 0$, $0 \le x_i \le 20\ (i = 1, 2)$ | $f(16, 8) = 72$ |
| f5 | $\min f(x) = -(\sqrt{n})^n \prod_{i=1}^{n} x_i$, s.t. $h(x) = \sum_{i=1}^{n} x_i^2 - 1 = 0$, $0 \le x_i \le 1\ (i = 1, 2, \ldots, n)$ | $f(1/\sqrt{n}, \ldots, 1/\sqrt{n}) = -1$ |
| f6 | $\min f(x) = x_1^2 + x_2$, s.t. $g(x) = 2x_1 - x_2 - 5 \le 0$, $-10 \le x_i \le 10\ (i = 1, 2)$ | $f(-1, -7) = -6$ |
| f7 | $\min f(x) = -\sqrt{x_1 x_2}$, s.t. $g(x) = x_1 + 2x_2 - 12 \le 0$, $0 \le x_i \le 10\ (i = 1, 2)$ | $f(6, 3) = -3\sqrt{2}$ |
| f8 | $\min f(x) = x_1 + x_2$, s.t. $g(x) = 2x_1^2 + x_2^2 - 54 \le 0$, $-10 \le x_i \le 10\ (i = 1, 2)$ | $f(-3, -6) = -9$ |
| f9 | $\min f(x) = -x_1 x_2 x_3$, s.t. $g(x) = x_1 x_2 + 2x_2 x_3 + 2x_3 x_1 - 12 \le 0$, $0 \le x_i \le 10\ (i = 1, 2, 3)$ | $f(2, 2, 1) = -4$ |
| f10 | $\min f(x) = \frac{(x_1 - 5)^2 + (x_2 - 5)^2 + (x_3 - 5)^2 - 100}{100}$, s.t. $g(x) = (x_1 - 3)^2 + (x_2 - 4)^2 + (x_3 - 5)^2 - 10 \le 0$, $0 \le x_i \le 10\ (i = 1, 2, 3)$ | $f(5, 5, 5) = -1$ |
Table 4. The objective and constraint violation of 30 runs.

| Benchmark | Popsize | Objective (Best / Worst / Mean / SD) | Constraint Violation (Min. / Max. / Mean) |
|-----------|---------|--------------------------------------|-------------------------------------------|
| f1 | 200 | 2.0109 / 7.7494 / 2.1933 / 1.0318 | 0.000389 / 0.002874 / 0.001385 |
| f1 | 300 | 2.0052 / 1.9962 / 2.0020 / 0.0049 | 0.001442 / 0.001997 / 0.001516 |
| f1 | 400 | 1.9984 / 2.0034 / 2.0015 / 0.0057 | 0.000846 / 0.001530 / 0.001448 |
| f1 | 500 | 2.0065 / 1.9973 / 2.0013 / 0.0042 | 0.001443 / 0.001539 / 0.001463 |
| f2 | 200 | 0.7492 / 0.7489 / 0.7514 / 0.0031 | 0.001095 / 0.001915 / 0.001451 |
| f2 | 300 | 0.7502 / 0.7518 / 0.7510 / 0.0019 | 0.000072 / 0.001568 / 0.001419 |
| f2 | 400 | 0.7515 / 0.7517 / 0.7538 / 0.0128 | 0.000121 / 0.001580 / 0.001387 |
| f2 | 500 | 0.7494 / 0.7488 / 0.7505 / 0.0029 | 0.000650 / 0.001528 / 0.001403 |
| f3 | 200 | −6.1584 / −6.1584 / −6.1584 / 0.0000 | 0.000086 / 0.001147 / 0.000614 |
| f3 | 300 | −6.1584 / −6.1574 / −6.1562 / 0.0109 | 0.000002 / 0.002831 / 0.000655 |
| f3 | 400 | −6.1584 / −6.1584 / −6.1584 / 0.0000 | 0.000011 / 0.002355 / 0.000623 |
| f3 | 500 | −6.1584 / −6.1584 / −6.1584 / 0.0000 | 0.000067 / 0.000960 / 0.000489 |
| f4 | 200 | 72.0107 / 72.1093 / 72.4091 / 1.5158 | 0.000208 / 0.010938 / 0.002718 |
| f4 | 300 | 72.1234 / 72.2529 / 72.0858 / 0.1543 | 0.000399 / 0.005289 / 0.001888 |
| f4 | 400 | 72.0028 / 71.9974 / 72.1036 / 0.2258 | 0.000428 / 0.002846 / 0.001395 |
| f4 | 500 | 72.0061 / 72.1437 / 72.1569 / 0.2669 | 0.000105 / 0.003645 / 0.001254 |
| f5 | 200 | −0.9631 / −1.0007 / −0.9880 / 0.0135 | 0.000019 / 0.001425 / 0.000753 |
| f5 | 300 | −0.9529 / −0.9748 / −0.9886 / 0.0132 | 0.000145 / 0.001444 / 0.000803 |
| f5 | 400 | −0.9922 / −1.0007 / −0.9958 / 0.0056 | 0.000161 / 0.001220 / 0.000825 |
| f5 | 500 | −0.9836 / −1.0016 / −0.9959 / 0.0062 | 0.000391 / 0.001286 / 0.000811 |
| f6 | 200 | −5.9986 / −6.0022 / −5.9997 / 0.0021 | 0.000000 / 0.002314 / 0.001449 |
| f6 | 300 | −5.9997 / −6.0021 / −6.0000 / 0.0021 | 0.000000 / 0.002087 / 0.001385 |
| f6 | 400 | −5.9987 / −5.9937 / −5.9986 / 0.0075 | 0.000059 / 0.001853 / 0.001335 |
| f6 | 500 | −5.9970 / −5.9996 / −6.0000 / 0.0022 | 0.000000 / 0.001559 / 0.001283 |
| f7 | 200 | −4.2430 / −4.2434 / −4.2429 / 0.0004 | 0.001291 / 0.002162 / 0.001580 |
| f7 | 300 | −4.2428 / −4.2432 / −4.2429 / 0.0003 | 0.000561 / 0.001732 / 0.001457 |
| f7 | 400 | −4.2430 / −4.2432 / −4.2430 / 0.0003 | 0.000945 / 0.001691 / 0.001492 |
| f7 | 500 | −4.2431 / −4.2430 / −4.2428 / 0.0005 | 0.001318 / 0.001618 / 0.001492 |
| f8 | 200 | −8.9995 / −9.0002 / −8.9997 / 0.0010 | 0.000000 / 0.002099 / 0.001267 |
| f8 | 300 | −9.0000 / −9.0000 / −9.0000 / 0.0001 | 0.000071 / 0.004253 / 0.001560 |
| f8 | 400 | −8.9999 / −8.9996 / −9.0000 / 0.0002 | 0.000000 / 0.001590 / 0.001291 |
| f8 | 500 | −9.0000 / −9.0002 / −9.0000 / 0.0002 | 0.000588 / 0.001881 / 0.001349 |
| f9 | 200 | −3.9995 / −3.9999 / −3.9989 / 0.0027 | 0.000000 / 0.001817 / 0.000887 |
| f9 | 300 | −3.9984 / −4.0003 / −3.9954 / 0.0144 | 0.000000 / 0.001588 / 0.000773 |
| f9 | 400 | −3.9995 / −4.0001 / −4.0001 / 0.0005 | 0.000000 / 0.001541 / 0.000904 |
| f9 | 500 | −3.9997 / −3.9981 / −4.0002 / 0.0005 | 0.000000 / 0.001491 / 0.000936 |
| f10 | 200 | −1.0000 / −1.0000 / −1.0000 / 0.0000 | 0.000000 / 0.000000 / 0.000000 |
| f10 | 300 | −1.0000 / −1.0000 / −1.0000 / 0.0000 | 0.000000 / 0.000000 / 0.000000 |
| f10 | 400 | −1.0000 / −1.0000 / −1.0000 / 0.0000 | 0.000000 / 0.000000 / 0.000000 |
| f10 | 500 | −1.0000 / −1.0000 / −1.0000 / 0.0000 | 0.000000 / 0.000000 / 0.000000 |
Table 5. The error and time consumption of 30 runs.

| Benchmark | Popsize | Error (Min. / Max. / Mean) | Time (s) (Min. / Max. / Mean) |
|-----------|---------|----------------------------|-------------------------------|
| f1 | 200 | 0.0109 / 5.7494 / 0.1952 | 0.73 / 0.79 / 0.76 |
| f1 | 300 | 0.0052 / 0.0038 / 0.0042 | 1.56 / 5.32 / 4.73 |
| f1 | 400 | 0.0016 / 0.0034 / 0.0037 | 2.75 / 9.40 / 5.86 |
| f1 | 500 | 0.0065 / 0.0027 / 0.0037 | 4.19 / 14.36 / 6.35 |
| f2 | 200 | 0.0008 / 0.0011 / 0.0023 | 0.73 / 2.53 / 2.31 |
| f2 | 300 | 0.0002 / 0.0018 / 0.0018 | 1.53 / 1.66 / 1.60 |
| f2 | 400 | 0.0015 / 0.0017 / 0.0047 | 4.85 / 9.17 / 8.97 |
| f2 | 500 | 0.0006 / 0.0012 / 0.0018 | 4.34 / 14.66 / 13.24 |
| f3 | 200 | 0.0000 / 0.0000 / 0.0000 | 0.76 / 0.83 / 0.78 |
| f3 | 300 | 0.0000 / 0.0010 / 0.0022 | 1.59 / 1.72 / 1.64 |
| f3 | 400 | 0.0000 / 0.0000 / 0.0000 | 2.79 / 2.93 / 2.87 |
| f3 | 500 | 0.0000 / 0.0000 / 0.0000 | 4.27 / 4.41 / 4.33 |
| f4 | 200 | 0.0107 / 0.1093 / 0.4097 | 0.73 / 0.88 / 0.80 |
| f4 | 300 | 0.1234 / 0.2529 / 0.0863 | 1.42 / 1.73 / 1.64 |
| f4 | 400 | 0.0028 / 0.0026 / 0.1040 | 2.68 / 2.91 / 2.84 |
| f4 | 500 | 0.0061 / 0.1437 / 0.1571 | 4.12 / 4.47 / 4.35 |
| f5 | 200 | 0.0369 / 0.0007 / 0.0124 | 0.98 / 3.39 / 3.15 |
| f5 | 300 | 0.0471 / 0.0252 / 0.0115 | 1.89 / 2.04 / 1.98 |
| f5 | 400 | 0.0078 / 0.0007 / 0.0047 | 3.18 / 3.38 / 3.27 |
| f5 | 500 | 0.0164 / 0.0016 / 0.0051 | 4.82 / 5.02 / 4.88 |
| f6 | 200 | 0.0014 / 0.0022 / 0.0018 | 0.69 / 0.76 / 0.73 |
| f6 | 300 | 0.0003 / 0.0021 / 0.0015 | 1.52 / 1.64 / 1.59 |
| f6 | 400 | 0.0013 / 0.0063 / 0.0029 | 2.75 / 2.89 / 2.82 |
| f6 | 500 | 0.0030 / 0.0004 / 0.0015 | 4.52 / 14.62 / 13.84 |
| f7 | 200 | 0.0003 / 0.0007 / 0.0004 | 0.70 / 0.76 / 0.72 |
| f7 | 300 | 0.0002 / 0.0006 / 0.0004 | 1.53 / 1.62 / 1.58 |
| f7 | 400 | 0.0003 / 0.0006 / 0.0004 | 2.68 / 2.84 / 2.78 |
| f7 | 500 | 0.0004 / 0.0004 / 0.0005 | 4.21 / 4.37 / 4.27 |
| f8 | 200 | 0.0005 / 0.0002 / 0.0004 | 0.72 / 0.79 / 0.75 |
| f8 | 300 | 0.0000 / 0.0000 / 0.0001 | 1.53 / 1.60 / 1.57 |
| f8 | 400 | 0.0001 / 0.0004 / 0.0001 | 2.69 / 2.80 / 2.74 |
| f8 | 500 | 0.0000 / 0.0002 / 0.0001 | 4.14 / 4.23 / 4.19 |
| f9 | 200 | 0.0005 / 0.0001 / 0.0014 | 0.75 / 0.83 / 0.77 |
| f9 | 300 | 0.0016 / 0.0003 / 0.0049 | 1.60 / 5.49 / 2.33 |
| f9 | 400 | 0.0005 / 0.0001 / 0.0003 | 2.75 / 3.12 / 2.86 |
| f9 | 500 | 0.0003 / 0.0019 / 0.0004 | 4.20 / 4.65 / 4.29 |
| f10 | 200 | 0.0000 / 0.0000 / 0.0000 | 0.82 / 0.88 / 0.84 |
| f10 | 300 | 0.0000 / 0.0000 / 0.0000 | 1.69 / 1.79 / 1.74 |
| f10 | 400 | 0.0000 / 0.0000 / 0.0000 | 2.90 / 3.04 / 2.98 |
| f10 | 500 | 0.0000 / 0.0000 / 0.0000 | 4.45 / 4.68 / 4.54 |
Table 6. Decision variables corresponding to the best and worst results.

| Benchmark | Popsize | Theoretical | Best | Worst |
|-----------|---------|-------------|------|-------|
| f1 | 300 | $(1, 1)$ | (1.0344, 0.9670) | (0.9883, 1.0097) |
| f2 | 200 | $(\pm\sqrt{2}/2,\ 1/2)$ | (0.6951, 0.4843) | (0.6848, 0.4709) |
| f3 | 200 | $(2 - 2\sqrt{3}/3,\ 8/3)$ | (0.8453, 2.6668) | (0.8453, 2.6678) |
| f4 | 400 | $(16, 8)$ | (15.8377, 8.08197) | (16.0897, 7.9552) |
| f5 | 400 | $(1/\sqrt{10}, \ldots, 1/\sqrt{10})$ | (0.3201, 0.3166, 0.3087, 0.3062, 0.3215, 0.3106, 0.3394, 0.3159, 0.3162, 0.3060) | (0.3296, 0.3190, 0.3191, 0.3169, 0.3196, 0.3032, 0.3074, 0.3186, 0.3214, 0.3087) |
| f6 | 200 | $(-1, -7)$ | (−0.9926, −6.9839) | (−1.0123, −7.0268) |
| f7 | 200 | $(6, 3)$ | (5.9562, 3.0226) | (6.0283, 2.9869) |
| f8 | 200 | $(-3, -6)$ | (−3.0421, −5.9574) | (−3.0039, −5.9963) |
| f9 | 200 | $(2, 2, 1)$ | (1.9673, 2.0035, 1.0147) | (1.9651, 2.0510, 0.9925) |
| f10 | 200 | $(5, 5, 5)$ | (5.0000, 5.0000, 5.0000) | (5.0018, 5.0000, 4.9999) |
Table 7. The objective and constraint violation of 30 runs.

| Benchmark | Iterations | Popsize | Objective (Best / Worst / Mean / SD) | Constraint Violation (Min. / Max. / Mean) |
|-----------|------------|---------|--------------------------------------|-------------------------------------------|
| f1 | 100 | 400 | 2.0782 / 1.9801 / 2.0711 / 0.1137 | 0.000381 / 0.009997 / 0.006635 |
| f1 | 100 | 500 | 2.0105 / 1.9801 / 2.0012 / 0.0313 | 0.000806 / 0.010000 / 0.007284 |
| f2 | 100 | 100 | 0.7664 / 0.7402 / 0.7480 / 0.0114 | 0.001078 / 0.009997 / 0.007357 |
| f2 | 100 | 200 | 0.7503 / 0.7400 / 0.7415 / 0.0022 | 0.005613 / 0.010000 / 0.009244 |
| f3 | 100 | 400 | −6.1386 / −5.9414 / −6.1385 / 0.0451 | 0.000119 / 0.009369 / 0.004352 |
| f3 | 100 | 500 | −6.1584 / −6.1579 / −6.1550 / 0.0106 | 0.000383 / 0.009973 / 0.004198 |
| f4 | 100 | 400 | 72.0317 / 72.0023 / 73.1233 / 2.2774 | 0.000027 / 0.009801 / 0.005178 |
| f4 | 100 | 500 | 72.0039 / 71.9875 / 72.6433 / 1.2700 | 0.000407 / 0.009996 / 0.005583 |
| f5 | 400 | 100 | −0.8991 / −0.8935 / −0.9653 / 0.0773 | 0.003545 / 0.009922 / 0.008490 |
| f5 | 400 | 500 | −1.0024 / −1.0313 / −1.0161 / 0.0125 | 0.006404 / 0.009994 / 0.009385 |
| f6 | 100 | 100 | −5.9845 / −6.0100 / −6.0046 / 0.0109 | 0.003496 / 0.010000 / 0.009075 |
| f7 | 100 | 100 | −4.2408 / −4.2462 / −4.2450 / 0.0015 | 0.003126 / 0.009999 / 0.008755 |
| f8 | 100 | 100 | −8.9998 / −9.0008 / −8.9935 / 0.0141 | 0.000000 / 0.010000 / 0.005242 |
| f9 | 100 | 100 | −3.9930 / −4.0043 / −3.9712 / 0.0522 | 0.000000 / 0.009956 / 0.003277 |
| f10 | 100 | 100 | −1.0000 / −1.0000 / −1.0000 / 0.0000 | 0.000000 / 0.000000 / 0.000000 |
Table 8. The error and time consumption of 30 runs.

| Benchmark | Iterations | Popsize | Error (Min. / Max. / Mean) | Time (s) (Min. / Max. / Mean) |
|-----------|------------|---------|----------------------------|-------------------------------|
| f1 | 100 | 400 | 0.0782 / 0.0199 / 0.0820 | 0.26 / 0.40 / 0.30 |
| f1 | 100 | 500 | 0.0105 / 0.0199 / 0.0181 | 0.34 / 0.51 / 0.38 |
| f2 | 100 | 100 | 0.0164 / 0.0098 / 0.0081 | 0.05 / 0.06 / 0.05 |
| f2 | 100 | 200 | 0.0003 / 0.0100 / 0.0085 | 0.13 / 0.41 / 0.20 |
| f3 | 100 | 400 | 0.0198 / 0.2170 / 0.0199 | 0.27 / 0.35 / 0.30 |
| f3 | 100 | 500 | 0.0000 / 0.0005 / 0.0034 | 0.36 / 0.46 / 0.40 |
| f4 | 100 | 400 | 0.0317 / 0.0023 / 1.1244 | 0.24 / 0.34 / 0.28 |
| f4 | 100 | 500 | 0.0039 / 0.0125 / 0.6466 | 0.32 / 0.49 / 0.36 |
| f5 | 400 | 100 | 0.1009 / 0.1065 / 0.0465 | 0.51 / 0.61 / 0.53 |
| f5 | 400 | 500 | 0.0024 / 0.0313 / 0.0178 | 0.64 / 0.70 / 0.66 |
| f6 | 100 | 100 | 0.0155 / 0.0100 / 0.0097 | 0.05 / 0.06 / 0.05 |
| f7 | 100 | 100 | 0.0018 / 0.0035 / 0.0026 | 0.05 / 0.07 / 0.05 |
| f8 | 100 | 100 | 0.0002 / 0.0008 / 0.0071 | 0.05 / 0.06 / 0.05 |
| f9 | 100 | 100 | 0.0070 / 0.0043 / 0.0296 | 0.06 / 0.07 / 0.06 |
| f10 | 100 | 100 | 0.0000 / 0.0000 / 0.0000 | 0.07 / 0.08 / 0.07 |
Table 9. Decision variables corresponding to the best and worst results.

| Benchmark | Iterations | Popsize | Theoretical | Best | Worst |
|-----------|------------|---------|-------------|------|-------|
| f1 | 100 | 500 | $(1, 1)$ | (0.9336, 1.0672) | (0.9962, 0.9938) |
| f2 | 100 | 200 | $(\pm\sqrt{2}/2,\ 1/2)$ | (−0.6461, 0.4231) | (−0.7005, 0.5008) |
| f3 | 100 | 500 | $(2 - 2\sqrt{3}/3,\ 8/3)$ | (0.8439, 2.6631) | (0.8539, 2.6765) |
| f4 | 100 | 500 | $(16, 8)$ | (15.8125, 8.0948) | (16.0150, 7.9919) |
| f5 | 500 | 100 | $(1/\sqrt{10}, \ldots, 1/\sqrt{10})$ | (0.3082, 0.3448, 0.3036, 0.3458, 0.3269, 0.3139, 0.2875, 0.3058, 0.3135, 0.3177) | (0.3412, 0.3044, 0.3131, 0.3162, 0.2984, 0.3005, 0.3215, 0.3163, 0.3382, 0.3253) |
| f6 | 100 | 100 | $(-1, -7)$ | (−0.8622, −6.7280) | (−0.9978, −7.0056) |
| f7 | 100 | 100 | $(6, 3)$ | (5.7796, 3.1118) | (6.0030, 3.0035) |
| f8 | 100 | 100 | $(-3, -6)$ | (−3.0098, −5.9900) | (−3.0095, −5.9913) |
| f9 | 100 | 100 | $(2, 2, 1)$ | (2.0736, 2.0599, 0.9348) | (2.0224, 2.0225, 0.9790) |
| f10 | 100 | 100 | $(5, 5, 5)$ | (5.0000, 5.0000, 5.0000) | (5.0000, 5.0000, 5.0000) |
Table 10. The objective and constraint violation of 30 runs.

| Algorithm | Iterations | Popsize | Objective (Best / Worst / Mean / SD) | Constraint Violation (Min. / Max. / Mean) |
|-----------|------------|---------|--------------------------------------|-------------------------------------------|
| IDPGA | 100 | 100 | 1.3440 / 1.3343 / 1.3391 / 0.0050 | 0.000000 / 0.009885 / 0.006631 |
| IDPGA | 100 | 200 | 1.3344 / 1.3338 / 1.3335 / 0.0012 | 0.007149 / 0.009999 / 0.009638 |
| SAGA | 100 | 200 | 1.3429 / 1.6453 / 1.4892 / 0.1757 | 0.000000 / 0.000924 / 0.000122 |
| SAGA | 100 | 300 | 1.3460 / 1.3476 / 1.4436 / 0.1011 | 0.000000 / 0.000737 / 0.000087 |
| SAGA | 100 | 400 | 1.3398 / 1.3413 / 1.3742 / 0.0450 | 0.000000 / 0.000988 / 0.000208 |
| SAGA | 100 | 500 | 1.3408 / 1.3419 / 1.3669 / 0.0321 | 0.000000 / 0.000911 / 0.000205 |
Table 11. The time consumption of 30 runs.

| Algorithm | Iterations | Popsize | Time (s) (Min. / Max. / Mean) |
|-----------|------------|---------|-------------------------------|
| IDPGA | 100 | 100 | 0.08 / 0.14 / 0.09 |
| IDPGA | 100 | 200 | 0.16 / 0.17 / 0.17 |
| SAGA | 100 | 200 | 0.87 / 1.31 / 1.05 |
| SAGA | 100 | 300 | 1.83 / 2.80 / 2.25 |
| SAGA | 100 | 400 | 3.09 / 4.01 / 3.46 |
| SAGA | 100 | 500 | 4.52 / 5.52 / 4.95 |
Table 12. The best objective of 8 algorithms.

| Algorithm | Value | Algorithm | Value | Algorithm | Value |
|-----------|-------|-----------|-------|-----------|-------|
| SOS | 1.33996 | GCA-I | 1.3400 | PSO-DE | 1.339957 |
| CS | 1.33999 | GCA-II | 1.3400 | AHA | 1.339957 |
| MMA | 1.3400 | MFO | 3.399880 | | |