Article

Robust Optimization with Interval Uncertainties Using Hybrid State Transition Algorithm

School of Automation, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(14), 3035; https://doi.org/10.3390/electronics12143035
Submission received: 8 June 2023 / Revised: 28 June 2023 / Accepted: 6 July 2023 / Published: 11 July 2023
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)

Abstract
Robust optimization is concerned with finding an optimal solution that is insensitive to uncertainties and has been widely used in solving real-world optimization problems. However, most robust optimization methods suffer from high computational costs and poor convergence. To alleviate the above problems, an improved robust optimization algorithm is proposed. First, to reduce the computational cost, the second-order Taylor series surrogate model is used to approximate the robustness indices. Second, to strengthen the convergence, the state transition algorithm is studied to explore the whole search space for candidate solutions, while sequential quadratic programming is adopted to exploit the local area. Third, to balance the robustness and optimality of candidate solutions, a preference-based selection mechanism is investigated which effectively determines the promising solution. The proposed robust optimization method is applied to obtain the optimal solutions of seven examples that are subject to decision variables and parameter uncertainties. Comparative studies with other robust optimization algorithms (robust genetic algorithm, Kriging metamodel-assisted robust optimization method, etc.) show that the proposed method can obtain accurate and robust solutions with less computational cost.

1. Introduction

Real-world optimization problems are subject to uncertainties due to, for example, the presence of uncontrolled changes in environmental conditions [1,2], the lack of complete knowledge of models [3,4], and the manufacturing tolerances on actual processes [5]. According to the classification of optimization problems under uncertainties [6], deriving an optimal solution that is insensitive to uncertainties is defined as robust optimization (RO).
An optimal solution is robust if, when uncertainties exist, the values of the corresponding objective functions and constraint functions fluctuate within acceptable ranges [7]. By adjusting the decision variables to counteract the effects of uncertainties, the performance of the optimal solution can be improved, and degradations of the objective or constraint function values can be avoided. In general, there are two ways to describe the uncertain parameters in an optimization problem: probabilistic models and nonprobabilistic models. Probabilistic uncertainties are customarily based on statistical information, such as the mean and variance, and they are usually handled by optimizing the expected value of the solution [8]. Probabilistic uncertainties can also be modeled by other methods, such as fitness approximation [9], multiobjective approaches [10,11], and sampling-based methods [12]. Since it is difficult to obtain accurate probability distribution information for uncertain parameters, and the resulting optimal solution can hardly guarantee complete robustness, the applications of probabilistic robust optimization methods have been limited. Nonprobabilistic methods are usually based on interval uncertainty modeling [13,14], evidence theory [15], and possibility theory [16]. In this paper, the uncertainties are modeled as intervals that can be obtained without a presumed probability distribution. Such interval uncertainty can be analyzed by the worst-case scenario [17], an approach that has received considerable attention in robust optimization [18]. The main idea of the worst-case scenario is to seek the best worst-case performance in the presence of uncertainty. In typical engineering design problems, when information such as manufacturing tolerance specifications, operating ranges, nominal operating points, and historical data is known, it is not too difficult to determine the bounds of the uncertain parameters [19].
Hence, robust optimization with interval uncertainty has been applied to a range of engineering design and systems control problems [20].
When using worst-case analysis to solve the robust optimization problem with interval uncertainty, it almost always involves a min–max problem of a nested double loop optimization structure [21,22]. Since the inner optimization is performed iteratively for every candidate evaluation in the outer optimization, the computational efficiency is one of the most critical concerns [23,24]. For inner optimization, the most common way to improve computational efficiency is to find a surrogate model that approximates the original functions. Chen et al. [25] and Zhou et al. [26] used the Taylor series expansion to analyze the robustness of each candidate. Rehman et al. [21] proposed an efficient global optimization method based on Kriging interpolation to reduce the function evaluations. In [27], a modified Benders decomposition method was applied to a variety of robust optimization problems. Note that if the approximate accuracy of the surrogate model is low, then the robustness evaluated by inner optimization may be inaccurate, leading to the incorrect result that the solution fails to meet the robustness requirement. In order to reduce the computational complexity while improving the calculation accuracy for robust optimization, it is necessary to find an efficient inner optimization method. The function of outer optimization is to find promising candidate solutions and choose the best decision variables. The deterministic optimization methods based on gradient information [28] offer a fast convergence rate but the solution often converges to a locally optimal point. Stochastic optimization methods, such as genetic algorithm (GA) [29] and particle swarm optimization (PSO) [30], are well suited for global search [31,32], which can increase the probability of finding the global optimum through randomly searching for candidates. 
The state transition algorithm (STA) [33,34] is a stochastic optimization method consisting of four state transformation operators, each with a special searching function. The STA method has been shown to be capable of both local and global search with a stable convergence rate [35,36]. It thus appears that there is merit in investigating STA for the outer optimization of the robust problem.
The issues of computational efficiency and convergence rate have been an obstacle to robust optimization methods implementing an efficient and accurate search. To overcome these problems, a hybrid state transition algorithm for the robust optimization problem is proposed in this paper. The novelty and contribution of this method are three-fold: (1) to reduce the computational cost, the second-order Taylor series surrogate model is used to simplify the calculation of objective function values in the inner optimization, and a low-computational-cost method, sequential quadratic programming, is used in the outer optimization; (2) to strengthen the convergence, the outer optimization is conducted by the cooperation of the state transition algorithm and sequential quadratic programming, which not only avoids premature convergence but also improves the solution precision; and (3) to balance robustness and optimality, a selection mechanism is proposed, which evaluates the candidates based on their feasibility, robustness, and optimality [37]. Comparative experiments between the hybrid state transition algorithm and other robust optimization algorithms (robust genetic algorithm, Kriging metamodel-assisted robust optimization method, etc.) indicate that the method proposed in this paper has better performance with respect to both accuracy and efficiency.
The remainder of this paper is organized as follows: Section 2 introduces the background to the optimization problems, including the formulation of the robust optimization problem with interval uncertainty, the Taylor series surrogate model, and the state transition algorithm. The hybrid state transition algorithm is derived in Section 3. Section 4 analyzes the robust performance of the proposed method based on a comparative study of eight algorithms on seven examples. Section 5 concludes this paper and discusses the directions of future research.

2. Background and Terminology

2.1. Robust Optimization Problem

In general, a deterministic optimization problem can be defined as follows:
$$\min_{x \in \chi} f(x, p) \quad \text{s.t.} \quad g_i(x, p) \le 0, \;\; i = 1, \ldots, n, \qquad x_l \le x \le x_u, \tag{1}$$
where $f(\cdot)$ is the objective function, $g_i(\cdot)$ are the constraint functions, and n is the number of constraints. The vector x is the decision variable, whose lower and upper bounds are $x_l$ and $x_u$, respectively, and p is the parameter of the problem.
In optimization problems, uncertainties can be involved in both decision variables and parameters. Thus, the formulation of the optimization problem under interval uncertainty is given as
$$\min_{[x] \subseteq \chi} f([x], [p]) \quad \text{s.t.} \quad g_i([x], [p]) \le 0, \;\; i = 1, \ldots, n, \qquad x_l \le [x] \le x_u, \tag{2}$$
where $[x]$ and $[p]$ are interval numbers corresponding to the uncertain decision variables and uncertain parameters, respectively. They can be expressed as
$$[x] = [x^c + \underline{\Delta x},\; x^c + \overline{\Delta x}], \qquad [p] = [p^c + \underline{\Delta p},\; p^c + \overline{\Delta p}], \tag{3}$$
where $x^c$ and $p^c$ are the nominal values of x and p, respectively, with $\underline{\Delta x}$ and $\overline{\Delta x}$ being the lower and upper bounds of the decision variable (x) variation, and $\underline{\Delta p}$ and $\overline{\Delta p}$ being the lower and upper bounds of the parameter (p) variation. For simplicity, it is usually assumed that the nominal value is the central value of the variation range, which implies that $\underline{\Delta x} = -\overline{\Delta x}$ and $\underline{\Delta p} = -\overline{\Delta p}$.
For evaluating the solutions of the robust optimization problem in (2), three indexes are introduced:
  • Objective robustness: This index, denoted as $\eta_f$, is a measure of the sensitivity of the objective function to uncertainties. When decision variables and/or parameters fluctuate in their uncertain intervals, the objective function variations should still lie within an acceptable range. In engineering problems, the acceptable range of the objective function is usually defined by decision makers according to design requirements.
  • Feasibility robustness: This index, denoted as $\eta_g$, is a measure of the sensitivity of the constraints to uncertainties. When decision variables and/or parameters fluctuate in their uncertain intervals, the constraints should still be satisfied.
  • Optimality: This index, represented as f, is the objective function value. For a deterministic optimization problem, the optimum should be the solution with the best objective value.
Based on the above indexes, the optimization problem in (2) can be reformulated as follows:
$$\begin{aligned} &\min_{x^c \in \chi} f(x^c, p^c) \\ &\text{s.t.} \;\; g_i(x^c, p^c) \le 0, \;\; i = 1, \ldots, n, \\ &\qquad \eta_f - \Delta f_0 \le 0, \\ &\qquad \eta_g \le 0, \\ \text{where} \;\; &\eta_f = \max_{x \in [x],\, p \in [p]} \left| f(x, p) - f(x^c, p^c) \right|, \\ &\eta_g = \max\Big\{ \max_{x \in [x],\, p \in [p]} g_i(x, p), \;\; i = 1, \ldots, n \Big\}, \\ &[x] = [x^c + \underline{\Delta x},\; x^c + \overline{\Delta x}], \qquad [p] = [p^c + \underline{\Delta p},\; p^c + \overline{\Delta p}], \\ &x_l \le [x] \le x_u, \end{aligned} \tag{4}$$
where $\Delta f_0$ denotes the acceptable variation range of the objective function.
The above formulation of the robust optimization problem contains a nested double-loop optimization structure: the outer optimization searches for promising nominal values of the decision variables, while the inner optimization verifies the robustness of candidate solutions. This nested double-loop structure incurs high computational costs. Thus, this paper proposes a robust optimization method that solves the computationally costly optimization problem using (i) the state transition algorithm with sequential quadratic programming, called the hybrid state transition algorithm (H-STA), to solve the outer optimization problem, and (ii) the second-order Taylor series expansion to estimate the robustness indexes for the inner optimization. In the sections that follow, the two techniques of the proposed RO method are further discussed.
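To make the nested structure concrete, the following minimal Python sketch solves a toy robust problem of our own construction (the function, bounds, uncertainty radii, and penalty weight are illustrative, not from the paper): the inner loop estimates the worst case over the uncertainty box by brute-force sampling, and the outer loop searches the nominal value while penalizing robustness violations.

```python
import numpy as np
from scipy.optimize import minimize

# Toy robust problem (illustrative, not from the paper):
#   min f(x, p) = (x - 2)^2 + p * x   s.t.  g(x, p) = 1 - x <= 0
# with nominal p_c = 1, |dx| <= 0.1, |dp| <= 0.05, and Delta f_0 = 0.5.
f = lambda x, p: (x - 2.0) ** 2 + p * x
g = lambda x, p: 1.0 - x
p_c, dx_max, dp_max, df0 = 1.0, 0.1, 0.05, 0.5

def inner(x_c):
    """Inner loop: worst-case robustness indexes over the uncertainty box,
    estimated here by brute-force sampling for clarity."""
    dxs = np.linspace(-dx_max, dx_max, 21)
    dps = np.linspace(-dp_max, dp_max, 21)
    eta_f = max(abs(f(x_c + a, p_c + b) - f(x_c, p_c)) for a in dxs for b in dps)
    eta_g = max(g(x_c + a, p_c + b) for a in dxs for b in dps)
    return eta_f, eta_g

def outer():
    """Outer loop: search the nominal x_c, penalizing robustness violations."""
    def cost(xc):
        eta_f, eta_g = inner(xc[0])               # nested inner optimization
        return f(xc[0], p_c) + 1e3 * (max(0.0, eta_f - df0) + max(0.0, eta_g))
    return minimize(cost, x0=[3.0], bounds=[(1.0, 5.0)]).x[0]

x_robust = outer()                # near the deterministic optimum x = 1.5
eta_f, eta_g = inner(x_robust)    # both robustness indexes are satisfied here
```

Every outer cost evaluation triggers a full inner sweep, which is exactly the expense the Taylor surrogate of Section 2.2 is designed to avoid.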

2.2. Taylor Series Surrogate Model

In general, the inner optimization is performed iteratively for every candidate solution in the outer optimization. To reduce the computational cost, the procedure for inner optimization should be as simple as possible. In the proposed RO method, a second-order Taylor series surrogate model is used to approximate the objective function and calculate the extreme points for inner optimization.
Based on Taylor's theorem, a multivariable function $f(x, p)$ can be expanded by the Taylor series around $(x, p) = (x^c, p^c)$:
$$f(x, p) = f(x^c, p^c) + \frac{\partial f(x^c, p^c)}{\partial x}(x - x^c) + \frac{\partial f(x^c, p^c)}{\partial p}(p - p^c) + \frac{1}{2!}\frac{\partial^2 f(x^c, p^c)}{\partial x^2}(x - x^c)^2 + \frac{\partial^2 f(x^c, p^c)}{\partial x \partial p}(x - x^c)(p - p^c) + \frac{1}{2!}\frac{\partial^2 f(x^c, p^c)}{\partial p^2}(p - p^c)^2 + \cdots. \tag{5}$$
Let
$$x - x^c = \Delta x, \qquad p - p^c = \Delta p, \tag{6}$$
then (5) can be written as
$$f(x, p) = f(x^c, p^c) + \frac{\partial f(x^c, p^c)}{\partial x}\Delta x + \frac{\partial f(x^c, p^c)}{\partial p}\Delta p + \frac{1}{2!}\frac{\partial^2 f(x^c, p^c)}{\partial x^2}\Delta x^2 + \frac{\partial^2 f(x^c, p^c)}{\partial x \partial p}\Delta x \Delta p + \frac{1}{2!}\frac{\partial^2 f(x^c, p^c)}{\partial p^2}\Delta p^2 + \cdots. \tag{7}$$
The remainder term of the second-order Taylor expansion of f ( x , p ) around point ( x c , p c ) can be written as follows:
$$R(\Delta x, \Delta p) = \frac{1}{3!}\left(\Delta x \frac{\partial}{\partial x} + \Delta p \frac{\partial}{\partial p}\right)^3 f(x^c + \theta \Delta x,\; p^c + \theta \Delta p), \tag{8}$$
where $\theta \in (0, 1)$.
It is worth noting that $R(\Delta x, \Delta p)$ is a cubic polynomial in $\Delta x$ and $\Delta p$, and the values of $\Delta x$ and $\Delta p$ are usually small. Meanwhile, the higher-order derivatives have relatively small values compared to the lower-order derivatives [38]. These two points make $R(\Delta x, \Delta p)$ a tiny value and guarantee the accuracy of the second-order Taylor surrogate model.
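As a quick numerical illustration of this accuracy claim, the snippet below (with an example function of our own choosing, not from the paper) shows the second-order expansion reproducing the exact value to roughly $10^{-5}$ for perturbations of 0.05.

```python
import math

# Example function of our own (not from the paper): f(x, p) = exp(x) * sin(p)
x_c, p_c = 0.5, 0.3
f = lambda x, p: math.exp(x) * math.sin(p)

# Analytic partial derivatives at (x_c, p_c)
fx  = math.exp(x_c) * math.sin(p_c)    # df/dx
fp  = math.exp(x_c) * math.cos(p_c)    # df/dp
fxx = fx                               # d2f/dx2  = exp(x) sin(p)
fxp = fp                               # d2f/dxdp = exp(x) cos(p)
fpp = -fx                              # d2f/dp2  = -exp(x) sin(p)

def taylor2(dx, dp):
    """Second-order Taylor approximation around (x_c, p_c)."""
    return (f(x_c, p_c) + fx * dx + fp * dp
            + 0.5 * fxx * dx ** 2 + fxp * dx * dp + 0.5 * fpp * dp ** 2)

dx, dp = 0.05, 0.05
remainder = abs(f(x_c + dx, p_c + dp) - taylor2(dx, dp))
# the remainder is third order in (dx, dp): on the order of 1e-5 here
```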
For solving the maximization problem in (4), the second-order Taylor series is adopted and the inner optimization problem can be transformed to
$$\eta_f = \max_{\Delta x, \Delta p} \left| f(x^c + \Delta x,\; p^c + \Delta p) - f(x^c, p^c) \right| \approx \max_{\Delta x, \Delta p} \left| a_1 \Delta x + b_1 \Delta p + c_1 \Delta x^2 + d_1 \Delta x \Delta p + e_1 \Delta p^2 \right|, \tag{9}$$
where
$$a_1 = \frac{\partial f(x^c, p^c)}{\partial x}, \quad b_1 = \frac{\partial f(x^c, p^c)}{\partial p}, \quad c_1 = \frac{1}{2!}\frac{\partial^2 f(x^c, p^c)}{\partial x^2}, \quad d_1 = \frac{\partial^2 f(x^c, p^c)}{\partial x \partial p}, \quad e_1 = \frac{1}{2!}\frac{\partial^2 f(x^c, p^c)}{\partial p^2}, \quad \Delta x \in [\underline{\Delta x}, \overline{\Delta x}], \quad \Delta p \in [\underline{\Delta p}, \overline{\Delta p}]. \tag{10}$$
Similarly, the feasibility robustness index can be transformed to
$$\eta_g = \max\Big\{ \max_{\Delta x, \Delta p} g_i(x^c + \Delta x,\; p^c + \Delta p) \Big\} \approx \max\Big\{ \max_{\Delta x, \Delta p} a_{2i} \Delta x + b_{2i} \Delta p + c_{2i} \Delta x^2 + d_{2i} \Delta x \Delta p + e_{2i} \Delta p^2 + h_i \Big\}, \tag{11}$$
where
$$a_{2i} = \frac{\partial g_i(x^c, p^c)}{\partial x}, \quad b_{2i} = \frac{\partial g_i(x^c, p^c)}{\partial p}, \quad c_{2i} = \frac{1}{2!}\frac{\partial^2 g_i(x^c, p^c)}{\partial x^2}, \quad d_{2i} = \frac{\partial^2 g_i(x^c, p^c)}{\partial x \partial p}, \quad e_{2i} = \frac{1}{2!}\frac{\partial^2 g_i(x^c, p^c)}{\partial p^2}, \quad h_i = g_i(x^c, p^c), \quad \Delta x \in [\underline{\Delta x}, \overline{\Delta x}], \quad \Delta p \in [\underline{\Delta p}, \overline{\Delta p}]. \tag{12}$$
In (9) and (11), the extreme points can be computed analytically from the quadratic surrogates, and the maximum is calculated by back-substituting them into the original functions in (4). With only quadratic functions to evaluate, the computational cost of the inner optimization problem is reduced.
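This inner calculation can be sketched as follows for scalar x and p (our own illustrative implementation, not the authors' code): the Taylor coefficients are estimated by finite differences, the surrogate's extreme-point candidates are the box corners plus the interior stationary point of the quadratic, and the winner is back-substituted into the original function. Edge extrema of the box are omitted for brevity.

```python
import numpy as np

def eta_f_surrogate(f, x_c, p_c, dx_max, dp_max, h=1e-5):
    """Estimate the objective-robustness index eta_f from a second-order
    Taylor surrogate of f around (x_c, p_c) over the box
    [-dx_max, dx_max] x [-dp_max, dp_max]. Scalar x, p; a sketch only."""
    f0 = f(x_c, p_c)
    # finite-difference estimates of the Taylor coefficients a1..e1
    a1 = (f(x_c + h, p_c) - f(x_c - h, p_c)) / (2 * h)
    b1 = (f(x_c, p_c + h) - f(x_c, p_c - h)) / (2 * h)
    c1 = (f(x_c + h, p_c) - 2 * f0 + f(x_c - h, p_c)) / (2 * h * h)
    d1 = (f(x_c + h, p_c + h) - f(x_c + h, p_c - h)
          - f(x_c - h, p_c + h) + f(x_c - h, p_c - h)) / (4 * h * h)
    e1 = (f(x_c, p_c + h) - 2 * f0 + f(x_c, p_c - h)) / (2 * h * h)

    q = lambda dx, dp: abs(a1 * dx + b1 * dp + c1 * dx ** 2
                           + d1 * dx * dp + e1 * dp ** 2)
    # candidates: the four box corners ...
    cands = [(sx * dx_max, sp * dp_max) for sx in (-1, 1) for sp in (-1, 1)]
    # ... plus the interior stationary point of the quadratic, if inside
    A = np.array([[2 * c1, d1], [d1, 2 * e1]])
    if abs(np.linalg.det(A)) > 1e-12:
        dx0, dp0 = np.linalg.solve(A, [-a1, -b1])
        if abs(dx0) <= dx_max and abs(dp0) <= dp_max:
            cands.append((dx0, dp0))
    # pick the surrogate's extreme point, then back-substitute into f itself
    dxs, dps = max(cands, key=lambda c: q(*c))
    return abs(f(x_c + dxs, p_c + dps) - f0)
```

For a function that is quadratic in x and linear in p, the surrogate is exact and the index matches the true worst-case deviation.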

2.3. State Transition Algorithm

For outer optimization problems, most metaheuristic methods have competitive performance. The state transition algorithm (STA) [34,39] is an intelligent optimization method based on the state-space representation in control theory. The unified form of the generation of solutions in the STA method can be described as follows:
$$x_{k+1} = A_k x_k + B_k u_k, \qquad y_{k+1} = f(x_{k+1}), \tag{13}$$
where $x_k$ represents a state, corresponding to a candidate solution of the problem; $u_k$ is a function of historical states; $A_k$ and $B_k$ stand for state transition matrices; and $y_{k+1}$ is the fitness value of the objective function f.
In the STA method, there are four state transformation operators that generate candidate solutions:
$$x_{k+1} = x_k + \alpha \frac{1}{n \, \|x_k\|_2} R_r x_k, \tag{14}$$
$$x_{k+1} = x_k + \beta R_t \frac{x_k - x_{k-1}}{\|x_k - x_{k-1}\|_2}, \tag{15}$$
$$x_{k+1} = x_k + \gamma R_e x_k, \tag{16}$$
$$x_{k+1} = x_k + \delta R_a x_k, \tag{17}$$
where (14)–(17) are the rotation transformation, translation transformation, expansion transformation, and axesion transformation, respectively. The parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ represent the transformation factors, and $R_r$, $R_t$, $R_e$, and $R_a$ are random matrices with specific elements. The rotation transformation is a local search operator, and the translation transformation performs a line search. The expansion transformation is used for global search, and the axesion transformation is designed to strengthen the single-dimensional search ability.
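A minimal NumPy sketch of the four operators follows (our own rendering of (14)–(17); the random-matrix element distributions are the ones commonly used in the continuous STA literature and should be treated as assumptions: $R_r$ uniform in $[-1, 1]$, $R_t$ a random scalar in $[0, 1]$, $R_e$ Gaussian diagonal, $R_a$ Gaussian on a single random axis).

```python
import numpy as np

rng = np.random.default_rng(0)

def rotation(x, alpha):
    """Local search inside a hypersphere of radius alpha, cf. (14)."""
    n = x.size
    Rr = 2 * rng.random((n, n)) - 1          # elements in [-1, 1]
    return x + alpha / (n * np.linalg.norm(x, 2)) * (Rr @ x)

def translation(x, x_old, beta):
    """Line search along the previous move direction, cf. (15)."""
    d = x - x_old
    Rt = rng.random()                        # random scalar in [0, 1]
    return x + beta * Rt * d / np.linalg.norm(d, 2)

def expansion(x, gamma):
    """Global search: Gaussian scaling of each entry, cf. (16)."""
    Re = np.diag(rng.standard_normal(x.size))
    return x + gamma * (Re @ x)

def axesion(x, delta):
    """Single-dimension search: perturb one randomly chosen axis, cf. (17)."""
    Ra = np.zeros((x.size, x.size))
    i = rng.integers(x.size)
    Ra[i, i] = rng.standard_normal()
    return x + delta * (Ra @ x)
```

A useful property of (14) is that the rotated candidate never leaves the hypersphere of radius $\alpha$ around $x_k$, which is what makes it a controllable local search operator.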
For a given solution, the aforementioned state transformation operators are performed alternately to generate candidate solutions. In general, these four operators in the STA method can find promising candidate solutions and converge to the global optimum. However, in a robust optimization problem, the solution is required not only to be a superior candidate under deterministic conditions but also to satisfy the robustness requirements. Thus, it is important to strengthen the search ability and to design an appropriate selection mechanism when solving a robust optimization problem. Since the STA method is a stochastic algorithm that does not use gradient information, its local search ability is restricted and the precision of the solution still needs improvement. In this paper, a hybrid state transition algorithm that combines the improved STA operator with a traditional local search procedure is proposed to address the robust optimization problem.

3. Hybrid State Transition Algorithm for Robust Optimization Problem

In the STA method, the expansion transformation, as the main global search operator, still requires further improvement to enlarge the range of the global search. In addition, the local search direction of the STA method is stochastic, which may lead to a slow convergence rate; thus, sequential quadratic programming (SQP) is used to exploit the local area and improve the precision of the solutions. In this paper, the hybrid state transition algorithm (H-STA) for robust optimization is proposed. It combines the improved STA and SQP so as to retain their advantages in global optimization while avoiding premature convergence. Moreover, in order to balance feasibility, robustness, and optimality, an efficient selection mechanism is proposed to evaluate candidate solutions and select the best one as the final result.

3.1. Exploration Stage-Improved STA

The first stage of the H-STA method for robust optimization problems is to explore the whole search space and find promising candidates. In the basic state transition algorithm, there are four different transformation operators, and the expansion transformation operator is the main global search operator. As shown in (16), the expansion transformation operator includes a Gaussian distribution matrix $R_e$, which means that it can, with some probability, generate elements anywhere in $(-\infty, +\infty)$. However, (16) also shows that the search range of the expansion transformation depends not only on the expansion factor ($\gamma$) and the mean and standard deviation of $R_e$, but also on $x_k$. Thus, if the value of $x_k$ is small, the search range will be small. For example, we set the initial point to $[10, 10]$ and $[1, 1]$ separately, with the lower and upper bounds of x set to $-10$ and 10, respectively; the parameter settings of the expansion transformation operator are the same as in previous papers [39], i.e., $\gamma = 1$, with the mean and standard deviation of $R_e$ equal to 0 and 1, respectively. In this study, we use the expansion transformation operator to generate 500 candidates, and if a candidate value is out of range, a random value within the range is selected as a substitute.
The performance of the expansion transformation is shown in Figure 1: Figure 1a shows that the expansion transformation operator can generate candidates across the search space when the initial point is $[10, 10]$. However, when the initial point is set to $[1, 1]$, the search range of the expansion transformation operator becomes narrow (see Figure 1b). Thus, the global search ability of the expansion transformation still requires further improvement.
One solution is to take into account the ranges of the decision variables in the expansion transformation:
$$x_{k+1} = x_k + \gamma R_e R_x, \tag{18}$$
where $R_x = (x_u - x_l)/2$ is the search radius of the decision variables, determined by their upper and lower bounds.
The pseudocode of the new expansion transformation operator is shown in Algorithm 2. We use the same parameter settings and initial points to perform the new expansion transformation operator, and the results are shown in Figure 2. They show that no matter how the initial point changes ($[10, 10]$ in Figure 2a and $[1, 1]$ in Figure 2b), the candidates generated by the new expansion transformation are always distributed throughout the entire search space.
The structure of the improved STA method is shown in Algorithm 1. Firstly, the parameters are predefined and the initial solution is generated randomly. The parameter SE is the search enforcement, meaning that every transformation operator generates SE candidate solutions. During the optimization process, the rotation factor $\alpha$ decreases exponentially from a maximum value ($\alpha_{\max}$) to a minimum value ($\alpha_{\min}$) with base fc. If the rotation factor $\alpha$ becomes less than $\alpha_{\min}$, it is reset to the maximum value $\alpha_{\max}$.
Algorithm 1 Pseudocode of improved STA method
Require:
  • iter_max: maximum number of iterations
  • SE: search enforcement
  • Best: initial solution
Ensure:
  • Best: optimal solution
 1: for k = 1 to iter_max do
 2:     if α < α_min then
 3:         α ← α_max
 4:     end if
 5:     newBest ← new_expansion(funfcn, Best, SE, …)
 6:     if Best ≠ newBest then
 7:         newBest ← translation(funfcn, Best, newBest, …)
 8:     end if
 9:     Best ← newBest
10:     newBest ← rotation(funfcn, Best, SE, …)
11:     if Best ≠ newBest then
12:         newBest ← translation(funfcn, Best, newBest, …)
13:     end if
14:     Best ← newBest
15:     newBest ← axesion(funfcn, Best, SE, …)
16:     if Best ≠ newBest then
17:         newBest ← translation(funfcn, Best, newBest, …)
18:     end if
19:     Best ← newBest
20:     α ← α · fc⁻¹
21: end for
22: return Best
Algorithm 2 Pseudocode of new expansion transformation operator
Require:
  • oldBest: the best solution from the last transformation
Ensure:
  • Best: optimal solution
1: State ← oldBest + γ R_e R_x
2: if ∃ i, j such that State(i, j) < x_l or State(i, j) > x_u then
3:     State(i, j) ← x_l + (x_u − x_l) · rand
4: end if
5: Best ← State
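The same operator in NumPy terms, as an illustrative sketch mirroring Algorithm 2 (the diagonal-Gaussian form of $R_e$ and the uniform resampling repair are as described above; array-based vectorization is our own choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def new_expansion(old_best, x_l, x_u, gamma=1.0):
    """New expansion transformation with bound repair (cf. Algorithm 2).
    'old_best', 'x_l', 'x_u' are 1-D float arrays of equal length."""
    R_x = (x_u - x_l) / 2.0                       # search radius per variable
    state = old_best + gamma * rng.standard_normal(old_best.size) * R_x
    out = (state < x_l) | (state > x_u)           # detect bound violations
    state[out] = x_l[out] + (x_u[out] - x_l[out]) * rng.random(int(out.sum()))
    return state
```

Because the perturbation is scaled by $R_x$ rather than by the current solution, the spread of candidates no longer shrinks when old_best lies close to the origin.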
Using the improved STA (I-STA) method, the four transformation operators can explore the whole space for neighborhoods that may contain the global optimal solution. Once the I-STA method converges to a promising candidate and remains there for several iterations, the SQP method is used to improve the precision of the solutions.

3.2. Exploitation Stage-SQP

The second stage of the H-STA method for robust optimization problems exploits the local area around a promising initial point. Sequential quadratic programming is effective for solving nonlinearly constrained optimization problems: its goal is to find a locally optimal solution, continuously improving the neighborhood of the current solution so as to reduce the value of the objective function as much as possible, which ensures its accuracy. As an iterative procedure, the SQP method transforms a nonlinear optimization problem into a quadratic programming subproblem, and by solving that subproblem, the solution converges to a local minimum. Thus, taking the promising candidate found by the STA method as the initial point, the SQP method exploits the neighborhood around that point and can improve the accuracy of the solution. The SQP structure [40] is shown in Algorithm 3.
Algorithm 3 Pseudocode of SQP method
Require:
  • k: number of iterations, initialized to k = 0
  • Best_0: the initial solution (final solution obtained by STA)
Ensure:
  • Best: optimal solution
 1: Approximate the problem with a linearly constrained quadratic program at Best_k
 2: Solve for the optimal step d_k
 3: if d_k = 0 then
 4:     Best ← Best_k
 5:     break
 6: else
 7:     Best_{k+1} ← Best_k + d_k
 8:     k ← k + 1
 9: end if
10: Go to Step 1
We now explain the search process of the SQP method. Based on the Lagrangian function and the quasi-Newton method, at each iteration the outer optimization problem in (4) can be approximated by the following quadratic programming subproblem:
$$\min_{x_c \in \chi} f(x_c^{(k+1)}) = f(x_c^{(k)} + d) \approx f(x_c^{(k)}) + \nabla f(x_c^{(k)})^T d + \frac{1}{2} d^T H_k d \quad \text{s.t.} \;\; g_i(x_c^{(k+1)}) = g_i(x_c^{(k)} + d) \approx g_i(x_c^{(k)}) + \nabla g_i(x_c^{(k)})^T d \le 0, \;\; i = 1, \ldots, n, \tag{19}$$
where $H_k$ is the Hessian matrix of $f(x)$ and d is the step length. For a given $x_c^{(k)}$, the functions $f(x_c^{(k)})$ and $g_i(x_c^{(k)})$ in (19) can be calculated. In order to find a desired step length, the subproblem can be rewritten as
$$\min_{d} \;\; \nabla f(x_c^{(k)})^T d + \frac{1}{2} d^T H_k d \quad \text{s.t.} \;\; \nabla g_i(x_c^{(k)})^T d \le 0, \;\; i = 1, \ldots, n. \tag{20}$$
By solving (20), the optimal step length is obtained. After the iterative calculation, the SQP method will stop when the optimal step length approaches 0.
Since the SQP method has a good exploitation ability and a low computation cost, it can effectively improve the accuracy of the solutions obtained by the I-STA method. Moreover, the I-STA method can generate a good initial solution for the SQP method, avoiding the possibility of falling into a local optimum due to a poor initial solution.
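For reference, SQP-type refinement of this kind is available off the shelf. The sketch below uses SciPy's SLSQP solver on a standard toy constrained problem (the problem and the stand-in starting point are our own illustrations, not from the paper):

```python
from scipy.optimize import minimize

# Toy problem of our own: min (x0 - 1)^2 + (x1 - 2.5)^2 subject to
# three linear inequality constraints and x >= 0.
fun = lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2
cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 2 * x[1] + 2},
        {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
        {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2}]

# 'x_sta' stands in for the promising candidate returned by the I-STA stage.
x_sta = [2.0, 0.0]
res = minimize(fun, x_sta, method='SLSQP',
               bounds=[(0, None), (0, None)], constraints=cons)
# res.x converges to the constrained local minimum near [1.4, 1.7]
```

Starting SLSQP from a point already screened by the global stage is precisely what shields the local solver from poor basins of attraction.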

3.3. Selection Mechanism

Since the robust optimization problem requires satisfying not only the constraints but also the robustness requirements, a selection mechanism is needed to balance the feasibility, robustness, and optimality of the solutions. To quantify the performance of the candidate solutions, two definitions are given as follows.
Definition 1 (Constraint violation). For the constraints $g_i(x, p) \le 0$, the value $G(x, p) = \sum_{i=1}^{n} \max\{0, g_i(x, p)\}$ indicates the relative constraint violation of solution x on all constraints.
For a solution x , if G ( x , p ) = 0 , then x is a feasible solution; otherwise, x is an infeasible solution, and the larger the value, the stronger the constraint violation.
Definition 2 (Robustness violation). For the robustness indexes $\eta_f(x, p)$ and $\eta_g(x, p)$, the value $R(x, p) = \max\{0, \eta_f(x, p) - \Delta f_0\} + \max\{0, \eta_g(x, p)\}$ provides a reference showing the relative robustness violation of solution x against the robustness indexes. For a solution x, if $R(x, p) = 0$, then x is a robust solution; otherwise, x is a nonrobust solution, and the larger the value, the stronger the robustness violation.
The selection steps for two solutions of the robust optimization problem are as follows:
  • The feasible solution is always preferred to the infeasible solution;
  • If two solutions are both infeasible, then the one having a smaller constraint violation value is preferred;
  • If two solutions are both feasible, then the robustness indexes will be calculated by inner optimization, and the following criteria will be considered:
    (a)
    The robust solution is always preferred to the nonrobust solution;
    (b)
    If two solutions are both nonrobust, then the one having a smaller robustness violation value is preferred;
    (c)
    If two solutions are both robust, then the one having better objective function value is preferred.
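The two definitions and the preference rules above translate directly into code; a sketch (the dictionary representation of a candidate is our own choice):

```python
def G(g_vals):
    """Constraint violation of Definition 1: sum of positive constraint values."""
    return sum(max(0.0, gi) for gi in g_vals)

def R(eta_f, eta_g, df0):
    """Robustness violation of Definition 2."""
    return max(0.0, eta_f - df0) + max(0.0, eta_g)

def prefer(a, b):
    """Return the preferred of two candidates following the selection rules.
    Each candidate is a dict with keys 'f', 'G', 'R' (objective value,
    constraint violation, robustness violation)."""
    if a['G'] == 0.0 and b['G'] > 0.0:
        return a                              # feasible beats infeasible
    if b['G'] == 0.0 and a['G'] > 0.0:
        return b
    if a['G'] > 0.0 and b['G'] > 0.0:
        return a if a['G'] < b['G'] else b    # smaller constraint violation
    if a['R'] > 0.0 or b['R'] > 0.0:
        return a if a['R'] < b['R'] else b    # smaller robustness violation
    return a if a['f'] < b['f'] else b        # both robust: better objective
```

Note that the robustness comparison is only ever reached for pairs of feasible candidates, matching the rule that the (costly) inner optimization is skipped for infeasible ones.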
The code structure of the selection mechanism is shown in Algorithm 4. After generating candidate solutions in the outer optimization process, the inner optimization is performed only when a feasible solution exists. Note that an infeasible solution can never satisfy the requirement of feasibility robustness.
Algorithm 4 Pseudocode of the selection mechanism
Require:
  • State: candidate solutions generated by improved STA or SQP (outer optimization)
  • G(State): constraint violation of the State
  • R(State): robustness violation of the State
  • f(State): objective function value of the State
Ensure:
  • Best: optimal solution
 1: num_g ← find(G(State) = 0)
 2: if length(num_g) ≥ 1 then
 3:     Calculate the robustness indexes (inner optimization)
 4:     num_r ← find(R(State) = 0)
 5:     if length(num_r) ≥ 1 then
 6:         Best ← arg min f(State)
 7:     else
 8:         Best ← arg min R(State)
 9:     end if
10: else
11:     Best ← arg min G(State)
12: end if

3.4. Framework of the H-STA for Solving Robust Optimization Problem

The framework of the H-STA method for robust optimization problem with interval uncertainty is shown in Figure 3.
Step 1 
(Initialization): The first step includes the generation of the initial solution and initialization of the parameters.
Step 2 
(Outer optimization - I-STA): Based on the initial solution, the I-STA method generates candidates using four transformation operators.
Step 3 
(Selection mechanism): The proposed selection mechanism is used to select the best solution among many candidates. If there is a feasible solution in the candidate solutions, the inner optimization is conducted based on Taylor series surrogate model.
Step 4 
(Switching criterion): The switching criterion determines whether the I-STA method should be replaced by the SQP method. The I-STA method is a stochastic optimization algorithm whose rate of convergence slows in the later iterations, whereas the SQP method offers a fast convergence rate toward the optimal solution when given a good initial solution. Thus, if the objective value in the improved STA method changes very slowly, the SQP method is invoked to improve the rate of convergence. In this paper, the switching index is defined as follows:
$$\lambda_k = \left| \frac{f_k - f_{k-1}}{f_k} \right|, \tag{21}$$
where $f_k$ and $f_{k-1}$ are the objective function values of $x_k$ and $x_{k-1}$, respectively. If $\lambda_k$ is less than a threshold value ($\lambda$), the rate of convergence of the I-STA method is considered slow, and the SQP method is carried out.
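A direct transcription of the switching rule (the threshold value below is an illustrative choice, not the paper's):

```python
def should_switch(f_hist, lam_threshold=1e-4):
    """Switching criterion based on the index lambda_k defined above:
    hand control to SQP once the relative improvement of the best
    objective value stalls. 'f_hist' is the history of best objectives."""
    if len(f_hist) < 2 or f_hist[-1] == 0.0:
        return False
    lam = abs((f_hist[-1] - f_hist[-2]) / f_hist[-1])
    return lam < lam_threshold
```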
Step 5 
(Outer optimization—SQP): Considering the final solution of the I-STA method as an initial point, the SQP method is performed to improve the precision of the solution.
Step 6 
(Selection mechanism): The proposed selection mechanism is used to compare the final solution and the initial solution of the SQP method.
Step 7 
(Stopping criterion): We use the maximum number of iterations as the stopping criterion. It is worth noting that the number of iterations of the SQP method is also taken into account when calculating the total number of iterations.
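Steps 1–7 can be condensed into a driver loop. The sketch below is a structural illustration only: the global stage is reduced to the new expansion operator, the selection mechanism to a plain penalized-objective comparison, and the SQP stage to SciPy's SLSQP, so it is not the full H-STA.

```python
import numpy as np
from scipy.optimize import minimize

def h_sta_ro(obj, x0, x_l, x_u, iter_max=100, lam_threshold=1e-4, seed=42):
    """Skeleton of the H-STA framework (Steps 1-7). 'obj' is a penalized
    objective with constraint/robustness violations already folded in."""
    rng = np.random.default_rng(seed)
    best = np.asarray(x0, dtype=float)                     # Step 1: initialize
    f_best = obj(best)
    history = [f_best]
    R_x = (x_u - x_l) / 2.0
    for _ in range(iter_max):
        # Step 2: candidate generation (expansion operator stands in for I-STA)
        cands = np.clip(best + rng.standard_normal((30, best.size)) * R_x,
                        x_l, x_u)
        # Step 3: selection (plain best-objective selection in this sketch)
        vals = [obj(c) for c in cands]
        i = int(np.argmin(vals))
        if vals[i] < f_best:
            best, f_best = cands[i], vals[i]
        history.append(f_best)
        # Step 4: switch once the relative improvement stalls
        lam = abs((history[-1] - history[-2]) / history[-1]) if history[-1] else 0.0
        if len(history) > 5 and lam < lam_threshold:
            break
    # Steps 5-6: SQP-type refinement from the I-STA result (SLSQP stand-in)
    res = minimize(obj, best, method='SLSQP', bounds=list(zip(x_l, x_u)))
    return res.x if res.fun < f_best else best             # Step 7: stop
```

The hand-off is one-directional here; the full method additionally compares the refined and initial solutions through the selection mechanism of Section 3.3.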

4. Verification Examples

In this section, the proposed method is applied to seven optimization examples with interval uncertainty. Table 1 gives the uncertainty occurrences in each example. To demonstrate the effectiveness of the H-STA method for robust optimization (H-STA-RO), the following methods are used for comparison: the I-STA method for robust optimization problems (I-STA-RO), the basic STA method for robust optimization problems (STA-RO), the SQP method for robust optimization problems (SQP-RO), and five well-known robust optimizers (the robust optimization method with Chebyshev surrogate models (I-RO) [22], the robust genetic algorithm (GA-RO), the Kriging metamodel-assisted robust optimization method (IK-GA-RO) [41], the robust optimization method based on Benders decomposition (BD-RO) [27], and the robust optimization method using differential evolution and sequential quadratic programming (DE-SQP-RO) [42]).
The complexity of H-STA-RO can be represented by the value of function evaluations ( FE ), which is calculated as
$$\mathrm{FE} = \mathrm{SE} \times iter_{STA} \times N_{STA} + \mathrm{FE}_{SQP} + N_R \times (R_f + R_g),$$
where $\mathrm{SE}$ is the search enforcement; $iter_{STA}$ and $N_{STA}$ are the number of generations and the number of transformation operators used in the I-STA method, respectively; and $\mathrm{FE}_{SQP}$ is the total number of function evaluations in the SQP method. Since the inner optimization is performed only when a feasible solution is found, $N_R$ denotes the number of times the inner optimization is performed, and $R_f$ and $R_g$ denote the average number of function evaluations a candidate requires to obtain its objective robustness index and feasibility robustness index, respectively.
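The bookkeeping above translates directly into code (a sketch; the numeric values in the example call are hypothetical placeholders, not figures from the paper):

```python
def function_evaluations(SE, iter_sta, N_sta, FE_sqp, N_R, R_f, R_g):
    """Total function evaluations of the hybrid method:
    outer I-STA sampling + SQP evaluations + inner robustness analyses."""
    return SE * iter_sta * N_sta + FE_sqp + N_R * (R_f + R_g)

# e.g. SE = 30 candidates, 60 generations, 4 operators,
# plus assumed SQP and inner-optimization costs
fe = function_evaluations(30, 60, 4, 500, 40, 25, 25)  # 9700
```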
To evaluate the performance of the different methods, all results are obtained over 20 runs in MATLAB R2016a on a Windows 10 machine with a 2.40 GHz Intel Core i5 and 16.0 GB RAM. The SQP method and the genetic algorithm are run via the “fmincon” and “ga” functions, respectively, with all parameters at their default values. The parameters of the H-STA-RO method are selected empirically based on numerous experiments and application cases. In the standard continuous STA, the parameter settings are as follows: $\alpha_{\min} = 10^{-4}$, $\beta = 1$, $\gamma = 1$, $\delta = 1$, $\mathrm{SE} = 30$, and $fc = 2$. Many numerical experiments and engineering applications have demonstrated the effectiveness of these settings [34,35,39]. In this paper, to obtain better results for different problems, the parameters $\alpha_{\max}$, $\lambda$, and $iter_{\max}$ are fine-tuned based on the following guidelines:
  • The rotation factor $\alpha$, which controls the search range of the rotation transformation, is bounded as $\alpha_{\min} \le \alpha \le \alpha_{\max}$. A larger value of $\alpha$ allows more exploration of the local search space, and a smaller value of $\alpha$ can refine the quality of solutions. The value of $\alpha_{\max}$ is typically set to 1 based on previous studies. However, for problems in which the ranges of the decision variables are smaller than 1, it is pointless to search in a hypersphere of radius 1. Thus, the parameter $\alpha_{\max}$ in Example 4 ($0 \le x_1, x_2, x_3, x_4 \le 1$) is adjusted based on statistical analysis. As shown in Figure 4, performing 20 trials per test, we compare the average iterative results under different $\alpha_{\max}$ values. Given the same initial solution, the iterative curve with $\alpha_{\max} = 0.1$ converges fastest to better solutions. Therefore, $\alpha_{\max}$ in Example 4 is set to 0.1.
  • The threshold value of the switching index, $\lambda$, controls how frequently the two search operators switch. If one operator is trapped in slow convergence, the other search operator is brought in. As shown in Figure 5, a larger value of $\lambda$ increases the switching frequency but may yield a low-quality solution under the SQP method (e.g., $\lambda = 10^{-2}$), while a smaller value may cause slow convergence (e.g., $\lambda = 10^{-5}$). In this paper, $\lambda$, the threshold on the relative difference between two objective values, is adjusted within $[10^{-4}, 10^{-3}]$.
  • The maximum number of iterations $iter_{\max}$ depends on the complexity of the problem. For the two engineering optimization problems considered in this section, 80 to 100 iterations are sufficient. For the four numerical problems, $iter_{\max}$ is set to 40 to 60. Taking Example 4 as an example (Figure 5), the red dotted line shows almost no update after generation 45, so $iter_{\max}$ is set to 60 to guarantee the convergence of the H-STA method.
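For the rotation factor, one common scheme from the STA literature is to shrink $\alpha$ geometrically by the factor $fc$ each generation and restart from $\alpha_{\max}$ once it falls below $\alpha_{\min}$. A sketch of this scheme (the exact update rule used in the paper may differ):

```python
def next_alpha(alpha, alpha_max=1.0, alpha_min=1e-4, fc=2.0):
    """Shrink the rotation factor by fc each generation and restart
    from alpha_max once it falls below alpha_min (a common periodic
    decay scheme in the STA literature; an assumption here)."""
    alpha = alpha / fc
    return alpha if alpha >= alpha_min else alpha_max
```

Starting from $\alpha_{\max} = 1$ this alternates between coarse global moves (large $\alpha$) and fine local refinement (small $\alpha$) over the run.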
Table 2 summarizes the parameter settings of the H-STA method used in the study.

4.1. Example 1

This unconstrained optimization problem is used to verify the accuracy of the inner optimization method. The uncertain problem is given as follows:
$$\begin{aligned}
\min_{x_1^c,\, x_2^c}\ & f(x_1^c, x_2^c) = x_2^c (x_1^c + 0.25)^2 + (x_1^c + 0.25)^3 + (x_1^c + 0.25)^4 + 4 \\
\text{s.t.}\ & -3 \le [x_1], [x_2] \le 3 \\
\text{with}\ & [x_1] = [x_1^c - 0.1,\, x_1^c + 0.1],\quad [x_2] = [x_2^c - 0.1,\, x_2^c + 0.1].
\end{aligned}$$
With uncertainties existing in the decision variables, the optimal solution should be the minimum of its objective upper bound, which can be represented as
$$\min_{x_1^c,\, x_2^c} f^u(x_1^c, x_2^c) \quad \text{where} \quad f^u = \max_{x_1 \in [x_1],\, x_2 \in [x_2]} f(x_1, x_2).$$
The upper bound of the objective function f u is calculated by inner optimization. Table 3 shows the best results obtained by three different robust optimization methods over 20 trials. The interval robust optimization (I-RO) method [22] uses the multi-island genetic algorithm (MIGA) as the outer optimization method, and its inner optimization is based on the second-order Chebyshev surrogate model. The linearization robust optimization (L-RO) process [22] replaces the inner optimization method of I-RO with the first-order Taylor series surrogate model. In the H-STA-RO method, the inner optimization method is based on the second-order Taylor series surrogate model.
To obtain an accurate value of the objective upper bound, we used the Monte Carlo method in the uncertain range around the design point (with $2 \times 10^8$ samples), which serves as the reference value (R) for assessing the accuracy of the inner optimization methods.
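Such a Monte Carlo reference for $f^u$ can be sketched as follows (the objective is the reconstruction shown above, whose sign conventions are assumptions; sample counts here are small for illustration, far below the $2 \times 10^8$ used for the reference value):

```python
import random

def f(x1, x2):
    # Example 1 objective as reconstructed above (sign conventions assumed)
    t = x1 + 0.25
    return x2 * t**2 + t**3 + t**4 + 4

def upper_bound_mc(x1c, x2c, r=0.1, n=100_000, seed=0):
    """Monte Carlo estimate of f_u = max f over the interval box
    [x1c - r, x1c + r] x [x2c - r, x2c + r] around the nominal design."""
    rng = random.Random(seed)
    best = -float("inf")
    for _ in range(n):
        x1 = x1c + rng.uniform(-r, r)
        x2 = x2c + rng.uniform(-r, r)
        best = max(best, f(x1, x2))
    return best
```

By construction the estimate is never below the nominal objective value, and it approaches the true interval upper bound as the sample count grows.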
The results show that the values of f u based on the second-order Taylor series model and the second-order Chebyshev model are closer to the reference value (R). Figure 6 illustrates the optimization results of these three methods. Figure 6a shows the contour lines of the objective function values, and Figure 6b shows the results of the Monte Carlo test with 200 samples. We observe that the H-STA-RO method can find the global optimum and that its inner optimization method provides sufficient accuracy to evaluate the robustness index.

4.2. Example 2

This example is a nonlinear numerical problem with uncertainty in the decision variables [42]. The deterministic version of this example is a classical multimodal function called the “peaks function”. In the deterministic problem, the optimal solution is $x^* = [0.2283, -1.6255]$ and the objective function value is $f^* = -6.5511$. When the decision variable $x_1$ is subject to interval uncertainty, the robust optimization problem takes the form:
$$\begin{aligned}
\min_{x_1^c,\, x_2}\ & f(x_1^c, x_2) = 3 (1 - x_1^c)^2 e^{-(x_1^c)^2 - (x_2 + 1)^2} - 10 \left( \frac{x_1^c}{5} - (x_1^c)^3 - x_2^5 \right) e^{-(x_1^c)^2 - x_2^2} - \frac{1}{3} e^{-(x_1^c + 1)^2 - x_2^2} \\
\text{s.t.}\ & g_1 = 2 - (x_1^c)^2 - x_2^2 \le 0 \\
& g_2 = -8.5 x_1^c + 1.2 x_2 - 0.1 \le 0 \\
& -3 \le [x_1], x_2 \le 3 \\
\text{with}\ & \Delta f_0 = 0.02,\quad [x_1] = [x_1^c - 0.05,\, x_1^c + 0.05].
\end{aligned}$$
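As a sanity check on the objective, the deterministic peaks function can be reproduced and evaluated at the reported optimum (only the objective is shown; this is a check, not the authors' code):

```python
import math

def peaks(x1, x2):
    """The classical MATLAB 'peaks' function used as the objective."""
    return (3 * (1 - x1)**2 * math.exp(-x1**2 - (x2 + 1)**2)
            - 10 * (x1 / 5 - x1**3 - x2**5) * math.exp(-x1**2 - x2**2)
            - math.exp(-(x1 + 1)**2 - x2**2) / 3)

val = peaks(0.2283, -1.6255)  # approximately -6.5511, as reported
```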
Table 4 lists the optimal results obtained by the DE-SQP-RO, GA-RO, STA-RO, I-STA-RO, and H-STA-RO methods under the proposed selection mechanism. Since this example is a classical multimodal function, it has a known local optimum and global optimum. To verify the global search ability of the I-STA method, the success rate $p_s$ of escaping the local optimum (the percentage of successful runs among all runs) is analyzed.
In Table 4, the decision variable, the constraint function value, and the robustness violation value all correspond to the results with the best objective function value in 20 runs. The values for the success rate p s and the robust rate p r (the percentage of robust runs in total runs) are obtained by statistical analysis. The value for F E is the average value of the function evaluations and the standard deviation is also presented. The value of T represents the average runtime (in seconds). From Table 4, we observe the following:
(1) The success rate of escaping the local optimum with the STA-RO method is 60%, and five results fall into the local optimum [−0.2606, 0.4667] with objective function value 0.7881. With the I-STA-RO method, however, all tests find results close to the global optimum. Thus, the new expansion transformation operator offers better global search ability.
(2) The robust rates of the STA-RO method, the I-STA-RO method, and the H-STA-RO method are all 100 % (with the proposed selection mechanism), whereas only 95 % and 75 % of the results in the GA-RO method and the SQP-RO method (without the proposed selection mechanism) are robust, which demonstrates that the proposed selection mechanism can obtain a robust solution with higher probability.
(3) In the DE-SQP-RO method [42], although its objective function value is better than that of the H-STA-RO method, the result cannot satisfy the requirements of the robustness according to the value of R.
(4) The average function evaluations and runtime of the H-STA-RO method are smaller than those of the other methods (except the SQP-RO method). This is because, on the one hand, the proposed selection mechanism avoids useless calculations in the inner optimization, and on the other hand, the SQP method improves the efficiency of the search process. The proposed selection mechanism may also cause the inner optimization cost to vary, leading to a high standard deviation of $\mathrm{FE}$.
To verify the robustness of the solution obtained by the H-STA-RO method, a Monte Carlo simulation is conducted. Using 200 samples around the nominal value within the uncertainty interval, the objective robustness index and the feasibility robustness index are calculated. As shown in Figure 7, when the decision variable $x_1$ is perturbed, the variations of the objective function (Figure 7a) and the constraints (Figure 7b) for the solution obtained by the H-STA-RO method always remain within the acceptance range, whereas the deterministic solution cannot satisfy the robustness requirements.

4.3. Example 3

This constrained nonlinear problem originated from [41]. The problem formulation is:
$$\begin{aligned}
\min_{x_1^c,\, x_2^c}\ & f(x_1^c, x_2^c) = (x_1^c)^3 \sin(x_1^c + 4) + 10 (x_1^c)^2 + 22 x_1^c + 5 x_1^c x_2^c + 2 (x_2^c)^2 + 3 x_2^c + 12 \\
\text{s.t.}\ & g_1 = (x_1^c)^2 + 3 x_1^c - x_1^c \sin x_1^c + x_2^c - 2.75 \le 0 \\
& g_2 = \log(0.1 x_1^c + 0.41) + x_2^c e^{x_1^c} + 3 (x_2^c)^4 + (x_2^c)^3 \le 0 \\
& -4 \le [x_1] \le 1,\quad -1 \le [x_2] \le 1.5 \\
\text{with}\ & \Delta f_0 = 2.5,\quad [x_1] = [x_1^c - 0.4,\, x_1^c + 0.4],\quad [x_2] = [x_2^c - 0.4,\, x_2^c + 0.4],
\end{aligned}$$
where the decision variables $x_1$ and $x_2$ have uncertainty $[-0.4, 0.4]$ around $x_1^c$ and $x_2^c$. Without uncertainties, the optimal solution of the deterministic problem is $x^* = [-1.8256, 0.7411]$ with $f^* = -3.2871$. The robust results of the STA-RO, I-STA-RO, SQP-RO, H-STA-RO, GA-RO, and IK-GA-RO methods are shown in Table 5.
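As a sanity check on the formulation, the objective as reconstructed here can be evaluated at the deterministic optimum (a sketch; the sign conventions in this reconstruction are assumptions checked against the reported optimum, not the authors' code):

```python
import math

def f3(x1, x2):
    """Example 3 objective as reconstructed above (signs are assumptions)."""
    return (x1**3 * math.sin(x1 + 4) + 10 * x1**2 + 22 * x1
            + 5 * x1 * x2 + 2 * x2**2 + 3 * x2 + 12)

val = f3(-1.8256, 0.7411)  # close to the reported optimum value -3.2871
```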
From Table 5, it is observed that the H-STA-RO method can find the optimal solution with 100% robustness, and its function evaluations and runtime are smaller than those of the GA-RO, STA-RO, and I-STA-RO methods. Although the IK-GA-RO method requires fewer function calls, the optimality and robustness of its optimum are inferior to those of the H-STA-RO method. Figure 8 shows the robustness indices of the deterministic optimum and the robust optimum obtained by the H-STA-RO method. The deterministic optimum violates the robustness requirement at some points, whereas the robust optimum of the H-STA-RO method remains stable within the uncertain range.

4.4. Example 4

This example [27,43] illustrates solving a robust optimization problem with uncertainty in both the decision variables and the parameters. The problem is formulated as
$$\begin{aligned}
\min_{x_1, x_2, x_3^c, x_4}\ & f(x_1, x_2, x_3^c, x_4) = (x_1 - 0.6)^2 + (x_2 - 0.6)^2 - x_3^c x_4 + 10 \\
\text{s.t.}\ & g_1 = -p_1^c + x_1 + x_2 \le 0 \\
& g_2 = -p_2^c + x_3^c + x_4 \le 0 \\
& 0 \le x_1, x_2, [x_3], x_4 \le 1 \\
\text{with}\ & p_1^c = 1,\quad p_2^c = 1,\quad [x_3] = [x_3^c - 0.1,\, x_3^c + 0.1], \\
& [p_1] = [p_1^c - 0.1,\, p_1^c + 0.1],\quad [p_2] = [p_2^c - 0.1,\, p_2^c + 0.1].
\end{aligned}$$
Without incurring the uncertainties, the deterministic optimal solution is x = [ 0.5 , 0.5 , 0.5 , 0.5 ] with f = 9.770 . Table 6 shows the results for this example using robust optimization methods. These methods are the BD-RO method, the GA-RO method, the SQP-RO method, the STA-RO method, the I-STA-RO method, and the H-STA-RO method. Figure 9 shows the results of the Monte Carlo tests.
From Table 6, it is observed that the objective function values of the I-STA-RO and STA-RO methods are close to the optimal value, but these methods struggle to further refine the solution. The SQP-RO method has a good rate of convergence but cannot ensure the robustness of its solutions. The H-STA-RO method thus combines the advantages of I-STA and SQP to obtain a robust solution and search for the global optimum at lower computational cost. The GA-RO method achieves a better objective function value, but its computational efficiency and the robustness of its solution need improvement. The BD-RO method can find the optimum with fewer function calls, but its outer optimization is gradient-based, so its results are highly sensitive to the initial point. Figure 9 further demonstrates that, with uncertainties in both the decision variables and the parameters, the solution obtained by the H-STA-RO method still satisfies the constraints.
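Because the constraints in this example are linear in the uncertain parameters, their worst case over the intervals can be evaluated in closed form. A sketch for $g_1$ as reconstructed here (the constraint signs are assumptions checked against the reported optimum):

```python
def worst_case_g1(x1, x2, p1c=1.0, dp=0.1):
    """Worst case of g1 = -p1 + x1 + x2 over p1 in [p1c - dp, p1c + dp]:
    since g1 decreases in p1, the maximum is attained at the smallest p1."""
    return -(p1c - dp) + x1 + x2

# The deterministic optimum x1 = x2 = 0.5 makes g1 active (g1 = 0) at the
# nominal p1 = 1, but violates the worst case over the interval:
wc = worst_case_g1(0.5, 0.5)  # 0.1 > 0, infeasible under uncertainty
```

This is exactly why the deterministic solution fails the Monte Carlo feasibility test in Figure 9, while the robust solution retreats from the constraint boundary.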

4.5. Example 5: Welded Beam Design

The welded beam design (Figure 10) is a classical constrained optimization problem in engineering applications [36,44]. The design objective is to minimize the total cost of the welded beam, $f$. The decision variables are the thickness of the weld $x_1$, the length of the welded joint $x_2$, the width of the beam $x_3$, and the thickness of the beam $x_4$. The decision variables must satisfy constraints on the shear stress ($\tau$), the bending stress ($\sigma$), the buckling load on the bar ($P_c$), the end deflection of the beam ($\delta$), and the side constraints. The deterministic optimal result is $x^* = [0.2053, 3.2604, 9.0366, 0.2057]$ with $f^* = 1.6956$. When the decision variables are subject to uncertainties, the problem is modified as follows:
$$\begin{aligned}
\min_{x_1, x_2, x_3^c, x_4^c}\ & f(x_1, x_2, x_3^c, x_4^c) = 1.10471 x_1^2 x_2 + 0.04811 x_3^c x_4^c (14 + x_2) \\
\text{s.t.}\ & g_1 = \tau - \tau_{\max} \le 0,\quad g_2 = \sigma - \sigma_{\max} \le 0 \\
& g_3 = x_1 - x_4^c \le 0,\quad g_4 = 0.125 - x_1 \le 0 \\
& g_5 = \delta - \delta_{\max} \le 0,\quad g_6 = P - P_c \le 0 \\
& g_7 = \xi_1 x_1^2 + \xi_2 x_3^c x_4^c (14 + x_2) - 5 \le 0 \\
& 0.125 \le x_1 \le 2,\quad 0.1 \le x_2 \le 10 \\
& 0.1 \le [x_3] \le 10,\quad 0.1 \le [x_4] \le 2 \\
\text{where}\ & \xi_1 = 0.10471,\quad \xi_2 = 0.04811 \\
& \tau = \sqrt{\tau_1^2 + 2 \tau_1 \tau_2 \frac{x_2}{2R} + \tau_2^2},\quad \tau_1 = \frac{P}{\sqrt{2}\, x_1 x_2},\quad \tau_2 = \frac{M R}{J} \\
& M = P \left( L + \frac{x_2}{2} \right),\quad R = \sqrt{\frac{x_2^2}{4} + \left( \frac{x_1 + x_3^c}{2} \right)^2} \\
& J = 2 \left\{ \sqrt{2}\, x_1 x_2 \left[ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3^c}{2} \right)^2 \right] \right\} \\
& \sigma = \frac{6 P L}{x_4^c (x_3^c)^2},\quad \delta = \frac{6 P L^3}{E (x_3^c)^3 x_4^c} \\
& P_c = \frac{4.013 E \sqrt{(x_3^c)^2 (x_4^c)^6 / 36}}{L^2} \left( 1 - \frac{x_3^c}{2L} \sqrt{\frac{E}{4G}} \right) \\
& G = 12 \times 10^6~\mathrm{psi},\quad E = 30 \times 10^6~\mathrm{psi},\quad P = 6000~\mathrm{lb},\quad L = 14~\mathrm{in} \\
& \tau_{\max} = 13600~\mathrm{psi},\quad \sigma_{\max} = 30000~\mathrm{psi},\quad \delta_{\max} = 0.25~\mathrm{in} \\
\text{with}\ & \Delta f_0 = 0.1,\quad [x_3] = [x_3^c - 0.05,\, x_3^c + 0.05],\quad [x_4] = [x_4^c - 0.01,\, x_4^c + 0.01].
\end{aligned}$$
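The standard welded-beam cost objective can be checked against the reported deterministic optimum (objective only; the stress and buckling constraint model is omitted from this sketch):

```python
def weld_cost(x1, x2, x3, x4):
    """Welded-beam fabrication cost: weld material plus bar material."""
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14 + x2)

val = weld_cost(0.2053, 3.2604, 9.0366, 0.2057)  # about 1.6956, as reported
```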
Table 7 shows the results obtained by the DE-SQP-RO method [42], the GA-RO method, the SQP-RO method, the STA-RO method, the I-STA-RO method, and the H-STA-RO method.
From Table 7, the H-STA-RO method generates competitive solutions compared with the other methods. This is because (1) based on the I-STA method, the H-STA method can search the whole space and choose promising candidates, and (2) the SQP method can search the local area and improve the precision of the solution. The selection strategy and the inner optimization method proposed in this paper guarantee the robustness of the best solutions. The findings presented in Figure 11 verify the robustness of the final solutions with respect to their objective function (Figure 11a) and constraints (Figure 11b).

4.6. Example 6: Pressure Vessel Design

The optimal design of a pressure vessel (see Figure 12) is a typical application example of the min–max problem. To minimize the total cost of the vessel, $f$, there are four decision variables to optimize: $x_1$ (the thickness of the shell), $x_2$ (the thickness of the head), $x_3$ (the inner radius), and $x_4$ (the length of the vessel without the head). The best deterministic result is $f^* = 5886.4544$, corresponding to $x^* = [0.7785, 0.3848, 40.3389, 199.7753]$. With the uncertainties in the decision variables taken into account, the pressure vessel optimization problem is formulated as follows:
$$\begin{aligned}
\min_{x_1^c, x_2, x_3, x_4^c}\ & f(x_1^c, x_2, x_3, x_4^c) = 0.6224 x_1^c x_3 x_4^c + 1.7781 x_2 x_3^2 + 3.1661 (x_1^c)^2 x_4^c + 19.84 (x_1^c)^2 x_3 \\
\text{s.t.}\ & g_1 = -x_1^c + 0.0193 x_3 \le 0 \\
& g_2 = -x_2 + 0.00954 x_3 \le 0 \\
& g_3 = -\pi x_3^2 x_4^c - \frac{4}{3} \pi x_3^3 + 1296000 \le 0 \\
& g_4 = x_4^c - 240 \le 0 \\
& 0 \le [x_1] \le 1.5,\quad 0 \le x_2 \le 1.5,\quad 30 \le x_3 \le 50,\quad 160 \le [x_4] \le 200 \\
\text{with}\ & \Delta f_0 = 100,\quad [x_1] = [x_1^c - 0.01,\, x_1^c + 0.01],\quad [x_4] = [x_4^c - 0.05,\, x_4^c + 0.05],
\end{aligned}$$
where $x_1^c$ and $x_4^c$ are the nominal values of $x_1$ and $x_4$, respectively.
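The standard pressure-vessel cost objective can likewise be checked against the reported deterministic optimum (objective only; constraints omitted from this sketch):

```python
def vessel_cost(x1, x2, x3, x4):
    """Pressure-vessel cost: material, forming, and welding terms."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

val = vessel_cost(0.7785, 0.3848, 40.3389, 199.7753)  # close to 5886.4544
```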
The results obtained by seven interval robust optimization methods are shown in Table 8. It is observed that (1) for feasibility, all the methods find feasible solutions that satisfy the constraints; (2) for robustness, the IK-GA-RO, STA-RO, I-STA-RO, and H-STA-RO methods obtain robust solutions, since their robustness violation values (R) are equal to 0; and (3) for optimality, although the result of the SQP-RO method has a smaller objective function value, it violates the robustness requirements; among the robust solutions, the result of the H-STA-RO method is superior to the others because of its lower f value. Moreover, the results for p r , F E , and T demonstrate the efficiency of the H-STA-RO method. Figure 13 shows the Monte Carlo test results of the deterministic solution and the robust solution of the H-STA-RO method. The deterministic optimum becomes infeasible in some cases, but the robust optimum of the H-STA-RO method remains feasible even when the decision variables are subject to interval variations.

4.7. Example 7: Power Scheduling Design

Power scheduling optimization is an important issue for energy consumption in the process industry. Take the zinc electrowinning process as an example: it accounts for 80% of the total energy consumption of zinc hydrometallurgy. Under the time-of-use power pricing policy, the aim of power scheduling optimization is to minimize the electricity charge ($y$) by adjusting the current density ($x_1$) and the concentrations of $\mathrm{Zn}^{2+}$ ($x_2$) and $\mathrm{H}^{+}$ ($x_3$) in different periods.
A zinc electrowinning process, shown in Figure 14, contains seven potrooms in series, and each potroom has several electrolytic cells in parallel. With an appropriate current and zinc-to-acid ratio, zinc ions are deposited on the cathode surface. Electricity use is charged at different prices during three time periods (peak, shoulder, and off-peak), and it is common practice to produce more when the electricity price is lower. To analyze the power scheduling system, the zinc electrowinning process model is established based on the electrochemical reaction mechanism and historical data. With deterministic parameters, the best result is $y^* = 1.5922 \times 10^6$, corresponding to $x_1 = [261, 317, 650]$, $x_2 = [60, 45, 60]$, $x_3 = [200, 200, 200]$. With incomplete knowledge of the process model, it is more accurate to estimate some parameters as interval values. The uncertain power scheduling optimization problem is formulated as follows:
$$\begin{aligned}
\min_{x_1, x_2, x_3}\ & f(x_1, x_2, x_3) = J_0 + \sum_{i=1}^{3} \sum_{j=1}^{7} P_i T_i V_{ij} L_{ij} \\
\text{s.t.}\ & g = \sum_{i=1}^{3} \sum_{j=1}^{7} q E_i T_i L_{ij} = g_0^c \\
& 100 \le x_{1,i} \le 650,\quad 45 \le x_{2,i} \le 60,\quad 160 \le x_{3,i} \le 200 \\
\text{where}\ & V_{ij} = V(x_{1,i}, x_{2,i}, x_{3,i}) = N_j \Big( p_1 - p_2 \ln \big( p_3^c x_{3,i}^{-1} \big) - p_4^c \ln ( p_5 x_{2,i} ) \\
& \qquad + p_6^c x_{1,i} \big( p_7^c + p_8^c x_{3,i} - p_9^c x_{2,i} \big)^{-1} + p_{10}^c \lg x_{1,i} + p_{11}^c x_{1,i} \Big) \\
& L_{ij} = L(x_{1,i}) = B_j S x_{1,i} \\
& E_i = E(x_{1,i}, x_{2,i}, x_{3,i}) = p_{12}^c \exp ( p_{13} + p_{14} \lg x_{1,i} )\, x_{2,i}^{1.6} x_{3,i}^{-0.2} \\
& \qquad + \big( p_{15}^c \exp ( p_{16}^c + p_{17} \lg x_{1,i} )\, x_{2,i} x_{3,i}^{-0.2} + p_{18}^c + p_{19}^c x_{2,i}^{-0.6} \big)^{-1} x_{1,i}^{-1} \\
& q = 1.2202~\mathrm{g/Ah},\quad S = 1.13~\mathrm{m^2},\quad T = [11, 5, 8]~\mathrm{h} \\
& P = 0.5627 \times [1.6, 1.0, 0.7]~\text{¥/kWh} \\
& N = [240, 240, 246, 192, 208, 208, 208] \\
& B = [34, 46, 54, 56, 56, 57, 57],\quad g_0^c = 960~\mathrm{tons} \\
& p = [1.588, 0.027, 1.1025 \times 10^{-12}, 0.0135, 8.15, 6.2 \times 10^{-4}, 0.5931, 0.0181, 0.0313, 0.0793, \\
& \qquad 5 \times 10^{-4}, 1091.46, 4.06, 2.8, 0.0813, 1.8, 3.5, 0.35, 2172.45] \\
\text{with}\ & \Delta f_0 = 80000 \\
& [p_n] = [0.95 p_n^c,\, 1.05 p_n^c],\quad n = 3, 4, 6, \ldots, 11 \\
& [p_n] = [0.99 p_n^c,\, 1.01 p_n^c],\quad n = 12, 13, 16, 17, 19 \\
& [g_0] = [g_0^c - 20,\, g_0^c + 20].
\end{aligned}$$
The parameters in the above problem are as follows: J 0 is the capacity electricity charges; P i is the electricity price in the ith period; T i is the duration of the ith period; V i j is the cell voltage of the jth plant in the ith period; L i j is the magnitude of the current of the electrolysis process of the jth plant in the ith period; g 0 is the zinc daily output; q is the electrochemical equivalent of zinc; E i is the current efficiency in the ith period; N j is the number of cells in the jth plant; B j is the number of plates in a cell in the jth plant; and S is the area of the cathode plate.
The results obtained by five interval robust optimization methods are shown in Table 9. Compared with GA-RO and SQP-RO, the STA-based methods obtain smaller objective function values, corresponding to more accurate solutions, and they also obtain more robust results. Compared with STA-RO and I-STA-RO, H-STA-RO obtains an accurate solution with fewer function evaluations and less runtime, which demonstrates the efficiency of the proposed method. Figure 15 shows the Monte Carlo test results of the deterministic solution and the robust solution of the H-STA-RO method. The deterministic optimum becomes infeasible in some cases, whereas the robust optimum of the H-STA-RO method is always feasible.

5. Conclusions

A hybrid state transition algorithm is proposed to alleviate the main problems of robust optimization, namely high computational cost and poor convergence. Based on worst-case analysis, the robust optimization problem is transformed into a min–max problem. In the outer optimization (the minimization problem), the hybrid state transition algorithm is used to improve the rate of convergence and avoid being trapped in local optima, while sequential quadratic programming strengthens the local search ability and reduces computational cost. In the inner optimization (the maximization problem), the second-order Taylor series surrogate model is used to approximate the nonlinear functions and decrease the computational cost. Moreover, to balance the robustness and optimality of candidate solutions, a novel feasibility-checking mechanism is proposed that performs the inner optimization only when a feasible solution is found. The proposed method is verified on seven examples. The results show that it offers competitive convergence and efficiency compared with existing robust optimization methods.
In our future work, the robust optimization method for other forms of uncertainties (such as fuzzy uncertainty and interval fuzzy uncertainty) and the applicability enhancement of the surrogate model will be investigated.

Author Contributions

Methodology, J.H.; Validation, H.Z.; Formal analysis, Y.Z.; Investigation, H.Z.; Writing—original draft, H.Z.; Writing—review & editing, J.H.; Supervision, X.Z.; Funding acquisition, J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program of China (Grant No. 2021YFB1715200), the Hunan Natural Science Foundation (Grant No. 2021JJ40785 and No. 2022JJ40639), and the National Natural Science Foundation of China (Grant No. 62103447 and No. 62203471).

Data Availability Statement

The data presented in this study are available in the verification section of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, J.; Su, C. Robust optimization of microgrid based on renewable distributed power generation and load demand uncertainty. Energy 2021, 223, 120043. [Google Scholar] [CrossRef]
  2. Witte, D.D.; Qing, J.; Couckuyt, I.; Dhaene, T.; Ginste, D.V.; Spina, D. A robust Bayesian optimization framework for microwave circuit design under uncertainty. Electronics 2022, 11, 2267. [Google Scholar] [CrossRef]
  3. Tirkolaee, E.B.; Mahdavi, I.; Esfahani, M.M.S.; Weber, G.-W. A robust green location-allocation-inventory problem to design an urban waste management system under uncertainty. Waste Manag. 2020, 102, 340–350. [Google Scholar] [CrossRef] [PubMed]
  4. Ren, Z.; Zhen, X.; Jiang, Z.; Gao, Z.; Li, Y.; Shi, W. Underactuated control and analysis of single blade installation using a jackup installation vessel and active tugger line force control. Mar. Struct. 2023, 88, 103338. [Google Scholar] [CrossRef]
  5. Shi, G.; Wei, Q.; Liu, D. Optimization of electricity consumption in office buildings based on adaptive dynamic programming. Soft Comput. 2017, 21, 6369–6379. [Google Scholar] [CrossRef]
  6. Acar, E.; Bayrak, G.; Jung, Y.; Lee, I.; Ramu, P.; Ravichandran, S.S. Modeling, analysis, and optimization under uncertainties: A review. Struct. Multidiscip. Optim. 2021, 64, 2909–2945. [Google Scholar] [CrossRef]
  7. Sun, X.-K.; Peng, Z.-Y.; Guo, X.-L. Some characterizations of robust optimal solutions for uncertain convex optimization problems. Optim. Lett. 2016, 10, 1463–1478. [Google Scholar] [CrossRef]
  8. Rahim, S.; Wang, Z.; Ju, P. Overview and applications of robust optimization in the avant-garde energy grid infrastructure: A systematic review. Appl. Energy 2022, 319, 119140. [Google Scholar] [CrossRef]
  9. Paenke, I.; Branke, J.; Jin, Y. Efficient search for robust solutions by means of evolutionary algorithms and fitness approximation. IEEE Trans. Evol. Comput. 2006, 10, 405–420. [Google Scholar] [CrossRef]
  10. Shargh, S.; Mohammadi-Ivatloo, B.; Seyedi, H.; Abapour, M. Probabilistic multi-objective optimal power flow considering correlated wind power and load uncertainties. Renew. Energy 2016, 94, 10–21. [Google Scholar] [CrossRef]
  11. Zhou, X.; Cai, X.; Zhang, H.; Zhang, Z.; Jin, T.; Chen, H.; Deng, W. Multi-strategy competitive-cooperative co-evolutionary algorithm and its application. Inf. Sci. 2023, 635, 328–344. [Google Scholar] [CrossRef]
  12. Summers, T. Distributionally robust sampling-based motion planning under uncertainty. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 6518–6523. [Google Scholar]
  13. Zhu, X.; Zeng, B.; Dong, H.; Liu, J. An interval-prediction based robust optimization approach for energy-hub operation scheduling considering flexible ramping products. Energy 2020, 194, 116821. [Google Scholar] [CrossRef]
  14. Ghasemloo, M.; Hejazi, M.A.; Hashemi-Dezaki, H. Flexibility optimization in robust co-optimization of combined power system and gas networks using transmission lines’ switching. Electronics 2022, 11, 2647. [Google Scholar] [CrossRef]
  15. Li, G.; Lu, Z.; Li, L.; Ren, B. Aleatory and epistemic uncertainties analysis based on non-probabilistic reliability and its kriging solution. Appl. Math. Model. 2016, 40, 5703–5716. [Google Scholar] [CrossRef]
  16. Wang, L.; Liu, D.; Yang, Y.; Wang, X.; Qiu, Z. A novel method of non-probabilistic reliability-based topology optimization corresponding to continuum structures with unknown but bounded uncertainties. Comput. Methods Appl. Mech. Eng. 2017, 326, 573–595. [Google Scholar] [CrossRef]
  17. Meng, D.; Xie, T.; Wu, P.; He, C.; Hu, Z.; Lv, Z. An uncertainty-based design optimization strategy with random and interval variables for multidisciplinary engineering systems. In Structures; Elsevier: Amsterdam, The Netherlands, 2021; Volume 32, pp. 997–1004. [Google Scholar]
  18. Liu, W.; Fu, M.; Yang, M.; Yang, Y.; Wang, L.; Wang, R.; Zhao, T. A bi-level interval robust optimization model for service restoration in flexible distribution networks. IEEE Trans. Power Syst. 2020, 36, 1843–1855. [Google Scholar] [CrossRef]
  19. Kang, Z.; Bai, S. On robust design optimization of truss structures with bounded uncertainties. Struct. Multidiscip. Optim. 2013, 47, 699–714. [Google Scholar] [CrossRef]
  20. Li, W.; Xiao, M.; Yi, Y.; Gao, L. Maximum variation analysis based analytical target cascading for multidisciplinary robust design optimization under interval uncertainty. Adv. Eng. Inform. 2019, 40, 81–92. [Google Scholar] [CrossRef]
  21. Rehman, S.U.; Langelaar, M. Efficient global robust optimization of unconstrained problems affected by parametric uncertainties. Struct. Multidiscip. Optim. 2015, 52, 319–336. [Google Scholar] [CrossRef] [Green Version]
  22. Wu, J.; Luo, Z.; Zhang, N.; Zhang, Y. A new interval uncertain optimization method for structures using Chebyshev surrogate models. Comput. Struct. 2015, 146, 185–196. [Google Scholar] [CrossRef]
  23. Xia, B.; Lü, H.; Yu, D.; Jiang, C. Reliability-based design optimization of structural systems under hybrid probabilistic and interval model. Comput. Struct. 2015, 160, 126–134. [Google Scholar] [CrossRef]
  24. Hao, P.; Wang, Y.; Liu, C.; Wang, B.; Wu, H. A novel non-probabilistic reliability-based design optimization algorithm using enhanced chaos control method. Comput. Methods Appl. Mech. Eng. 2017, 318, 572–593. [Google Scholar] [CrossRef]
  25. Chen, S.H.; Song, M.; Chen, Y.D. Robustness analysis of responses of vibration control structures with uncertain parameters using interval algorithm. Struct. Saf. 2017, 29, 94–111. [Google Scholar] [CrossRef]
  26. Zhou, J.; Li, M. Advanced robust optimization with interval uncertainty using a single-looped structure and sequential quadratic programming. J. Mech. Des. 2014, 136, 021008. [Google Scholar] [CrossRef]
  27. Siddiqui, S.; Azarm, S.; Gabriel, S. A modified Benders decomposition method for efficient robust optimization under interval uncertainty. Struct. Multidiscip. Optim. 2011, 44, 259–275. [Google Scholar] [CrossRef]
  28. Fu, J.; Faust, J.M.M.; Chachuat, B.; Mitsos, A. Local optimization of dynamic programs with guaranteed satisfaction of path constraints. Automatica 2015, 62, 184–192. [Google Scholar] [CrossRef] [Green Version]
  29. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning, 28th ed.; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 2006; Volume xiii. [Google Scholar]
  30. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  31. Song, Y.; Liu, Y.; Chen, H.; Deng, W. A multi-strategy adaptive particle swarm optimization algorithm for solving optimization problem. Electronics 2023, 12, 491. [Google Scholar] [CrossRef]
  32. Ye, F.; Chen, J.; Tian, Y.; Jiang, T. Cooperative task assignment of a heterogeneous multi-UAV system using an adaptive genetic algorithm. Electronics 2020, 9, 687. [Google Scholar] [CrossRef] [Green Version]
  33. Dong, Y.; Zhang, H.; Wang, C.; Zhou, X. An adaptive state transition algorithm with local enhancement for global optimization. Appl. Soft Comput. 2022, 121, 108733. [Google Scholar] [CrossRef]
  34. Zhou, X.; Yang, C.; Gui, W. A statistical study on parameter selection of operators in continuous state transition algorithm. IEEE Trans. Cybern. 2019, 49, 3722–3730. [Google Scholar] [CrossRef] [Green Version]
  35. Han, J.; Yang, C.; Zhou, X.; Gui, W. A new multi-threshold image segmentation approach using state transition algorithm. Appl. Math. Model. 2017, 44, 588–601. [Google Scholar] [CrossRef]
  36. Han, J.; Yang, C.; Zhou, X.; Gui, W. A two-stage state transition algorithm for constrained engineering optimization problems. Int. J. Control Autom. Syst. 2017, 16, 522–534. [Google Scholar] [CrossRef]
37. Li, M.; Zhang, W.; Hu, B.; Kang, J.; Wang, Y.; Lu, S. Automatic assessment of depression and anxiety through encoding pupil-wave from HCI in VR scenes. ACM Trans. Multimed. Comput. Commun. Appl. 2022. [Google Scholar] [CrossRef]
  38. Stewart, J. Calculus: Concepts and Contexts; Cengage Learning: Boston, MA, USA, 2009. [Google Scholar]
  39. Zhou, X.; Yang, C.; Gui, W. State transition algorithm. J. Ind. Manag. Optim. 2012, 8, 1039–1056. [Google Scholar] [CrossRef] [Green Version]
  40. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  41. Zhou, H.; Zhou, Q.; Liu, C.; Zhou, T. A Kriging metamodel-assisted robust optimization method based on a reverse model. Eng. Optim. 2018, 50, 253–272. [Google Scholar] [CrossRef]
  42. Cheng, S.; Li, M. Robust optimization using hybrid differential evolution and sequential quadratic programming. Eng. Optim. 2015, 47, 87–106. [Google Scholar] [CrossRef]
  43. Zhou, J.; Cheng, S.; Li, M. Sequential quadratic programming for robust optimization with interval uncertainty. J. Mech. Des. 2012, 134, 100913. [Google Scholar] [CrossRef]
  44. Zhou, Q.; Jiang, P.; Shao, X.; Zhou, H.; Hu, J. An on-line Kriging metamodel assisted robust optimization approach under interval uncertainty. Eng. Comput. 2017, 34, 420–446. [Google Scholar] [CrossRef]
Figure 1. Performance of the original expansion transformation: (a) with initial point [10, 10], and (b) with initial point [1, 1].
Figure 2. Performance of the new expansion transformation: (a) with initial point [10, 10], and (b) with initial point [1, 1].
Figure 3. Framework of the hybrid state transition algorithm for robust optimization problems with interval uncertainty.
Figure 4. Iterative curves with different α_max values for Example 4.
Figure 5. Iterative curves with different λ values for Example 4.
Figure 6. Robust optimization results of three methods for Example 1: (a) contour lines of the objective function values; (b) Monte Carlo test results of I-RO, L-RO, and H-STA-RO.
Figure 7. Robustness verification of the deterministic and robust solution for Example 2: (a) objective robustness verification, and (b) feasibility robustness verification.
Figure 8. Robustness verification of the deterministic and robust solution for Example 3: (a) objective robustness verification; (b) feasibility robustness verification.
Figure 9. Robustness verification of the deterministic and robust solution for Example 4.
Figure 10. The welded beam.
Figure 11. Robustness verification of the deterministic and robust solution for Example 5: (a) objective robustness verification; (b) feasibility robustness verification.
Figure 12. The pressure vessel.
Figure 13. Robustness verification of the deterministic and robust solution for Example 6: (a) objective robustness verification; (b) feasibility robustness verification.
Figure 14. Zinc electrowinning process.
Figure 15. Robustness verification of the deterministic and robust solution for Example 7: (a) objective robustness verification; (b) feasibility robustness verification.
Table 1. Uncertainty occurrences in each example.

| Uncertainty Occurrences | Examples 1, 2, and 3 | Example 4 | Examples 5 and 6 | Example 7 |
|---|---|---|---|---|
| Decision variables x | | | | |
| Parameters p | | | | |
Table 2. Parameter settings of the H-STA method used in the examples.

| Parameters | Examples 1, 2, 3 | Example 4 | Examples 5, 6, and 7 |
|---|---|---|---|
| α_max | 1 | 0.1 | 1 |
| λ | 10³ | 10⁴ | 10³ |
| iter_max | 60 | 60 | 80 |
Table 3. Performance comparison of Example 1.

| | I-RO (Second-Order Chebyshev) [22] | L-RO (First-Order Taylor Series) [22] | H-STA-RO (Second-Order Taylor Series) |
|---|---|---|---|
| x₁ | −1.42108 | −1.42542 | −1.42046 |
| x₂ | 2.9000 | 2.9000 | 2.9000 |
| f_u | 0.1410 | 0.1030 | 0.1405 |
| R | 0.1410 | 0.1447 | 0.1405 |

Note: R is the reference value of the objective upper bound obtained from the Monte Carlo simulation run.
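The Monte Carlo reference value R above is the sampled upper bound of the objective under interval uncertainty. As a rough illustration of how such a reference can be computed (not the authors' implementation; the objective function and interval half-widths below are hypothetical placeholders):

```python
import random

def monte_carlo_upper_bound(f, x, delta, n_samples=100_000, seed=0):
    """Estimate the worst-case (upper-bound) objective value at x by
    sampling a perturbation uniformly from [-delta_i, +delta_i] on each
    decision variable and keeping the largest objective value seen."""
    rng = random.Random(seed)
    worst = float("-inf")
    for _ in range(n_samples):
        xp = [xi + rng.uniform(-d, d) for xi, d in zip(x, delta)]
        worst = max(worst, f(xp))
    return worst

# Hypothetical objective and interval half-widths, for illustration only.
f = lambda x: (x[0] + 1.0) ** 2 + (x[1] - 3.0) ** 2
R = monte_carlo_upper_bound(f, x=[-1.0, 3.0], delta=[0.1, 0.1])
```

With enough samples the estimate approaches, but never exceeds, the true interval upper bound, which is why it serves as a reference for the second-order Taylor approximation.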
Table 4. Performance comparison of Example 2.

| | DE-SQP-RO [42] | GA-RO | SQP-RO | STA-RO | I-STA-RO | H-STA-RO |
|---|---|---|---|---|---|---|
| x₁ | 0.1944 | 0.1945 | 0.1947 | 0.1942 | 0.1943 | 0.1945 |
| x₂ | −1.8414 | −1.8410 | −1.8395 | −1.8438 | −1.8427 | −1.8414 |
| f | −5.9559 | −5.9579 | −5.9655 | −5.9435 | −5.9491 | −5.9557 |
| g₁ | −3.3152 | −3.3137 | −3.3080 | −3.3242 | −3.3202 | −3.3153 |
| g₂ | −0.6573 | −0.6559 | −0.6526 | −0.6616 | −0.6596 | −0.6568 |
| R | 5.00 × 10⁻⁵ | 9.87 × 10⁻⁶ | 5.00 × 10⁻⁶ | 0 | 0 | 0 |
| p_s | − | 90% | 35% | 75% | 100% | 100% |
| p_r | − | 95% | 75% | 100% | 100% | 100% |
| FE | 1,941,630 ± − | 27,974 ± 59.00 | 139.9000 ± 62.27 | 38,053 ± 3.44 × 10³ | 36,650 ± 1.28 × 10³ | 17,456 ± 3.53 × 10³ |
| T | − | 0.106 | 0.026 | 0.149 | 0.153 | 0.145 |

Note: R is the reference value of the robustness violation obtained from the Monte Carlo simulation, and − denotes data not available.
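For the constrained examples, the robustness-violation reference R is obtained analogously by sampling the constraints: R is the largest positive constraint value found over the sampled perturbations, and R = 0 means no sampled point was infeasible. A minimal sketch, with hypothetical constraints and intervals rather than the paper's actual test problem:

```python
import random

def robustness_violation(constraints, x, delta, n_samples=100_000, seed=0):
    """Worst sampled violation of constraints g_j(x) <= 0 when each
    variable is perturbed uniformly within [-delta_i, +delta_i].
    Returns 0.0 if every sampled perturbation remains feasible."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_samples):
        xp = [xi + rng.uniform(-d, d) for xi, d in zip(x, delta)]
        worst = max(worst, max(g(xp) for g in constraints))
    return worst

# Hypothetical constraints and intervals, for illustration only.
g1 = lambda x: x[0] + x[1] - 2.0   # feasible when x0 + x1 <= 2
g2 = lambda x: -x[0]               # feasible when x0 >= 0
viol = robustness_violation([g1, g2], x=[0.5, 1.0], delta=[0.1, 0.1])
```

A candidate with viol = 0 is feasibility-robust over the sampled uncertainty set, matching the R = 0 entries reported for the STA-based methods.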
Table 5. Performance comparison of Example 3.

| | IK-GA-RO [41] | GA-RO | SQP-RO | STA-RO | I-STA-RO | H-STA-RO |
|---|---|---|---|---|---|---|
| x₁ | −1.447 | −1.4409 | −1.4404 | −1.4394 | −1.4399 | −1.4405 |
| x₂ | 0.267 | 0.3368 | 0.3370 | 0.3372 | 0.3371 | 0.3369 |
| f | −1.567 | −1.772776 | −1.772834 | −1.772766 | −1.772770 | −1.772771 |
| g₁ | −6.166 | −6.0885 | −6.0876 | −6.0861 | −6.0869 | −6.0878 |
| g₂ | −1.360 | −1.2670 | −1.2670 | −1.2671 | −1.2671 | −1.2670 |
| R | 0.1481 | 9.25 × 10⁻⁶ | 9.58 × 10⁻⁵ | 0 | 0 | 0 |
| p_r | − | 60% | 65% | 100% | 100% | 100% |
| FE | 18,558 ± − | 97,211 ± 1.46 × 10³ | 46.65 ± 19.0740 | 38,685 ± 7.44 × 10² | 37,823 ± 1.05 × 10³ | 18,896 ± 1.26 × 10³ |
| T | − | 0.220 | 0.025 | 0.348 | 0.339 | 0.084 |

Note: R is the reference value of the robustness violation obtained from the Monte Carlo simulation, and − denotes data not available.
Table 6. Performance comparison of Example 4.

| | BD-RO [27] | GA-RO | SQP-RO | STA-RO | I-STA-RO | H-STA-RO |
|---|---|---|---|---|---|---|
| x₁ | 0.45 | 0.4500 | 0.4275 | 0.4499 | 0.4550 | 0.4500 |
| x₂ | 0.45 | 0.4500 | 0.4725 | 0.4496 | 0.4449 | 0.4500 |
| x₃ | 0.4 | 0.4005 | 0.5181 | 0.3905 | 0.4000 | 0.4000 |
| x₄ | 0.4 | 0.4004 | 0.3531 | 0.4093 | 0.3999 | 0.4000 |
| f | 9.8850 | 9.8846 | 9.8631 | 9.8853 | 9.8852 | 9.8850 |
| g₁ | −0.1000 | −0.1000 | −0.1000 | −0.1005 | −0.1002 | −0.1000 |
| g₂ | −0.1000 | −0.1990 | −0.1288 | −0.2002 | −0.2001 | −0.2000 |
| R | 0 | 9.9988 × 10⁻⁴ | 0.0712 | 0 | 0 | 0 |
| p_r | − | 50% | 40% | 100% | 100% | 100% |
| FE | 21 ± − | 76,541 ± 17.07 | 178 ± 67.97 | 55,573 ± 1.94 × 10³ | 54,658 ± 1.05 × 10³ | 23,644 ± 3.75 × 10³ |
| T | − | 0.153 | 0.0252 | 0.183 | 0.177 | 0.071 |

Note: R is the reference value of the robustness violation obtained from the Monte Carlo simulation, and − denotes data not available.
Table 7. Performance comparison of Example 5.

| | DE-SQP-RO | GA-RO | SQP-RO | STA-RO | I-STA-RO | H-STA-RO |
|---|---|---|---|---|---|---|
| x₁ | 0.2057 | 0.2064 | 0.2157 | 0.1849 | 0.1986 | 0.2050 |
| x₂ | 7.0924 | 3.2452 | 3.0836 | 3.6657 | 3.3634 | 3.2686 |
| x₃ | 9.0866 | 9.0751 | 9.0867 | 9.1159 | 9.1623 | 9.0774 |
| x₄ | 0.2157 | 0.2163 | 0.2157 | 0.2157 | 0.2154 | 0.2162 |
| f | 2.3208 | 1.7809 | 1.7696 | 1.8096 | 1.7948 | 1.7818 |
| g₁ | −7.05 × 10³ | −51.0749 | −51.2823 | −69.9634 | −50.2844 | −51.1169 |
| g₂ | −1.70 × 10³ | −1.70 × 10³ | −1.70 × 10³ | −1.88 × 10³ | −2.12 × 10³ | −1.70 × 10³ |
| g₃ | −0.0100 | −0.0099 | 4.35 × 10⁻⁹ | −0.0309 | −0.0167 | −0.0111 |
| g₄ | −3.0067 | −3.3673 | −3.3840 | −3.3252 | −3.3476 | −3.3655 |
| g₅ | −0.0807 | −0.0814 | −0.0907 | −0.0599 | −0.0736 | −0.0800 |
| g₆ | −0.2364 | −0.2364 | −0.2364 | −0.2366 | −0.2367 | −0.2364 |
| g₇ | −940.3744 | −988.3397 | −943.0863 | −955.7257 | −944.6512 | −979.3933 |
| R | 0 | 9.99 × 10⁻⁴ | 0.0712 | 0 | 0 | 0 |
| p_r | − | 50% | 55% | 100% | 100% | 100% |
| FE | 225,700 ± − | 139,081 ± 1.20 × 10⁴ | 202.5 ± 71.0534 | 91,493 ± 1.02 × 10⁴ | 77,743 ± 1.11 × 10³ | 48,522 ± 7.26 × 10² |
| T | − | 1.371 | 0.085 | 0.721 | 0.721 | 0.227 |

Note: R is the reference value of the robustness violation obtained from the Monte Carlo simulation, and − denotes data not available.
Table 8. Performance comparison of Example 6.

| | IK-GA-RO | GA-RO | SQP-RO | STA-RO | I-STA-RO | H-STA-RO |
|---|---|---|---|---|---|---|
| x₁ | 0.845 | 0.7897 | 0.7804 | 0.7973 | 0.7948 | 0.78831 |
| x₂ | 0.412 | 0.3854 | 0.3847 | 0.3892 | 0.3879 | 0.38472 |
| x₃ | 43.008 | 40.3998 | 40.3268 | 40.7876 | 40.6570 | 40.32681 |
| x₄ | 165.758 | 198.9669 | 199.9500 | 193.7079 | 195.5939 | 199.9500 |
| f | 6.09 × 10³ | 5.96 × 10³ | 5.90 × 10³ | 5.97 × 10³ | 5.97 × 10³ | 5.95 × 10³ |
| g₁ | −0.015 | −0.0100 | −0.0021 | −0.0101 | −0.0102 | −0.0100 |
| g₂ | −0.02 | 5.91 × 10⁻⁸ | 3.00 × 10⁻⁶ | −9.66 × 10⁻⁵ | −1.98 × 10⁻⁶ | 0 |
| g₃ | −463.936 | −256.4015 | −255.7847 | −631.8675 | −1.23 × 10³ | −964.9065 |
| g₄ | −74.242 | −41.0631 | −40.0050 | −46.2921 | −44.4061 | −40.0500 |
| R | 0 | 1.61 × 10⁻⁵ | 0.0079 | 0 | 0 | 0 |
| p_r | − | 60% | 15% | 100% | 100% | 100% |
| FE | 14,139 ± − | 127,390 ± 1.16 × 10⁶ | 129.7500 ± 58.92 | 57,564 ± 5.85 × 10³ | 54,275 ± 6.57 × 10³ | 34,575 ± 5.88 × 10³ |
| T | − | 0.221 | 0.023 | 0.156 | 0.159 | 0.073 |

Note: R is the reference value of the robustness violation obtained from the Monte Carlo simulation, and − denotes data not available.
Table 9. Performance comparison of Example 7.

| | GA-RO | SQP-RO | STA-RO | I-STA-RO | H-STA-RO |
|---|---|---|---|---|---|
| x₁ | [255, 324, 600] | [406, 265, 593] | [100, 598, 50] | [189, 404, 650] | [159, 489, 646] |
| x₂ | [60, 60, 60] | [45, 45, 60] | [60, 60, 60] | [60, 59, 60] | [59, 58, 60] |
| x₃ | [184, 184, 200] | [200, 160, 200] | [185, 200, 200] | [160, 200, 179] | [188, 161, 200] |
| f | 1.53 × 10⁶ | 1.89 × 10⁶ | 1.48 × 10⁶ | 1.47 × 10⁶ | 1.47 × 10⁶ |
| g₁ | 945.04 | 951.10 | 947.64 | 956.00 | 954.15 |
| R | 0 | 1.20 × 10⁻⁴ | 0 | 0 | 0 |
| p_r | 90% | 0% | 100% | 100% | 100% |
| FE | 110,082 ± 2.84 × 10⁴ | 3392 ± 830.46 | 45,529 ± 5.46 × 10³ | 45,334 ± 4.25 × 10³ | 31,470 ± 5.13 × 10³ |
| T | 0.434 | 0.086 | 0.487 | 0.498 | 0.394 |

Note: R is the reference value of the robustness violation obtained from the Monte Carlo simulation.


Zhang, H.; Han, J.; Zhou, X.; Zheng, Y. Robust Optimization with Interval Uncertainties Using Hybrid State Transition Algorithm. Electronics 2023, 12, 3035. https://doi.org/10.3390/electronics12143035