Article

A Hybrid Multi-Objective Optimization Method and Its Application to Electromagnetic Device Designs

1
College of Information Science and Technology, Donghua University, Shanghai 201620, China
2
College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 12110; https://doi.org/10.3390/app122312110
Submission received: 5 September 2022 / Revised: 17 November 2022 / Accepted: 22 November 2022 / Published: 26 November 2022
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract
Optimization algorithms play a critical role in electromagnetic device design due to ever-increasing technological and economic competition. Although evolutionary algorithm-based methods have been successfully applied to various design problems, they exhibit deficiencies when solving complex problems with multimodal and discontinuous objective functions, which are common in electromagnetic device optimization. In this paper, a hybrid multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) and multi-objective particle swarm optimization (MOPSO) is proposed. To enhance the convergence and diversity of the algorithm, a new population update mechanism derived from MOPSO is introduced. Moreover, adaptive crossover and mutation operators are presented to achieve a better balance between global and local searches. The performance of the hybrid algorithm is validated using standard test functions and the multi-objective design of a superconducting magnetic energy storage (SMES) device. Numerical results demonstrate the effectiveness and superiority of the proposed method.

1. Introduction

The optimal design of electromagnetic devices requires compromising among multiple, often conflicting objectives, such as cost and specific performance parameters, under certain constraints. Mathematically, the optimal design of electromagnetic devices can be formulated as a multi-objective optimization (MOO) problem. Solution methods for MOO problems can be roughly classified into two kinds: non-heuristic and heuristic optimization methods. Typical non-heuristic methods include goal programming, the weighted sum approach, goal attainment, the ε-constraint method [1], inverse scattering methods [2,3,4], and topology optimization-based methods [5,6]. These methods have been successfully applied to the optimization design of antennas, waveguides, and other electromagnetic devices. In most of these non-heuristic methods, the original multiple objectives (or constraints) are combined into a new objective function and then solved, which inevitably changes the original optimization problem. By contrast, heuristic MOO methods can optimize multiple objectives simultaneously. These methods are often developed by following the procedures of multi-objective evolutionary algorithms (MOEAs), generally inspired by natural laws or rules. Hitherto, MOEAs have already been applied to solve the MOO problems of electromagnetic devices. For example, Zhang et al. used NSGA-II to optimize and design a large-scale, magnetically suspended, turbo-molecular pump [7]; Heydarianasl and Rahmat applied MOPSO to optimize electrostatic sensor electrodes [8]; and Niu et al. combined the linear layer analysis method with NSGA-II to optimize the parameters of a permanent magnet eddy current retarder [9].
Although these optimization algorithms have been successfully applied to multifarious practical MOO problems, deficiencies remain in dealing with complex optimization problems, especially objective functions with multimodal and discontinuous landscapes. Specifically, NSGA-II is a reputed MOO algorithm with good global search ability and versatility; however, its convergence performance needs to be improved, and its population distribution is not uniform enough [10]. Although MOPSO has a fast convergence speed and good robustness [11], it is difficult to guarantee that the Pareto frontier (PF) it finds converges to the true PF [12]. In this regard, a new hybrid algorithm (INSGAP) combining an improved NSGA-II and MOPSO is proposed. A population update mechanism from MOPSO is introduced to improve the diversity and convergence of the NSGA-II algorithm while retaining its original advantages. Moreover, adaptive evolutionary operators are designed to achieve a better balance between the global and local search processes.
The remainder of this paper is organized as follows. Section 2 briefly introduces the background of multi-objective optimization, and the proposed hybrid algorithm is elaborated upon in Section 3. Performance validation and multi-objective design of a superconducting magnetic energy storage (SMES) device are presented in Section 4. In Section 5, conclusions are drawn and future work is outlined.

2. Multi-Objective Optimization

2.1. Problem Definition

To generalize, a multi-objective optimization problem can be formulated as
$$\min \; F(x) = \left( f_1(x), f_2(x), \ldots, f_m(x) \right)^T \quad \text{s.t.} \;\; x \in \Omega \subseteq \mathbb{R}^n \tag{1}$$

where $m$ is the number of objective functions, $n$ is the number of decision variables, $\Omega$ is the decision space formed by the decision variable $x$, and $f_i(x)$ $(i \in \{1, 2, \ldots, m\})$ is the $i$-th objective function. $F(x)$ maps the decision space $\Omega$ into the $m$-dimensional objective space.
To facilitate the subsequent description, the relevant terminology of multi-objective optimization is defined as follows:
Definition 1.
Pareto domination. For any two decision variables $x, y$ in the decision space, with corresponding objective vectors $F(x), F(y)$ in the objective space, $x$ is said to Pareto-dominate $y$, written $x \prec y$, if

$$f_i(x) \le f_i(y), \; \forall i \in \{1, 2, \ldots, m\} \quad \text{and} \quad \exists j \in \{1, 2, \ldots, m\}: f_j(x) < f_j(y) \tag{2}$$
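As an illustration of Definitions 1 and 2, the dominance test and the resulting non-dominated filter can be sketched in Python (the function names `dominates` and `pareto_set` are ours; objective vectors are plain tuples):

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy: fx is no worse
    in every objective and strictly better in at least one (Definition 1)."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def pareto_set(points):
    """Keep the points that no other point in the list dominates (Definition 2)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, in a two-objective minimization, `pareto_set([(1, 2), (2, 3), (2, 1)])` keeps `(1, 2)` and `(2, 1)` and discards the dominated `(2, 3)`.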
Definition 2.
Pareto optimal solution. A decision variable $x \in \Omega$ is a Pareto optimal solution if no other decision variable $y \in \Omega$ Pareto-dominates $x$. The set of all Pareto optimal solutions is called the Pareto set (PS):

$$PS = \{ x \in \Omega \mid \nexists \, y \in \Omega, \; y \prec x \} \tag{3}$$
Definition 3.
Pareto frontier (PF). The PF is the image of the PS in the objective space.
In a MOO problem, the objectives constrain one another, so improving the performance of one objective often comes at the expense of the others, and no single solution can be optimal for all objectives simultaneously. Therefore, without additional information about the problem, it is difficult to judge which of the multiple Pareto optimal solutions is better, and all of them can be considered equally important. Consequently, for a MOO problem, a solution set containing as many different Pareto solutions as possible, whose corresponding PF in the objective space lies as close as possible to the true PF, is preferred.

2.2. Performance Indicators

To evaluate the performance of a MOO algorithm, appropriate indicators need to be chosen. Common performance indicators include the convergence metric, the diversity metric, the non-dominated individual ratio, and the hypervolume.

2.2.1. Convergence Metric

The convergence metric γ [13] characterizes the distance from the Pareto solutions obtained by an optimization algorithm to the true front of the test function, and is given by

$$\gamma = \frac{1}{N_p} \sum_{i=1}^{N_p} d_i \tag{4}$$

where $N_p$ is the number of solutions in the solution set found by the algorithm, and $d_i$ is the minimum Euclidean distance between the $i$-th non-dominated solution of the optimization result and the true PF.
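A minimal sketch of the γ metric in Python, assuming both the obtained solutions and a sampled true front are given as lists of objective vectors (the function name is ours):

```python
import math

def convergence_metric(solutions, true_front):
    """Gamma: mean distance from each obtained solution to the nearest
    point of the (sampled) true Pareto front."""
    return sum(min(math.dist(s, t) for t in true_front)
               for s in solutions) / len(solutions)
```

A value of 0 means every obtained solution lies exactly on the sampled front.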

2.2.2. Diversity Metric

The diversity metric Δ [13] indicates the degree of uniformity of the distribution of the non-dominated solution set, and is given by

$$\Delta = \frac{d_f + d_l + \sum_{j=1}^{N_p - 1} \left| d_j - \bar{d} \right|}{d_f + d_l + (N_p - 1)\,\bar{d}} \tag{5}$$

where $N_p$ is the number of Pareto solutions, giving $N_p - 1$ adjacent distances; $d_f$ and $d_l$ are the Euclidean distances between the endpoints of the obtained PF and those of the true PF; $d_j$ denotes the Euclidean distance between two adjacent points of the non-dominated solution set in the optimization result; and $\bar{d}$ is the average of the distances between adjacent points.
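The Δ metric can be sketched as follows, assuming the obtained front is sorted along the first objective and the two extreme points of the true PF are known (function name ours):

```python
import math

def diversity_metric(front, true_extremes):
    """Delta metric: front is the obtained non-dominated set, sorted along
    the first objective; true_extremes holds the first and last points of
    the true Pareto front."""
    front = sorted(front)
    gaps = [math.dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_bar = sum(gaps) / len(gaps)                   # mean adjacent distance
    d_f = math.dist(front[0], true_extremes[0])     # distance to first true extreme
    d_l = math.dist(front[-1], true_extremes[1])    # distance to last true extreme
    spread = sum(abs(d - d_bar) for d in gaps)
    return (d_f + d_l + spread) / (d_f + d_l + (len(front) - 1) * d_bar)
```

A perfectly uniform front whose endpoints coincide with the true extremes gives Δ = 0.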

2.2.3. Non-Dominated Individual Ratio

The non-dominated individual ratio (NDR) [14] can be used to compare solution sets obtained by different algorithms. Suppose that there exist three Pareto solution sets, A, B, and C, of a MOO problem. The NDR of each solution set is

$$\mathrm{ratio}(A) = \frac{\left| \{ x \in A \mid x \in P(S) \} \right|}{|P(S)|}, \quad \mathrm{ratio}(B) = \frac{\left| \{ x \in B \mid x \in P(S) \} \right|}{|P(S)|}, \quad \mathrm{ratio}(C) = \frac{\left| \{ x \in C \mid x \in P(S) \} \right|}{|P(S)|} \tag{6}$$

where $P(S)$ is the non-dominated set of the union of A, B, and C. The proportion of non-dominated individuals directly indicates the accuracy of the different solution sets: the larger the value, the higher the quality of the solution set.
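A sketch of the NDR computation (names ours): the combined non-dominated set P(S) is obtained by filtering the union of the input sets, and each set's ratio is its share of P(S).

```python
def ndr(*solution_sets):
    """Non-dominated individual ratio: the fraction of the combined
    non-dominated set P(S) contributed by each input solution set."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    pool = [p for s in solution_sets for p in s]
    p_s = [p for p in pool if not any(dominates(q, p) for q in pool)]
    return [sum(1 for p in s if p in p_s) / len(p_s) for s in solution_sets]
```

For example, `ndr([(1, 2)], [(2, 3)], [(2, 1)])` yields ratios 0.5, 0.0, and 0.5, since the dominated point `(2, 3)` contributes nothing to P(S).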

3. A Hybrid Multi-Objective Optimization Method

Numerical results demonstrate that the original non-dominated sorting genetic algorithm (NSGA-II) struggles to converge to the true Pareto frontier when dealing with benchmark test functions that have numerous local optima. To improve the global search ability, the population update mechanism of the multi-objective particle swarm optimization (MOPSO) method is introduced into the proposed hybrid algorithm. This also contributes to a more uniform distribution of the obtained solutions. Moreover, adaptive crossover and mutation operators are proposed to achieve a better balance between global and local searches.

3.1. NSGA-II

NSGA-II follows the basic framework of a genetic algorithm (GA), which imitates the process of natural evolution. The procedure of standard NSGA-II is shown in Algorithm 1. After initialization, N individuals are generated to form the population P; the non-dominated front of each individual is then determined and the corresponding crowding distance is calculated. Next, the algorithm applies the basic GA operators, including selection, crossover, and mutation, to generate the offspring Q. The non-dominated front and crowding distance of each individual in the mixed population L (the union of P and Q) are then calculated. The individuals with a comparatively low non-dominated front rank and, within the same rank, large crowding distance values survive to become the population of the next generation. The above steps are repeated until the termination criterion is satisfied. The main characteristics of NSGA-II are: (1) a fast non-dominated sorting strategy is proposed to reduce the algorithm complexity; (2) an elite retention mechanism is used to prevent the loss of excellent individuals; and (3) the concept of crowding distance is introduced to maintain the diversity of the population.
Algorithm 1: NSGA-II

3.2. MOPSO

MOPSO is also a bio-inspired algorithm. It imitates the foraging behavior of a school of fish or a flock of birds: when birds fly and search randomly for food, all birds in the flock can share their discoveries and thereby help the entire flock achieve the best hunt. Each bird or fish in the population is called a particle. In the implementation, each particle carries two variables, namely position and velocity. The critical part of the MOPSO algorithm is the update mechanism of the particle position and velocity, which is governed by

$$V^{k+1} = \omega V^k + c_1 r_1 \left( P_{id}^k - X^k \right) + c_2 r_2 \left( P_{gd}^k - X^k \right) \tag{7}$$

$$X^{k+1} = X^k + V^{k+1} \tag{8}$$
where $V$ is the particle velocity, $X$ is the particle position, $k$ is the iteration index, $P_{id}^k$ is the individual best particle (Pbest) position, $P_{gd}^k$ is the global best particle (Gbest) position, $c_1$ is the personal learning coefficient, $c_2$ is the global learning coefficient, and $\omega$ is the inertia weight, which represents the ability of the particle to maintain its current velocity; $r_1$ and $r_2$ are two random numbers between 0 and 1. The general structure of the MOPSO algorithm is shown in Algorithm 2, where an Archive is maintained to preserve the excellent particles found during the search. At the beginning, N particles are generated to form the population P, and the velocity of each particle is initialized; the K best individuals are selected to form the initial Archive R. Then, the velocity and position of each particle are updated according to (7) and (8) to form the new population P′. Next, the K best individuals of the mixed population are selected after non-dominated sorting and the adaptive lattice method are applied, and the Archive R is updated. The above process repeats until the termination criterion is satisfied. Here, the adaptive lattice method is used to select good individuals for the next generation, contributing to the maintenance of population diversity.
Algorithm 2: MOPSO
Input: N (particle swarm size), K (Archive size)
Output: R (Archive)
1  P ← Initialization(N);
2  V ← Initialize the velocity of each particle;
3  R ← Non-dominated-sort(P); // select the better K individuals from P
4  while termination criterion not fulfilled do
5    Pbest ← Update the individual optimal particle according to P;
6    V ← Compute the velocity of each particle by (7);
7    P′ ← Update P by (7) and (8);
8    P ← R ∪ P′;
9    R ← Select the better K individuals by Non-dominated-sort(P) and adaptive-lattice(P);
10   Gbest ← Update the global optimal particle according to R;
11 return R;
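Equations (7) and (8) amount to the single per-particle update step below (a sketch; the function name is ours, and the default ω, c1, c2 values follow Table 1):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.5, c1=1.0, c2=2.0):
    """One MOPSO particle update: velocity by Eq. (7), position by Eq. (8)."""
    r1, r2 = random.random(), random.random()  # fresh random factors each step
    v_new = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

With `pbest = gbest = x` the two attraction terms vanish, so only the inertia term remains, which makes the update deterministic and easy to check.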

3.3. Adaptive Operators

Mechanisms of generating offspring constitute a main component of evolutionary algorithms. To achieve a balance between the global search and local search ability, adaptive operators involving crossover and mutation are presented.

3.3.1. Crossover Operator

Commonly used crossover operators include two-point crossover, uniform crossover, differential evolution (DE) crossover [15], and simulated binary crossover (SBX) [16]. Two-point crossover randomly sets two crossover points at the same positions in two parent individuals and then exchanges the segments between these points; it achieves a good balance between inheritance and randomness [17]. The SBX operator, on the other hand, simulates single-point binary crossover and has excellent local optimization capability [15].
Based on this, a dynamic selection crossover operator combining two-point crossover and SBX is proposed. Initially, the two-point crossover operator is selected with a higher probability so that a larger area of the search space is explored; as the iterations proceed, the probability of selecting the SBX operator gradually increases in order to better retain good individuals. A schematic of the two-point crossover is shown in Figure 1.
For two parent individuals $x_1 = (x_{11}, \ldots, x_{1n})$ and $x_2 = (x_{21}, \ldots, x_{2n})$, the offspring $y_1 = (y_{11}, \ldots, y_{1n})$ and $y_2 = (y_{21}, \ldots, y_{2n})$ are obtained using the SBX operator by (9):

$$y_{1i} = 0.5 \left[ (1 + \beta)\, x_{1i} + (1 - \beta)\, x_{2i} \right], \quad y_{2i} = 0.5 \left[ (1 - \beta)\, x_{1i} + (1 + \beta)\, x_{2i} \right] \tag{9}$$

where $\beta$ is determined by a random number $\mathrm{rand} \in (0, 1)$:

$$\beta = \begin{cases} (2\,\mathrm{rand})^{\frac{1}{1 + \mu_c}}, & \mathrm{rand} \le 0.5 \\ \left( \dfrac{1}{2 - 2\,\mathrm{rand}} \right)^{\frac{1}{1 + \mu_c}}, & \text{otherwise} \end{cases} \tag{10}$$

where $\mu_c$ is a user-defined parameter, usually taken as 20. The larger the value of $\mu_c$, the greater the probability that the superior individual will be preserved.
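Equation (9) and the β formula above translate directly into the sketch below (function name ours; μc = 20 as in the text):

```python
import random

def sbx(x1, x2, mu_c=20):
    """Simulated binary crossover: draw the spread factor beta per variable,
    then blend the two parents into two offspring per Eq. (9)."""
    y1, y2 = [], []
    for a, b in zip(x1, x2):
        u = random.random()
        if u <= 0.5:
            beta = (2 * u) ** (1 / (1 + mu_c))
        else:
            beta = (1 / (2 - 2 * u)) ** (1 / (1 + mu_c))
        y1.append(0.5 * ((1 + beta) * a + (1 - beta) * b))
        y2.append(0.5 * ((1 - beta) * a + (1 + beta) * b))
    return y1, y2
```

Note that $y_{1i} + y_{2i} = x_{1i} + x_{2i}$ for every variable, so each offspring pair preserves the parents' mean.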

3.3.2. Mutation Operator

Common mutation operators include polynomial mutation [18], Gaussian mutation, Cauchy mutation, and Lévy mutation [19]. Polynomial mutation helps the current population jump out of local optima [20] and increases the population diversity, while Gaussian mutation offers more flexibility and is dominant for local exploitation [21].
A dynamic mutation operator based on the polynomial and Gaussian mutations is proposed. In the early search phase, the operator selects the polynomial mutation with a higher probability to help the population jump out of local optima. The polynomial mutation operator generates an offspring individual $y_1 = (y_{11}, \ldots, y_{1n})$ from a parent individual $x_1 = (x_{11}, \ldots, x_{1n})$ as

$$y_{1i} = x_{1i} + \alpha_i \tag{11}$$

where $\alpha_i$ is determined by a random number $\mathrm{rand} \in [0, 1)$ and $\mu_m$ is a predefined constant, taken as 20:

$$\alpha_i = \begin{cases} (2\,\mathrm{rand})^{\frac{1}{1 + \mu_m}} - 1, & \mathrm{rand} < 0.5 \\ 1 - (2 - 2\,\mathrm{rand})^{\frac{1}{1 + \mu_m}}, & \text{otherwise} \end{cases} \tag{12}$$
As the evolution progresses, the probability of selecting the Gaussian mutation operator, which has superior local search ability, gradually increases, as shown by (14). The Gaussian mutation operator generates a child individual $y_1 = (y_{11}, \ldots, y_{1n})$ from a parent individual $x_1 = (x_{11}, \ldots, x_{1n})$ as

$$y_{1i} = x_{1i} + N(0, 1) \tag{13}$$

where $N(0, 1)$ denotes a normally distributed random number with mean 0 and variance 1.
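Both mutation formulas above can be sketched as follows (function names ours; μm = 20; clipping of mutated variables to their bounds is omitted for brevity):

```python
import random

def polynomial_mutation(x, mu_m=20):
    """Perturb each variable by the polynomial-mutation offset alpha;
    alpha lies in [-1, 0) for rand < 0.5 and in [0, 1) otherwise."""
    y = []
    for xi in x:
        u = random.random()
        if u < 0.5:
            alpha = (2 * u) ** (1 / (1 + mu_m)) - 1
        else:
            alpha = 1 - (2 - 2 * u) ** (1 / (1 + mu_m))
        y.append(xi + alpha)
    return y

def gaussian_mutation(x):
    """Add N(0, 1) noise to each variable."""
    return [xi + random.gauss(0.0, 1.0) for xi in x]
```

The bounded offset of the polynomial mutation gives controlled jumps, while the unbounded (but concentrated) Gaussian noise favors fine-grained local moves.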

3.3.3. Selection of the Operators

Based on the characteristics of the two crossover operators (SBX and two-point crossover) and the two mutation operators (Gaussian and polynomial mutation), it can be concluded that SBX possesses better local search ability than two-point crossover, and that Gaussian mutation is more suitable for local exploitation than polynomial mutation. Accordingly, SBX crossover and Gaussian mutation are chosen as one operator set, endowing the proposed hybrid algorithm with better local search ability, while two-point crossover and polynomial mutation are chosen as another set to further enhance the global search ability.
The probability of selecting the corresponding set of operators is determined by

$$P_S = \frac{\tau}{\tau_{max}}, \quad P_T = 1 - \frac{\tau}{\tau_{max}} \tag{14}$$

where $\tau$ is the current iteration number, $\tau_{max}$ is the maximum number of iterations, $P_S$ is the probability of selecting the SBX crossover and Gaussian mutation operators, and $P_T$ is the probability of selecting the two-point crossover and polynomial mutation operators.
In the initial stage of the proposed hybrid algorithm, two-point crossover and polynomial mutation are executed to obtain a larger search space, which helps the population jump out of local optima. As the iterations proceed, the SBX crossover and Gaussian mutation operators are chosen with increasing probability, local exploitation is strengthened, and excellent individuals are more likely to be preserved, which facilitates the convergence of the optimization procedure.
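The schedule in (14) reduces to a single biased coin flip per generation, sketched below (function and label names ours):

```python
import random

def pick_operator_set(tau, tau_max):
    """Choose the operator pair for iteration tau per Eq. (14): the
    (SBX, Gaussian) local-search set with probability P_S = tau / tau_max,
    otherwise the (two-point, polynomial) global-search set."""
    p_s = tau / tau_max
    if random.random() < p_s:
        return "SBX + Gaussian"
    return "two-point + polynomial"
```

At τ = 0 the global-search set is always chosen, and at τ = τmax the local-search set is, so the search gradually shifts from exploration to exploitation.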

3.4. Framework of the Hybrid Algorithm

The framework of the hybrid algorithm is illustrated in Algorithm 3. Relevant parameters are set, population P is initialized, and the Archive R of MOPSO is updated. In the main loop, the offspring population Q is generated by using the mechanism of NSGA-II, where the adaptive operators involving crossover and mutation play a part, and Archive R is updated by using the mechanism of MOPSO, successively. After that, a greedy strategy is adopted in order to select the better N individuals out of P, Q, and R, which possess relatively low rank levels of the Pareto frontier as well as large crowding-distance values. These N individuals are then chosen as the population for the next generation. These procedures are repeated until the stop criterion is satisfied.
Algorithm 3: INSGAP

4. Numerical Examples

To evaluate the performance of the proposed algorithm, two renowned multi-objective optimization algorithms, namely NSGA-II and MOPSO, together with their simple hybrid NSGA-MOPSO (without the proposed adaptive operators), are chosen as counterparts. The SBX crossover and polynomial mutation are adopted in NSGA-II and NSGA-MOPSO. The relevant algorithm parameters are shown in Table 1. $p_c$ is the crossover probability and $p_m$ is the mutation probability, which represent the probabilities that the crossover and mutation operations occur, respectively. $p_{ng}$ is the mutation probability of the NSGA-II part of the hybrid algorithm and $p_{mp}$ is the mutation probability of the MOPSO part, each taking the same value as in the corresponding original algorithm. $i_r$ is the inflation rate, a predefined parameter in the adaptive lattice method of MOPSO.

4.1. Performance Validation

Standard test functions, including KUR, ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 [13], were used to validate the performance of the proposed algorithm. The details of these test functions are shown in Table 2. Each algorithm was run 30 times independently for different test functions in order to obtain its statistical properties.
The comparison results of PFs obtained by the aforementioned four algorithms for each test function are presented in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7, respectively. It can be observed that the Pareto solutions obtained by the proposed hybrid algorithm are much closer to the true PFs as compared to those of its three counterparts, especially for ZDT4, which has a large number of local optima, and ZDT6, which has a non-uniformly distributed PF.
To quantitatively evaluate the performance of each algorithm, the convergence metric γ and diversity metric Δ are calculated and given in Table 3 and Table 4, where E(∙) denotes the mean value and σ(∙) the variance. Smaller values of both the mean and the variance indicate better performance.
From Table 3, the convergence performance of the proposed hybrid algorithm is better than that of the other three algorithms for most of the test functions, especially for ZDT4 and ZDT6, which have a large number of local optima. From Table 4, the proposed hybrid algorithm generally outperforms NSGA-II, MOPSO, and NSGA-MOPSO in terms of diversity for most of the test functions; only the mean of the diversity metric for KUR fails to match that of NSGA-II. Meanwhile, it can be observed that the simple hybrid algorithm NSGA-MOPSO obtains better results than NSGA-II and MOPSO when handling the ZDT4 problem. However, it shows no improvement on the other test functions, and on some problems its results are slightly worse than those of NSGA-II and MOPSO individually. In comparison, the proposed algorithm achieves better statistical results in both the convergence and diversity metrics. This is owing to the proposed adaptive operators: in the initial stage, two-point crossover and polynomial mutation dominate, improving the global search ability of the algorithm and helping it find non-dominated individuals; as the evolution proceeds, SBX and Gaussian mutation improve the local search ability, prompting the algorithm to obtain more uniformly distributed PFs. This design helps to achieve a balance between global and local search: excellent individuals are preserved and the population diversity is improved at the same time. Thus, it can be concluded that the convergence and diversity performance of the proposed algorithm are notably improved.

4.2. Multi-Objective Design of a SMES Device

A superconducting magnetic energy storage (SMES) device was optimized to obtain the required stored energy with minimal stray field. The configuration of the device [22] is shown in Figure 8: two concentric coils, carrying currents in opposite directions and operating under superconducting conditions, store large amounts of energy in their magnetic fields while keeping the stray field within certain limits.
A three-parameter MOO problem is defined as
$$\min \begin{cases} f_1(r_2, h_2/2, d_2) = \dfrac{B_{stray}^2}{B_{norm}^2} \\[4pt] f_2(r_2, h_2/2, d_2) = \dfrac{\left| E - E_{ref} \right|}{E_{ref}} \end{cases} \quad \text{s.t.} \;\; |B_{max}| \le 4.92 \, \mathrm{T} \tag{15}$$

where $E_{ref} = 180$ MJ, $B_{norm} = 3 \, \mu\mathrm{T}$, and $B_{stray}^2$ and $E$ are the actual stray field energy and stored energy, respectively; $r_2$, $h_2$, and $d_2$ are the dimension parameters of the outer coil. $B_{stray}^2$ is calculated by

$$B_{stray}^2 = \frac{1}{22} \sum_{i=1}^{22} \left| B_{stray,i} \right|^2 \tag{16}$$
The proposed hybrid algorithm is applied to this case study and compared with its two counterparts, NSGA-II and MOPSO. The Pareto solutions obtained by the three algorithms with the same number of iterations are compared in Figure 9. The NDR metrics for the hybrid algorithm, NSGA-II, and MOPSO are 57.72%, 13.79%, and 34.48%, respectively, which indicates that the proposed hybrid algorithm obtains better Pareto solutions. The trajectory of the Pareto solutions obtained by the proposed hybrid algorithm is shown in Figure 10 and Figure 11. The pentagram in Figure 10 denotes the solution with minimal f1 + f2. It can be observed that the objective values of f1 and f2 become smaller as the number of iterations increases, which demonstrates that the Pareto solution sets approach the true PF.
These numerical results of the multi-objective optimization design of a SMES device further validate the effectiveness and superiority of the proposed hybrid MOO method.

5. Conclusions

A hybrid MOO algorithm combining NSGA-II and MOPSO was proposed in this study. The performance of the algorithm was validated using standard test functions and the multi-objective design of a SMES device. Numerical results demonstrated that the convergence performance of the proposed hybrid algorithm is superior to that of its counterparts, and that the distribution of the obtained Pareto solutions is more uniform. Future work will focus on multi-objective optimization problems with a large number of decision variables and objective functions with multimodal landscapes.

Author Contributions

Conceptualization, Y.L. and S.Y.; methodology, Y.L. and Z.X.; software, Z.X. and Y.L.; validation, Y.L., Z.X. and S.Y.; formal analysis, Y.L. and Z.X.; investigation, Z.X.; resources, Z.X.; data curation, Z.X. and Y.L.; writing—original draft preparation, Z.X. and Y.L.; writing—review and editing, S.Y.; visualization, Z.X. and Y.L.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shanghai Sailing Program, grant number 21YF1400300, and the Fundamental Research Funds for the Central Universities, grant number 2232020D-52.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Coello, C.A. An updated survey of GA-based multiobjective optimization techniques. ACM Comput. Surv. 2000, 32, 109–143.
2. Palmeri, R.; Bevacqua, M.; Morabito, A.; Isernia, T. Design of artificial-material-based antennas using inverse scattering techniques. IEEE Trans. Antennas Propag. 2018, 66, 7076–7090.
3. Palmeri, R.; Isernia, T. Inverse design of artificial materials based lens antennas through the scattering matrix method. Electronics 2020, 9, 559.
4. Palmeri, T.I.R. Inverse design of EBG waveguides through scattering matrices. EPJ Appl. Metamaterials 2020, 7, 10.
5. Jensen, J.S.; Sigmund, O. Systematic design of photonic crystal structures using topology optimization: Low-loss waveguide bends. Appl. Phys. Lett. 2004, 84, 2022–2024.
6. Callewaert, F.; Velev, V.; Kumar, P.; Sahakian, A.V.; Aydin, K. Inverse-designed broadband all-dielectric electromagnetic metadevices. Sci. Rep. 2018, 8, 1358.
7. Zhang, Y.; Tang, J.; Xu, X.; Huang, Z. Optimal design of magnetically suspended high-speed rotor in turbo-molecular pump. Vacuum 2021, 193, 110510.
8. Heydarianasl, M.; Rahmat, M.F. Design optimization of electrostatic sensor electrodes via MOPSO. Measurement 2020, 152, 107288.
9. Niu, B.; Wang, D.; Pan, P. Multi-objective optimal design of permanent magnet eddy current retarder based on NSGA-II algorithm. Energy Rep. 2022, 8, 1448–1456.
10. Wang, X.; Chen, G.; Xu, S. Bi-objective green supply chain network design under disruption risk through an extended NSGA-II algorithm. Clean. Logist. Supply Chain 2022, 3, 100025.
11. Tian, Z.; Zhang, Z.; Zhang, K.; Tang, X.; Huang, S. Statistical modeling and multi-objective optimization of road geopolymer grouting material via RSM and MOPSO. Constr. Build. Mater. 2021, 271, 121534.
12. Zhao, W.; Luan, Z.; Wang, C. Parameter optimization design of vehicle E-HHPS system based on an improved MOPSO algorithm. Adv. Eng. Softw. 2018, 123, 51–61.
13. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
14. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195.
15. Zhu, Q.; Lin, Q.; Du, Z.; Liang, Z.; Wang, W.; Zhu, Z.; Chen, J.; Huang, P.; Ming, Z. A novel adaptive hybrid crossover operator for multiobjective evolutionary algorithm. Inf. Sci. 2016, 345, 177–198.
16. Zhang, X.; Wang, D.; Fu, Z.; Liu, S.; Mao, W.; Liu, G.; Jiang, Y.; Li, S. Novel biogeography-based optimization algorithm with hybrid migration and global-best Gaussian mutation. Appl. Math. Model. 2020, 86, 74–91.
17. Raj, R.; Mathew, J.; Kannath, S.K.; Rajan, J. Crossover based technique for data augmentation. Comput. Methods Programs Biomed. 2022, 218, 106716.
18. Deb, K.; Goyal, M. A combined genetic adaptive search (GeneAS) for engineering design. Comput. Sci. Inform. 1996, 26, 30–45.
19. Gong, W.; Cai, Z.; Ling, C.X.; Li, H. A real-coded biogeography-based optimization with mutation. Appl. Math. Comput. 2010, 216, 2749–2758.
20. Stacey, A.; Jancic, M.; Grundy, I. Particle swarm optimization with mutation. In Proceedings of the 2003 Congress on Evolutionary Computation, Canberra, Australia, 8–12 December 2003; Volume 2, pp. 1425–1430.
21. Singh, P.; Dwivedi, P.; Kant, V. A hybrid method based on neural network and improved environmental adaptation method using Controlled Gaussian Mutation with real parameter for short-term load forecasting. Energy 2019, 174, 460–477.
22. Alotto, P.; Baumgartner, U.; Freschi, F.; Jaindl, M.; Kostinger, A.; Magele, C.; Renhart, W.; Repetto, M. SMES optimization benchmark extended: Introducing Pareto optimal solutions into TEAM22. IEEE Trans. Magn. 2008, 44, 1066–1069.
Figure 1. Two-point crossover diagram.
Figure 1. Two-point crossover diagram.
Applsci 12 12110 g001
Figure 2. Comparison results for KUR.
Figure 2. Comparison results for KUR.
Applsci 12 12110 g002
Figure 3. Comparison results for ZDT1.
Figure 3. Comparison results for ZDT1.
Applsci 12 12110 g003
Figure 4. Comparison results for ZDT2.
Figure 4. Comparison results for ZDT2.
Applsci 12 12110 g004
Figure 5. Comparison results for ZDT3.
Figure 6. Comparison results for ZDT4.
Figure 7. Comparison results for ZDT6.
Figure 8. Configuration of a SMES device.
Figure 9. Comparison results of three algorithms.
Figure 10. 3D view of the Pareto solution trajectory of the hybrid algorithm.
Figure 11. 2D view of the Pareto solution trajectory of the hybrid algorithm.
Table 1. Parameter settings.

| Parameter | Hybrid | NSGA-MOPSO | MOPSO | NSGA-II |
|---|---|---|---|---|
| $\tau_{max}$ | 125 | 125 | 250 | 250 |
| popsize | 100 | 100 | 100 | 100 |
| $p_c$ | 0.9 | 0.9 | – | 0.9 |
| $p_m$ | $p_{mp}/n_g$ | $p_{mp}/n_g$ | $p_{mp}\left(1 - (\tau - 1)/\tau_{max}\right)^{10}$ | 0.1 |
| $\omega$ | 0.5 | 0.5 | 0.5 | – |
| $i_r$ | 0.1 | 0.1 | 0.1 | – |
| $\mu_u$ | 20 | 20 | – | 20 |
| $c_1$ | 1 | 1 | 1 | – |
| $c_2$ | 2 | 2 | 2 | – |
Table 2. Test functions.

| Problem | n | Variable Bounds | Objective Functions | Optimal Solutions | PF Characteristics |
|---|---|---|---|---|---|
| KUR | 3 | $x_i \in [-5, 5]$ | $f_1(\mathbf{x}) = \sum_{i=1}^{n-1} \big[ -10 \exp\big( -0.2 \sqrt{x_i^2 + x_{i+1}^2} \big) \big]$; $f_2(\mathbf{x}) = \sum_{i=1}^{n} \big[ \lvert x_i \rvert^{0.8} + 5 \sin x_i^3 \big]$ | Ref. [12] | nonconvex |
| ZDT1 | 30 | $x_i \in [0, 1]$ | $f_1(\mathbf{x}) = x_1$; $f_2(\mathbf{x}) = g(\mathbf{x}) \big[ 1 - \sqrt{x_1 / g(\mathbf{x})} \big]$; $g(\mathbf{x}) = 1 + 9 \big( \sum_{i=2}^{n} x_i \big) / (n-1)$ | $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \dots, n$ | convex |
| ZDT2 | 30 | $x_i \in [0, 1]$ | $f_1(\mathbf{x}) = x_1$; $f_2(\mathbf{x}) = g(\mathbf{x}) \big[ 1 - \big( x_1 / g(\mathbf{x}) \big)^2 \big]$; $g(\mathbf{x}) = 1 + 9 \big( \sum_{i=2}^{n} x_i \big) / (n-1)$ | $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \dots, n$ | nonconvex |
| ZDT3 | 30 | $x_i \in [0, 1]$ | $f_1(\mathbf{x}) = x_1$; $f_2(\mathbf{x}) = g(\mathbf{x}) \big[ 1 - \sqrt{x_1 / g(\mathbf{x})} - \big( x_1 / g(\mathbf{x}) \big) \sin(10 \pi x_1) \big]$; $g(\mathbf{x}) = 1 + 9 \big( \sum_{i=2}^{n} x_i \big) / (n-1)$ | $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \dots, n$ | nonconvex, disconnected |
| ZDT4 | 10 | $x_1 \in [0, 1]$; $x_i \in [-5, 5]$, $i = 2, \dots, n$ | $f_1(\mathbf{x}) = x_1$; $f_2(\mathbf{x}) = g(\mathbf{x}) \big[ 1 - \sqrt{x_1 / g(\mathbf{x})} \big]$; $g(\mathbf{x}) = 1 + 10(n-1) + \sum_{i=2}^{n} \big[ x_i^2 - 10 \cos(4 \pi x_i) \big]$ | $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \dots, n$ | nonconvex |
| ZDT6 | 10 | $x_i \in [0, 1]$ | $f_1(\mathbf{x}) = 1 - \exp(-4 x_1) \sin^6(6 \pi x_1)$; $f_2(\mathbf{x}) = g(\mathbf{x}) \big[ 1 - \big( f_1(\mathbf{x}) / g(\mathbf{x}) \big)^2 \big]$; $g(\mathbf{x}) = 1 + 9 \big[ \big( \sum_{i=2}^{n} x_i \big) / (n-1) \big]^{0.25}$ | $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \dots, n$ | nonconvex, nonuniformly distributed |
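As a concrete reference for the ZDT family, a minimal sketch of ZDT1 following the standard definition (the function name and plain-list encoding are assumptions for illustration, not the benchmark code used in the paper):

```python
import math

def zdt1(x):
    """ZDT1 with n = len(x) decision variables, each in [0, 1].

    On the Pareto-optimal front (x_2 = ... = x_n = 0) we have g(x) = 1,
    so f2 = 1 - sqrt(f1): the convex front noted in Table 2.
    """
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

For example, `zdt1([0.25] + [0.0] * 29)` lies on the front with f1 = 0.25 and f2 = 0.5, while any point with nonzero tail variables has g > 1 and is dominated.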
Table 3. Statistical results of convergence metric.

| f(·) | Hybrid E(γ) | Hybrid σ(γ) | NSGA-MOPSO E(γ) | NSGA-MOPSO σ(γ) | MOPSO E(γ) | MOPSO σ(γ) | NSGA-II E(γ) | NSGA-II σ(γ) |
|---|---|---|---|---|---|---|---|---|
| KUR | 0.0137 | 0.0111 | 0.01523 | 0.01248 | 0.0379 | 0.0349 | 0.0143 | 0.0115 |
| ZDT1 | 0.0231 | 0.0114 | 0.05392 | 0.01327 | 0.0213 | 0.0092 | 0.3370 | 0.0940 |
| ZDT2 | 0.2370 | 0.0045 | 0.8526 | 0.06125 | 0.2691 | 0.1213 | 0.6127 | 0.0879 |
| ZDT3 | 0.0171 | 0.0308 | 0.06385 | 0.05269 | 0.0393 | 0.0296 | 0.0557 | 0.0131 |
| ZDT4 | 0.1103 | 0.0092 | 1.4603 | 0.3753 | 2.7106 | 0.4014 | 2.1833 | 0.376 |
| ZDT6 | 0.0979 | 0.0039 | 1.4917 | 0.9526 | 0.7835 | 0.2724 | 0.9325 | 0.9128 |
Table 4. Statistical results of diversity metric.

| f(·) | Hybrid E(∆) | Hybrid σ(∆) | NSGA-MOPSO E(∆) | NSGA-MOPSO σ(∆) | MOPSO E(∆) | MOPSO σ(∆) | NSGA-II E(∆) | NSGA-II σ(∆) |
|---|---|---|---|---|---|---|---|---|
| KUR | 0.5103 | 0.0216 | 0.5019 | 0.02983 | 0.7230 | 0.0249 | 0.4996 | 0.0326 |
| ZDT1 | 0.4831 | 0.0436 | 0.5173 | 0.107 | 0.9513 | 0.1455 | 0.6499 | 0.0557 |
| ZDT2 | 0.5073 | 0.0427 | 1.02 | 0.1157 | 0.9181 | 0.1130 | 0.8497 | 0.1626 |
| ZDT3 | 0.6182 | 0.0257 | 0.6623 | 0.03448 | 0.8655 | 0.0350 | 0.7647 | 0.0407 |
| ZDT4 | 0.4512 | 0.0488 | 0.8297 | 0.07222 | 1.2481 | 0.3920 | 0.7906 | 0.0416 |
| ZDT6 | 0.4590 | 0.1278 | 1.1983 | 0.1032 | 0.9143 | 0.1424 | 1.3307 | 0.1083 |
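The convergence metric γ (Table 3) and diversity metric ∆ (Table 4) follow the usual Deb-style definitions for bi-objective fronts. A minimal sketch of both, assuming a dense sampling of the true Pareto front is available; the function and argument names are illustrative, not the paper's code:

```python
import math

def convergence_metric(front, true_front):
    """gamma: mean Euclidean distance from each obtained solution to its
    nearest neighbour on a dense sampling of the true Pareto front."""
    return sum(min(math.dist(p, q) for q in true_front) for p in front) / len(front)

def diversity_metric(front, extreme_lo, extreme_hi):
    """delta = (d_f + d_l + sum|d_i - d_mean|) / (d_f + d_l + (N-1) * d_mean),
    where d_i are gaps between consecutive obtained solutions sorted along the
    first objective, and d_f, d_l are the distances from the boundary obtained
    solutions to the true extreme points of the front."""
    front = sorted(front)                              # sort along f1
    gaps = [math.dist(a, b) for a, b in zip(front, front[1:])]
    d_mean = sum(gaps) / len(gaps)
    d_f = math.dist(front[0], extreme_lo)
    d_l = math.dist(front[-1], extreme_hi)
    num = d_f + d_l + sum(abs(d - d_mean) for d in gaps)
    den = d_f + d_l + (len(front) - 1) * d_mean
    return num / den
```

An ideally converged, evenly spread front that reaches the true extremes gives γ = 0 and ∆ = 0, which is why smaller values in Tables 3 and 4 indicate better performance.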
Xie, Z.; Li, Y.; Yang, S. A Hybrid Multi-Objective Optimization Method and Its Application to Electromagnetic Device Designs. Appl. Sci. 2022, 12, 12110. https://doi.org/10.3390/app122312110
