Article

DMO-QPSO: A Multi-Objective Quantum-Behaved Particle Swarm Optimization Algorithm Based on Decomposition with Diversity Control

1 Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Wuxi 214122, China
2 Faculty of Engineering and Computing, Coventry University, Coventry CV1 5FB, UK
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(16), 1959; https://doi.org/10.3390/math9161959
Submission received: 17 May 2021 / Revised: 9 August 2021 / Accepted: 12 August 2021 / Published: 16 August 2021
(This article belongs to the Special Issue Advances in Quantum Artificial Intelligence and Machine Learning)

Abstract

The decomposition-based multi-objective evolutionary algorithm (MOEA/D) has shown remarkable effectiveness in solving multi-objective problems (MOPs). In this paper, we integrate the quantum-behaved particle swarm optimization (QPSO) algorithm into the MOEA/D framework to enable QPSO to solve MOPs effectively while fully exploiting its advantages. We also employ a diversity controlling mechanism to avoid premature convergence, especially at the later stage of the search process, and thus further improve the performance of the proposed algorithm. In addition, we introduce a number of nondominated solutions to generate the global best for guiding other particles in the swarm. Experiments are conducted to compare the proposed algorithm, DMO-QPSO, with four multi-objective particle swarm optimization algorithms and one multi-objective evolutionary algorithm on 15 test functions, including both bi-objective and tri-objective problems. The results show that DMO-QPSO performs better than the other five algorithms on most of these test problems. Moreover, we further study the impact of two different decomposition approaches, i.e., the penalty-based boundary intersection (PBI) and Tchebycheff (TCH) approaches, as well as the polynomial mutation operator, on the algorithmic performance of DMO-QPSO.

1. Introduction

The particle swarm optimization (PSO) algorithm, originally proposed by Kennedy and Eberhart in 1995, is a population-based metaheuristic that imitates the social behavior of birds flocking [1]. In PSO, each particle is treated as a potential solution, and all particles follow their own experiences and the current optimal particle to fly through the solution space. As it requires fewer parameters to adjust and can be easily implemented, PSO has been rapidly developed in solving real-world optimization problems, including circuit design [2], job scheduling [3], data mining [4], path planning [5,6] and protein-ligand docking [7]. In 1999, Moore and Chapman extended PSO to solve multi-objective problems (MOPs) for the first time in [8]. Since then, great interest has been aroused among researchers from different communities in tackling MOPs by using PSO. For example, Coello and Lechuga [9] introduced a proposal for multi-objective PSO, denoted as MOPSO, which determines particles' flight directions by using the concept of Pareto dominance and adopts a global repository to store previously found nondominated solutions. Later, in 2004, Coello et al. [10] presented an enhanced version of MOPSO which employs a mutation operator and a constraint-handling mechanism to improve the algorithmic performance of the original MOPSO. Raquel and Prospero [11] proposed the MOPSO-CD algorithm, which selects the global best and updates the external archive of nondominated solutions by calculating the crowding distance of particles. In 2008, Peng and Zhang developed a new MOPSO algorithm adopting a decomposition approach, called MOPSO/D [12]. It is based on a framework, named MOEA/D [13], which converts an MOP into a number of single-objective optimization sub-problems and then simultaneously solves all these sub-problems. In MOPSO/D, the particle's global best is defined by the solutions located within a certain neighborhood. Moubayed et al. [14] proposed a novel smart MOPSO based on decomposition (SDMOPSO) that realizes the information exchange between neighboring particles with fewer objective function evaluations and stores the leaders of the whole particle swarm using a crowding archive. dMOPSO, proposed by Martinez and Coello [15], selects the global best from a set of solutions according to the decomposition approach and thus updates each particle's position. Moubayed et al. [16] realized a MOPSO, called D2MOPSO, which hybridizes dominance and decomposition and introduces an archiving technique using crowding distance. There have also been some other MOPSOs proposed in recent years that have proved to be effective in solving complex MOPs, such as MPSO/D [17], MMOPSO [18], AgMOPSO [19], CMOPSO [20] and CMaPSO [21].
The quantum-behaved PSO (QPSO), proposed by Sun et al. [22], is a variant of PSO inspired by quantum mechanics and the trajectory analysis of PSO. The trajectory analysis clarified the idea that each particle in PSO is in a bound state; specifically, each particle in PSO oscillates around and converges to its local attractor [23]. In QPSO, the particle is assumed to have quantum behavior and, further, to be attracted by a quantum delta potential well centered on its local attractor. Additionally, the concept of the mean best position was defined and employed in this algorithm to update particles' positions. In terms of the update equation, which is different from that of PSO, QPSO has no velocity vector to update for each particle, and it requires fewer parameters to adjust [24]. Due to these advantages of QPSO, we incorporate it into the original MOEA/D framework for the purpose of obtaining a more effective algorithm for solving MOPs than other decomposition-based MOPSOs that use the canonical PSO.
QPSO and other PSO variants generally have fast convergence speed due to the rich information exchange among particles. This is why such algorithms are more efficient at solving optimization problems than other population-based random search algorithms. Fast convergence, however, means rapid diversity decline, which is desirable for the algorithm to find satisfying solutions quickly during the early stage of the search process. Rapid diversity decline during the later stage of the search process, by contrast, results in aggregation of particles around the global best position and, in turn, stagnation of the whole particle swarm (i.e., premature convergence).
Diversity maintenance is also essential when extending PSO to solve MOPs. During the past decade, researchers have done a lot of work on developing novel techniques to maintain diversity in their MOPSOs. For example, Qiu et al. [25] introduced a novel global best selection method, based on proportional distribution and the K-means algorithm, to make particles converge quickly to the Pareto front while maintaining diversity. Cheng et al. [26] presented the IMOPSO-PS algorithm, in which a preference strategy is applied for optimal distributed generation (DG) integration into the distribution system. This algorithm uses a circular nondominated selection of particles from one iteration to the next and performs mutation on particles to enhance the swarm diversity during the search process.
In this paper, we propose a multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates the QPSO with the original MOEA/D framework and uses a strategy of diversity control. As in the literature [12,13], a neighborhood relationship is defined according to distances between the weight vectors of different sub-problems. Each sub-problem is solved utilizing the information only from its neighboring sub-problems. However, with the increasing number of iterations, the current best solutions to the neighbors of a sub-problem may get close to each other. This may result in a diversity loss of the new population produced in the next iteration, particularly at the later stage of the search process. Therefore, in DMO-QPSO, we do not adopt the neighborhood relationship described in the framework of MOEA/D and MOPSO/D. Meanwhile, we introduce a two-phased diversity controlling mechanism to make particles alternate between attraction and explosion states according to the swarm diversity. Particles move through the search space in the phase of attraction unless the swarm diversity declines to a threshold value that triggers the phase of explosion. Additionally, unlike MOPSO/D in which the global best is updated according to a decomposition approach, the proposed DMO-QPSO uses a vector set to store a pre-defined number of nondominated solutions and then randomly picks one as the current global best. All solutions in this vector set would have a chance to guide the movement of the whole particle swarm. The penalty-based boundary intersection (PBI) approach [27] is used in the algorithm owing to its advantage over other decomposition methods including the weighted sum (WS) and the Tchebycheff (TCH) [13].
The rest of this paper is organized as follows. Some preliminaries of MOP, PSO, QPSO and the framework of MOEA/D are given in Section 2. Section 3 describes the procedure of our proposed DMO-QPSO algorithm in detail. Section 4 presents the experimental results and analysis. Some further discussion on DMO-QPSO is presented in Section 5. Finally, the paper is concluded in the last section.

2. Preliminaries

In this section, we first state the definition of MOPs and then describe the basic principles of the canonical PSO and QPSO. After that, some of the most commonly used decomposition methods and the original MOEA/D framework are presented.

2.1. Multi-Objective Optimization

A multi-objective optimization problem (MOP) can be stated as follows.
minimize F(x) = (f_1(x), ..., f_m(x))
s.t. x ∈ Ω    (1)
where x is the decision variable vector, Ω is the decision (variable) space, and m is the number of real-valued objective functions. F: Ω → R^m is the objective function vector, where R^m is the objective space. The objectives in an MOP are mutually conflicting, so no single solution can minimize all the objectives at the same time; improvement of one objective may lead to deterioration of another. In this situation, the Pareto optimal solutions become the best tradeoffs among different objectives. Therefore, most multi-objective optimization algorithms are designed to find a finite number of Pareto optimal solutions to approximate the Pareto front (PF), which could be good representatives of the whole PF [28,29,30,31]. In order to better understand the concept of Pareto optimality [32], some definitions are provided as follows.
Definition 1.
Let x = (x_1, ..., x_m), y = (y_1, ..., y_m) ∈ R^m. Then x dominates y, denoted as x ≺ y, if and only if x_i ≤ y_i for all i = 1, ..., m and x ≠ y (i.e., x is strictly smaller than y in at least one component).
Definition 2.
Let x* ∈ Ω. If no solution x exists in Ω such that F(x) dominates F(x*), then x* is a Pareto optimal solution to the MOP in Equation (1), and F(x*) is a Pareto optimal (objective) vector. The set of all the Pareto optimal solutions is called the Pareto set (PS), denoted by PS = {x ∈ Ω | x is a Pareto optimal solution}. The set of all the Pareto optimal (objective) vectors is called the Pareto front (PF), denoted by PF = {F(x) | x ∈ PS}.
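The dominance relation of Definition 1 translates directly into code. Below is a minimal sketch in Python for minimization; the function name is ours, not from the paper:

```python
def dominates(x, y):
    """Return True if objective vector x Pareto-dominates y (minimization):
    x is no worse than y in every objective and strictly better in at least one."""
    no_worse = all(xi <= yi for xi, yi in zip(x, y))
    strictly_better = any(xi < yi for xi, yi in zip(x, y))
    return no_worse and strictly_better
```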

2.2. Particle Swarm Optimization

In the canonical PSO algorithm with N particles, each particle i (i = 1, ..., N) has a position vector X_i = (X_{i,1}, ..., X_{i,D}) and a velocity vector V_i = (V_{i,1}, ..., V_{i,D}), where D is the dimension of the search space. During each iteration t, particle i in the swarm is updated according to its personal best (pbest) position P_i = (P_{i,1}, ..., P_{i,D}) and the global best (gbest) position P_g = (P_{g,1}, ..., P_{g,D}) found by the whole swarm. The update strategies are presented as follows.
V_{i,j}(t+1) = w·V_{i,j}(t) + c_1·r_1·(P_{i,j}(t) − X_{i,j}(t)) + c_2·r_2·(P_{g,j}(t) − X_{i,j}(t))    (2)
X_{i,j}(t+1) = X_{i,j}(t) + V_{i,j}(t+1)    (3)
for i = 1, 2, ..., N; j = 1, 2, ..., D, where w is the inertia weight, c_1 and c_2 are the learning factors, and r_1, r_2 are two random variables uniformly distributed on (0, 1).
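The update strategies of Equations (2) and (3) can be sketched as below for a single particle; the default values of w, c_1 and c_2 are illustrative, not prescribed by this paper:

```python
import random

def pso_step(X, V, P, Pg, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update, Equations (2)-(3), for one particle.
    X, V: current position and velocity; P: personal best; Pg: global best."""
    for j in range(len(X)):
        r1, r2 = random.random(), random.random()
        # velocity update: inertia + cognitive pull + social pull
        V[j] = w * V[j] + c1 * r1 * (P[j] - X[j]) + c2 * r2 * (Pg[j] - X[j])
        # position update
        X[j] = X[j] + V[j]
    return X, V
```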

2.3. Quantum-Behaved Particle Swarm Optimization

In the quantum-behaved PSO (QPSO) algorithm with N particles, each particle i (i = 1, ..., N) has a position vector X_i = (X_{i,1}, ..., X_{i,D}) and a personal best (pbest) position P_i = (P_{i,1}, ..., P_{i,D}), where D is the dimension of the search space. During each iteration t, particle i in the swarm is updated as follows.
X_{i,j}(t+1) = q_{i,j}(t) ± α·|C_j(t) − X_{i,j}(t)|·ln(1/u_{i,j}(t))    (4)
for j = 1, 2, ..., D, where u_{i,j} is a random number uniformly distributed on (0, 1), and q_i = (q_{i,1}, ..., q_{i,D}) is the local attractor of particle i, calculated by
q_i = φ·P_i + (1 − φ)·P_g    (5)
where φ is a random number uniformly distributed on (0, 1) and P_g = (P_{g,1}, ..., P_{g,D}) is the global best (gbest) position found by the particles during the search process. The contraction-expansion coefficient α is designed to control the convergence speed of the QPSO algorithm. C is the mean of the personal best positions of all the particles, namely, the mbest position, and it can be calculated as below.
C = (1/N)·Σ_{i=1}^{N} P_i = ((1/N)·Σ_{i=1}^{N} P_{i,1}, (1/N)·Σ_{i=1}^{N} P_{i,2}, ..., (1/N)·Σ_{i=1}^{N} P_{i,D})    (6)
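Equations (4)-(6) can be sketched together as one swarm-wide position update. This is an illustrative implementation only: the ± sign is drawn at random as in standard QPSO descriptions, and the sampling of u is shifted to (0, 1] to keep ln(1/u) finite:

```python
import math
import random

def qpso_step(X, Pbest, Pg, alpha=0.75):
    """One QPSO position update, Equations (4)-(6), for the whole swarm.
    X: list of position vectors; Pbest: personal bests; Pg: global best."""
    N, D = len(X), len(X[0])
    # mbest: component-wise mean of all personal best positions, Equation (6)
    C = [sum(Pbest[i][j] for i in range(N)) / N for j in range(D)]
    for i in range(N):
        for j in range(D):
            phi = random.random()
            # local attractor, Equation (5)
            q = phi * Pbest[i][j] + (1 - phi) * Pg[j]
            u = 1.0 - random.random()          # u in (0, 1], so log(1/u) is finite
            sign = 1 if random.random() < 0.5 else -1
            # position update, Equation (4)
            X[i][j] = q + sign * alpha * abs(C[j] - X[i][j]) * math.log(1.0 / u)
    return X
```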

2.4. The Decomposition Approaches

In the state-of-the-art multi-objective optimization algorithms based on decomposition, the most commonly used decomposition approaches are the Tchebycheff (TCH), the weighted sum (WS), and the penalty-based boundary intersection (PBI) approaches [13,28]. These methods decompose an MOP into a finite group of single-objective optimization sub-problems, so that a certain algorithm can solve these sub-problems effectively and efficiently. Let λ^i = (λ^i_1, ..., λ^i_m)^T be a weight vector for the i-th sub-problem (i = 1, ..., N), satisfying Σ_{j=1}^{m} λ^i_j = 1 and λ^i_j > 0 for all j = 1, 2, ..., m; and let z* = (z*_1, ..., z*_m)^T be a reference point. Below are the definitions of the TCH and PBI approaches, which will be used later in this paper.
  • Tchebycheff (TCH) approach:
    In the TCH approach, the sub-problem i is defined as
    minimize g^tch(x | λ^i, z*) = max_{1≤j≤m} { λ^i_j·|f_j(x) − z*_j| }
    s.t. x ∈ Ω    (7)
  • Penalty-based boundary intersection (PBI) approach:
    In the PBI approach, the sub-problem i is defined as
    minimize g^pbi(x | λ^i, z*) = d_1 + θ·d_2
    where d_1 = ||(F(x) − z*)^T·λ^i|| / ||λ^i||, d_2 = ||F(x) − (z* + d_1·λ^i/||λ^i||)||
    s.t. x ∈ Ω    (8)
    where ||·|| denotes the L2-norm, and θ > 0 is a penalty parameter.
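The two scalarizing functions of Equations (7) and (8) can be written, for example, as:

```python
import math

def g_tch(F, lam, z):
    """Tchebycheff scalarization, Equation (7): weighted max deviation
    of the objective vector F from the reference point z."""
    return max(l * abs(f - zj) for f, l, zj in zip(F, lam, z))

def g_pbi(F, lam, z, theta=5.0):
    """Penalty-based boundary intersection, Equation (8).
    d1: distance along the weight direction; d2: penalized distance to it."""
    norm = math.sqrt(sum(l * l for l in lam))
    diff = [f - zj for f, zj in zip(F, z)]
    d1 = abs(sum(d * l for d, l in zip(diff, lam))) / norm
    d2 = math.sqrt(sum((d - d1 * l / norm) ** 2 for d, l in zip(diff, lam)))
    return d1 + theta * d2
```

θ = 5.0 matches the value used in the experiments of Section 4.2.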

2.5. MOEA/D

MOEA/D divides an MOP into N single-objective optimization sub-problems and attempts to simultaneously optimize all these sub-problems rather than directly solving the MOP. These sub-problems are linked together by their neighborhoods. The neighborhood of sub-problem i is defined as the sub-problems whose weight vectors are the T closest ones to its weight vector λ i and thus the neighborhood size of sub-problem i is T .
The MOEA/D algorithm maintains a population of N solutions x^1, ..., x^N ∈ Ω, where x^i (i = 1, ..., N) is a feasible solution to the sub-problem i. FV^i is the F-value (i.e., the fitness value) of x^i, that is, FV^i = F(x^i). z = (z_1, ..., z_m)^T is a reference point, where z_j (j = 1, ..., m) is the minimal value of objective f_j found so far. EP is an external population used to store the nondominated solutions found during the search process. The main framework of MOEA/D is described in Algorithm 1.
Algorithm 1 Framework of MOEA/D
Input: The number of sub-problems, i.e., the population size, N; the set of weight vectors, λ^1, λ^2, ..., λ^N; the neighborhood size, T;
Output: EP;
1: EP = ∅;
2: Calculate the Euclidean distances between any two weight vectors;
3: For each i = 1, ..., N, select the T weight vectors closest to λ^i and store their indexes in B(i) = {i_1, ..., i_T};
4: Generate an initial population randomly;
5: Evaluate FV^i, i = 1, ..., N;
6: Initialize z = (z_1, ..., z_m)^T;
7: while termination criterion is not fulfilled do
8:   for i = 1, ..., N do
9:     Select two indexes k, l randomly from B(i);
10:    Use the genetic operators to produce a new solution y from x^k and x^l;
11:    Repair y;
12:    for j = 1, ..., m do
13:      if z_j > f_j(y) then
14:        z_j = f_j(y);
15:      end if
16:    end for
17:    for each j ∈ B(i) do
18:      if g(y | λ^j, z) ≤ g(x^j | λ^j, z) then
19:        x^j = y; FV^j = F(y);
20:      end if
21:    end for
22:  end for
23:  Update EP;
24: end while

3. The Proposed DMO-QPSO

In this section, we propose an improved multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates the QPSO algorithm with the MOEA/D framework and adopts a mechanism to control the swarm diversity during the search process, so as to avoid premature convergence and escape local optimal areas with a higher probability.
At the beginning of the proposed algorithm, we need to define a set of well-distributed weight vectors and then use a certain approach to decompose the original MOP into a group of single-objective sub-problems. More precisely, let λ^1, λ^2, ..., λ^N be the weight vectors; the PBI approach is employed in this paper owing to its advantage over other decomposition approaches.
In DMO-QPSO, the swarm P with N particles is randomly initialized. Each particle i has a position vector X_i and a personal best position P_i, which is initially set to be equal to X_i. Then the mean best position of all particles can be easily obtained according to Equation (6). The global best position P_g is produced in a natural way according to the Pareto dominance relationship among different personal best positions. More specifically, we first define a vector set GS, whose size is pre-set and denoted as n_GS. Then, the fast nondominated sorting approach [33] is applied to sort the set of all the personal best positions and GS. The lower the nondomination rank of a solution, the better it is. Therefore, we only select the solutions in the lower nondomination ranks and store them in GS. All of the solutions in GS are regarded as candidates for the global best employed in the next iteration for updating particles' positions.
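The rank-based refilling of GS can be sketched as below. `first_front` and `update_gs` are hypothetical helper names, and for brevity this sketch keeps only the first (rank-1) front, a simplification of the full fast nondominated sorting used in the paper:

```python
def first_front(objs):
    """Return indices of solutions not dominated by any other (rank-1 front)."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [i for i, fi in enumerate(objs)
            if not any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i)]

def update_gs(pbest_objs, gs_objs, n_gs):
    """Refill the global-best candidate set GS from the pooled personal bests
    and the old GS, then truncate it to at most n_gs members."""
    pool = pbest_objs + gs_objs
    front = [pool[i] for i in first_front(pool)]
    return front[:n_gs]
```

A global best for the next iteration would then be drawn at random from the returned set.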
It should be noted that the neighborhood in the original MOEA/D framework is formed according to distances between the weight vectors of different sub-problems. That is to say, the neighborhood of a sub-problem includes all sub-problems with the closest weight vectors. Hence, on the basis of this definition, solutions to neighboring sub-problems would be close in the decision space. It may enable the algorithm to converge faster at the early stage but brings the risk of diversity loss and premature convergence at the later stage. For this reason, we do not adopt in DMO-QPSO the neighborhood relationship stated in the original MOEA/D framework.
Furthermore, we measure the swarm diversity during the search process and make the swarm alternate between two phases, i.e., attraction and explosion, according to its diversity. At each iteration, the diversity of the particle swarm is calculated as below.
diversity(P) = (1/(|P|·|A|)) · Σ_{i=1}^{|P|} sqrt( Σ_{j=1}^{D} (X_{i,j} − X̄_j)² )    (9)
where D is the dimensionality of the problem, |A| is the length of the longest diagonal in the search space, P is the particle swarm, |P| = N is the population size, X_{i,j} is the j-th component of particle i and X̄_j is the j-th component of the average point. According to the literature [34,35], the particle converges when the contraction-expansion coefficient α is less than 1.778 and otherwise it diverges. Therefore, we set a threshold, denoted as d_low, on the swarm diversity. When the diversity drops below d_low (i.e., in the explosion phase), the value of α is reset to a constant α_0, larger than 1.778, to make particles diverge and thus increase the swarm diversity. Otherwise, α decreases linearly within the predefined interval [a, b] (i.e., in the attraction phase).
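A sketch of the diversity measure of Equation (9) and the two-phase α control described above; mapping the linear decrease from a to b onto the iteration counter is our reading of the schedule, and the default values follow Section 4.2:

```python
import math

def swarm_diversity(X, diag_len):
    """Swarm diversity of Equation (9): mean distance of the particles from
    the swarm's average point, normalized by the longest diagonal |A|."""
    N, D = len(X), len(X[0])
    mean = [sum(X[i][j] for i in range(N)) / N for j in range(D)]
    total = sum(math.sqrt(sum((X[i][j] - mean[j]) ** 2 for j in range(D)))
                for i in range(N))
    return total / (N * diag_len)

def next_alpha(diversity, t, max_iter, d_low=0.05, alpha0=2.0, a=1.0, b=0.5):
    """Two-phase control: explode (alpha = alpha0 > 1.778) when diversity falls
    below d_low; otherwise decrease alpha linearly from a to b over the run."""
    if diversity < d_low:
        return alpha0
    return a - (a - b) * t / max_iter
```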
Like MOEA/D, DMO-QPSO also uses an external population EP to store the nondominated solutions found during the search process. In each iteration step, we check the Pareto dominance relationship between the newly generated solutions and the solutions in EP. Solutions in EP dominated by a newly generated solution are removed from EP, and the newly generated solution is added to EP if no solution in EP dominates it. The main process of the DMO-QPSO algorithm is presented in Algorithm 2.
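The EP update rule just described can be sketched as (objective vectors, minimization; the function name is ours):

```python
def update_ep(ep, new_obj):
    """Update the external population EP with one newly generated objective
    vector: drop members it dominates; add it if nothing in EP dominates it."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    if any(dominates(e, new_obj) for e in ep):
        return ep                                    # dominated: EP unchanged
    return [e for e in ep if not dominates(new_obj, e)] + [new_obj]
```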
Algorithm 2 DMO-QPSO
Input: The number of sub-problems, i.e., the population size, N; the set of weight vectors, λ^1, λ^2, ..., λ^N; the maximal number of iterations, MaxIter;
Output: EP;
1: EP = ∅, GS = ∅;
2: for i = 1, ..., N do
3:   Randomly initialize the position vector X_i of particle i;
4:   Set the personal best position P_i of particle i as P_i = X_i;
5:   Evaluate the fitness value F(X_i);
6: end for
7: Update GS;
8: Initialize z = (z_1, ..., z_m)^T;
9: for t = 1, ..., MaxIter do
10:  Compute the mean best position C of all the particles according to Equation (6);
11:  Measure diversity(P);
12:  if diversity(P) < d_low then
13:    α = α_0;
14:  else
15:    Set α linearly decreasing within the interval [a, b];
16:  end if
17:  for i = 1, ..., N do
18:    Update the position vector X_i(t+1) using Equation (4);
19:    Repair X_i(t+1);
20:    Evaluate F(X_i(t+1));
21:    for j = 1, ..., m do
22:      if z_j > f_j(X_i(t+1)) then
23:        z_j = f_j(X_i(t+1));
24:      end if
25:    end for
26:    if g(X_i(t+1) | λ^i, z) ≤ g(P_i(t) | λ^i, z) then
27:      P_i(t+1) = X_i(t+1);
28:    end if
29:  end for
30:  Update GS;
31:  Update EP;
32: end for

4. Experimental Studies

This section presents the experiments conducted to investigate the performance of our proposed DMO-QPSO algorithm. Firstly, we introduce a set of MOPs used as benchmark functions. Next, the parameter settings for the different algorithms and two performance metrics are described in detail. Finally, the comparison experiments and results analysis are presented. More precisely, we compared DMO-QPSO with two recently proposed multi-objective PSOs (i.e., MMOPSO and CMOPSO) and three other multi-objective optimization algorithms, namely, MOPSO, MOPSO/D and MOEA/D-DE [36]. The PBI approach is used in the four decomposition-based algorithms (i.e., DMO-QPSO, MOPSO/D, MOEA/D-DE and MMOPSO).

4.1. Test Functions

We selected 15 test functions whose PFs have different characteristics including concavity, convexity, multi-frontality and disconnections. Twelve of these test functions are bi-objective (i.e., F1, F2, F3, F4, F5, F7, F8, F9 from the F test set [36], UF4, UF5, UF6, UF7 from the UF test set [37]) and the rest of them are tri-objective (i.e., F6 from the F test set and UF9, UF10 from the UF test set). As shown in references [36,37], the F test set and the UF test set are two sets of test instances for facilitating the study of the ability of MOEAs to solve problems with complicated PS shapes. Besides, we used 30 decision variables for the UF test set, problems from F1 to F5, and F9. Problems from F6 to F8 were tested by using 10 decision variables.

4.2. Parameter Setting

The setting of the weight vectors λ^1, λ^2, ..., λ^N is decided by an integer H [36]. More precisely, each individual weight in λ^1, λ^2, ..., λ^N takes a value from {0/H, 1/H, ..., H/H}. Therefore, the population size is given by N = C(H+m−1, m−1), where m is the number of objectives. H was 299 for the bi-objective test functions and 33 for the tri-objective ones. Consequently, the population size N was 300 for the bi-objective test functions and 595 for the tri-objective ones. The maximal number of iterations MaxIter was 500 and each algorithm was run 30 independent times for each test function. The size of the external population, n_EP, was set to 100. Besides, the penalty factor θ in PBI was 5.0.
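The simplex-lattice construction of the weight vectors can be sketched as follows; note that, unlike the strict positivity assumed in Section 2.4, the lattice as written here includes zero components:

```python
def weight_vectors(H, m):
    """Simplex-lattice weight vectors: every m-vector whose components come
    from {0/H, 1/H, ..., H/H} and sum to 1; there are C(H+m-1, m-1) of them."""
    def gen(prefix, remaining, slots):
        if slots == 1:                      # last component takes the remainder
            yield prefix + [remaining / H]
            return
        for k in range(remaining + 1):
            yield from gen(prefix + [k / H], remaining - k, slots - 1)
    return list(gen([], H, m))
```

With H = 299, m = 2 this yields the 300 bi-objective weight vectors; with H = 33, m = 3 it yields the 595 tri-objective ones.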
The polynomial mutation [29] was employed in MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO and CMOPSO. Two parameters η m ,   p m in this mutation operator were 20 and 1 / D , respectively.
For DMO-QPSO, the size of G S was 10, and the contraction-expansion coefficient α in the attraction phase varied linearly from 1.0 to 0.5. The value of the lower bound of diversity d l o w was set to be 0.05. When the diversity drops below d l o w , we set the parameter α = α 0 = 2.0 .
The details are listed in Table 1.

4.3. Performance Metrics

In our experiments, the following performance metrics were used.
  • The inverted generational distance (IGD) [36]: It estimates the distance between the elements in a set of nondominated vectors and those in the Pareto optimal set, and can be stated as:
    IGD(P*, P) = ( Σ_{v ∈ P*} d(v, P) ) / |P*|    (10)
    where P* is a set of points evenly distributed in the objective space along the PF, and P is an approximation to the PF. d(v, P) is the minimal Euclidean distance between v and the points in P, and |P*| is the size of the set P*. To obtain a low value of IGD(P*, P), P must be as close as possible to the PF of a certain test problem and must not miss any part of the whole PF. IGD can therefore reflect both the diversity and the convergence of the obtained solutions with respect to the real PF.
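A sketch of the IGD computation of Equation (10):

```python
import math

def igd(p_star, p):
    """Inverted generational distance, Equation (10): mean distance from each
    reference point v in P* to its nearest neighbor in the approximation P."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return sum(min(dist(v, u) for u in p) for v in p_star) / len(p_star)
```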
  • Coverage (C-metric) [13]: It can be stated as:
    C(A, B) = |{u ∈ B | ∃ v ∈ A : v dominates u}| / |B|    (11)
    where A and B are two approximations to the real PF of an MOP, and C(A, B) is defined as the proportion of the solutions in B that are dominated by at least one solution in A.
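And a sketch of the C-metric of Equation (11), assuming minimization:

```python
def c_metric(A, B):
    """Coverage C(A, B), Equation (11): fraction of solutions in B that are
    dominated by at least one solution in A (minimization)."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return sum(1 for u in B if any(dominates(v, u) for v in A)) / len(B)
```

Note that C(A, B) and C(B, A) are generally not complementary, which is why the paper reports both directions in Tables 5 through 9.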

4.4. Results and Discussion

Table 2 and Table 3 present the average, minimum and standard deviation (SD) of the IGD values of 30 final populations on different test functions that were produced by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO. It is clear that our proposed algorithm, DMO-QPSO, performed better than the other five algorithms on most of the test problems. It yielded the best mean IGD values on all the problems except F7, F8, UF4, UF6 and UF10. According to Table 2 and Table 3, both the mean and minimal IGD values on F7, F8, UF4 and UF10 obtained by DMO-QPSO are worse than those obtained by MOEA/D-DE. It should be noted that the performance of DMO-QPSO is still acceptable on UF6: its mean IGD value is slightly worse than that of MMOPSO, but DMO-QPSO achieved the lowest minimal IGD value among all of these algorithms. Besides, it is obvious that MOPSO/D performed the worst on almost all of the test problems except F6 and UF4.
In addition, the statistics from the Wilcoxon rank sum tests in Table 2 and Table 3 also indicate that DMO-QPSO outperformed the other five algorithms. MMOPSO was the second best on the F problems and the third best on the UF problems, while CMOPSO was the second worst on both. Table 4 illustrates the total ranks of these algorithms on the F and UF problems and gives their final ranks in its last column. As shown in this table, MOEA/D-DE is the second-best algorithm, followed by MMOPSO. In contrast, MOPSO/D is the worst algorithm, followed by CMOPSO.
The average C-metric values are shown in Table 5, Table 6, Table 7, Table 8 and Table 9, which confirm the results above. It can be seen from these tables that the final solutions obtained by DMO-QPSO are better than those obtained by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO and CMOPSO for most of the test functions.
The results of the trial runs with the lowest IGD values on the 15 test functions produced by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO were selected, respectively, and plotted in Figure 1 and Figure 2. These figures clearly show the evolution of the IGD values for the different algorithms versus the number of iterations on both the F and UF problems, and the results are consistent with those in Table 2 and Table 3. For the F problems, DMO-QPSO performed the best except on F6, F7 and F8. As we can see in Figure 1, MOEA/D-DE was the best on problems F7 and F8, where its IGD values drop quickly at the early stage and then converge to values close to 0.08 and 0.20, respectively. MMOPSO obtained the second minimal IGD value on F8, followed by DMO-QPSO, and its IGD value declines even faster than that of MOEA/D-DE during the first 200 iteration steps. On F6, the IGD value of DMO-QPSO fluctuates for about 400 iteration steps during the whole search process and then reaches a value (0.1634) just slightly larger than that of MMOPSO (0.1632), which may be related to the variation of the swarm diversity. MOPSO/D had the worst performance on all F problems except F6.
For the UF problems, DMO-QPSO still performed the best except on UF4 and UF10. It can be seen that MOEA/D-DE was the best on UF4, where its IGD value decreases rapidly to just below 0.1 while that of DMO-QPSO remains larger than 0.1. By contrast, MMOPSO had the worst performance on UF4, with its IGD value fluctuating significantly between 0.5 and 0.9. On UF10, MOEA/D-DE and MOPSO had similar performance, as their IGD values decrease rapidly during the first 80 iteration steps and then gradually converge to values around 1.0. DMO-QPSO was the third best, followed by MMOPSO. MOPSO/D had the worst performance on all UF problems except UF4. Additionally, from all the figures in Figure 1 and Figure 2, we can observe that within the same number of iteration steps, the IGD value of DMO-QPSO declines slowly at the early stage on several test problems compared to MOEA/D-DE but reaches a much smaller value at the later stage. With a greater number of iterations, DMO-QPSO may thus obtain much better results than the other compared algorithms.
In summary, DMO-QPSO has better performance on most of the test functions compared to other tested algorithms, and it is promising in solving MOPs with complicated PS shapes.

5. Further Discussions

In this section, we further study the impact of different decomposition approaches (i.e., TCH and PBI) and the polynomial mutation on DMO-QPSO. Some comparison experiments were also conducted.

5.1. The Impact of Different Decomposition Approaches

As described in Section 2.4, TCH and PBI are two commonly used approaches for decomposition-based multi-objective optimization algorithms. In terms of solution uniformity, the TCH approach may perform worse than the PBI approach, especially for problems having more than two objectives. Therefore, we tested MOPSO/D, MOEA/D-DE and DMO-QPSO using TCH and PBI as the decomposition method, respectively. The parameter settings for each algorithm are the same as those presented in Section 4.2. Table 10 and Table 11 present the average, minimum and standard deviation (SD) of the IGD values of 30 final populations on different test functions that were produced by each algorithm. In these tables, MOPSO/D-TCH, MOEA/D-DE-TCH and DMO-QPSO-TCH stand for the variants of MOPSO/D, MOEA/D-DE and DMO-QPSO using the TCH approach, respectively.
According to Table 10 and Table 11, algorithms (i.e., MOPSO/D and DMO-QPSO) using PBI performed better than those using TCH, particularly for solving tri-objective problems. As for DMO-QPSO, using PBI, to some extent, could help the algorithm to acquire better Pareto optimal solutions to approximate the entire PF. However, applying PBI to MOEA/D-DE does not show significant improvement compared to MOEA/D-DE using TCH. It may be related to the unique characteristics of the DE operators employed in MOEA/D-DE.

5.2. The Impact of Polynomial Mutation

Polynomial mutation was adopted in MOPSO/D, MOEA/D-DE, MMOPSO and CMOPSO for producing new solutions as well as for maintaining the population diversity, as stated in the literature [12,18,20,36]. In order to investigate the impact of polynomial mutation on DMO-QPSO, we tested four DMO-QPSO variants, i.e., DMO-QPSO, DMO-QPSO-TCH, DMO-QPSO-pm and DMO-QPSO-TCH-pm, on different problems. DMO-QPSO and DMO-QPSO-TCH used PBI and TCH as the decomposition method, respectively. DMO-QPSO-pm and DMO-QPSO-TCH-pm are two variants applying the polynomial mutation operator on top of DMO-QPSO and DMO-QPSO-TCH, respectively. The parameter settings for each algorithm are the same as those presented in Section 4.2. Table 12 and Table 13 present the average, minimum and standard deviation (SD) of the IGD values of 30 final populations on different test functions produced by the different variants of DMO-QPSO.
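A minimal sketch of Deb's polynomial mutation, using the ηm = 20 and pm = 1/D settings listed in Table 1, is given below. This is the simplified (boundary-independent) form of the operator and an illustration only, not the implementation used in the compared algorithms.

```python
import numpy as np

def polynomial_mutation(x, lower, upper, eta_m=20.0, p_m=None):
    """Deb's polynomial mutation (simplified variant).
    Each variable mutates with probability p_m (default 1/D); the
    perturbation magnitude is controlled by the distribution index eta_m."""
    x = np.array(x, dtype=float)
    d = len(x)
    if p_m is None:
        p_m = 1.0 / d
    for i in range(d):
        if np.random.rand() < p_m:
            u = np.random.rand()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            # scale the perturbation by the variable range and clip to bounds
            x[i] = np.clip(x[i] + delta * (upper[i] - lower[i]),
                           lower[i], upper[i])
    return x
```

A larger eta_m concentrates the offspring near the parent, so eta_m = 20 produces mostly small, diversity-preserving perturbations.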
As we can see from Table 12 and Table 13, the DMO-QPSO variants using PBI, i.e., DMO-QPSO and DMO-QPSO-pm, outperformed the variants using TCH, i.e., DMO-QPSO-TCH and DMO-QPSO-TCH-pm, on most of the test problems. These results also confirm the conclusion presented in Section 5.1. Moreover, DMO-QPSO-pm showed an advantage over DMO-QPSO on several test problems. It should be pointed out that adopting the PBI approach and the polynomial mutation at the same time can effectively improve the algorithmic performance of DMO-QPSO.

6. Conclusions

This paper has proposed a multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates the QPSO algorithm with the original MOEA/D framework and adopts a strategy to control the swarm diversity. Without using the neighboring relationship defined in the MOEA/D framework, we employed a two-phased diversity controlling mechanism to avoid premature convergence and enable the algorithm to escape sub-optimal solutions with a higher probability. In addition, we used a set of nondominated solutions to produce the global best used to update each particle's position. Comparison experiments were carried out among six algorithms, namely, MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO, on 15 test functions with complicated PS shapes. The experimental results show that the proposed DMO-QPSO algorithm has an advantage over the other five algorithms on most of the test problems. It converges more slowly than MOEA/D-DE on some test problems at the early stage, but achieves a better balance between exploration and exploitation, finally obtaining better solutions to an MOP. In addition, we further investigated the impact of different decomposition approaches, i.e., the TCH and PBI approaches, as well as the polynomial mutation on DMO-QPSO. It was shown that using PBI and the polynomial mutation can enhance the algorithmic performance of DMO-QPSO, particularly when solving tri-objective problems.
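For readers unfamiliar with QPSO, the basic position update that DMO-QPSO builds on can be sketched as below. This follows the standard QPSO formulation of [22]: each particle moves around a local attractor, a random convex combination of its personal best and a global best, with step size scaled by the contraction-expansion coefficient alpha. In DMO-QPSO the global best would be drawn from the nondominated set, as described above; all names here are illustrative.

```python
import numpy as np

def qpso_update(x, pbest, gbest, mbest, alpha):
    """One QPSO position update (sketch).
    x, pbest: (n_particles, dim); gbest: (dim,);
    mbest: mean of the personal best positions, (dim,);
    alpha: contraction-expansion coefficient."""
    phi = np.random.rand(*x.shape)
    p = phi * pbest + (1.0 - phi) * gbest          # local attractor
    u = np.random.rand(*x.shape)
    sign = np.where(np.random.rand(*x.shape) < 0.5, -1.0, 1.0)
    # sample the new position from the delta-potential-well distribution
    return p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)

# illustrative usage
np.random.seed(1)
x = np.random.rand(5, 3)
pbest = np.random.rand(5, 3)
gbest = np.random.rand(3)
mbest = pbest.mean(axis=0)
x_new = qpso_update(x, pbest, gbest, mbest, alpha=0.75)
```

Decreasing alpha over the run, as in the settings of Table 1, gradually shrinks the sampling range around the attractor and shifts the swarm from exploration to exploitation.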
In the future, we will focus on studying new methods for generating a set of weight vectors that are as uniformly distributed as possible, modifying the mechanism of diversity control in the DMO-QPSO algorithm for dealing with more complicated test problems, and improving the quality of the solutions obtained. In addition, we will extend the proposed algorithm to problems having more than three objectives.

Author Contributions

Q.Y., J.S., F.P., V.P. and B.A. contributed to the study conception and design; Q.Y. performed the experiments, analyzed the experimental results, and wrote the first draft of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (Project Numbers: 61673194, 61672263, 61672265), and in part by the national first-class discipline program of Light Industry Technology and Engineering (Project Number: LITE2018-25).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All of the test functions can be found in papers doi: 10.1109/TEVC.2008.925798 and https://www.researchgate.net/publication/265432807_Multiobjective_optimization_Test_Instances_for_the_CEC_2009_Special_Session_and_Competition, accessed on 1 May 2021.

Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees for their efforts in helping to improve this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
  2. Mallick, S.; Kar, R.; Ghoshal, S.P.; Mandal, D. Optimal sizing and design of CMOS analogue amplifier circuits using craziness-based particle swarm optimization. Int. J. Numer. Model.-Electron. Netw. Devices Fields 2016, 29, 943–966.
  3. Singh, M.R.; Singh, M.; Mahapatra, S.; Jagadev, N. Particle swarm optimization algorithm embedded with maximum deviation theory for solving multi-objective flexible job shop scheduling problem. Int. J. Adv. Manuf. Technol. 2016, 85, 2353–2366.
  4. Sousa, T.; Silva, A.; Neves, A. Particle swarm based data mining algorithms for classification tasks. Parallel Comput. 2004, 30, 767–783.
  5. Zhang, Y.; Wu, L.; Wang, S. UCAV path planning by fitness-scaling adaptive chaotic particle swarm optimization. Math. Probl. Eng. 2013, 2013, 705238.
  6. Zhang, Y.; Jun, Y.; Wei, G.; Wu, L. Find multi-objective paths in stochastic networks via chaotic immune PSO. Expert Syst. Appl. 2010, 37, 1911–1919.
  7. Ng, M.C.; Fong, S.; Siu, S.W. PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking. J. Bioinform. Comput. Biol. 2015, 13, 1541007.
  8. Moore, J.; Chapman, R. Application of Particle Swarm to Multiobjective Optimization; Unpublished Work; Department of Computer Science and Software Engineering, Auburn University: Auburn, AL, USA, 1999.
  9. Coello, C.A.C.; Lechuga, M.S. MOPSO: A proposal for multiple objective particle swarm optimization. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02, Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1051–1056.
  10. Coello, C.A.C.; Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279.
  11. Raquel, C.R.; Prospero, C.N. An effective use of crowding distance in multiobjective particle swarm optimization. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation (GECCO '05), Washington, DC, USA, 25–29 June 2005.
  12. Peng, W.; Zhang, Q. A decomposition-based multi-objective particle swarm optimization algorithm for continuous optimization problems. In Proceedings of the 2008 IEEE International Conference on Granular Computing (GrC 2008), Hangzhou, China, 26–28 August 2008.
  13. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
  14. Moubayed, N.; Petrovski, A.; McCall, J. A novel smart multi-objective particle swarm optimisation using decomposition. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–10.
  15. Zapotecas Martínez, S.; Coello Coello, C.A. A multi-objective particle swarm optimizer based on decomposition. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (GECCO '11), Dublin, Ireland, 12–16 July 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 69–76.
  16. Moubayed, N.; Petrovski, A.; McCall, J. D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces. Evol. Comput. 2014, 22, 47–77.
  17. Dai, C.; Wang, Y.; Ye, M. A new multi-objective particle swarm optimization algorithm based on decomposition. Inf. Sci. 2015, 325, 541–557.
  18. Lin, Q.; Li, J.; Du, Z.; Chen, J.; Ming, Z. A novel multi-objective particle swarm optimization with multiple search strategies. Eur. J. Oper. Res. 2015, 247, 732–744.
  19. Zhu, Q.; Lin, Q.; Chen, W.; Wong, K.; Coello, C.A.C.; Li, J.; Chen, J.; Zhang, J. An external archive-guided multiobjective particle swarm optimization algorithm. IEEE Trans. Cybern. 2017, 47, 2794–2808.
  20. Zhang, X.; Zheng, X.; Cheng, R.; Qiu, J.; Jin, Y. A competitive mechanism based multi-objective particle swarm optimizer with fast convergence. Inf. Sci. 2018, 427, 63–76.
  21. Yang, W.; Chen, L.; Wang, Y.; Zhang, M. Multi/many-objective particle swarm optimization algorithm based on competition mechanism. Comput. Intell. Neurosci. 2020, 2020, 5132803.
  22. Sun, J.; Feng, B.; Xu, W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; pp. 325–331.
  23. Sun, J.; Fang, W.; Wu, X.; Palade, V. Quantum-behaved particle swarm optimization: Analysis of individual particle behavior and parameter selection. Evol. Comput. 2012, 20, 349–393.
  24. Sun, J.; Wu, X.; Palade, V.; Fang, W.; Lai, C.H.; Xu, W. Convergence analysis and improvements of quantum-behaved particle swarm optimization. Inf. Sci. 2012, 193, 81–103.
  25. Qiu, C.; Wang, C.; Zuo, X. A novel multi-objective particle swarm optimization with k-means based global best selection strategy. Int. J. Comput. Intell. Syst. 2013, 6, 822–835.
  26. Cheng, S.; Chen, M.; Peter, J. Improved multi-objective particle swarm optimization with preference strategy for optimal DG integration into the distribution system. Neurocomputing 2015, 148, 23–29.
  27. Trivedi, A.; Srinivasan, D.; Sanyal, K.; Ghosh, A. A survey of multiobjective evolutionary algorithms based on decomposition. IEEE Trans. Evol. Comput. 2017, 21, 440–462.
  28. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; Wiley: Hoboken, NJ, USA, 2001.
  29. Coello, C.A.C.; Veldhuizen, D.A.V.; Lamont, G.B. Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd ed.; Springer: Boston, MA, USA, 2007.
  30. Tan, K.C.; Khor, E.F.; Lee, T.H. Multiobjective Evolutionary Algorithms and Applications (Advanced Information and Knowledge Processing); Springer: Berlin/Heidelberg, Germany, 2005.
  31. Miettinen, K. Nonlinear Multiobjective Optimization; Springer: Boston, MA, USA, 1999.
  32. Stadler, W. A survey of multicriteria optimization or the vector maximum problem, part I: 1776–1960. J. Optim. Theory Appl. 1979, 29, 1–52.
  33. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  34. Sun, J.; Xu, W.; Fang, W. Quantum-behaved particle swarm optimization algorithm with controlled diversity. In International Conference on Computational Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 847–854.
  35. Sun, J.; Xu, W.; Feng, B. Adaptive parameter control for quantum-behaved particle swarm optimization on individual level. In Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 12 October 2005.
  36. Li, H.; Zhang, Q. Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 2009, 13, 284–302.
  37. Zhang, Q.; Zhou, A.; Zhao, S.; Suganthan, P.N.; Tiwari, S. Multiobjective Optimization Test Instances for the CEC 2009 Special Session and Competition; University of Essex: Colchester, UK; Nanyang Technological University: Singapore, 2008.
Figure 1. Evolution of the IGD values of MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO versus the number of iterations on F problems.
Figure 2. Evolution of the IGD values of MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO versus the number of iterations on UF problems.
Table 1. Parameters setting for different algorithms.
| Algorithms | Parameter Settings |
|---|---|
| MOPSO | ω = 0.5, c1 = c2 = 1.0, ηm = 20, pm = 1/D, 30 divisions for the adaptive grid |
| MOPSO/D | ω = 0.4, c1 = c2 = 2.0, ηm = 20, pm = 0.05, T = 20 |
| MOEA/D-DE | ηm = 20, pm = 1/D, T = 20, CR = 1.0, F = 0.5 |
| MMOPSO | ω ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.0], ηm = 20, pm = 1/D, ηc = 20, pc = 0.9, δ = 0.9 |
| CMOPSO | ηm = 20, pm = 1/D, γ = 10 |
| DMO-QPSO | α decreasing linearly from 1.0 to 0.5, α0 = 2.0, dlow = 0.05, nGS = 10 |
Table 2. The mean, minimum and standard deviation of IGD values on F problems, where the best value for each test case is highlighted with a bold background.
| IGD | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | Total | Final Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MOPSO mean | 0.0816 | 0.4617 | 0.1897 | 0.1832 | 0.1401 | 0.6410 | 0.5036 | 0.3653 | 0.4299 | | |
| min | 0.0661 | 0.3105 | 0.1378 | 0.1442 | 0.1118 | 0.4491 | 0.3329 | 0.3041 | 0.1763 | | |
| SD | 0.0108 | 0.1070 | 0.0412 | 0.0226 | 0.0213 | 0.0533 | 0.2048 | 0.0490 | 0.1221 | | |
| rank | 3− | 5− | 3− | 4− | 2− | 4− | 3− | 2+ | 4− | 30 | 4 |
| MOPSO/D mean | 0.2313 | 1.0223 | 0.5754 | 0.5748 | 0.5558 | 0.9781 | 1.5032 | 1.1271 | 1.0358 | | |
| min | 0.2159 | 0.8869 | 0.5013 | 0.5030 | 0.4757 | 0.6580 | 1.0491 | 0.8681 | 0.8319 | | |
| SD | 0.0084 | 0.0757 | 0.0449 | 0.0331 | 0.0351 | 0.1810 | 0.2076 | 0.1334 | 0.0964 | | |
| rank | 6− | 6− | 6− | 6− | 6− | 5− | 6− | 6− | 6− | 53 | 6 |
| MOEA/D-DE mean | 0.1180 | 0.2394 | 0.1987 | 0.1702 | 0.1405 | 1.5602 | 0.2139 | 0.3046 | 0.1982 | | |
| min | 0.0820 | 0.1492 | 0.1464 | 0.1159 | 0.0891 | 1.4111 | 0.0792 | 0.2053 | 0.0971 | | |
| SD | 0.0175 | 0.0775 | 0.0357 | 0.0294 | 0.0324 | 0.0577 | 0.0734 | 0.0645 | 0.0498 | | |
| rank | 4− | 3− | 4− | 3− | 3− | 6− | 1+ | 1+ | 2− | 27 | 3 |
| MMOPSO mean | 0.0481 | 0.1615 | 0.1761 | 0.1626 | 0.2365 | 0.3007 | 0.5140 | 0.4509 | 0.2285 | | |
| min | 0.0369 | 0.1156 | 0.0815 | 0.1072 | 0.0942 | 0.1632 | 0.3255 | 0.2344 | 0.1135 | | |
| SD | 0.0102 | 0.0625 | 0.0784 | 0.0233 | 0.0688 | 0.0856 | 0.1059 | 0.1238 | 0.1574 | | |
| rank | 2≈ | 2− | 2− | 2− | 4− | 2− | 4− | 3≈ | 3− | 24 | 2 |
| CMOPSO mean | 0.1769 | 0.4350 | 0.3142 | 0.3467 | 0.2820 | 0.4895 | 0.6141 | 0.6237 | 0.4303 | | |
| min | 0.1540 | 0.3675 | 0.2762 | 0.2876 | 0.2626 | 0.3850 | 0.5204 | 0.5402 | 0.3003 | | |
| SD | 0.0087 | 0.0365 | 0.0207 | 0.0269 | 0.0133 | 0.0643 | 0.0635 | 0.0542 | 0.0426 | | |
| rank | 5− | 4− | 5− | 5− | 5− | 3− | 5− | 5− | 5− | 42 | 5 |
| DMO-QPSO mean | 0.0404 | 0.1023 | 0.0884 | 0.0898 | 0.0696 | 0.2608 | 0.4283 | 0.4676 | 0.0938 | | |
| min | 0.0359 | 0.0962 | 0.0787 | 0.0810 | 0.0635 | 0.1634 | 0.2210 | 0.2686 | 0.0803 | | |
| SD | 0.0030 | 0.0036 | 0.0071 | 0.0088 | 0.0035 | 0.0248 | 0.1160 | 0.0907 | 0.0108 | | |
| rank | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 4 | 1 | 13 | 1 |
+, −, and ≈ denote that the performance of the corresponding algorithm is significantly better than, worse than, and similar to that of DMO-QPSO, respectively, according to the Wilcoxon rank sum test with α = 0.05.
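The significance symbols in Tables 2 and 3 come from the two-sided Wilcoxon rank sum test at α = 0.05. A self-contained sketch using the normal approximation (and assuming no tied samples) is given below; in practice a library routine such as scipy.stats.ranksums would be used instead.

```python
import math

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank sum test, normal approximation, no ties.
    Returns the z statistic and the p-value."""
    n1, n2 = len(x), len(y)
    data = list(x) + list(y)
    order = sorted(range(n1 + n2), key=lambda i: data[i])
    ranks = [0] * (n1 + n2)
    for r, i in enumerate(order):
        ranks[i] = r + 1                       # ranks start at 1
    w = sum(ranks[:n1])                        # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

With 30 independent runs per algorithm, as used in these tables, a p-value below 0.05 marks the IGD difference as significant (+ or −); otherwise the performance is reported as similar (≈).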
Table 3. The mean, minimum and standard deviation of IGD values on UF problems, where the best value for each test case is highlighted with a bold background.
| IGD | UF4 | UF5 | UF6 | UF7 | UF9 | UF10 | Total | Final Rank |
|---|---|---|---|---|---|---|---|---|
| MOPSO mean | 0.1640 | 2.8297 | 2.8264 | 0.5303 | 0.6590 | 1.6258 | | |
| min | 0.1483 | 2.2756 | 2.3398 | 0.2036 | 0.5393 | 1.0761 | | |
| SD | 0.0057 | 0.3514 | 0.3608 | 0.1343 | 0.0644 | 0.3764 | | |
| rank | 4− | 5− | 5− | 4− | 4− | 2+ | 24 | 4 |
| MOPSO/D mean | 0.1672 | 4.3807 | 4.3793 | 1.0652 | 2.5708 | 12.7750 | | |
| min | 0.1592 | 3.1642 | 3.1732 | 0.7175 | 2.0591 | 10.8881 | | |
| SD | 0.0035 | 0.3636 | 0.3645 | 0.1386 | 0.2999 | 0.9072 | | |
| rank | 5− | 6− | 6− | 6− | 6− | 6− | 35 | 6 |
| MOEA/D-DE mean | 0.0994 | 1.7532 | 1.7616 | 0.1817 | 0.5971 | 1.4303 | | |
| min | 0.0879 | 0.9801 | 0.9003 | 0.0883 | 0.4705 | 1.0272 | | |
| SD | 0.0052 | 0.5473 | 0.5481 | 0.0717 | 0.0640 | 0.3025 | | |
| rank | 1+ | 3− | 3− | 2− | 3− | 1+ | 13 | 2 |
| MMOPSO mean | 0.7820 | 0.9039 | 0.8107 | 0.2905 | 0.4661 | 2.3204 | | |
| min | 0.5234 | 0.6526 | 0.5599 | 0.1019 | 0.3527 | 1.7783 | | |
| SD | 0.0993 | 0.1261 | 0.1336 | 0.1706 | 0.0500 | 0.4342 | | |
| rank | 6− | 2− | 1≈ | 3− | 2− | 4− | 18 | 3 |
| CMOPSO mean | 0.1582 | 2.7759 | 2.7580 | 0.5738 | 1.4510 | 8.1102 | | |
| min | 0.1491 | 2.2412 | 2.1983 | 0.5187 | 0.9924 | 6.0312 | | |
| SD | 0.0031 | 0.1728 | 0.1794 | 0.0343 | 0.1715 | 0.7816 | | |
| rank | 3− | 4− | 4− | 5− | 5− | 5− | 25 | 5 |
| DMO-QPSO mean | 0.1108 | 0.8548 | 0.8562 | 0.0765 | 0.3002 | 1.8106 | | |
| min | 0.1052 | 0.2345 | 0.2400 | 0.0602 | 0.2049 | 1.5177 | | |
| SD | 0.0029 | 0.6681 | 0.6658 | 0.0216 | 0.0552 | 0.1658 | | |
| rank | 2 | 1 | 2 | 1 | 1 | 3 | 10 | 1 |
+, −, and ≈ denote that the performance of the corresponding algorithm is significantly better than, worse than, and similar to that of DMO-QPSO, respectively, according to the Wilcoxon rank sum test with α = 0.05.
Table 4. The final rank of different algorithms on F and UF problems.
| Algorithms | F a | UF b | Total c | Final Rank |
|---|---|---|---|---|
| MOPSO | 30 | 24 | 54 | 4 |
| MOPSO/D | 53 | 35 | 88 | 6 |
| MOEA/D-DE | 27 | 13 | 40 | 2 |
| MMOPSO | 24 | 18 | 42 | 3 |
| CMOPSO | 42 | 25 | 67 | 5 |
| DMO-QPSO | 13 | 10 | 23 | 1 |
a The numbers in this column are taken from the "Total" column in Table 2. b The numbers in this column are taken from the "Total" column in Table 3. c The numbers in this column are the sum of the "F" and "UF" columns.
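The C-metric reported in Tables 5–9 measures, for a pair of solution sets (A, B), the fraction of B dominated by at least one member of A; larger C(A, B) together with smaller C(B, A) favors A. A minimal sketch under the usual minimization convention is given below (the function names are ours):

```python
import numpy as np

def dominates(a, b):
    """True if a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def c_metric(A, B):
    """C(A, B): fraction of solutions in B dominated by some solution in A."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    dominated = sum(1 for b in B if any(dominates(a, b) for a in A))
    return dominated / len(B)
```

Note that C(A, B) and C(B, A) need not sum to one, which is why both columns are reported in each of Tables 5–9.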
Table 5. Average C-metric between DMO-QPSO (A) and MOPSO (B).
| Problems | DMO-QPSO | MOPSO |
|---|---|---|
| F1 | 0.0676 | 0 |
| F2 | 0.5282 | 0 |
| F3 | 0.0195 | 0.0027 |
| F4 | 0.0381 | 0.0025 |
| F5 | 0.0445 | 0.0033 |
| F6 | 0.0200 | 0.0075 |
| F7 | 0.1795 | 0.0995 |
| F8 | 0 | 0.2179 |
| F9 | 0.3643 | 0 |
| UF4 | 0.0692 | 0 |
| UF5 | 0.7500 | 0 |
| UF6 | 0.7320 | 0 |
| UF7 | 0.2573 | 0 |
| UF9 | 0.0250 | 0.0230 |
| UF10 | 0.0256 | 0.1763 |
Table 6. Average C-metric between DMO-QPSO (A) and MOPSO/D (B).
| Problems | DMO-QPSO | MOPSO/D |
|---|---|---|
| F1 | 0.1900 | 0 |
| F2 | 0.8125 | 0 |
| F3 | 0.5473 | 0 |
| F4 | 0.3550 | 0 |
| F5 | 0.5248 | 0 |
| F6 | 0.5165 | 0 |
| F7 | 0.5589 | 0 |
| F8 | 0.2445 | 0 |
| F9 | 0.7254 | 0 |
| UF4 | 0.0800 | 0 |
| UF5 | 0.8675 | 0 |
| UF6 | 0.8537 | 0 |
| UF7 | 0.7155 | 0 |
| UF9 | 0.9965 | 0 |
| UF10 | 0.9970 | 0 |
Table 7. Average C-metric between DMO-QPSO (A) and MOEA/D-DE (B).
| Problems | DMO-QPSO | MOEA/D-DE |
|---|---|---|
| F1 | 0.0440 | 0.0066 |
| F2 | 0.1405 | 0.0024 |
| F3 | 0.0345 | 0.0020 |
| F4 | 0.0179 | 0.0085 |
| F5 | 0.0250 | 0.0045 |
| F6 | 0.9265 | 0 |
| F7 | 0.0005 | 0.1944 |
| F8 | 0 | 0.2444 |
| F9 | 0.1930 | 0 |
| UF4 | 0.0005 | 0.0426 |
| UF5 | 0.4765 | 0.1103 |
| UF6 | 0.4628 | 0.1054 |
| UF7 | 0.0870 | 0.0079 |
| UF9 | 0.0170 | 0.0010 |
| UF10 | 0.0010 | 0.2475 |
Table 8. Average C-metric between DMO-QPSO (A) and MMOPSO (B).
| Problems | DMO-QPSO | MMOPSO |
|---|---|---|
| F1 | 0.0151 | 0.0135 |
| F2 | 0.1031 | 0.0132 |
| F3 | 0.1200 | 0.0732 |
| F4 | 0.1053 | 0.0779 |
| F5 | 0.0400 | 0.0065 |
| F6 | 0.0235 | 0.0020 |
| F7 | 0.1595 | 0.1459 |
| F8 | 0.0005 | 0.0687 |
| F9 | 0.2510 | 0.0662 |
| UF4 | 0.0505 | 0.0010 |
| UF5 | 0.2351 | 0.1693 |
| UF6 | 0.1398 | 0.1720 |
| UF7 | 0.1042 | 0.0299 |
| UF9 | 0.2505 | 0.0025 |
| UF10 | 0.0280 | 0.0171 |
Table 9. Average C-metric between DMO-QPSO (A) and CMOPSO (B).
| Problems | DMO-QPSO | CMOPSO |
|---|---|---|
| F1 | 0.1290 | 0.0045 |
| F2 | 0.3180 | 0 |
| F3 | 0.2517 | 0 |
| F4 | 0.1511 | 0.0139 |
| F5 | 0.2698 | 0 |
| F6 | 0.1811 | 0 |
| F7 | 0.2605 | 0.0050 |
| F8 | 0.0619 | 0.0406 |
| F9 | 0.2874 | 0 |
| UF4 | 0.0420 | 0 |
| UF5 | 0.8011 | 0 |
| UF6 | 0.7857 | 0 |
| UF7 | 0.3145 | 0 |
| UF9 | 0.8122 | 0 |
| UF10 | 0.9210 | 0 |
Table 10. The mean, minimum and standard deviation of IGD values for different algorithms with TCH or PBI on F problems.
| IGD | MOPSO/D-TCH | MOPSO/D | MOEA/D-DE-TCH | MOEA/D-DE | DMO-QPSO-TCH | DMO-QPSO |
|---|---|---|---|---|---|---|
| F1 mean | 0.2300 | 0.2313 | 0.1074 | 0.1180 | 0.0435 | 0.0404 |
| min | 0.2066 | 0.2159 | 0.0763 | 0.0820 | 0.0405 | 0.0359 |
| SD | 0.0102 | 0.0084 | 0.0169 | 0.0175 | 0.0022 | 0.0030 |
| F2 mean | 0.9740 | 1.0223 | 0.1396 | 0.2394 | 0.1015 | 0.1023 |
| min | 0.7179 | 0.8869 | 0.0896 | 0.1492 | 0.0863 | 0.0962 |
| SD | 0.0904 | 0.0757 | 0.0295 | 0.0775 | 0.0132 | 0.0036 |
| F3 mean | 0.5624 | 0.5754 | 0.1399 | 0.1987 | 0.0862 | 0.0884 |
| min | 0.4750 | 0.5013 | 0.0885 | 0.1464 | 0.0816 | 0.0787 |
| SD | 0.0416 | 0.0449 | 0.0317 | 0.0357 | 0.0029 | 0.0071 |
| F4 mean | 0.5712 | 0.5748 | 0.1313 | 0.1702 | 0.0922 | 0.0898 |
| min | 0.5041 | 0.5030 | 0.0981 | 0.1159 | 0.0851 | 0.0810 |
| SD | 0.0320 | 0.0331 | 0.0217 | 0.0294 | 0.0043 | 0.0088 |
| F5 mean | 0.5616 | 0.5558 | 0.1070 | 0.1405 | 0.0695 | 0.0696 |
| min | 0.4807 | 0.4757 | 0.0786 | 0.0891 | 0.0635 | 0.0635 |
| SD | 0.0388 | 0.0351 | 0.0196 | 0.0324 | 0.0033 | 0.0035 |
| F6 mean | 1.0426 | 0.9781 | 1.1151 | 1.5602 | 0.2699 | 0.2608 |
| min | 0.7149 | 0.6580 | 0.9453 | 1.4111 | 0.1326 | 0.1634 |
| SD | 0.1945 | 0.1810 | 0.1279 | 0.0577 | 0.0572 | 0.0248 |
| F7 mean | 1.4695 | 1.5032 | 0.1696 | 0.2139 | 0.4544 | 0.4283 |
| min | 0.9858 | 1.0491 | 0.0306 | 0.0792 | 0.2444 | 0.2210 |
| SD | 0.2208 | 0.2076 | 0.0708 | 0.0734 | 0.1042 | 0.1160 |
| F8 mean | 1.1057 | 1.1271 | 0.3246 | 0.3046 | 0.4861 | 0.4676 |
| min | 0.8696 | 0.8681 | 0.1830 | 0.2053 | 0.3763 | 0.2686 |
| SD | 0.0949 | 0.1334 | 0.0668 | 0.0645 | 0.0602 | 0.0907 |
| F9 mean | 0.9959 | 1.0358 | 0.1489 | 0.1982 | 0.1061 | 0.0938 |
| min | 0.8894 | 0.8319 | 0.0916 | 0.0971 | 0.0964 | 0.0803 |
| SD | 0.0620 | 0.0964 | 0.0566 | 0.0498 | 0.0103 | 0.0108 |
Table 11. The mean, minimum and standard deviation of IGD values for different algorithms with TCH or PBI on UF problems.
| IGD | MOPSO/D-TCH | MOPSO/D | MOEA/D-DE-TCH | MOEA/D-DE | DMO-QPSO-TCH | DMO-QPSO |
|---|---|---|---|---|---|---|
| UF4 mean | 0.1633 | 0.1672 | 0.0972 | 0.0994 | 0.1092 | 0.1108 |
| min | 0.1535 | 0.1592 | 0.0865 | 0.0879 | 0.1030 | 0.1052 |
| SD | 0.0045 | 0.0035 | 0.0075 | 0.0052 | 0.0041 | 0.0029 |
| UF5 mean | 4.3795 | 4.3807 | 1.7784 | 1.7532 | 0.9481 | 0.8548 |
| min | 3.5695 | 3.1642 | 0.9730 | 0.9801 | 0.1929 | 0.2345 |
| SD | 0.2964 | 0.3636 | 0.3492 | 0.5473 | 0.6835 | 0.6681 |
| UF6 mean | 4.3849 | 4.3793 | 1.7752 | 1.7616 | 0.9482 | 0.8562 |
| min | 3.5693 | 3.1732 | 0.9120 | 0.9003 | 0.1989 | 0.2400 |
| SD | 0.2963 | 0.3645 | 0.3462 | 0.5481 | 0.6848 | 0.6658 |
| UF7 mean | 1.0646 | 1.0652 | 0.1278 | 0.1817 | 0.0706 | 0.0765 |
| min | 0.8402 | 0.7175 | 0.0902 | 0.0883 | 0.0584 | 0.0602 |
| SD | 0.1267 | 0.1386 | 0.0293 | 0.0717 | 0.0225 | 0.0216 |
| UF9 mean | 2.7186 | 2.5708 | 0.5259 | 0.5971 | 0.4036 | 0.3002 |
| min | 2.1546 | 2.0591 | 0.4504 | 0.4705 | 0.3203 | 0.2049 |
| SD | 0.3487 | 0.2999 | 0.0608 | 0.0640 | 0.0549 | 0.0552 |
| UF10 mean | 13.368 | 12.7750 | 1.7087 | 1.4303 | 2.3519 | 1.8106 |
| min | 11.118 | 10.8881 | 1.0282 | 1.0272 | 2.0301 | 1.5177 |
| SD | 0.9952 | 0.9072 | 0.4163 | 0.3025 | 0.1609 | 0.1658 |
Table 12. The mean, minimum and standard deviation of IGD values for four DMO-QPSO variants on F problems.
| IGD | DMO-QPSO-TCH | DMO-QPSO-TCH-pm | DMO-QPSO | DMO-QPSO-pm |
|---|---|---|---|---|
| F1 mean | 0.0435 | 0.2766 | 0.0404 | 0.0407 |
| min | 0.0405 | 0.2739 | 0.0359 | 0.0371 |
| SD | 0.0022 | 0.0016 | 0.0030 | 0.0028 |
| F2 mean | 0.1015 | 0.0971 | 0.1023 | 0.0974 |
| min | 0.0863 | 0.0862 | 0.0962 | 0.0815 |
| SD | 0.0132 | 0.0069 | 0.0036 | 0.0049 |
| F3 mean | 0.0862 | 0.0845 | 0.0884 | 0.0836 |
| min | 0.0816 | 0.0748 | 0.0787 | 0.0775 |
| SD | 0.0029 | 0.0045 | 0.0071 | 0.0042 |
| F4 mean | 0.0922 | 0.0952 | 0.0898 | 0.0944 |
| min | 0.0851 | 0.0859 | 0.0810 | 0.0791 |
| SD | 0.0043 | 0.0073 | 0.0088 | 0.0130 |
| F5 mean | 0.0695 | 0.0697 | 0.0696 | 0.0688 |
| min | 0.0635 | 0.0646 | 0.0635 | 0.0643 |
| SD | 0.0033 | 0.0027 | 0.0035 | 0.0030 |
| F6 mean | 0.2699 | 0.2781 | 0.2608 | 0.2480 |
| min | 0.1326 | 0.1395 | 0.1634 | 0.1315 |
| SD | 0.0572 | 0.0579 | 0.0248 | 0.0486 |
| F7 mean | 0.4544 | 0.4231 | 0.4283 | 0.3838 |
| min | 0.2444 | 0.2020 | 0.2210 | 0.2104 |
| SD | 0.1042 | 0.1469 | 0.1160 | 0.1201 |
| F8 mean | 0.4861 | 0.5126 | 0.4676 | 0.4673 |
| min | 0.3763 | 0.3843 | 0.2686 | 0.3712 |
| SD | 0.0602 | 0.0589 | 0.0907 | 0.0618 |
| F9 mean | 0.1061 | 0.1025 | 0.0938 | 0.0919 |
| min | 0.0964 | 0.0905 | 0.0803 | 0.0789 |
| SD | 0.0103 | 0.0065 | 0.0108 | 0.0074 |
‘pm’ in the table stands for the polynomial mutation, ‘DMO-QPSO’ means DMO-QPSO using PBI, and ‘DMO-QPSO-pm’ means the DMO-QPSO using both PBI and polynomial mutation.
Table 13. The mean, minimum and standard deviation of IGD values for four DMO-QPSO variants on UF problems.
| IGD | DMO-QPSO-TCH | DMO-QPSO-TCH-pm | DMO-QPSO | DMO-QPSO-pm |
|---|---|---|---|---|
| UF4 mean | 0.1092 | 0.1080 | 0.1108 | 0.1117 |
| min | 0.1030 | 0.0978 | 0.1052 | 0.1068 |
| SD | 0.0041 | 0.0043 | 0.0029 | 0.0025 |
| UF5 mean | 0.9481 | 1.1249 | 0.8548 | 0.6837 |
| min | 0.1929 | 0.2737 | 0.2345 | 0.2278 |
| SD | 0.6835 | 0.6946 | 0.6681 | 0.6642 |
| UF6 mean | 0.9482 | 1.1217 | 0.8562 | 0.6886 |
| min | 0.1989 | 0.2738 | 0.2400 | 0.2260 |
| SD | 0.6848 | 0.6896 | 0.6658 | 0.6654 |
| UF7 mean | 0.0706 | 0.1134 | 0.0765 | 0.0767 |
| min | 0.0584 | 0.0577 | 0.0602 | 0.0628 |
| SD | 0.0225 | 0.1043 | 0.0216 | 0.0416 |
| UF9 mean | 0.4036 | 0.4015 | 0.3002 | 0.3029 |
| min | 0.3203 | 0.3204 | 0.2049 | 0.2151 |
| SD | 0.0549 | 0.0542 | 0.0552 | 0.0480 |
| UF10 mean | 2.3519 | 2.3720 | 1.8106 | 1.7806 |
| min | 2.0301 | 1.9064 | 1.5177 | 1.4807 |
| SD | 0.1609 | 0.2306 | 0.1658 | 0.1562 |
‘pm’ in the table stands for the polynomial mutation, ‘DMO-QPSO’ means DMO-QPSO using PBI, and ‘DMO-QPSO-pm’ means the DMO-QPSO using both PBI and polynomial mutation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

You, Q.; Sun, J.; Pan, F.; Palade, V.; Ahmad, B. DMO-QPSO: A Multi-Objective Quantum-Behaved Particle Swarm Optimization Algorithm Based on Decomposition with Diversity Control. Mathematics 2021, 9, 1959. https://doi.org/10.3390/math9161959
