Article

An Improved Multi-Objective Particle Swarm Optimization Algorithm Based on Angle Preference

1 School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212100, China
2 School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
3 Jiangsu Key Laboratory of Security Technology for Industrial Cyberspace, Zhenjiang 212013, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(12), 2619; https://doi.org/10.3390/sym14122619
Submission received: 1 October 2022 / Revised: 29 November 2022 / Accepted: 3 December 2022 / Published: 10 December 2022
(This article belongs to the Special Issue Algorithms for Optimization 2022)

Abstract

Multi-objective particle swarm optimization (MOPSO) algorithms based on angle preference provide a set of preferred solutions by incorporating a user's preference. However, since the search mechanism is stochastic and asymmetric, traditional angle preference-based MOPSO algorithms still tend to fall into local optima and lack sufficient selection pressure on excellent individuals. In this paper, an improved MOPSO algorithm based on angle preference, called IAPMOPSO, is proposed to alleviate these problems. First, to create a stricter partial order among the non-dominated solutions, reference vectors are established in the preference region, and the adaptive penalty-based boundary intersection (PBI) value is used to update the external archive. Second, to effectively prevent the swarm from falling into local optima, an adaptive preference angle is designed to increase the diversity of the population. Third, neighborhood individuals are selected for each particle to update the individual optimum, increasing the information exchange among the particles. With the proposed angle preference-based external archive update strategy, solutions with a smaller PBI value are given higher priority for selection, and thus the selection pressure on excellent individuals is enhanced. To increase population diversity, an adaptive preference angle adjustment strategy that gradually narrows the preferred area, and an individual optimum update strategy that updates the individual optimum according to the information of neighborhood individuals, are presented. Experimental results on benchmark test functions and GEM stock data verify the effectiveness and efficiency of the proposed method.

1. Introduction

In real-world industrial applications, multiple objectives often need to be optimized at the same time; such problems are called multi-objective optimization problems (MOPs) [1,2,3]. To solve MOPs, many traditional multi-objective evolutionary algorithms have been proposed, including the elitist non-dominated sorting genetic algorithms (NSGA-II and NSGA-III) [4,5], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [6], the strength Pareto evolutionary algorithm 2 (SPEA2) [7], the augmented ε-constraint method (AUGMECON) [8], the weighted stress function method (WSFM) [9], the greedy randomized adaptive search procedure (GRASP) [10], multi-objective adaptive guided differential evolution (MOAGDE) [11], the multi-objective manta ray foraging optimizer (MOMRFO) [12], fixed set search (FSS) [13], and multi-objective particle swarm optimization (MOPSO) [14,15].
Owing to its good convergence and few parameters, MOPSO has attracted much attention in the optimization field. Many MOPSO algorithms and variants have been proposed, mainly focusing on aggregating, lexicographic ordering, sub-population, and dominance-based approaches [16]. In [17], a novel multi/many-objective particle swarm optimization algorithm based on a competition mechanism was proposed, which maintains population diversity via the maximum and minimum angles between ordinary and extreme individuals. A new particle swarm optimization (PSO) for solving multimodal MOPs was proposed in [18], which featured an index-based ring topology to induce stable niches that allow the identification of a larger number of Pareto-optimal solutions, and adopted a special crowding distance concept as a density metric in the decision and objective spaces. A dynamic Pareto bi-level multi-objective PSO (DPb-MOPSO) algorithm with two parallel optimization levels was proposed in [19]. In [20], an improved MOPSO named f-MOPSO/Div was proposed to solve a real-world optimal conjunctive water use problem, in which three improvements were applied to the f-MOPSO to mitigate its shortcomings. However, as the number of objectives increases, the number of non-dominated solutions grows exponentially, significantly affecting the performance of these algorithms. More importantly, in practical applications, decision-makers are only interested in some areas of the Pareto front rather than the entire front [21,22]. Moreover, the search mechanism in these methods is generally stochastic and asymmetric, which may lead to worse convergence performance. To reduce the search randomness, preference-based MOPSO was proposed by incorporating users' preferences, which leads the particles to move toward the preferred region rather than the whole Pareto optimal region [5,23]. Such methods have, therefore, attracted much attention in the heuristic optimization field.
Many preference-based MOPSO algorithms exist; they can be classified into a priori, interactive, and a posteriori methods, depending on when the preference information is provided. In [24], bipolar preferences were integrated into MOPSO, considering both positive and negative preference information of decision-makers; this alleviates the performance deterioration caused by the rapid growth of the proportion of non-dominated solutions. In [25], preference information was integrated into MOPSO, where a self-adjustable preference radius was calculated to build a new preference relation model. In [26], the importance relationship among the objectives was integrated into MOPSO, which maintained the diversity of solutions through a grid. In [27], fuzzy evaluation information was transformed into objective preference information, yielding stronger practicality. In [28], reference points were combined with reference regions, and the area of the reference regions was adjusted dynamically with the movement of the reference points. In [29], double thresholds were used to control the number of non-inferior solutions and the distribution of the solution set. The above methods have their own characteristics and can obtain a subset of the Pareto optimal solutions that reflects user preferences. However, they still have some problems: the performance of the algorithm is affected by the position of the reference point, the preference area is not easy to control, and in practical applications the preference information should express the user's preference simply and accurately. In [30], angle preference for ε-Pareto domination was integrated into MOPSO, which deletes the Pareto optimal solutions located in the non-preferred region. Owing to its simple setting and flexible control of the preferred region, angle preference-based MOPSO has been favored by many researchers.
With angle preference, the performance of the algorithm is not affected by the position of the preference point. However, it is difficult to judge the relative merits of two non-dominated solutions in the preference region. When the number of objectives increases, the number of non-dominated solutions increases exponentially. As the proportion of non-dominated solutions in the preference region grows, the selection pressure on excellent individuals becomes so small that the performance of the algorithm decreases. Therefore, the selection pressure on excellent individuals should be enhanced.
In this paper, an improved MOPSO algorithm based on angle preference (IAPMOPSO) is proposed to alleviate the problem that traditional angle preference-based MOPSO lacks sufficient selection pressure on excellent individuals. In addition, three new strategies are proposed in the IAPMOPSO algorithm to alleviate the shortcoming that MOPSO easily falls into local optima. On the one hand, to enhance the selection pressure on excellent individuals, the adaptive penalty-based boundary intersection (PBI) value is combined with MOPSO to update the external archive. On the other hand, to improve the ability of solutions to escape local optima, the individual optimum values of neighborhood individuals are integrated to update each particle's individual optimum, and the preference angle is adjusted adaptively during the iterations. With these three strategies, the improved algorithm enhances the selection pressure on excellent solutions and effectively improves the ability of the algorithm to jump out of local optima.
The remainder of this paper is organized as follows. Section 2 introduces multi-objective optimization, PSO, and angle preference domination relationship. The proposed approach is presented in Section 3. The experiments and discussion are described in detail in Section 4. Finally, the conclusions are given in Section 5.

2. Preliminaries

2.1. Multi-Objective Optimization

Most complex decision-making problems in real-world applications can be formulated as multi-objective optimization problems (MOPs). The mathematical model of an MOP can be described as follows:
$$\begin{aligned} &\min F(x) = \min\left(f_1(x), f_2(x), \ldots, f_m(x)\right) \\ &\text{s.t.}\ \ g_j(x) \le 0,\ \ j = 1, 2, \ldots, p \\ &\qquad\; h_k(x) = 0,\ \ k = 1, 2, \ldots, q \end{aligned} \tag{1}$$
where $x = (x_1, x_2, \ldots, x_n)$ is the n-dimensional decision vector, which constitutes the decision space; $f_i(x)$ (i = 1, 2, ..., m) are the objective functions and m is the number of objectives; $g_j(x)$ are the inequality constraints and p is the number of inequality constraints; $h_k(x)$ are the equality constraints and q is the number of equality constraints.
To better understand MOPs, the following definitions related to Pareto optimality are introduced.
Definition 1 (Pareto Dominance).
A vector $x = (x_1, x_2, \ldots, x_m)$ is said to dominate $y = (y_1, y_2, \ldots, y_m)$ (denoted by x ≺ y) if and only if
$$\forall i \in \{1, 2, \ldots, m\},\ f_i(x) \le f_i(y)\ \ \wedge\ \ \exists j \in \{1, 2, \ldots, m\},\ f_j(x) < f_j(y) \tag{2}$$
Definition 2 (Pareto Optimal).
A vector $x \in F$ is said to be Pareto optimal if and only if
$$\neg \exists\, y \in F,\ y \prec x \tag{3}$$
Definition 3 (Optimal Set).
A Pareto optimal set P, which contains all the Pareto optimal solutions, is defined as
$$P = \{\, x \in F \mid \neg \exists\, y \in F,\ F(y) \prec F(x) \,\} \tag{4}$$
Definition 4 (Pareto Front).
The Pareto front (PF) is defined as
$$PF = \{\, v = F(x) \mid x \in P \,\} \tag{5}$$
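For concreteness, the Pareto dominance and Pareto optimal set definitions above can be sketched in Python (the function names are illustrative, not part of the paper's implementation):

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization), per Definition 1."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def pareto_set(points):
    """The Pareto optimal subset of a finite set of objective vectors, per Definition 3."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```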

2.2. Particle Swarm Optimization

Inspired by the predatory behavior of birds, PSO was first proposed in 1995 [31]. PSO has been widely used and researched by many scholars due to its good global search ability and ease of implementation [32,33,34]. Each individual in the swarm is called a particle and each particle in PSO represents a potential solution. The algorithm controls the movement of the particles toward the optimal solutions by updating their position and velocity.
The velocity and position update equations are as follows:
$$v_i(t+1) = w \times v_i(t) + c_1 \times r_1 \times (x_{p,i} - x_i(t)) + c_2 \times r_2 \times (x_{g,i} - x_i(t)) \tag{6}$$
$$x_i(t+1) = x_i(t) + v_i(t+1) \tag{7}$$
where i = 1, ..., d and d is the dimension of the search space. w > 0 is the inertia weight: the larger w, the stronger the global search capability of the algorithm, while a smaller w gives the algorithm stronger local search capability. $c_1, c_2 > 0$ are the acceleration coefficients, which determine the influence of the particle's own experience and the swarm's experience on the particle trajectory; a larger $c_1$ favors exploration, while a larger $c_2$ encourages exploitation. $r_1$ and $r_2$ are two random variables uniformly distributed in (0, 1), providing randomness to the flight of the swarm. $x_{p,i}$ is the individual optimum, which stores the position corresponding to the particle's best individual performance, and $x_{g,i}$ is the global optimum, which corresponds to the particle with the best overall performance in the swarm.
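A minimal sketch of one update step per Equations (6) and (7), for a single particle (the default parameter values here are illustrative, not the paper's settings):

```python
import random

def pso_step(x, v, x_p, x_g, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for a single particle.
    x, v: current position and velocity; x_p, x_g: individual and global optima."""
    new_v, new_x = [], []
    for i in range(len(x)):
        r1, r2 = random.random(), random.random()
        vi = w * v[i] + c1 * r1 * (x_p[i] - x[i]) + c2 * r2 * (x_g[i] - x[i])
        new_v.append(vi)
        new_x.append(x[i] + vi)
    return new_x, new_v
```

Note that a particle already located at both its individual and global optimum with zero velocity remains stationary, as the update equations imply.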

2.3. Angle Preference Domination Relationship

In the traditional angle preference domination relationship, the objective space is divided into a preferred region and a non-preferred region based on the angle θ(r, x) between the reference point r and the current objective vector f(x). If θ is less than α, f(x) is considered to be in the preferred region; otherwise, it is in the non-preferred region. The angle θ(r, x) can be mathematically described as follows:
$$\theta = \arccos\left(\frac{\sum_{i=1}^{m} |f_i| \times |r_i|}{\sqrt{\sum_{i=1}^{m} f_i^2} \times \sqrt{\sum_{i=1}^{m} r_i^2}}\right) \tag{8}$$
where $f_i$ represents the value of the ith dimension of the individual's objective vector, and $r_i$ represents the value of the ith dimension of the reference point. Angle preference dominance is defined as follows:
Given any two individuals x and y in the population, x is said to angle-preference dominate y if one of the following statements holds true:
First, x Pareto dominates y. Second, x and y are Pareto-equivalent, x is in the preferred region, and y is in the non-preferred region.
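The angle computation of Equation (8) and the preferred-region test can be sketched as follows (the helper names are ours; the ratio is clamped defensively against floating-point drift above 1.0):

```python
import math

def angle(f, r):
    """Angle between objective vector f and reference point r, per Equation (8)."""
    num = sum(abs(fi) * abs(ri) for fi, ri in zip(f, r))
    den = math.sqrt(sum(fi * fi for fi in f)) * math.sqrt(sum(ri * ri for ri in r))
    # clamp to the acos domain to guard against rounding slightly above 1.0
    return math.acos(min(1.0, num / den))

def in_preferred_region(f, r, alpha):
    """True if f lies within preference angle alpha of the reference point r."""
    return angle(f, r) < alpha
```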

3. The Proposed MOPSO Algorithm Based on Angle Preference

To alleviate the problems that traditional MOPSO algorithms based on angle preference (TAPMOPSO) lack sufficient selection pressure on excellent individuals and easily fall into local optima, an improved multi-objective particle swarm optimization algorithm based on angle preference (IAPMOPSO) is proposed in this study. The framework of the IAPMOPSO algorithm is the same as that of the TAPMOPSO algorithm, but IAPMOPSO introduces three new strategies to enhance its search performance: the update strategy of the angle preference-based external archive, the update strategy of the individual optimum, and the adaptive adjustment strategy of the preference angle. To increase the selection pressure on excellent individuals, an angle preference-based external archive update method is adopted. To increase the diversity of the solutions, an individual optimum update strategy and an adaptive preference angle adjustment strategy are proposed. These three strategies and the steps of the IAPMOPSO algorithm are detailed in the following subsections.

3.1. The Update Strategy of the Angle Preference-Based External Archive

In the traditional angle preference-dominated relationship, when the number of non-dominated solutions in the preference region increases, the selection pressure of the algorithm on the solutions is greatly reduced; the algorithm's performance therefore degrades while its computational cost increases. As can be seen from Figure 1, solutions a and b are in the preferred region, are Pareto-equivalent, and do not angle-preference dominate each other. This may be acceptable when there are only two particles; however, when there are more particles, keeping them all in the same region reduces the number of solutions stored in other regions and degrades the performance of the algorithm.
To enhance the selection pressure of the algorithm on the candidate solutions, an angle preference-based external archive update strategy based on the adaptive PBI is presented in this study, in which a set of uniformly distributed reference vectors λ1, λ2, ..., λn is generated in the angle preference region, and the candidate solutions are associated with the reference vectors. After calculating the value of the parameter d with Equation (9), the solution with the smallest d is selected to enter the external archive, where θ is the adaptive penalty factor. Figure 2 illustrates how d is calculated between the candidate solution a and the reference vector λ1, where d1 is the projection length and d2 is the vertical distance. The further the candidate deviates from the reference vector, the greater the penalty. d and θ are defined as:
$$\begin{cases} d = d_1 + \theta \times d_2 \\[4pt] \theta(t) = \theta_{min} + (\theta_{max} - \theta_{min}) \times 2^{-\left(\frac{t}{b \times t_{max}}\right)^{k}} \end{cases} \tag{9}$$
where b and k are greater than 0 and control the rate of decline of the penalty factor θ, $\theta_{max}$ and $\theta_{min}$ are the upper and lower bounds of the adaptive penalty factor, respectively, and t and $t_{max}$ are the current iteration number and the maximum number of iterations, respectively.
It can be seen from Equation (9) that θ(t) gradually decreases, and its rate of decline can be controlled; therefore, the penalty on $d_2$ also gradually decreases. The advantage is that a larger penalty on $d_2$ in the early stage of the algorithm places more emphasis on convergence, while a smaller penalty in the later stage places more emphasis on diversity.
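The PBI value and the adaptive penalty factor of Equation (9) can be sketched as follows (the negative exponent is inferred from the stated behavior that θ(t) decays from θ_max toward θ_min; the parameter defaults follow the settings reported in Section 4.1.1):

```python
import math

def pbi(f, lam, theta):
    """Adaptive PBI value d = d1 + theta * d2 for objective vector f and reference vector lam."""
    norm = math.sqrt(sum(l * l for l in lam))
    d1 = sum(fi * li for fi, li in zip(f, lam)) / norm            # projection length
    d2 = math.sqrt(sum((fi - d1 * li / norm) ** 2
                       for fi, li in zip(f, lam)))                # perpendicular distance
    return d1 + theta * d2

def adaptive_theta(t, t_max, theta_min=2.0, theta_max=12.0, b=0.5, k=3):
    """Penalty factor theta(t): starts at theta_max and decays toward theta_min."""
    return theta_min + (theta_max - theta_min) * 2 ** (-((t / (b * t_max)) ** k))
```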
The input of the algorithm is as follows: the external archive Ar, the reference point r, the preference angle α, and the parameters k and b. The output is the updated external archive Ar. The steps of the angle preference-based external archive update are described below.
Step 1: During each iteration, a set of uniformly distributed reference vectors are generated in the range of preference angle.
Step 2: Associate each particle in the external archive to be updated with its closest reference vector.
Step 3: Randomly select an unselected reference vector, select the candidate solution with the minimum d to be stored in the external archive, and remove the association between the candidate solution and the reference vector. If no candidate solution is associated, the candidate solution closest to the reference vector is selected for the external archive. The selected candidate solution is removed from the population to be updated.
Step 4: When all the reference vectors have been selected, remove the selection trace of the reference vector and repeat Step 3 until the number of solutions reaches the capacity of the external archive Ar.
The corresponding algorithm for the update strategy of the angle preference-based external archive is depicted as Algorithm 1.
Algorithm 1 Update external archive.
Input: Ar (external archive), λ (reference vector), α (preference angle)
Output: Ar′ (updated external archive)
Generate a set of uniformly distributed reference vectors in the range of the preference angle.
For i = 1 to size(Ar) do
  Associate the closest reference vector with particle Pi.
End
While size(Ar′) < size(Ar)
  While not all the reference vectors have been selected
    λi ← Randomly select an unselected reference vector
    Pi ← Select the particle with minimum d for λi
    Remove Pi from the search particles
    Ar′ = Ar′ ∪ {Pi}
    Associate λi with Pi
  End
  Remove the selection trace of the reference vectors.
End
With the proposed angle preference-based external archive update strategy, solutions with a smaller d are given higher priority for selection. Compared with the traditional external archive update strategy, the proposed strategy performs a double selection to update the external archive, which brings the solutions in the external archive closer to the true PF; thus, the selection pressure on excellent individuals is enhanced. In addition, the convergence and diversity of the algorithm can be balanced by adjusting the parameters k and b. In this study, a high selection pressure on convergence is exerted to push the population towards the PF at the early stage of the iterations, and a high diversity pressure allows the algorithm to find well-distributed solutions at the last stage.
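The update procedure of Algorithm 1 can be sketched as follows (a simplified, self-contained illustration; the association rule, data layout, and helper names are our assumptions rather than the paper's exact implementation):

```python
import math
import random

def _pbi(f, lam, theta):
    """PBI value d = d1 + theta * d2, see Equation (9)."""
    norm = math.sqrt(sum(l * l for l in lam))
    d1 = sum(fi * li for fi, li in zip(f, lam)) / norm
    d2 = math.sqrt(sum((fi - d1 * li / norm) ** 2 for fi, li in zip(f, lam)))
    return d1 + theta * d2

def update_archive(candidates, ref_vectors, theta, capacity):
    """Round-robin over reference vectors, each time keeping the associated
    candidate with minimum d, until the archive is full (Steps 1-4)."""
    # Step 2: associate each candidate with its closest reference vector
    # (a huge theta makes d dominated by the perpendicular distance d2)
    assoc = {i: [] for i in range(len(ref_vectors))}
    for f in candidates:
        i = min(range(len(ref_vectors)), key=lambda i: _pbi(f, ref_vectors[i], 1e9))
        assoc[i].append(f)
    archive, target = [], min(capacity, len(candidates))
    while len(archive) < target:                      # Step 4: repeat until full
        order = list(range(len(ref_vectors)))
        random.shuffle(order)                         # Step 3: vectors in random order
        for i in order:
            if len(archive) >= target:
                break
            if assoc[i]:
                best = min(assoc[i], key=lambda f: _pbi(f, ref_vectors[i], theta))
                assoc[i].remove(best)
                archive.append(best)
    return archive
```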

3.2. The Update Strategy of Individual Optimum

Because of the lack of information exchange between individuals in the swarm, the MOPSO algorithm easily falls into local optima on optimization problems with a complex Pareto front. To increase the communication between particles, a novel method of updating the individual optimum is adopted. When updating the individual optimum of a particle, the information of its neighborhood individuals is fully utilized: the individual optimum of the particle is compared with the individual optima of its neighborhood individuals and updated accordingly, thereby increasing the information exchange between particles.
Therefore, the neighborhood individuals of each particle need to be found. A neighborhood individual is not simply the particle closest in space; rather, it is a particle associated with the reference vector nearest to the particle's own associated reference vector. As shown in Figure 3, there are three reference vectors, λ1, λ2, and λ3; the reference vector associated with a is λ1, the nearest reference vector to λ1 is λ2, and the candidate solutions associated with λ2 are b and c. Thus, the neighborhood individuals of a are b and c.
The steps for updating individual optimums are described below.
Step 1: Associate the particle with the nearest reference vector.
Step 2: Select the neighborhood individuals of particles.
Step 3: From the neighborhood individuals of the particle, randomly take one particle and compare the two individual optimum values; take the non-dominated one as the particle's individual optimum. If neither dominates the other, randomly choose one.
Step 4: If the neighborhood individuals of the particle are empty, no operation is performed.
The algorithm for updating individual optimums is described as Algorithm 2.
Algorithm 2 Update individual optima
Input: P (population), λ (reference vector), α (preference angle)
Output: optima (updated individual optima)
For i = 1 to size(P) do
  λi ← Associate the closest reference vector with Pi
  λj ← Select the neighborhood vector of λi
  Ni ← Select the associated individuals of λj
  If Ni ≠ ∅
    Pn ← Randomly take a particle from Ni
    If there exists a dominance relationship between Pi and Pn
      optimai = non-dominated(Pi, Pn)
    Else
      optimai = Randomly select from (Pi, Pn)
    End
  End
End

3.3. The Adaptive Adjustment Strategy of Preference Angle

When the preference angle is too small, the swarm loses its diversity. To alleviate this problem, an adaptive adjustment strategy for the preference angle is presented in this study. In the early stage of the algorithm, a large preference angle is used to ensure the diversity of the population. As the number of iterations increases, the preference angle is gradually reduced so that the algorithm converges to the specified preference angle region. The preference angle is set as follows:
$$\alpha(t) = \alpha + (\pi - \alpha) \times 2^{-\left(\frac{c \times t}{t_{max}}\right)^{2}} \tag{10}$$
where c is the parameter used to adjust the speed of the preference angle change.
By adaptively adjusting the preference angle, the strategy ensures the diversity of the swarm in the early stage, which promotes the algorithm's exploration of the global optimum. As the number of iterations increases, the preference angle gradually decreases to make the swarm converge.
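The schedule of Equation (10) can be sketched as follows (the negative exponent is inferred from the described behavior: the angle starts near π and converges to the specified α; the default c follows Section 4.1.1):

```python
import math

def adaptive_alpha(alpha, t, t_max, c=7):
    """Preference angle at iteration t: starts near pi, shrinks toward alpha."""
    return alpha + (math.pi - alpha) * 2 ** (-((c * t / t_max) ** 2))
```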

3.4. The Steps of the Improved Angle Preference-Based MOPSO

The steps of the proposed algorithm with the above strategies are described below.
Step 1: Input the preferred angle and reference point. Initialize the positions and velocities of all particles and the corresponding parameters, and set the external archive to empty.
Step 2: Evaluate all particles according to the fitness function and add the set of non-dominated solutions into the external archive.
Step 3: Apply the polynomial mutation strategy, defined by Equation (11) and proposed in [35], to the population.
$$\begin{cases} v_k' = v_k + \delta (u_k - l_k) \\[4pt] \delta = \begin{cases} \left(2r + (1 - 2r)(1 - \delta_1)^{\eta_m + 1}\right)^{\frac{1}{\eta_m + 1}} - 1, & \text{if } r \le 0.5 \\[4pt] 1 - \left(2(1 - r) + 2(r - 0.5)(1 - \delta_2)^{\eta_m + 1}\right)^{\frac{1}{\eta_m + 1}}, & \text{if } r > 0.5 \end{cases} \end{cases} \tag{11}$$
where $\delta_1 = (v_k - l_k)/(u_k - l_k)$, $\delta_2 = (u_k - v_k)/(u_k - l_k)$, and r is a random number in the interval [0, 1]. $\eta_m$ is the distribution index. $v_k$ is a parent individual, and $u_k$ and $l_k$ are the upper and lower limits of the particles' velocity, respectively.
Step 4: Update individual optimum value with the strategy proposed in Section 3.2.
Step 5: In each iteration, the preference angle is adaptively adjusted with the strategy proposed in Section 3.3.
Step 6: Select the global leader and update the velocities and positions of the particles according to Equations (6) and (7).
Step 7: Update the external archive with the strategy proposed in Section 3.1.
Step 8: If the termination condition is satisfied, the PF will be output, otherwise, turn to Step 3.
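The polynomial mutation of Equation (11), as applied in Step 3, can be sketched for a single variable as follows (a standard formulation; the mutated value always stays within the given bounds):

```python
import random

def polynomial_mutation(v, lower, upper, eta_m=18):
    """Polynomial mutation of one variable v within [lower, upper], per Equation (11)."""
    r = random.random()
    d1 = (v - lower) / (upper - lower)
    d2 = (upper - v) / (upper - lower)
    if r <= 0.5:
        delta = (2 * r + (1 - 2 * r) * (1 - d1) ** (eta_m + 1)) ** (1 / (eta_m + 1)) - 1
    else:
        delta = 1 - (2 * (1 - r) + 2 * (r - 0.5) * (1 - d2) ** (eta_m + 1)) ** (1 / (eta_m + 1))
    return v + delta * (upper - lower)
```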
The proposed algorithm is described as Algorithm 3.
Algorithm 3 The proposed algorithm (IAPMOPSO)
Input: N (population size), λ (reference vector), α (preference angle), maxgen (maximal generation number)
Output: Ar (archive)
P ← InitializeParticles(N)
Ar = ∅
Ar = Ar ∪ {non-dominated solutions}
For i = 1 to maxgen do
  Apply the polynomial mutation strategy to the particles according to Equation (11).
  optima = Update individual optima(P, λ, α) as proposed in Algorithm 2.
  Update α with the strategy proposed in Section 3.3.
  Select the global leader and update the velocities and positions of the particles according to Equations (6) and (7).
  Ar = Update external archive(Ar, λ, α) as proposed in Algorithm 1.
End
The improved algorithm establishes a stricter partial order among the non-dominated solutions and enhances the selection pressure on excellent individuals by giving solutions with smaller PBI values higher priority for selection. Moreover, the tendency to fall into local optima is alleviated by the adaptive preference angle adjustment strategy, which gradually narrows the preferred area, together with the individual optimum update strategy, which updates the individual optimum according to the information of neighborhood individuals. The improved algorithm thus exerts stronger selection pressure on excellent individuals and has a better ability to jump out of local optima.

4. Experiments and Discussion

4.1. Comparison on Test Functions

4.1.1. Test Functions and Parameters Setting

In this section, several benchmark test functions, including ZDT (ZDT1-3) [36] and DTLZ (DTLZ2, DTLZ6, DTLZ7) [37], are used to test the proposed algorithm. The proposed algorithm is compared with TAPMOPSO, reference solution-based NSGA-II (rNSGAII) [38], and angle-dominated NSGA-II (AD-NSGAII) [39]. TAPMOPSO is an algorithm based on angle preference, which redefines the dominance relationship between individuals and gives priority to the individuals within the preference angle to guide the population to search toward the preference region. The rNSGAII defines a new variant of the Pareto dominance relation, which creates a strict partial order among Pareto-equivalent solutions and guides the search toward the interesting parts of the Pareto optimal region based on the decision maker’s preferences. The AD-NSGAII is an algorithm based on the angle relationship between individuals, which redefines the dominance relationship between individuals and the clustering distance.
In the experiments, the number of decision variables is set as 30 for ZDT1–ZDT3 and as 7 for DTLZ2, DTLZ6, and DTLZ7. The population size is set as 100, and the maximum number of generations is set as 200. The distribution index $\eta_m$ is set as 18, and the mutation probability $p_m$ is set as 0.01. For the proposed algorithm, $\theta_{min}$ and $\theta_{max}$ are set as 2 and 12, and k, b, and c are set as 3, 0.5, and 7, respectively. Ar is set as 100. The non-r-dominance threshold δ is set as 0.1. Each algorithm is run 20 times for each test problem. The experiments are conducted with MATLAB 2014a.
In the experiments, generational distance (GD) and hypervolume (HV) are used as comparison indicators. GD measures the distance between the obtained solution set and the true Pareto front of the problem: the larger the value, the farther the set is from the true front and the worse the convergence. HV is a comprehensive indicator that evaluates both the convergence and the diversity of the approximate solution set; it measures the volume of the region in the objective space enclosed by the non-dominated solution set obtained by the algorithm and a reference point. Since rNSGAII introduces different preference mechanisms and a different control principle for the preference region, HV is not suitable for comparing its performance; for rNSGAII, only the GD indicator is presented.

4.1.2. Simulation Results and Discussions

Table 1 empirically provides the reference points and preferred angles for experiments. Table 2 shows the mean and standard deviation values of GD, and Table 3 shows the mean and standard deviation values of HV. Solutions obtained by each algorithm among 20 runs with the median GD value and the position of solutions in the real Pareto front are shown in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9.
It can be observed from Table 2 and Table 3 that the proposed algorithm significantly outperforms the comparison algorithms on ZDT1, ZDT2, DTLZ2, and DTLZ7 in terms of both GD and HV values; on ZDT3, its performance is almost the same as that of the classic algorithm rNSGAII; and on DTLZ6, it is slightly worse than the comparison algorithms. From the overall simulation results, the proposed algorithm achieves the best results on most benchmark functions and is superior to TAPMOPSO, rNSGAII, and AD-NSGAII in both the convergence and the diversity of the resulting approximation sets. From these empirical results, it can be concluded that the proposed algorithm can obtain a set of accurate and well-distributed solutions on most MOPs.
In Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, the subfigures on the left side show the preferred solution set obtained by the corresponding algorithm, and the subfigures on the right side show the position of the solution set on the true PF. From these figures, the improved algorithm converges better to the true PF and obtains more evenly distributed solutions on most test functions.
In summary, IAPMOPSO achieves a smaller GD value and a larger HV value on most of the test functions, which means that the improved algorithm has better convergence and diversity than TAPMOPSO, rNSGAII, and AD-NSGAII. Only on ZDT3 and DTLZ6 is the improved algorithm slightly inferior to the comparison algorithms. The algorithm proposed in this paper can obtain a set of accurate and well-distributed solutions on most MOPs.

4.2. A Case Study of Portfolio Selection

The financial industry is particularly important in today's society and plays an important role in human economic activities. Since financial investment products differ in risk factors, returns, and time recovery costs, it is difficult for investors to choose the right investment solution. Integrating investor preference information into PSO and searching in the preference region can help investors select investment schemes effectively.
If there are $n$ assets in an investment portfolio, and the historical return of the $i$th asset is $r_i$ with investment weight $w_i$, the return on the investment portfolio $r_p$ is:

$$r_p = \sum_{i=1}^{n} w_i \times r_i$$
The expected return of the portfolio $E(r_p)$ is the weighted sum of the expected returns of the individual assets:

$$E(r_p) = \sum_{i=1}^{n} w_i \times E(r_i)$$
where $\sum_{i=1}^{n} w_i = 1$ and $E(r_i)$ is the expected return on the $i$th asset.
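As a concrete illustration, the weighted-sum formula above can be sketched in a few lines of Python; the three-asset weights and expected returns below are made-up numbers, not data from the experiment:

```python
def portfolio_expected_return(weights, expected_returns):
    """E(r_p) = sum_i w_i * E(r_i), with the weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in zip(weights, expected_returns))

# Hypothetical three-asset portfolio with illustrative expected returns.
w = [0.5, 0.3, 0.2]
er = [0.01, 0.02, 0.005]
print(portfolio_expected_return(w, er))  # 0.5*0.01 + 0.3*0.02 + 0.2*0.005 = 0.012
```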
According to Markowitz’s portfolio theory, the size of the investment risk is usually expressed by the variance: the larger the variance, the greater the risk [40]. The mathematical model is defined as follows:

$$\sigma_p^2 = \sum_{j=1}^{n} \sum_{i=1}^{n} w_i w_j \, \mathrm{Cov}(r_i, r_j) = \sum_{j=1}^{n} \sum_{i=1}^{n} w_i w_j \rho_{ij} \sigma_i \sigma_j$$
where $\sigma_i$ represents the standard deviation of the $i$th investment's return rate, $\mathrm{Cov}(r_i, r_j)$ represents the covariance of the $i$th and $j$th investment return rates, and $\rho_{ij}$ represents the correlation coefficient between them. The unbiased estimate of the correlation coefficient $\rho_{ij}$ is:

$$\rho_{ij} = \frac{1}{m-1} \sum_{k=1}^{m} \frac{(r_{i,k} - \mu_i')(r_{j,k} - \mu_j')}{\sigma_i' \sigma_j'}$$
where $\mu_i'$ and $\sigma_i'$ are the mathematical expectation and the unbiased estimate of the standard deviation of the $i$th asset's return rate, respectively, and $r_{i,k}$ is the $k$th observation of the $i$th investment.
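The risk side of the model can be sketched the same way. The snippet below computes the unbiased sample covariance (the 1/(m−1) estimate used above) and the double-sum portfolio variance; the two return series are illustrative values, not the GEM data:

```python
def mean(xs):
    return sum(xs) / len(xs)

def sample_cov(xs, ys):
    """Unbiased sample covariance with 1/(m-1) normalisation."""
    mx, my = mean(xs), mean(ys)
    m = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (m - 1)

def portfolio_variance(weights, return_series):
    """sigma_p^2 = sum_i sum_j w_i * w_j * Cov(r_i, r_j)."""
    n = len(weights)
    return sum(
        weights[i] * weights[j] * sample_cov(return_series[i], return_series[j])
        for i in range(n)
        for j in range(n)
    )

# Two hypothetical assets, five daily return observations each.
series = [[0.01, 0.02, -0.01, 0.00, 0.03],
          [0.02, 0.01, 0.00, -0.01, 0.02]]
print(portfolio_variance([0.6, 0.4], series))
```

Because the double sum includes the i = j terms, the individual asset variances and the cross-asset covariances are both accounted for, exactly as in the formula.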

4.2.1. Stock Dataset and Parameters Setting

The stock data used in this experiment are from GEM (Growth Enterprise Market) stocks; 10 stocks were selected, using daily stock data from November 2009.
In the experiment, the number of decision variables is set to 10, the population size to 100, and the maximum number of generations to 200. The distribution index η m is set to 18, and the mutation probability p m to 0.01. For the proposed algorithm, θ m i n and θ m a x are set to 2 and 12, respectively; K, b, and c are set to 3, 0.5, and 7, respectively; and Ar is set to 100.
The decision variables in this experiment represent the weight of each investment product. To better evaluate each investment scheme, the weight of each product is divided by the sum of the weights of the entire scheme. The two objective functions are:

$$f_1 = E(r_p), \qquad f_2 = \sigma_p^2$$
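The normalisation step described above can be sketched as follows (the raw decision vector is hypothetical):

```python
def normalise_weights(decision_vector):
    """Divide each raw weight by the sum of all weights in the scheme,
    so the resulting portfolio weights sum to 1."""
    total = sum(decision_vector)
    return [w / total for w in decision_vector]

x = [2.0, 1.0, 1.0]       # raw decision variables from the optimiser
w = normalise_weights(x)
print(w)                   # [0.5, 0.25, 0.25]
```

After this rescaling, the normalised weights can be plugged directly into the return and variance formulas of Section 4.2 to evaluate the two objectives for each particle.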

4.2.2. Simulation Results and Discussions

Table 4 provides the reference points and preference angles used for the improved algorithm and AD-NSGAII. Table 5 provides the mean risk value and return of the portfolio schemes obtained by IAPMOPSO and AD-NSGAII over 20 runs. Figure 10 and Figure 11 show the different sets of preferred solutions obtained by the improved algorithm and AD-NSGAII from different preference angles and reference points in the four experiments.
As can be seen from Figure 10 and Figure 11, both algorithms can obtain the preferred solutions, and the solution set obtained by the improved algorithm is more uniformly distributed. The indicator E ( r p ) / σ p 2 denotes the return rate per unit of risk. As Table 5 shows, the solution set obtained by IAPMOPSO has, in most cases, a higher return rate at the same risk value than that obtained by AD-NSGAII. Therefore, in the selection of stock portfolio investment schemes, IAPMOPSO is slightly superior to AD-NSGAII. Simply changing the reference point and preference angle flexibly controls the preference region and yields the corresponding set of preferred solutions, so investors can adjust the reference point to choose investment solutions with different risks and returns.

5. Conclusions

In this paper, an improved MOPSO algorithm based on angle preference was proposed. To enhance the selection pressure of the algorithm on particles, an angle preference-based external archive update strategy was used to select excellent individuals. To improve the ability of the swarm to escape local minima, an adaptive preference angle adjustment strategy that gradually narrows the preferred area and an individual optimum update strategy that updates the individual optimum according to the information of neighborhood individuals were presented. Simulation results showed that the proposed algorithm outperforms three comparison algorithms in both the convergence and the diversity of the resulting approximation sets. However, as a stochastic search method, the proposed method is still time-consuming; how to reduce the search cost by exploiting the symmetry of the search space is left for future work. Future work will also focus on further improving the proposed algorithm, for example by introducing the WSFM [9] into IAPMOPSO to solve complex MOPs and large-scale problems.

Author Contributions

Conceptualization, Q.-H.L. and Z.-H.T.; methodology, Q.-H.L.; software, Z.-H.T.; validation, Q.-H.L., Z.-H.T. and G.H.; formal analysis, Z.-H.T.; investigation, Q.-H.L.; resources, Z.-H.T.; data curation, G.H.; writing—original draft preparation, Q.-H.L.; writing—review and editing, F.H.; visualization, Z.-H.T.; supervision, F.H.; project administration, F.H.; funding acquisition, F.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant nos. 61976108 and 61572241.

Data Availability Statement

Data available in a publicly accessible repository.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiang, J.; Han, F.; Wang, J.; Ling, Q.H.; Han, H.; Wang, Y. A two-stage evolutionary algorithm for large-scale sparse multi-objective optimization problems. Swarm Evol. Comput. 2022, 72, 101093. [Google Scholar] [CrossRef]
  2. Han, F.; Zheng, M.P.; Ling, Q.H. An improved multi-objective particle swarm optimization algorithm based on tripartite competition mechanism. Appl. Intell. 2022, 52, 5784–5816. [Google Scholar] [CrossRef]
  3. Wu, T.; Feng, Z.; Wu, C.; Lei, G.; Guo, Y.; Zhu, J.; Wang, X. Multi-objective optimization of a tubular coreless LPMSM based on adaptive multi-objective black hole algorithm. IEEE Trans. Ind. Electron. 2020, 67, 3901–3910. [Google Scholar] [CrossRef]
  4. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  5. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference point-based nondominated sorting approach, Part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  6. Zhang, Q.; Li, H. MOEA/D: A multi-objective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  7. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; Technical Report 103; Computer Engineering and Networks Laboratory, Swiss Federal Institute of Technology (ETH): Zurich, Switzerland, 2001. [Google Scholar]
  8. Mavrotas, G. Effective implementation of the ε-constraint method in multi-objective mathematical programming problems. Appl. Math. Comput. 2009, 213, 455–465. [Google Scholar] [CrossRef]
  9. Ferreira, J.C.; Fonseca, C.M.; Denysiuk, R.; Gaspar-Cunha, A. Methodology to select solutions for multiobjective optimization problems: Weighted stress function method. J. Multi-Criteria Decis. Anal. 2017, 24, 103–120. [Google Scholar] [CrossRef]
  10. Casas-Martínez, P.; Casado-Ceballos, A.; Sánchez-Oro, J.; Pardo, E.G. Multi-objective grasp for maximizing diversity. Electronics 2021, 10, 1232. [Google Scholar] [CrossRef]
  11. Duman, S.; Akbel, M.; Kahraman, H.T. Development of the multi-objective adaptive guided differential evolution and optimization of the MO-ACOPF for wind/PV/tidal energy sources. Appl. Soft Comput. 2021, 112, 107814. [Google Scholar] [CrossRef]
  12. Kahraman, H.T.; Akbel, M.; Duman, S. Optimization of optimal power flow problem using multi-objective manta ray foraging optimizer. Appl. Soft Comput. 2022, 116, 108334. [Google Scholar] [CrossRef]
  13. Jovanovic, R.; Sanfilippo, A.P.; Voß, S. Fixed set search applied to the multi-objective minimum weighted vertex cover problem. J. Heuristics 2022, 28, 481–508. [Google Scholar] [CrossRef]
  14. Coello, C.A.C.; Lechuga, M.S. MOPSO: A proposal for multiple objective particle swarm optimization. Proc. Congr. Evol. Comput. 2002, 2, 1051–1056. [Google Scholar]
  15. Cui, Y.Y.; Meng, X.; Qiao, J.F. A multi-objective particle swarm optimization algorithm based on two-archive mechanism. Appl. Soft Comput. 2022, 119, 108532. [Google Scholar]
  16. Reyes-Sierra, M.; Coello, C.C.A. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. Int. J. Comput. Intell. Res. 2006, 2, 287–308. [Google Scholar]
  17. Yang, W.; Chen, L.; Wang, Y.; Zhang, M. Multi/many-objective particle swarm optimization algorithm based on competition mechanism. Comput. Intell. Neurosci. 2020, 2020, 5132803. [Google Scholar] [CrossRef]
  18. Yue, C.; Qu, B.; Liang, J. A multi-objective particle swarm optimizer using ring topology for solving multimodal multi-objective problems. IEEE Trans. Evol. Comput. 2017, 22, 805–817. [Google Scholar] [CrossRef]
  19. Aboud, A.; Rokbani, N.; Fdhila, R.; Qahtani, A.M.; Almutiry, O.; Dhahri, H.; Hussain, A.; Alimi, A.M. DPb-MOPSO: A dynamic pareto bi-level multi-objective particle swarm optimization algorithm. Appl. Soft Comput. 2022, 129, 109622. [Google Scholar] [CrossRef]
  20. Rezaei, F.; Safavi, H.R. f-MOPSO/Div: An improved extreme-point-based multi-objective PSO algorithm applied to a socio-economic-environmental conjunctive water use problem. Environ. Monit. Assess. 2020, 192, 1–27. [Google Scholar] [CrossRef]
  21. Yi, J.; Bai, J.; He, H.; Peng, J.; Tang, D. Ar-moea: A novel preference-based dominance relation for evolutionary multi-objective optimization. IEEE Trans. Evol. Comput. 2019, 23, 788–802. [Google Scholar] [CrossRef]
  22. Sun, R.; Liu, Y. Preference-based multi-objective evolutionary algorithm for power network reconfiguration. In Proceedings of the IEEE Congress on Evolutionary Computation, Wellington, New Zealand, 10–13 June 2019; pp. 845–849. [Google Scholar]
  23. Kim, J.H.; Han, J.H.; Kim, Y.H.; Choi, S.H.; Kim, E.S. Preference-based solution selection algorithm for evolutionary multi-objective optimization. IEEE Trans. Evol. Comput. 2012, 16, 20–34. [Google Scholar] [CrossRef]
  24. Wang, L.P.; Jiang, B.; Cai, J.H.; Qiu, F.Y. Multi-objective particle swarm optimization based on bipolar preferences control. Inf. Control 2009, 38, 711–717. [Google Scholar]
  25. Wang, S.F.; Zheng, J.H.; Hu, J.J.; Zou, J.; Yu, G. Multi-objective evolutionary algorithm for adaptive preference radius to divide region. J. Softw. 2017, 28, 2704–2721. [Google Scholar]
  26. Yu, J.; He, Z.; Qian, Q. Study on multi-objective particle swarm optimization algorithm based on preference. Control Decis. 2009, 24, 66–87. [Google Scholar]
  27. Li, S.W.; Wang, J.Q.; Zeng, J.W. Fuzzy preference ranking of multi-objective particle swarm optimization algorithm. Appl. Res. Comput. 2011, 28, 477–480. [Google Scholar]
  28. Dai, Y.B. Preference multi-objective optimization algorithm with integrated guidance. J. Cent. South Univ. (Sci. Technol.) 2016, 47, 3072–3078. [Google Scholar]
  29. Dai, Y.B. Multi-objective particle swarm optimization algorithm with double thresholds based on preference information. J. Beijing Univ. Technol. 2016, 42, 492–498. [Google Scholar]
  30. Li, J.; Huang, T.M.; Chen, S.Y. Multi-objective particle swarm optimization based on angle preference for ε-pareto domination. J. Xihua Univ. (Nat. Sci. Ed.) 2018, 37, 70–74. [Google Scholar]
  31. Kennedy, J.; Eberhart, R. Particle swarm optimization. Proc. Neural Netw. 1995, 4, 1942–1948. [Google Scholar]
  32. Nagra, A.A.; Han, F.; Ling, Q.H. An improved hybrid self-inertia weight adaptive particle swarm optimization algorithm with local search. Eng. Optim. 2019, 51, 1115–1132. [Google Scholar] [CrossRef]
  33. Wang, X.Y.; Zhang, B.R.; Wang, J.; Zhang, K.; Jing, Y.C. A cluster-based competitive particle swarm optimizer with a sparse truncation operator for multi-objective optimization. Swarm Evol. Comput. 2022, 71, 101083. [Google Scholar]
  34. Wang, H.; Cai, T.; Li, K.S.; Pedrycz, W. Constraint handling technique based on Lebesgue measure for constrained multi-objective particle swarm optimization algorithm. Knowl.-Based Syst. 2021, 227, 107131. [Google Scholar] [CrossRef]
  35. Wang, Z.C.; Li, H.C. Multi-objective evolutionary algorithm using decomposition method and polynomial mutation operator. Microelectron. Comput. 2021, 38, 95–100. [Google Scholar]
  36. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multi-objective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable multi-objective optimization test problems. Congress on Evolutionary Computation. In Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 825–830. [Google Scholar]
  38. Said, L.B.; Bechikh, S.; Ghédira, K. The r-dominance: A new dominance relation for interactive evolutionary multicriteria decision making. IEEE Trans. Evol. Comput. 2010, 14, 801–818. [Google Scholar] [CrossRef]
  39. Zheng, J.H.; Xie, Z.Z. A study on how to use angle information to include decision maker’s preferences. Acta Electron. Sin. 2014, 42, 2239–2246. [Google Scholar]
  40. Lee, W. Risk-based asset allocation: A new answer to an old question. J. Portf. Manag. 2011, 37, 11–28. [Google Scholar] [CrossRef]
Figure 1. The relationship between two solutions a and b, both of which lie in the preferred region. Under the traditional angle-preference dominance relation, neither solution dominates the other.
Figure 2. Illustration of the penalty-based boundary intersection approach. λ1, λ2, ..., λn are a set of uniformly distributed reference vectors. d1 is the distance from point P to point Z, and d2 is the distance from point P (the objective vector of solution a) to its projection on the reference vector λ1.
Figure 3. Schematic diagram of neighborhood individuals. Solution a is associated with the reference vector λ1, and solutions b and c are the neighborhood individuals of a.
Figure 4. The non-dominated solutions obtained by each algorithm with 20 runs on ZDT1.
Figure 5. The non-dominated solutions obtained by each algorithm with 20 runs on ZDT2.
Figure 6. The non-dominated solutions obtained by each algorithm with 20 runs on ZDT3.
Figure 7. The non-dominated solutions obtained by each algorithm with 20 runs on DTLZ2.
Figure 8. The non-dominated solutions obtained by each algorithm with 20 runs on DTLZ6.
Figure 9. The non-dominated solutions obtained by each algorithm with 20 runs on DTLZ7.
Figure 10. The non-dominated solutions obtained by IAPMOPSO on stock portfolio selection among 20 runs with the median risk values. Subfigures (a–d) show the solutions obtained from different preference angles and reference points.
Figure 11. The non-dominated solutions obtained by AD-NSGAII on stock portfolio selection among 20 runs with the median risk values. Subfigures (a–d) show the solutions obtained from different preference angles and reference points.
Table 1. Parameter setting of reference point and reference angle.

Parameters   ZDT1        ZDT2        ZDT3        DTLZ2           DTLZ6           DTLZ7
r            (0.3,0.3)   (0.3,0.3)   (0.3,0.3)   (0.3,0.3,0.3)   (0.3,0.3,0.3)   (1,1,60)
α            π/10        π/10        π/10        π/10            π/10            π/50
Table 2. Mean and standard deviation values of GD.

Algorithms        ZDT1           ZDT2           ZDT3           DTLZ2          DTLZ6          DTLZ7
IAPMOPSO   Mean   2.2991 × 10−5  4.7349 × 10−6  2.2991 × 10−5  1.5785 × 10−3  5.6561 × 10−6  1.8570 × 10−3
           Std.   3.15 × 10−5    1.76 × 10−5    5.95 × 10−5    2.66 × 10−3    4.62 × 10−3    8.27 × 10−4
TAPMOPSO   Mean   4.3431 × 10−5  6.4199 × 10−6  5.1350 × 10−5  2.4986 × 10−3  5.0829 × 10−6  4.4425 × 10−1
           Std.   4.65 × 10−5    2.62 × 10−5    1.16 × 10−5    7.25 × 10−3    3.97 × 10−3    2.64 × 10−3
rNSGAII    Mean   2.4729 × 10−5  2.8797 × 10−5  2.0818 × 10−5  2.042 × 10−3   5.2850 × 10−6  2.5596 × 10−3
           Std.   9.131 × 10−6   1.3885 × 10−5  1.9942 × 10−5  5.8199 × 10−6  5.8142 × 10−6  8.2156 × 10−5
AD-NSGAII  Mean   1.3089 × 10−4  6.0239 × 10−5  4.089 × 10−5   1.8935 × 10−3  9.3850 × 10−6  3.2414 × 10−3
           Std.   2.6512 × 10−5  6.2153 × 10−6  9.2153 × 10−6  4.2182 × 10−5  3.2156 × 10−5  1.2512 × 10−6
Table 3. Mean and standard deviation values of HV.

Algorithms        ZDT1           ZDT2           ZDT3           DTLZ2          DTLZ6          DTLZ7
IAPMOPSO   Mean   5.5395 × 10−1  3.2421 × 10−1  5.5395 × 10−1  2.8349 × 10−1  7.9856 × 10−2  2.0227 × 10−1
           Std.   6.23 × 10−5    8.27 × 10−4    2.38 × 10−4    5.51 × 10−3    4.83 × 10−3    4.73 × 10−2
TAPMOPSO   Mean   5.5117 × 10−1  3.1480 × 10−1  5.0881 × 10−1  2.7912 × 10−1  9.0344 × 10−2  1.9566 × 10−1
           Std.   3.39 × 10−5    1.64 × 10−3    6.33 × 10−5    6.91 × 10−3    7.81 × 10−3    7.64 × 10−2
AD-NSGAII  Mean   5.5072 × 10−1  3.2368 × 10−1  6.111 × 10−1   2.5329 × 10−1  9.0095 × 10−2  1.9595 × 10−1
           Std.   6.746 × 10−5   3.342 × 10−4   2.6425 × 10−4  2.9543 × 10−4  8.5457 × 10−4  3.521 × 10−3
Table 4. Parameter setting of reference point and reference angle.

Parameters   Experiment 1   Experiment 2   Experiment 3   Experiment 4
r            (1,5)          (2,1)          (20,1)         (1,1)
α            π/60           π/60           π/90           π
Table 5. The mean of risk value and return of the portfolio scheme obtained by IAPMOPSO and AD-NSGAII with 20 runs.

Algorithms   Indicator       Experiment 1   Experiment 2   Experiment 3   Experiment 4
IAPMOPSO     E(rp)           0.9986         0.731          0.3435         0.7887
             σp²             512.78         182.04         35.81          308.53
             E(rp)/σp²       0.001947       0.004016       0.009592       0.002556
AD-NSGAII    E(rp)           0.9994         0.7315         0.3271         0.7253
             σp²             521.573        182.3046       32.3651        300.2783
             E(rp)/σp²       0.0019160      0.004013       0.010107       0.002415

Share and Cite

MDPI and ACS Style

Ling, Q.-H.; Tang, Z.-H.; Huang, G.; Han, F. An Improved Multi-Objective Particle Swarm Optimization Algorithm Based on Angle Preference. Symmetry 2022, 14, 2619. https://doi.org/10.3390/sym14122619
