
Dynamic Shannon Performance in a Multiobjective Particle Swarm Optimization

by E. J. Solteiro Pires 1, J. A. Tenreiro Machado 2,* and P. B. de Moura Oliveira 1

1 INESC TEC—INESC Technology and Science (UTAD pole), ECT–UTAD Escola de Ciências e Tecnologia, Universidade de Trás-os-Montes e Alto Douro, 5000-811 Vila Real, Portugal
2 Department of Electrical Engineering, ISEP—Institute of Engineering, Polytechnic of Porto, Rua Dr. António Bernadino de Almeida, 4249-015 Porto, Portugal
* Author to whom correspondence should be addressed.
Entropy 2019, 21(9), 827; https://doi.org/10.3390/e21090827
Submission received: 29 July 2019 / Revised: 18 August 2019 / Accepted: 20 August 2019 / Published: 23 August 2019
(This article belongs to the Section Complexity)

Abstract

Particle swarm optimization (PSO) is a search algorithm inspired by the collective behavior of flocking birds and schooling fish. The algorithm is widely adopted for solving optimization problems involving a single objective. The progress of the PSO is usually measured by the fitness of the best particle and the average fitness of the swarm. When several objectives are considered, the PSO may incorporate distinct strategies to preserve nondominated solutions across the iterations. The performance of the multiobjective PSO (MOPSO) is usually evaluated by considering the resulting swarm at the end of the algorithm. In this paper, two indices based on the Shannon entropy are presented to study the swarm dynamics during the MOPSO execution. The results show that both indices are useful for analyzing the diversity and convergence of multiobjective algorithms.

1. Introduction

Multiobjective optimization (MOO) consists of minimizing or maximizing a set of objective functions subject to some constraints. In these problems, the objective functions are conflicting, leading to several vectors of decision variables. Each vector represents a possible solution that solves the problem with a different trade-off among the design objectives. Evolutionary and social-based algorithms have attracted the attention of many researchers because, due to their stochastic properties, they are frequently superior to conventional mathematical techniques [1].
MOO algorithms are inspired by biological phenomena and adopt a population that evolves during several generations. The PSO metaphor mimics the behavior of birds flocking or fish schooling [2]. Each bird or fish is represented by a particle with two components, namely its position and velocity. A set of particles forms the swarm, which evolves during several iterations, giving rise to a powerful optimization method.
The particle swarm optimization’s (PSO) simplicity and success led to its application in problems where more than one optimization criterion is considered. Many techniques, such as those borrowed from genetic algorithms (GA) [3,4], have been developed to find a set of solutions belonging to the Pareto front. Since the multiobjective PSO (MOPSO) proposal [5], the algorithm has been used in a wide range of applications [6,7], and a considerable number of refined MOPSO variants have been developed to improve its performance [8,9].
The performance of multiobjective algorithms is usually analyzed at the end of their execution, and their success is measured by means of several metrics proposed in the literature [10]. Additionally, in ambiguous situations, nonparametric tests can be adopted [11,12]. Some of the proposed indices are based on the Shannon entropy. Wang et al. [13] presented a method revealing interesting results: (i) the computational effort increases linearly with the number of solutions, (ii) the metric qualifies the combination of uniformity and coverage of the Pareto set, and (iii) it determines when the evolution has reached its maturity. LinLin and Yunfang [14] proposed a technique to measure the performance of multiobjective problems that not only indicates when the algorithm should be stopped, but can also compare the performance of multiobjective algorithms. The technique adopts an entropy that is evaluated over the solution density in a gridded space.
Other studies tried to unravel the population dynamics during time evolution [15,16,17,18,19,20]. Farhang-Mehr and Azarm [15] developed an entropy-based metric to assess the diversity of a multiobjective evolutionary algorithm (MOEA) during run time. To measure the entropy, a grid of cells was used, where solutions belonging to the same cell are considered identical; 3D nondominated solutions are counted after being orthogonalized and projected onto a plane. Deb and Jain [16] proposed two multiobjective running metrics, one for measuring convergence and the other for assessing the diversity among solutions. Myers and Hancock [17] suggested the use of the Shannon entropy to evaluate the run-time performance of a GA solving labeling problems. The entropy, measured in the parameter space, provides useful information about the algorithm state; they concluded that populations with entropy smaller than a given threshold become saturated and their diversity disappears. Pires et al. [18] studied the signal propagation during the evolution of a GA: the mutation operator signal suffers a perturbation during some generations, and the corresponding fitness variation is analyzed. Pires et al. also adopted the Shannon entropy to study the dynamics of the MOPSO [19] and of the nondominated sorting GA II [20] during their execution. Wu et al. proposed a MOEA considering individual (cell) density, where the Shannon entropy was used to estimate the evolution state [21].
Taking these ideas into consideration, this paper studies the dynamics and self-organization of solutions during the MOPSO execution. The study analyzes two entropy indices considering three optimization functions with different swarm and archive sizes.
The main contributions of the paper are:
  • New diversity indices inspired by physical and biological systems.
  • A good agreement of measures between the indices.
  • Identification of stagnating states during the evolution.
Section 2 describes the method adopted in this work and includes a brief description of the main entropy concepts. Section 3 presents the indices for measuring the population diversity. Section 4 formulates the functions to be optimized and analyzes the simulation results. Finally, Section 5 outlines the main conclusions and perspectives toward future work.

2. Methodology and Entropy Concepts

A careful look into MOPSO reveals a need to understand the dynamics during successive iterations with a particular focus on the particles’ convergence to the nondominated front and particle diversity. For this purpose, the Shannon entropy is used in the follow-up, and a set of tests is performed considering different optimization functions and archive sizes. Since the MOPSO algorithm is stochastic, a battery of tests is required to generate a representative statistical sample [22].
Entropy is associated with several concepts and interpretations [23]. Boltzmann used entropy to describe systems that evolve from ordered to disordered states. Guggenheim used spreading to indicate the diffusion of the energy of a system from a small to a large volume. Lewis stated that, in a spontaneous gas expansion in an isolated system, the information regarding the particles’ locations decreases, while the missing information, or uncertainty, increases.
Shannon [24] developed a theory to quantify the information loss in the transmission of a given message. The study was carried out in a communication channel and focused on the physical and statistical constraints that limit the message transmission. Shannon defined entropy H as a measure of information, given by:
$$ H(X) = -K \sum_{x \in X} p(x) \log p(x). \qquad (1) $$
This expression considers a discrete random variable x ∈ X characterized by the probability distribution p(x). The parameter K is a positive constant, often set to 1, used to express H in a given unit of measure.
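As a quick illustration, Equation (1) with K = 1 can be computed as follows (a minimal sketch; the function name shannon_entropy is ours, not from the paper):

```python
import math

def shannon_entropy(probs, k=1.0):
    """Shannon entropy H = -K * sum(p * log p), skipping zero-probability terms."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# A uniform distribution over 4 outcomes gives the maximum entropy log(4)
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ≈ 1.386
# A degenerate distribution carries no uncertainty
print(shannon_entropy([1.0]))  # 0.0
```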
The Shannon entropy can be easily extended to multidimensional random variables. For a pair of random variables (x, y) ∈ (X, Y), the entropy is defined as:
$$ H(X, Y) = -K \sum_{x \in X} \sum_{y \in Y} p(x, y) \log p(x, y). \qquad (2) $$

3. Entropy Indices for Assessing the MOPSO

In this section, two indices for measuring the entropy in a MOPSO are presented. The first one captures the particle diversity, while the second addresses the front diversity.

3.1. Particle Diversity

The index to measure the particle diversity was proposed previously [25]. It follows a particular interpretation of entropy: entropy can express the spreading of the energy of a system from a `better located’ state to a `more distributed’ one. With this idea in mind, the minimum spanning tree connecting all the particles of the archive A is considered, that is, the set of edges joining all the #A particles with minimal total edge length. Let d_i, with i ∈ {1, 2, …, #A − 1}, be the length of one of these edges; a probability p_i is associated with each edge as follows:
$$ p_i = \frac{d_i}{\sum_{j=1}^{\#A-1} d_j}. \qquad (3) $$
The particle diversity index is based on this point of view and can be represented as:
$$ H(X) = -\sum_{i=1}^{\#A-1} p_i \log p_i. \qquad (4) $$
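A sketch of this index: build the Euclidean minimum spanning tree over the archive points with Prim's algorithm, normalize the edge lengths into probabilities per Equation (3), and apply Equation (4). The helper names are ours, and the O(#A²) Prim implementation is just one possible choice:

```python
import math

def mst_edge_lengths(points):
    """Lengths of the #A - 1 edges of the Euclidean minimum spanning tree
    connecting the archive points (Prim's algorithm)."""
    n = len(points)
    in_tree = [False] * n
    best = [float('inf')] * n   # best[v]: cheapest edge from the tree to v
    best[0] = 0.0
    edges = []
    for step in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if step > 0:
            edges.append(best[u])
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return edges

def particle_diversity(points):
    """Entropy of the MST edge-length distribution, Equations (3) and (4)."""
    edges = mst_edge_lengths(points)
    total = sum(edges)
    return -sum((d / total) * math.log(d / total) for d in edges if d > 0)
```

For an archive of four evenly spaced points, the three MST edges are equal, so the index reaches its maximum, log 3.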

3.2. Front Level Heterogeneity

In ecology, the diversity of the species in a population is equated with the uncertainty that occurs when randomly selecting one individual from the population [26]. The information content, or population diversity, can be defined in several ways [27], one of which is explained in the follow-up.
Consider a population with s species in proportions p_i = n_i/#A, i.e., {p_1, p_2, …, p_s}, where n_i is the number of elements of the i-th species and s denotes the total number of species. The population diversity is then given by the Shannon and Weaver formula [28]:
$$ H'(X) = -\sum_{i=1}^{s} p_i \log p_i. \qquad (5) $$
With this idea in mind, Expression (5) is used to measure the Shannon front level diversity, where n_i and s are the number of particles in each front and the total number of fronts, respectively. This index is called front level heterogeneity in order to avoid confusion with the particle diversity index. In the MOPSO, at later evolution stages, when only the nondominated front exists in the swarm, the heterogeneity H′(X) is zero.
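Equation (5) reduces to a Shannon entropy over the front membership counts; a minimal sketch, where front_sizes lists n_i for each front (the function name is ours):

```python
import math

def front_heterogeneity(front_sizes):
    """Shannon diversity over front membership, Equation (5):
    n_i particles in front i, proportions p_i = n_i / #A."""
    total = sum(front_sizes)
    return -sum((n / total) * math.log(n / total) for n in front_sizes if n > 0)

# Archive split over three fronts: positive heterogeneity
print(front_heterogeneity([10, 6, 4]))
# A single nondominated front: zero heterogeneity, as stated above
print(front_heterogeneity([20]))  # 0.0
```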

4. Simulation Results

This section presents the results obtained for three optimization problems P_i, i = 1, 2, 3, and different archive sizes. The dynamic behavior of the algorithm was studied using the two proposed indices.
The optimization problems P 1 and P 2 (known as DTLZ2 and DTLZ4 [29]) are defined by Equations (6) and (7), and problem P 3 (known as UF8 in CEC 2009 special session competition [30]) is formulated in Equation (8) as follows:
$$
\begin{aligned}
\min P_1 &= [f_{11}(X), f_{12}(X), f_{13}(X)] \\
f_{11}(X) &= [1 + g_1(X)]\cos(x_1\pi/2)\cos(x_2\pi/2) \\
f_{12}(X) &= [1 + g_1(X)]\cos(x_1\pi/2)\sin(x_2\pi/2) \\
f_{13}(X) &= [1 + g_1(X)]\sin(x_1\pi/2) \\
g_1(X) &= 1 + 9\sum_{i=3}^{m}(x_i - 0.5)^2
\end{aligned}
\qquad (6)
$$
$$
\begin{aligned}
\min P_2 &= [f_{21}(X), f_{22}(X), f_{23}(X)] \\
f_{21}(X) &= [1 + g_2(X)]\cos(x_1^{\alpha}\pi/2)\cos(x_2^{\alpha}\pi/2) \\
f_{22}(X) &= [1 + g_2(X)]\cos(x_1^{\alpha}\pi/2)\sin(x_2^{\alpha}\pi/2) \\
f_{23}(X) &= [1 + g_2(X)]\sin(x_1^{\alpha}\pi/2) \\
g_2(X) &= 1 + 9\sum_{i=3}^{m}(x_i^{\alpha} - 0.5)^2
\end{aligned}
\qquad (7)
$$
$$
\begin{aligned}
\min P_3 &= [f_{31}(X), f_{32}(X), f_{33}(X)] \\
f_{31}(X) &= \cos(0.5 x_1 \pi)\cos(0.5 x_2 \pi) + \frac{2}{|J_1|}\sum_{j \in J_1} g_j(X) \\
f_{32}(X) &= \cos(0.5 x_1 \pi)\sin(0.5 x_2 \pi) + \frac{2}{|J_2|}\sum_{j \in J_2} g_j(X) \\
f_{33}(X) &= \sin(0.5 x_1 \pi) + \frac{2}{|J_3|}\sum_{j \in J_3} g_j(X) \\
g_j(X) &= \left[x_j - 2 x_2 \sin\!\left(2\pi x_1 + \frac{j\pi}{m}\right)\right]^2 \\
J_1 &= \{j \mid 3 \le j \le m, \text{ and } j+2 \text{ is a multiple of } 3\} \\
J_2 &= \{j \mid 3 \le j \le m, \text{ and } j+1 \text{ is a multiple of } 3\} \\
J_3 &= \{j \mid 3 \le j \le m, \text{ and } j \text{ is a multiple of } 3\}
\end{aligned}
\qquad (8)
$$
where m is the number of parameters, f_ij is objective j ∈ {1, 2, 3} of problem i ∈ {1, 2, 3}, and the g functions are auxiliary expressions introduced to simplify the notation. For {P_1, P_2}, the parameter intervals are x_i ∈ [0, 1]. In Equation (7), the value α = 100 induces a meta-variable mapping, x_i → x_i^α, between the two functions [29]. For P_3, x_i ∈ [0, 1] if i ≤ 2 and x_i ∈ [−2, 2] if 2 < i ≤ m. For {P_1, P_2, P_3}, the number of parameters is set to m = {12, 12, 30}, respectively.
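As an illustration, the three objectives of P_1 can be evaluated directly from Equation (6); a minimal sketch (the function name dtlz2 is ours, m = 12 follows the text, and g_1 reproduces the "1 + 9" form printed above). A useful sanity check is that the objective vector always lies on a sphere of radius 1 + g_1(X):

```python
import math

def dtlz2(x):
    """Objectives of P1 (DTLZ2), per Equation (6); x has m = 12 entries in [0, 1]."""
    g = 1 + 9 * sum((xi - 0.5) ** 2 for xi in x[2:])  # g1(X), summed over i = 3..m
    c1, s1 = math.cos(x[0] * math.pi / 2), math.sin(x[0] * math.pi / 2)
    c2, s2 = math.cos(x[1] * math.pi / 2), math.sin(x[1] * math.pi / 2)
    return [(1 + g) * c1 * c2, (1 + g) * c1 * s2, (1 + g) * s1]

# At x_i = 0.5 for i >= 3, g1 = 1, so ||f|| = 1 + g1 = 2
f = dtlz2([0.5] * 12)
print(math.sqrt(sum(v * v for v in f)))  # 2.0
```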
These problems are to be optimized using a MOPSO [5,6], where the search is driven by a population of particles that move using the equations:
$$ v_i^{t+1} = w \cdot v_i^t + \phi_1 \cdot \mathrm{rand}(0,1) \cdot (b_i - x_i^t) + \phi_2 \cdot \mathrm{rand}(0,1) \cdot (g_i - x_i^t), \qquad (9) $$
$$ x_i^{t+1} = x_i^t + v_i^{t+1}, \qquad (10) $$
where t is the iteration number, w denotes the inertia coefficient, and the positions x_i and velocities v_i are codified by means of real numbers. In order to start with a high exploration rate of the search space, w is initialized with the value 0.7 and decreases linearly with t down to 0.25. In the stages where w is near 0.25, more importance is given to the local search rather than to the global one. In the particle motion, the same influence is given to the local best particle position b_i and to the position of the `best’ particle g_i; therefore, the cognitive and social coefficients are set to ϕ1 = ϕ2 = 0.8. In a nondominated set there is no single best solution. Consequently, to choose a particle with similar characteristics while incorporating uncertainty, each particle determines its `global best’, or guide, by randomly selecting three particles from the archive and picking the nearest one as g_i.
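The update rule of Equations (9) and (10), together with the linearly decreasing inertia described above, can be sketched as follows (a minimal illustration; the function names pso_step and inertia are ours, not from the paper):

```python
import random

def pso_step(x, v, b, g, w, phi1=0.8, phi2=0.8):
    """One velocity/position update per Equations (9) and (10).
    x, v: current position and velocity; b: personal best; g: guide."""
    v_next = [w * vi
              + phi1 * random.random() * (bi - xi)
              + phi2 * random.random() * (gi - xi)
              for xi, vi, bi, gi in zip(x, v, b, g)]
    x_next = [xi + vni for xi, vni in zip(x, v_next)]
    return x_next, v_next

def inertia(t, t_max, w0=0.7, w1=0.25):
    """Linear decrease of w from 0.7 to 0.25 over the run."""
    return w0 + (w1 - w0) * t / t_max
```

With b_i and g_i ahead of the particle, the expected velocity points toward them, which is what drives the swarm's convergence.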
The archive is updated at the end of each iteration using a (μ + λ) strategy over the archive (#μ) and swarm (#λ) solutions: the best μ solutions are chosen among the archive and population solutions, selected according to the maximin sorting scheme [12].
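The (μ + λ) selection relies on Pareto dominance comparisons; a minimal sketch of the dominance filter (for minimization) is shown below. This covers only the dominance part: the truncation to the best μ solutions in the paper uses the maximin sorting scheme [12], which is not reproduced here, and the helper names are ours:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def nondominated(solutions):
    """Keep the nondominated subset of the merged archive + swarm objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

pts = [(1.0, 2.0), (2.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
print(nondominated(pts))  # [(1.0, 2.0), (2.0, 1.0)]
```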
Four swarm sizes, #N = {250, 300, 350, 400}, and four archive sizes, #A = {50, 100, 150, 200}, are adopted, resulting in a total of 4² = 16 different experiments. For each experiment, 21 distinct runs were performed and their entropies evaluated; the medians of the particle diversity and front heterogeneity, H_m = median(H) and H′_m = median(H′), at each iteration t are taken as representing the entropy evolution at that instant.
This section presents the entropy evolution for those experiments addressing the problems { P 1 , P 2 , P 3 } .

4.1. Results of DTLZ Problems Optimization

The first optimization functions to be considered belong to problem P1, with three objectives described by Equation (6). Expressions (4) and (5) are adopted to monitor the MOPSO evolution. The results are depicted in Figure 1 and Figure 2 for archive sizes of #A = 50 and #A = 200 particles, respectively. The charts show the median entropies H_m and H′_m versus the iteration t for experiments with #N = {250, 300, 350, 400} particles. The curves with solid and dotted lines represent H_m and H′_m, respectively.
It can be observed that the two entropy signals are inversely correlated. Concerning H′_m, it starts with a low value and increases during the first iterations. Afterwards, H′_m remains almost stationary for some iterations and finally decreases to zero. This means that, at the early stages, the number of fronts increases, remains constant during a certain number of iterations, and then decreases until only the nondominated front remains, when H′_m = 0. As the archive size increases, the initial transient takes more iterations, because it becomes more difficult to find a larger number of nondominated particles in the same period. On the other hand, since the number of particles is larger, more fronts can emerge. Therefore, H′_m takes more iterations to approach zero as the archive size increases.
Figure 3 and Figure 4 present the entropy indices for the P 2 problem. A behavior similar to the one exhibited by P 1 is visible.

4.2. Results of P3 Optimization

Figure 5 and Figure 6 present the entropy indices for the P3 problem. Here, H′_m starts by decreasing, showing that the number of fronts reduces until only one front remains. Since the initial value of H′_m is low, the number of initial fronts is also small. On the other hand, the diversity between the particles begins with a high value and increases slightly during the iterations.
The number of fronts, i.e., the entropy front level diversity, can increase or decrease at early stages depending on the optimization problem.

4.3. Correlation Coefficient

The results reveal a correlation between H_m and H′_m. The Pearson correlation coefficient r measures the strength and direction of the relationship between the two indices over T = 10³ iterations:
$$
r = \frac{T\sum_{t=1}^{T} H_m(t)\,H'_m(t) - \sum_{t=1}^{T} H_m(t)\sum_{t=1}^{T} H'_m(t)}
{\sqrt{T\sum_{t=1}^{T} H_m^2(t) - \big(\sum_{t=1}^{T} H_m(t)\big)^2}\,\sqrt{T\sum_{t=1}^{T} H_m'^{\,2}(t) - \big(\sum_{t=1}^{T} H'_m(t)\big)^2}}. \qquad (11)
$$
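The coefficient can be computed directly from the two median series (a plain-Python sketch; the helper name pearson_r is ours):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equally long series, per Equation (11)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    num = n * sxy - sx * sy
    den = math.sqrt(n * sxx - sx * sx) * math.sqrt(n * syy - sy * sy)
    return num / den

# Perfectly anti-correlated series, the behavior observed for the DTLZ problems
print(pearson_r([1, 2, 3], [3, 2, 1]))  # -1.0
```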
Table 1 presents the correlation between the particle diversity and front heterogeneity entropies. Column 1 indicates the archive size, column 2 the swarm size, and the symbols r_1, r_2, and r_3 represent the correlation between H_m and H′_m for the problems P_1, P_2, and P_3, respectively. An almost perfect negative relationship can be observed for each DTLZ problem. The relationship for the P_3 problem signals is moderate for the archive size of #A = 50 particles, but it is stronger for the other archive sizes. These values of r demonstrate that the two indices are in good agreement, showing that the solution diversity is highly correlated with the number of fronts.

4.4. Archive Evolution

In order to show the particle distribution of the archive, the positions of the archive particles are plotted at t = {1, 4, 80, 110, 400, 1000} iterations for problem P2, with #N = 250 and #A = 200. The iterations are chosen at different stages of the evolution, namely at the beginning (t = 1), when the diversity index drops (t = 4), at the end of this stagnation phase (t = 80), after an abrupt increase of the index (t = 110), after some additional iterations (t = 400), and at the end of the run (t = 1000). The plots in Figure 7 illustrate the result for one single run.
It can be observed that the entropy achieves the maximum value when the archive solutions are well dispersed.

5. Conclusions and Future Work

Two indices based on entropy were proposed to characterize the MOPSO dynamics. This work measures the diversity of the archive during the evolution, adopting one possible interpretation of entropy. The first index, the particle diversity, measures the diversity between the particles of the archive. The second index, borrowed from ecology, measures the species heterogeneity, in this case the front level heterogeneity. The indices were evaluated using different approaches, but both were in good agreement, revealing that the solution diversity is correlated with the number of fronts. When the particle diversity index stagnates, it reveals that the algorithm has converged. On the other hand, when the front level heterogeneity reaches zero, it indicates that there is only one front in the archive.
For most MOPSOs reported in the literature, the performance evaluation is carried out at the end of the algorithm, by comparing the final front extension, spreading, and diversity. The indices formulated in this paper can be used to analyze the convergence rate during the time evolution. In future work, the indices will be used to identify stagnation stages of the MOPSO and to introduce mechanisms that promote the dispersion of particles during the optimization.

Author Contributions

All authors contributed equally to the paper.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qu, B.; Zhu, Y.; Jiao, Y.; Wu, M.; Suganthan, P.; Liang, J. A survey on multi-objective evolutionary algorithms for the solution of the environmental/economic dispatch problems. Swarm Evol. Comput. 2018, 38, 1–11.
  2. Kennedy, J. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  3. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison–Wesley: Boston, MA, USA, 1989.
  4. Deb, K. Multi-Objective Optimization using Evolutionary Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2001.
  5. Coello, C.A.C.; Lechuga, M. MOPSO: A proposal for multiple objective particle swarm optimization. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1051–1056.
  6. Reyes-Sierra, M.; Coello, C. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. Int. J. Comput. Intell. Res. 2006, 2, 287–308.
  7. Zhou, A.; Qu, B.Y.; Li, H.; Zhao, S.Z.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49.
  8. Zhao, S.Z.; Suganthan, P.N. Two-lbests based multi-objective particle swarm optimizer. Eng. Optim. 2011, 43, 1–17.
  9. Freire, H.; Moura Oliveira, P.B.; Solteiro Pires, E.J. From single to many-objective PID controller design using particle swarm optimization. Int. J. Control Autom. Syst. 2017, 15, 918–932.
  10. Riquelme, N.; Lücken, C.V.; Baran, B. Performance metrics in multi-objective optimization. In Proceedings of the 2015 Latin American Computing Conference (CLEI), Arequipa, Peru, 19–23 October 2015; pp. 1–11.
  11. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
  12. Solteiro Pires, E.J.; de Moura Oliveira, P.B.; Tenreiro Machado, J.A. Multi-objective MaxiMin Sorting Scheme. In Proceedings of the Conference on Evolutionary Multi-criterion Optimization—EMO 2005, Guanajuato, Mexico, 9–11 March 2005; Lecture Notes in Computer Science Volume 3410; Springer: Guanajuato, Mexico, 2005; pp. 165–175.
  13. Wang, L.; Chen, Y.; Tang, Y.; Sun, F. The entropy metric in diversity of Multiobjective Evolutionary Algorithms. In Proceedings of the 2011 International Conference of Soft Computing and Pattern Recognition (SoCPaR), Dalian, China, 14–16 October 2011; pp. 217–221.
  14. LinLin, W.; Yunfang, C. Diversity Based on Entropy: A Novel Evaluation Criterion in Multi-objective Optimization Algorithm. Int. J. Intell. Syst. Appl. 2012, 4, 113–124.
  15. Farhang-Mehr, A.; Azarm, S. Diversity assessment of Pareto optimal solution sets: An entropy approach. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC’02), Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 723–728.
  16. Deb, K.; Jain, S. Running Performance Metrics for Evolutionary Multi-Objective Optimization; Technical Report 2002004; Indian Institute of Technology: Kanpur, India, 2002.
  17. Myers, R.; Hancock, E.R. Genetic algorithms for ambiguous labelling problems. Pattern Recognit. 2000, 33, 685–704.
  18. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; de Moura Oliveira, P.B. Dynamical Modelling of a Genetic Algorithm. Signal Process. 2006, 86, 2760–2770.
  19. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; de Moura Oliveira, P.B. Entropy Diversity in Multi-Objective Particle Swarm Optimization. Entropy 2013, 15, 5475–5491.
  20. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; de Moura Oliveira, P.B. Diversity study of multi-objective genetic algorithm based on Shannon entropy. In Proceedings of the 2014 Sixth World Congress on Nature and Biologically Inspired Computing (NaBIC 2014), Porto, Portugal, 30 July–1 August 2014; pp. 17–22.
  21. Wu, C.; Wu, T.; Fu, K.; Zhu, Y.; Li, Y.; He, W.; Tang, S. AMOBH: Adaptive Multiobjective Black Hole Algorithm. Comput. Intell. Neurosci. 2017, 2017, 6153951.
  22. Seneta, E. A Tricentenary history of the Law of Large Numbers. Bernoulli 2013, 19, 1088–1121.
  23. Ben-Naim, A. Entropy and the Second Law: Interpretation and Misss-Interpretations; World Scientific Publishing Company: Singapore, 2012.
  24. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  25. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; de Moura Oliveira, P.B. Multi-objective Dynamic Analysis Using Fractional Entropy. In Intelligent Systems Design and Applications; Madureira, A.M., Abraham, A., Gamboa, D., Novais, P., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 448–456.
  26. Pielou, E.C. Shannon’s Formula as a Measure of Specific Diversity: Its Use and Misuse. Am. Nat. 1966, 100, 463–465.
  27. Morris, E.K.; Caruso, T.; Buscot, F.; Fischer, M.; Hancock, C.; Maier, T.S.; Meiners, T.; Müller, C.; Obermaier, E.; Prati, D.; et al. Choosing and using diversity indices: Insights for ecological applications from the German Biodiversity Exploratories. Ecol. Evol. 2018, 18, 3514–3524.
  28. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Champaign, IL, USA, 1963.
  29. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable Multi-Objective Optimization Test Problems. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002.
  30. Zhang, Q.; Zhou, A.; Zhao, S.; Suganthan, P.N.; Liu, W.; Tiwari, S. Multiobjective Optimization Test Instances for the CEC 2009 Special Session and Competition; Technical Report CES-487; University of Essex and Nanyang Technological University: Essex, UK, 2008.
Figure 1. Evolution of H_m (solid lines) and H′_m (dotted lines) versus the iteration t of the MOPSO for P1 and #A = 50.
Figure 2. Evolution of H_m (solid lines) and H′_m (dotted lines) versus the iteration t of the MOPSO for P1 and #A = 200.
Figure 3. Evolution of H_m (solid lines) and H′_m (dotted lines) versus the iteration t of the MOPSO for P2 and #A = 50.
Figure 4. Evolution of H_m (solid lines) and H′_m (dotted lines) versus the iteration t of the MOPSO for P2 and #A = 200.
Figure 5. Evolution of H_m (solid lines) and H′_m (dotted lines) versus the iteration t of the MOPSO for P3 and #A = 50.
Figure 6. Evolution of H_m (solid lines) and H′_m (dotted lines) versus the iteration t of the MOPSO for P3 and #A = 200.
Figure 7. MOPSO evolution for an isolated run of the P2 problem with #N = 250 and #A = 200 at iterations (a) t = 1, (b) t = 4, (c) t = 80, (d) t = 110, (e) t = 400, and (f) t = 1000.
Table 1. Pearson correlation coefficient r between H_m and H′_m.

Archive Size (#A)   Swarm Size (#N)     r_1      r_2      r_3
50                  250               -0.97    -0.92    -0.51
                    300               -0.97    -0.92    -0.66
                    350               -0.88    -0.96    -0.50
                    400               -0.93    -0.89    -0.43
100                 250               -0.99    -0.99    -0.81
                    300               -0.99    -0.92    -0.69
                    350               -0.97    -0.96    -0.73
                    400               -0.96    -0.95    -0.75
150                 250               -0.98    -0.98    -0.82
                    300               -0.98    -0.99    -0.74
                    350               -0.97    -0.99    -0.80
                    400               -0.98    -0.97    -0.69
200                 250               -0.99    -0.98    -0.81
                    300               -0.98    -0.98    -0.72
                    350               -0.97    -0.98    -0.75
                    400               -0.96    -0.98    -0.77
