Article

MOBCA: Multi-Objective Besiege and Conquer Algorithm

1
Center for Artificial Intelligence, Jilin University of Finance and Economics, Changchun 130117, China
2
Jilin Province Key Laboratory of Fintech, Jilin University of Finance and Economics, Changchun 130117, China
3
College of Foreign Languages, Jilin Agricultural University, Changchun 130118, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Biomimetics 2024, 9(6), 316; https://doi.org/10.3390/biomimetics9060316
Submission received: 17 April 2024 / Revised: 17 May 2024 / Accepted: 22 May 2024 / Published: 24 May 2024
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 2nd Edition)

Abstract

The besiege and conquer algorithm (BCA) has shown excellent performance on single-objective optimization problems. However, there is no literature on applying the BCA to multi-objective optimization problems. This paper therefore proposes a new multi-objective besiege and conquer algorithm (MOBCA). A grid mechanism, an archive mechanism, and a leader selection mechanism are integrated into the BCA to estimate the Pareto optimal solutions and approach the Pareto optimal front. The proposed algorithm is tested against MOPSO, MOEA/D, and NSGAIII on the IMOP and ZDT benchmark functions. The experimental results show that the proposed algorithm obtains competitive results in terms of the accuracy of the Pareto optimal solutions.

1. Introduction

With the advent of the era of artificial intelligence, the evolutionary algorithm of swarm intelligence has received more and more attention [1]. Swarm intelligence algorithms are designed by researchers who are inspired by natural phenomena or population activities [2]. These algorithms are often used to solve complex optimization problems [3].
Optimization problems can be divided into single-objective and multi-objective optimization problems [4]. A single-objective optimization problem has a single, specific objective, whereas a multi-objective optimization problem must balance several objectives [5]; therefore, solving multi-objective optimization problems is more complicated. Solving a multi-objective optimization problem yields a set of results from which a decision maker chooses the final one [6]. Therefore, when solving multi-objective optimization problems, multiple different results are required for decision-makers to choose from. There are usually two ways to deal with multi-objective problems: a priori and a posteriori [7,8].
An a priori approach means that the decision-maker first weights the objectives of the problem to be optimized. In this way, the multi-objective problem can be transformed into a single-objective problem [9], which can then be solved by a single-objective optimization algorithm. As a result, only one solution is obtained as the optimal solution, and the optimizer must be run multiple times if multiple optimal solutions are required [10]. On the contrary, when dealing with multi-objective problems, the a posteriori method preserves the nature of the problem: the problem is not transformed into a single-objective problem [11]. This enables the design parameters or conditions to vary within a certain range while obtaining a set of different solutions. The decision maker then chooses a solution from the obtained solution set as the optimal one according to their own preferences or the actual situation [12].
The multi-objective optimizer aims to find a set of non-dominated solutions, obtain more non-dominated solutions through iteration, and improve the quality of the solutions [13]. The resulting non-dominated solutions can trade off between multiple objectives [14]. Compared with the a priori method, the a posteriori method retains more of the original characteristics of the multi-objective problem, meaning it can obtain more optimal solutions in a short time [15].
There are many multi-objective optimization algorithms in the literature that solve multi-objective problems. Most of the algorithms are inspired by the single-objective algorithm, such as the grey wolf optimizer (GWO) [16], particle swarm optimization (PSO) [17], the ant lion optimization algorithm (ALO) [18], the grasshopper optimization algorithm (GOA) [5], the genetic algorithm (GA), and the evolutionary algorithm (EA). These algorithms are summarized in the following Table 1:
Many meta-heuristic multi-objective optimization algorithms (MOAs) have been proposed, as the literature review above shows, covering many different types of multi-objective optimization problems (MOPs). However, the existing MOAs suffer from slow convergence [31], are easily trapped in local optima [32,33], lack diversity in their solutions when solving MOPs [34,35], etc. Based on the single-objective besiege and conquer algorithm (BCA), we propose the MOBCA to address these problems. The contributions can be summarized as follows:
(i) The single-objective BCA outperforms other algorithms in the literature and does not easily converge to local optima. Therefore, we attempt to demonstrate its effectiveness on MOPs.
(ii) The BCA has been equipped with a grid mechanism to guarantee the diversity in the convergence process.
The rest of the paper is organized as follows: Section 2 provides a literature review concerning MOPs. Section 3 proposes the multi-objective besiege and conquer algorithm. Section 4 presents, discusses, and analyzes the experiment results on the ZDT, IMOP, and real-world problems. Finally, Section 5 concludes the work and suggests future works.

2. Literature Review

2.1. Multi-Objective Optimization Problem

Multi-objective optimization is a classic optimization problem in which multiple objectives must be optimized at the same time; the algorithm is therefore required to find the maximum or minimum of multiple objective functions simultaneously [9]. In contrast, a single-objective optimization problem has only one objective function to optimize, so the fitness values of candidate solutions can easily be compared.
A multi-objective optimization problem can be defined as follows:
$$
\begin{aligned}
\text{Minimize:} \quad & F(x) = \{ f_1(x), f_2(x), \ldots, f_o(x) \} \\
\text{Subject to:} \quad & g_i(x) \ge 0, \quad i = 1, 2, \ldots, m, \\
& h_i(x) = 0, \quad i = 1, 2, \ldots, p, \\
& L_i \le x_i \le U_i, \quad i = 1, 2, \ldots, n,
\end{aligned}
$$
where $n$ is the number of variables, $o$ is the number of objective functions, $m$ is the number of inequality constraints, $p$ is the number of equality constraints, $g_i$ is the $i$th inequality constraint, $h_i$ indicates the $i$th equality constraint, and $[L_i, U_i]$ are the boundaries of the $i$th variable.
In Figure 1, we obtain two different solutions, $x_1$ and $x_2$, which correspond to different evaluation values, $f(x_1)$ and $f(x_2)$. We can clearly mark these two fitness values on a one-dimensional coordinate axis whose origin is the point $o$, and thus clearly see the relationship between the fitness values corresponding to the two solutions. That is, if we need to determine the smallest fitness value for the problem, solution $x_2$ in the figure is desirable.
For fitness values in multi-objective problems, each evaluation process produces multiple evaluation values—at least two per solution, as shown in Figure 2a,b. For example, during two evaluation processes, two sets of fitness values are generated as follows:
$$\mathit{Fitness\_Set}_1 = \{ f_1(x), f_2(x) \}, \quad \mathit{Fitness\_Set}_2 = \{ f_1(y), f_2(y) \},$$
where $x$ is the solution of the first set, $f_1$ is the first evaluation function, $f_1(x)$ is the value obtained by evaluating $x$ with the first evaluation function, and similarly for $f_2$ and $f_2(x)$; $y$ is the solution of the second set, evaluated with the same functions $f_1$ and $f_2$ but yielding different values.
In Figure 2a, all the fitness values of solution $x$ are smaller than those of solution $y$. From this, we can easily judge that $x$ is a better solution than $y$.
In Figure 2b, we observe that $f_1(x) < f_1(y)$ and $f_2(x) > f_2(y)$, so it is difficult to judge whether $x$ or $y$ is better.
As mentioned above, the evaluation of solutions is more complicated than in the single-objective case. When we evaluate a multi-objective solution, it is difficult to determine a single evaluation standard to judge its quality. So, introducing a series of Pareto-related definitions is necessary [36].
Definition 1.
Pareto Optimality:
Solution $x \in X$ is deemed Pareto-optimal if and only if the following holds:
$$\nexists\, y \in X : F(y) \prec F(x).$$
Definition 2.
Pareto Dominance:
It is assumed that there are two vectors: $x = (x_1, x_2, \ldots, x_k)$ and $y = (y_1, y_2, \ldots, y_k)$. Vector $x$ dominates vector $y$ (denoted as $x \prec y$) if and only if the following holds:
$$\forall i \in \{1, 2, \ldots, k\} : f_i(x) \le f_i(y) \;\wedge\; \exists i \in \{1, 2, \ldots, k\} : f_i(x) < f_i(y).$$
Definition 3.
Pareto-optimal Set:
The set including all Pareto-optimal solutions is called the Pareto set:
$$P_s := \{ x \in X \mid \nexists\, y \in X : F(y) \prec F(x) \}.$$
Definition 4.
Pareto-optimal Front:
The set containing the objective function values of the solutions in the Pareto set is the Pareto-optimal front:
$$P_f := \{ F(x) \mid x \in P_s \}.$$
In Figure 3a,b, all solutions that lie on the Pareto front can dominate solutions that do not. The set of points located on the Pareto front constitutes a Pareto optimal solution set. This solution set is the optimal solution to the multi-objective optimization problem.
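Definitions 2 and 3 can be made concrete with a short sketch (illustrative Python for a minimization problem; the function names `dominates` and `pareto_front` are ours, not from any established library):

```python
def dominates(x_obj, y_obj):
    """True if objective vector x_obj Pareto-dominates y_obj (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(x_obj, y_obj))
            and any(a < b for a, b in zip(x_obj, y_obj)))

def pareto_front(objs):
    """Filter a list of objective vectors down to the non-dominated ones,
    i.e., an approximation of the Pareto-optimal set in objective space."""
    return [x for x in objs if not any(dominates(y, x) for y in objs if y is not x)]
```

For example, with the two-objective vectors (1, 4), (2, 2), (3, 1), and (2, 3), only (2, 3) is removed, since (2, 2) dominates it.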

2.2. Multi-Objective Optimization in Metaheuristics

The ultimate goal of a multi-objective optimization algorithm is to determine a Pareto optimal solution set and ensure that the solutions within this set are uniformly distributed across the Pareto front [37]. A solution set like this can provide decision-makers with more high-quality choices. Therefore, when solving multi-objective optimization problems, we mainly focus on two issues: the diversity of the solution set and the distance between the solution and the true Pareto front [38].
Metaheuristic optimization algorithms can perform well in solving multi-objective optimization problems [39]. The meta-heuristic algorithm usually uses a random strategy to generate populations and adaptively adjusts the position of randomly generated offspring, which can converge to cover the Pareto front. The parallelism and scalability of the heuristic algorithm are also noteworthy, enabling it to solve large-scale and complex multi-objective optimization problems.
In recent decades, multi-objective optimization algorithms have received extensive attention due to their powerful ability to solve complex optimization problems [40]. Some novel and excellent multi-objective optimization algorithms have been proposed: the multi-objective grey wolf optimizer (MOGWO) [8], the multi-objective ant lion optimizer (MOALO) [11], the multi-objective whale optimization algorithm (MOWOA) [20], the vector-evaluated genetic algorithm (VEGA) [30], the niched Pareto genetic algorithm (NPGA) [28], the non-dominated sorting genetic algorithm (NSGA) [22], the non-dominated sorting genetic algorithm II (NSGA-II) [23], the non-dominated sorting genetic algorithm III (NSGA-III) [24], and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [26]. These algorithms can approach the true Pareto optimal front on some multi-objective problems. However, according to the NFL theorem, there may be problems that these algorithms cannot solve [41]. So, new multi-objective algorithms should be proposed. In the next section, a new multi-objective optimization algorithm is proposed to solve the multi-objective optimization problem.

3. Multi-Objective Besiege and Conquer Algorithm (MOBCA)

3.1. Besiege and Conquer Algorithm (BCA)

The BCA was proposed by Jiang et al. in 2023. To enable the BCA to solve multi-objective problems, its single-objective version deserves a brief introduction here. The BCA optimization process and its position update can be briefly summarized by the following mathematical formulas:
$$
S_{j,d}^{t+1} =
\begin{cases}
B_d^t + \left| A_{r,d}^t - A_{i,d}^t \right| \times k_1, & \text{if } rand < BCB, \\
A_{r,d}^t + \left| A_{r,d}^t - A_{i,d}^t \right| \times k_2, & \text{otherwise},
\end{cases}
$$
where $S_{j,d}^{t+1}$ is the $j$th soldier in the $d$th dimension at the $(t+1)$th iteration, $B_d^t$ is the current best army at the $t$th iteration, $A_{i,d}^t$ is the $i$th army in the $d$th dimension at the $t$th iteration, $A_{r,d}^t$ is a random army in the $d$th dimension at the $t$th iteration, and $k_1$ and $k_2$ are disturbance coefficients.
$k_1$ and $k_2$ can be calculated using the following two equations:
$$k_1 = \sin(2 \times \pi \times rand),$$
$$k_2 = \cos(2 \times \pi \times rand).$$
The parameter $BCB$ is set to switch between global search and local search. It can change dynamically according to the distance between the current army and the best army. If the current army is the best army in the population, $BCB$ is changed to 0.2 in order to enhance the global search, which avoids local stagnation.
The BCA starts the optimization process with a randomly generated population. The initial population is 1/3 of the set population size, which means that each army has three soldiers. During the position update, each army generates three soldiers according to Equation (1). However, the positions of these soldiers are only stored when the army is updated: a soldier's position is eliminated if its fitness value is not better than the army's once the evaluation process is completed. When all armies have been updated, the algorithm judges the distance between the current army and the global optimum and adjusts the value of $BCB$ accordingly, which controls the switch between global and local search. This is an aggressive position update strategy, and it gives the BCA strong global optimization capabilities.
The number of function evaluations ($FEs$) is updated as described in Equation (4):
$$FEs = FEs + nSoldiers \times nArmies,$$
where $nSoldiers$ is the number of soldiers in each army and $nArmies$ is the number of armies. The algorithm outputs the best solutions when $FEs$ reaches the maximum number of function evaluations ($Max\_FEs$).
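The position update in Equations (1)–(3) can be sketched as follows (a minimal illustration; the argument names mirror the symbols in the equations, and the default `BCB` value is a placeholder, since the BCA adjusts it dynamically):

```python
import math
import random

def update_soldier(best, army_i, army_r, BCB=0.5):
    """Generate one soldier per Equation (1): per dimension, move near the
    best army (besiege) or near a random army (conquer), perturbed by the
    coefficients k1 and k2 of Equations (2) and (3)."""
    soldier = []
    for d in range(len(army_i)):
        k1 = math.sin(2 * math.pi * random.random())  # Equation (2)
        k2 = math.cos(2 * math.pi * random.random())  # Equation (3)
        if random.random() < BCB:
            soldier.append(best[d] + abs(army_r[d] - army_i[d]) * k1)
        else:
            soldier.append(army_r[d] + abs(army_r[d] - army_i[d]) * k2)
    return soldier
```

Since $|k_1|, |k_2| \le 1$, each new coordinate stays within $|A_{r,d} - A_{i,d}|$ of either the best army or the random army.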

3.2. Multi-Objective Besiege and Conquer Algorithm (MOBCA)

To apply the BCA to multi-objective optimization problems, we need to introduce related mechanisms. These mechanisms function roughly as in MOPSO [19] and include the grid, archive, and leader selection mechanisms.
The purpose of the grid mechanism is to determine the location of each solution in the objective space. During the initialization phase, the objective space is divided into several grid cells according to the scalar parameter $div$. As the optimization proceeds, more solutions are obtained by the MOA; if a solution falls outside the grid at some iteration, the grid is updated to cover the new solution. Specifically, the grid determines the relative positions of solutions in objective space, and this information is used to decide whether a solution is retained, which is associated with the archive. In the literature, MOGWO utilizes the grid mechanism to decide the $\alpha$, $\beta$, and $\delta$ wolves [8]: a solution located in the sparsest grid cell is chosen as the $\alpha$ wolf, while solutions in the second- and third-sparsest cells are chosen as $\beta$ and $\delta$. This method helps the algorithm maintain diversity in its solutions.
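As an illustration, locating a solution in the objective-space grid amounts to normalizing each objective value and discretizing it into one of $div$ segments (a sketch under our own naming; the grid updating that covers newly found solutions is omitted):

```python
def grid_index(obj, mins, maxs, div):
    """Map an objective vector to its cell in a hypergrid with `div`
    segments per objective, spanning [mins, maxs] in objective space."""
    idx = []
    for v, lo, hi in zip(obj, mins, maxs):
        if hi == lo:
            idx.append(0)  # degenerate objective range: single cell
        else:
            idx.append(min(int((v - lo) / (hi - lo) * div), div - 1))
    return tuple(idx)
```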
The archive stores non-dominated solutions during the iterations. The maximum size of the archive equals the population size. In each iteration, the archive members are compared with the new offspring population, and the archive is updated with the latest non-dominated solutions as its new members. In this replacement process, the archive size may exceed the maximum limit, and the grid mechanism comes in handy to maintain the diversity of the non-dominated population: the grid locations are rearranged, and surplus members are deleted based on crowding information, so solutions occupying a crowded grid cell are more likely to be deleted. In the GWO, the algorithm only stores the first three wolves of the population [16]. However, this is not suitable for an MOA: we expect to obtain more diverse solutions to guide the subsequent optimization process, so an archive equal to the population size is necessary.
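The archive update described above can be sketched as follows (assuming minimization and a caller-supplied `cell_of` function that maps an objective vector to its grid cell; the tie-breaking inside a crowded cell is simplified to removing the first member found):

```python
from collections import Counter

def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, offspring, max_size, cell_of):
    """Merge offspring into the archive, keep only non-dominated members,
    and trim members from the most crowded grid cells over the size limit."""
    merged = archive + offspring
    nd = [p for p in merged if not any(dominates(q, p) for q in merged if q is not p)]
    while len(nd) > max_size:
        counts = Counter(cell_of(p) for p in nd)
        crowded = max(counts, key=counts.get)  # most occupied cell
        for i, p in enumerate(nd):
            if cell_of(p) == crowded:
                del nd[i]  # drop one member of the crowded cell
                break
    return nd
```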
In PSO, the iterative update of particles is based on the global best and the personal best [17]. In MOPs, the personal best is easily updated whenever the new solution dominates the old one; in contrast, the global best is hard to update. In single-objective optimization, it is easy to select a leader to guide the population toward a promising area near the global optimum. In multi-objective optimization, however, comparing one solution with another is hard due to the Pareto optimality concepts introduced before. Therefore, a leader selection mechanism is needed. In multi-objective problems, promising solutions approach the Pareto front and maintain good population diversity, and the archive is composed of many such promising solutions. So, the leader selection mechanism uses the roulette method to select one of the least crowded solutions in the archive as the leader.
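The leader selection mechanism can then be sketched as roulette-wheel selection whose weights are inversely proportional to grid-cell occupancy, so solutions in sparse cells are favored (again assuming a caller-supplied `cell_of` mapping):

```python
import random
from collections import Counter

def select_leader(archive, cell_of, rng=random):
    """Pick a leader from the archive by roulette wheel: a member's weight
    is the reciprocal of its grid cell's occupancy, favoring sparse cells."""
    counts = Counter(cell_of(p) for p in archive)
    weights = [1.0 / counts[cell_of(p)] for p in archive]
    return rng.choices(archive, weights=weights, k=1)[0]
```

With one solution alone in its cell and nine sharing another cell, the lonely solution carries half of the total weight.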
The archive mechanism is introduced into the BCA to store non-dominated solutions. The global best solution is selected from the archive to lead the population. In the MOBCA, every soldier may be assigned a different global best. This guarantees diversity in the population: each soldier moves toward the Pareto front in a different direction. Once the armies exceed the maximum size, redundant armies are deleted on the basis of the grid mechanism. The convergence of the MOBCA is guaranteed because it employs the same mathematical model as the BCA. During the optimization process, the BCA converges by changing the positions of its search agents, a behavior that guarantees the convergence ability of an algorithm according to [42]. The MOBCA inherits all the characteristics of the BCA, which means that the search agents explore and exploit the search space in the same manner.
The main difference is that the MOBCA is based on a set of archive members, while the BCA only saves and improves three of the best solutions.
The workflow of the MOBCA is shown in Figure 4, and its pseudocode is presented in Algorithm 1.
Algorithm 1: MOBCA: Multi-objective besiege and conquer algorithm.

3.3. Computational Complexity

The complexity of the proposed MOBCA depends on the number of decision variables, the number of objectives, and the population size. Consider a multi-objective problem with $D$ decision variables, $M$ objectives, and $N$ individuals as an example. The MOBCA mainly includes updating soldiers, updating the archive, and updating armies.
The complexity of updating soldiers is determined by $N$, $D$, and $Max\_FEs$. This process is executed $Max\_FEs/N$ times; thus, its computational complexity is $O(D \times N \times Max\_FEs/N)$.
Updating the archive involves deleting redundant solutions, and its complexity is $O(M \times N^2 \times Max\_FEs/N)$.
In the MOBCA, after the archive update, we must also update the armies for the next generation; this process is similar to the archive update. Its complexity is $O(M \times N^2 \times Max\_FEs/N + nArmies \times Max\_FEs/N)$.
Since $D$, $M$, and $nArmies$ are far smaller than $N$, the final complexity of the proposed MOBCA is $O(Max\_FEs/N \times N^2)$, i.e., $O(Max\_FEs \times N)$.

4. Experiments

The experiment code is executed in a MATLAB R2022b environment under the Windows 10 operating system. All simulations are carried out on a computer with an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz and 16 GB of memory.

4.1. Experimental Settings

The proposed algorithm is compared with three well-known algorithms, including the multi-objective particle swarm optimizer (MOPSO) [19], the non-dominated sorting genetic algorithm III (NSGAIII) [24], and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [26]. The default parameters in PlatEMO v4.3 [43] are used.
Different from the evaluation of a single-objective optimization algorithm, the performance of a multi-objective optimization algorithm needs to be evaluated with dedicated metrics. The performance evaluation indices are calculated as follows:
The inverted generational distance (IGD) is used to measure convergence [44]; its mathematical formula can be expressed as Equation (5):
$$\mathrm{IGD} = \frac{\sqrt{\sum_{i=1}^{n_t} d_i^2}}{n},$$
where $n_t$ is the size of the true Pareto optimal solution set, $d_i$ indicates the Euclidean distance (ED) between the $i$th member of the true Pareto optimal solution set and the nearest member of the obtained solution set, and $n$ represents the number of obtained Pareto solutions.
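Following the definitions above, Equation (5) can be computed with a short sketch (illustrative Python; `true_front` and `obtained` are lists of objective vectors, and the nearest-point distance is our reading of $d_i$):

```python
import math

def igd(true_front, obtained):
    """IGD per Equation (5): square root of the summed squared distances
    from each true Pareto point to its nearest obtained point, divided by
    the number of obtained solutions. Lower is better."""
    d2 = sum(min(math.dist(t, s) for s in obtained) ** 2 for t in true_front)
    return math.sqrt(d2) / len(obtained)
```

A perfectly converged set yields an IGD of zero.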
IGD can evaluate the distance between the Pareto optimal solution and the actual obtained solution. However, evaluating the sparsity of the Pareto solution set requires the use of another evaluation indicator: hypervolume (HV).
Hypervolume is a widely employed performance metric in the domain of multi-objective optimization [45,46,47]. It quantifies the hypervolume enclosed by a set of solutions within the objective space, representing the volume of the space dominated by these solutions. HV serves as a crucial indicator for assessing the quality of solution sets generated by different multi-objective optimization algorithms, where larger HV values typically indicate superior performance.
The calculation of HV involves determining the volume enclosed by the Pareto front formed by the set of solutions; it is computed as the integral over the dominated portion of the objective space. Formally, the hypervolume is defined in Equation (6):
$$HV(X) = \int_{R^m} H(X, z) \, dz,$$
where $X$ represents the set of solutions, $R^m$ is the objective space, and $H(X, z)$ is the hypervolume contribution given in Equation (7):
$$H(X, z) = \max_{x \in X} \prod_{i=1}^{m} \max(0, z_i - x_i).$$
Here, z is a reference point in the objective space. The integral spans the entire objective space, and the computation involves evaluating the contribution of each solution in X to the overall hypervolume.
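For the common two-objective minimization case, the integral reduces to summing rectangular slabs, which can be sketched as follows (our own helper, not from any library; higher-dimensional HV requires more elaborate algorithms):

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a two-objective (minimization) solution set with
    respect to reference point `ref`: sweep the points left to right and
    add the rectangular slab each one contributes."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:  # skip points dominated by an earlier one
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

For a single point (1, 1) with reference point (2, 2), the dominated region is the unit square, so the HV is 1.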
Utilizing the above evaluation metrics allows us to quantitatively compare MOBCA with MOPSO, NSGAIII, and MOEA/D. In addition, we can illustrate the best set of Pareto optimal solutions obtained by each algorithm on the search space. This method allows us to compare the performance of the algorithms qualitatively. All algorithms are run 30 times on the test problems, and the statistical results of these 30 runs are provided in Table 2 and Table 3. Note that we use 10,000 function evaluations for each algorithm. The qualitative results are also provided in Figure 5, Figure 6 and Figure 7.

4.2. Experimental Results and Discussion

The results of algorithms on the test functions are presented in Table 2 and Table 3, and the best Pareto optimal fronts obtained by all algorithms are illustrated in Figure 5. At the same time, the tracking results of HV and IGD during the iteration process are shown in Figure 6 and Figure 7.

4.2.1. Results on ZDT Test Suite

Table 2 and Table 3 show that the MOBCA outperforms the other algorithms on three of the five ZDT test problems. The superiority can be seen in the corresponding columns, which show the higher accuracy and better robustness of the MOBCA compared with the others on ZDT1, ZDT3, and ZDT6.
The shape of the Pareto optimal fronts obtained by the four algorithms on ZDT1, ZDT2, ZDT3, and ZDT6 is illustrated in Figure 5a–d. Inspecting these figures, it may be observed that NSGAIII shows the poorest convergence on ZDT6 despite its good coverage there; on the remaining problems, NSGAIII and the MOBCA both provide very good convergence toward the true Pareto optimal fronts. The most interesting pattern is that the Pareto optimal solutions obtained by NSGAIII show higher coverage than those of the MOBCA on ZDT2, whereas the coverage of the MOBCA on ZDT3 is better than that of NSGAIII. This suggests that the MOBCA has the potential to outperform NSGAIII in finding a Pareto optimal front with separate regions.
The convergence curves of HV and IGD are presented in Figure 6a–d and Figure 7a–d. Figure 6a,b and Figure 7a,b show that the MOBCA converges to the Pareto front faster than NSGAIII, indicating that the MOBCA can quickly converge to and cover the Pareto front in comparison with NSGAIII. However, the MOBCA demonstrates only a weak advantage over NSGAIII on ZDT3. Based on Table 2, we can conclude that although the MOBCA shows only a weak advantage on ZDT3, it is a stable and robust way to solve separated MOPs.

4.2.2. Results on IMOP Test Suite

The previous section investigated the performance of the proposed MOBCA on the ZDT test suite. Most of the test functions in that suite are not multi-modal. To benchmark the proposed algorithm on a more challenging test set, this subsection employs the IMOP benchmark functions. These functions are among the most difficult test functions in the multi-objective optimization literature and can confirm whether the superiority of the MOBCA is significant.
Inspecting the results in Table 2, it is evident that the MOBCA outperforms MOEA/D, MOPSO, and NSGAIII on all of the IMOP test functions. Since IGD is a good metric for benchmarking the convergence of an algorithm, these results indicate that the MOBCA converges better on these benchmark functions. To assess the coverage of the algorithms, the HV metric provides a quantitative analysis in Table 3. The MOBCA failed to achieve the best result on IMOP1, and we noticed that it obtained the worst standard deviation on IMOP1 regarding HV. This means that across the 30 experimental runs, the HV obtained by the MOBCA is unstable, which may be due to local optima causing the convergence of the MOBCA to stagnate.
Figure 5e–l shows the Pareto fronts obtained by the algorithms on the IMOPs. The figures show that the MOBCA is closer to the true Pareto front, and its coverage is broader than that of the other algorithms. Especially in Figure 5f–i,k, the MOBCA demonstrates overwhelming advantages in coverage of the Pareto front. This means that the MOBCA can provide decision-makers with more high-quality solutions when solving practical problems. Although the MOBCA provides results similar to NSGAIII when solving IMOP5, as shown in Figure 5i, it is significantly better than NSGAIII when solving IMOP7, as shown in Figure 5k.
The convergence speed of the MOBCA and the coverage of the obtained Pareto front are reflected by the HV and IGD. Figure 6 and Figure 7 demonstrate that the MOBCA converges to the Pareto optimal front more quickly on most of the IMOPs. In particular, Figure 6f,k and Figure 7f,k show that the MOBCA avoids local optima effectively compared with the other algorithms. The MOBCA was developed from the single-objective BCA, so it inherits the excellent convergence speed of the BCA; Figure 6e,h,l and Figure 7e,h,l support this view.

4.2.3. Results on Real-World Problems

The last part of Table 2 and Table 3 presents real-world multi-objective optimization problems (RWMOPs). The MOBCA shows outstanding results in terms of the IGD metric but fails to achieve good results in HV. Due to the complexity and dynamics of real-life problems, the MOBCA, initially designed for benchmark multi-objective problems, may have certain flaws. However, it is still undeniable that, at its current performance, the MOBCA can provide high-quality solutions compared with the other algorithms.

4.2.4. Discussion

The qualitative and quantitative results show that the MOBCA benefits from high convergence and coverage. The high convergence of the MOBCA is inherited from the BCA. The main mechanisms that guarantee convergence in the BCA and MOBCA are the besiege mechanism and the conquer mechanism. These two mechanisms emphasize exploitation and convergence in proportion to the number of iterations. Since, in each iteration of the MOBCA, we select one solution from the archive for each army and require the soldiers to move around the armies, being trapped in local optima might be a concern. However, the results show that the MOBCA does not suffer from local optima.

5. Conclusions and Future Work

This paper proposes a multi-objective version of the population optimization algorithm BCA. By introducing the grid mechanism, archive mechanism, and leader selection mechanism into the single-objective BCA, a new multi-objective algorithm, the MOBCA, is generated, which enables the BCA to deal with multi-objective optimization problems.
The proposed algorithm, the MOBCA, is compared with several excellent multi-objective optimization algorithms. These include those inspired by single-objective algorithms: MOPSO, MOEA/D, and NSGAIII. Our experimental results show that the MOBCA is superior to all compared algorithms in this paper in terms of convergence. Furthermore, the archive mechanism and grid mechanism ensure the diversity of the distribution of Pareto optimal solutions obtained by the MOBCA.
Based on the experimental results, the MOBCA shows a convergence speed similar to that of the BCA. It can converge to the Pareto front effectively on some MOPs. The diversity in solutions is also better than that of the compared algorithms, except on the real-world problems. This shows that the proposed MOBCA still has certain flaws in terms of diversity, and there is considerable room for improvement.
For future work, a new multi-objective optimization mechanism should be introduced to ensure the diversity of the distribution of Pareto optimal solutions obtained by the algorithm. At the same time, a new position update formula should be considered because the MOBCA failed to determine a sufficient number of Pareto optimal solutions in some test functions.

Author Contributions

J.J. and J.W. conceived and designed the methodology and experiments; J.W. performed the experiments, analyzed the results and wrote the paper; J.W., J.L., X.Y. and Z.H. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the financial support from the Foundation of the Jilin Provincial Department of Science and Technology (No.YDZJ202201ZYTS565) and the Foundation of Social Science of Jilin Province, China (No.2022B84).

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Muthukumaran, S.; Geetha, P.; Ramaraj, E. Multi-Objective Optimization with Artificial Neural Network Based Robust Paddy Yield Prediction Model. Intell. Autom. Soft Comput. 2023, 35, 216–230.
  2. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Gazelle Optimization Algorithm: A novel nature-inspired metaheuristic optimizer. Neural Comput. Appl. 2023, 35, 4099–4131.
  3. Huang, C.; Zhou, X.; Ran, X.; Liu, Y.; Deng, W.; Deng, W. Co-evolutionary competitive swarm optimizer with three-phase for large-scale complex optimization problem. Inf. Sci. 2023, 619, 2–18.
  4. Rahimi, I.; Gandomi, A.H.; Chen, F.; Mezura-Montes, E. A Review on Constraint Handling Techniques for Population-based Algorithms: From single-objective to multi-objective optimization. Arch. Comput. Methods Eng. 2023, 30, 2181–2209.
  5. Mirjalili, S.Z.; Mirjalili, S.; Saremi, S.; Faris, H.; Aljarah, I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 2018, 48, 805–820.
  6. Pilechiha, P.; Mahdavinejad, M.; Rahimian, F.P.; Carnemolla, P.; Seyedzadeh, S. Multi-objective optimisation framework for designing office windows: Quality of view, daylight and energy efficiency. Appl. Energy 2020, 261, 114356.
  7. Branke, J.; Kaußler, T.; Schmeck, H. Guidance in evolutionary multi-objective optimization. Adv. Eng. Softw. 2001, 32, 499–507.
  8. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.d.S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 2016, 47, 106–119.
  9. Konak, A.; Coit, D.W.; Smith, A.E. Multi-objective optimization using genetic algorithms: A tutorial. Reliab. Eng. Syst. Saf. 2006, 91, 992–1007.
  10. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  11. Mirjalili, S.; Jangir, P.; Saremi, S. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2017, 46, 79–95.
  12. Liu, B.; Jiao, S.; Shen, Y.; Chen, Y.; Wu, G.; Chen, S. A dynamic hybrid trust network-based dual-path feedback consensus model for multi-attribute group decision-making in intuitionistic fuzzy environment. Inf. Fusion 2022, 80, 266–281.
  13. Li, X.; Yu, Y.; Huang, M. Multi-objective cooperative coevolution algorithm with a Master–Slave mechanism for Seru Production. Appl. Soft Comput. 2022, 119, 108593.
  14. Ghoddousi, P.; Eshtehardian, E.; Jooybanpour, S.; Javanmardi, A. Multi-mode resource-constrained discrete time–cost-resource optimization in project scheduling using non-dominated sorting genetic algorithm. Autom. Constr. 2013, 30, 216–227.
  15. Makhadmeh, S.N.; Alomari, O.A.; Mirjalili, S.; Al-Betar, M.A.; Elnagar, A. Recent advances in multi-objective grey wolf optimizer, its versions and applications. Neural Comput. Appl. 2022, 34, 19723–19749.
  16. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  17. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  18. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
  19. Coello, C.C.; Lechuga, M.S. MOPSO: A proposal for multiple objective particle swarm optimization. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No. 02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1051–1056.
  20. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  21. Dhiman, G.; Singh, K.K.; Soni, M.; Nagar, A.; Dehghani, M.; Slowik, A.; Kaur, A.; Sharma, A.; Houssein, E.H.; Cengiz, K. MOSOA: A new multi-objective seagull optimization algorithm. Expert Syst. Appl. 2021, 167, 114150.
  22. Srinivas, N.; Deb, K. Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evol. Comput. 1994, 2, 221–248.
  23. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  24. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2013, 18, 577–601.
  25. Liang, J.J.; Yue, C.; Qu, B.Y. Multimodal multi-objective optimization: A preliminary study. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2454–2461.
  26. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
  27. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791.
  28. Horn, J.; Nafpliotis, N.; Goldberg, D.E. A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, Orlando, FL, USA, 27–29 June 1994; pp. 82–87.
  29. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the strength Pareto evolutionary algorithm. TIK-Report 2001, 103.
  30. Schaffer, J.D. Multiple objective optimization with vector evaluated genetic algorithms. In Proceedings of the First International Conference on Genetic Algorithms and Their Applications; Psychology Press: London, UK, 2014; pp. 93–100.
  31. Xiao, Y.; Wu, Y. Robust visual tracking based on modified mayfly optimization algorithm. Image Vis. Comput. 2023, 135, 104691.
  32. Qian, L.; Khishe, M.; Huang, Y.; Mirjalili, S. SEB-ChOA: An improved chimp optimization algorithm using spiral exploitation behavior. Neural Comput. Appl. 2024, 36, 4763–4786.
  33. Zhang, Y.; Tian, Y.; Zhang, X. Improved SparseEA for sparse large-scale multi-objective optimization problems. Complex Intell. Syst. 2023, 9, 1127–1142.
  34. Li, W.; Zhang, T.; Wang, R.; Huang, S.; Liang, J. Multimodal multi-objective optimization: Comparative study of the state-of-the-art. Swarm Evol. Comput. 2023, 77, 101253.
  35. Liu, S.; Lin, Q.; Li, J.; Tan, K.C. A survey on learnable evolutionary algorithms for scalable multiobjective optimization. IEEE Trans. Evol. Comput. 2023, 27, 1941–1961.
  36. Ngatchou, P.; Zarei, A.; El-Sharkawi, A. Pareto multi objective optimization. In Proceedings of the 13th International Conference on Intelligent Systems Application to Power Systems, Arlington, VA, USA, 6–10 November 2005; pp. 84–91.
  37. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In Parallel Problem Solving from Nature PPSN VI: 6th International Conference Paris, France, September 18–20, 2000; Proceedings 6; Springer: Berlin/Heidelberg, Germany, 2000; pp. 849–858.
  38. Ishibuchi, H.; Imada, R.; Setoguchi, Y.; Nojima, Y. Reference point specification in inverted generational distance for triangular linear Pareto front. IEEE Trans. Evol. Comput. 2018, 22, 961–975.
  39. Tariq, I.; AlSattar, H.A.; Zaidan, A.; Zaidan, B.; Abu Bakar, M.; Mohammed, R.; Albahri, O.S.; Alsalem, M.; Albahri, A.S. MOGSABAT: A metaheuristic hybrid algorithm for solving multi-objective optimisation problems. Neural Comput. Appl. 2020, 32, 3101–3115.
  40. Zhan, Z.H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev. 2022, 55, 59–110.
  41. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  42. Van den Bergh, F.; Engelbrecht, A.P. A study of particle swarm optimization particle trajectories. Inf. Sci. 2006, 176, 937–971.
  43. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87.
  44. Sierra, M.R.; Coello Coello, C.A. Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance. In Evolutionary Multi-Criterion Optimization: Third International Conference, EMO 2005, Guanajuato, Mexico, March 9–11, 2005; Proceedings 3; Springer: Berlin/Heidelberg, Germany, 2005; pp. 505–519.
  45. López-Ruiz, S.; Hernández-Castellanos, C.I.; Rodríguez-Vázquez, K. Multi-objective optimization of neural network with stochastic directed search. Expert Syst. Appl. 2024, 237, 121535.
  46. Fromer, J.C.; Coley, C.W. Computer-aided multi-objective optimization in small molecule discovery. Patterns 2023, 4.
  47. Palm, H.; Arndt, L. Reinforcement Learning-Based Hybrid Multi-Objective Optimization Algorithm Design. Information 2023, 14, 299.
Figure 1. Single-objective optimization problem.
Figure 2. Different situations in a multi-objective problem (for example: two objectives). (a) Situation 1: solution x dominates y. (b) Situation 2: x and y do not dominate each other.
Figure 3. Pareto fronts for different optimization directions.
Figure 4. The workflow of the BCA algorithm.
Figure 5. Obtained Pareto fronts by MOBCA, MOPSO, MOEA/D, and NSGAIII.
Figure 6. The convergence of hypervolume with the number of function evaluations.
Figure 7. The convergence of inverted generational distance with the number of function evaluations.
Table 1. Classical and novel multi-objective optimization algorithms.
Abbreviations | Algorithms | Authors and Year
MOGWO | Multi-Objective Grey Wolf Optimizer | Mirjalili et al., 2016 [8]
MOPSO | Multi-Objective Particle Swarm Optimization | Coello and Lechuga, 2002 [19]
MOALO | Multi-Objective Ant Lion Optimizer | Mirjalili et al., 2017 [11]
MOWOA | Multi-Objective Whale Optimization Algorithm | Mirjalili and Lewis, 2016 [20]
MOGOA | Multi-Objective Grasshopper Optimization Algorithm | Mirjalili et al., 2018 [5]
MOGA | Multi-Objective Genetic Algorithm | Konak et al., 2006 [9]
MOSOA | Multi-Objective Seagull Optimization Algorithm | Dhiman et al., 2021 [21]
NSGA | Non-Dominated Sorting Genetic Algorithm | Srinivas and Deb, 1994 [22]
NSGAII | Non-Dominated Sorting Genetic Algorithm II | Deb et al., 2002 [23]
NSGAIII | Non-Dominated Sorting Genetic Algorithm III | Deb and Jain, 2013 [24]
DN_NSGAII | Decision Space-Based Niching NSGAII | Liang et al., 2016 [25]
MOEA/D | Multi-Objective Evolutionary Algorithm based on Decomposition | Zhang and Li, 2007 [26]
RVEA | Reference Vector-Guided Evolutionary Algorithm | Cheng et al., 2016 [27]
NPGA | Niched Pareto Genetic Algorithm | Horn et al., 1994 [28]
SPEA2 | Improving the Strength Pareto Evolutionary Algorithm | Zitzler et al., 2001 [29]
VEGA | Vector-Evaluated Genetic Algorithm | Schaffer, 2014 [30]
Table 2. Results of inverted generational distance (IGD) over 30 independent runs.
Problem | M | D | MOEA/D | MOPSO | NSGAIII | MOBCA
IMOP1 | 2 | 10 | 3.6567 × 10 1 (5.55 × 10 3) − | 4.9790 × 10 1 (2.62 × 10 1) − | 2.0357 × 10 1 (7.16 × 10 2) − | 9.4338 × 10 2 (2.43 × 10 1)
IMOP2 | 2 | 10 | 7.8495 × 10 1 (1.07 × 10 4) − | 5.9846 × 10 1 (1.35 × 10 1) − | 4.9146 × 10 1 (8.38 × 10 2) − | 1.8948 × 10 1 (2.33 × 10 1)
IMOP3 | 2 | 10 | 5.6532 × 10 1 (6.24 × 10 2) − | 2.2377 × 10 1 (2.50 × 10 1) − | 4.9451 × 10 1 (9.06 × 10 2) − | 5.0760 × 10 2 (1.21 × 10 1)
IMOP4 | 3 | 10 | 6.1844 × 10 1 (1.28 × 10 1) − | 5.0946 × 10 1 (2.95 × 10 1) − | 2.8441 × 10 1 (1.83 × 10 1) − | 2.4192 × 10 2 (5.30 × 10 3)
IMOP5 | 3 | 10 | 5.6834 × 10 1 (5.38 × 10 2) − | 6.1207 × 10 1 (1.58 × 10 1) − | 7.9371 × 10 2 (3.94 × 10 2) = | 6.3583 × 10 2 (8.71 × 10 3)
IMOP6 | 3 | 10 | 3.5418 × 10 1 (2.17 × 10 1) − | 4.9380 × 10 1 (1.63 × 10 1) − | 1.3460 × 10 1 (1.69 × 10 1) = | 4.7965 × 10 2 (2.75 × 10 3)
IMOP7 | 3 | 10 | 9.3885 × 10 1 (4.21 × 10 5) − | 9.2395 × 10 1 (1.56 × 10 2) − | 8.9101 × 10 1 (4.29 × 10 2) − | 2.3145 × 10 1 (3.28 × 10 1)
IMOP8 | 3 | 10 | 1.0612 × 10 0 (5.61 × 10 3) − | 1.7820 × 10 1 (2.63 × 10 2) − | 1.6625 × 10 1 (1.53 × 10 1) − | 1.2358 × 10 1 (7.28 × 10 3)
ZDT1 | 2 | 30 | 1.3888 × 10 1 (6.42 × 10 2) − | 5.5105 × 10 1 (1.02 × 10 1) − | 1.7452 × 10 2 (3.26 × 10 3) − | 9.7979 × 10 3 (1.06 × 10 3)
ZDT2 | 2 | 30 | 5.2995 × 10 1 (6.18 × 10 2) − | 1.3942 × 10 0 (2.39 × 10 1) − | 3.4393 × 10 2 (3.23 × 10 2) + | 9.9316 × 10 2 (2.36 × 10 1)
ZDT3 | 2 | 30 | 1.5911 × 10 1 (4.00 × 10 2) − | 4.4792 × 10 1 (7.11 × 10 2) − | 1.8192 × 10 2 (8.71 × 10 3) − | 1.2143 × 10 2 (1.45 × 10 3)
ZDT4 | 2 | 10 | 5.4634 × 10 1 (2.04 × 10 1) + | 1.0012 × 10 1 (4.40 × 10 0) + | 5.5155 × 10 1 (3.00 × 10 1) + | 2.4867 × 10 1 (1.61 × 10 1)
ZDT6 | 2 | 10 | 7.4800 × 10 2 (2.16 × 10 2) − | 2.4192 × 10 1 (3.75 × 10 1) = | 1.6163 × 10 1 (6.23 × 10 2) − | 6.9473 × 10 3 (8.46 × 10 4)
RWMOP5 | 2 | 4 | NaN (NaN) * | NaN (NaN) | 1.8897 × 10 0 (4.60 × 10 3) = | 1.8888 × 10 0 (3.33 × 10 4)
RWMOP9 | 2 | 4 | 1.6484 × 10 3 (5.43 × 10 2) − | 9.8115 × 10 8 (4.39 × 10 9) − | 1.3867 × 10 1 (4.16 × 10 1) − | 2.7131 × 10 0 (7.46 × 10 0)
RWMOP11 | 5 | 3 | 3.6986 × 10 6 (3.50 × 10 4) − | 4.2657 × 10 6 (1.21 × 10 6) − | 2.5476 × 10 6 (1.84 × 10 4) − | 2.4764 × 10 6 (8.51 × 10 4)
RWMOP14 | 2 | 5 | NaN (NaN) | 1.8521 × 10 6 (5.86 × 10 6) − | 1.1596 × 10 1 (1.48 × 10 1) − | 1.2783 × 10 2 (3.44 × 10 3)
RWMOP16 | 2 | 2 | NaN (NaN) | 1.1851 × 10 8 (4.93 × 10 8) − | 1.9990 × 10 3 (3.21 × 10 7) = | 1.9989 × 10 3 (1.32 × 10 18)
+/−/= | | | 1/14/0 | 1/15/1 | 2/12/4 |
* NaN indicates that no feasible solution was found. “+” and “−” indicate the number of test problems on which the compared algorithm performs significantly better or worse than the MOBCA, respectively. The symbol “=” indicates no significant difference between the MOBCA and the compared algorithm.
Table 3. Results of hypervolume (HV) over 30 independent runs.
Problem | M | D | MOEA/D | MOPSO | NSGAIII | MOBCA
IMOP1 | 2 | 10 | 9.6578 × 10 1 (1.71 × 10 3) + | 4.6753 × 10 1 (3.29 × 10 1) − | 9.4891 × 10 1 (4.17 × 10 2) + | 8.9643 × 10 1 (2.74 × 10 1)
IMOP2 | 2 | 10 | 9.0908 × 10 2 (3.87 × 10 7) − | 9.3116 × 10 2 (2.54 × 10 2) − | 9.5469 × 10 2 (8.72 × 10 3) − | 1.8745 × 10 1 (6.33 × 10 2)
IMOP3 | 2 | 10 | 1.7102 × 10 1 (5.18 × 10 2) − | 4.8640 × 10 1 (2.05 × 10 1) − | 2.7823 × 10 1 (5.04 × 10 2) − | 6.2497 × 10 1 (1.05 × 10 1)
IMOP4 | 3 | 10 | 6.5316 × 10 2 (4.14 × 10 2) − | 1.2923 × 10 1 (1.37 × 10 1) − | 2.1154 × 10 1 (1.02 × 10 1) − | 4.2134 × 10 1 (6.92 × 10 3)
IMOP5 | 3 | 10 | 4.2214 × 10 1 (2.69 × 10 2) − | 3.2787 × 10 1 (9.24 × 10 2) − | 5.2323 × 10 1 (1.77 × 10 2) = | 5.2656 × 10 1 (1.23 × 10 2)
IMOP6 | 3 | 10 | 2.4057 × 10 1 (1.82 × 10 1) − | 1.9050 × 10 1 (1.28 × 10 1) − | 4.4783 × 10 1 (1.21 × 10 1) = | 5.0203 × 10 1 (3.50 × 10 3)
IMOP7 | 3 | 10 | 9.0909 × 10 2 (2.22 × 10 7) − | 9.1362 × 10 2 (7.74 × 10 4) − | 9.4667 × 10 2 (5.23 × 10 3) − | 4.0964 × 10 1 (1.54 × 10 1)
IMOP8 | 3 | 10 | 7.0160 × 10 2 (1.92 × 10 3) − | 4.5644 × 10 1 (2.55 × 10 2) − | 4.7606 × 10 1 (4.79 × 10 2) − | 5.0825 × 10 1 (1.63 × 10 2)
ZDT1 | 2 | 30 | 5.7239 × 10 1 (5.19 × 10 2) − | 1.6321 × 10 1 (6.64 × 10 2) − | 6.9910 × 10 1 (4.47 × 10 3) − | 7.1044 × 10 1 (1.71 × 10 3)
ZDT2 | 2 | 30 | 1.0100 × 10 1 (1.58 × 10 2) − | 0.0000 × 10 0 (0.00 × 10 0) − | 4.0131 × 10 1 (3.12 × 10 2) + | 3.8379 × 10 1 (1.36 × 10 1)
ZDT3 | 2 | 30 | 5.9559 × 10 1 (6.32 × 10 2) = | 2.9460 × 10 1 (5.23 × 10 2) − | 5.9328 × 10 1 (2.29 × 10 2) − | 5.9629 × 10 1 (3.20 × 10 3)
ZDT4 | 2 | 10 | 1.6822 × 10 1 (1.31 × 10 1) + | 0.0000 × 10 0 (0.00 × 10 0) = | 2.4230 × 10 1 (1.73 × 10 1) + | 0.0000 × 10 0 (0.00 × 10 0)
ZDT6 | 2 | 10 | 2.8712 × 10 1 (2.65 × 10 2) − | 2.6527 × 10 1 (1.57 × 10 1) − | 2.0641 × 10 1 (5.67 × 10 2) − | 3.8490 × 10 1 (8.60 × 10 4)
RWMOP5 | 2 | 4 | NaN (NaN) * | NaN (NaN) | 4.3321 × 10 1 (1.05 × 10 3) + | 2.6988 × 10 1 (4.08 × 10 3)
RWMOP9 | 2 | 4 | 5.3041 × 10 2 (5.97 × 10 5) − | 6.0482 × 10 1 (1.71 × 10 1) + | 4.0899 × 10 1 (9.35 × 10 4) + | 4.0771 × 10 1 (5.24 × 10 4)
RWMOP11 | 5 | 3 | 5.8183 × 10 2 (2.63 × 10 3) − | 1.9772 × 10 3 (2.72 × 10 3) − | 9.3776 × 10 2 (7.35 × 10 4) + | 9.2078 × 10 2 (2.18 × 10 3)
RWMOP14 | 2 | 5 | NaN (NaN) | 9.9135 × 10 1 (5.79 × 10 3) + | 6.1549 × 10 1 (1.49 × 10 3) + | 3.4281 × 10 1 (7.80 × 10 3)
RWMOP16 | 2 | 2 | NaN (NaN) | 1.7356 × 10 1 (9.84 × 10 6) − | 7.6320 × 10 1 (3.28 × 10 5) + | 7.5211 × 10 1 (2.89 × 10 3)
+/−/= | | | 2/12/1 | 2/14/1 | 8/8/2 |
* NaN indicates that no feasible solution was found. “+” and “−” indicate the number of test problems on which the compared algorithm performs significantly better or worse than the MOBCA, respectively. The symbol “=” indicates no significant difference between the MOBCA and the compared algorithm.

Jiang, J.; Wu, J.; Luo, J.; Yang, X.; Huang, Z. MOBCA: Multi-Objective Besiege and Conquer Algorithm. Biomimetics 2024, 9, 316. https://doi.org/10.3390/biomimetics9060316
