Article

Coevolutionary Algorithm with Bayes Theorem for Constrained Multiobjective Optimization

1 School of Information Engineering, Sanming University, Sanming 365004, China
2 School of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China
3 School of Mathematics and Statistics, Changchun University of Technology, Changchun 130012, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(7), 1191; https://doi.org/10.3390/math13071191
Submission received: 9 March 2025 / Revised: 2 April 2025 / Accepted: 3 April 2025 / Published: 4 April 2025

Abstract

The effective resolution of constrained multi-objective optimization problems (CMOPs) requires a delicate balance between maximizing objectives and satisfying constraints. Previous studies have demonstrated that multi-swarm optimization models exhibit robust performance in CMOPs; however, their high computational resource demands can hinder convergence efficiency. This article proposes an environment selection model based on Bayes’ theorem, leveraging the advantages of dual populations. The model constructs prior knowledge using objective function values and constraint violation values, and then, it integrates this information to enhance selection processes. By dynamically adjusting the selection of the auxiliary population based on prior knowledge, the algorithm significantly improves its adaptability to various CMOPs. Additionally, a population size adjustment strategy is introduced to mitigate the computational burden of dual populations. By utilizing past prior knowledge to estimate the probability of function value changes, offspring allocation is dynamically adjusted, optimizing resource utilization. This adaptive adjustment prevents unnecessary computational waste during evolution, thereby enhancing both convergence and diversity. To validate the effectiveness of the proposed algorithm, comparative experiments were performed against seven constrained multi-objective optimization algorithms (CMOEAs) across three benchmark test sets and 12 real-world problems. The results show that the proposed algorithm outperforms the others in both convergence and diversity.

1. Introduction

In practical applications, constrained multi-objective optimization problems are commonly encountered in areas such as robot gripper optimization [1], energy-efficient strategy development [2], and vehicle scheduling [3]. These challenges necessitate balancing multiple objectives while enforcing various constraints, thereby incurring substantial computational complexity. Advances in intelligent computing have further refined and expanded research in this domain [4].
A constrained multi-objective optimization problem (CMOP) is generally formulated as follows:
$\min \; F(x) = \left( f_1(x), f_2(x), \ldots, f_M(x) \right)$ (1)
$\text{s.t.} \quad g_i(x) \le 0, \; i = 1, \ldots, p; \quad h_i(x) = 0, \; i = p+1, \ldots, L; \quad x \in S.$ (2)
where F(x) represents a vector-valued objective function composed of M sub-objectives, defining the dimensionality of the objective space. The decision vector x belongs to the decision space S and has a dimensionality of D. The problem is subject to L constraints, including p inequality constraints $g(x)$ and $L - p$ equality constraints $h(x)$, which collectively define the feasible region within S.
To evaluate solution quality, the constraint violation value (CV) is defined as follows:
$CV(x) = \sum_{i=1}^{L} C_i(x),$ (3)
where $C_i(x)$ is computed as follows:
$C_i(x) = \begin{cases} \max(0, \, g_i(x)), & 1 \le i \le p, \\ \max(0, \, |h_i(x)| - \delta), & p+1 \le i \le L, \end{cases}$ (4)
where $\delta$ is a small positive parameter that allows a slight relaxation of the equality constraints. A solution x is considered feasible if $CV(x) = 0$; otherwise, it is deemed infeasible. In Equation (2), constraint violations occur when any inequality constraint exceeds zero or when an equality constraint deviates beyond the tolerance $\delta$. The total violation magnitude is determined using Equations (3) and (4).
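To make the violation computation concrete, the short Python sketch below evaluates Equations (3) and (4) for a single candidate solution; the function name, the toy constraints, and the default value of delta are illustrative choices for this sketch rather than settings taken from the paper.

import numpy as np

def constraint_violation(x, ineq_funcs, eq_funcs, delta=1e-4):
    """Overall constraint violation CV(x) following Equations (3) and (4).

    ineq_funcs: callables g_i with g_i(x) <= 0 required
    eq_funcs:   callables h_i with h_i(x) = 0 required, relaxed by delta
    """
    cv = 0.0
    for g in ineq_funcs:                      # inequality part: max(0, g_i(x))
        cv += max(0.0, g(x))
    for h in eq_funcs:                        # equality part: max(0, |h_i(x)| - delta)
        cv += max(0.0, abs(h(x)) - delta)
    return cv

# Toy example (hypothetical constraints): one inequality and one equality on a 2-D vector.
x = np.array([0.6, 0.3])
ineq = [lambda x: x[0] + x[1] - 1.0]          # g(x) = x1 + x2 - 1 <= 0
eq = [lambda x: x[0] - 2.0 * x[1]]            # h(x) = x1 - 2*x2 = 0
print(constraint_violation(x, ineq, eq))      # prints 0.0, so x is feasible here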
In multi-objective optimization, the objective is to approximate the Pareto front (PF), which represents a set of optimal trade-off solutions. For CMOPs, the PF consists of both the unconstrained Pareto front (UPF) and the constrained Pareto front (CPF), thereby increasing problem complexity. EAs are particularly effective in continuous search spaces for approximating the PF. However, when constraints partition the decision space into multiple feasible regions, certain PF segments may be restricted to smaller areas, making it challenging to maintain solution diversity. Additionally, the existence of infeasible regions can disrupt evolutionary search, increasing the risk of premature convergence and degrading overall optimization performance. To address these challenges, various constraint-handling techniques (CHTs) have been developed. One such method, the ε -constraint approach, relaxes constraints to facilitate the traversal of feasible boundaries. However, this technique often struggles to effectively differentiate high-quality solutions, potentially leading to premature convergence at local optima and hindering the search for the global optimum. Furthermore, solving CMOPs often requires extensive parameter tuning, exacerbating algorithmic complexity and computational cost. While certain CHTs improve population evolution for specific CMOP instances, no single method has demonstrated universal effectiveness across all problem classes. CMOEAs based on multi-population or multi-task frameworks have demonstrated promising performance and have gained significant attention in recent years, including CCMDEA [5], CSEMT [6], CMORWMDP [7], and MACA [8]. However, a key challenge of this approach is its high computational cost [9]. When a substantial portion of computational resources is allocated to processing the UPF, obtaining the CPF becomes significantly more difficult. This trade-off creates a fundamental conflict between computational resource consumption and algorithmic convergence.
When addressing different problems, a single CHT often yields suboptimal results. Therefore, adapting the algorithm dynamically based on constraint variations may enhance its performance. Additionally, numerous studies have demonstrated that multi-population models generally achieve superior performance [5,6,10]. However, existing multi-population models often suffer from inefficient computational resource utilization [11]. To improve the adaptability of CHTs, we propose an environment selection model based on Bayes’ theorem. This model first establishes prior knowledge using objective function values and constraint violation information, which is then incorporated into an individual fitness calculation method. The fitness values are converted into selection probabilities via the softmax function to facilitate the adaptive selection of auxiliary populations. By leveraging prior knowledge, the algorithm dynamically adjusts auxiliary population selection, enhancing its adaptability to various CMOPs. To mitigate resource inefficiencies in dual-population models, we further apply Bayes’ theorem to dynamically adjust population sizes. Specifically, we design a population size adjustment strategy that utilizes past prior knowledge to infer the probability of function value changes. Based on these probabilities, population sizes are allocated accordingly, improving computational resource efficiency. To enhance algorithmic performance, we propose a coevolutionary algorithm with Bayes’ theorem for constrained multiobjective optimization (BTCMO). The main contributions of this work are as follows:
1. We propose a Bayesian-based environment selection model that utilizes objective function values and constraint violations to estimate fitness. The fitness values are then converted into selection probabilities via the softmax function, enhancing adaptability across different problems.
2. To mitigate the high computational cost of dual-population models, we propose a Bayesian-based population size adjustment strategy. By leveraging objective function values and Bayesian inference, the model estimates the rate of change in objective values and dynamically adjusts offspring allocation between the main and auxiliary populations, optimizing resource efficiency.
3. Comparative evaluations between BTCMO and seven CMOEAs across three test sets and 12 real-world problems indicate that BTCMO exhibits the most balanced and superior performance.

2. Research and Review of Existing CMOEAs

The key challenges faced by CMOEAs are preserving solution diversity and achieving convergence. This section presents a review of both classical and recently proposed CMOEAs, categorized based on their methodological approaches.

2.1. Optimization Model Based on Gradual Evolution

This model represents a widely adopted approach that does not explicitly account for the transition from the UPF to the CPF. Instead, it leverages advanced evolutionary algorithms and CHTs to construct an effective optimization framework [12], ensuring both convergence and diversity.
The methodological evolution in constrained multi-objective optimization has demonstrated significant progression since 2022. Yuan et al. [13] formulated a parametric crowding measurement framework through cost-embedded distance metrics. This geometrical quantification mechanism overcomes the deficiencies of Minkowski-based proximity assessment in origin-adjacent solution exploration. Through synergistic integration of constraint-aware distance computation and systemic restriction parameters, their computational topology establishes a particle evaluation hierarchy that simultaneously directs solution clusters toward Pareto-efficient basins and reduces premature local convergence propensity. Subsequent algorithmic innovations emerged through diversified CHTs. Jiao et al. [14] devised a tri-modal environmental selection architecture synthesizing decomposition-based partitioning, dominance relations, and metric-driven evaluation. This multimodal operator configuration effectively attenuates objective space distortion caused by heterogeneous constraint domains. In parallel developments, Zhang et al. [15] established a sequential constraint-satisfaction protocol through boundary-adaptive subproblem resolution. Their contraction-iterative mechanism progressively refines feasibility thresholds while maintaining bi-objective formulation consistency. Extending prior architectural innovations, Zuo et al. [16] conceived IDNSGA-III as an NSGA-III derivative incorporating Q-learning-driven operator selection. The self-adaptive mechanism is complemented by population phase identification for dynamic Q-table optimization. Architecturally, this modular design exhibits cross-algorithm compatibility, having enhanced four canonical CMOEA implementations through plug-in integration. While demonstrating efficacy in constrained space navigation via novel CHTs, current paradigms inadequately resolve the persistent challenge of late-stage diversity erosion stemming from homogeneous constraint processing. This unresolved limitation has catalyzed burgeoning research interest in phased optimization paradigms and coevolutionary population structures [4].

2.2. Optimization Model Based on Multi-Population

This model predominantly maintains a balance between the constraint and objective spaces through the collaboration of multiple populations, primarily two, each exploring the feasible and promising infeasible domains, respectively.
Fei Ming et al. [17] proposed a collaborative evolution framework based on a three-population structure, where each population evolves concurrently to address different aspects of the original CMOP as follows: an unconstrained assisted multi-objective optimization problem and a constraint relaxation problem. This configuration enables the algorithm to leverage both weak coevolution and constraint relaxation techniques, thereby improving its capability to handle CMOPs with complex feasible regions. Additionally, an innovative archive update strategy was introduced to manage the third population’s archive and to facilitate final solution selection during the search process. Jinlong Zhou et al. [18] developed a dual-population algorithm designed to approximate the CPF from both sides of the constrained boundary. Specifically, one population advances toward the CPF exclusively from feasible regions, while the other explores both feasible and infeasible regions concurrently. This bilateral approximation strategy fully exploits the advantages of dual populations, mitigating potential computational inefficiencies that may arise when considering only one side. This approach proves particularly effective in scenarios where the CPF is predominantly situated along constrained boundaries. Qiuzhen Wang et al. [19] introduced a coevolutionary algorithm employing a three-population framework, where the main population dynamically selects between two auxiliary populations based on problem characteristics and iteration progress. This adaptive mechanism enhances both convergence and solution diversity. In 2024, Fei Ming et al. [7] proposed a novel constrained multi-objective evolutionary algorithm incorporating a heterogeneous operator strategy tailored to practical mechanical design problems. This strategy integrates genetic algorithm operators to accelerate convergence with differential evolution operators to manage variable dependencies, thereby significantly enhancing performance in complex real-world applications. In the same year, Songbai Liu et al. [20] introduced an adaptive constraint selection mechanism that dynamically adjusts the search direction of the population by selectively activating or disregarding specific constraints. This mechanism expedites convergence and mitigates the risk of stagnation in local optima. However, while multi-population evolutionary methods enhance search efficiency, they inherently incur substantial computational overhead, which may negatively impact convergence. Therefore, a critical challenge remains in optimizing computational resource allocation while ensuring convergence efficacy.

2.3. Optimization Model Based on Multi-Stage Models

The fundamental concept of this model involves segmenting the evolutionary process into several phases, primarily two stages, to achieve a balance between convergence and diversity.
Jun Dong et al. [21] introduced TSTI in 2022, which emphasizes stage-specific metrics. In the first stage, Pareto non-dominated sorting and unbiased dual-objective models are used to assess convergence, diversity, and feasibility, aiming to yield well-distributed solutions and avoid local optima. In the subsequent stage, the algorithm focuses on rapidly converging the population to the PF by employing the SPEA2 fitness evaluation strategy, which prioritizes feasibility alongside convergence and diversity. In 2023, Yuhang Ma et al. [22] proposed CLBKR, incorporating an early learning stage that employs a classification model to discern whether a test problem is single-modal or multimodal. In later evolutionary phases, a metric (KTR) guides the evolution for single-modal problems, while a dedicated CHT directs evolution for multimodal cases. Fei Ming et al. [23] developed a new constrained multi-objective optimization algorithm in 2024 that operates in two stages. The first stage searches for feasible non-dominated solutions to identify promising regions, and the second stage performs a uniform search within these regions. This algorithm, which relies on a single parameter, exhibits strong generality and adaptability. Andre K. Y. Low et al. [24] proposed a hybrid framework that integrates evolutionary algorithms with Bayesian optimization. By introducing selection pressure during optimization, this framework accelerates convergence to the PF while minimizing sampling in infeasible regions, thus enhancing both efficiency and constraint handling. Specifically, the qNEVVI optimizer based on Bayesian optimization is employed for global exploration in the first stage, and evolutionary algorithm selection pressure is utilized in the second stage to expedite convergence and reduce infeasible sampling. In addition, pre-repair strategies are implemented to further optimize the handling of input constraints. Finally, Yongjing Lv et al. [25] introduced NSGA-II-MC, which adopts a multi-stage constraint processing strategy that includes unconstrained search, adaptive linear ϵ constraint processing, and constraint search based on the CDP to improve convergence and diversity. However, a key challenge remains in determining the appropriate switching point between stages; an incorrect transition may alter the direction of population evolution, making it difficult to achieve the desired CPF.
Both within the theoretical framework of gradual evolution and in practical applications, CHTs play an indispensable role in advancing the optimization process and ensuring the quality of results [13,16,26,27,28]. However, different types of CMOPs vary substantially in difficulty, so a single CHT embedded in a single algorithm can rarely handle all of them comprehensively and efficiently, and it is therefore difficult to achieve ideal optimization results [8,17,18]. This dilemma is the fundamental reason why multi-population and multi-stage models have attracted widespread attention and become research hotspots [20,29]. Although multi-stage models have shown clear advantages on some optimization problems, their limitations become evident when CMOPs of varying difficulty are involved [24,30,31]. For simple CMOPs, failing to switch to the next stage in a timely manner based on the characteristics of the problem strongly disrupts the diversity of the final solution set, reducing the quality and comprehensiveness of the solutions. Conversely, for complex CMOPs, entering the next stage too early makes the population prone to premature convergence, so the algorithm struggles to converge to the CPF and obtain the global optimum. In contrast, algorithms based on multiple populations have demonstrated better adaptability and effectiveness in current research and practice on CMOPs. These algorithms maintain population diversity and stability by flexibly applying different CHTs and by carefully constructing an information exchange model; through this exchange and collaboration mechanism, the constrained main population is driven across complex infeasible domains to reach the frontier region and achieve the optimization goals. However, running multiple populations in parallel inevitably consumes substantial computing resources, and this excessive consumption ultimately degrades the convergence performance of the algorithm, hindering its ability to converge quickly and accurately to the optimal solution. To address the adaptability problem of CHTs, an environment selection model based on Bayes' theorem is developed. The model first establishes prior knowledge from the objective function values and constraint violation values, and then integrates the objective values with this prior knowledge to construct an individual fitness measure. The computed fitness is converted into a selection probability via the softmax function, which is used to screen the auxiliary population. Guided by this prior knowledge, the algorithm dynamically adjusts the selection process of the auxiliary population, significantly enhancing its adaptability to various CMOPs. To address the high resource consumption of the dual-population mode, Bayes' theorem is also introduced into the regulation of population size, and a population size adjustment strategy is designed on this basis.
Bayes' theorem makes it possible to infer the probability of function value changes from past prior knowledge, and this probability is used to allocate offspring between the populations, effectively improving the utilization of computational resources. Building on these two applications of Bayes' theorem, in environmental selection and in population size adjustment, the BTCMO algorithm is proposed for constrained multi-objective optimization.

3. Proposed Algorithm

3.1. Framework

Under the framework of progressive evolution theory, CHTs play a pivotal role in the optimization process. However, due to the substantial heterogeneity across different types of CMOPs, a single CHT within a single algorithm struggles to achieve comprehensive and efficient optimization across diverse CMOP scenarios. Although multi-stage models and multi-swarm algorithms exhibit certain advantages, the former encounters challenges such as disrupted solution set diversity and premature convergence when handling CMOPs of varying difficulty levels, while the latter negatively impacts convergence due to excessive computational resource consumption. To address these issues, the BTCMO algorithm is proposed (Algorithm 1). The core components of BTCMO include an environment selection model and a population size adjustment strategy, both based on Bayes’ theorem. The environment selection model establishes prior knowledge using the objective function values and constraint violation values, integrating this knowledge to formulate an individual fitness evaluation method. The fitness values are subsequently transformed into selection probabilities via the softmax function, facilitating the screening of the auxiliary population. By dynamically adjusting the selection process based on prior knowledge, the algorithm enhances adaptability to various CMOPs. Meanwhile, the population size adjustment strategy leverages Bayes’ theorem to infer the probability of function value variations based on historical knowledge, allowing for an adaptive allocation of population sizes ( N 1 for the main population and N 2 for the auxiliary population), thereby improving computational resource efficiency. During execution, the auxiliary population is first selected through the Bayes-based environment selection model, assisting the main population in navigating complex infeasible regions. Simultaneously, the Bayes-based population size adjustment strategy dynamically modifies the population size according to the probability of function value changes, ensuring algorithmic effectiveness while minimizing unnecessary computational resource consumption. Ultimately, this facilitates the algorithm’s convergence to the CPF. Additionally, prior research [32] indicates that DE/current/best/1 exhibits superior convergence performance, whereas DE/rand/1 excels in promoting diversity. Therefore, considering both convergence and diversity, this study employs DE/current/best/1 to generate O 1 and DE/rand/1 to generate O 2 , thereby enhancing the efficiency of solving constrained multi-objective optimization problems.
Algorithm 1 Evolutionary process of BTCMO
Input: NP: population size; MaxFES: maximal number of fitness evaluations
Output: The feasible Pareto optimal solutions
 1: $P_1 \leftarrow$ Initialize $NP$ individuals;
 2: Evaluate $P_1$;
 3: $P_2 \leftarrow P_1$;
 4: $FES \leftarrow NP$;
 5: $N_1 \leftarrow NP$;
 6: $N_2 \leftarrow NP$;
 7: while $FES < MaxFES$ do
 8:   $O_1 \leftarrow$ Obtain $N_1$ offspring from $P_1$ and $P_2$ using the DE/current/best/1 operator;
 9:   $O_2 \leftarrow$ Obtain $N_2$ offspring from $P_2$ using the DE/rand/1 operator;
10:   $O = O_1 \cup O_2$;
11:   $P_1 \leftarrow$ Select $NP$ individuals from $P_1$ and $O$ using the CDP model [26];
12:   $P_2 \leftarrow$ Select $NP$ individuals from $P_2$ and $O$ using the auxiliary population selection model;
13:   $[N_1, N_2] \leftarrow$ Allocate the number of offspring for $P_1$ and $P_2$ through the population quantity adjustment strategy;
14: end while

3.2. Constrained Dominance Principle

A key distinction between constrained multi-objective optimization problems and unconstrained multi-objective optimization problems lies in the necessity of handling constraint violations in CMOPs. Solutions that fail to satisfy the constraints, referred to as infeasible solutions, are typically excluded from the final solution set. While a certain degree of suboptimal convergence or diversity may be acceptable, feasibility remains a fundamental requirement. However, the outright elimination of infeasible solutions may impede the search for the global optimum. To address this challenge, dual-population models have emerged as a promising approach for handling CMOPs. These models maintain a primary population dedicated to feasible solutions while employing an auxiliary population to explore infeasible regions, thereby enhancing both convergence and diversity. In this study, the main population adopts the Constraint Dominance Principle (CDP) for environmental selection. CDP, introduced by Deb et al. [26], is a widely used mechanism for managing infeasible solutions within the non-dominated sorting process of NSGA-II. Owing to its simplicity and effectiveness, CDP has been extensively integrated into various CMOEAs. Given two solutions, x and y, the dominance relationship based on CDP (denoted as $x \preceq y$) is established according to the following conditions:
  • If $\varphi(x) < \varphi(y)$, then x dominates y.
  • If $\varphi(x) = \varphi(y)$, then dominance follows Pareto dominance.
  • $\varphi(x)$ and $\varphi(y)$ denote the constraint violation values of solutions x and y, respectively. If two solutions exhibit identical constraint violations, Pareto dominance is determined as follows:
  • $\forall i \in \{1, 2, \ldots, M\}, \; f_i(x) \le f_i(y)$;
  • $\exists i \in \{1, 2, \ldots, M\}, \; f_i(x) < f_i(y)$.
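As a brief illustration, the Python sketch below encodes the conditions above in a single comparison routine; the function name and the example objective vectors are hypothetical and are not taken from the paper.

import numpy as np

def cdp_dominates(fx, fy, cv_x, cv_y):
    """Return True if solution x constraint-dominates solution y under the CDP.

    fx, fy: objective vectors of x and y; cv_x, cv_y: constraint violations phi(x), phi(y).
    """
    if cv_x < cv_y:                       # smaller constraint violation wins outright
        return True
    if cv_x > cv_y:
        return False
    # Equal violation: fall back to Pareto dominance on the objective vectors.
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

# Example: both solutions feasible (violation 0), so Pareto dominance decides.
print(cdp_dominates([0.2, 0.5], [0.3, 0.5], 0.0, 0.0))   # True
print(cdp_dominates([0.2, 0.6], [0.3, 0.5], 0.1, 0.0))   # False: x has the larger violation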

3.3. Auxiliary Environment Selection Model

The main population employs the CDP model to ensure convergence; however, this approach prioritizes feasibility, limiting exploration in infeasible regions during the search process. To address this, we integrate Bayes’ theorem to guide the auxiliary population in exploring infeasible domains, thereby enhancing solution diversity and convergence. Bayes’ theorem enables the integration of prior knowledge with observed data to infer a posterior distribution. Specifically, our strategy first normalizes the objective function and constraint violation values of each individual. Then, using Bayes’ theorem, we compute the posterior distributions of these values to assess individual fitness. Finally, selection probabilities are determined via the softmax function, guiding environmental selection within the auxiliary population.
Assuming the objective matrix is F, we normalize each objective using Equation (5) to obtain $\bar{F}$. For each individual, the average of its normalized objective values, computed via Equation (7), is $\xi_i$, and its constraint penalty value, computed via Equation (6), is $\eta_i$.
$\bar{F}_{ij} = \dfrac{\max(F_j) - F_{ij}}{\max(F_j) - \min(F_j)}$ (5)
$\eta_i = \dfrac{1}{L} \sum_{l=1}^{L} \max(C_{il}, 0)^2$ (6)
$\xi_i = \dfrac{1}{M} \sum_{j=1}^{M} \bar{F}_{ij}$ (7)
where i is the individual index, j is the objective index, and M is the number of objectives; $\xi_i$ is the fitness of individual i, and $\eta_i$ is its constraint penalty value.
Assuming that both the objective fitness and constraint penalty values follow a normal distribution, the prior distribution can be determined using the mean and standard deviation of the objective values and constraint violations. Bayes’ theorem can then be applied to calculate the posterior distribution of the objective fitness and constraint penalty value.
$p(\xi_i \mid \mu_\xi, \sigma_\xi) = \mathcal{N}(\xi_i \mid \mu_\xi, \sigma_\xi)$ (8)
$p(\eta_i \mid \mu_\eta, \sigma_\eta) = \mathcal{N}(\eta_i \mid \mu_\eta, \sigma_\eta)$ (9)
$\psi_\xi = \dfrac{1}{\sigma_\xi \sqrt{2\pi}} \exp\!\left( -\dfrac{(\xi_i - \mu_\xi)^2}{2\sigma_\xi^2} \right)$ (10)
$\psi_\eta = \dfrac{1}{\sigma_\eta \sqrt{2\pi}} \exp\!\left( -\dfrac{(\eta_i - \mu_\eta)^2}{2\sigma_\eta^2} \right)$ (11)
where μ and σ represent the mean and standard deviation, respectively.
A comprehensive fitness function is established by incorporating the objective fitness, constraint penalty value and the Bayesian posterior.
$G_i = \xi_i + \psi_\xi \cdot \psi_\eta$ (12)
$P_i = \dfrac{e^{G_i}}{\sum_{k=1}^{N} e^{G_k}}$ (13)
Finally, the softmax function is applied to convert the fitness values into probabilities. From the N individuals, NP individuals are randomly selected based on the calculated selection probabilities P.
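A minimal Python sketch of this auxiliary selection model, under the stated normality assumptions, is given below. The helper names, the small constants added to avoid division by zero, and the max-shift inside the softmax (a standard numerical-stability device not discussed in the paper) are implementation choices of this sketch rather than details prescribed by BTCMO.

import numpy as np

rng = np.random.default_rng(0)

def auxiliary_selection(F, C, NP):
    """Select NP individuals via the Bayes-based auxiliary model (Eqs. (5)-(13)).

    F: (N, M) objective matrix; C: (N, L) constraint values; NP: target size.
    Returns the indices of the selected individuals.
    """
    N, M = F.shape
    # Eq. (5): per-objective normalization (smaller objective value -> larger F_bar).
    F_bar = (F.max(axis=0) - F) / (F.max(axis=0) - F.min(axis=0) + 1e-12)
    xi = F_bar.mean(axis=1)                           # Eq. (7): objective fitness
    eta = np.mean(np.maximum(C, 0.0) ** 2, axis=1)    # Eq. (6): constraint penalty

    def gauss(v):                                     # Eqs. (10)-(11): Gaussian density
        mu, sigma = v.mean(), v.std() + 1e-12
        return np.exp(-(v - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    G = xi + gauss(xi) * gauss(eta)                   # Eq. (12): combined fitness
    P = np.exp(G - G.max())                           # Eq. (13): softmax, shifted for stability
    P /= P.sum()
    return rng.choice(N, size=min(NP, N), replace=False, p=P)

# Example on a random combined population of 200 candidates, 2 objectives, 3 constraints.
F = rng.random((200, 2)); C = rng.normal(-0.5, 1.0, (200, 3))
print(auxiliary_selection(F, C, NP=100).shape)        # (100,)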

3.4. Population Quantity Adjustment Strategy

In evolutionary algorithms, the dual population model can significantly enhance search efficiency; however, it often leads to high computational resource consumption. This strategy seeks to mitigate this issue by dynamically adjusting the sizes of the two populations, leveraging fitness assessment and Bayesian inference. The process begins by incorporating both objective values and constraint violations into the fitness evaluation of individuals, allowing the fitness function to more comprehensively reflect the quality of each individual. Subsequently, Bayesian inference is applied to estimate the likelihood of changes in an individual's fitness. Based on this estimated probability, the allocation ratio of individuals between the main population $P_1$ and the auxiliary population $P_2$ is dynamically adjusted. The goal of this dynamic adjustment is to maintain a balanced distribution between the two populations, ensuring that the search process does not miss the globally optimal solution while still enabling the deep exploration of local areas. As a result, the strategy improves the efficiency and stability of the evolutionary algorithm, minimizing unnecessary computational resource usage and enhancing overall algorithm performance.
Firstly, the total fitness of the population is obtained by summing the objective values and constraint violations for each individual, providing a measure of the overall quality of individuals in the population. The fitness of each individual is calculated using the formulas shown in Equations (14) and (15). Additionally, Bayesian inference is employed to update the probability distribution of fitness changes based on existing fitness data, serving as the foundation for subsequent adjustments. Assuming that the changes in population fitness follow a normal distribution, the fitness change probabilities for populations P 1 and P 2 are computed using the formulas presented in Equations (16) and (17). Based on these probabilities, the offspring scaling factor α is dynamically adjusted. The updated formula for this adjustment is shown in Equation (18).
$S_1 = \sum_{i=1}^{N_1} \left( \dfrac{1}{1 + F_{1i}} + \dfrac{1}{1 + C_{1i}} \right)$ (14)
$S_2 = \sum_{i=1}^{N_2} \left( \dfrac{1}{1 + F_{2i}} + \dfrac{1}{1 + C_{2i}} \right)$ (15)
$P(\Delta S_1) \sim \mathcal{N}(\mu_1, \sigma_1)$ (16)
$P(\Delta S_2) \sim \mathcal{N}(\mu_2, \sigma_2)$ (17)
$\alpha = 0.5 + \beta \cdot \left( P(\Delta S_1) - P(\Delta S_2) \right)$ (18)
where $F_{1i}$ and $C_{1i}$ represent the objective value and constraint violation of the i-th individual in population $P_1$, respectively, while $F_{2i}$ and $C_{2i}$ represent those of the i-th individual in population $P_2$. $\mu_1$ and $\mu_2$ are the mean fitness values of $P_1$ and $P_2$, respectively, and $\sigma_1$ and $\sigma_2$ are the corresponding standard deviations. $\alpha$ is the scaling factor; the constant 0.5 in the update formula centers the allocation between the two populations. $\beta$ is the adjustment coefficient that controls the magnitude of the scaling-factor update, and it is set to 0.1 in this paper. To ensure that $\alpha$ remains within a reasonable range, the following restriction is applied:
$\alpha = \max(0, \, \min(1, \, \alpha))$ (19)
Assuming the total population size is $N_{\mathrm{total}} = 2 \times NP$, the number of offspring for populations $P_1$ and $P_2$ can be calculated based on the updated scaling factor $\alpha$ using the following formulas:
$N_1^{\mathrm{new}} = \alpha \cdot N_{\mathrm{total}}$ (20)
$N_2^{\mathrm{new}} = (1 - \alpha) \cdot N_{\mathrm{total}}$ (21)
In summary, this strategy dynamically adjusts the number of offspring for populations P 1 and P 2 based on the fitness values and Bayesian inference results, as outlined in the previous steps and formulas. By reasonably adjusting the offspring count, the model optimizes computational resource usage during iterations (in terms of the number of evaluations). This dynamic adjustment maintains a balance between global exploration and local exploitation, thereby enhancing the efficiency and stability of evolutionary algorithms.
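The sketch below illustrates one possible reading of this strategy in Python: the aggregate scores $S_1$ and $S_2$ from Equations (14) and (15) are assumed to be recorded every generation, the probability of a positive fitness change is estimated from a normal model fitted to the past differences, and the offspring budget is then split according to Equations (18)-(21). The function names and the interpretation of $P(\Delta S)$ as a probability of improvement are assumptions of this sketch, not specifications given in the paper.

import numpy as np
from math import erf, sqrt

def adjust_offspring_sizes(hist_S1, hist_S2, NP, beta=0.1):
    """Reallocate offspring between P1 and P2 following Eqs. (14)-(21).

    hist_S1, hist_S2: histories of the aggregate scores S1, S2 over past generations.
    Returns (N1_new, N2_new) with N1_new + N2_new = 2 * NP.
    """
    def prob_improvement(history):
        # One reading of Eqs. (16)-(17): probability that the latest change in S is
        # positive under a normal model fitted to the past per-generation differences.
        deltas = np.diff(np.asarray(history, dtype=float))
        if len(deltas) == 0:
            return 0.5
        mu, sigma = deltas.mean(), deltas.std() + 1e-12
        return 0.5 * (1.0 + erf(mu / (sigma * sqrt(2.0))))   # P(delta_S > 0)

    alpha = 0.5 + beta * (prob_improvement(hist_S1) - prob_improvement(hist_S2))  # Eq. (18)
    alpha = max(0.0, min(1.0, alpha))                         # Eq. (19)
    n_total = 2 * NP                                          # total offspring budget
    n1 = int(round(alpha * n_total))                          # Eq. (20)
    return n1, n_total - n1                                   # Eq. (21)

# Example: P1's score keeps improving while P2's has stalled, so P1 receives more offspring.
print(adjust_offspring_sizes([10.0, 11.0, 12.5], [8.0, 8.1, 8.05], NP=100))   # (107, 93)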

3.5. Search Algorithm

Previous studies [32,33] have demonstrated that the DE/current/best/1 operator exhibits strong convergence performance when solving multi-objective optimization problems, while the DE/rand/1 operator is more effective in maintaining solution diversity. During the coevolution of the two populations, $P_1$ is responsible for convergence, whereas $P_2$ focuses on maintaining solution diversity. Consequently, $P_1$ generates offspring $O_1$ using the DE/current/best/1 operator, while $P_2$ generates offspring $O_2$ using the DE/rand/1 operator. The two operators are defined as follows:
$\mathrm{DE/current/best/1:} \quad O = P_{r1} + F \times \left( P_{best} - P_{r1} + P_{r2} - P_{r3} \right)$ (22)
$\mathrm{DE/rand/1:} \quad O = P_{r1} + F \times \left( P_{r2} - P_{r3} \right)$ (23)
In this study, the parent selection for the DE/current/best/1 operator follows these rules: $P_{r1}$ and $P_{r3}$ are randomly selected from $P_1$, $P_{r2}$ is randomly selected from $P_2$, and $P_{best}$ consists of the top 10% of individuals from $P_1$, selected using the CDP model. For the DE/rand/1 operator, all three parents are randomly selected from $P_2$. The offspring populations $O_1$ and $O_2$ have sizes $N_1$ and $N_2$, respectively. Additionally, the scaling factor $F$ for both operators is set to $\{0.6, 0.8, 1.0\}$, while the crossover rate $CR$ is set to $\{0.1, 0.2, 1.0\}$, following the settings in reference [32].
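A minimal Python sketch of the two mutation operators with the parent-selection rules above is shown next; the binomial crossover with the listed CR values and the random draw of F from its candidate set are omitted for brevity, and the array names are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def de_current_best_1(P1, P2, best_pool, F=0.8):
    """DE/current/best/1 per Eq. (22): Pr1, Pr3 from P1, Pr2 from P2,
    and Pbest drawn from the top-ranked individuals of P1."""
    r1, r3 = P1[rng.integers(len(P1))], P1[rng.integers(len(P1))]
    r2 = P2[rng.integers(len(P2))]
    best = best_pool[rng.integers(len(best_pool))]
    return r1 + F * (best - r1 + r2 - r3)

def de_rand_1(P2, F=0.8):
    """DE/rand/1 per Eq. (23): all three parents drawn from P2."""
    idx = rng.choice(len(P2), size=3, replace=False)
    r1, r2, r3 = P2[idx]
    return r1 + F * (r2 - r3)

# Example with 2-D decision vectors; binomial crossover with CR would follow in practice.
P1 = rng.random((100, 2)); P2 = rng.random((100, 2))
best_pool = P1[:10]                      # placeholder for the CDP-ranked top 10% of P1
print(de_current_best_1(P1, P2, best_pool), de_rand_1(P2))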

3.6. Complexity Analysis

In the BTCMO algorithm, the primary source of computational complexity arises from the evolutionary operators, the CDP model, and the fast non-dominated sorting process. The complexity associated with the evolutionary operators, namely intermediate parent selection and offspring generation, is $O(NP)$ and $O(D \times NP)$, respectively. Both the CDP model and the fast non-dominated sorting process have a complexity of $O(M \times NP^2)$. As a result, the overall complexity of the BTCMO algorithm is approximately $O(NP^2)$, which is consistent with the complexity of most CMOEAs.

4. Experimental Setup

To evaluate the BTCMO algorithm, this study employs three benchmark test function sets as follows: CF [34], LIR-CMOP [35], and MW [36]. The CF suite consists of 10 test functions, whereas both LIR-CMOP and MW each contain 14 test functions. Additionally, the following three evaluation metrics are used to assess feasibility, diversity, and convergence: Inverted Generational Distance (IGD) [37], Hypervolume (HV) [38], and Feasible Solutions Ratio (FSR) [39].
To evaluate the performance of BTCMO, this study benchmarks it against the following seven state-of-the-art CMOEAs: TSTI [21], BiCo [40], CTAEA [41], CTSEA [42], C3M [43], CAEAD [44], and CMOSMA [45]. All algorithms were assessed over 30 independent runs. The population size was set to 100, and the maximum function evaluations were fixed at 100,000. The experiments were conducted within the PlatEMO framework [46], developed by Tian et al.

5. Experimental Analysis

Table 1 and Table 2 summarize the statistical outcomes of 30 independent trials for IGD and HV. These evaluations span 38 benchmark test functions and 12 real-world problems, respectively. The values enclosed in parentheses denote the standard deviation of IGD or HV. Bold entries in the tables indicate the best metric value obtained among the compared algorithms. The Wilcoxon rank-sum test results are represented by ‘+’, ‘-’, and ‘=’, where ‘+’ indicates that the compared algorithm significantly surpasses BTCMO, ‘-’ signifies a notably inferior performance, and ‘=’ denotes no statistically significant difference, all determined at a 0.05 significance level. If a population consistently fails to produce feasible solutions across all trials, IGD and HV are reported as NaN, with the values in parentheses reflecting the FSR.

5.1. Comparison on CF Test Set

From Table 1 and Table 2, it is evident that the BTCMO algorithm surpasses the other approaches in terms of IGD and HV across nine test functions (CF2–CF10). This advantage arises from its Bayesian environment selection model, which enhances search efficiency near the CPF, leading to improved convergence and distribution. Conversely, the CAEAD algorithm achieves the best IGD and HV performance solely on CF1, benefiting from its multi-task constrained objective framework. However, while this framework excels in specific cases, it lacks broad adaptability. Furthermore, both BTCMO and CMOSMA consistently generate feasible solutions across the CF test set, whereas the other six compared algorithms fail to do so for all functions. Although CMOSMA ensures feasibility, its IGD and HV performance remains significantly lower than that of BTCMO. Overall, BTCMO exhibits superior and well-balanced convergence and diversity across various test problems while maintaining feasibility, outperforming the seven competing algorithms.

5.2. Comparison on the LIR-CMOP Test Set

The 14 LIR-CMOP functions pose substantial challenges for most existing algorithms due to their small feasible regions. Consequently, the LIR-CMOP test set primarily serves to assess algorithm performance in environments characterized by large infeasible regions. As shown in Table 1 and Table 2, BTCMO achieves the best IGD and HV values on LIR-CMOP1, LIR-CMOP3, LIR-CMOP5, LIR-CMOP6, and LIR-CMOP9 to LIR-CMOP14. This superiority stems from BTCMO’s strong capability in handling different constraint complexities, including narrow feasible regions (LIR-CMOP1 and LIR-CMOP3), extensive infeasible regions (LIR-CMOP5 and LIR-CMOP6), and disconnected feasible regions (LIR-CMOP9 to LIR-CMOP14). CAEAD attains the best IGD and HV results on LIR-CMOP2 and LIR-CMOP4, as these functions, like LIR-CMOP1 to LIR-CMOP4, contain extremely narrow feasible regions. By utilizing unconstrained populations, CAEAD effectively explores infeasible regions near the CPF. Additionally, C3M achieves the optimal IGD and HV values on LIR-CMOP7 and LIR-CMOP8, whereas the remaining five algorithms fail to secure any optimal results. This is likely due to biases in their knowledge transfer mechanisms, which hinder performance. Overall, BTCMO attains the highest number of optimal solutions within the LIR-CMOP test set, highlighting its superiority. These findings reinforce its strong capabilities in convergence and diversity optimization, establishing BTCMO as a highly competitive algorithm.

5.3. Comparison on MW Test Set

The MW test set serves as a general benchmark, featuring feasible regions with diverse characteristics. Some regions are disconnected, others are separated by large infeasible areas, and some are located far from the unconstrained Pareto front. Table 1 presents the IGD results of eight algorithms from MW1 to MW14. BTCMO achieves the best IGD values on 12 functions and delivers competitive results on the remaining two, demonstrating superior overall performance on the MW test set. The CTSEA algorithm attains the best IGD values on two functions, likely due to the relatively large feasible regions in the MW test set, which are easier to handle. Regarding HV, Table 2 shows that BTCMO outperforms the other algorithms on 12 test functions while maintaining competitive performance on the remaining two. Specifically, CTSEA and CMOSMA achieve the highest HV values on MW11 and MW7, respectively. The remaining five algorithms fail to obtain the best results on any of the 14 test functions, likely due to limitations in their frameworks, leading to poor solution distribution. From the perspective of feasible solution rates, both BTCMO and CMOSMA consistently generate feasible solutions, consistent with their performance on the CF test set. This further validates the strong feasibility of BTCMO. Additionally, BTCMO achieves the highest number of optimal IGD and HV values across the MW test set, underscoring its superior convergence and distribution capabilities.
Overall, the results from the three benchmark test sets and real-world problems demonstrate that BTCMO is a highly competitive algorithm in terms of convergence, diversity, and feasibility. As shown in Figure 1, the Friedman test ranks the CMOSMA algorithm second in overall average ranking; however, it still lags behind BTCMO. This may be attributed to the insufficient utilization of historical experience in auxiliary tasks, leading to lower convergence performance compared to BTCMO. Across different types of problems, BTCMO consistently outperforms its competitors, further validating its robustness. While the CAEAD and C3M algorithms exhibit strong performance on the LIR-CMOP test set, BTCMO significantly outperforms both on the other two test sets, as well as on the real-world problems. Furthermore, from Table 1 and Table 2, it is evident that the number of cases where BTCMO outperforms competing algorithms is substantially higher than the number of cases where competing algorithms surpass BTCMO. This indicates that BTCMO achieves a well-balanced and superior performance in handling CMOPs. The comprehensive evaluation from the perspectives of convergence, diversity, and feasibility strongly supports this conclusion.

5.4. Convergence Speed Analysis

Figure 2 depicts the IGD convergence of BTCMO and seven comparative algorithms after 100,000 evaluations, considering only the IGD values of feasible solutions. The proposed BTCMO algorithm exhibits the fastest convergence in problems where the CPF lies on the boundary (e.g., CF3) and in those characterized by large infeasible regions (e.g., LIR-CMOP6). This advantage stems from its auxiliary population, which facilitates CPF approximation from both feasible and infeasible regions within the search space. Moreover, BTCMO demonstrates superior convergence speed in cases where the CPF comprises multiple disconnected segments, such as CF9, MW6, LIR-CMOP9, and LIR-CMOP11. The synergy between the main and auxiliary populations enables BTCMO to efficiently explore the search space and construct a well-distributed solution set across the entire CPF, significantly enhancing convergence speed. Compared to the seven benchmark algorithms, BTCMO consistently achieves rapid convergence as the number of function evaluations increases. Although its initial convergence rate is not the fastest for CF8, MW7, and MW14, BTCMO quickly reaches the Pareto front once the number of evaluations surpasses 40,000, underscoring its strong ability to escape local optima. While CMOSMA demonstrates relatively fast convergence compared to the other algorithms, it still lags behind BTCMO. Overall, BTCMO outperforms all competing approaches in both convergence speed and final convergence quality.

5.5. Diversity Analysis

Figure 3, Figure 4 and Figure 5 illustrate the population distribution of BTCMO and seven comparative algorithms across nine different test functions. Each algorithm operates with a population size of 100 and a maximum evaluation budget of 100,000. From these visualizations, BTCMO consistently achieves the most well-distributed solutions across all nine functions, whereas BiCO, C3M, and CAEAD exhibit poor performance in both convergence and diversity. For example, in LIR-CMOP6, CTAEA fails to reach the leading region, while in LIR-CMOP9, it struggles to explore smaller feasible regions. This limitation stems from inadequate constraint relaxation, which restricts the algorithm’s ability to escape local optima. Similar shortcomings are observed in CTSEA and TSTI. In CMOPs, properly balancing constraint relaxation is essential in enabling the thorough exploration of the unconstrained feasible domain and ensuring the discovery of all small feasible regions. CMOSMA demonstrates competitive convergence and diversity, achieving performance comparable to BTCMO on CF6, LIR-CMOP13, MW8, MW10, and MW14. However, on CF2, CF4, LIR-CMOP6, and LIR-CMOP9, its convergence lags significantly behind BTCMO. This discrepancy likely arises from its sensitivity to infeasible barriers, which impedes effective progression toward optimal solutions. Overall, BTCMO exhibits a more balanced and superior performance than the other algorithms in terms of both convergence and diversity.

5.6. Real-World Problem Testing Results

To thoroughly assess the performance of BTCMO, we extended our comparative experiments beyond standard test sets to include real-world problems. Specifically, we selected 12 benchmark problems from RWMOP [47], with detailed descriptions provided in Table 3. For consistency, all experiments were conducted with a population size of 100 and a maximum evaluation budget of 100,000, with each algorithm tested over 30 independent runs. The results are summarized in Table 4, where the best-performing outcomes among all compared algorithms are highlighted in bold. The results indicate that BTCMO outperforms the competing algorithms in most cases. Except for RWMOP6, BTCMO achieves the best performance across all tested problems, demonstrating strong convergence capabilities. Notably, in RWMOP8, only CAEAD and BTCMO successfully identified feasible solutions, while in RWMOP10 and RWMOP12, feasibility was achieved solely by TSTI and BTCMO. This underscores BTCMO’s robust exploratory ability, which enhances both feasibility and convergence in real-world constrained optimization problems.

5.7. Statistics and Testing

Figure 1 shows the average rankings of IGD and HV for eight algorithms across all test functions, with BTCMO consistently ranking first. Although CMOSMA ranks second on average, it still falls behind BTCMO, while the remaining six algorithms show significantly poorer performance. Table 5 and Table 6 summarize the Wilcoxon signed-rank test results for BTCMO's IGD and HV compared to the seven other algorithms. Generally, a larger difference between $R^+$ and $R^-$ indicates a more pronounced advantage of BTCMO over the respective algorithm. The results clearly demonstrate that the $R^-$ values are considerably smaller than the $R^+$ values, further highlighting BTCMO's superior performance. Moreover, all p-values are below 0.05, validating that BTCMO's advantage is statistically significant. Based on this comprehensive statistical analysis, it can be concluded that BTCMO excels in both convergence and diversity.

6. Conclusions

This study leverages Bayes’ theorem to incorporate prior knowledge into population evolution and introduces BTCMO, a Bayesian-based evolutionary framework for CMOPs. In BTCMO, a Bayesian-based environment selection model is proposed to enhance the adaptability of CHTs. This model constructs prior knowledge based on objective function values and constraint violations, integrating them to define individual fitness. The calculated fitness is then transformed into selection probabilities using the softmax function, which guides the screening of the auxiliary population. By dynamically adjusting the selection process based on prior knowledge, BTCMO significantly improves adaptability across various CMOPs. To address the high computational cost of dual-population frameworks, Bayes’ theorem is also employed for adaptive population size regulation. A Bayesian-based population size adjustment strategy is designed, where historical prior knowledge is used to estimate the probability of objective function value changes. Based on this probability, offspring allocation is adjusted accordingly, optimizing computational resource utilization. A comprehensive comparative study was conducted on BTCMO, using 38 benchmark test functions and 12 real-world problems. The results demonstrate that BTCMO achieves superior performance in convergence, diversity, and feasibility, validating its effectiveness in solving CMOPs.
Although BTCMO can generate high-quality solution sets, its performance on the MW test functions is not particularly outstanding, as observed from the 38 benchmark tests. Additionally, the diversity of BTCMO in the LIR-CMOP test functions requires further improvement. This limitation arises from certain deficiencies in the algorithm’s CHTs. Therefore, future research will focus on designing more effective constraint-handling strategies to enhance BTCMO’s performance. Nonetheless, BTCMO demonstrates exceptional performance across 38 test functions and 12 real-world problems, establishing itself as a highly competitive algorithm in evolutionary computing.

Author Contributions

Conceptualization, S.Z. and Y.L.; methodology, S.Z.; software, Y.L.; validation, H.J., S.Z., and Y.L.; formal analysis, Y.L.; investigation, S.Z.; resources, Q.S.; data curation, Y.L.; writing—original draft preparation, S.Z.; writing—review and editing, Q.S.; visualization, H.J.; supervision, H.J.; project administration, H.J.; funding acquisition, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Guiding Science and Technology Projects in Sanming City (2023-G-5).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors would like to thank the support of Fujian Key Lab of Agriculture IOT Application, IOT Application Engineering Research Center of Fujian Province Colleges and Universities.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, Z.; Wang, Y. Shift-Based Penalty for Evolutionary Constrained Multiobjective Optimization and Its Application. IEEE Trans. Cybern. 2023, 53, 18–30. [Google Scholar] [CrossRef] [PubMed]
  2. Siddall, J.N. Optimal Engineering Design: Principles and Applications; CRC Press: Boca Raton, FL, USA, 1982. [Google Scholar]
  3. Jozefowiez, N.; Semet, F.; Talbi, E.G. Multi-objective vehicle routing problems. Eur. J. Oper. Res. 2008, 189, 293–309. [Google Scholar] [CrossRef]
  4. Ray, T.; Tai, K.; Seow, K. Multiobjective design optimization by an evolutionary algorithm. Eng. Optim. 2001, 33, 399–424. [Google Scholar] [CrossRef]
  5. Zhang, Z.; Zhang, H.; Tian, Y.; Li, C.; Yue, D. Cooperative constrained multi-objective dual-population evolutionary algorithm for optimal dispatching of wind-power integrated power system. Swarm Evol. Comput. 2024, 87, 101525. [Google Scholar] [CrossRef]
  6. Qiao, K.; Liang, J.; Yu, K.; Ban, X.; Yue, C.; Qu, B.; Suganthan, P.N. Constraints Separation Based Evolutionary Multitasking for Constrained Multi-Objective Optimization Problems. IEEE/CAA J. Autom. Sin. 2024, 11, 1819–1835. [Google Scholar] [CrossRef]
  7. Ming, F.; Gong, W.; Zhen, H.; Wang, L.; Gao, L. Constrained multi-objective optimization evolutionary algorithm for real-world continuous mechanical design problems. Eng. Appl. Artif. Intell. 2024, 135, 108673. [Google Scholar] [CrossRef]
  8. Zhang, W.; Yang, J.; Li, G.; Zhang, W.; Yen, G.G. Manifold-assisted coevolutionary algorithm for constrained multi-objective optimization. Swarm Evol. Comput. 2024, 91, 101717. [Google Scholar] [CrossRef]
  9. Qiao, K.; Chen, Z.; Qu, B.; Yu, K.; Yue, C.; Chen, K.; Liang, J. A dual-population evolutionary algorithm based on dynamic constraint processing and resources allocation for constrained multi-objective. Expert Syst. Appl. 2024, 238, 121707. [Google Scholar] [CrossRef]
  10. He, Z.; Liu, H. A dual-population auxiliary multiobjective coevolutionary algorithm for constrained multiobjective optimization problems. Appl. Soft Comput. 2024, 163, 111827. [Google Scholar] [CrossRef]
  11. Chu, X.; Ming, F.; Gong, W. Competitive Multitasking for Computational Resource Allocation in Evolutionary Constrained Multi-Objective Optimization. IEEE Trans. Evol. Comput. 2024. [Google Scholar] [CrossRef]
  12. Wang, B.C.; Li, H.X.; Li, J.P.; Wang, Y. Composite Differential Evolution for Constrained Evolutionary Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1482–1495. [Google Scholar] [CrossRef]
  13. Yuan, J.; Liu, H.L.; Ong, Y.S.; He, Z. Indicator-Based Evolutionary Algorithm for Solving Constrained Multiobjective Optimization Problems. IEEE Trans. Evol. Comput. 2022, 26, 379–391. [Google Scholar] [CrossRef]
  14. Jiao, R.; Xue, B.; Zhang, M. A Multiform Optimization Framework for Constrained Multiobjective Optimization. IEEE Trans. Cybern. 2023, 53, 5165–5177. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, Y.; Tian, Y.; Jiang, H.; Zhang, X.; Jin, Y. Design and analysis of helper-problem-assisted evolutionary algorithm for constrained multiobjective optimization. Inf. Sci. 2023, 648, 119547. [Google Scholar] [CrossRef]
  16. Zuo, M.; Xue, Y. Population Feasibility State Guided Autonomous Constrained Multi-Objective Evolutionary Optimization. Mathematics 2024, 12, 913. [Google Scholar] [CrossRef]
Figure 1. Average ranking of 8 algorithms in the Friedman test for IGD and HV across all test functions.
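For readers who wish to reproduce this style of ranking, the following is a minimal sketch (not the code used in this study) of how average Friedman ranks over a problems-by-algorithms matrix of IGD values could be computed with NumPy and SciPy; the matrix contents are illustrative placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

# Illustrative matrix of mean IGD values (rows: test problems,
# columns: algorithms; lower is better). Placeholder numbers only.
igd = np.array([
    [0.051, 0.039, 0.068, 0.007],
    [0.173, 0.146, 0.150, 0.053],
    [0.556, 0.487, 0.430, 0.093],
    [0.509, 0.418, 0.456, 0.143],
    [0.629, 0.628, 0.606, 0.297],
])

# Rank the algorithms on each problem (rank 1 = best IGD), then average
# the ranks per column to obtain the Friedman average ranking.
ranks = np.apply_along_axis(rankdata, 1, igd)
print("Average Friedman ranks:", ranks.mean(axis=0))

# Overall significance test across the algorithms.
stat, p = friedmanchisquare(*igd.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```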
Figure 2. IGD convergence graph of 8 algorithms under 100,000 evaluations.
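Figure 2 plots IGD against the evaluation budget. As a reference for the metric itself, below is a minimal sketch of the standard IGD computation (the mean distance from each sampled reference-front point to its nearest obtained objective vector); the reference front and population used here are illustrative placeholders, not data from the paper.

```python
import numpy as np

def igd(reference_front: np.ndarray, obtained: np.ndarray) -> float:
    """Inverted Generational Distance: average, over the reference points,
    of the Euclidean distance to the closest obtained objective vector."""
    # Pairwise distances, shape (|reference|, |obtained|).
    diffs = reference_front[:, None, :] - obtained[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1).mean()

# Illustrative two-objective example.
ref = np.column_stack([np.linspace(0, 1, 100), 1 - np.linspace(0, 1, 100)])
pop = np.array([[0.1, 0.92], [0.5, 0.55], [0.9, 0.12]])
print(f"IGD = {igd(ref, pop):.4f}")
```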
Figure 3. Population distribution of the 8 algorithms on the CF test problems.
Figure 4. Population distribution of the 8 algorithms on the LIR-CMOP test problems.
Figure 5. Population distribution of the 8 algorithms on the MW test problems.
Table 1. IGD experimental results of BTCMO and 7 comparative algorithms.
Problem    TSTI    BiCo    CTAEA    CTSEA    C3M    CAEAD    CMOSMA    BTCMO
CF15.08  × 10 2 (2.90  × 10 3 )+3.91  × 10 2 (2.18  × 10 3 )+6.78  × 10 2 (2.09  × 10 3 )+2.10  × 10 2 (2.97  × 10 3 )+4.69  × 10 3 (6.63  × 10 4 )-3.99  × 10 3   (7.40  × 10 4 )-5.95  × 10 3 (3.08  × 10 4 )-7.35  × 10 3 (4.81  × 10 4 )
CF21.73  × 10 1 (1.79  × 10 2 )+1.46  × 10 1 (2.60  × 10 2 )+1.50  × 10 1 (3.30  × 10 2 )+1.60  × 10 1 (2.77  × 10 2 )+6.28  × 10 2 (4.66  × 10 3 )+7.09  × 10 2 (4.81  × 10 3 )+1.05  × 10 1 (1.36  × 10 2 )+5.34  × 10 2   (4.65  × 10 3 )
CF35.56  × 10 1 (1.50  × 10 1 )+4.87  × 10 1 (1.12  × 10 1 )+4.30  × 10 1 (1.32  × 10 1 )+4.23  × 10 1 (1.06  × 10 1 )+4.71  × 10 1 (8.79  × 10 2 )+5.19  × 10 1 (3.22  × 10 2 )+3.24  × 10 1 (6.28  × 10 2 )+9.27  × 10 2   (2.09  × 10 2 )
CF45.09  × 10 1 (7.68  × 10 2 )+4.18  × 10 1 (7.75  × 10 2 )+4.56  × 10 1 (6.78  × 10 2 )+4.24  × 10 1 (9.02  × 10 2 )+3.73  × 10 1 (1.07  × 10 1 )+3.00  × 10 1 (5.06  × 10 2 )+2.68  × 10 1 (8.52  × 10 2 )+1.43  × 10 1   (2.24  × 10 2 )
CF56.29  × 10 1 (3.25  × 10 2 )+6.28  × 10 1 (2.61  × 10 2 )+6.06  × 10 1 (3.77  × 10 2 )+6.30  × 10 1 (2.43  × 10 2 )+3.93 (5.96  × 10 1 )+5.44 (4.55  × 10 1 )+5.96  × 10 1 (5.26  × 10 2 )+2.97  × 10 1   (3.75  × 10 2 )
CF64.54  × 10 1 (3.35  × 10 2 )+4.67  × 10 1 (2.09  × 10 2 )+4.22  × 10 1 (2.30  × 10 2 )+4.06  × 10 1 (3.06  × 10 2 )+2.22  × 10 1 (2.68  × 10 2 )+2.40  × 10 1 (1.54  × 10 2 )+3.69  × 10 1 (4.13  × 10 2 )+1.13  × 10 1   (1.67  × 10 2 )
CF76.33  × 10 1 (1.12  × 10 1 )+5.71  × 10 1 (9.79  × 10 2 )+6.49  × 10 1 (1.08  × 10 1 )+6.17  × 10 1 (7.04  × 10 2 )+6.56 (9.48  × 10 1 )+7.63 (8.72  × 10 1 )+5.45  × 10 1 (9.32  × 10 2 )+2.55  × 10 1   (4.23  × 10 2 )
CF8NaN (0.00%)+NaN (0.00%)+6.53  × 10 1 (3.94  × 10 2 )+4.40  × 10 1 (3.31  × 10 2 )+7.72  × 10 1 (7.93  × 10 2 )+8.30  × 10 1 (9.33  × 10 2 )+4.23  × 10 1 (2.49  × 10 2 )+1.61  × 10 1   (8.58  × 10 3 )
CF98.59  × 10 1 (1.46  × 10 1 )+9.30  × 10 1 (3.20  × 10 2 )+2.80  × 10 1 (2.09  × 10 2 )+1.97  × 10 1 (4.79  × 10 3 )+3.19  × 10 1 (3.42  × 10 2 )+3.46  × 10 1 (5.35  × 10 2 )+2.12  × 10 1 (7.33  × 10 3 )+8.77  × 10 2   (2.90  × 10 3 )
CF10NaN (0.00%)+NaN (0.00%)+NaN (73.99%)+NaN (93.33%)+NaN (0.00%)+NaN (0.00%)+4.35  × 10 1 (4.46  × 10 2 )+1.31  × 10 1   (1.94  × 10 3 )
+/-/=    10/0/0    10/0/0    10/0/0    10/0/0    9/1/0    9/1/0    9/1/0
LIR-CMOP12.53  × 10 1 (1.59  × 10 2 )+2.46  × 10 1 (1.50  × 10 2 )+NaN (0.20%)+3.88  × 10 1 (2.83  × 10 2 )+3.02  × 10 1 (2.59  × 10 2 )+9.13  × 10 2 (1.53  × 10 2 )+3.27  × 10 1 (1.69  × 10 2 )+6.71  × 10 2   (6.90  × 10 3 )
LIR-CMOP22.23  × 10 1 (1.14  × 10 2 )+2.13  × 10 1 (1.00  × 10 2 )+NaN (1.20%)+3.22  × 10 1 (1.93  × 10 2 )+2.87  × 10 1 (3.15  × 10 2 )+3.81  × 10 2   (9.93  × 10 3 )-2.67  × 10 1 (2.49  × 10 2 )+6.78  × 10 2 (5.01  × 10 3 )
LIR-CMOP32.70  × 10 1 (1.90  × 10 2 )+2.71  × 10 1 (6.18  × 10 3 )+NaN (0.00%)+3.55  × 10 1 (1.67  × 10 2 )+NaN (96.67%)+7.22  × 10 2 (1.48  × 10 2 )=3.28  × 10 1 (2.29  × 10 2 )+7.03  × 10 2   (7.39  × 10 3 )
LIR-CMOP42.57  × 10 1 (1.80  × 10 2 )+2.56  × 10 1 (6.42  × 10 3 )+NaN (0.00%)+3.32  × 10 1 (1.55  × 10 2 )+3.41  × 10 1 (4.69  × 10 2 )+8.07  × 10 2   (1.50  × 10 2 )-3.13  × 10 1 (9.47  × 10 3 )+9.50  × 10 2 (9.60  × 10 3 )
LIR-CMOP51.23 (1.96  × 10 3 )+1.23 (1.33  × 10 3 )+1.26 (6.89  × 10 2 )+3.76  × 10 1 (1.76  × 10 2 )+6.75  × 10 1 (4.79  × 10 1 )+1.21 (1.32  × 10 2 )+3.42  × 10 1 (2.24  × 10 2 )+1.87  × 10 1   (3.24  × 10 2 )
LIR-CMOP61.35 (1.27  × 10 4 )+1.35 (9.39  × 10 5 )+1.35 (8.05  × 10 4 )+4.53  × 10 1 (4.48  × 10 2 )+3.40  × 10 1 (3.64  × 10 1 )=1.05 (3.71  × 10 1 )+3.84  × 10 1 (3.64  × 10 2 )+2.00  × 10 1   (4.18  × 10 2 )
LIR-CMOP72.42  × 10 1 (2.73  × 10 1 )+1.68 (3.31  × 10 4 )+1.02 (6.03  × 10 1 )+1.69  × 10 1 (1.61  × 10 2 )+4.86  × 10 2   (4.52  × 10 2 )-1.27  × 10 1 (2.72  × 10 2 )+1.42  × 10 1 (1.41  × 10 2 )+8.98  × 10 2 (9.59  × 10 3 )
LIR-CMOP81.21 (6.29  × 10 1 )+1.68 (2.09  × 10 4 )+1.69 (1.13  × 10 3 )+2.80  × 10 1 (1.92  × 10 2 )+4.53  × 10 2   (4.81  × 10 2 )-1.66  × 10 1 (4.30  × 10 2 )+2.10  × 10 1 (2.16  × 10 2 )+1.20  × 10 1 (1.64  × 10 2 )
LIR-CMOP96.72  × 10 1 (5.34  × 10 2 )+1.08 (3.19  × 10 2 )+6.65  × 10 1 (9.28  × 10 2 )+9.51  × 10 1 (7.89  × 10 2 )+4.89  × 10 1 (3.35  × 10 2 )+5.29  × 10 1 (1.87  × 10 2 )+6.46  × 10 1 (2.41  × 10 2 )+9.40  × 10 2   (1.42  × 10 2 )
LIR-CMOP109.07  × 10 1 (6.18  × 10 2 )+1.05 (1.55  × 10 2 )+4.40  × 10 1 (1.77  × 10 2 )+5.75  × 10 1 (6.12  × 10 2 )+2.94  × 10 1 (9.10  × 10 2 )+3.56  × 10 1 (7.10  × 10 2 )+2.80  × 10 1 (5.58  × 10 2 )+1.49  × 10 2   (1.94  × 10 3 )
LIR-CMOP117.66  × 10 1 (1.08  × 10 2 )+9.70  × 10 1 (4.39  × 10 2 )+3.18  × 10 1 (1.35  × 10 2 )+4.97  × 10 1 (8.25  × 10 2 )+1.73  × 10 1 (7.07  × 10 2 )+2.54  × 10 1 (5.57  × 10 2 )+1.22  × 10 1 (2.69  × 10 2 )+7.25  × 10 3   (6.36  × 10 4 )
LIR-CMOP124.16 × 10−1 (8.95 × 10−2)+9.69 × 10−1 (3.98 × 10−2)+4.83 × 10−1 (8.94 × 10−2)+5.95 × 10−1 (4.45 × 10−2)+1.78 × 10−1 (2.74 × 10−2)+2.81 × 10−1 (1.59 × 10−2)+3.19 × 10−1 (4.81 × 10−2)+3.22 × 10−2 (4.79 × 10−3)
LIR-CMOP131.32 (6.28  × 10 4 )+1.32 (1.42  × 10 3 )+1.13  × 10 1 (1.27  × 10 3 )+9.48  × 10 2 (6.83  × 10 4 )+3.62  × 10 1 (3.38  × 10 1 )+3.38  × 10 1 (2.63  × 10 1 )+9.61  × 10 2 (5.51  × 10 4 )+9.35  × 10 2   (9.96  × 10 5 )
LIR-CMOP141.27 (1.01  × 10 3 )+1.28 (1.23  × 10 3 )+1.13  × 10 1 (9.52  × 10 4 )+9.67  × 10 2 (3.36  × 10 4 )+1.98  × 10 1 (2.09  × 10 1 )+2.59  × 10 1 (7.83  × 10 2 )+9.79  × 10 2 (5.39  × 10 4 )+9.60  × 10 2   (1.32  × 10 4 )
+/-/=    14/0/0    14/0/0    14/0/0    14/0/0    11/2/1    11/2/1    14/0/0
MW1NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+2.20  × 10 3   (7.37  × 10 5 )
MW21.45  × 10 1 (4.89  × 10 2 )+3.97  × 10 2 (6.51  × 10 3 )+5.77  × 10 2 (2.64  × 10 2 )+7.87  × 10 2 (3.87  × 10 2 )+NaN (0.00%)+NaN (0.00%)+3.50  × 10 2 (9.18  × 10 3 )+1.90  × 10 2   (1.64  × 10 3 )
MW31.38  × 10 1 (9.39  × 10 2 )+8.89  × 10 3 (6.85  × 10 4 )+7.21  × 10 3 (3.55  × 10 4 )+7.30  × 10 3 (7.88  × 10 4 )+1.32  × 10 2 (2.34  × 10 3 )+1.63  × 10 2 (1.86  × 10 3 )+6.80  × 10 3 (4.48  × 10 4 )+4.43  × 10 3   (6.33  × 10 5 )
MW4NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+4.57  × 10 2   (1.34  × 10 2 )
MW5NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+5.41  × 10 3   (3.47  × 10 3 )
MW65.45  × 10 1 (8.89  × 10 2 )+4.19  × 10 2 (8.95  × 10 3 )+1.04  × 10 1 (1.41  × 10 1 )+2.89  × 10 1 (1.94  × 10 1 )+NaN (0.00%)+NaN (0.00%)+5.34  × 10 2 (7.80  × 10 2 )+1.50  × 10 2   (3.59  × 10 3 )
MW75.17  × 10 2 (2.36  × 10 4 )+8.28  × 10 3 (5.79  × 10 4 )+8.40  × 10 3 (2.95  × 10 4 )+6.34  × 10 3   (3.52  × 10 4 )-9.93  × 10 3 (7.80  × 10 4 )+1.14  × 10 2 (5.83  × 10 4 )+7.38  × 10 3 (5.58  × 10 4 )+7.01  × 10 3 (5.74  × 10 4 )
MW8NaN (86.67%)+5.70  × 10 2 (5.56  × 10 3 )+8.09  × 10 2 (2.29  × 10 2 )+8.51  × 10 2 (2.98  × 10 2 )+NaN (0.00%)+NaN (0.00%)+5.72  × 10 2 (6.09  × 10 3 )+4.94  × 10 2   (1.33  × 10 3 )
MW9NaN (0.00%)+NaN (0.00%)+NaN (83.33%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (96.67%)+8.98  × 10 3   (1.22  × 10 3 )
MW10NaN (46.67%)+1.55  × 10 1 (5.28  × 10 2 )+1.22  × 10 1 (1.01  × 10 1 )+NaN (83.33%)+NaN (0.00%)+NaN (0.00%)+1.15  × 10 1 (1.06  × 10 1 )+2.53  × 10 2   (5.93  × 10 3 )
MW112.49  × 10 1 (3.29  × 10 1 )-1.34  × 10 1 (2.32  × 10 1 )-1.64  × 10 2 (1.63  × 10 3 )+6.50  × 10 3   (2.38  × 10 4 )-NaN (83.33%)+6.98  × 10 3 (1.18  × 10 4 )-8.06  × 10 3 (8.20  × 10 4 )-1.23  × 10 2 (5.55  × 10 4 )
MW12NaN (0.00%)+NaN (60.17%)+NaN (76.67%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (83.33%)+6.92  × 10 3   (7.68  × 10 4 )
MW134.29  × 10 1 (1.48  × 10 1 )+1.80  × 10 1 (2.52  × 10 1 )+1.28  × 10 1 (1.60  × 10 2 )+1.67  × 10 1 (6.53  × 10 2 )+1.68 (2.88  × 10 1 )+2.12 (3.69  × 10 1 )+1.16  × 10 1 (2.50  × 10 2 )+5.97  × 10 2   (1.17  × 10 2 )
MW142.99  × 10 1 (7.69  × 10 2 )+3.98  × 10 1 (5.49  × 10 2 )+3.26  × 10 1 (7.34  × 10 2 )+3.37  × 10 1 (6.59  × 10 2 )+7.41  × 10 1 (3.41  × 10 2 )+9.08  × 10 1 (3.98  × 10 2 )+4.93  × 10 1 (5.00  × 10 2 )+1.45  × 10 1   (4.24  × 10 3 )
+/-/=    13/1/0    13/1/0    14/0/0    12/2/0    14/0/0    13/1/0    13/1/0
Table 2. HV experimental results of BTCMO and 7 comparative algorithms.
Problem    TSTI    BiCo    CTAEA    CTSEA    C3M    CAEAD    CMOSMA    BTCMO
CF15.03  × 10 1 (3.48  × 10 3 )+5.17  × 10 1 (2.59  × 10 3 )+4.89  × 10 1 (2.19  × 10 3 )+5.39  × 10 1 (2.72  × 10 3 )+5.60  × 10 1 (8.03  × 10 4 )-5.61  × 10 1   (9.09  × 10 4 )-5.58  × 10 1 (4.36  × 10 4 )-5.56  × 10 1 (6.59  × 10 4 )
CF24.82  × 10 1 (3.34  × 10 2 )+5.01  × 10 1 (2.79  × 10 2 )+5.09  × 10 1 (3.38  × 10 2 )+5.04  × 10 1 (2.81  × 10 2 )+5.92  × 10 1 (8.79  × 10 3 )=5.78  × 10 1 (7.57  × 10 3 )+5.53  × 10 1 (2.06  × 10 2 )+5.98  × 10 1   (2.23  × 10 2 )
CF39.80  × 10 2 (3.24  × 10 2 )+1.21  × 10 1 (3.84  × 10 2 )+1.30  × 10 1 (4.25  × 10 2 )+1.37  × 10 1 (4.41  × 10 2 )+6.54  × 10 2 (3.54  × 10 2 )+2.50  × 10 2 (1.81  × 10 2 )+1.88  × 10 1 (2.72  × 10 2 )+2.58  × 10 1   (1.41  × 10 2 )
CF41.65  × 10 1 (4.55  × 10 2 )+2.17  × 10 1 (5.33  × 10 2 )+2.00  × 10 1 (4.43  × 10 2 )+2.09  × 10 1 (5.10  × 10 2 )+2.10  × 10 1 (7.42  × 10 2 )+2.27  × 10 1 (5.10  × 10 2 )+2.91  × 10 1 (6.15  × 10 2 )+3.63  × 10 1   (2.64  × 10 2 )
CF51.15  × 10 1 (1.98  × 10 2 )+1.17  × 10 1 (1.48  × 10 2 )+1.26  × 10 1 (2.39  × 10 2 )+1.13  × 10 1 (1.42  × 10 2 )+0.00 (0.00)+0.00 (0.00)+1.25  × 10 1 (3.65  × 10 2 )+2.91  × 10 1   (2.11  × 10 2 )
CF63.53  × 10 1 (7.41  × 10 2 )+3.54  × 10 1 (1.65  × 10 2 )+3.84  × 10 1 (4.69  × 10 2 )+4.00  × 10 1 (2.50  × 10 2 )+4.96  × 10 1 (2.42  × 10 2 )+4.54  × 10 1 (2.26  × 10 2 )+4.31  × 10 1 (3.52  × 10 2 )+5.87  × 10 1   (1.48  × 10 2 )
CF71.67  × 10 1 (8.35  × 10 2 )+2.11  × 10 1 (9.66  × 10 2 )+1.82  × 10 1 (8.44  × 10 2 )+1.71  × 10 1 (7.03  × 10 2 )+0.00 (0.00)+0.00 (0.00)+2.53  × 10 1 (9.21  × 10 2 )+4.09  × 10 1   (4.99  × 10 2 )
CF8NaN (0.00%)+NaN (0.00%)+1.07  × 10 1 (1.40  × 10 2 )+1.55  × 10 1 (4.46  × 10 2 )+1.14  × 10 2 (9.27  × 10 3 )+6.04  × 10 3 (7.23  × 10 3 )+1.21  × 10 1 (3.71  × 10 2 )+3.48  × 10 1   (2.33  × 10 2 )
CF99.14  × 10 2 (1.88  × 10 3 )+9.09  × 10 2 (2.77  × 10 7 )+1.70  × 10 1 (2.68  × 10 2 )+2.67  × 10 1 (9.89  × 10 3 )+1.22  × 10 1 (3.20  × 10 2 )+9.48  × 10 2 (3.26  × 10 2 )+2.46  × 10 1 (9.61  × 10 3 )+4.00  × 10 1   (1.62  × 10 2 )
CF10NaN (0.00%)+NaN (0.00%)+NaN (73.99%)+NaN (93.33%)+NaN (0.00%)+NaN (0.00%)+1.01  × 10 1 (1.98  × 10 2 )+3.19  × 10 1   (6.23  × 10 3 )
+/-/=    10/0/0    10/0/0    10/0/0    10/0/0    8/1/1    9/1/0    9/1/0
LIR-CMOP11.29  × 10 1 (9.39  × 10 3 )+1.28  × 10 1 (5.92  × 10 3 )+NaN (0.20%)+9.95  × 10 2 (1.20  × 10 2 )+1.10  × 10 1 (1.04  × 10 2 )+1.94  × 10 1 (9.44  × 10 3 )=1.05  × 10 1 (5.90  × 10 3 )+1.97  × 10 1   (5.10  × 10 3 )
LIR-CMOP22.54  × 10 1 (1.06  × 10 2 )+2.47  × 10 1 (8.80  × 10 3 )+NaN (1.20%)+2.07  × 10 1 (1.11  × 10 2 )+1.98  × 10 1 (2.00  × 10 2 )+3.37  × 10 1   (6.94  × 10 3 )-2.27  × 10 1 (1.41  × 10 2 )+3.23  × 10 1 (5.06  × 10 3 )
LIR-CMOP31.13  × 10 1 (1.09  × 10 2 )+1.12  × 10 1 (5.33  × 10 3 )+NaN (0.00%)+9.21  × 10 2 (5.53  × 10 3 )+NaN (96.67%)+1.74  × 10 1 (6.52  × 10 3 )=9.63  × 10 2 (8.63  × 10 3 )+1.74  × 10 1   (3.77  × 10 3 )
LIR-CMOP42.09  × 10 1 (8.17  × 10 3 )+2.04  × 10 1 (7.79  × 10 3 )+NaN (0.00%)+1.77  × 10 1 (1.32  × 10 2 )+1.80  × 10 1 (3.53  × 10 2 )+2.79  × 10 1   (7.72  × 10 3 )-1.85  × 10 1 (7.94  × 10 3 )+2.77  × 10 1 (5.68  × 10 3 )
LIR-CMOP50.00 (0.00)+0.00 (0.00)+0.00 (0.00)+1.25  × 10 1 (6.34  × 10 3 )+1.11  × 10 1 (1.12  × 10 1 )=0.00 (0.00)+1.37  × 10 1 (8.69  × 10 3 )+1.96  × 10 1   (1.24  × 10 2 )
LIR-CMOP60.00 (0.00)+0.00 (0.00)+0.00 (0.00)+9.72  × 10 2 (2.76  × 10 3 )+1.10  × 10 1 (5.19  × 10 2 )=2.20  × 10 2 (2.90  × 10 2 )+1.10  × 10 1 (7.72  × 10 3 )+1.33  × 10 1   (1.18  × 10 2 )
LIR-CMOP72.21  × 10 1 (4.23  × 10 2 )+0.00 (0.00)+8.83  × 10 2 (8.29  × 10 2 )+2.34  × 10 1 (4.40  × 10 3 )+2.76  × 10 1   (1.76  × 10 2 )-2.45  × 10 1 (8.17  × 10 3 )+2.42  × 10 1 (3.88  × 10 3 )+2.57  × 10 1 (3.26  × 10 3 )
LIR-CMOP87.55  × 10 2 (1.01  × 10 1 )+0.00 (0.00)+0.00 (0.00)+2.21  × 10 1 (2.52  × 10 3 )+2.79  × 10 1   (1.81  × 10 2 )-2.38  × 10 1 (1.16  × 10 2 )+2.30  × 10 1 (5.72  × 10 3 )+2.48  × 10 1 (5.66  × 10 3 )
LIR-CMOP92.36  × 10 1 (3.02  × 10 2 )+9.95  × 10 2 (4.58  × 10 2 )+2.76  × 10 1 (4.44  × 10 2 )+2.28  × 10 1 (4.24  × 10 2 )+3.62  × 10 1 (1.29  × 10 2 )+3.20  × 10 1 (1.85  × 10 2 )+2.87  × 10 1 (2.18  × 10 2 )+5.30  × 10 1   (4.59  × 10 3 )
LIR-CMOP109.92  × 10 2 (5.06  × 10 2 )+5.26  × 10 2 (2.53  × 10 4 )+4.61  × 10 1 (1.96  × 10 2 )+2.57  × 10 1 (3.63  × 10 2 )+5.60  × 10 1 (5.39  × 10 2 )+5.16  × 10 1 (3.59  × 10 2 )+5.54  × 10 1 (3.40  × 10 2 )+6.98  × 10 1   (1.94  × 10 3 )
LIR-CMOP112.27  × 10 1 (2.69  × 10 3 )+1.71  × 10 1 (1.09  × 10 2 )+5.56  × 10 1 (7.43  × 10 3 )+3.68  × 10 1 (3.81  × 10 2 )+5.83  × 10 1 (4.74  × 10 2 )+5.58  × 10 1 (4.00  × 10 2 )+6.17  × 10 1 (2.13  × 10 2 )+6.91  × 10 1   (6.30  × 10 4 )
LIR-CMOP124.06  × 10 1 (4.47  × 10 2 )+1.81  × 10 1 (1.44  × 10 2 )+4.20  × 10 1 (1.82  × 10 2 )+4.15  × 10 1 (2.88  × 10 2 )+5.28  × 10 1 (9.55  × 10 3 )+4.73  × 10 1 (1.69  × 10 2 )+4.55  × 10 1 (2.12  × 10 2 )+6.06  × 10 1   (2.57  × 10 3 )
LIR-CMOP131.05 × 10−4 (1.42 × 10−4)+8.11 × 10−5 (1.04 × 10−4)+5.44 × 10−1 (1.62 × 10−3)+5.55 × 10−1 (1.32 × 10−3)+3.50 × 10−1 (1.49 × 10−1)+3.43 × 10−1 (1.23 × 10−1)+5.52 × 10−1 (1.23 × 10−3)+5.60 × 10−1 (3.08 × 10−4)
LIR-CMOP145.94  × 10 4 (2.82  × 10 4 )+4.25  × 10 4 (3.17  × 10 4 )+5.45  × 10 1 (9.21  × 10 4 )+5.55  × 10 1 (1.23  × 10 3 )+4.59  × 10 1 (9.44  × 10 2 )+3.80  × 10 1 (6.24  × 10 2 )+5.53  × 10 1 (1.72  × 10 3 )+5.59  × 10 1   (6.46  × 10 4 )
+/-/=    14/0/0    14/0/0    14/0/0    14/0/0    10/2/2    10/2/2    14/0/0
MW1NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+4.89  × 10 1   (1.96  × 10 4 )
MW23.92  × 10 1 (5.43  × 10 2 )+5.25  × 10 1 (9.06  × 10 3 )+5.00  × 10 1 (3.37  × 10 2 )+4.74  × 10 1 (4.82  × 10 2 )+NaN (0.00%)+NaN (0.00%)+5.32  × 10 1 (1.29  × 10 2 )+5.55  × 10 1   (2.55  × 10 3 )
MW34.56  × 10 1 (6.54  × 10 2 )+5.37  × 10 1 (1.21  × 10 3 )+5.41  × 10 1 (9.87  × 10 4 )+5.40  × 10 1 (1.32  × 10 3 )+5.29  × 10 1 (4.75  × 10 3 )+5.23  × 10 1 (3.37  × 10 3 )+5.40  × 10 1 (7.63  × 10 4 )+5.45  × 10 1   (2.02  × 10 4 )
MW4NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+8.36  × 10 1   (1.88  × 10 2 )
MW5NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+3.22  × 10 1   (1.80  × 10 3 )
MW69.79  × 10 2 (4.13  × 10 2 )+2.74  × 10 1 (1.02  × 10 2 )+2.44  × 10 1 (4.96  × 10 2 )+1.77  × 10 1 (5.06  × 10 2 )+NaN (0.00%)+NaN (0.00%)+2.73  × 10 1 (2.44  × 10 2 )+3.09  × 10 1   (4.84  × 10 3 )
MW74.00  × 10 1 (1.15  × 10 3 )+4.05  × 10 1 (1.05  × 10 3 )+4.06  × 10 1 (1.16  × 10 3 )+4.10  × 10 1 (1.18  × 10 3 )=4.04  × 10 1 (1.57  × 10 3 )+4.01  × 10 1 (1.53  × 10 3 )+4.11  × 10 1   (1.13  × 10 3 )-4.10  × 10 1 (1.01  × 10 3 )
MW8NaN (86.67%)+5.03  × 10 1 (1.29  × 10 2 )+4.52  × 10 1 (4.57  × 10 2 )+4.44  × 10 1 (5.04  × 10 2 )+NaN (0.00%)+NaN (0.00%)+5.00  × 10 1 (1.34  × 10 2 )+5.25  × 10 1   (5.89  × 10 3 )
MW9NaN (0.00%)+NaN (0.00%)+NaN (83.33%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (96.67%)+3.92  × 10 1   (3.79  × 10 3 )
MW10NaN (46.67%)+3.42  × 10 1 (2.82  × 10 2 )+3.63  × 10 1 (4.96  × 10 2 )+NaN (83.33%)+NaN (0.00%)+NaN (0.00%)+3.68  × 10 1 (5.37  × 10 2 )+4.27  × 10 1   (5.56  × 10 3 )
MW113.85  × 10 1 (8.40  × 10 2 )-4.11  × 10 1 (6.33  × 10 2 )=4.42  × 10 1 (7.41  × 10 4 )+4.47  × 10 1   (4.00  × 10 4 )-NaN (83.33%)+4.47  × 10 1 (2.47  × 10 4 )-4.46  × 10 1 (5.58  × 10 4 )-4.46  × 10 1 (4.42  × 10 4 )
MW12NaN (0.00%)+NaN (60.17%)+NaN (76.67%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (83.33%)+6.03  × 10 1   (8.81  × 10 4 )
MW132.67  × 10 1 (6.08  × 10 2 )+4.00  × 10 1 (4.41  × 10 2 )+4.10  × 10 1 (1.34  × 10 2 )+3.83  × 10 1 (3.90  × 10 2 )+0.00 (0.00)+0.00 (0.00)+4.22  × 10 1 (1.77  × 10 2 )+4.52  × 10 1   (4.14  × 10 3 )
MW144.04  × 10 1 (3.68  × 10 2 )+3.53  × 10 1 (2.60  × 10 2 )+3.91  × 10 1 (3.64  × 10 2 )+3.92  × 10 1 (3.24  × 10 2 )+1.72  × 10 1 (1.73  × 10 2 )+1.28  × 10 1 (1.72  × 10 2 )+3.19  × 10 1 (3.29  × 10 2 )+4.62  × 10 1   (1.88  × 10 3 )
+/-/=    13/1/0    13/0/1    14/0/0    12/1/1    14/0/0    13/1/0    12/2/0
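Table 2 is based on the hypervolume (HV) indicator, where larger values indicate a better front approximation. The sketch below shows one simple way to compute HV for a two-objective minimization front by sweeping rectangles toward a reference point; the front and reference point are illustrative assumptions, not values from the experiments.

```python
import numpy as np

def hv_2d(points: np.ndarray, ref: np.ndarray) -> float:
    """Hypervolume of a 2-objective minimization front w.r.t. a reference
    point: sum of the rectangles swept between consecutive nondominated
    points after sorting by the first objective."""
    # Keep only points that strictly dominate the reference point.
    pts = points[np.all(points < ref, axis=1)]
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]        # sort by f1 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                    # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Illustrative example.
front = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])
print(f"HV = {hv_2d(front, np.array([1.1, 1.1])):.4f}")
```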
Table 3. Details of RWMOP.
Name        Problem                                                           M (objectives)    D (decision variables)
RWMOP1      Vibrating Platform Design                                         2                 5
RWMOP2      Two Bar Truss Design                                              2                 3
RWMOP3      Speed Reducer Design                                              2                 7
RWMOP4      Gear Train Design                                                 2                 4
RWMOP5      Simply Supported I-Beam Design                                    2                 4
RWMOP6      Cantilever Beam Design                                            2                 2
RWMOP7      Crash Energy Management for High-Speed Train                      2                 6
RWMOP8      Haverly's Pooling Problem                                         2                 9
RWMOP9      Process Flow Sheeting Problem                                     2                 3
RWMOP10     Two Reactor Problem                                               2                 7
RWMOP11     Process Synthesis Problem                                         2                 7
RWMOP12     Synchronous Optimal Pulse-Width Modulation of 3-Level Inverters   2                 25
Table 4. IGD experimental results of BTCMO and 7 comparative algorithms for RWMOP.
Problem    TSTI    BiCo    CTAEA    CTSEA    C3M    CAEAD    CMOSMA    BTCMO
RW-MOP11.01  × 10 2 (1.82 × 10 )+9.15 × 10 (1.02  × 10 1 )+1.48  × 10 2 (6.35 × 10 )+9.15 × 10 (7.06  × 10 3 )+9.15 × 10 (5.53  × 10 3 )+9.15 × 10 (3.83  × 10 3 )+9.15 × 10 (3.66  × 10 2 )+8.85  × 10   (3.07)
RW-MOP26.17 × 10 (2.75 × 10 )+2.28  × 10 2 (1.10  × 10 2 )+6.24  × 10 4 (6.93  × 10 3 )+1.03  × 10 2 (5.21 × 10 )+2.84  × 10 2 (7.54 × 10 )+3.10  × 10 2 (8.65 × 10 )+2.42  × 10 2 (1.22  × 10 2 )+6.05(4.23)
RW-MOP36.05  × 10 2 (5.71  × 10 2 )+6.06  × 10 2 (2.82  × 10 1 )+NaN (11.47%)+6.06  × 10 2 (7.30  × 10 1 )+6.05  × 10 2 (6.90  × 10 2 )+6.05  × 10 2 (7.42  × 10 2 )+7.54  × 10 2 (4.39  × 10 2 )+6.05  × 10 2   (3.27  × 10 2 )
RW-MOP41.55 × 10 (6.95  × 10 5 )+1.55 × 10 (7.73  × 10 3 )+1.54 × 10 (4.30  × 10 2 )+1.55 × 10 (7.62  × 10 3 )+1.55 × 10 (2.46  × 10 3 )+1.55 × 10 (4.19  × 10 3 )+1.55 × 10 (6.00  × 10 3 )+1.21  × 10   (1.81)
RW-MOP53.93 (5.35  × 10 1 )+3.94 (8.48  × 10 1 )+3.87 (7.10  × 10 1 )+3.72 (5.03  × 10 1 )+3.85 (5.01  × 10 1 )+3.81 (5.38  × 10 1 )+4.12 (6.47  × 10 1 )+2.92  × 10 1   (1.56  × 10 1 )
RW-MOP62.00  × 10 3 (1.32  × 10 18 )=2.00  × 10 3 (1.32  × 10 18 )=2.00  × 10 3 (1.32  × 10 18 )=2.00  × 10 3 (1.32  × 10 18 )=2.00  × 10 3 (1.32  × 10 18 )=2.00  × 10 3 (1.32  × 10 18 )=2.00  × 10 3 (1.32  × 10 18 )=2.00  × 10 3 (1.32  × 10 18 )
RW-MOP71.61  × 10 2 (2.94  × 10 4 )+1.76  × 10 2 (6.41  × 10 3 )+1.62 (1.03  × 10 1 )+1.61  × 10 2 (7.06  × 10 18 )+1.61  × 10 2 (7.06  × 10 18 )+1.61  × 10 2 (7.06  × 10 18 )+1.61  × 10 2 (7.06  × 10 18 )+1.61  × 10 2   (6.70  × 10 18 )
RW-MOP8NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (90.00%)+2.00  × 10 3 (1.37  × 10 12 )+NaN (0.00%)+1.28  × 10 3   (9.25  × 10 13 )
RW-MOP99.90  × 10 1 (4.23  × 10 6 )+9.90  × 10 1 (7.39  × 10 6 )+1.02 (6.31  × 10 3 )+9.90  × 10 1 (3.91  × 10 6 )+9.90  × 10 1 (2.10  × 10 5 )+9.90  × 10 1 (1.89  × 10 5 )+9.90  × 10 1 (2.78  × 10 6 )+9.90  × 10 1   (1.67  × 10 5 )
RW-MOP103.18 × 10 (1.85)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+1.92(3.62  × 10 5 )
RW-MOP119.27 (2.96  × 10 5 )+9.27 (2.10  × 10 5 )+9.33 (4.30  × 10 1 )+9.72 (6.96  × 10 1 )+9.27 (3.55  × 10 5 )+9.27 (7.27  × 10 6 )+9.31 (2.09  × 10 1 )+9.01(1.18  × 10 4 )
RW-MOP12NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+NaN (0.00%)+1.08  × 10 1   (8.76  × 10 3 )
+/-/=    11/0/1    11/0/1    11/0/1    11/0/1    11/0/1    11/0/1    11/0/1
Table 5. Wilcoxon signed-rank test results of IGD.
BTCMO vs.    R+     R−     p           Significant (Level = 0.05)
TSTI         687    54     0.000002    Yes
BiCo         679    62     0.000004    Yes
CTAEA        691    50     0.000000    Yes
CTSEA        693    48     0.000002    Yes
C3M          593    148    0.000642    Yes
CAEAD        578    163    0.001341    Yes
CMOSMA       707    34     0.000001    Yes
Table 6. Wilcoxon signed-rank test results of HV.
BTCMO vs.    R+     R−    p           Significant (Level = 0.05)
TSTI         740    1     0.000000    Yes
BiCo         740    1     0.000000    Yes
CTAEA        741    0     0.000000    Yes
CTSEA        739    2     0.000000    Yes
C3M          720    21    0.000000    Yes
CAEAD        720    21    0.000000    Yes
CMOSMA       735    6     0.000000    Yes
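Tables 5 and 6 summarize Wilcoxon signed-rank tests on the paired per-problem results of BTCMO versus each competitor. A minimal sketch of such a test, assuming the paired mean IGD values are available as NumPy arrays (the numbers below are illustrative, not the paper's data), might look as follows.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired mean IGD values over the same set of test problems:
# one entry per problem for BTCMO and for one comparison algorithm.
igd_btcmo = np.array([0.007, 0.053, 0.093, 0.143, 0.297])
igd_other = np.array([0.051, 0.173, 0.556, 0.509, 0.629])

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(igd_btcmo, igd_other, alternative="two-sided")
print(f"W = {stat}, p = {p:.6f}, significant at 0.05: {p < 0.05}")
```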