1. Introduction
Multi-objective optimization problems (MOPs) constitute an important branch of mathematical optimization and operations research and are characterized by the existence of multiple conflicting objectives. Due to their pervasive presence in practical applications, MOPs have been of keen interest to researchers in recent years. Multi-objective evolutionary algorithms (MOEAs) are a class of intelligent algorithms based on population evolution for dealing with MOPs. Owing to their flexible framework, MOEAs have strong applicability for solving MOPs with various characteristics and have been widely used in many fields, such as trajectory planning [
1], flood control scheduling [
2], rail vehicle systems [
3], transmission gear design [
4], and job shop scheduling [
5].
According to the selection criteria of population, existing MOEAs can be classified into three categories: dominance-based MOEAs [
6,
7,
8], indicator-based MOEAs [
9,
10,
11,
12], and decomposition-based MOEAs [
13,
14,
15]. In the context of dominance-based MOEAs, the non-dominated sorting rank and crowding degree are used to evaluate the fitness of individuals, determining the reserved population in the evolution process. However, when the number of objectives is large, the proportion of non-dominated solutions in the population can be significant, which leads to a loss of selection pressure. As for the indicator-based MOEAs, performance indicators, such as hypervolume [
16] and inverted generational distance [
17], are used to evaluate the fitness of individuals. These algorithms can overcome the challenges brought by the loss of selection pressure, but their high computational complexity is a crucial issue, resulting in excessive runtime [
12].
As a new class of MOEAs, the decomposition-based MOEAs (MOEA/Ds) have attracted significant attention from many scholars due to their good generality and low computational complexity. MOEA/Ds utilize a set of weight vectors to decompose a MOP into a series of simpler subproblems, and then the population approximates the Pareto front (PF) by optimizing these subproblems in parallel. Since the original MOEA/D was proposed in [
13], researchers have developed numerous variant versions, which made improvements on various aspects such as the decomposition method [
18,
19,
20,
21,
22,
23], reproduction strategy [
24,
25,
26], replacement procedure [
27,
28,
29,
30,
31], computational resource allocation [
32,
33], and weight vector adjustment [
34,
35,
36,
37,
38].
In the original MOEA/D [
13], the replacement procedure solely driven by the scalarizing function value is the primary force to encourage convergence, enabling the population to approximate the PF. However, this approach has a detrimental effect on the diversity of the population. To better balance convergence and diversity, some efforts have been made on the replacement mechanism, such as adaptive neighborhood [
39], stable-state neighborhood [
31], adaptive replacement strategy [
40], distance-based updating [
28], improved global replacement [
41], stable matching model [
27], and incomplete matching list [
42]. In addition, some other approaches, including MOEA/DD [
43] and MOEA/DDS [
44], exploit the advantages of different selection criteria. Despite their contributions to enhancing algorithm performance, these approaches also exhibit some drawbacks. Most of the existing methods limit the region or scale of replacement to preserve the diversity of the population, which may exclude more reasonable association patterns between subproblems and individuals, resulting in insufficient convergence efficiency. As illustrated in
Figure 1, two weight vectors are each associated with an individual. Compared with one of the currently associated individuals, the new individual y exhibits better convergence for the corresponding subproblem, yet y falls outside the replacement region (marked in gray) constrained by diversity estimation, such as crowding distance, vector angle, or niche technique. This phenomenon occurs frequently during evolution. In contrast, the previously proposed stable matching model [
27] and incomplete matching list [
42] enable one-to-one global matching between subproblems and individuals, achieving a better equilibrium between convergence and diversity. However, these matching-based replacement mechanisms, which lack competition, have a high risk of matching an individual with an unpreferred subproblem, ultimately leading to imbalanced association outcomes. In fact, if we consider the subproblems and individuals as two agent sets, the replacement procedure in MOEA/D can be regarded as a bilateral selection process: each subproblem prefers an individual that possesses a better scalarizing function value to enhance convergence, while each individual aims to avoid repetitive associations to maintain diversity. From this perspective, the auction concept, originating from the economic field, can be adopted as a natural framework to model this selection process. By treating the subproblems as bidders and the individuals as goods to be allocated, the auction mechanism introduces an efficient and rational way to coordinate the replacement procedure. Distinguished from existing approaches, the competitive nature of auctions encourages subproblems to compete for their preferred individuals among the whole population, leading to a more balanced and effective association. This paper presents a first attempt along this line.
Aside from research on the replacement procedure, the approach of adjusting weight vectors is another aspect of improving the performance of MOEA/D. Relevant studies showed that the final non-dominated solutions obtained by MOEA/D significantly depend on the distribution of weight vectors, and the best results can only be achieved when the distribution of weight vectors is consistent with the shape of PF [
45,
46,
47]. Generally, utilizing a series of fixed uniform weight vectors is inefficient, especially when dealing with MOPs with irregular PFs that are degenerate, convex, discontinuous, or sharp-tailed. To achieve the appropriate distribution of weight vectors, adaptively adjusting them during evolution is a promising method that has been adopted by many scholars [
14,
34,
35,
36,
37,
38,
48,
49,
50,
51]. Although many effective strategies have been designed to adjust the weight vectors, their trigger timing is predefined and lacks guidance from the evolutionary state, so the adjustment may come at an inappropriate moment. To address this issue, Dong [
52] utilized a measure called population activity to determine the trigger timing of weight vector adjustment. Similarly, Lucas [
45] presented MOEA/D-UR in which an improvement metric is defined to determine when to adjust the weight vectors. These research results indicated that the adjustment strategy with measure-dependent trigger timing is more effective than that with predefined trigger timing. Therefore, we drew inspiration from these methods to design an improved weight vector adjustment strategy in MOEA/D, whose trigger timing is determined by the convergence activity of the population.
Based on the above discussion, an improved MOEA/D with an auction-based matching mechanism (ABM), called MOEA/D-ABM, is proposed in this paper. As an application of auction theory [
53], the ABM mechanism is the main innovation of this paper. Meanwhile, several advanced techniques are incorporated into MOEA/D-ABM, including the dynamic normalization method and the weight vector adjustment strategy, to further enhance the performance of the algorithm. The main contributions of this paper are summarized as follows:
- (1)
A novel auction-based matching mechanism is proposed to match the weight vectors (subproblems) with current solutions (individuals) in each generation. Specifically, the weight vectors and solutions are regarded as the bidders and the auctioned commodities, respectively. By simulating the auction process in economic activities, each weight vector can be associated with its preferred solution that possesses a better scalarizing function value in a competitive manner. This mechanism leverages the strengths of free competition to encourage convergence in a fair and efficient way. Meanwhile, the one-to-one matching model avoids the presence of repeated solutions in the population, maintaining diversity. We demonstrate the rationality of this mechanism from a mathematical analysis perspective.
- (2)
A dynamic normalization method is employed to normalize the objectives. Commonly, the ideal and nadir points are estimated to normalize the objectives, lessening the influence of the distinct scales of the objectives on the evolution, especially when solving many-objective problems. However, in early evolution, the estimated nadir point significantly differs from the true one, which may disrupt the appropriate search direction [
54]. In the dynamic normalization method, we decouple the estimated nadir point from the normalization procedure at the initial stage of evolution and gradually increase its influence as the evolution progresses. This approach can effectively avoid the above disruption.
- (3)
A weight vector adjustment mechanism based on the sparsity level of the population is adopted, and a measure based on the convergence activity of the population is defined to periodically identify the trigger timing of the adjustment. A low level of population convergence activity indicates difficulty in further optimizing the current subproblems, necessitating adjustment of the weight vectors. During the adjustment process, crowded individuals and their associated weight vectors are deleted, and, subsequently, sparse non-dominated solutions and newly generated weight vectors are added. This strategy helps to estimate the shape of the PF comprehensively and improves the diversity of the final output solutions.
- (4)
Finally, we compare MOEA/D-ABM with six state-of-the-art MOEAs on several benchmark problems, including the DTLZ test suite [
55], WFG test suite [
56], and BT test suite [
57]. The experimental results show the competitiveness of our algorithm in both diversity and convergence. Especially when dealing with the BT test suite, which poses significant convergence challenges, MOEA/D-ABM significantly outperforms the other compared MOEAs in terms of convergence rate. The results demonstrate that integrating ABM into MOEA/D can significantly enhance the convergence efficiency of the algorithm while maintaining diversity.
The remainder of this paper is organized as follows. In
Section 2, some basic knowledge about multi-objective optimization problems and the MOEA/D algorithm is introduced.
Section 3 describes our proposed MOEA/D-ABM in detail. In
Section 4, we implement MOEA/D-ABM and six comparison MOEAs to solve several MOP test suites. The experimental results are analyzed to verify the effectiveness of our proposed algorithm. Finally, the work of this paper is concluded in
Section 5.
3. Proposed Algorithm
In this section, we first present the overall framework of MOEA/D-ABM. After that, we give detailed descriptions of several crucial components. The major innovation of our proposed algorithm is embodied in the auction-based matching (ABM) mechanism. Moreover, the dynamic normalization method and weight vector adjustment strategy are employed to further enhance the performance of the algorithm.
3.1. The Framework of MOEA/D-ABM
The main framework of MOEA/D-ABM is depicted in Algorithm 2.
Initially,
N weight vectors are generated by the UR method, and then, the WS-transformation [
34] is applied to them. The neighborhood of each weight vector
is composed of the
T weight vectors closest to it.
N feasible solutions are randomly generated to form the population
. The ideal point
and nadir point
are estimated by the minimum and maximum value of each objective among
, respectively. An external population, denoted as
, is initialized as an empty set to store the non-dominated solutions during the evolution process.
As the iteration proceeds, for each individual
, the mating pool
is chosen from its neighborhood
or the whole population
with a selection probability of
for
, and a new offspring
is generated by simulating binary crossover (SBX) [
64] and polynomial mutation operators on
(Algorithm 2, lines 6–11). After that,
is evaluated and the ideal point is updated, and, at most,
solutions in
are replaced by
if their associated scalarizing function values can be further optimized (Algorithm 2, lines 12–14). It should be noted that, in the calculation of scalarizing function value, a dynamic normalization procedure is employed to control the intensity of the nadir point. After the offspring reproduction and replacement, the non-dominated solutions stored in
is updated based on the offspring set
, and the nadir point
is re-estimated by the maximum value of each objective among
(Algorithm 2, lines 16–17). Furthermore, in every generation, the ABM mechanism is incorporated to rematch the weight vectors (subproblems) with all current solutions in the union set
, aiming to enhance the convergence of the population (Algorithm 2, lines 18–20). Additionally, at every 5% interval between 10% and 90% of the whole evolution process, a measure of convergence activity, denoted as
, is calculated to determine whether or not to adjust the weight vectors (Algorithm 2, lines 22–27). If
,
weight vectors are adjusted based on the distribution of
. Finally, the solutions in
are output.
The following subsections provide detailed descriptions of several components of MOEA/D-ABM.
Algorithm 2: MOEA/D-ABM |
|
Algorithm 3: Replacement |
|
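As a concrete illustration of the replacement procedure (Algorithm 3), the sketch below lets an offspring replace at most n_r solutions whose scalarizing function value it improves. The function names, the use of the Tchebycheff scalarizing function, and the value of n_r are illustrative assumptions, not code from the paper.

```python
import numpy as np

def tchebycheff(f, w, z_ideal):
    """Tchebycheff scalarizing function value of objectives f under weight w."""
    return np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z_ideal)))

def replace(pop_f, weights, y_f, z_ideal, indices, n_r=2):
    """Sketch of the classic MOEA/D replacement (cf. Algorithm 3): the offspring
    y replaces at most n_r solutions, among the given subproblem indices, whose
    scalarizing function value it improves. Returns the number of replacements."""
    replaced = 0
    for i in indices:
        if replaced >= n_r:
            break
        if tchebycheff(y_f, weights[i], z_ideal) < tchebycheff(pop_f[i], weights[i], z_ideal):
            pop_f[i] = list(y_f)   # y takes over subproblem i
            replaced += 1
    return replaced

# A well-converged offspring improves the first two subproblems it visits,
# then stops because the replacement budget n_r is exhausted.
pop_f = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
weights = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
print(replace(pop_f, weights, [0.3, 0.3], [0.0, 0.0], [0, 1, 2]))
```

The cap n_r is exactly the diversity-preserving limit discussed in Section 3.5: it prevents one offspring from flooding the population, at the cost of leaving some subproblems under-optimized.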
3.2. Weight Vector Initialization
We used the UR method introduced in
Section 2.2.2 to generate the initial
N uniformly distributed weight vectors, denoted as
, due to its simplicity and good performance in uniformity. The optimal solution of the Tchebycheff scalarizing function is the intersection point of the vector
and the normalized PF; thus, the WS-transformation is applied to the generated weight vectors. In this method, the weight vector,
, is transformed as follows:
Due to the existence of boundary weight vectors,
is replaced by
when
.
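A minimal sketch of the WS-transformation, assuming the reciprocal-and-renormalize form used in MOEA/D-AWA [34]; the eps guard for boundary weight vectors (components equal to zero) is an illustrative choice rather than the paper's exact replacement value:

```python
import numpy as np

def ws_transform(weights, eps=1e-6):
    """Sketch of the WS-transformation on a set of weight vectors (rows).

    Assumed form (per MOEA/D-AWA): each component is replaced by its
    reciprocal and the vector is renormalized to sum to 1. Zero components
    arising from boundary weight vectors are first replaced by a small eps.
    """
    w = np.asarray(weights, dtype=float)
    w = np.where(w == 0.0, eps, w)        # handle boundary weight vectors
    inv = 1.0 / w
    return inv / inv.sum(axis=1, keepdims=True)

# A uniform 2-objective weight vector is left unchanged, while a skewed
# one is mirrored toward the opposite boundary.
print(ws_transform([[0.5, 0.5], [0.8, 0.2]]))
```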
3.3. External Population
The external population is defined to store the non-dominated solutions during the evolution process. The distribution of
resembles that of the true PF to some extent; thus, it is used to provide guidance for weight vector adjustment. The more solutions
includes, the more suitable it is for guiding weight vector adjustment, but this incurs a higher computational cost. As a compromise, we set the size of
to twice that of the population, i.e.,
. See Algorithm 4 for the update of
. If an offspring is non-dominated among
, we add it to
and delete any one dominated by it (Algorithm 4, lines 1–5). If the number of solutions in
exceeds
, we delete the most crowded one in
until its size is equal to
. The sparsity level is utilized to measure the crowding degree of a solution within a population by the following formulation [
65]:
where
denotes the Euclidean distance in the normalized objective space between the solution,
, and its
i-th nearest solutions in the population,
;
m is the number of objectives.
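The sparsity level above can be sketched as follows, assuming the product-of-distances form of [65] (the product of the Euclidean distances from a solution to its m nearest neighbors in the normalized objective space); all names are illustrative:

```python
import numpy as np

def sparsity_level(F):
    """Sparsity level of each row of the normalized objective matrix F (n x m).

    Assumed form: the product of the Euclidean distances from a solution to
    its m nearest neighbors, where m is the number of objectives. Larger
    values indicate sparser (less crowded) solutions.
    """
    F = np.asarray(F, dtype=float)
    n, m = F.shape
    dist = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)            # exclude the solution itself
    nearest = np.sort(dist, axis=1)[:, :m]    # m nearest neighbor distances
    return nearest.prod(axis=1)

# Of three collinear points, the middle one is the most crowded (smallest SL).
sl = sparsity_level([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(sl.argmin())   # index of the most crowded solution
```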
Algorithm 4: UpdateEP |
|
3.4. Dynamic Normalization Procedure
In the calculation of the scalarizing function value given in (
2), the ideal point and nadir point need to be estimated. The estimation of the ideal point could be the minimum value of each objective found in the evolution process. On the other hand, the estimation of the nadir point could be the maximum value of each objective in the non-dominated solutions,
. However,
has little reliable information about the true PF in the early generations; thus, the estimated nadir point may be significantly different from the true one, resulting in degradation during evolution [
54]. To address this issue, we employed a novel dynamic normalization method proposed in [
66] to control the intensity of the nadir point in the normalization procedure. In this method, the calculation of normalized objectives presented in (
2) is modified as follows:
Following the original paper, we used the sigmoid function to control the parameter
as follows:
where
t and
are the numbers of current generation and maximum generation, respectively. Initially, there is little reliable information about the true PF; thus,
and, further,
, indicating that the nadir point has no impact on the normalization. As the algorithm executes, the value of
increases, allowing a greater impact of the nadir point on the normalization. At the end of the algorithm,
, making (
8) and (
2) equivalent.
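The schedule can be sketched as follows. The sigmoid constant k and the interpolated denominator are assumptions for illustration, not the exact formulas of [66]; they only reproduce the behavior described above, where the nadir point has no impact when the control parameter is near 0 and full impact when it reaches 1.

```python
import math

def alpha(t, t_max, k=20.0):
    """Sigmoid schedule for the nadir-point intensity (k is illustrative).

    Rises from ~0 in the early generations to ~1 at the end of the run.
    """
    return 1.0 / (1.0 + math.exp(-k * (t / t_max - 0.5)))

def normalize(f, z_ideal, z_nadir, a):
    """One plausible interpolated normalization (an assumption, not the exact
    formula of the cited method): with a = 0 the nadir point has no impact,
    and with a = 1 this reduces to the standard (f - z*) / (z_nad - z*)."""
    return [(fi - zi) / (1.0 + a * (ni - zi - 1.0))
            for fi, zi, ni in zip(f, z_ideal, z_nadir)]

f, z_star, z_nad = [5.0, 50.0], [0.0, 0.0], [10.0, 100.0]
print(normalize(f, z_star, z_nad, alpha(0, 100)))    # early: raw offsets dominate
print(normalize(f, z_star, z_nad, alpha(100, 100)))  # late: fully normalized
```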
3.5. Auction-Based Matching Mechanism
In the traditional replacement strategy of MOEA/Ds, as shown in Algorithm 3,
is usually set to a small value to maintain diversity, but this leads to insufficient optimization of some subproblems, decreasing the convergence efficiency. To address this issue, we propose a novel matching mechanism based on the auction algorithm that rematches the subproblems with the solutions to form better associations between them, promoting convergence efficiency.
Algorithm 5: AuctionBasedMatch(, ) |
|
The auction algorithm originally proposed by Bertsekas [
53] is a distributed algorithm that simulates the process of human auction activities. The main idea of this method is to introduce the concept of bidding: (1) each bidder bids on its preferred item, i.e., the one that brings it the maximum earning value; (2) the auctioneer assigns each item to the bidder that offers the highest bidding price and then updates the item's price. In the proposed ABM, we regard the weight vectors (subproblems) and the solutions as bidders and auction items, respectively, and the relevant symbols are given as follows:
: the collection of weight vectors (bidders), i.e., .
: the collection of solutions (auction items), i.e., .
: the value of solution
to the weight vector
(
). Since the weight vector prefers the solution with small scalarizing function value,
is calculated as the opposite of the scalarizing function value, i.e.,
: the price of solution (). The price is initialized to 0 and updated with the auction process.
: the bidding price of the weight vector on solution ().
: the assignment decision variables (). if solution is assigned to weight vector ; otherwise.
The relationship among the above symbols can be represented by the network shown in
Figure 2.
The pseudo-code and flowchart of ABM are presented in Algorithm 5 and
Figure 3, respectively. The ABM mechanism is mainly composed of two crucial stages: the bidding stage (Algorithm 5, lines 8–14) and the assignment stage (Algorithm 5, lines 15–19).
- (1)
Bidding stage (Algorithm 5, lines 8–14)
In the bidding stage, each weight vector that has not yet been assigned a solution selects its preferred solution from
and bids on it. Firstly, for the unassigned weight vector
, we calculate the earning value of each solution to it under current item prices. The earning value is
. Motivated by self-interest, each unassigned weight vector prefers the solution with better scalarizing function value, i.e.,
prefers
with the greatest value of
. Therefore,
selects the solution from
that offers it the maximum earning value as its preferred one, which can be expressed as
where
denotes the index of the solution
prefers. The corresponding earning value is denoted by
, i.e.,
Secondly, the unassigned weight vector
bids on its preferred solution
, and the bidding price is calculated as follows:
where
is a relaxation parameter and set to
herein.
represents the second-best earning value for the weight vector
, which is calculated as follows:
Through Equations (
13)–(
15), we can deduce that
implying that the bidding price on solution
is at least
higher than its current price
. After every unassigned weight vector expresses its preferred solution and bids on it through the above process, we proceed to the following assignment stage.
- (2)
Assignment stage (Algorithm 5, lines 15–20)
In the assignment stage, the auctioneer assigns each bidded solution to the weight vector with the highest bidding price on it and updates the item prices accordingly. Let
j be the index of a bidded solution, i.e.,
(
denotes the set of indexes of the bidded solutions); it is assigned to the weight vector
, which is expressed as follows:
where
denotes the collection of indexes of the bidders that bid on
in the bidding stage. As a result, the assignment decision variables are updated as follows:
Equation (
18) indicates that once solution
is assigned to weight vector
, it withdraws its previous assignment to other weight vectors. Finally, the highest bidding price for solution
is taken as its updated price, which can be written as
We iterate the auction process including the above two stages until each weight vector is assigned a solution (Algorithm 5, lines 4–20). It can be predicted from (
16) that, as the auction process iterates, the price of the solution will gradually increase, resulting in decreased earning values for the weight vectors. Referring to [
53], it can be proved that ABM must terminate after a finite number of iterations, i.e., every weight vector will be assigned a solution after a finite number of auction rounds.
The rationality of ABM is supported by the following derivation: after each round of auction, if the weight vector
is assigned its preferred solution
, the following inequality can be derived from (
12)–(
16) and (
19):
When the ABM algorithm terminates, each weight vector is assigned its preferred solution; thus, it can be further derived from (
20) that
Inequality (
21) indicates that, when the ABM algorithm terminates, the obtained assignment scheme is optimal for the following one-to-one assignment problem within a maximum error range of
:
In summary, the assignment scheme obtained by ABM maximizes the total earning value of all weight vectors as much as possible. In other words, through the employment of ABM, each weight vector is associated, in a competitive manner, with a solution that offers it a better scalarizing function value. Therefore, we reasonably believe that ABM can accelerate the convergence of the population. Meanwhile, the one-to-one assignment strategy avoids the presence of repeated solutions in the population, maintaining diversity. The superiority of ABM is verified in the experimental study.
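The two-stage auction process underlying ABM can be sketched compactly as a Gauss-Seidel forward auction in the style of Bertsekas [53]. The value matrix, the eps default, and the one-bidder-per-round simplification below are illustrative; in ABM, v[i][j] would be the negative of the scalarizing function value of solution j for weight vector i.

```python
import numpy as np

def auction_match(v, eps=0.01):
    """Sketch of auction-based matching: v[i][j] is the value of solution j
    to weight vector (bidder) i. Returns assign[i] = index of the solution
    assigned to bidder i. eps is the relaxation parameter.
    """
    v = np.asarray(v, dtype=float)
    n_bidders, n_items = v.shape
    price = np.zeros(n_items)
    assign = -np.ones(n_bidders, dtype=int)    # -1 marks an unassigned bidder
    owner = -np.ones(n_items, dtype=int)
    while (assign < 0).any():
        i = int(np.where(assign < 0)[0][0])    # one unassigned bidder per round
        earn = v[i] - price                    # earning values at current prices
        j = int(earn.argmax())                 # preferred solution
        second = np.partition(earn, -2)[-2] if n_items > 1 else earn[j]
        price[j] += earn[j] - second + eps     # bid raises the price by >= eps
        if owner[j] >= 0:                      # previous owner is outbid
            assign[owner[j]] = -1
        owner[j], assign[i] = i, j
    return assign

# Both bidders initially prefer solution 0; rising prices resolve the conflict
# in favor of the assignment with the larger total value.
print(auction_match([[10.0, 3.0], [9.0, 8.0]]))
```

Because every bid raises a price by at least eps, earning values shrink monotonically and the loop terminates after finitely many rounds, mirroring the termination argument cited for ABM.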
3.6. Weight Vector Adjustment
Adjusting weight vectors based on
during the evolution process is an effective way to achieve a more appropriate distribution, and, recently, some MOEA/Ds have adopted this strategy [
37,
45,
49]. However, studies have indicated that adjusting the weight vectors too early or too late may severely affect the optimization process by deteriorating the search [
19]. For this reason, in MOEA/D-ABM, the weight vector adjustment is only permitted at every 5% interval between 10% and 90% of the whole evolution process. Meanwhile, a measure called convergence activity, denoted as
, is defined to determine whether weight vectors need to be adjusted or not. It can be formulated as follows:
where
denotes the proportion of the subproblems that are improved in the
i-th generation;
t is the number of current generation;
represents the average of
in the latest
generations. When
is smaller than a given coefficient
, we believe that the population cannot achieve better solutions for the subproblems with current weight vectors. In this condition, the weight vectors need to be adjusted.
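The trigger-timing check can be sketched as follows: the proportion of improved subproblems is recorded each generation, averaged over a sliding window, and compared against a threshold. The window length and threshold values here are illustrative, not the parameter settings of the paper.

```python
from collections import deque

class ConvergenceActivity:
    """Sketch of the trigger-timing measure: the proportion of subproblems
    improved per generation, averaged over the latest `window` generations.
    window and theta are illustrative values."""

    def __init__(self, window=10, theta=0.05):
        self.history = deque(maxlen=window)   # keeps only the latest entries
        self.theta = theta

    def record(self, n_improved, n_subproblems):
        self.history.append(n_improved / n_subproblems)

    def should_adjust(self):
        if not self.history:
            return False
        activity = sum(self.history) / len(self.history)
        return activity < self.theta          # low activity -> adjust weights

ca = ConvergenceActivity(window=3, theta=0.05)
for n in (8, 2, 0):          # improvements over 100 subproblems per generation
    ca.record(n, 100)
print(ca.should_adjust())    # stalled convergence triggers the adjustment
```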
The detailed procedure of adjusting weight vectors is described in Algorithm 6. Firstly, we calculate the sparsity level of each individual in population
among
using (
7), and then remove the individual with the lowest sparsity level,
, and its associated weight vector,
. The deletion process will be repeated until
individuals in
are removed (Algorithm 6, lines 2–6). The reason for this operation is that the subproblem associated with a crowded individual is relatively redundant. After that, we calculate the sparsity level of each non-dominated individual in
among the population
, and select the one with the largest sparsity level,
, as the new individual to be added into
(Algorithm 6, lines 7–12). Meanwhile, since the optimal solution of Tchebycheff scalarizing function is the intersection point of vector
and normalized PF, the new weight vector associated with
is generated by using the Tchebycheff method:
The addition process will also be repeated until
individuals are added (lines 7–12). The reason for this operation is that the sparsest non-dominated solutions have great potential for exploring the PF comprehensively. Finally, the neighborhood of each weight vector is updated (line 13).
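Generating the weight vector associated with a newly added solution can be sketched as follows. The inverse-proportional form is an assumption based on the Tchebycheff property stated above (for the Tchebycheff subproblem, the weight whose direction passes through a point has components inversely proportional to that point's normalized objective offsets); the eps guard and names are illustrative.

```python
import numpy as np

def new_weight(f_norm, eps=1e-6):
    """Sketch of generating the weight vector for a newly added solution under
    the Tchebycheff method (assumed form): components are inversely
    proportional to the normalized objective values, so the resulting vector
    points through the solution on the normalized PF."""
    f = np.maximum(np.asarray(f_norm, dtype=float), eps)  # guard zero objectives
    inv = 1.0 / f
    return inv / inv.sum()

# A solution strong on the first objective gets a weight that emphasizes it.
print(new_weight([0.2, 0.8]))
```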
Algorithm 6: AdjustWeights(, , , ) |
|
3.7. Computational Complexity
The computational complexity is considered for one generation of the proposed MOEA/D-ABM, whose computational cost mainly comes from the MOEA/D (Algorithm 2, lines 5–15), the update (Algorithm 2, line 16), the ABM, and the weight vector adjustment (Algorithm 2, line 25). The complexity of MOEA/D is ; the complexity of the update is ; the complexity of the weight vector adjustment is ; for the ABM, the complexity of initialization (Algorithm 5, line 1) is , and the maximum complexity of each auction round (Algorithm 5, lines 4–20) is (N bidders are unassigned and N auction items are bidded). The number of auction rounds, denoted as R, is finite but not fixed, and is primarily influenced by the relaxation parameter , the number of bidders N, and the number of auction items ; thus, the complexity of ABM is . In summary, the maximum complexity of MOEA/D-ABM is .