Article

A Dynamic Tasking-Based Evolutionary Algorithm for Bi-Objective Feature Selection

by
Hang Xu
School of Mechanical, Electrical & Information Engineering, Putian University, Putian 351100, China
Mathematics 2024, 12(10), 1431; https://doi.org/10.3390/math12101431
Submission received: 30 March 2024 / Revised: 29 April 2024 / Accepted: 4 May 2024 / Published: 7 May 2024
(This article belongs to the Special Issue Mathematical Optimization and Decision Making)

Abstract

Feature selection in classification is a complex optimization problem that cannot be solved in polynomial time. Bi-objective feature selection, which aims to minimize both the number of selected features and the classification error, is challenging due to the conflict between the two objectives, and one of the most effective ways to tackle it is to use multi-objective evolutionary algorithms. However, very few of these have adopted an evolutionary multi-tasking framework, despite the implicit parallelism offered by their population-based search. In this paper, a dynamic multi-tasking-based multi-objective evolutionary algorithm (termed DTEA) is proposed for handling bi-objective feature selection in classification, suitable for datasets with both relatively low and relatively high feature dimensionality. The role and influence of multi-tasking on multi-objective evolutionary feature selection were studied, and a dynamic tasking mechanism is proposed to self-adaptively assign multiple evolutionary search tasks by intermittently analyzing the population behaviors. The efficacy of DTEA is tested on 20 classification datasets and compared with seven state-of-the-art evolutionary algorithms. A component contribution analysis was also conducted by comparing DTEA with its three variants. The empirical results show that the dynamic tasking mechanism works efficiently and enables DTEA to outperform the other algorithms on most datasets in terms of both optimization and classification.

1. Introduction

Feature selection is an essential technique in machine learning [1], and multi-objective feature selection, especially for high-dimensional datasets, has become a hot research topic in the field of optimization. For instance, bi-objective feature selection in classification [2] normally seeks both the smallest number of selected features and the minimum classification error rate. Because these two objectives can conflict, a compromise and balance between them must be found; one of the most effective ways is to use multi-objective evolutionary algorithms (MOEAs) [3,4,5,6,7,8]. Compared with traditional search mechanisms, an evolutionary algorithm (EA) [9], like many other metaheuristics [10], needs no specific domain knowledge or problem presumptions. Moreover, its population-based search characteristic is naturally suitable for finding various nondominated feasible solutions with an environmental selection trade-off [11].
Nowadays, after long-term development, MOEAs have generated a variety of mainstream frameworks [12]. From the point of view of environmental selection, there are MOEAs based on dominance relationships [13,14,15,16,17,18], decomposition strategies [19,20,21,22], performance indicators [23,24,25,26], and distance measurements [27,28,29,30]. In addition, from the perspective of hybrid applications, there are also MOEAs that are combined with surrogate models [31,32,33], cooperative coevolutionary mechanisms [34,35,36], and evolutionary multi-tasking frameworks [37,38,39], which can solve multiple tasks simultaneously via knowledge transfer. Finally, from the viewpoint of addressing optimization problems, there are MOEAs used for high-dimensional objective space [40,41,42,43,44,45], complex Pareto fronts [46,47,48,49], multi-modal environments [50,51,52,53], and large-scale decision variables [54,55,56].
There are numerous other kinds of excellent MOEAs [57,58,59,60] that have been proposed around the world, many of which are used for real-world applications, such as intrusion detection in networks [61], efficient sensing in wireless sensor networks [62], control of building systems [63], menu planning in schools [64] and control of hybrid electric vehicle charging systems [65].
Recently, large-scale multi-objective optimization problems (LSMOPs) [66] have attracted a lot of attention, which often challenge MOEAs to their limits due to the tremendous difficulty of searching the decision space. Some MOEAs adopt multi-tasking frameworks to solve LSMOPs, such as the multi-variation multi-factorial evolutionary algorithm [67] that conducts a search on both the original space of the LSMOP and multiple simplified spaces constructed in a multi-variation manner concurrently; the evolutionary multitasking algorithm [68] that self-adaptively adjusts the search range of each individual by estimating the population evolving trend; and the multi-objective multitasking algorithm [69] with subspace distribution alignment and decision variable transfer strategies that can promote positive information transfer. Such methods have shown some success in multi-tasking when addressing LSMOPs.
Moreover, the high-dimensional feature selection problem [70] (i.e., also an LSMOP) is even more challenging to address due to the sparse and discrete search space, not to mention the complicated relationship or interactions between real-world features in a dataset. It should also be noted that the term “large-scale decision space” in the field of LSMOPs actually means high-dimensional features in this paper, because the decision or search space of a feature selection problem is precisely the feature space. However, to the best of our knowledge, there are still very few studies that have focused on applying an evolutionary multi-tasking framework to the high-dimensional multi-objective feature selection problem, in spite of the promising combination of learning techniques and evolutionary algorithms [71]. Therefore, in this work, we focus on studying how a dynamic evolutionary multi-tasking framework behaves in bi-objective feature selection optimization, whether in high-dimensional or low-dimensional datasets, with an attempt to explore a universal mechanism for self-adaptively distributing multiple evolutionary search tasks within a population. Consequently, a dynamic tasking-based evolutionary algorithm (termed DTEA) is proposed, with its major contributions given as follows:
  • For the first time, a dynamic multi-tasking framework is proposed to self-adaptively assign evolutionary search tasks to different parts of a population for tackling bi-objective feature selection, especially in datasets with relatively higher dimensionality of features. Unlike other approaches, the proposed multi-tasking system aims at adjusting the search behaviors on tasks to fit the real-time evolutionary demands, without any reformation of the original optimization problem, hence reducing the risk of negative transfer.
  • In the early evolutionary stage, the algorithm decides whether or not to add a backward search task in order to boost the convergence of the whole population, with part of the solutions reassigned and redeployed. Then, with the help of cross-cultural genetic transfer, the backward-task and main-task solutions mutually pull each other, thereby approaching each other in the objective space and finally merging into a single task again.
  • In the later evolutionary stage, the algorithm decides whether to add a region of interest (ROI) search task, provided that the convergence has come to maturity and the ROI can be estimated. In that case, the computational resources are rearranged to concentrate the search on the ROI area by reassigning part of the solutions to the ROI task, which is intermittently updated every 10 generations according to the current nondominated front of the main task.
The remainder of this paper is organized as follows: First, background knowledge, including the feature selection problem to be solved and related work, is introduced in Section 2. Then, the general framework and the essential components of the proposed DTEA are illustrated in Section 3. Following this, the experimental setups are described in Section 4, while the empirical studies undertaken are presented in Section 5. Finally, the conclusions and a consideration of future work are provided in Section 6.

2. Background

2.1. Bi-Objective Feature Selection Problem

The multi-objective optimization problem (MOP) [72] with a total of M objectives and D decision variables is normally defined as follows:
minimize F(x) = (f1(x), f2(x), ..., fM(x))^T,  subject to x ∈ Ω,   (1)

where x = (x1, x2, ..., xD) is a decision vector in the D-dimensional decision space Ω, F(x) is the objective vector of x in the M-dimensional objective space, and fi(x) is the ith objective function of x.
With respect to the bi-objective feature selection problem in this paper, two objectives are minimized: the number of selected features and the resultant classification error of the selected features, both of which can be transformed into ratios between 0 and 1. Drawn from the D-dimensional feature space, a solution x is a discrete vector containing only 0s and 1s, where xi = 1 means selecting the ith feature and xi = 0 means not selecting it. Thereby, the first objective function f1(x) in (1) can be further specified as:

f1(x) = ( ∑_{i=1}^{D} xi ) / D,   (2)

indicating that, if a decision vector x selects k features, there will be k elements of x with xi = 1 and all the others (D − k elements) with xi = 0, resulting in an objective value of k/D in the f1 direction.
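As a quick numerical check of Equation (2), the first objective can be computed directly from a binary selection vector (an illustrative snippet, not code from the paper):

```python
def f1(x):
    """First objective: the ratio of selected features, f1(x) = sum(x)/D."""
    return sum(x) / len(x)

# A D = 10 solution selecting k = 3 features (indices 0, 3 and 6):
x = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
print(f1(x))  # 0.3, i.e., k/D
```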
Moreover, the second objective function f2(x) in (1) denotes the classification error ratio obtained with the selected features and a preset classifier. Here, we adopt the so-called wrapper-based approach [4,73] to evolutionary feature selection, which directly uses a classifier to evaluate the classification error in real time during evolution. Theoretically, the classification error can be reduced by selecting more valid features, but the computational cost will also be high if too many features are selected.
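A wrapper-based second objective can be sketched as follows, using a leave-one-out 1-nearest-neighbor classifier purely for illustration (the choice of classifier and validation scheme here is an assumption, not the paper's specific setup):

```python
def f2(x, X, y):
    """Wrapper-style error: leave-one-out 1-NN error on the selected features.
    x: binary selection mask; X: list of samples; y: class labels."""
    feats = [i for i, b in enumerate(x) if b == 1]
    if not feats:
        return 1.0  # selecting nothing: worst possible error
    errors = 0
    for i, (xi, yi) in enumerate(zip(X, y)):
        # nearest neighbour among all other samples, on selected features only
        nearest = min((j for j in range(len(X)) if j != i),
                      key=lambda j: sum((X[j][f] - xi[f]) ** 2 for f in feats))
        errors += (y[nearest] != yi)
    return errors / len(X)

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.1], [0.1, 1.2], [1.0, 4.9], [1.1, 0.8]]
y = [0, 0, 1, 1]
print(f2([1, 0], X, y))  # keeping only the informative feature -> 0.0
```

Selecting only the informative feature yields zero error here, while selecting only the noisy one does not, which is exactly the trade-off the two objectives capture.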

2.2. Related Works

As previously introduced, although there are some promising works [67,68,69,74,75] applying evolutionary multi-tasking frameworks to solve LSMOPs or high-dimensional single-objective feature selection problems, almost none has focused on the high-dimensional multi-objective feature selection case. The multi-tasking MOEAs described in [67,68,69,74,75] are only applicable to either continuous optimization environments or the single-objective case. Recently, a multi-task decomposition-based evolutionary algorithm (MTDEA) [76] was proposed specifically for bi-objective feature selection, based on a combination of decomposition approaches and multi-task mechanisms. The most fundamental difference between MTDEA and the DTEA proposed in this paper is that MTDEA focuses on adjusting the global ideal points of decomposition-based approaches, while DTEA focuses on the dynamic and adaptive control of multi-task mechanisms during evolution. Moreover, DTEA adopts entirely different environmental selection methods and evolutionary search tasks from MTDEA, aiming to better balance diversity and convergence when searching for Pareto-optimal solutions.
There are also many MOEAs [5,77,78] that do not adopt multi-tasking but are specifically designed for the bi-objective feature selection problem. For instance, an MOEA based on a duplication analysis mechanism (DAEA) was proposed in [79], in which duplicated solutions in both the objective and decision spaces are adaptively filtered before environmental selection. DAEA handles relatively low-dimensional datasets rather well, but its ability has not yet been demonstrated on much higher-dimensional datasets. In addition, an MOEA specially designed for sparse subset selection problems (SparseEA) was proposed in [80] to accelerate convergence and search speed in large-scale decision spaces; it adaptively initializes and reproduces the population according to the fitness contribution of each feature, evaluated at the beginning of the evolution. SparseEA handles relatively high-dimensional datasets quite well, but it may mature early and fall into local optima on much lower-dimensional datasets. Furthermore, an MOEA based on segmented initialization and modified reproduction methods (SIOM) was proposed in [81] as a universal plug-in unit enabling traditional MOEAs to adapt to bi-objective feature selection scenarios. Like DAEA, the search ability of SIOM has not yet been fully verified on much higher-dimensional datasets.

3. Proposed Algorithm

In this section, we first elaborate on the general framework of the proposed algorithm and then discuss the reproduction and selection processes. Finally, the details of the two essential components, i.e., the backward and ROI search tasks, are illustrated.

3.1. General Framework

The general principles of the proposed dynamic multi-tasking mechanism can be roughly divided into two stages: the early evolutionary stage where main and backward search tasks coexist, shown in Figure 1a, and the later evolutionary stage where the main and ROI search tasks coexist, shown in Figure 1b.
In Figure 1, at the beginning, an initial population is generated and entirely assigned to the main task. After 10 generations (an empirical setting), the convergence state of the population is analyzed to decide whether or not to reassign part of the solutions to the backward search task and to redeploy those solutions at the front end of the f1 direction (using the algorithms further detailed in Section 3.3). As shown in Figure 1a, the squares denoting the backward-task solutions lie in front of the dots denoting the main-task solutions, thereby simulating backward and forward searches towards each other. Moreover, owing to so-called cross-cultural information transfer [37], the main and backward tasks mutually approach at an increasingly faster speed and finally merge into a single main task again. In this way, the convergence speed is greatly increased, with the help of the backward task searching feature subsets of much smaller sizes.
After merging, when the convergence state of the population comes to maturity, part of the solutions will be reassigned to the ROI search task to explore only a restricted area. As shown in Figure 1b, the so-called ROI area is estimated by the current nondominated front of the main-task solutions (i.e., from 1/D to 3/D in the f1 direction), but enlarged by 1/D in the f1 direction (i.e., extending the right boundary of the ROI from 3/D to 4/D). Moreover, the ROI area is updated intermittently every 10 generations as the nondominated front of the main-task solutions changes. Thus, by concentrating the computational resources on the most promising ROI area, the ROI task is intended to help find more nondominated solutions, reserved for the final classification test, in a more efficient way.
The general framework of the proposed DTEA is shown in Algorithm 1, with necessary annotations following directly after the pseudocode. More detailed explanations are given as follows: in Line 2, the cross-cultural mating rate is a parameter that controls the random possibility of genetic exchanges between solutions of two different tasks (later used in Line 8), as described in [37].
In Line 4, there is only a main search task at the beginning, defined as type 1 in the task pool. In Line 6, the archived population is used for comparison with the current one so as to track the convergence rate over the past 10 generations. Moreover, the Reproduce, Select, AddBack and AddROI methods in Lines 8, 9, 14 and 16 are detailed in Algorithms 2, 3, 4 and 6, respectively. It should also be noted that all the tasks are conducted within a single population and cooperate through cross-cultural genetic exchanges among solutions, keeping the population size always the same.

3.2. Reproduction and Selection

Algorithm 2 illustrates the reproduction process that not only outputs new offspring but also plays a key role in the cross-cultural information transfer between different tasks. In fact, the multi-tasking reproduction mechanism used in DTEA is similar to that in MO-MFEA [37], except for adopting the modified crossover mechanism in DAEA [79] that reduces invalid crossover to some extent. Moreover, the cross-cultural mating rate is preset to 0.5 in DTEA (Line 2 of Algorithm 1) in order to obtain a balanced information transfer ratio, with no attempt to find its optimal value.
Algorithm 1 DTEA(N, D)
  • Input: population size N, total feature number D;
  • Output: final population Pop;
1: Pop ← randomly sample N initial solutions from the D-dimensional feature space;
2: R = 0.5; // set cross-cultural mating rate
3: Gen = 0; // set generation count
4: T = (1, 1, ..., 1); // create N-size task pool
5: ROI = ∅; // set an empty ROI object
6: Arc = Pop; // archive old population
7: while termination criterion not met do
8:   (Pop*, T*) = Reproduce(Pop, T, R);
9:   Pop = Select(Pop, Pop*, T, T*, ROI);
10:  Gen = Gen + 1;
11:  if Gen % 10 == 0 then
12:    if only one task type in T then
13:      if Gen == 10 then
14:        (Pop, T) = AddBack(Pop, T, Arc);
15:      else if ROI == ∅ then
16:        (ROI, T) = AddROI(Pop, T, Arc);
17:      end if
18:    else
19:      if ROI == ∅ then
20:        merge & update T; // see Section 3.3
21:      else
22:        update ROI; // see Section 3.4
23:      end if
24:    end if
25:    Arc = Pop;
26:  end if
27: end while
Algorithm 2 Reproduce(Pop, T, R)
  • Input: current population Pop, task pool T, cross-cultural mating rate R;
  • Output: offspring set Pop*, offspring task pool T*;
1: Par ← randomly choose N pairs of parents from Pop by tournament selection in terms of the associated nondominated fronts and crowding distances;
2: for i = 1, 2, ..., N do
3:   a, b ← get parent indexes in Par(i);
4:   if T(a) == T(b) then
5:     Pop*(i) ← get offspring by crossover & mutation;
6:     T*(i) = T(a); // assign the same task as parents
7:   else
8:     if rand < R then
9:       Pop*(i) ← repeat Line 5;
10:      T*(i) ← randomly assign T(a) or T(b);
11:    else
12:      Pop*(i) ← only do mutation to Pop(a);
13:      T*(i) = T(a);
14:    end if
15:  end if
16: end for
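The mating rules of Algorithm 2 can be sketched as follows. This is a deliberately simplified illustration: uniform crossover and bit-flip mutation stand in for the DAEA-based operators, and random pairing replaces tournament selection; only the task-assignment logic follows the pseudocode.

```python
import random

def reproduce(pop, tasks, R, mutate_rate=0.05):
    """One offspring per parent pair, following the task rules of Algorithm 2."""
    offspring, off_tasks = [], []
    N = len(pop)
    for _ in range(N):
        a, b = random.sample(range(N), 2)  # stand-in for tournament selection
        pa, pb = pop[a], pop[b]
        if tasks[a] == tasks[b] or random.random() < R:
            # same task, or cross-cultural mating permitted: crossover first
            child = [x if random.random() < 0.5 else y for x, y in zip(pa, pb)]
            task = tasks[a] if tasks[a] == tasks[b] else random.choice([tasks[a], tasks[b]])
        else:
            child, task = list(pa), tasks[a]  # otherwise: mutation only
        child = [1 - g if random.random() < mutate_rate else g for g in child]
        offspring.append(child)
        off_tasks.append(task)
    return offspring, off_tasks
```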
Algorithm 3 demonstrates the environmental selection process that individually and successively selects the fittest solutions for each evolutionary search task. The multi-tasking selection mechanism of DTEA is generally based on that of MO-MFEA [37], but conducts totally different tasks and adopts a matched duplication removal mechanism. As can be seen from Lines 4 and 6, the main and backward tasks only remove the duplicates within themselves, while the ROI task further removes its duplicates within the main task. In this way, the main and backward tasks can evolve more independently so as to stimulate their respective convergence potential, while the ROI task aims at supplying more diversity for the nondominated front obtained by the main task. As for truncation, the main and backward tasks use the same classic nondominated sorting and crowding distance methods [13], while the ROI task uses another selection method, which is explained later in Section 3.4.
Algorithm 3 Select(Pop, Pop*, T, T*, ROI)
  • Input: current population Pop, offspring set Pop*, task pool T, offspring task pool T*, region of interest ROI;
  • Output: next-generation population Pop;
1: ψ = Pop ∪ Pop*;
2: φ = T ∪ T*;
3: ψ1, ψ2, ψ3 ← respectively, get solutions from ψ corresponding to φ = 1, 2, 3; // 1-main, 2-backward, 3-ROI tasks
4: ψ1, ψ2, ψ3 ← respectively, remove the duplicated decision-vector solutions of their own;
5: ψ1, ψ2 ← individually truncate ψ1, ψ2 by nondominated sorting and crowding distance;
6: ψ3 ← remove solutions sharing decision vectors with ψ1;
7: ψ3 = SelectROI(ψ3, ROI); // see Section 3.4
8: Pop = ψ1 ∪ ψ2 ∪ ψ3;
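Lines 1–6 of Algorithm 3, i.e., grouping by task and the matched duplication removal, can be sketched as below; the nondominated-sorting truncation of Line 5 and the ROI selection of Line 7 are omitted here:

```python
def dedupe(solutions):
    """Keep the first occurrence of each distinct decision vector."""
    seen, out = set(), []
    for s in solutions:
        key = tuple(s)
        if key not in seen:
            seen.add(key)
            out.append(s)
    return out

def select_sketch(pool, tasks):
    """Split the combined parent+offspring pool by task, then apply the
    matched duplicate removal of Algorithm 3 (truncation omitted)."""
    groups = {1: [], 2: [], 3: []}
    for s, t in zip(pool, tasks):
        groups[t].append(s)
    psi1, psi2 = dedupe(groups[1]), dedupe(groups[2])
    # the ROI group additionally drops vectors already kept by the main task
    main_keys = {tuple(s) for s in psi1}
    psi3 = [s for s in dedupe(groups[3]) if tuple(s) not in main_keys]
    return psi1, psi2, psi3
```

This mirrors the asymmetry in the pseudocode: the main and backward tasks only deduplicate internally, while the ROI task also defers to the main task's solutions.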

3.3. Backward Search Task

The utilization of the backward search task consists of two steps: judging whether to add the backward task (called by Line 14 of Algorithm 1 and detailed in Algorithm 4), and judging whether to merge it into the main task (called by Line 20 of Algorithm 1 and detailed in Figure 2).
In Algorithm 4, the objective values of the current and archived populations in the f1 direction are used to estimate the convergence rate and to judge whether to add the backward task (Lines 1 and 2). If it is added, the scale of the backward task is set according to the objective values in the f2 direction (Lines 3–8). Here, the scale ranges between 0.25N and 0.5N and is negatively correlated with the convergence speed in the f2 direction. As shown in Line 9, the solutions with the worst nondominated fronts and crowding distances are allocated from the main task to the backward task and are redeployed by Algorithm 5.
Algorithm 4 AddBack(Pop, T, Arc)
  • Input: current population Pop, task pool T, archived population Arc;
  • Output: redeployed population Pop, reassigned tasks T;
1: F1^1, F1^2 ← get f1 objective values from Pop, Arc, respectively;
2: if min(F1^1) > mean(F1^2)/2 then
3:   ψ1, ψ2 ← respectively, get nondominated solutions from Pop and Arc;
4:   F2^1, F2^2 ← respectively, get unique f2 objective values from ψ1 and ψ2;
5:   n1 ← number of duplicates between F2^1 and F2^2;
6:   r1 = n1/size(F2^1); // f2 duplicate rate
7:   r2 = sigmoid(r1 · 10)/2; // backward task ratio
8:   n2 = floor(r2 · N); // backward task scale
9:   index ← sort solutions of Pop in descending order with nondominated fronts as the primary criterion and crowding distances as the secondary criterion;
10:  T(index(1, 2, ..., n2)) = 2; // reassign tasks
11:  Pop(index(1, 2, ..., n2)) = Redeploy(n2, D, 1/D);
12: end if
Algorithm 5 Redeploy(K, D, H)
  • Input: redeployed solution number K, total feature number D, distribution ratio H;
  • Output: redeployed solution set S;
1: for i = 1, 2, ..., K do
2:   for j = 1, 2, ..., D do
3:     if rand < H then
4:       x_j^i = 1; // select the jth feature
5:     else
6:       x_j^i = 0; // do not select the jth feature
7:     end if
8:   end for
9:   S(i) = x^i;
10: end for
In Algorithm 5, the required number of solutions are redeployed in the objective space by re-initializing individuals with a smaller number of selected features. As reported in [81], the distribution of an initial population in the bi-objective feature selection case can be roughly controlled by the distribution ratio used in Line 3. In other words, the distribution ratio reflects the mathematical expectation of the number of selected features in each individual. Here, it is set to 1/D, meaning that each individual is expected to select around one feature, or slightly more, so that the backward-task solutions are redeployed at the front end of the objective space to simulate a backward search towards the back end.
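A minimal sketch of Algorithm 5, illustrating that a distribution ratio of H = 1/D yields roughly one selected feature per redeployed solution in expectation:

```python
import random

def redeploy(K, D, H):
    """Algorithm 5: K re-initialized solutions; each of the D genes is set
    to 1 with probability H (the distribution ratio)."""
    return [[1 if random.random() < H else 0 for _ in range(D)]
            for _ in range(K)]

random.seed(0)
D = 500
S = redeploy(200, D, 1.0 / D)
avg = sum(sum(s) for s in S) / len(S)  # expected selected features per solution ≈ 1
```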
The final stage of the backward search task is to merge it into the main task, as determined by the mechanism shown in Figure 2, where the squares denote the objective vectors of the main task and the dots denote those of the backward task. First, let the rectangle A1 (with two edges E1 and E2 shown in Figure 2) be made up of the maximum and minimum objective values in both the f1 and f2 directions (i.e., the four corners of a rectangle) over all the backward-task solutions, and likewise the rectangle A2 (with two edges E3 and E4) over all the main-task solutions. By multiplying the two edges of A1 or A2 (i.e., E1 · E2 or E3 · E4), we obtain the area of either A1 or A2. Then, define another rectangle A3 as the overlapping area of the two dashed rectangles A1 and A2 in Figure 2. The upper-left and lower-right corners of A3 can easily be obtained as the two intersection points of A1 and A2, i.e., the intermediate corners between A1 and A2 in both the f1 and f2 directions, and the area of A3 follows directly from these two corners. Therefore, if the area of A3 is close to that of either A1 or A2, the backward and main tasks are approaching each other closely in the objective space. Moreover, once A3 has become as large as either A1 or A2, the multiple evolutionary tasks are merged by updating the task pool in Line 20 of Algorithm 1, with all the elements in T reset to 1 (i.e., all turning into the main task).
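The merging test can be sketched as follows. The pseudocode merges when A3 has become as large as A1 or A2; the tolerance parameter below is an illustrative assumption to make the comparison robust in floating point.

```python
def bounding_box(points):
    """Axis-aligned bounding rectangle of a set of (f1, f2) objective vectors."""
    f1s, f2s = zip(*points)
    return min(f1s), max(f1s), min(f2s), max(f2s)

def merge_ready(back_pts, main_pts, tol=0.95):
    """True when the overlap rectangle A3 is nearly as large as A1 or A2."""
    x1a, x2a, y1a, y2a = bounding_box(back_pts)   # A1: backward-task box
    x1b, x2b, y1b, y2b = bounding_box(main_pts)   # A2: main-task box
    w = max(0.0, min(x2a, x2b) - max(x1a, x1b))   # overlap width
    h = max(0.0, min(y2a, y2b) - max(y1a, y1b))   # overlap height
    a1 = (x2a - x1a) * (y2a - y1a)
    a2 = (x2b - x1b) * (y2b - y1b)
    a3 = w * h
    return a3 >= tol * min(a1, a2)
```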

3.4. ROI Search Task

The utilization of the ROI search task consists of three parts: first, judging whether to add and construct it (called by Line 16 of Algorithm 1, and then detailed in Algorithm 6 and Figure 3); second, intermittent updating of the R O I object (called by Line 22 of Algorithm 1 and detailed in Figure 4); third, the ROI selection method (called by Line 7 in Algorithm 3 and detailed in Algorithm 7).
Algorithm 6 AddROI(Pop, T, Arc)
  • Input: current population Pop, task pool T, archived population Arc;
  • Output: region of interest ROI, reassigned tasks T;
1: ψ1, ψ2 ← respectively, get nondominated solutions from Pop, Arc by nondominated sorting;
2: if every solution of ψ1 has a duplicate decision vector in ψ2 then
3:   n1 = N − size(ψ1);
4:   n2 = min(n1, floor(N/2)); // ROI task scale
5:   index ← sort solutions of Pop in descending order with nondominated fronts as the primary criterion and crowding distances as the secondary criterion;
6:   T(index(1, 2, ..., n2)) = 3; // reassign tasks
7:   φ ← sum the element values of each decision vector in ψ1; // the number of selected features in each nondominated solution
8:   n3 = max(φ); // the number of selected features in the rightmost nondominated solution
9:   first row of ROI ← 1, 2, ..., n3, n3 + 1;
10:  second row of ROI ← the number of needed solutions corresponding to each element of the first row, calculated as shown in Figure 3;
11: end if
Algorithm 7 SelectROI(ψ3, ROI)
  • Input: candidate solutions ψ3, ROI object ROI;
  • Output: reserved solutions ψ3;
1: L ← length of the first row in ROI;
2: S = ∅; // used to store selected indexes
3: for i = L−1, ..., 2, 1, L do
4:   if ROI(2, i) > 0 then
5:     index1 ← get indexes of unselected candidates;
6:     index2 ← sort solutions of ψ3(index1) in ascending order with abs(f1 − ROI(1, i)/D) as the primary criterion and f2 as the secondary criterion;
7:     S = S ∪ index1(index2(1, 2, ..., ROI(2, i)));
8:   end if
9: end for
10: ψ3 = ψ3(S); // get final reserved solutions
First, in Algorithm 6, we construct a two-row ROI object to help conduct the ROI search task; the adding criterion is shown in Lines 1 and 2. Specifically, the criterion requires that each nondominated solution of the current population has a duplicate decision vector in the archived population from 10 generations ago, indicating that the convergence has essentially come to maturity. In Lines 3 and 4, the ROI task scale is set according to the number of solutions that are not nondominated, and is restricted to under half of the population size so that the main-task solutions still hold the majority. Similar to the backward task, only the solutions with the worst nondominated fronts and crowding distances are reassigned to the ROI search task (Line 6). Moreover, the construction of the ROI object is shown in Lines 7–10 and further illustrated in Figure 3.
Figure 3 demonstrates a topological example of constructing an ROI object with the ROI task scale set to 9 and the f1 value of the rightmost nondominated solution set to 3/D. Hence, the ROI search boundary is set to 4/D (i.e., 1/D larger than the nondominated front boundary) for extensive exploration, and the first row of ROI contains all the possible feature subset sizes within that boundary (i.e., 1, 2, 3, 4). In fact, the ROI object can be treated as a set of ROI subproblems, where each subproblem consists of a number of selected features and a number of needed solutions: the former denotes the selection bias, and the latter denotes how many solutions should be selected. The construction order of the needed solutions is marked inside the circles, proceeding from right to left, layer by layer, but keeping only one at the right end. By counting the circles hanging on each dashed vertical line (each associated with a number of selected features), we obtain the corresponding number of needed solutions for each ROI subproblem (i.e., the second row of ROI).
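One plausible reading of this layer-by-layer construction can be sketched as follows (an interpretation for illustration, not the authors' exact code): one solution is reserved for the extended boundary column, and the remainder are dealt one per column from right to left over the inner columns, wrapping around until the task scale is exhausted.

```python
def build_roi(scale, n3):
    """Construct the two-row ROI object: one needed solution reserved for the
    extended boundary column (n3 + 1), the rest dealt layer by layer from
    right to left over columns 1..n3."""
    sizes = list(range(1, n3 + 2))            # first row: 1, 2, ..., n3, n3+1
    needed = [0] * len(sizes)
    needed[-1] = 1                            # only one at the right end
    remaining = scale - 1
    col = n3 - 1                              # index of column n3
    while remaining > 0:
        needed[col] += 1
        remaining -= 1
        col = col - 1 if col > 0 else n3 - 1  # wrap to the rightmost inner column
    return [sizes, needed]

print(build_roi(9, 3))  # [[1, 2, 3, 4], [2, 3, 3, 1]]
```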
After construction, the scale of the ROI task is fixed, but the structure of the ROI object is intermittently updated every 10 generations according to changes in the current nondominated main-task solutions. Figure 4 illustrates a topological example of updating the ROI object inherited from Figure 3, where the ROI task scale is still 9 but the f1 value of the rightmost nondominated solution has changed to 4/D. Hence, the ROI search boundary is extended to 5/D, and the first row of ROI is adjusted to 1, 2, 3, 4, 5. The construction order in Figure 4 follows a similar principle to that in Figure 3, except that it additionally takes the distribution of the nondominated main-task solutions into account. Specifically, those nondominated main-task solutions can be regarded as pre-constructed needed solutions, and the needed solutions are then reconstructed on that basis, so that the overall distribution of nondominated solutions (from both the main and ROI tasks) across the feature subset sizes becomes more uniform.
With the construction or update of the ROI object in place, the ROI selection method (i.e., the final part of the ROI task utilization) is illustrated in Algorithm 7 (first called in Line 7 of Algorithm 3). The ROI selection can be seen as a biased, sequential subproblem selection process, with the selection order shown in Line 3 (i.e., from the penultimate column to the first, plus the last), where each column of ROI is treated as a subproblem. This means that the search starts from the most promising area (i.e., around the solution with the smallest f2 value) and ends at the right boundary of the ROI area for extensive exploration. Moreover, the selection bias of the ROI search task is shown in Lines 5 and 6, where the primary criterion concerns the number of selected features (f1 values) and the secondary one concerns the classification error (f2 values). In this way, the computational resources are focused on obtaining more diverse nondominated solutions, uniformly distributed across the feature subset sizes in the ROI search area.
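Algorithm 7's biased, sequential selection can be sketched as follows, with candidates represented as (solution, f1, f2) tuples (a representation chosen here purely for illustration):

```python
def select_roi(cands, roi, D):
    """Algorithm 7 sketch: pick roi[1][i] candidates per subproblem, visiting
    columns from the penultimate down to the first, then the last."""
    sizes, needed = roi
    L = len(sizes)
    order = list(range(L - 2, -1, -1)) + [L - 1]
    chosen, used = [], set()
    for i in order:
        if needed[i] <= 0:
            continue
        pool = [j for j in range(len(cands)) if j not in used]
        # primary: closeness of f1 to the subproblem target; secondary: f2
        pool.sort(key=lambda j: (abs(cands[j][1] - sizes[i] / D), cands[j][2]))
        for j in pool[:needed[i]]:
            used.add(j)
            chosen.append(cands[j][0])
    return chosen
```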

4. Experimental Setups

4.1. Test Datasets and Comparison Algorithms

In the experiment, a total of 20 classification datasets are tested, all of which come from real-world data and can be obtained from public databases such as the UCI machine learning repository [82]. Table 1 provides the attributes of each test instance, including the numbers of features, samples and classes, while the population size and the termination criterion (i.e., the number of objective function evaluations) used on each dataset are denoted by N and T, respectively. The value of T is set according to the convergence difficulty of each dataset, normally increasing as the total number of features rises, while N is set to 50 for some very small datasets and fixed at 100 for all the others (including the high-dimensional ones) in order to test the search efficiency of the comparison algorithms. Overall, the total feature number ranges from 13 to 10,509 in ascending order, with two or more classes, while the preset population sizes and termination criteria also increase progressively from low-dimensional to high-dimensional datasets.
For comparison, seven state-of-the-art MOEAs are run under the same experimental conditions as the proposed DTEA, including SparseEA [80], DAEA [79], GFMMOEA [49], MOEAHD [47], ARMOEA [83], NSGAII [13] and HypE [23]. Among them, SparseEA and DAEA are two recently published MOEAs specifically designed for bi-objective feature selection, especially for high-dimensional datasets. GFMMOEA is a nondominated sorting-based algorithm that trains a generalized simplex model to dynamically estimate the shape of nondominated fronts. MOEAHD is an improved decomposition-based MOEA that hierarchically adjusts the weight vectors to better tackle complex Pareto fronts. ARMOEA uses an enhanced inverted generational distance indicator to generate a set of reference points that adaptively guides the evolutionary directions. NSGAII and HypE are two of the most classic MOEAs, based, respectively, on nondominated sorting and performance indicators. In addition, three variants of DTEA are also compared to determine the component contributions of the proposed dynamic multi-tasking framework. They are, respectively, the variant without an ROI search task (named DTEA_Back), the variant without a backward search task (named DTEA_ROI), and the variant with only a main search task (named DTEA_Main).

4.2. Performance Indicators and Parameter Settings

Since no single indicator can provide a comprehensive evaluation [84,85], multiple indicators are used in this paper to evaluate algorithm performance, with hypervolume (HV) [86] as the main indicator and the inverted generational distance (IGD) [87] as a supplement. HV and IGD are two classic and widely used metrics in evolutionary multi-objective optimization; details can be found in [26,28,47,79,81]. Moreover, specifically for measuring feature selection performance, two further metrics, the minimum classification error (MCE) and the selected feature number (SFN), are adopted, as also used in [79]. MCE is the minimum classification error obtained over all the solutions, while SFN is the corresponding number of selected features. Generally speaking, smaller IGD, MCE and SFN values represent better performance, whereas larger HV values are better. It is worth noting that the Wilcoxon test with a significance level of 5% is also applied to check the significance of the performance differences.
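For intuition, the two optimization metrics can be sketched for the bi-objective minimization case used here (a minimal illustration, not the PlatEMO implementation; both function names are ours):

```python
import math

def hypervolume_2d(points, ref=(1.0, 1.0)):
    """HV of a 2-D minimization front: the area dominated by the front
    and bounded by the reference point (fixed at (1, 1) in this paper).
    Larger HV is better."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:          # sweep left to right in ascending f1
        if f2 < best_f2:        # only nondominated points add new area
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

def igd(front, reference):
    """IGD: mean distance from each sampled reference point to its
    nearest obtained solution. Smaller IGD is better."""
    return sum(min(math.dist(r, p) for p in front)
               for r in reference) / len(reference)
```

This also makes the asymmetry concrete: HV depends only on the fixed reference point, while IGD depends on how the reference set is sampled along the front, which is one reason IGD serves here only as a supplement.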
The parameter settings of each comparison algorithm are consistent with its own literature. If not specifically stated there, the reproduction process uses the same traditional one-point crossover and bit-wise mutation methods. All the algorithms are coded in MATLAB, conducted on the open-source platform PlatEMO [88], and run 30 times independently with a series of reproducible random seeds. As for classification, each dataset is randomly divided into training and test subsets in a proportion of approximately 7/3, following the stratified split process in [79]. Moreover, a KNN (K = 5) classification model is used on the training set to measure classification performance during the evolutionary feature selection process, along with 10-fold cross-validation to avoid feature selection bias [89,90].
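The evaluation protocol can be sketched as follows, using scikit-learn for brevity (an illustrative assumption only; the actual experiments run in MATLAB on PlatEMO, and the function name `subset_error` is ours):

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def subset_error(X, y, mask, seed=0):
    """Classification error (the f2 objective) of one feature subset:
    stratified ~7/3 train/test split, then KNN (K = 5) evaluated with
    10-fold cross-validation on the training set only, so the held-out
    test set never influences the evolutionary search."""
    X_sub = X[:, np.asarray(mask, dtype=bool)]   # keep selected features
    X_tr, _, y_tr, _ = train_test_split(
        X_sub, y, test_size=0.3, stratify=y, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=5)
    return 1.0 - cross_val_score(knn, X_tr, y_tr, cv=10).mean()
```

Here `mask` is a binary vector encoding one solution of the bi-objective problem, so f1 is simply `mask.sum() / len(mask)` and f2 is the value returned above.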

5. Results and Analyses

In this section, we first analyze the optimization performances of all the algorithms by the HV and IGD metrics, and then illustrate their solution distributions in the objective space for an intuitive comparison. In addition, the classification performance is further discussed in terms of the MCE and SFN metrics, and finally, the component contributions of DTEA on the backward and ROI search tasks are studied.

5.1. Optimization Performances

Table 2 presents the HV performance statistics of all the comparison algorithms on each test dataset. It can be seen that DTEA performs the best in all cases, showing excellent convergence on relatively high-dimensional datasets (No. 12∼20) and promising diversity on relatively low-dimensional ones (No. 1∼11). This indicates that DTEA adapts smoothly from low-dimensional to high-dimensional datasets, owing to its adaptive multi-tasking mechanism that dynamically reacts to real-time evolutionary demands. As a supplement, Table 3 shows the IGD statistics, where DTEA performs the best on most of the test instances but loses insignificantly on five of them. Overall, however, the proposed DTEA is still the best among all the comparison algorithms, as also testified by the Friedman test results shown in Table 4, which gives the average Friedman test rankings of all the algorithms in terms of the HV, IGD and MCE metrics; the MCE performance will be discussed later in Section 5.3.
Furthermore, although DTEA performs better than all the other algorithms in terms of the HV metric on every dataset, as observed from Table 2, it still has some slight setbacks in terms of the IGD metric on five out of the 20 datasets, as observed from Table 3. In Table 3, DTEA loses insignificantly on the Australian, German, Breast, Madelon and Prostate datasets; the first four are relatively low-dimensional datasets that cannot fully exploit the large-scale search capability of the proposed algorithm. On the Prostate dataset, DTEA loses only insignificantly to DAEA, which is also specifically designed for high-dimensional feature selection, and performs much better than all the other algorithms. It should also be noted that IGD is adopted only as a supplemental performance indicator in this paper, since it relies heavily on the sampled reference points and may not be as accurate as the main indicator HV shown in Table 2, whose reference point is fixed at (1, 1) in the objective space.
Finally, the mean ranks calculated using the Friedman test, shown in Table 4, also verify the overall performance advantages of the proposed DTEA over the other algorithms on all datasets in terms of all three metrics (MCE will be discussed later). It can also be observed that, although still worse than DTEA, DAEA generally ranks second, with SparseEA next; these are the two recently published methods specifically designed for tackling bi-objective feature selection.
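The mean ranks in Table 4 follow the standard Friedman procedure: on each dataset, the algorithms are ranked (rank 1 = best, with ties sharing the average rank), and the ranks are then averaged over all datasets. A minimal, pure-Python sketch (the function name is ours):

```python
def friedman_mean_ranks(scores, larger_is_better=True):
    """Mean Friedman rank per algorithm across datasets.
    scores: one row per dataset, one column per algorithm;
    rank 1 = best on that dataset, ties share the average rank."""
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        # negate so that sorting ascending puts the best (largest) first
        keyed = [-v for v in row] if larger_is_better else list(row)
        order = sorted(range(n_algs), key=lambda i: keyed[i])
        i = 0
        while i < n_algs:                       # assign average ranks to ties
            j = i
            while j + 1 < n_algs and keyed[order[j + 1]] == keyed[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                totals[order[k]] += avg_rank
            i = j + 1
    return [t / len(scores) for t in totals]
```

For HV (larger is better), two datasets scored [0.9, 0.8, 0.7] and [0.6, 0.7, 0.5] give mean ranks [1.5, 1.5, 3.0], so the first two algorithms tie for best overall.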

5.2. Solution Distributions

The nondominated solution distributions obtained on the test data are illustrated in Figure 5, where, for fairness, the run with the median HV performance among all 30 independent runs is plotted for each algorithm. Six datasets, i.e., Breast, Sonar, Leukemia1, Leukemia2, CNS and Brain2, are chosen to represent relatively lower and higher feature dimensionalities. It can be seen that the proposed DTEA generally performs the best among all eight algorithms, with quite promising population diversity and much better classification convergence.
In detail, in the two low-dimensional cases (i.e., the Breast and Sonar datasets), the advantage of DTEA is not so significant, but it can still be observed that DTEA obtains the best converged solution in the direction of the f2 objective (i.e., the classification error rate) and also reaches the best f1 objective value. By contrast, the advantage of DTEA is more significant in the high-dimensional cases (i.e., the Leukemia1, Leukemia2, CNS and Brain2 datasets), where DTEA maintains more diverse population distributions with more nondominated solutions obtained, along with the best converged solution in the f2 objective direction. This is because the search space in high-dimensional cases becomes much sparser than in low-dimensional cases, making feasible solutions much rarer and more difficult to find. In this situation, the superior search ability of DTEA, enhanced by its dynamic multi-tasking mechanism, helps to better explore potential feasible solutions, thereby finding more nondominated ones.

5.3. Classification Performances

Table 5 exhibits the classification performances of all eight comparison algorithms on each test dataset. For the readers' convenience, the mean minimum classification error (MCE) and the corresponding selected feature number (SFN) values are shown together in two stacked rows. It can be seen that DTEA performs the best on most of the datasets, with the smallest MCE values and moderately small SFN values, while only two slight losses occur, on the No. 10 and No. 14 datasets. More specifically, the No. 10 dataset (Semeion) is a relatively low-dimensional dataset with 256 features, where DTEA loses only insignificantly to DAEA and ranks second. By contrast, the No. 14 dataset (DLBCL) is a high-dimensional instance with 5469 features, where the three algorithms specialized for large-scale problems (i.e., DTEA, SparseEA and DAEA) all lose to the other five algorithms (i.e., GFMMOEA, MOEAHD, ARMOEA, NSGAII and HypE), although the corresponding HV and IGD performances of DTEA are clearly the best. Overall, the winning rate of DTEA against all the other comparison algorithms reaches 90% in terms of MCE (still ranking first in Table 4), not to mention the much smaller SFN values in high-dimensional cases, which can greatly reduce the classification computational cost.
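The way MCE and SFN are read off a final nondominated set can be sketched as follows (the function name is ours, and breaking error ties by the smaller feature count is our assumption, not stated in the paper):

```python
def mce_sfn(front):
    """front: iterable of (sfn, error) pairs from a final population.
    Returns (MCE, SFN): the minimum classification error over all
    solutions and the selected feature number of the solution that
    achieves it; ties in error are broken by the smaller feature
    count (our assumption)."""
    sfn, mce = min(front, key=lambda p: (p[1], p[0]))
    return mce, sfn
```

For example, a front of (3, 0.10), (10, 0.05), (20, 0.05) yields MCE = 0.05 with SFN = 10.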
Moreover, by comparing the SFN results, it is found that all the algorithms can find moderately small feature subsets on relatively low-dimensional datasets; however, as the dimensionality rises, the SFN values of DTEA, SparseEA and DAEA become much smaller than those of GFMMOEA, MOEAHD, ARMOEA, NSGAII and HypE. This helps explain the MCE performances on the No. 14 dataset (DLBCL), where DTEA, SparseEA and DAEA might discard some useful features while accelerating population convergence, and such features are difficult to recover in the later evolutionary search.
Nevertheless, DTEA still performs better than SparseEA and DAEA on DLBCL, and not significantly differently from GFMMOEA, MOEAHD, ARMOEA and NSGAII, proving its capability of alleviating overfitting or premature convergence to some extent.

5.4. Component Contributions

The component contributions of the backward and ROI search tasks in DTEA are analyzed by comparing DTEA with three of its variants (i.e., DTEA_Back, DTEA_ROI and DTEA_Main, as introduced in Section 4.1). Here, DTEA_Back, DTEA_ROI and DTEA_Main retain only the backward and main tasks, the ROI and main tasks, and the single main task, respectively. It should be noted that DTEA_Main still adopts the duplication removal mechanism (Line 4 in Algorithm 3) and uses the same modified reproduction method as DTEA, DTEA_Back and DTEA_ROI; thus, the single-tasking DTEA_Main can be expected to perform better in feature selection than the basic framework of MO-MFEA [37]. Overall, as shown in Table 6, DTEA performs the best against its three variants on 17 out of the 20 datasets, demonstrating the advantage of combining the two essential components (i.e., the backward and ROI search tasks).
Moreover, by comparing DTEA_Back, DTEA_ROI and DTEA_Main against each other, the contribution and effectiveness of the backward and ROI task components can be verified. Specifically, DTEA_Back performs the same as DTEA_Main on relatively low-dimensional datasets (No. 1∼8) and significantly better on many others (No. 10∼20), but loses slightly on Hill_Valley, which has 100 features and lies between the low- and high-dimensional regimes. This proves the necessity and validity of the dynamic tasking mechanism for judging whether or not to add the backward task, because not all circumstances are suitable for convergence acceleration, and the backward task is normally triggered in relatively high-dimensional cases.
By contrast, DTEA_ROI performs the same as DTEA_Main on relatively high-dimensional datasets (No. 10∼20), but better on all the others, except for performing the same as DTEA_Main on No. 3 and losing insignificantly on No. 7. This is because the No. 3 Climate dataset is too simple for the ROI task component to make any contribution, and the No. 7 Lung dataset has too few classification samples. Like the dynamic tasking mechanism of the backward task, the ROI task is not triggered unless the population stops converging for at least 10 generations; thus, without the acceleration provided by the backward task, DTEA_ROI cannot reach the triggering criterion in the relatively high-dimensional cases.
In general, based on the above analyses, it can be concluded that either the backward or the ROI task component can be adaptively judged to be added or not, and mostly produces benefits when triggered. Moreover, the combination of the two components, together constituting the dynamic multi-tasking system in DTEA, works efficiently and cooperatively on both low- and high-dimensional datasets.

6. Conclusions

In this paper, a dynamic multi-tasking evolutionary algorithm (DTEA) is proposed for solving the bi-objective feature selection problem, especially in high-dimensional cases. The dynamic multi-tasking system consists of judging, adding, merging and updating either the backward search task or the ROI one, where the former responds to the convergence requirement of the evolution and the latter to the diversity requirement. To test its effectiveness, DTEA was evaluated on a total of 20 classification datasets, covering relatively low- and high-dimensional features, against seven state-of-the-art comparison algorithms. According to the empirical results in terms of both optimization and classification, DTEA achieves outstanding performance on most of the datasets, with the proposed dynamic multi-tasking mechanism playing an effective role and all the essential components working together efficiently and cooperatively. In summary, from lower to higher feature dimensionality, the proposed DTEA generally achieves excellent performance in terms of the HV metric, owing to its dynamically adjusted balance between the convergence and diversity factors. However, DTEA may still encounter some insignificant setbacks in terms of the IGD and MCE metrics, probably because both IGD and MCE are more pertinent to evaluating prominent convergence points (such as knee points).
In future work, more kinds of evolutionary search tasks will be considered so as to adapt to more complex optimization environments. There is also potential to apply the proposed dynamic multi-tasking mechanism to many other subset selection problems, such as pattern mining, critical node detection and neural network training. Applying the proposed algorithm to disease diagnosis and medical data analysis is another promising direction for future work.

Funding

This research was funded by a National Natural Science Foundation of China grant, number 62103209, by a Natural Science Foundation of Fujian Province grant, number 2020J05213, by a Scientific Research Project of Putian Science and Technology Bureau grant, number 2021ZP07, by a Startup Fund for Advanced Talents of Putian University grant, number 2019002, and by a Research Projects of Putian University grant, number JG202306.

Data Availability Statement

The data will be made available by the authors on request.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  2. Dash, M.; Liu, H. Feature selection for classification. Intell. Data Anal. 1997, 1, 131–156. [Google Scholar] [CrossRef]
  3. Coello Coello, C. Evolutionary multi-objective optimization: A historical view of the field. IEEE Comput. Intell. Mag. 2006, 1, 28–36. [Google Scholar] [CrossRef]
  4. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 2015, 20, 606–626. [Google Scholar] [CrossRef]
  5. Xue, B.; Zhang, M.; Browne, W.N. Particle swarm optimization for feature selection in classification: A multi-objective approach. IEEE Trans. Cybern. 2012, 43, 1656–1671. [Google Scholar] [CrossRef] [PubMed]
  6. Xue, B.; Cervante, L.; Shang, L.; Browne, W.N.; Zhang, M. Multi-objective evolutionary algorithms for filter based feature selection in classification. Int. J. Artif. Intell. Tools 2013, 22, 1350024. [Google Scholar] [CrossRef]
  7. Nguyen, H.B.; Xue, B.; Liu, I.; Andreae, P.; Zhang, M. New mechanism for archive maintenance in PSO-based multi-objective feature selection. Soft Comput. 2016, 20, 3927–3946. [Google Scholar] [CrossRef]
  8. Nguyen, B.H.; Xue, B.; Andreae, P.; Ishibuchi, H.; Zhang, M. Multiple Reference Points based Decomposition for Multi-objective Feature Selection in Classification: Static and Dynamic Mechanisms. IEEE Trans. Evol. Comput. 2019, 24, 170–184. [Google Scholar] [CrossRef]
  9. Coello, C.A.C.; Lamont, G.B.; Van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: New York, NY, USA, 2007; Volume 5. [Google Scholar]
  10. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  11. Eiben, A.E.; Smith, J.E. What is an evolutionary algorithm? In Introduction to Evolutionary Computing; Springer: Berlin/Heidelberg, Germany, 2015; pp. 25–48. [Google Scholar]
  12. Zhou, A.; Qu, B.Y.; Li, H.; Zhao, S.Z.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49. [Google Scholar] [CrossRef]
  13. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  14. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization. Evol. Methods Des. Optim. Control 2002, 103, 95–100. [Google Scholar]
  15. Deb, K.; Jain, H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  16. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining Convergence and Diversity in Evolutionary Multiobjective Optimization. Evol. Comput. 2002, 10, 263–282. [Google Scholar] [CrossRef] [PubMed]
  17. Yuan, Y.; Xu, H.; Wang, B.; Yao, X. A New Dominance Relation-Based Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 16–37. [Google Scholar] [CrossRef]
  18. Castillo, J.C.; Segura, C.; Coello, C.A.C. VSD-MOEA: A Dominance-Based Multiobjective Evolutionary Algorithm with Explicit Variable Space Diversity Management. Evol. Comput. 2022, 30, 195–219. [Google Scholar] [CrossRef]
  19. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  20. Li, H.; Zhang, Q. Multiobjective Optimization Problems With Complicated Pareto Sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 2009, 13, 284–302. [Google Scholar] [CrossRef]
  21. Liu, H.L.; Gu, F.; Zhang, Q. Decomposition of a Multiobjective Optimization Problem Into a Number of Simple Multiobjective Subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455. [Google Scholar] [CrossRef]
  22. Li, K.; Zhang, Q.; Kwong, S.; Li, M.; Wang, R. Stable Matching-Based Selection in Evolutionary Multiobjective Optimization. IEEE Trans. Evol. Comput. 2014, 18, 909–923. [Google Scholar] [CrossRef]
  23. Bader, J.; Zitzler, E. HypE: An Algorithm for Fast Hypervolume-Based Many-Objective Optimization. Evol. Comput. 2011, 19, 45–76. [Google Scholar] [CrossRef]
  24. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669. [Google Scholar] [CrossRef]
  25. Li, M.; Yang, S.; Liu, X. Pareto or Non-Pareto: Bi-Criterion Evolution in Multiobjective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 645–665. [Google Scholar] [CrossRef]
  26. Xu, H.; Zeng, W.; Zeng, X.; Yen, G.G. A Polar-Metric-Based Evolutionary Algorithm. IEEE Trans. Cybern. 2020, 51, 3429–3440. [Google Scholar] [CrossRef]
  27. Zhang, X.; Tian, Y.; Jin, Y. A Knee Point-Driven Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2015, 19, 761–776. [Google Scholar] [CrossRef]
  28. Xu, H.; Zeng, W.; Zeng, X.; Yen, G.G. An Evolutionary Algorithm Based on Minkowski Distance for Many-Objective Optimization. IEEE Trans. Cybern. 2019, 49, 3968–3979. [Google Scholar] [CrossRef]
  29. Schutze, O.; Esquivel, X.; Lara, A.; Coello, C.A.C. Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization. IEEE Trans. Evol. Comput. 2012, 16, 504–522. [Google Scholar] [CrossRef]
  30. Tian, Y.; Zhang, X.; Cheng, R.; Jin, Y. A multi-objective evolutionary algorithm based on an enhanced inverted generational distance metric. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 5222–5229. [Google Scholar]
  31. Wang, H.; Jin, Y.; Jansen, J.O. Data-driven surrogate-assisted multiobjective evolutionary optimization of a trauma system. IEEE Trans. Evol. Comput. 2016, 20, 939–952. [Google Scholar] [CrossRef]
  32. Wang, H.; Jin, Y.; Sun, C.; Doherty, J. Offline data-driven evolutionary optimization using selective surrogate ensembles. IEEE Trans. Evol. Comput. 2018, 23, 203–216. [Google Scholar] [CrossRef]
  33. Pan, L.; He, C.; Tian, Y.; Wang, H.; Zhang, X.; Jin, Y. A Classification-Based Surrogate-Assisted Evolutionary Algorithm for Expensive Many-Objective Optimization. IEEE Trans. Evol. Comput. 2018, 23, 74–88. [Google Scholar] [CrossRef]
  34. Wang, R.; Purshouse, R.C.; Fleming, P.J. Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans. Evol. Comput. 2012, 17, 474–494. [Google Scholar] [CrossRef]
  35. Goh, C.K.; Tan, K.C. A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2008, 13, 103–127. [Google Scholar]
  36. Ma, X.; Li, X.; Zhang, Q.; Tang, K.; Liang, Z.; Xie, W.; Zhu, Z. A survey on cooperative co-evolutionary algorithms. IEEE Trans. Evol. Comput. 2018, 23, 421–441. [Google Scholar] [CrossRef]
  37. Gupta, A.; Ong, Y.S.; Feng, L.; Tan, K.C. Multiobjective Multifactorial Optimization in Evolutionary Multitasking. IEEE Trans. Cybern. 2017, 47, 1652–1665. [Google Scholar] [CrossRef] [PubMed]
  38. Ma, X.; Zheng, Y.; Zhu, Z.; Li, X.; Wang, L.; Qi, Y.; Yang, J. Improving Evolutionary Multitasking Optimization by Leveraging Inter-Task Gene Similarity and Mirror Transformation. IEEE Comput. Intell. Mag. 2021, 16, 38–53. [Google Scholar] [CrossRef]
  39. Ming, F.; Gong, W.; Gao, L. Adaptive Auxiliary Task Selection for Multitasking-Assisted Constrained Multi-Objective Optimization [Feature]. IEEE Comput. Intell. Mag. 2023, 18, 18–30. [Google Scholar] [CrossRef]
  40. He, Z.; Yen, G.G.; Zhang, J. Fuzzy-Based Pareto Optimality for Many-Objective Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2014, 18, 269–285. [Google Scholar] [CrossRef]
  41. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition. IEEE Trans. Evol. Comput. 2015, 19, 694–716. [Google Scholar] [CrossRef]
  42. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef]
  43. Li, M.; Yang, S.; Liu, X. Bi-goal evolution for many-objective optimization problems. Artif. Intell. 2015, 228, 45–65. [Google Scholar] [CrossRef]
  44. Yuan, Y.; Xu, H.; Wang, B.; Zhang, B.; Yao, X. Balancing Convergence and Diversity in Decomposition-Based Many-Objective Optimizers. IEEE Trans. Evol. Comput. 2016, 20, 180–198. [Google Scholar] [CrossRef]
  45. Palakonda, V.; Mallipeddi, R. An Evolutionary Algorithm for Multi and Many-Objective Optimization With Adaptive Mating and Environmental Selection. IEEE Access 2020, 8, 82781–82796. [Google Scholar] [CrossRef]
  46. Jiang, S.; Yang, S. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts. IEEE Trans. Cybern. 2016, 46, 421–437. [Google Scholar] [CrossRef]
  47. Xu, H.; Zeng, W.; Zhang, D.; Zeng, X. MOEA/HD: A Multiobjective Evolutionary Algorithm Based on Hierarchical Decomposition. IEEE Trans. Cybern. 2019, 49, 517–526. [Google Scholar] [CrossRef]
  48. Qi, Y.; Ma, X.; Liu, F.; Jiao, L.; Sun, J.; Wu, J. MOEA/D with Adaptive Weight Adjustment. Evol. Comput. 2014, 22, 231–264. [Google Scholar] [CrossRef]
  49. Tian, Y.; Zhang, X.; Cheng, R.; He, C.; Jin, Y. Guiding Evolutionary Multiobjective Optimization With Generic Front Modeling. IEEE Trans. Cybern. 2020, 50, 1106–1119. [Google Scholar] [CrossRef]
  50. Wang, Y.; Li, H.X.; Yen, G.G.; Song, W. MOMMOP: Multiobjective Optimization for Locating Multiple Optimal Solutions of Multimodal Optimization Problems. IEEE Trans. Cybern. 2015, 45, 830–843. [Google Scholar] [CrossRef] [PubMed]
  51. Tutum, C.C.; Deb, K. A multimodal approach for evolutionary multi-objective optimization (MEMO): Proof-of-principle results. In Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization, Guimarães, Portugal, 29 March–1 April 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–18. [Google Scholar]
  52. Qu, B.Y.; Liang, J.J.; Suganthan, P.N. Niching particle swarm optimization with local search for multi-modal optimization. Inf. Sci. 2012, 197, 131–143. [Google Scholar] [CrossRef]
  53. Liu, Y.; Yen, G.G.; Gong, D. A multimodal multiobjective evolutionary algorithm using two-archive and recombination strategies. IEEE Trans. Evol. Comput. 2018, 23, 660–674. [Google Scholar] [CrossRef]
  54. Zhang, X.; Tian, Y.; Cheng, R.; Jin, Y. A Decision Variable Clustering-Based Evolutionary Algorithm for Large-scale Many-objective Optimization. IEEE Trans. Evol. Comput. 2016, 22, 97–112. [Google Scholar] [CrossRef]
  55. Ma, X.; Liu, F.; Qi, Y.; Wang, X.; Li, L.; Jiao, L.; Yin, M.; Gong, M. A Multiobjective Evolutionary Algorithm Based on Decision Variable Analyses for Multiobjective Optimization Problems With Large-Scale Variables. IEEE Trans. Evol. Comput. 2016, 20, 275–298. [Google Scholar] [CrossRef]
  56. Hong, W.J.; Yang, P.; Tang, K. Evolutionary computation for large-scale multi-objective optimization: A decade of progresses. Int. J. Autom. Comput. 2021, 18, 155–169. [Google Scholar] [CrossRef]
  57. Park, J.; Ajani, O.S.; Mallipeddi, R. Optimization-Based Energy Disaggregation: A Constrained Multi-Objective Approach. Mathematics 2023, 11, 563. [Google Scholar] [CrossRef]
  58. Ríos, A.; Hernández, E.E.; Valdez, S.I. A Two-Stage Mono- and Multi-Objective Method for the Optimization of General UPS Parallel Manipulators. Mathematics 2021, 9, 543. [Google Scholar] [CrossRef]
  59. Ganesh, N.; Shankar, R.; Kalita, K.; Jangir, P.; Oliva, D.; Pérez-Cisneros, M. A Novel Decomposition-Based Multi-Objective Symbiotic Organism Search Optimization Algorithm. Mathematics 2023, 11, 1898. [Google Scholar] [CrossRef]
  60. Chalabi, N.E.; Attia, A.; Alnowibet, K.A.; Zawbaa, H.M.; Masri, H.; Mohamed, A.W. A Multi-Objective Gaining-Sharing Knowledge-Based Optimization Algorithm for Solving Engineering Problems. Mathematics 2023, 11, 3092. [Google Scholar] [CrossRef]
  61. Ponti, A.; Candelieri, A.; Giordani, I.; Archetti, F. Intrusion Detection in Networks by Wasserstein Enabled Many-Objective Evolutionary Algorithms. Mathematics 2023, 11, 2342. [Google Scholar] [CrossRef]
  62. Othman, R.A.; Darwish, S.M.; Abd El-Moghith, I.A. A Multi-Objective Crowding Optimization Solution for Efficient Sensing as a Service in Virtualized Wireless Sensor Networks. Mathematics 2023, 11, 1128. [Google Scholar] [CrossRef]
  63. Garces-Jimenez, A.; Gomez-Pulido, J.M.; Gallego-Salvador, N.; Garcia-Tejedor, A.J. Genetic and Swarm Algorithms for Optimizing the Control of Building HVAC Systems Using Real Data: A Comparative Study. Mathematics 2021, 9, 2181. [Google Scholar] [CrossRef]
  64. Ramos-Pérez, J.M.; Miranda, G.; Segredo, E.; León, C.; Rodríguez-León, C. Application of Multi-Objective Evolutionary Algorithms for Planning Healthy and Balanced School Lunches. Mathematics 2021, 9, 80. [Google Scholar] [CrossRef]
Figure 1. General ideas of the proposed dynamic multitasking mechanism: (a) main and backward search tasks; (b) main and ROI search tasks, where D is the total number of features.
Figure 2. Merging mechanism for the backward search task.
Figure 3. Topological example of constructing the ROI object, where D is the total number of features.
Figure 4. Topological example of updating the ROI object, where D is the total number of features.
Figure 5. Nondominated solution distributions for each algorithm with an independent run of median HV performance: (a) Breast dataset; (b) Sonar dataset; (c) Leukemia1 dataset; (d) Leukemia2 dataset; (e) CNS dataset; (f) Brain2 dataset.
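Figure 5 plots only the nondominated subset of each algorithm's final population. For two minimization objectives (fraction of selected features, classification error), that subset can be extracted with a single sorted sweep. The helper below is an illustrative sketch, not code from the paper:

```python
def nondominated(points):
    """Return the nondominated front of 2-D minimization points.

    After sorting by the first objective, a point survives exactly when
    its second objective is strictly better than every point before it.
    """
    front, best_y = [], float("inf")
    for x, y in sorted(set(points)):
        if y < best_y:  # improves on all points with smaller-or-equal f1
            front.append((x, y))
            best_y = y
    return front
```

For example, `nondominated([(0.2, 0.6), (0.6, 0.2), (0.5, 0.5), (0.6, 0.6)])` keeps the first three points and discards the dominated `(0.6, 0.6)`.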
Table 1. Attributes for each classification dataset used as test problems.

| No. | Name | Features | Samples | Classes | N | T |
|---|---|---|---|---|---|---|
| 1 | Wine | 13 | 178 | 3 | 50 | 5000 |
| 2 | Australian | 14 | 690 | 2 | 50 | 5000 |
| 3 | Climate | 18 | 540 | 2 | 50 | 5000 |
| 4 | German | 24 | 1000 | 2 | 50 | 7500 |
| 5 | Breast | 30 | 569 | 2 | 50 | 7500 |
| 6 | Ionosphere | 34 | 351 | 2 | 50 | 7500 |
| 7 | Lung | 56 | 32 | 3 | 100 | 15,000 |
| 8 | Sonar | 60 | 208 | 2 | 100 | 20,000 |
| 9 | Hill_Valley | 100 | 606 | 2 | 100 | 25,000 |
| 10 | Semeion | 256 | 1593 | 10 | 100 | 25,000 |
| 11 | Madelon | 500 | 2600 | 2 | 100 | 25,000 |
| 12 | SRBCT | 2308 | 83 | 4 | 100 | 25,000 |
| 13 | Leukemia1 | 5327 | 72 | 3 | 100 | 25,000 |
| 14 | DLBCL | 5469 | 77 | 2 | 100 | 25,000 |
| 15 | Brain1 | 5920 | 90 | 5 | 100 | 30,000 |
| 16 | Leukemia2 | 7070 | 72 | 2 | 100 | 30,000 |
| 17 | CNS | 7129 | 60 | 2 | 100 | 30,000 |
| 18 | Arcene | 10,000 | 200 | 2 | 100 | 30,000 |
| 19 | Brain2 | 10,367 | 50 | 4 | 100 | 30,000 |
| 20 | Prostate | 10,509 | 102 | 2 | 100 | 30,000 |
Table 2. Mean HV performances (mean ± standard deviation, in e-notation, e.g., 8.78e-1 = 0.878) of each algorithm on each dataset, with insignificant differences prefixed by ⋆.

| No. | Data | DTEA | SparseEA | DAEA | GFMMOEA | MOEAHD | ARMOEA | NSGAII | HypE |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Wine | 8.78e-1 ±1.02e-2 | ⋆8.74e-1 ±1.13e-2 | ⋆8.76e-1 ±1.04e-2 | 8.67e-1 ±1.21e-2 | ⋆8.77e-1 ±9.82e-3 | ⋆8.73e-1 ±1.17e-2 | ⋆8.74e-1 ±1.19e-2 | ⋆8.74e-1 ±1.14e-2 |
| 2 | Australian | 7.99e-1 ±5.48e-3 | 7.96e-1 ±5.05e-3 | ⋆7.99e-1 ±6.23e-3 | 7.93e-1 ±1.14e-2 | ⋆7.99e-1 ±4.87e-3 | ⋆7.98e-1 ±5.61e-3 | ⋆7.99e-1 ±5.74e-3 | ⋆7.98e-1 ±5.92e-3 |
| 3 | Climate | 8.90e-1 ±6.44e-3 | ⋆8.89e-1 ±6.15e-3 | ⋆8.90e-1 ±6.35e-3 | 8.85e-1 ±6.95e-3 | ⋆8.89e-1 ±6.91e-3 | ⋆8.89e-1 ±6.78e-3 | ⋆8.90e-1 ±7.00e-3 | ⋆8.90e-1 ±6.70e-3 |
| 4 | German | 7.28e-1 ±7.83e-3 | ⋆7.28e-1 ±6.09e-3 | ⋆7.27e-1 ±5.86e-3 | 7.18e-1 ±9.67e-3 | ⋆7.27e-1 ±9.01e-3 | ⋆7.26e-1 ±6.35e-3 | ⋆7.28e-1 ±6.17e-3 | ⋆7.27e-1 ±6.37e-3 |
| 5 | Breast | 9.40e-1 ±4.40e-3 | ⋆9.38e-1 ±4.73e-3 | ⋆9.39e-1 ±3.09e-3 | 9.36e-1 ±8.36e-3 | ⋆9.37e-1 ±5.13e-3 | ⋆9.39e-1 ±7.06e-3 | ⋆9.39e-1 ±6.63e-3 | 9.34e-1 ±1.33e-2 |
| 6 | Ionosphere | 8.86e-1 ±8.75e-3 | ⋆8.83e-1 ±1.12e-2 | ⋆8.85e-1 ±7.79e-3 | 8.73e-1 ±1.04e-2 | ⋆8.86e-1 ±8.18e-3 | ⋆8.85e-1 ±1.20e-2 | ⋆8.83e-1 ±1.06e-2 | ⋆8.85e-1 ±9.58e-3 |
| 7 | Lung | 7.38e-1 ±6.23e-2 | ⋆7.29e-1 ±7.02e-2 | ⋆7.32e-1 ±5.97e-2 | 6.76e-1 ±6.83e-2 | 6.91e-1 ±6.50e-2 | 7.11e-1 ±6.69e-2 | 7.03e-1 ±6.68e-2 | ⋆7.14e-1 ±6.45e-2 |
| 8 | Sonar | 8.13e-1 ±1.96e-2 | ⋆8.08e-1 ±2.31e-2 | ⋆8.05e-1 ±2.43e-2 | 7.89e-1 ±3.13e-2 | 7.99e-1 ±1.97e-2 | 7.95e-1 ±2.31e-2 | ⋆8.03e-1 ±2.21e-2 | 7.89e-1 ±3.25e-2 |
| 9 | Hill_Valley | 6.32e-1 ±7.98e-3 | ⋆6.28e-1 ±8.86e-3 | ⋆6.29e-1 ±1.22e-2 | 5.99e-1 ±2.10e-2 | 6.26e-1 ±9.70e-3 | 6.03e-1 ±2.69e-2 | 6.16e-1 ±1.91e-2 | 6.23e-1 ±1.43e-2 |
| 10 | Semeion | 8.44e-1 ±6.83e-3 | 8.35e-1 ±7.82e-3 | ⋆8.43e-1 ±5.31e-3 | 7.11e-1 ±1.93e-2 | 7.85e-1 ±1.52e-2 | 7.30e-1 ±1.74e-2 | 7.39e-1 ±1.52e-2 | 7.26e-1 ±1.11e-2 |
| 11 | Madelon | 9.08e-1 ±3.47e-3 | 9.06e-1 ±2.52e-3 | ⋆9.08e-1 ±3.26e-3 | 6.04e-1 ±4.80e-2 | 6.69e-1 ±2.63e-2 | 6.15e-1 ±4.75e-2 | 6.28e-1 ±3.58e-2 | 6.60e-1 ±3.73e-2 |
| 12 | SRBCT | 9.63e-1 ±3.57e-2 | ⋆9.44e-1 ±4.44e-2 | 9.34e-1 ±4.09e-2 | 3.23e-1 ±2.24e-3 | 2.77e-1 ±1.81e-3 | 3.05e-1 ±2.41e-3 | 3.33e-1 ±6.11e-2 | 3.23e-1 ±2.24e-3 |
| 13 | Leukemia1 | 9.54e-1 ±3.49e-2 | ⋆9.53e-1 ±3.02e-2 | ⋆9.47e-1 ±2.86e-2 | 5.59e-1 ±2.20e-2 | 5.48e-1 ±1.68e-2 | 5.45e-1 ±2.51e-2 | 5.64e-1 ±1.53e-2 | 5.53e-1 ±2.13e-2 |
| 14 | DLBCL | 9.26e-1 ±4.24e-2 | 8.97e-1 ±3.96e-2 | ⋆9.05e-1 ±4.48e-2 | 6.26e-1 ±2.04e-2 | 6.04e-1 ±2.10e-2 | 6.07e-1 ±2.19e-2 | 6.22e-1 ±2.30e-2 | 6.21e-1 ±1.60e-2 |
| 15 | Brain1 | 8.10e-1 ±2.57e-2 | 7.80e-1 ±4.48e-2 | 7.87e-1 ±3.78e-2 | 5.12e-1 ±5.71e-3 | 4.84e-1 ±1.45e-2 | 4.97e-1 ±2.42e-3 | 5.13e-1 ±3.44e-3 | 5.12e-1 ±4.38e-3 |
| 16 | Leukemia2 | 9.46e-1 ±4.75e-2 | ⋆9.27e-1 ±5.28e-2 | ⋆9.41e-1 ±4.93e-2 | 5.70e-1 ±2.09e-2 | 5.47e-1 ±1.87e-2 | 5.60e-1 ±1.82e-2 | 5.69e-1 ±1.46e-2 | 5.71e-1 ±1.83e-2 |
| 17 | CNS | 7.12e-1 ±6.10e-2 | ⋆6.90e-1 ±7.01e-2 | ⋆6.95e-1 ±7.90e-2 | 3.97e-1 ±2.89e-2 | 3.88e-1 ±3.18e-2 | 3.85e-1 ±3.15e-2 | 4.10e-1 ±3.47e-2 | 4.02e-1 ±3.53e-2 |
| 18 | Arcene | 8.70e-1 ±2.37e-2 | ⋆8.56e-1 ±5.73e-2 | ⋆8.65e-1 ±2.21e-2 | 3.84e-1 ±2.20e-3 | 3.56e-1 ±1.35e-3 | 3.77e-1 ±1.95e-3 | 3.84e-1 ±1.44e-3 | 3.84e-1 ±2.20e-3 |
| 19 | Brain2 | 7.33e-1 ±6.86e-2 | 6.99e-1 ±6.26e-2 | ⋆7.15e-1 ±7.32e-2 | 4.04e-1 ±2.46e-2 | 4.01e-1 ±1.97e-2 | 4.06e-1 ±2.73e-2 | 4.09e-1 ±2.30e-2 | 4.05e-1 ±2.53e-2 |
| 20 | Prostate | 9.48e-1 ±3.05e-2 | 9.26e-1 ±4.36e-2 | ⋆9.42e-1 ±2.72e-2 | 4.81e-1 ±1.74e-2 | 4.75e-1 ±1.36e-2 | 4.76e-1 ±1.92e-2 | 4.80e-1 ±1.68e-2 | 4.77e-1 ±1.82e-2 |
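The HV (hypervolume) values in Table 2 measure the objective-space area dominated by a front relative to a reference point; larger is better. A minimal 2-D sketch follows, assuming both objectives are minimized and normalized so that (1, 1) is a valid reference point; the paper's exact normalization and reference point may differ:

```python
def hv_2d(points, ref=(1.0, 1.0)):
    """Hypervolume of a set of 2-D minimization points w.r.t. a reference point."""
    # keep only points strictly inside the reference box
    pts = sorted(p for p in set(points) if p[0] < ref[0] and p[1] < ref[1])
    # reduce to the nondominated front (second objective strictly decreasing)
    front, best_y = [], float("inf")
    for x, y in pts:
        if y < best_y:
            front.append((x, y))
            best_y = y
    # sweep from the largest first objective downward, adding disjoint slabs
    hv, prev_x = 0.0, ref[0]
    for x, y in reversed(front):
        hv += (prev_x - x) * (ref[1] - y)
        prev_x = x
    return hv
```

For instance, the two mutually nondominated points (0.2, 0.6) and (0.6, 0.2) dominate two overlapping rectangles whose union has area 0.48.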
Table 3. Mean IGD performances (mean ± standard deviation) of each algorithm on each dataset, with insignificant differences prefixed by ⋆.

| No. | Data | DTEA | SparseEA | DAEA | GFMMOEA | MOEAHD | ARMOEA | NSGAII | HypE |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Wine | 8.77e-2 ±1.19e-2 | ⋆8.97e-2 ±1.22e-2 | ⋆8.86e-2 ±1.22e-2 | 9.83e-2 ±1.35e-2 | ⋆8.79e-2 ±1.18e-2 | ⋆8.99e-2 ±1.29e-2 | ⋆9.04e-2 ±1.22e-2 | ⋆9.16e-2 ±1.25e-2 |
| 2 | Australian | 1.91e-2 ±5.52e-3 | ⋆2.02e-2 ±6.44e-3 | ⋆1.91e-2 ±5.50e-3 | 3.71e-2 ±2.28e-2 | ⋆1.86e-2 ±5.85e-3 | ⋆2.13e-2 ±7.14e-3 | ⋆1.96e-2 ±7.13e-3 | ⋆2.12e-2 ±6.60e-3 |
| 3 | Climate | 5.91e-2 ±2.98e-2 | ⋆6.09e-2 ±3.18e-2 | ⋆5.91e-2 ±3.01e-2 | 8.78e-2 ±3.80e-2 | ⋆6.55e-2 ±3.71e-2 | ⋆6.53e-2 ±3.71e-2 | ⋆6.49e-2 ±3.51e-2 | ⋆6.20e-2 ±3.16e-2 |
| 4 | German | 4.05e-2 ±1.75e-2 | ⋆3.78e-2 ±1.55e-2 | ⋆3.87e-2 ±1.57e-2 | 6.35e-2 ±2.63e-2 | ⋆4.05e-2 ±1.45e-2 | ⋆3.76e-2 ±9.16e-3 | ⋆3.70e-2 ±1.16e-2 | ⋆4.35e-2 ±1.52e-2 |
| 5 | Breast | 1.96e-2 ±4.94e-3 | ⋆1.92e-2 ±4.14e-3 | ⋆1.96e-2 ±3.44e-3 | 2.66e-2 ±1.31e-2 | ⋆1.95e-2 ±5.37e-3 | ⋆2.15e-2 ±7.50e-3 | ⋆2.01e-2 ±6.52e-3 | 2.44e-2 ±6.85e-3 |
| 6 | Ionosphere | 3.92e-2 ±5.93e-3 | ⋆4.18e-2 ±8.96e-3 | ⋆4.04e-2 ±5.06e-3 | 5.12e-2 ±1.08e-2 | ⋆3.97e-2 ±5.91e-3 | ⋆4.09e-2 ±9.80e-3 | ⋆4.30e-2 ±8.12e-3 | ⋆3.97e-2 ±7.55e-3 |
| 7 | Lung | 1.03e-1 ±3.25e-2 | ⋆1.10e-1 ±3.78e-2 | ⋆1.06e-1 ±3.21e-2 | 1.48e-1 ±5.62e-2 | 1.32e-1 ±3.70e-2 | 1.27e-1 ±4.53e-2 | 1.29e-1 ±4.52e-2 | ⋆1.14e-1 ±4.05e-2 |
| 8 | Sonar | 6.20e-2 ±1.70e-2 | ⋆6.66e-2 ±1.93e-2 | ⋆6.93e-2 ±2.15e-2 | 8.43e-2 ±2.69e-2 | ⋆7.12e-2 ±2.11e-2 | 7.73e-2 ±2.20e-2 | ⋆6.82e-2 ±2.32e-2 | 7.51e-2 ±2.92e-2 |
| 9 | Hill_Valley | 2.71e-2 ±6.07e-3 | ⋆2.92e-2 ±6.27e-3 | ⋆2.84e-2 ±9.56e-3 | 5.82e-2 ±2.33e-2 | ⋆2.91e-2 ±9.17e-3 | 5.00e-2 ±2.46e-2 | 3.99e-2 ±2.13e-2 | ⋆3.15e-2 ±1.35e-2 |
| 10 | Semeion | 2.31e-2 ±3.29e-3 | 2.90e-2 ±2.99e-3 | 2.71e-2 ±7.95e-3 | 1.88e-1 ±1.86e-2 | 1.11e-1 ±1.77e-2 | 1.72e-1 ±1.92e-2 | 1.63e-1 ±1.61e-2 | 1.80e-1 ±1.02e-2 |
| 11 | Madelon | 1.90e-2 ±4.63e-3 | ⋆2.03e-2 ±4.58e-3 | ⋆1.73e-2 ±4.95e-3 | 2.32e-1 ±4.09e-2 | 1.76e-1 ±1.90e-2 | 2.23e-1 ±3.92e-2 | 2.11e-1 ±3.02e-2 | 1.86e-1 ±2.72e-2 |
| 12 | SRBCT | 4.12e-2 ±1.67e-2 | ⋆5.22e-2 ±2.32e-2 | 5.31e-2 ±2.21e-2 | 5.69e-1 ±2.74e-3 | 6.33e-1 ±2.87e-3 | 5.91e-1 ±3.30e-3 | 5.60e-1 ±5.23e-2 | 5.69e-1 ±2.74e-3 |
| 13 | Leukemia1 | 4.93e-2 ±2.32e-2 | ⋆5.08e-2 ±1.86e-2 | ⋆5.15e-2 ±1.88e-2 | 3.93e-1 ±1.12e-2 | 4.08e-1 ±1.08e-2 | 4.11e-1 ±1.19e-2 | 3.89e-1 ±7.76e-3 | 3.97e-1 ±1.08e-2 |
| 14 | DLBCL | 6.17e-2 ±4.34e-2 | 9.13e-2 ±4.36e-2 | ⋆8.34e-2 ±4.78e-2 | 3.75e-1 ±7.58e-3 | 3.96e-1 ±1.13e-2 | 3.96e-1 ±6.63e-3 | 3.76e-1 ±7.15e-3 | 3.81e-1 ±4.45e-3 |
| 15 | Brain1 | 3.78e-2 ±1.49e-2 | 6.61e-2 ±3.84e-2 | 5.58e-2 ±3.39e-2 | 3.72e-1 ±8.02e-3 | 4.11e-1 ±2.04e-2 | 3.94e-1 ±3.41e-3 | 3.71e-1 ±4.83e-3 | 3.73e-1 ±6.15e-3 |
| 16 | Leukemia2 | 5.91e-2 ±5.22e-2 | ⋆8.03e-2 ±5.81e-2 | ⋆6.52e-2 ±5.43e-2 | 4.07e-1 ±1.43e-2 | 4.33e-1 ±1.49e-2 | 4.22e-1 ±1.03e-2 | 4.07e-1 ±1.03e-2 | 4.07e-1 ±1.06e-2 |
| 17 | CNS | 2.06e-1 ±6.71e-2 | ⋆2.30e-1 ±7.68e-2 | ⋆2.24e-1 ±8.69e-2 | 5.00e-1 ±3.14e-2 | 5.12e-1 ±3.43e-2 | 5.17e-1 ±3.45e-2 | 4.87e-1 ±3.64e-2 | 4.97e-1 ±3.51e-2 |
| 18 | Arcene | 4.56e-2 ±1.77e-2 | ⋆5.98e-2 ±5.48e-2 | ⋆4.80e-2 ±1.73e-2 | 5.19e-1 ±3.10e-3 | 5.58e-1 ±1.99e-3 | 5.29e-1 ±2.79e-3 | 5.19e-1 ±2.04e-3 | 5.19e-1 ±3.10e-3 |
| 19 | Brain2 | 1.04e-1 ±6.29e-2 | 1.38e-1 ±5.79e-2 | ⋆1.22e-1 ±6.96e-2 | 4.59e-1 ±1.80e-2 | 4.67e-1 ±1.39e-2 | 4.65e-1 ±1.88e-2 | 4.55e-1 ±1.67e-2 | 4.61e-1 ±1.67e-2 |
| 20 | Prostate | 3.88e-2 ±2.34e-2 | ⋆5.56e-2 ±3.98e-2 | ⋆3.77e-2 ±2.33e-2 | 4.69e-1 ±1.47e-2 | 4.79e-1 ±1.11e-2 | 4.77e-1 ±1.59e-2 | 4.72e-1 ±1.42e-2 | 4.74e-1 ±1.55e-2 |
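IGD (inverted generational distance) averages, over a set of reference points approximating the true Pareto front, the distance from each reference point to the nearest obtained solution; smaller is better. A sketch assuming Euclidean distance (how the per-dataset reference fronts were constructed is not restated here):

```python
import math


def igd(reference_front, approximation):
    """Inverted generational distance of an approximation set; lower is better."""
    return sum(
        min(math.dist(r, a) for a in approximation)  # nearest obtained point
        for r in reference_front
    ) / len(reference_front)
```

If the approximation covers the reference front exactly, the IGD is zero; uncovered regions of the reference front raise the average.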
Table 4. Mean ranks calculated by Friedman's test on all datasets.

| Metric | DTEA | SparseEA | DAEA | GFMMOEA | MOEAHD | ARMOEA | NSGAII | HypE |
|---|---|---|---|---|---|---|---|---|
| HV | 1st | 3rd | 2nd | 8th | 6th | 7th | 4th | 5th |
| IGD | 1st | 3rd | 2nd | 8th | 6th | 7th | 4th | 5th |
| MCE | 1st | 3rd | 2nd | 8th | 6th | 7th | 5th | 4th |
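The mean ranks above follow Friedman's procedure: each algorithm is ranked on every dataset (ties share the average of their rank positions), and the ranks are then averaged across datasets. An illustrative sketch with hypothetical scores (the significance test itself, e.g., SciPy's `friedmanchisquare`, is omitted):

```python
def friedman_mean_ranks(scores, higher_is_better=True):
    """scores: {algorithm: [per-dataset values]} -> {algorithm: mean rank}."""
    algs = list(scores)
    n_datasets = len(next(iter(scores.values())))
    totals = {a: 0.0 for a in algs}
    for i in range(n_datasets):
        column = sorted(algs, key=lambda a: scores[a][i], reverse=higher_is_better)
        j = 0
        while j < len(column):
            k = j
            # group tied values so they share the average of their rank positions
            while k + 1 < len(column) and scores[column[k + 1]][i] == scores[column[j]][i]:
                k += 1
            shared = (j + k) / 2 + 1
            for a in column[j:k + 1]:
                totals[a] += shared
            j = k + 1
    return {a: totals[a] / n_datasets for a in algs}
```

For HV, higher values rank first; for IGD and MCE the same routine would be called with `higher_is_better=False`.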
Table 5. Mean MCE and SFN performances of each algorithm on each dataset, with insignificant differences prefixed by ⋆; each cell shows the mean MCE followed by the SFN in parentheses.

| No. | Data | DTEA | SparseEA | DAEA | GFMMOEA | MOEAHD | ARMOEA | NSGAII | HypE |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Wine | 1.45e-2 (5) | ⋆2.01e-2 (6) | ⋆1.64e-2 (6) | 2.45e-2 (6) | ⋆1.57e-2 (5) | ⋆2.26e-2 (5) | ⋆1.95e-2 (6) | ⋆1.95e-2 (6) |
| 2 | Australian | 1.46e-1 (4) | ⋆1.50e-1 (4) | ⋆1.46e-1 (4) | ⋆1.51e-1 (5) | ⋆1.47e-1 (4) | ⋆1.47e-1 (4) | ⋆1.46e-1 (4) | ⋆1.47e-1 (4) |
| 3 | Climate | 6.42e-2 (5) | ⋆6.56e-2 (5) | ⋆6.44e-2 (5) | 7.18e-2 (5) | ⋆6.56e-2 (5) | ⋆6.56e-2 (5) | ⋆6.54e-2 (5) | ⋆6.46e-2 (5) |
| 4 | German | 2.64e-1 (6) | ⋆2.64e-1 (6) | ⋆2.65e-1 (6) | 2.75e-1 (6) | ⋆2.64e-1 (6) | ⋆2.66e-1 (6) | ⋆2.64e-1 (6) | ⋆2.65e-1 (6) |
| 5 | Breast | 3.04e-2 (6) | ⋆3.26e-2 (5) | ⋆3.18e-2 (6) | 3.33e-2 (5) | 3.33e-2 (5) | ⋆3.16e-2 (5) | ⋆3.16e-2 (5) | ⋆3.08e-2 (5) |
| 6 | Ionosphere | 9.43e-2 (3) | ⋆9.87e-2 (3) | ⋆9.65e-2 (3) | 1.08e-1 (4) | ⋆9.43e-2 (3) | ⋆9.62e-2 (3) | ⋆9.84e-2 (3) | ⋆9.65e-2 (3) |
| 7 | Lung | 2.63e-1 (6) | ⋆2.73e-1 (6) | ⋆2.70e-1 (6) | 3.33e-1 (6) | 3.20e-1 (4) | ⋆2.90e-1 (7) | ⋆3.03e-1 (5) | ⋆2.93e-1 (5) |
| 8 | Sonar | 1.85e-1 (8) | ⋆1.92e-1 (8) | ⋆1.96e-1 (7) | 2.12e-1 (6) | 2.04e-1 (5) | 2.05e-1 (6) | ⋆1.98e-1 (6) | ⋆1.98e-1 (8) |
| 9 | Hill_Valley | 3.97e-1 (5) | ⋆4.01e-1 (5) | ⋆4.00e-1 (5) | 4.22e-1 (10) | 4.04e-1 (4) | 4.15e-1 (8) | 4.09e-1 (6) | 4.04e-1 (4) |
| 10 | Semeion | 1.35e-1 (80) | ⋆1.40e-1 (97) | ⋆1.31e-1 (82) | ⋆1.38e-1 (89) | ⋆1.37e-1 (87) | ⋆1.37e-1 (84) | ⋆1.37e-1 (88) | ⋆1.38e-1 (79) |
| 11 | Madelon | 9.73e-2 (7) | 9.95e-2 (8) | ⋆9.77e-2 (7) | 3.09e-1 (100) | 2.74e-1 (66) | 2.99e-1 (92) | 2.92e-1 (89) | 2.75e-1 (71) |
| 12 | SRBCT | 4.00e-2 (5) | ⋆6.13e-2 (4) | 7.20e-2 (4) | 6.40e-1 (579) | 6.40e-1 (854) | 6.40e-1 (684) | 6.24e-1 (582) | 6.40e-1 (579) |
| 13 | Leukemia1 | 5.00e-2 (2) | ⋆5.15e-2 (3) | ⋆5.76e-2 (3) | 1.71e-1 (1980) | 1.67e-1 (2078) | 1.70e-1 (2084) | 1.64e-1 (1976) | 1.76e-1 (2000) |
| 14 | DLBCL | 8.12e-2 (3) | 1.13e-1 (3) | ⋆1.04e-1 (2) | ⋆6.09e-2 (2031) | ⋆6.81e-2 (2142) | ⋆6.23e-2 (2144) | ⋆6.81e-2 (2030) | 5.94e-2 (2064) |
| 15 | Brain1 | 2.09e-1 (4) | 2.42e-1 (3) | 2.35e-1 (5) | 2.59e-1 (2147) | 2.59e-1 (2387) | 2.59e-1 (2282) | 2.59e-1 (2141) | 2.59e-1 (2154) |
| 16 | Leukemia2 | 5.91e-2 (2) | ⋆8.03e-2 (2) | ⋆6.52e-2 (2) | 1.42e-1 (2688) | 1.42e-1 (2889) | 1.32e-1 (2830) | 1.42e-1 (2692) | 1.36e-1 (2707) |
| 17 | CNS | 3.17e-1 (3) | ⋆3.41e-1 (5) | ⋆3.35e-1 (2) | 4.28e-1 (2750) | 4.31e-1 (2837) | 4.31e-1 (2877) | 4.07e-1 (2739) | 4.15e-1 (2795) |
| 18 | Arcene | 1.43e-1 (7) | ⋆1.57e-1 (27) | ⋆1.49e-1 (7) | 4.33e-1 (4037) | 4.33e-1 (4533) | 4.33e-1 (4159) | 4.33e-1 (4036) | 4.33e-1 (4037) |
| 19 | Brain2 | 2.93e-1 (3) | ⋆3.31e-1 (8) | ⋆3.13e-1 (3) | 3.89e-1 (4271) | 3.82e-1 (4407) | 3.73e-1 (4402) | 3.80e-1 (4275) | 3.82e-1 (4319) |
| 20 | Prostate | 5.70e-2 (3) | 8.06e-2 (13) | ⋆6.34e-2 (4) | 2.53e-1 (4338) | 2.48e-1 (4475) | 2.47e-1 (4457) | 2.55e-1 (4353) | 2.56e-1 (4377) |
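Table 5 reports both quantities that bi-objective feature selection trades off: the minimum classification error (MCE) and the number of selected features (SFN). A sketch of one common formulation of the two minimized objectives, where a binary mask encodes a feature subset; `error_of` is a hypothetical stand-in for the wrapper classifier's error estimate, and the paper's exact normalization may differ:

```python
def bi_objectives(mask, error_of):
    """Two minimization objectives for a binary feature mask.

    f1: fraction of selected features (SFN / total features)
    f2: classification error of the subset, supplied by the caller
    """
    sfn = sum(mask)  # selected feature number, the SFN reported in Table 5
    return sfn / len(mask), error_of(mask)
```

A candidate selecting 2 of 4 features with a 25% wrapper error would evaluate to the objective vector (0.5, 0.25).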
Table 6. Mean HV performances (mean ± standard deviation) of each algorithm variant on each dataset, with performances better than DTEA_Main prefixed by Δ.

| No. | Data | DTEA | DTEA_Back | DTEA_ROI | DTEA_Main |
|---|---|---|---|---|---|
| 1 | Wine | Δ8.7802e-1 ±1.02e-2 | 8.7756e-1 ±1.03e-2 | Δ8.7802e-1 ±1.02e-2 | 8.7756e-1 ±1.03e-2 |
| 2 | Australian | Δ7.9918e-1 ±5.48e-3 | 7.9894e-1 ±5.19e-3 | Δ7.9918e-1 ±5.48e-3 | 7.9894e-1 ±5.19e-3 |
| 3 | Climate | 8.9031e-1 ±6.44e-3 | 8.9031e-1 ±6.44e-3 | 8.9031e-1 ±6.44e-3 | 8.9031e-1 ±6.44e-3 |
| 4 | German | Δ7.2807e-1 ±7.83e-3 | 7.2687e-1 ±6.73e-3 | Δ7.2807e-1 ±7.83e-3 | 7.2687e-1 ±6.73e-3 |
| 5 | Breast | Δ9.3969e-1 ±4.40e-3 | 9.3938e-1 ±4.84e-3 | Δ9.3969e-1 ±4.40e-3 | 9.3938e-1 ±4.84e-3 |
| 6 | Ionosphere | Δ8.8640e-1 ±8.75e-3 | 8.8614e-1 ±8.62e-3 | Δ8.8640e-1 ±8.75e-3 | 8.8614e-1 ±8.62e-3 |
| 7 | Lung | 7.3808e-1 ±6.23e-2 | 7.4643e-1 ±6.37e-2 | 7.3808e-1 ±6.23e-2 | 7.4643e-1 ±6.37e-2 |
| 8 | Sonar | Δ8.1310e-1 ±1.96e-2 | 8.0758e-1 ±2.05e-2 | Δ8.1310e-1 ±1.96e-2 | 8.0758e-1 ±2.05e-2 |
| 9 | Hill_Valley | 6.3204e-1 ±7.98e-3 | 6.3196e-1 ±1.14e-2 | Δ6.3269e-1 ±9.52e-3 | 6.3261e-1 ±1.25e-2 |
| 10 | Semeion | Δ8.4357e-1 ±6.83e-3 | Δ8.4357e-1 ±6.83e-3 | 7.9829e-1 ±1.56e-2 | 7.9829e-1 ±1.56e-2 |
| 11 | Madelon | Δ9.0843e-1 ±3.47e-3 | Δ9.0835e-1 ±2.40e-3 | 9.0629e-1 ±3.82e-3 | 9.0632e-1 ±3.80e-3 |
| 12 | SRBCT | Δ9.6300e-1 ±3.57e-2 | Δ9.5694e-1 ±4.27e-2 | 5.0049e-1 ±1.64e-1 | 5.0049e-1 ±1.64e-1 |
| 13 | Leukemia1 | Δ9.5436e-1 ±3.49e-2 | Δ9.5848e-1 ±2.66e-2 | 6.5524e-1 ±3.43e-2 | 6.5524e-1 ±3.43e-2 |
| 14 | DLBCL | Δ9.2604e-1 ±4.24e-2 | Δ9.1945e-1 ±4.81e-2 | 7.3657e-1 ±3.28e-2 | 7.3657e-1 ±3.28e-2 |
| 15 | Brain1 | Δ8.1017e-1 ±2.57e-2 | Δ8.1017e-1 ±3.90e-2 | 5.9503e-1 ±1.16e-2 | 5.9503e-1 ±1.16e-2 |
| 16 | Leukemia2 | Δ9.4614e-1 ±4.75e-2 | Δ9.4201e-1 ±6.20e-2 | 6.6568e-1 ±1.98e-2 | 6.6568e-1 ±1.98e-2 |
| 17 | CNS | Δ7.1200e-1 ±6.10e-2 | Δ7.0022e-1 ±7.25e-2 | 4.6720e-1 ±5.10e-2 | 4.6720e-1 ±5.10e-2 |
| 18 | Arcene | Δ8.6957e-1 ±2.37e-2 | Δ8.6554e-1 ±2.21e-2 | 4.4171e-1 ±4.14e-3 | 4.4171e-1 ±4.14e-3 |
| 19 | Brain2 | Δ7.3324e-1 ±6.86e-2 | Δ7.2920e-1 ±6.11e-2 | 4.6463e-1 ±3.37e-2 | 4.6463e-1 ±3.37e-2 |
| 20 | Prostate | Δ9.4809e-1 ±3.05e-2 | Δ9.4418e-1 ±2.59e-2 | 5.5444e-1 ±2.27e-2 | 5.5444e-1 ±2.27e-2 |

Share and Cite

Xu, H. A Dynamic Tasking-Based Evolutionary Algorithm for Bi-Objective Feature Selection. Mathematics 2024, 12, 1431. https://doi.org/10.3390/math12101431