Article

A Novel Binary QUasi-Affine TRansformation Evolutionary (QUATRE) Algorithm

1 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2 College of Science and Engineering, Flinders University, 1284 South Road, Clovelly Park, SA 5042, Australia
3 School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
4 Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(5), 2251; https://doi.org/10.3390/app11052251
Submission received: 20 January 2021 / Revised: 24 February 2021 / Accepted: 26 February 2021 / Published: 4 March 2021

Abstract

The QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm generalizes the differential evolution (DE) algorithm to matrix form. QUATRE was originally designed for a continuous search space, but many practical applications are binary optimization problems. Therefore, we designed a novel binary version of QUATRE. The proposed binary algorithm is implemented in two different approaches. In the first approach, the new individuals produced by the mutation and crossover operations are binarized. In the second approach, binarization is performed after mutation, and the crossover operation with other individuals follows. Transfer functions are critical to binarization, so four families of transfer functions are introduced for the proposed algorithm. Their behaviour is then analysed and an improved set of transfer functions is proposed. Furthermore, in order to balance exploration and exploitation, a new linearly increasing scale factor is proposed. Experiments on 23 benchmark functions show that the two proposed approaches are superior to state-of-the-art algorithms. Moreover, we applied them to dimensionality reduction of hyperspectral images (HSI) in order to test the ability of the proposed algorithm to solve practical problems. The experimental results on HSI imply that the proposed methods outperform Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).

1. Introduction

An optimization problem refers to determining the values of decision variables under given constraints so that the objective function reaches its optimal value. Traditional methods for solving optimization problems include the simplex method, the steepest descent method, the trust region method, the penalty function method, etc. However, traditional optimization algorithms usually need to satisfy some specific conditions. For example, some algorithms require that the objective function be continuous or differentiable, some require that the problem be a convex optimization problem, and others require the constraints to be linear. Such conditions are often difficult to satisfy in practical applications. Therefore, in the 1940s, heuristic algorithms emerged; based on intuition or experience, they construct a model highly similar to the problem to be solved and give a feasible solution at an acceptable cost. However, the deviation of the obtained feasible solution from the optimal solution cannot be predicted: heuristic algorithms reduce computational complexity at the expense of accuracy. In the 1960s, inspired by bionics, meta-heuristic algorithms appeared, drawing on random phenomena in nature. Meta-heuristic algorithms combine random search with local search in order to escape local optima with some probability. They have become one of the hottest topics in optimization, for they impose no special conditions on the objective function, yet satisfactory solutions can often be obtained.
According to their underlying mechanisms, meta-heuristic algorithms can be coarsely divided into two categories: those inspired by biological processes and those inspired by physical or mathematical models. The former can be further separated into swarm intelligence algorithms, based on animal social behaviours, and evolutionary algorithms, based on the theory of evolution. Among the algorithms inspired by physical or mathematical models, Simulated Annealing (SA) [1,2,3] was inspired by the molecular state and internal energy of solids cooling from high to low temperature; it is easy to compute but hard to converge. The Gravitational Search Algorithm (GSA) [4,5,6] was proposed based on Newtonian gravity and the laws of motion; it has strong exploitation ability but weak exploration ability. The Sine Cosine Algorithm (SCA) [7,8,9] creates multiple initial random candidate solutions and then uses a mathematical model based on sine and cosine functions to make these solutions fluctuate towards the optimal solution or in the opposite direction. Among swarm intelligence algorithms, Particle Swarm Optimization (PSO) [10,11,12,13] simulates the foraging behaviour of birds to obtain the optimal solution; notably, it was the first swarm intelligence algorithm. Ant Colony Optimization (ACO) [14,15,16] was inspired by the foraging behaviour of ant colonies; its parameter setting is complicated, and improper settings easily lead it away from high-quality solutions. Cat Swarm Optimization (CSO) [17,18] mimics cats' hunting behaviour. The Bat Algorithm (BA) [19,20,21] was proposed based on the echolocation behaviour of bats. Pigeon Inspired Optimization (PIO) [22,23] was designed by mimicking the homing behaviour of pigeons; it requires very few adjustable parameters and is easy to implement.
Symbiotic Organism Search (SOS) [24,25] imitates the interactive relationships between different organisms in nature that enhance their adaptability to the environment and thus the survival ability of the population. The Grey Wolf Optimizer (GWO) [26,27] simulates the hierarchy and predation behaviour of wolves; however, it easily falls into local extrema and is hard to converge. Cuckoo Search (CS) [28,29,30,31] mimics the brood parasitism of cuckoos and has strong exploration ability. The Monkey King Evolutionary (MKE) algorithm [32,33] was inspired by the actions of the Monkey King, a character in the famous Chinese mythological novel Journey to the West. Among evolutionary algorithms, the Genetic Algorithm (GA) [34,35,36,37] was proposed based on the theory of natural selection and the principles of genetics; the optimal solution is obtained through mutation, crossover and selection. DE [38,39,40,41] was proposed to improve on GA; the difference is that DE generates the mutation vector from difference vectors of the parent generation and performs crossover with parent individual vectors to generate new individuals.
In particular, QUATRE was proposed by Meng, Z. [42] to address the drawback that DE does not achieve an equilibrium search of the search space without prior knowledge. QUATRE generalizes the crossover operation of DE from vector to matrix form; therefore, DE can be regarded as a special case of QUATRE. Subsequently, a series of algorithms based on QUATRE was proposed owing to its good performance. The relations between QUATRE and other meta-heuristic algorithms, including PSO variants and DE variants, were discussed by Meng, Z. [43], and a QUATRE-based framework for gesture recognition was also proposed by Meng, Z. [44]. A compact QUATRE algorithm that uses a pairwise competition mechanism to enhance performance was proposed in [45], and an enhanced mutation strategy with a time-stamp mechanism for QUATRE was presented by Meng, Z. and Pan, J.S. [46]. The reason why exploration bias still exists in the binomial crossover was discussed in [47], where a novel QUATRE structure with new standards and adaptation schemes was proposed. In addition, a bi-population QUasi-Affine TRansformation Evolution (BP-QUATRE) algorithm was proposed, which divides the population into two subpopulations with different sorting and mutation strategies in each subpopulation [48], and a multi-group multi-choice communication strategy was proposed to overcome the tendency of the original QUATRE to fall into local optima, with an application to wireless sensor network node localization [49]. A novel algorithm combining QUATRE with PIO to avoid falling into local optima was proposed by Sun, X.X. [50].
Practical optimization problems are various: some are high-dimensional, some are many-objective, and others must run with minimal memory. Consequently, many excellent variants were proposed to apply meta-heuristic algorithms to practical problems. A new surrogate-assisted PSO model was used to deal with high-dimensional problems [12]; S. Qin et al. proposed a modified PSO based on a decomposition framework with different ideal points on each reference vector to solve many-objective optimization problems [13]; P.-C. Song et al. developed a compact CSO scheme to save memory on unmanned robots [30]. Other practical problems, such as feature extraction and the 0–1 knapsack problem, are binary optimization problems, yet most meta-heuristic algorithms are designed for a continuous search space. Therefore, it is essential to convert meta-heuristic algorithms into binary versions.
The continuous PSO was converted into a binary version for the first time by Khanesar, M.A. [51]. A binary version of CS was proposed by Rodrigues, D. [52]. In Reference [53], a hyper-learning binary Dragonfly Algorithm (DA) was designed for coronavirus disease 2019 (COVID-19). In Reference [54], an improved binary GWO was discussed and used for feature selection by Hu, P. An optimized binary BA was presented and applied to the classification of white blood cells by Gupta, D. [55]. The GSA was converted into binary form based on mutual information by Bostani, H. [56] for intrusion detection systems. Almost all prominent meta-heuristic algorithms have binary variants. However, QUATRE was designed for continuous optimization problems and had not yet been converted into a binary version. Therefore, in this manuscript we show for the first time how to change the QUATRE algorithm into binary form.
Remote sensing is an important technology, widely used both in the military and by civilians [57,58]. On the military side, it is used in reconnaissance, missile early warning, military surveying and mapping, marine surveillance, and meteorological observation. On the civil side, it is used in earth resource surveys, vegetation classification, surveys of crop pests, crop diseases and crop yields, environmental pollution monitoring, marine development, and earthquake monitoring. At present, high resolution remote sensing has become the mainstream of remote sensing [59]. Hyperspectral remote sensing is an important aspect of high resolution remote sensing: a technique to obtain approximately continuous spectral data with an imaging spectrometer within the visible, near-infrared, mid-infrared and thermal infrared bands of the electromagnetic spectrum. Hyperspectral remote sensing can obtain hundreds of spectral bands at the same time, each with a wavelength range of only about 10 nm. So many bands provide extremely rich spectral information for ground object information extraction, so hyperspectral images are often used for fine object classification. Nevertheless, the "Hughes" phenomenon arises with high dimensionality [60,61]: for a fixed number of samples, the classification accuracy first increases and then decreases as the data dimension grows. The Hughes phenomenon has seriously hindered the application and popularization of hyperspectral remote sensing technology. Therefore, dimensionality reduction is often needed before fine classification.
The main contributions of this paper are concluded as follows:
(1) Two approaches are presented to convert QUATRE to a binary version, and four families of transfer functions are used by the two approaches, respectively.
(2) The search space of the proposed binary QUATRE is analyzed, and new transfer functions are introduced.
(3) A new scaling factor is proposed in order to balance exploration and exploitation.
(4) The proposed algorithms are evaluated on 23 benchmark functions and on HSI.
(5) A new fitness function for dimensionality reduction of HSI is proposed.
The rest of the paper is organized as follows. Section 2 describes related works, including QUATRE and transfer functions. Section 3 describes the two proposed binary QUATRE approaches. Section 4 presents experimental results and analysis on the benchmark test functions and HSI. Section 5 summarizes the main work of the paper and gives suggestions for further work.

2. Related Works

2.1. QUATRE

The algorithm is named QUATRE because its evolution equation has an affine-transformation-like form. In geometry, an affine transformation maps one vector space to another through a linear transformation coupled with a translation; it can be written as $X' = \mathbf{M}X + B$. Let $X = [x_1, x_2, \ldots, x_{ps}]^T$ denote the position matrix of the particle population, where $ps$ is the population size. A particle is written as $x_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]$, where $i = 1, 2, \ldots, ps$ and $D$ is the dimension of the problem. The exact evolution equation used in QUATRE is shown in Equation (1), where $\otimes$ denotes component-wise multiplication, the same as the ".*" operation in Matlab, and $X_G$, $X_{G+1}$ are the position matrices at the $G$th and $(G+1)$th generations.
$$X_{G+1} = \mathbf{M} \otimes X_G + \overline{\mathbf{M}} \otimes B \quad (1)$$
Matrix B denotes the mutation matrix of the particles, which can be generated by the six schemes shown in Table 1. Among them, $F \in (0, \infty)$ is a scaling factor, but usually we constrain $F \in [0, 2]$. F controls the variation rate, and its value determines the balance between exploration and exploitation. $X_{r1,G}$, $X_{r2,G}$, $X_{r3,G}$, $X_{r4,G}$ and $X_{r5,G}$ are random matrices generated by permutating the row vectors of the position matrix $X_G$. $X_{gbest,G}$ is defined as in Equation (2), where $x_{gbest,G}$ is a vector denoting the global best particle in the $G$th generation. As shown in Table 1, the first mutation strategy is $B = X_{gbest,G} + F(X_{r1,G} - X_{r2,G})$, denoted QUATRE/best/1, meaning that the vector to be perturbed is the global best solution $X_{gbest,G}$ and that only one difference vector $(X_{r1,G} - X_{r2,G})$ is included.
$$X_{gbest,G} = [x_{gbest,G},\, x_{gbest,G},\, \ldots,\, x_{gbest,G}]^T \quad (2)$$
M can be considered the selection matrix, made up of 0s and 1s. $\overline{\mathbf{M}}$ denotes the binary inverse of the matrix M: as shown in Equation (3), every element 0 is inverted to 1 and every element 1 is inverted to 0.
$$\mathbf{M} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix}, \quad \overline{\mathbf{M}} = \begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \quad (3)$$
The initial value of M is the lower triangular matrix, shown as $\mathbf{M}_{tmp}$ in Equation (4). Then, by randomly swapping the rows and columns of $\mathbf{M}_{tmp}$, we obtain M. In this way, QUATRE can achieve an equilibrium search of the search space.
$$\mathbf{M}_{tmp} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix} = \mathbf{M} \quad (4)$$
The number of rows in M is the population size $ps$, while the number of columns is the problem dimension D. If $ps > D$, we extend $\mathbf{M}_{tmp}$ according to $ps$: as shown in Equation (5), if $ps = k \times D + i$, the first $k \times D$ rows of $\mathbf{M}_{tmp}$ consist of k lower triangular $D \times D$ matrices, and the last i rows are the first i rows of the $D \times D$ lower triangular matrix. M is then obtained by randomly swapping rows and columns. If $ps < D$, a similar operation extends $\mathbf{M}_{tmp}$ according to D.
$$\mathbf{M}_{tmp} = \begin{bmatrix} \mathbf{L}_D \\ \vdots \\ \mathbf{L}_D \\ \mathbf{L}_D[1{:}i,\,:] \end{bmatrix} \sim \mathbf{M} \quad (5)$$

where $\mathbf{L}_D$ denotes the $D \times D$ lower triangular matrix of ones and $\mathbf{L}_D[1{:}i,\,:]$ its first i rows.
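The construction of M described above can be sketched in Python with NumPy (a minimal illustration, not the authors' code; the values of `ps` and `D` are arbitrary):

```python
import numpy as np

def build_M(ps, D, rng):
    """Build the selection matrix M: tile D x D lower-triangular blocks of
    ones down to ps rows, then randomly permute rows and columns."""
    L = np.tril(np.ones((D, D), dtype=int))   # lower-triangular block of ones
    reps = -(-ps // D)                        # ceil(ps / D) blocks
    M_tmp = np.vstack([L] * reps)[:ps, :]     # keep the first ps rows
    M = M_tmp[rng.permutation(ps), :]         # random row swap
    M = M[:, rng.permutation(D)]              # random column swap
    return M

rng = np.random.default_rng(0)
M = build_M(ps=6, D=4, rng=rng)
M_bar = 1 - M                                 # binary inverse of M
```

Because the swaps are permutations, M keeps exactly the same number of ones as $\mathbf{M}_{tmp}$, which is what makes the search equilibrium possible.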
The pseudo code of the QUATRE algorithm is shown in Algorithm 1, where $X_{pbest,G} = [x_{1pb,G}, x_{2pb,G}, \ldots, x_{ipb,G}, \ldots, x_{ps\,pb,G}]^T$ is the personal best position matrix and $x_{ipb,G}$, $i = 1, 2, \ldots, ps$, is a vector holding the personal best position of particle i up to the $G$th generation.
Algorithm 1: Pseudo code of QUATRE.
input:
The dimension D, population size $ps$, max iterations $MAX_G$,
mutation schema to calculate B, and the fitness function f ( X ) .
Initialization:
Initialize searching space V, G = 1 , position matrix X G
X p b e s t , G = X G and calculation X g b e s t , G .
Iteration:
1: while G < M A X G | ! s t o p C r i t e r i o n do
2:    Generate matrix M by Equation (5), calculate M ¯
3:    Calculate mutation matrix B according to the mutation schema.
4:     $X_{G+1} = \mathbf{M} \otimes X_G + \overline{\mathbf{M}} \otimes B$
5:    for i = 1 : ps do
6:       if f ( x i , G + 1 ) optimal than f ( x i p b , G ) then
7:              x i p b , G + 1 = x i , G + 1
8:       else
9:              x i p b , G + 1 = x i p b , G
10:      end if
11:   end for
12:    X G + 1 = X p b e s t , G + 1
13:    x g b e s t , G + 1 = o p t { X p b e s t , G + 1 } .
14:   Update X g b e s t , G + 1 by Equation (2).
15:    G = G + 1
16: end while
output:
The global optima X g b e s t , G , f ( X g b e s t , G ) .

2.2. Transfer Function

In the binary version of a meta-heuristic algorithm, the transfer function plays a very important role, because its value gives the probability that an element of the position vector takes 0/1, or the probability that the element flips from 0 to 1. Therefore, a transfer function must be bounded within [0, 1]. In this section, four families of transfer functions are introduced.
In Reference [62], the sigmoid transfer function was first applied to binary PSO by Kennedy, J. The particles of binary PSO can take only the values 0 or 1 in their position vector, as governed by Equation (6), where $v_{ik}(t)$ is the velocity of particle i in dimension k at iteration t.
$$T(v_{ik}(t)) = \frac{1}{1 + e^{-v_{ik}(t)}} \quad (6)$$
After converting the continuous velocity to a probability value, the position of particle i in dimension k at iteration $t+1$, $x_{ik}(t+1)$, is updated with the probability value by Equation (7), where $rand \in [0, 1]$ is a random variable. In this strategy, the sigmoid function forces particles to take values of 0 or 1 according to their velocities.
$$x_{ik}(t+1) = \begin{cases} 0, & rand < T(v_{ik}(t)) \\ 1, & rand \geq T(v_{ik}(t)) \end{cases} \quad (7)$$
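Equations (6) and (7) amount to the following sketch (a minimal illustration, not a reference implementation; the `rand` draw is passed in explicitly for clarity):

```python
import math

def sigmoid_transfer(v):
    # Equation (6): map a continuous velocity to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def update_position_s(v, rand):
    # Equation (7): take 0 when rand falls below T(v), otherwise 1
    return 0 if rand < sigmoid_transfer(v) else 1
```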
Subsequently, in Reference [63], the S-shaped and V-shaped families of transfer functions were proposed by Mirjalili, S.; their expressions are shown in Table 2 and Table 3. The S-shaped family extends the sigmoid function and therefore uses Equation (7) to update the position. The V-shaped family, however, uses the different update strategy shown in Equation (8), where $x_{ik}(t)^{-1}$ denotes the binary inverse operation. In this strategy, particles stay in their current positions when their velocity values are low and switch to their complements when the velocity values are high.
$$x_{ik}(t+1) = \begin{cases} x_{ik}(t)^{-1}, & rand < T(v_{ik}(t)) \\ x_{ik}(t), & rand \geq T(v_{ik}(t)) \end{cases} \quad (8)$$
The newer U-shaped [63] and Z-shaped [64] families of transfer functions show good performance in binary PSO; their expressions are shown in Table 4 and Table 5. The position update strategy for the U-shaped and Z-shaped families is Equation (8), the same as for the V-shaped family.
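The flip-or-keep rule of Equation (8) can be sketched as follows. Since the exact Table 3 formulas are not reproduced here, $|\tanh(v)|$ is used as a representative V-shaped function (an assumption for illustration only):

```python
import math

def v_transfer(v):
    # a representative V-shaped transfer function: |tanh(v)|
    return abs(math.tanh(v))

def update_position_v(x_old, v, rand):
    # Equation (8): flip the bit when rand < T(v), otherwise keep it
    return 1 - x_old if rand < v_transfer(v) else x_old
```

With a large |v| the bit almost always flips; with v near 0 it almost always stays, which matches the stated behaviour.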

3. Proposed Binary QUATRE Algorithm

In this section, a novel binary QUasi-Affine TRansformation Evolutionary (BQUATRE) algorithm is proposed for dimensionality reduction on HSI. First, a mathematical analysis of BQUATRE is performed, and then improved versions of the four families of transfer functions are proposed. Furthermore, to balance exploration and exploitation, a new linearly increasing scale factor is presented. Finally, the two approaches of the BQUATRE algorithm are described in detail.

3.1. Mathematical Analysis

Suppose that the position matrix $X_G$ of BQUATRE at generation G consists of 0s and 1s. Then $X_{r1,G}$, $X_{r2,G}$, $X_{r3,G}$, $X_{r4,G}$, $X_{r5,G}$ and $X_{gbest,G}$ at the $G$th generation are also binary matrices. The mutation matrix B is calculated according to the schemes shown in Table 1. However, since the scale factor F ranges over $[0, 2]$, the mutation matrix B will not be binary, and neither will the position matrix $X_{G+1}$ at the $(G+1)$th generation obtained by Equation (1). For further analysis, we take the first mutation strategy in Table 1 as an example, Equation (9).
$$B = X_{gbest,G} + F(X_{r1,G} - X_{r2,G}) \quad (9)$$
There are eight combinations about the values of X g b e s t , G , X r 1 , G and X r 2 , G , which are described as follows:
(1) If $X_{gbest,G} = 0$, $X_{r1,G} = 0$ and $X_{r2,G} = 0$: $B = 0 + F(0 - 0) = 0$.
(2) If $X_{gbest,G} = 0$, $X_{r1,G} = 0$ and $X_{r2,G} = 1$: $B = 0 + F(0 - 1) = -F$, thereby $B \in [-2, 0]$.
(3) If $X_{gbest,G} = 0$, $X_{r1,G} = 1$ and $X_{r2,G} = 0$: $B = 0 + F(1 - 0) = F$, thereby $B \in [0, 2]$.
(4) If $X_{gbest,G} = 0$, $X_{r1,G} = 1$ and $X_{r2,G} = 1$: $B = 0 + F(1 - 1) = 0$.
(5) If $X_{gbest,G} = 1$, $X_{r1,G} = 0$ and $X_{r2,G} = 0$: $B = 1 + F(0 - 0) = 1$.
(6) If $X_{gbest,G} = 1$, $X_{r1,G} = 0$ and $X_{r2,G} = 1$: $B = 1 + F(0 - 1) = 1 - F$, thereby $B \in [-1, 1]$.
(7) If $X_{gbest,G} = 1$, $X_{r1,G} = 1$ and $X_{r2,G} = 0$: $B = 1 + F(1 - 0) = 1 + F$, thereby $B \in [1, 3]$.
(8) If $X_{gbest,G} = 1$, $X_{r1,G} = 1$ and $X_{r2,G} = 1$: $B = 1 + F(1 - 1) = 1$.
From the above analysis, it can be concluded that $B \in [-2, 3]$. The same conclusion can be drawn for the mutation strategies QUATRE/target/1 and QUATRE/rand/1. For the last three mutation strategies, QUATRE/target/2, QUATRE/rand/2 and QUATRE/best/2, it can be concluded in the same way that $B \in [-4, 5]$.
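Since B is linear in F, its extremes occur at the endpoints $F \in \{0, 2\}$, so the case analysis can be checked exhaustively. A small sketch enumerating all binary combinations for schemes with one or two difference vectors:

```python
from itertools import product

def b_range(n_diff):
    """Enumerate B = x0 + F * (sum of n_diff binary differences) at the
    extreme scale factors F = 0 and F = 2, and return (min, max)."""
    vals = []
    for bits in product((0, 1), repeat=1 + 2 * n_diff):
        x0, rest = bits[0], bits[1:]
        diff = sum(rest[2 * j] - rest[2 * j + 1] for j in range(n_diff))
        for F in (0.0, 2.0):
            vals.append(x0 + F * diff)
    return min(vals), max(vals)
```

Running `b_range(1)` reproduces the interval $[-2, 3]$ for one-difference schemes and `b_range(2)` the interval $[-4, 5]$ for two-difference schemes.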

3.2. Improved Four Families of Transfer Functions

In this section, the results of Section 3.1 are discussed further. As in Section 3.1, we take Equation (9) as an example, so $B \in [-2, 3]$. As shown in Figure 1, the four dotted lines represent the original S-shaped family of transfer functions defined in Table 2; they do not fit the search space $[-2, 3]$. The centre of the search space is 0.5, while the centre of the S-shaped functions is 0. Another problem is that the maximal and minimal values of these functions over $[-2, 3]$ do not reach 1 and 0, although we update the position matrix by Equation (7), in which rand is generated in [0, 1]. Therefore, we shift the functions to the centre of the search space and then stretch their values to cover [0, 1]. The exact expressions of the four improved S-shaped transfer functions are given in Table 6, and the solid lines in Figure 1 show their curves.
Similar operations are performed on the V-shaped and Z-shaped families of transfer functions. Since the maximal and minimal values of the U-shaped family are already 1 and 0, their values do not need to be stretched. Table 7, Table 8 and Table 9 give the exact formulas, while Figure 2, Figure 3 and Figure 4 show the curves of the three improved families.
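The shift-and-stretch idea can be illustrated generically (a sketch only: the exact Table 6 formulas are not reproduced here, and this `improved` helper is a hypothetical construction that shifts a transfer function to the centre of $[-2, 3]$ and rescales its values to [0, 1] on that interval):

```python
import math

def s1(x):
    # an original S-shaped (sigmoid-style) transfer function centred at 0
    return 1.0 / (1.0 + math.exp(-x))

def improved(transfer, lo=-2.0, hi=3.0):
    """Shift `transfer` to the centre of the search space [lo, hi] and
    linearly stretch its values so they span [0, 1] on that interval."""
    centre = (lo + hi) / 2.0                  # 0.5 for B in [-2, 3]
    shifted = lambda x: transfer(x - centre)
    t_lo, t_hi = shifted(lo), shifted(hi)
    return lambda x: (shifted(x) - t_lo) / (t_hi - t_lo)

s1_improved = improved(s1)
```

The improved function is 0 at the lower bound, 1 at the upper bound, and 0.5 at the centre of the search space, matching the properties described above.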

3.3. New Scaling Factor Based on Exploration and Exploitation

The convergence process of a meta-heuristic algorithm can be regarded as two stages: exploration and exploitation. In the early stage, exploration helps to search more of the space and jump out of local optima. In the later stage, an approximate optimum has already been found, and exploitation helps to locate the optimal solution. Similarly, in the binary version, positions need to be able to switch between 0 and 1 quickly in the beginning and slowly at the end. For continuous QUATRE, Liu, N. proposed a linearly decreasing scale factor, shown in Equation (10) [48], where $F_{max}$ and $F_{min}$ are the predetermined maximum and minimum values of the scale factor F. Usually, we set $F \in [0, 2]$, so $F_{max} = 2$ and $F_{min} = 0$. G is the current generation number and $MAX_G$ the maximum number of generations.
$$F = F_{max} - (F_{max} - F_{min}) \times \frac{G}{MAX_G} \quad (10)$$
However, in the binary version things are different, as shown by Hu, P. [54] and Liu, J. [65]: the transfer function also determines exploration and exploitation. The slope of the transfer function indicates the switching speed of the position and can therefore be used to evaluate the velocity trend. Figure 5 shows the curves of the original $S_1(x)$, $V_1(x)$, $U_1(x)$ and $Z_1(x)$ functions described in Table 2, Table 3, Table 4 and Table 5. The slope is small when $|x|$ is large and large when $|x|$ is small.
Since the transfer function and the scale factor F jointly determine the conversion probability in the proposed binary QUATRE, a linearly increasing scale factor is presented in this manuscript, as shown in Equation (11).
$$F = (F_{max} - F_{min}) \times \frac{G}{MAX_G} \quad (11)$$
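Equation (11) is a one-line schedule; a minimal sketch (default values follow the paper's usual setting $F \in [0, 2]$):

```python
def scale_factor(G, MAX_G, F_max=2.0, F_min=0.0):
    # Equation (11): scale factor grows linearly from F_min to F_max
    return (F_max - F_min) * G / MAX_G
```

Early generations thus get a small F (small B range, slow bit switching aided by the transfer-function slope), while late generations get a large F.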

3.4. Binary QUATRE Algorithm—Approach 1 (BQUATRE1)

The first approach of the binary QUATRE algorithm (BQUATRE1) converts the continuous position matrix $X_{G+1}$ at the $(G+1)$th generation to binary. In this section, BQUATRE1 is described in detail.
Suppose the position matrix $X_G$ at the $G$th generation has been obtained. First, the cooperative search matrix $\mathbf{M}_{tmp}$ is generated and M is calculated according to Equation (5); $\overline{\mathbf{M}}$ is obtained by binary inversion of M.
After that, the scale factor F is obtained from Equation (11), i.e., $F = (F_{max} - F_{min}) \times G / MAX_G$. The mutation matrix B is calculated according to a mutation strategy from Table 1; for example, the first strategy, Equation (9), $B = X_{gbest,G} + F(X_{r1,G} - X_{r2,G})$. Then $X_{G+1}$ is calculated by Equation (1), $X_{G+1} = \mathbf{M} \otimes X_G + \overline{\mathbf{M}} \otimes B$.
An element of $X_{G+1}$, denoted $x_{ik,G+1} \in [-2, 3]$, refers to the $i$th particle in dimension k. To distinguish them, the continuous position matrix at the $(G+1)$th generation is denoted $X^{cont}_{G+1}$ and the binary matrix $X^{bin}_{G+1}$. Equation (1) can then be rewritten as Equation (12).
$$X^{cont}_{G+1} = \mathbf{M} \otimes X^{bin}_G + \overline{\mathbf{M}} \otimes B \quad (12)$$
The element of $X^{cont}_{G+1}$ for particle i in dimension k is denoted $x^{cont}_{ik,G+1}$. Each element $x^{cont}_{ik,G+1}$ is converted to a probability value by the chosen transfer function, and the whole binary matrix $X^{bin}_{G+1}$ can then be obtained. In detail, if an S-shaped transfer function is selected, the elements are updated with the probability value by Equation (13).
$$x^{bin}_{ik,G+1} = \begin{cases} 0, & rand < T(x^{cont}_{ik,G+1}) \\ 1, & rand \geq T(x^{cont}_{ik,G+1}) \end{cases} \quad (13)$$
However, if a V-shaped, U-shaped or Z-shaped transfer function is selected, the position matrix $X^{bin}_{G+1}$ is updated with the probability value by Equation (14).
$$x^{bin}_{ik,G+1} = \begin{cases} (x^{bin}_{ik,G})^{-1}, & rand < T(x^{cont}_{ik,G+1}) \\ x^{bin}_{ik,G}, & rand \geq T(x^{cont}_{ik,G+1}) \end{cases} \quad (14)$$
As shown in Equation (12), some elements of $X^{cont}_{G+1}$ come from $X^{bin}_G$, while the others come from B. Since the elements coming from $X^{bin}_G$ are inherently binary, the improved four families of transfer functions do not fit $X^{cont}_{G+1}$; in this situation, the original four families of transfer functions $T(x)$ are used to convert $X^{cont}_{G+1}$ from a continuous matrix to a binary one. The pseudo code of BQUATRE1 is described in Algorithm 2. Since most matrices are binary, we mark only the continuous matrices, vectors or numbers; for example, $X^{cont}$ means X is a continuous matrix.
Algorithm 2: Pseudo code of binary QUATRE algorithm-Approach 1(BQUATRE1).
input:
The dimension D, population size p s , max iterations M A X G , fitness function f ( X ) ,
mutation schema to calculate B, and original transfer function T ( x ) .
Initialization:
Initialize searching space V, G = 1 , position matrix X G
X p b e s t , G = X G and calculation X g b e s t , G .
Iteration:
1: while G < M A X G | ! s t o p C r i t e r i o n do
2:       Generate matrix M by Equation (5), calculate M ¯
3:        $F = (F_{max} - F_{min}) \times G / MAX_G$
4:       Calculate mutation matrix B according to the mutation schema.
5:        $X^{cont}_{G+1} = \mathbf{M} \otimes X_G + \overline{\mathbf{M}} \otimes B$
6:       Convert every x i k , G + 1 c o n t to probability value T ( x i k , G + 1 c o n t ) by original transfer function.
7:       Convert T ( x i k , G + 1 c o n t ) to binary according to Equation (13) or Equation (14).
8:    for i = 1 : ps do
9:       if f ( x i , G + 1 ) optimal than f ( x i p b , G ) then
10:             x i p b , G + 1 = x i , G + 1
11:      else
12:             x i p b , G + 1 = x i p b , G
13:      end if
14:   end for
15:    X G + 1 = X p b e s t , G + 1
16:    x g b e s t , G + 1 = o p t { X p b e s t , G + 1 } .
17:   Update X g b e s t , G + 1 by Equation (2).
18:    G = G + 1
19: end while
output:
The global optima X g b e s t , G , f ( X g b e s t , G ) .
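One generation of Algorithm 2 can be sketched in Python with NumPy (an illustration only, not the authors' code: the fitness function is a toy one-max objective, the personal-best selection of lines 8–15 is omitted for brevity, and the original sigmoid transfer with Equation (13) is used):

```python
import numpy as np

def one_max(x):
    # toy fitness (hypothetical): minimise the number of zero bits
    return -int(x.sum())

def bquatre1_step(X, F, rng):
    """One BQUATRE1 generation: build M, mutate with QUATRE/best/1,
    mix per Equation (12), binarise with the sigmoid and Equation (13)."""
    ps, D = X.shape
    # selection matrix M: permuted tiling of D x D lower-triangular blocks
    L = np.tril(np.ones((D, D), dtype=int))
    M = np.vstack([L] * (-(-ps // D)))[:ps, :]
    M = M[rng.permutation(ps), :][:, rng.permutation(D)]
    M_bar = 1 - M
    # QUATRE/best/1 mutation: B = X_gbest + F * (X_r1 - X_r2)
    fitness = np.array([one_max(x) for x in X])
    X_gbest = np.tile(X[fitness.argmin()], (ps, 1))
    X_r1 = X[rng.permutation(ps)]
    X_r2 = X[rng.permutation(ps)]
    B = X_gbest + F * (X_r1 - X_r2)
    # Equation (12): continuous candidate positions
    X_cont = M * X + M_bar * B
    # sigmoid transfer, then Equation (13) binarisation
    T = 1.0 / (1.0 + np.exp(-X_cont))
    return np.where(rng.random((ps, D)) < T, 0, 1)

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(8, 5))
X_next = bquatre1_step(X, F=0.5, rng=rng)
```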

3.5. Binary QUATRE Algorithm—Approach 2 (BQUATRE2)

The second approach of the binary QUATRE algorithm (BQUATRE2) converts only the mutation matrix B to binary. In this section, BQUATRE2 is described in detail.
Again, suppose the position matrix $X_G$ at the $G$th generation has been obtained. First, M and $\overline{\mathbf{M}}$ are obtained in the same way as before.
The mutation matrix B is calculated according to Equations (9) and (11). The element of B for particle i in dimension k is denoted $b_{ik}$, where $i = 1, 2, \ldots, ps$ and $k = 1, 2, \ldots, D$. Since all elements of B are continuous values, the improved families of transfer functions $T_i(x)$ are used in this approach. To distinguish them, the continuous mutation matrix is denoted $B^{cont}$ and the binary mutation matrix $B^{bin}$.
The probability value $T_i(x)$ must then be converted to binary. If an S-shaped transfer function is selected, consider the second case in Section 3.1: if $X_{gbest,G} = 0$, $X_{r1,G} = 0$ and $X_{r2,G} = 1$, then $B \in [-2, 0]$; here $b_{ik}$ should have a high probability of being 0, since only $X_{r2,G}$ is nonzero. Similarly, in the seventh case, if $X_{gbest,G} = 1$, $X_{r1,G} = 1$ and $X_{r2,G} = 0$, then $B \in [1, 3]$; here the design is reasonable if $b_{ik}$ has a high probability of being 1. Therefore, if an S-shaped function is selected, Equation (7) is replaced by Equation (15) to update with the probability value in BQUATRE2, where $b^{bin}_{ik}$ is an element of $B^{bin}$ and $b^{cont}_{ik}$ an element of $B^{cont}$, with $i = 1, 2, \ldots, ps$ and $k = 1, 2, \ldots, D$.
$$b^{bin}_{ik} = \begin{cases} 1, & rand < T_i(b^{cont}_{ik}) \\ 0, & rand \geq T_i(b^{cont}_{ik}) \end{cases} \quad (15)$$
If an improved V-shaped, U-shaped or Z-shaped function from Table 7, Table 8 or Table 9 is chosen, we still adopt the strategy of Equation (8), yielding the update in Equation (16), where $x_{ik,G}$ is the binary position of particle i in dimension k at generation G. The whole binary mutation matrix $B^{bin}$ is thus obtained, and finally the position matrix $X_{G+1}$.
b_ik^bin = (x_ik,G)^(−1), if rand < T_i(b_ik^cont); x_ik,G, if rand ≥ T_i(b_ik^cont)    (16)
where (x_ik,G)^(−1) denotes the complement of the binary value x_ik,G.
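The two binarization rules above can be sketched in a few lines. The standard sigmoid and |tanh(x)| below are illustrative stand-ins for the paper's improved S-shaped and V/U/Z-shaped families (the exact improved functions are defined in Tables 7-9 and are not reproduced here):

```python
import math
import random

def s_transfer(x):
    # Standard S-shaped (sigmoid) transfer function, used here as a
    # stand-in for the improved S-shaped family.
    return 1.0 / (1.0 + math.exp(-x))

def v_transfer(x):
    # Standard V-shaped transfer function |tanh(x)|, a stand-in for
    # the improved V/U/Z-shaped families.
    return abs(math.tanh(x))

def binarize_s(b_cont, rng=random.random):
    # Equation (15): the binary value is 1 with probability T(b_cont).
    return 1 if rng() < s_transfer(b_cont) else 0

def binarize_v(b_cont, x_bin, rng=random.random):
    # Equation (16): flip the current binary position x_bin with
    # probability T(b_cont), otherwise keep it.
    return 1 - x_bin if rng() < v_transfer(b_cont) else x_bin
```

Note how the S-shaped rule draws the new bit directly from the probability, while the V/U/Z-shaped rule uses the probability only to decide whether to flip the current position.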
The pseudo code of BQUATRE2 is described in Algorithm 3.
Algorithm 3: Pseudo code of binary QUATRE algorithm-Approach 2(BQUATRE2).
input:
The dimension D, population size ps, max iterations MAX_G, fitness function f(X), mutation schema to calculate B, and improved transfer function T_i(x).
Initialization:
Initialize searching space V, G = 1 , position matrix X G
X_pbest,G = X_G and calculate X_gbest,G.
Iteration:
1: while G < M A X G | ! s t o p C r i t e r i o n do
2:       Generate matrix M by Equation (5), calculate M ¯
3:        F = (F_max − F_min) × G / MAX_G
4:       Calculate the continuous mutation matrix B_cont according to the mutation schema.
5:       Convert every element b_ik^cont to the probability value T_i(b_ik^cont) by the improved transfer function.
6:       Convert T_i(b_ik^cont) to binary b_ik^bin by Equation (15) or Equation (16).
7:        X_{G+1} = M ⊗ X_G + M̄ ⊗ B_bin
8:    for i = 1 : ps do
9:       if f(x_i,G+1) is better than f(x_ipb,G) then
10:             x i p b , G + 1 = x i , G + 1
11:      else
12:             x i p b , G + 1 = x i p b , G
13:      end if
14:   end for
15:    X G + 1 = X p b e s t , G + 1
16:    x g b e s t , G + 1 = o p t { X p b e s t , G + 1 } .
17:   Update X g b e s t , G + 1 by Equation (2).
18:    G = G + 1
19: end while
output:
The global optima X g b e s t , G , f ( X g b e s t , G ) .
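The main loop of Algorithm 3 (steps 2-7) can be sketched in a few lines of NumPy. The Bernoulli cooperation matrix M, the mutation schema B = X_gbest + F(X_r1 − X_r2), and the F bounds are simplifications and assumptions rather than the paper's exact construction (the paper derives M from a row-permuted lower-triangular matrix and offers six mutation schemas in Table 1):

```python
import numpy as np

def bquatre2_generation(X, fitness, G, MAX_G, F_max=2.0, F_min=0.1, rng=None):
    """One generation of a simplified BQUATRE2 (Algorithm 3, steps 2-7)."""
    rng = np.random.default_rng() if rng is None else rng
    ps, D = X.shape
    # Step 3: linear increment scale factor.
    F = (F_max - F_min) * G / MAX_G
    # Step 2: cooperation matrix M and its complement M_bar
    # (a Bernoulli mask is used here as a simplification).
    M = rng.integers(0, 2, size=(ps, D))
    M_bar = 1 - M
    # Step 4: continuous mutation matrix with one representative schema,
    # B = X_gbest + F * (X_r1 - X_r2).
    gbest = X[int(np.argmin([fitness(row) for row in X]))]
    r1, r2 = rng.permutation(ps), rng.permutation(ps)
    B_cont = gbest + F * (X[r1] - X[r2])
    # Steps 5-6: S-shaped transfer and stochastic binarization (Eq. (15)).
    T = 1.0 / (1.0 + np.exp(-B_cont))
    B_bin = (rng.random((ps, D)) < T).astype(X.dtype)
    # Step 7: keep X_G where M == 1, take the binary mutant elsewhere.
    return M * X + M_bar * B_bin
```

The personal-best and global-best bookkeeping of steps 8-17 is omitted; the sketch only shows how the matrix-form mutation, transfer-function binarization and co-evolution crossover combine in one generation.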

4. Experimental Results and Analysis

4.1. Benchmark Function

In this section, the proposed BQUATRE1 and BQUATRE2 are examined on 23 benchmark functions. The mathematical formulas and properties of these functions are described in Table 10, Table 11 and Table 12, where D is the dimension of the function and f_min is the optimum. In detail, Table 10 contains the unimodal functions (denoted f1-f7), Table 11 the common multimodal functions (denoted f8-f13) and Table 12 the multimodal functions in low dimension (denoted f14-f23).
Unimodal functions have only a global optimal solution and no local optima, which verifies whether the algorithms can find the global optimum within a finite population size and number of iterations. Multimodal functions have many local optima, which verifies whether the algorithms avoid falling into local optima. When the function dimension is low, the algorithms are prone to premature convergence and to getting trapped in local optima; the low-dimensional multimodal functions therefore verify the convergence results under more stringent conditions.
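To make the evaluation concrete, the sketch below decodes a binary position vector into real values and evaluates the sphere function f1 (the simplest unimodal benchmark). The fixed-point bit encoding is an assumption for illustration; this section does not specify the exact representation the experiments use:

```python
import numpy as np

def decode_bits(bits, lo, hi, bits_per_dim):
    # Decode a flat binary vector into real values: each group of
    # bits_per_dim bits is read as an unsigned integer and mapped
    # linearly onto [lo, hi]. This encoding is an assumption, not
    # the paper's exact representation.
    bits = np.asarray(bits).reshape(-1, bits_per_dim)
    weights = 2 ** np.arange(bits_per_dim - 1, -1, -1)
    ints = bits @ weights
    return lo + (hi - lo) * ints / (2 ** bits_per_dim - 1)

def sphere(x):
    # f1 (sphere): sum of squares, global optimum 0 at the origin.
    return float(np.sum(np.square(x)))
```

For example, a 10-bit vector with 5 bits per dimension decodes to a 2-dimensional real point, which can then be scored by any of the benchmark functions in Tables 10-12.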
Two approaches of the BQUATRE algorithm with four families of transfer functions are examined in the experiment. Due to space limitations, we choose the second function of each family as an example, that is, S2, V2, U2, Z2, S2i, V2i, U2i and Z2i. First of all, the S-shaped and V-shaped families of transfer functions are compared: the first approach of BQUATRE with the S2 transfer function (recorded as BQUATRE1-S2), the first approach with the V2 transfer function (BQUATRE1-V2), the second approach with the improved transfer function S2i (BQUATRE2-S2i) and the second approach with the improved transfer function V2i (BQUATRE2-V2i). Since QUATRE is an improved DE, the proposed binary QUATRE is compared with binary DE (recorded as BDE). Binary PSO is the most primitive binary algorithm, so we also include it in the experiment (recorded as BPSO). The third compared algorithm is the Advanced Binary Grey Wolf Optimizer with the V3a transfer function (recorded as ABGWO-V3a), proposed by Hu et al. in Reference [54]; ABGWO-V3a outperforms many other binary meta-heuristic algorithms. Table 13 shows the experimental results of BDE, ABGWO-V3a, BQUATRE1-S2, BQUATRE1-V2, BQUATRE2-S2i, BQUATRE2-V2i, and BPSO. All algorithms are run 30 times with 500 iterations and 30 individuals on each benchmark function. AVG and STD are the mean and standard deviation of the 30 runs, respectively. The results in red and in blue are the best results among the seven algorithms; blue means that all algorithms obtained the optimal solution. The last line gives the number of times each algorithm obtains the best result.
From Table 13, we can see that the seven algorithms obtain the same solutions on 6 benchmark functions, which are all multimodal functions in low dimensions. BQUATRE1-V2, BQUATRE2-S2i and BQUATRE2-V2i obtained 14, 15 and 14 red results, respectively, whereas BDE, ABGWO-V3a, and BPSO obtained only 6, 3 and 1. Therefore, BQUATRE1-V2, BQUATRE2-S2i and BQUATRE2-V2i are superior to BDE, ABGWO-V3a, and BPSO. At the same time, BQUATRE1-S2 does not perform well, but BQUATRE2-S2i obtained the best results compared to the other five algorithms, which shows that Equation (14) is effective for BQUATRE2. Figure 6, Figure 7 and Figure 8 visualize the results of Table 13.
Moreover, the U-shaped and Z-shaped families of transfer functions are compared to BDE, ABGWO-V3a, and BPSO: the first approach of BQUATRE with the U2 transfer function (recorded as BQUATRE1-U2), the first approach with the Z2 transfer function (BQUATRE1-Z2), the second approach with the improved transfer function U2i (BQUATRE2-U2i) and the second approach with the improved transfer function Z2i (BQUATRE2-Z2i). The results are shown in Table 14.
From Table 14, it can be seen that the seven algorithms obtain the same solutions on six benchmark functions, while BQUATRE1-U2 obtains the best results on the other 12 functions. Furthermore, BQUATRE1-Z2, BQUATRE2-U2i and BQUATRE2-Z2i obtained 16, 15 and 7 red results, respectively, whereas BDE, ABGWO-V3a, and BPSO obtained only 6, 3 and 1. On the whole, BQUATRE1-U2, BQUATRE1-Z2, BQUATRE2-U2i and BQUATRE2-Z2i are superior to BDE, ABGWO-V3a, and BPSO. From Table 13 and Table 14, we can conclude that the V-shaped, U-shaped and Z-shaped functions in BQUATRE1 and BQUATRE2 are superior to BDE, ABGWO-V3a, and BPSO, and that BQUATRE2 with the S-shaped transfer function also performs well. The proposed second approach of BQUATRE performs well on the unimodal and common multimodal functions; however, it does not improve much on the multimodal functions in low dimensions. Figure 9, Figure 10 and Figure 11 visualize the results of Table 14.
To further compare the results, the t-test is used as a significance test. The t-test compares the mean values of two groups of data and is suitable for normally distributed data with a small sample size and unknown population standard deviation. In the experiments, a two-tailed t-test with a significance level of 0.01 (very significant) is used. Table 15 shows the results between ABGWO-V3a and BQUATRE1-S2, BQUATRE1-V2, BQUATRE1-U2, BQUATRE1-Z2, BQUATRE2-S2i, BQUATRE2-V2i, BQUATRE2-U2i and BQUATRE2-Z2i. "+" indicates that the compared algorithm is superior to ABGWO-V3a, "−" indicates that it is inferior to ABGWO-V3a, and "=" implies that the performance of the algorithms is consistent.
We can see from Table 15 that the four BQUATRE2 algorithms and BQUATRE1 with the Z2 transfer function are not inferior to ABGWO-V3a on any of the 23 benchmark functions. BQUATRE1 with the V2 and U2 transfer functions is superior to ABGWO-V3a on 11 functions, while inferior on 2.
Table 16 shows the results between BDE and the eight BQUATRE variants. From Table 16, BQUATRE1-Z2, BQUATRE2-S2i, BQUATRE2-U2i and BQUATRE2-Z2i are not inferior to BDE on any of the 23 benchmark functions. BQUATRE1-V2, BQUATRE1-U2 and BQUATRE2-V2i are superior to BDE on 9 functions, while inferior on 1 or 2 functions.
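The "+/−/=" decision for a single benchmark function can be sketched as follows. Welch's t statistic (the unequal-variance form) is used here, and the critical value 2.66 is an illustrative hardcoded assumption, roughly the two-tailed 0.01 threshold for about 58 degrees of freedom (two samples of 30 runs); lower mean fitness is taken as better:

```python
import math

def welch_t(a, b):
    # Welch's two-sample t statistic (unequal variances), a common
    # choice when population standard deviations are unknown.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def compare(a, b, t_crit=2.66):
    # Two-tailed test at the 0.01 level; 2.66 approximates the
    # critical value for ~58 degrees of freedom (two samples of
    # 30 runs each). Returns '+' / '-' / '=' in the convention of
    # Table 15, assuming lower mean fitness is better.
    t = welch_t(a, b)
    if abs(t) < t_crit:
        return "="
    return "+" if t < 0 else "-"
```

In practice a statistics library would compute the exact p-value; the hardcoded threshold just makes the three-way decision explicit.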
Finally, we compare the runtime of the 10 algorithms in this experiment, as shown in Table 17. The BDE algorithm has the shortest running time and ABGWO-V3a the longest. On the whole, the four BQUATRE2 variants take longer to run than the BQUATRE1 variants, as the transfer functions they use are more complex.

4.2. Hyperspectral Imagery

In this section, the proposed BQUATRE1 and BQUATRE2 are used for dimensionality reduction of a hyperspectral dataset. The dataset used in the experiment is the Indian Pines image, acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in 1992. The dataset captures the Indian Pines agricultural site in northwestern Indiana and contains 145 × 145 pixels with 220 spectral bands; the spatial resolution is 20 m per pixel. The water absorption bands of the HSI are seriously polluted by noise and are not suitable for classification, so twenty water absorption bands have been removed from the Indian Pines data. The image contains 16 classes, including corn, soybean, alfalfa, wheat, oats, grass, trees, etc. Some classes are subdivided; for example, corn is divided into no-tillage, low-tillage and traditional tillage. Figure 12 shows the false-color composite image and ground truth data of Indian Pines. An HSI cannot be displayed directly on a screen because it has too many bands; a false-color image is a color image synthesized from different wavebands and is often used in remote sensing image classification. False-color synthesis, one form of image enhancement, converts an image composed of many bands (more than four) into a three-band or four-band composite. Figure 12a is synthesized by the three-band technique.
Classification performance is the main index of dimensionality reduction for HSI. In this manuscript, three common metrics are used to evaluate classifier performance: overall accuracy (OA), average accuracy (AA) and the Kappa coefficient. Therefore, the fitness function is Equation (17), where OA_kfoldLoss, AA_kfoldLoss and Kappa_kfoldLoss are the k-fold cross-validation errors of OA, AA and the Kappa coefficient, and W_i, i = 1, 2, 3 are weight coefficients. In our experiment, we set k = 5 and W1 = 0.4, W2 = 0.3, W3 = 0.3, since OA is the most important of the three classification indexes.
fitness = W1 · OA_kfoldLoss + W2 · AA_kfoldLoss + W3 · Kappa_kfoldLoss    (17)
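Equation (17) reduces to a few lines of code; the helper below is a direct sketch using the weights stated above, with the three k-fold losses assumed to be computed elsewhere:

```python
def hsi_fitness(oa_loss, aa_loss, kappa_loss, w=(0.4, 0.3, 0.3)):
    # Equation (17): weighted sum of the k-fold cross-validation
    # losses of OA, AA and the Kappa coefficient. The paper's
    # weights W1=0.4, W2=0.3, W3=0.3 give OA the largest influence.
    w1, w2, w3 = w
    return w1 * oa_loss + w2 * aa_loss + w3 * kappa_loss
```

A binary optimizer minimizing this fitness will therefore favor band subsets that improve OA first, with AA and Kappa as secondary criteria.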
In this experiment, we use the first function of each family of transfer functions. In order to compare the S-shaped, V-shaped, U-shaped and Z-shaped transfer functions, we adopt BQUATRE1 with the U-shaped and Z-shaped transfer functions and BQUATRE2 with the S-shaped and V-shaped ones. The compared dimensionality reduction algorithms are the classic PCA and LDA. The number of dimensions selected for PCA and LDA is 10, computed by the "intrinsic dimensionality estimation" function of the "Matlab toolbox for dimensionality reduction" [66]. The classifier used in the experiments is the Support Vector Machine (SVM), implemented with the LIBSVM library and a Gaussian kernel [67]. Since it is difficult to obtain labeled samples of hyperspectral images, classification with fewer samples is of more practical significance; therefore, the training samples are randomly chosen and account for 5% of the ground truth. In total, seven algorithms are compared in this experiment: traditional SVM without dimensionality reduction (recorded as SVM), PCA for dimensionality reduction before SVM (PCA-SVM), LDA before SVM (LDA-SVM), BQUATRE1 with the U-shaped function before SVM (BQUATRE1-U1-SVM), BQUATRE1 with the Z-shaped function before SVM (BQUATRE1-Z1-SVM), BQUATRE2 with the improved S-shaped function before SVM (BQUATRE2-S1i-SVM), and BQUATRE2 with the improved V-shaped function before SVM (BQUATRE2-V1i-SVM). The proposed algorithms run for 500 iterations with a population size of 20. We performed the random 5% sampling 500 times before the experiment in order to ensure robustness.
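The link between a binary individual and the classifier can be sketched as follows. Each bit selects or discards one spectral band, and the retained bands are scored by a Gaussian-kernel SVM under k-fold cross-validation. scikit-learn's SVC (itself a LIBSVM wrapper) stands in for the LIBSVM setup, and only the OA term of Equation (17) is computed in this simplified sketch:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_selection_loss(mask, X, y, k=5):
    # Evaluate a binary band-selection mask: keep only the spectral
    # bands where mask == 1, then score an RBF-kernel SVM with
    # k-fold cross-validation. Returns the OA loss (1 - accuracy);
    # an empty selection is treated as the worst possible fitness.
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 1.0
    acc = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=k).mean()
    return 1.0 - acc
```

In the full method this loss would be combined with the AA and Kappa losses as in Equation (17) and minimized by BQUATRE1 or BQUATRE2 over the 200 retained bands.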
Table 18 describes the classification performance of these algorithms on the Indian Pines dataset. Red means the proposed algorithm performs better than all three compared algorithms on that index. We can see that all four proposed algorithms obtain higher OA and Kappa values than the three compared algorithms. However, the AA values of the proposed algorithms are slightly lower than that of LDA-SVM, owing to their poor performance in classifying "oats" and "alfalfa"; as Figure 12 shows, these two classes have far fewer samples than the others. LDA-SVM correctly classifies a few samples of these two classes, which has a greater impact on the value of AA but a smaller impact on the value of OA. Figure 13 visualizes the average OA, AA and Kappa values of BQUATRE2-S1i-SVM, BQUATRE2-V1i-SVM, BQUATRE1-U1-SVM, BQUATRE1-Z1-SVM, SVM, PCA-SVM, and LDA-SVM on Indian Pines.
In order to visually see the classification results, we paint the different ground features with different colors. Figure 14 shows the classification maps. As a whole, we can see that the proposed algorithms perform better for dimensionality reduction of HSI than PCA and LDA.

5. Conclusions

In this manuscript, we convert the QUATRE algorithm to a binary version using two approaches, so that QUATRE can solve practical applications of binary type. In the first approach, the new individuals produced by the mutation and crossover operations are binarized. In the second approach, binarization is done after mutation, and the crossover operation with other individuals is performed afterwards. Mathematical analysis of the proposed algorithm is performed, and improved transfer functions are proposed to improve its performance. Furthermore, in order to balance exploration and exploitation, a new linear increment scale factor is proposed. The proposed algorithm performs well on benchmark functions and is then applied to practical dimensionality reduction of HSI. The results show that the proposed algorithm is superior to state-of-the-art algorithms and is helpful for solving practical problems. However, the proposed algorithm still cannot handle multimodal low-dimensional functions well; addressing this will be the main focus of future work.

Author Contributions

Conceptualization, J.-S.P.; data curation, Z.Z.; formal analysis, J.L.; investigation, S.-C.C.; methodology, S.-C.C.; software, Z.Z.; supervision, J.-S.P.; writing—original draft, Z.Z.; writing—review and editing, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China (61872085).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van Laarhoven, P.J.; Aarts, E.H. Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Berlin, Germany, 1987; pp. 7–15. [Google Scholar]
  2. Tsai, C.W.; Hsia, C.H.; Yang, S.J.; Liu, S.J.; Fang, Z.Y. Optimizing hyperparameters of deep learning in predicting bus passengers based on simulated annealing. Appl. Soft Comput. 2020, 88, 106068. [Google Scholar] [CrossRef]
  3. Grobelny, J.; Michalski, R. A novel version of simulated annealing based on linguistic patterns for solving facility layout problems. Knowl.-Based Syst. 2017, 124, 55–69. [Google Scholar] [CrossRef]
  4. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  5. Olivas, F.; Valdez, F.; Melin, P.; Sombra, A.; Castillo, O. Interval type-2 fuzzy logic for dynamic parameter adaptation in a modified gravitational search algorithm. Inf. Sci. 2019, 476, 159–175. [Google Scholar] [CrossRef]
  6. Pelusi, D.; Mascella, R.; Tallini, L.; Nayak, J.; Naik, B.; Abraham, A. Neural network and fuzzy system for the tuning of Gravitational Search Algorithm parameters. Expert Syst. Appl. 2018, 102, 234–244. [Google Scholar] [CrossRef]
  7. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  8. Yang, Q.; Chu, S.C.; Pan, J.S.; Chen, C.M. Sine Cosine Algorithm with Multigroup and Multistrategy for Solving CVRP. Math. Probl. Eng. 2020, 2020, 1–10. [Google Scholar] [CrossRef] [Green Version]
  9. Gupta, S.; Deep, K. A novel hybrid sine cosine algorithm for global optimization and its application to train multilayer perceptrons. Appl. Intell. 2020, 50, 993–1026. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 17 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  11. Yu, H.; Tan, Y.; Zeng, J.; Sun, C.; Jin, Y. Surrogate-assisted hierarchical particle swarm optimization. Inf. Sci. 2018, 454, 59–72. [Google Scholar] [CrossRef]
  12. Sun, C.; Jin, Y.; Zeng, J.; Yu, Y. A two-layer surrogate-assisted particle swarm optimization algorithm. Soft Comput. 2015, 19, 1461–1475. [Google Scholar] [CrossRef] [Green Version]
  13. Qin, S.; Sun, C.; Zhang, G.; He, X.; Tan, Y. A modified particle swarm optimization based on decomposition with different ideal points for many-objective optimization problems. In Complex & Intelligent Systems; Springer: Berlin, Germany, 2020; pp. 1–12. [Google Scholar]
  14. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the IEEE 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 2, pp. 1470–1477. [Google Scholar]
  15. Chu, S.C.; Roddick, J.F.; Pan, J.S. Ant colony system with communication strategies. Inf. Sci. 2004, 167, 63–76. [Google Scholar] [CrossRef]
  16. Pan, H.; You, X.; Liu, S.; Zhang, D. Pearson correlation coefficient-based pheromone refactoring mechanism for multi-colony ant colony optimization. In Applied Intelligence; Springer: Berlin, Germany, 2020; pp. 1–23. [Google Scholar]
  17. Chu, S.C.; Tsai, P.W.; Pan, J.S. Cat swarm optimization. In Pacific Rim International Conference on Artificial Intelligence; Springer: Berlin, Germany, 2006; pp. 854–858. [Google Scholar]
  18. Ahmed, A.M.; Rashid, T.A.; Saeed, S.A.M. Cat Swarm Optimization Algorithm: A Survey and Performance Evaluation. Comput. Intell. Neurosci. 2020, 2020, 4854895. [Google Scholar] [CrossRef]
  19. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef] [Green Version]
  20. Nguyen, T.T.; Pan, J.S.; Dao, T.K. A compact bat algorithm for unequal clustering in wireless sensor networks. Appl. Sci. 2019, 9, 1973. [Google Scholar] [CrossRef] [Green Version]
  21. Cai, X.; Wang, H.; Cui, Z.; Cai, J.; Xue, Y.; Wang, L. Bat algorithm with triangle-flipping strategy for numerical optimization. Int. J. Mach. Learn. Cybern. 2018, 9, 199–215. [Google Scholar] [CrossRef]
  22. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
  23. Tian, A.Q.; Chu, S.C.; Pan, J.S.; Cui, H.; Zheng, W.M. A compact pigeon-inspired optimization for maximum short-term generation mode in cascade hydroelectric power station. Sustainability 2020, 12, 767. [Google Scholar] [CrossRef] [Green Version]
  24. Abdullahi, M.; Ngadi, M.A. Symbiotic Organism Search optimization based task scheduling in cloud computing environment. Future Gener. Comput. Syst. 2016, 56, 640–650. [Google Scholar] [CrossRef]
  25. Chu, S.C.; Du, Z.G.; Pan, J.S. Symbiotic organism search algorithm with multi-group quantum-behavior communication scheme applied in wireless sensor networks. Appl. Sci. 2020, 10, 930. [Google Scholar] [CrossRef] [Green Version]
  26. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  27. Shieh, C.S.; Wang, H.Y.; Dao, T.K. Enhanced diversity herds grey wolf optimizer for optimal area coverage in wireless sensor networks. In International Conference on Genetic and Evolutionary Computing; Springer: Berlin, Germany, 2016; pp. 174–182. [Google Scholar]
  28. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the IEEE 2009 World congress on nature & biologically inspired computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  29. Cui, Z.; Zhang, M.; Wang, H.; Cai, X.; Zhang, W. A hybrid many-objective cuckoo search algorithm. Soft Comput. 2019, 23, 10681–10697. [Google Scholar] [CrossRef]
  30. Song, P.C.; Pan, J.S.; Chu, S.C. A parallel compact cuckoo search algorithm for three-dimensional path planning. Appl. Soft Comput. 2020, 94, 106443. [Google Scholar] [CrossRef]
  31. Gunen, M.A.; Besdok, E.; Civicioglu, P.; Atasever, U.H. Camera calibration by using weighted differential evolution algorithm: A comparative study with ABC, PSO, COBIDE, DE, CS, GWO, TLBO, MVMO, FOA, LSHADE, ZHANG and BOUGUET. Neural Comput. Appl. 2020, 32, 17681–17701. [Google Scholar] [CrossRef]
  32. Meng, Z.; Pan, J.S. Monkey king evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization. Knowl. Based Syst. 2016, 97, 144–157. [Google Scholar] [CrossRef]
  33. Balasubramanian, D.; Govindasamy, V. Binary Monkey-King Evolutionary Algorithm for single objective target based WSN. EAI Endorsed Trans. Internet Things 2019, 5, 5. [Google Scholar] [CrossRef]
  34. Whitley, D. A genetic algorithm tutorial. Stat. Comput. 1994, 4, 65–85. [Google Scholar] [CrossRef]
  35. Chu, S.C.; Xue, X.; Pan, J.S.; Wu, X. Optimizing ontology alignment in vector space. J. Internet Technol. 2020, 21, 15–22. [Google Scholar]
  36. Chen, Y.H.; Huang, H.C.; Cai, H.Y.; Chen, P.F. A Genetic Algorithm Approach for the Multiple Length Cutting Stock Problem. In Proceedings of the 2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech), Osaka, Japan, 12–14 March 2019; pp. 162–165. [Google Scholar]
  37. Zheng, Y.; Huang, M.; Lu, Y.; Li, W. Fractional stochastic resonance multi-parameter adaptive optimization algorithm based on genetic algorithm. In Neural Computing and Applications; Springer: Berlin, Germany, 2018; pp. 1–12. [Google Scholar]
  38. Price, K.V. Differential evolution. In Handbook of Optimization; Springer: Berlin, Germany, 2013; pp. 187–214. [Google Scholar]
  39. Sui, X.; Chu, S.C.; Pan, J.S.; Luo, H. Parallel Compact Differential Evolution for Optimization Applied to Image Segmentation. Appl. Sci. 2020, 10, 2195. [Google Scholar] [CrossRef] [Green Version]
  40. Mousavirad, S.J.; Rahnamayan, S. Differential Evolution Algorithm Based on a Competition Scheme. In Proceedings of the 2019 14th International Conference on Computer Science & Education (ICCSE), Toronto, ON, USA, 19–21 August 2019; pp. 929–934. [Google Scholar]
  41. Jin, C.; Tsai, P.W.; Qin, A.K. A Study on Knowledge Reuse Strategies in Multitasking Differential Evolution. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 1564–1571. [Google Scholar]
  42. Pan, J.S.; Meng, Z.; Xu, H.; Li, X. QUasi-Affine TRansformation Evolution (QUATRE) algorithm: A new simple and accurate structure for global optimization. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems; Springer: Berlin, Germany, 2016; pp. 657–667. [Google Scholar]
  43. Meng, Z.; Pan, J.S.; Xu, H. QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm: A cooperative swarm based algorithm for global optimization. Knowl.-Based Syst. 2016, 109, 104–121. [Google Scholar] [CrossRef]
  44. Meng, Z.; Pan, J.S. QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm: The framework analysis for global optimization and application in hand gesture segmentation. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Chengdu, China, 6–10 November 2016; pp. 1832–1837. [Google Scholar]
  45. Meng, Z.; Pan, J.S. A competitive QUasi-Affine TRansformation Evolutionary (C-QUATRE) algorithm for global optimization. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 001644–001649. [Google Scholar]
  46. Meng, Z.; Pan, J.S. QUasi-Affine TRansformation Evolution with External ARchive (QUATRE-EAR): An enhanced structure for differential evolution. Knowl.-Based Syst. 2018, 155, 35–53. [Google Scholar] [CrossRef]
  47. Meng, Z.; Pan, J.S.; Lin, F. The QUATRE structure: An efficient approach to tackling the structure bias in differential evolution. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 3074–3079. [Google Scholar]
  48. Liu, N.; Pan, J.S. A bi-population QUasi-Affine TRansformation Evolution algorithm for global optimization and its application to dynamic deployment in wireless sensor networks. EURASIP J. Wirel. Commun. Netw. 2019, 2019, 175. [Google Scholar] [CrossRef]
  49. Du, Z.G.; Pan, J.S.; Chu, S.C.; Luo, H.J.; Hu, P. Quasi-affine transformation evolutionary algorithm with communication schemes for application of RSSI in wireless sensor networks. IEEE Access 2020, 8, 8583–8594. [Google Scholar] [CrossRef]
  50. Sun, X.X.; Pan, J.S.; Chu, S.C.; Hu, P.; Tian, A.Q. A novel pigeon-inspired optimization with QUasi-Affine TRansformation evolutionary algorithm for DV-Hop in wireless sensor networks. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720932749. [Google Scholar] [CrossRef]
  51. Khanesar, M.A.; Teshnehlab, M.; Shoorehdeli, M.A. A novel binary particle swarm optimization. In Proceedings of the IEEE 2007 Mediterranean conference on control & automation, Guangzhou, China, 30 May–1 June 2007; pp. 1–6. [Google Scholar]
  52. Rodrigues, D.; Pereira, L.A.; Almeida, T.; Papa, J.P.; Souza, A.; Ramos, C.C.; Yang, X.S. BCS: A binary cuckoo search algorithm for feature selection. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 465–468. [Google Scholar]
  53. Too, J.; Mirjalili, S. A Hyper Learning Binary Dragonfly Algorithm for Feature Selection: A COVID-19 Case Study. Knowl.-Based Syst. 2020, 212, 106553. [Google Scholar] [CrossRef]
  54. Hu, P.; Pan, J.S.; Chu, S.C. Improved Binary Grey Wolf Optimizer and Its application for feature selection. Knowl.-Based Syst. 2020, 195, 105746. [Google Scholar] [CrossRef]
  55. Gupta, D.; Arora, J.; Agrawal, U.; Khanna, A.; de Albuquerque, V.H.C. Optimized Binary Bat algorithm for classification of white blood cells. Measurement 2019, 143, 180–190. [Google Scholar] [CrossRef]
  56. Bostani, H.; Sheikhan, M. Hybrid of binary gravitational search algorithm and mutual information for feature selection in intrusion detection systems. Soft Comput. 2017, 21, 2307–2324. [Google Scholar] [CrossRef]
  57. Shen, X.; Liu, B.; Zhou, Y.; Zhao, J.; Liu, M. Remote sensing image captioning via Variational Autoencoder and Reinforcement Learning. Knowl.-Based Syst. 2020, 203, 105920. [Google Scholar] [CrossRef]
  58. Basaeed, E.; Bhaskar, H.; Al-Mualla, M. Supervised remote sensing image segmentation using boosted convolutional neural networks. Knowl.-Based Syst. 2016, 99, 19–27. [Google Scholar] [CrossRef]
  59. Cui, B.; Cui, J.; Lu, Y.; Guo, N.; Gong, M. A Sparse Representation-Based Sample Pseudo-Labeling Method for Hyperspectral Image Classification. Remote Sens. 2020, 12, 664. [Google Scholar] [CrossRef] [Green Version]
  60. Mohanty, R.; Happy, S.; Routray, A. Spatial–Spectral Regularized Local Scaling Cut for Dimensionality Reduction in Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2018, 16, 932–936. [Google Scholar] [CrossRef]
  61. Cui, B.; Cui, J.; Hao, S.; Guo, N.; Lu, Y. Spectral-spatial hyperspectral image classification based on superpixel and multi-classifier fusion. Int. J. Remote Sens. 2020, 41, 6157–6182. [Google Scholar] [CrossRef]
  62. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108. [Google Scholar]
  63. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  64. Guo, S.S.; Wang, J.S.; Guo, M.W. Z-Shaped Transfer Functions for Binary Particle Swarm Optimization Algorithm. Comput. Intell. Neurosci. 2020, 2020, 6502807. [Google Scholar] [CrossRef] [PubMed]
  65. Liu, J.; Mei, Y.; Li, X. An analysis of the inertia weight parameter for binary particle swarm optimization. IEEE Trans. Evol. Comput. 2015, 20, 666–681. [Google Scholar] [CrossRef]
  66. Van der Maaten, L.; Postma, E.O.; van den Herik, H.J. Matlab Toolbox for Dimensionality Reduction. In MICC; Maastricht University: Maastricht, The Netherlands, 2007. [Google Scholar]
  67. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 1–27. [Google Scholar] [CrossRef]
Figure 1. The original (the four dotted lines) and improved (the four solid lines) S-shaped families of transfer functions.
Figure 2. The original (the four dotted lines) and improved (the four solid lines) V-shaped families of transfer functions.
Figure 3. The original (the four dotted lines) and improved (the four solid lines) U-shaped families of transfer functions.
Figure 4. The original (the four dotted lines) and improved (the four solid lines) Z-shaped families of transfer functions.
Figure 5. The curve of S1(x), V1(x), U1(x), and Z1(x) transfer functions.
Figure 6. The optimal values of BDE, ABGWO-V3a, BQUATRE1-S2, BQUATRE1-V2, BQUATRE2-S2i, BQUATRE2-V2i, and BPSO on f1-f7.
Figure 7. The optimal values of BDE, ABGWO-V3a, BQUATRE1-S2, BQUATRE1-V2, BQUATRE2-S2i, BQUATRE2-V2i, and BPSO on f8-f13.
Figure 8. The optimal values of BDE, ABGWO-V3a, BQUATE1-S2, BQUATE1-V2, BQUATE2-S2i, BQUATE2-V2i, and BPSO on f14-f23.
Figure 9. The optimal values of BDE, ABGWO-V3a, BQUATE1-U2, BQUATE1-Z2, BQUATE2-U2i, BQUATE2-Z2i, and BPSO on f1-f7.
Figure 10. The optimal values of BDE, ABGWO-V3a, BQUATE1-U2, BQUATE1-Z2, BQUATE2-U2i, BQUATE2-Z2i, and BPSO on f8-f13.
Figure 11. The optimal values of BDE, ABGWO-V3a, BQUATE1-U2, BQUATE1-Z2, BQUATE2-U2i, BQUATE2-Z2i, and BPSO on f14-f23.
Figure 12. Thematic maps for the Indian Pines data set with 16 classes. (a) false-color composite; (b) ground truth data with 16 classes in different colors.
Figure 13. The OA, AA, Kappa values of BQUATE2-S1i-SVM, BQUATE2-V1i-SVM, BQUATE1-U1-SVM, BQUATE1-Z1-SVM, SVM, PCA-SVM, and LDA-SVM on Indian Pines.
Figure 14. The classification maps of the different tested algorithms. (a) Ground truth; (b) BQUATE2-S1i-SVM; (c) BQUATE2-V1i-SVM; (d) BQUATE1-U1-SVM; (e) BQUATE1-Z1-SVM; (f) SVM; (g) PCA-SVM; (h) LDA-SVM.
Table 1. The six schemes of Matrix B calculation in QUATRE algorithm.
Number | QUATRE/B | Equation
1 | QUATRE/best/1 | $B = X_{gbest,G} + F \cdot (X_{r1,G} - X_{r2,G})$
2 | QUATRE/rand/1 | $B = X_{r1,G} + F \cdot (X_{r2,G} - X_{r3,G})$
3 | QUATRE/target/1 | $B = X_{G} + F \cdot (X_{r1,G} - X_{r2,G})$
4 | QUATRE/target/2 | $B = X_{G} + F \cdot (X_{r1,G} - X_{r2,G}) + F \cdot (X_{r3,G} - X_{r4,G})$
5 | QUATRE/rand/2 | $B = X_{r1,G} + F \cdot (X_{r2,G} - X_{r3,G}) + F \cdot (X_{r4,G} - X_{r5,G})$
6 | QUATRE/best/2 | $B = X_{gbest,G} + F \cdot (X_{r1,G} - X_{r2,G}) + F \cdot (X_{r3,G} - X_{r4,G})$
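The schemes in Table 1 generalize DE-style mutation to a whole-population matrix operation. As an illustration, a minimal NumPy sketch of the QUATRE/best/1 donor matrix; the function and parameter names are ours, and the row permutations stand in for the random individual indices $r_1$, $r_2$:

```python
import numpy as np

def quatre_best_1(X, gbest, F=0.7, rng=None):
    """Sketch of the donor matrix B = X_gbest + F * (X_r1 - X_r2).

    X is the (ps, d) population matrix and gbest the best individual found
    so far; each row of B is built from two randomly chosen rows of X.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    ps = X.shape[0]
    r1, r2 = rng.permutation(ps), rng.permutation(ps)  # random row picks
    return gbest + F * (X[r1] - X[r2])
```

With the scale factor F set to zero, every row of B collapses onto the global best, which is why F controls the exploration radius around it.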
Table 2. The expressions of the S-shaped families of transfer functions.
Name | Transfer Function
$S_1(x)$ | $T_1(x) = 1/(1+e^{-2x})$
$S_2(x)$ | $T_2(x) = 1/(1+e^{-x})$
$S_3(x)$ | $T_3(x) = 1/(1+e^{-x/2})$
$S_4(x)$ | $T_4(x) = 1/(1+e^{-x/3})$
Table 3. The expressions of the V-shaped families of transfer functions.
Name | Transfer Function
$V_1(x)$ | $T_1(x) = \left| \operatorname{erf}\left( \frac{\sqrt{\pi}}{2} x \right) \right|$
$V_2(x)$ | $T_2(x) = \left| \tanh(x) \right|$
$V_3(x)$ | $T_3(x) = \left| x/\sqrt{1+x^2} \right|$
$V_4(x)$ | $T_4(x) = \left| \frac{2}{\pi} \arctan\left( \frac{\pi}{2} x \right) \right|$
Table 4. The expressions of the U-shaped families of transfer functions.
Name | Transfer Function
$U_1(x)$ | $T_1(x) = \min(|x|^{1.5}, 1)$
$U_2(x)$ | $T_2(x) = \min(|x|^{2}, 1)$
$U_3(x)$ | $T_3(x) = \min(|x|^{3}, 1)$
$U_4(x)$ | $T_4(x) = \min(|x|^{4}, 1)$
Table 5. The expressions of the Z-shaped families of transfer functions.
Name | Transfer Function
$Z_1(x)$ | $T_1(x) = \sqrt{1-2^{x}}$
$Z_2(x)$ | $T_2(x) = \sqrt{1-5^{x}}$
$Z_3(x)$ | $T_3(x) = \sqrt{1-8^{x}}$
$Z_4(x)$ | $T_4(x) = \sqrt{1-20^{x}}$
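All four families play the same role: they map a real-valued component to a probability that drives the bit decision. A runnable sketch of one representative from each family (Tables 2–5), together with a common stochastic binarization rule; the rule shown (set the bit when a uniform random draw falls below the transfer value) is one standard choice, not necessarily the exact update used for every family in the paper, and $Z_1$ is only defined for $x \le 0$:

```python
import math

def s1(x):  # S-shaped: sigmoid with slope 2
    return 1.0 / (1.0 + math.exp(-2.0 * x))

def v1(x):  # V-shaped: |erf(sqrt(pi)/2 * x)|
    return abs(math.erf(math.sqrt(math.pi) / 2.0 * x))

def u1(x):  # U-shaped: min(|x|^1.5, 1)
    return min(abs(x) ** 1.5, 1.0)

def z1(x):  # Z-shaped: sqrt(1 - 2^x), valid for x <= 0
    return math.sqrt(1.0 - 2.0 ** x)

def binarize(x, transfer, rand):
    # set the bit to 1 when a uniform random number falls below T(x)
    return 1 if rand < transfer(x) else 0
```

Note the qualitative differences: S is monotone increasing through 0.5 at the origin, V and U are symmetric around zero, and Z decreases from 1 toward 0 as x approaches 0 from below.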
Table 6. The expressions of the improved S-shaped transfer functions.
Name | Transfer Function
$S_1^i(x)$ | $T_1^i(x) = \begin{cases} \frac{1/(1+e^{-2(x-0.5)}) - 0.5}{1/(1+e^{-5}) - 0.5} \cdot 0.5 + 0.5, & x \ge 0.5 \\ \frac{1/(1+e^{-2(x-0.5)}) - 0.5}{0.5 - 1/(1+e^{5})} \cdot 0.5 + 0.5, & x < 0.5 \end{cases}$
$S_2^i(x)$ | $T_2^i(x) = \begin{cases} \frac{1/(1+e^{-(x-0.5)}) - 0.5}{1/(1+e^{-2.5}) - 0.5} \cdot 0.5 + 0.5, & x \ge 0.5 \\ \frac{1/(1+e^{-(x-0.5)}) - 0.5}{0.5 - 1/(1+e^{2.5})} \cdot 0.5 + 0.5, & x < 0.5 \end{cases}$
$S_3^i(x)$ | $T_3^i(x) = \begin{cases} \frac{1/(1+e^{-(x-0.5)/2}) - 0.5}{1/(1+e^{-1.25}) - 0.5} \cdot 0.5 + 0.5, & x \ge 0.5 \\ \frac{1/(1+e^{-(x-0.5)/2}) - 0.5}{0.5 - 1/(1+e^{1.25})} \cdot 0.5 + 0.5, & x < 0.5 \end{cases}$
$S_4^i(x)$ | $T_4^i(x) = \begin{cases} \frac{1/(1+e^{-(x-0.5)/3}) - 0.5}{1/(1+e^{-2.5/3}) - 0.5} \cdot 0.5 + 0.5, & x \ge 0.5 \\ \frac{1/(1+e^{-(x-0.5)/3}) - 0.5}{0.5 - 1/(1+e^{2.5/3})} \cdot 0.5 + 0.5, & x < 0.5 \end{cases}$
Table 7. The expressions of the improved V-shaped transfer functions.
Name | Transfer Function
$V_1^i(x)$ | $T_1^i(x) = \left| \operatorname{erf}\left( \frac{\sqrt{\pi}}{2}(x-0.5) \right) \right| \big/ \operatorname{erf}\left( \frac{\sqrt{\pi}}{2} \cdot 2.5 \right)$
$V_2^i(x)$ | $T_2^i(x) = \left| \tanh(x-0.5) \right| / \tanh(2.5)$
$V_3^i(x)$ | $T_3^i(x) = \left| (x-0.5)/\sqrt{1+(x-0.5)^2} \right| \big/ \left( 2.5/\sqrt{1+2.5^2} \right)$
$V_4^i(x)$ | $T_4^i(x) = \left| \frac{2}{\pi} \arctan\left( \frac{\pi}{2}(x-0.5) \right) \right| \big/ \left( \frac{2}{\pi} \arctan\left( \frac{\pi}{2} \cdot 2.5 \right) \right)$
Table 8. The expressions of the improved U-shaped transfer functions.
Name | Transfer Function
$U_1^i(x)$ | $T_1^i(x) = \min(|x-0.5|^{1.5}, 1)$
$U_2^i(x)$ | $T_2^i(x) = \min(|x-0.5|^{2}, 1)$
$U_3^i(x)$ | $T_3^i(x) = \min(|x-0.5|^{3}, 1)$
$U_4^i(x)$ | $T_4^i(x) = \min(|x-0.5|^{4}, 1)$
Table 9. The expressions of the improved Z-shaped transfer functions.
Name | Transfer Function
$Z_1^i(x)$ | $T_1^i(x) = \sqrt{1-2^{x-0.5}} \big/ \sqrt{1-2^{-2.5}}$
$Z_2^i(x)$ | $T_2^i(x) = \sqrt{1-5^{x-0.5}} \big/ \sqrt{1-5^{-2.5}}$
$Z_3^i(x)$ | $T_3^i(x) = \sqrt{1-8^{x-0.5}} \big/ \sqrt{1-8^{-2.5}}$
$Z_4^i(x)$ | $T_4^i(x) = \sqrt{1-20^{x-0.5}} \big/ \sqrt{1-20^{-2.5}}$
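The improved functions shift each curve so its midpoint sits at $x = 0.5$ and rescale it to span $[0, 1]$ exactly at the ends of the input range. A sketch of the improved $S_1$ function from Table 6, assuming the $x \in [-2, 3]$ range implied by the 2.5 normalization constants (the function name is ours):

```python
import math

def s1_improved(x):
    """Improved S1: sigmoid shifted to be centred at x = 0.5, then
    rescaled so it reaches exactly 0 at x = -2 and 1 at x = 3."""
    f = 1.0 / (1.0 + math.exp(-2.0 * (x - 0.5)))
    if x >= 0.5:
        return (f - 0.5) / (1.0 / (1.0 + math.exp(-5.0)) - 0.5) * 0.5 + 0.5
    return (f - 0.5) / (0.5 - 1.0 / (1.0 + math.exp(5.0))) * 0.5 + 0.5
```

Unlike the plain sigmoid, which only approaches 0 and 1 asymptotically, the rescaled version hits both endpoints exactly, so bits can be set or cleared with certainty at the extremes of the search range.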
Table 10. Unimodal benchmark functions.
Name | Function | Search Space | D | $f_{min}$
Sphere | $f_1(x) = \sum_{i=1}^{n} x_i^2$ | [−100, 100] | 30 | 0
Schwefel's function 2.22 | $f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | [−10, 10] | 30 | 0
Schwefel's function 1.2 | $f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | [−100, 100] | 30 | 0
Schwefel's function 2.21 | $f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \}$ | [−100, 100] | 30 | 0
Rosenbrock | $f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | [−30, 30] | 30 | 0
Step | $f_6(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ | [−100, 100] | 30 | 0
Dejong's noisy | $f_7(x) = \sum_{i=1}^{n} i x_i^4 + rand[0, 1)$ | [−1.28, 1.28] | 30 | 0
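Two of these definitions, transcribed directly into executable form as a convenience sketch (the function names are ours):

```python
import numpy as np

def f1_sphere(x):
    # f1(x) = sum_i x_i^2, global minimum 0 at the origin
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def f5_rosenbrock(x):
    # f5(x) = sum_i [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2],
    # global minimum 0 at (1, ..., 1)
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))
```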
Table 11. Common multimodal benchmark functions.
Name | Function | Search Space | D | $f_{min}$
Schwefel | $f_8(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right)$ | [−500, 500] | 30 | −12,569
Rastrigin | $f_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]$ | [−5.12, 5.12] | 30 | 0
Ackley | $f_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e$ | [−32, 32] | 30 | 0
Griewank | $f_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | [−600, 600] | 30 | 0
Generalized penalized 1 | $f_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | [−50, 50] | 30 | 0
Generalized penalized 2 | $f_{13}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3 \pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$ | [−50, 50] | 30 | 0
Table 12. Multimodal benchmark functions in low dimension.
Name | Function | Search Space | D | $f_{min}$
Fifth of Dejong | $f_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | [−65, 65] | 2 | 1
Kowalik | $f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | [−5, 5] | 4 | 0.00030
Six-hump camel back | $f_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | [−5, 5] | 2 | −1.0316
Branin | $f_{17}(x) = \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos x_1 + 10$ | [−5, 5] | 2 | 0.398
Goldstein-Price | $f_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | [−2, 2] | 2 | 3
Hartman 1 | $f_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | [1, 3] | 3 | −3.86
Hartman 2 | $f_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | [0, 1] | 6 | −3.32
Shekel 1 | $f_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | [0, 10] | 4 | −10.1532
Shekel 2 | $f_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | [0, 10] | 4 | −10.4028
Shekel 3 | $f_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | [0, 10] | 4 | −10.5363
Table 13. The statistical results of BDE, ABGWO-V3a, BQUATE1-S2, BQUATE1-V2, BQUATE2-S2i, BQUATE2-V2i, and BPSO.
Function | BDE | ABGWO-V3a | BQUATE1-S2 | BQUATE1-V2 | BQUATE2-S2i | BQUATE2-V2i | BPSO
 | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD
f1 | 3.7333 / 1.8742 | 3.4333 / 1.3047 | 3.8333 / 0.5307 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 7.8333 / 1.1769
f2 | 3.5333 / 1.4559 | 2.7000 / 1.3170 | 3.6000 / 0.8137 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 8.1667 / 1.0532
f3 | 93.5333 / 69.4057 | 73.6667 / 63.7264 | 116.7000 / 38.7220 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 652.1333 / 178.1527
f4 | 1.0000 / 0.0000 | 1.0000 / 0.0000 | 0.9667 / 0.1826 | 0.0000 / 0.0000 | 1.0000 / 0.0000 | 1.0000 / 0.0000 | 1.0000 / 0.0000
f5 | 0.0000 / 0.0000 | 543.4667 / 140.4708 | 333.1667 / 79.2421 | 29.0000 / 0.0000 | 0.0000 / 0.0000 | 47.2000 / 63.5227 | 172.7667 / 233.0352
f6 | 13.3000 / 3.0783 | 13.9000 / 2.3134 | 14.6333 / 1.8705 | 7.5000 / 0.0000 | 7.5000 / 0.0000 | 7.5000 / 0.0000 | 24.2333 / 1.9989
f7 | 45.0334 / 18.4923 | 32.2667 / 16.4231 | 43.7976 / 7.8121 | 0.0001 / 0.0001 | 0.0022 / 0.0021 | 0.0376 / 0.0306 | 108.7332 / 13.3579
f8 | −25.2441 / 0.0000 | −22.6075 / 0.8765 | −18.2038 / 0.7808 | −19.5502 / 0.9293 | −25.2441 / 0.0000 | −25.2441 / 0.0000 | −17.1941 / 1.0525
f9 | 3.8667 / 1.5916 | 2.9000 / 1.2134 | 4.0000 / 0.5252 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 7.9667 / 0.9279
f10 | 1.2908 / 0.2408 | 1.2618 / 0.2606 | 1.3185 / 0.1427 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 1.9306 / 0.1250
f11 | 0.1725 / 0.0507 | 0.1254 / 0.0504 | 0.1020 / 0.0218 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.2978 / 0.0443
f12 | 2.3200 / 0.2834 | 2.1036 / 0.2164 | 2.2528 / 0.1022 | 1.6690 / 0.0000 | 1.6690 / 0.0000 | 1.6690 / 0.0000 | 3.0290 / 0.2758
f13 | 0.0000 / 0.0000 | 0.3133 / 0.1383 | 0.8500 / 0.0900 | 0.6867 / 0.0860 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.9300 / 0.1264
f14 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000
f15 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000
f16 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000
f17 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000
f18 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000
f19 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000
f20 | −0.1556 / 0.0280 | −0.1440 / 0.0433 | −0.1657 / 0.0000 | −0.1657 / 0.0000 | −0.1657 / 0.0000 | −0.1657 / 0.0000 | −0.1657 / 0.0000
f21 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −1.9948 / 1.8770
f22 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −3.1387 / 2.1192
f23 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −2.6071 / 2.0951
count | 6 + 6 | 3 + 6 | 4 + 6 | 14 + 6 | 15 + 6 | 14 + 6 | 1 + 6
Table 14. The statistical results of BDE, ABGWO-V3a, BQUATE1-U2, BQUATE1-Z2, BQUATE2-U2i, BQUATE2-Z2i, and BPSO.
Function | BDE | ABGWO-V3a | BQUATE1-U2 | BQUATE1-Z2 | BQUATE2-U2i | BQUATE2-Z2i | BPSO
 | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD
f1 | 3.7333 / 1.8742 | 3.4333 / 1.3047 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.1000 / 0.3051 | 7.8333 / 1.1769
f2 | 3.5333 / 1.4559 | 2.7000 / 1.3170 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.1000 / 0.3051 | 8.1667 / 1.0532
f3 | 93.5333 / 69.4057 | 73.6667 / 63.7264 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.6000 / 1.0700 | 652.1333 / 178.1527
f4 | 1.0000 / 0.0000 | 1.0000 / 0.0000 | 0.0000 / 0.0000 | 1.0000 / 0.0000 | 1.0000 / 0.0000 | 1.0000 / 0.0000 | 1.0000 / 0.0000
f5 | 0.0000 / 0.0000 | 543.4667 / 140.4708 | 29.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 172.7667 / 233.0352
f6 | 13.3000 / 3.0783 | 13.9000 / 2.3134 | 7.5000 / 0.0000 | 7.5000 / 0.0000 | 7.5000 / 0.0000 | 7.5667 / 0.3651 | 24.2333 / 1.9989
f7 | 45.0334 / 18.4923 | 32.2667 / 16.4231 | 0.0001 / 0.0001 | 0.0001 / 0.0001 | 0.0031 / 0.0031 | 1.2172 / 1.1935 | 108.7332 / 13.3579
f8 | −25.2441 / 0.0000 | −22.6075 / 0.8765 | −19.1294 / 0.9358 | −25.2441 / 0.0000 | −25.2441 / 0.0000 | −25.2441 / 0.0000 | −17.1941 / 1.0525
f9 | 3.8667 / 1.5916 | 2.9000 / 1.2134 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.1000 / 0.3051 | 7.9667 / 0.9279
f10 | 1.2908 / 0.2408 | 1.2618 / 0.2606 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0717 / 0.2188 | 1.9306 / 0.1250
f11 | 0.1725 / 0.0507 | 0.1254 / 0.0504 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0078 / 0.0092 | 0.2978 / 0.0443
f12 | 2.3200 / 0.2834 | 2.1036 / 0.2164 | 1.6690 / 0.0000 | 1.6690 / 0.0000 | 1.6690 / 0.0000 | 1.7043 / 0.0607 | 3.0290 / 0.2758
f13 | 0.0000 / 0.0000 | 0.3133 / 0.1383 | 0.7500 / 0.1196 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.9300 / 0.1264
f14 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000 | 12.6705 / 0.0000
f15 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000 | 0.1484 / 0.0000
f16 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000
f17 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000 | 27.7029 / 0.0000
f18 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000 | 600.0000 / 0.0000
f19 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000 | −0.3348 / 0.0000
f20 | −0.1556 / 0.0280 | −0.1440 / 0.0433 | −0.1657 / 0.0000 | −0.1657 / 0.0000 | −0.1657 / 0.0000 | −0.1657 / 0.0000 | −0.1657 / 0.0000
f21 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −4.9161 / 0.7619 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −5.0552 / 0.0000 | −1.9948 / 1.8770
f22 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −5.0877 / 0.0000 | −3.1387 / 2.1192
f23 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −4.7105 / 1.2754 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −5.1285 / 0.0000 | −2.6071 / 2.0951
count | 6 + 6 | 3 + 6 | 12 + 6 | 16 + 6 | 15 + 6 | 7 + 6 | 1 + 6
Table 15. The t-test results of the compared algorithms (QUATRE1-S2, QUATRE1-V2, QUATRE1-U2, QUATRE1-Z2, QUATRE2-S2i, QUATRE2-V2i, QUATRE2-U2i, and QUATRE2-Z2i) on ABGWO-V3a.
Function | QUATRE1-S2 | QUATRE1-V2 | QUATRE1-U2 | QUATRE1-Z2 | QUATRE2-S2i | QUATRE2-V2i | QUATRE2-U2i | QUATRE2-Z2i
f1 | = | + | + | + | + | + | + | +
f2 | + | + | + | + | + | + | +
f3 | + | + | + | + | + | + | +
f4 | = | = | = | = | = | = | = | =
f5 | + | + | + | + | + | + | + | +
f6 | = | + | + | + | + | + | + | +
f7 | + | + | + | + | + | + | +
f8 | + | + | + | + | +
f9 | + | + | + | + | + | + | +
f10 | = | + | + | + | + | + | + | +
f11 | = | + | + | + | + | + | + | +
f12 | + | + | + | + | + | + | +
f13 | + | + | + | + | +
f14 | = | = | = | = | = | = | = | =
f15 | = | = | = | = | = | = | = | =
f16 | = | = | = | = | = | = | = | =
f17 | = | = | = | = | = | = | = | =
f18 | = | = | = | = | = | = | = | =
f19 | = | = | = | = | = | = | = | =
f20 | + | + | + | + | + | + | + | +
f21 | = | = | = | = | = | = | = | =
f22 | = | = | = | = | = | = | = | =
f23 | = | = | = | = | = | = | = | =
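The +/−/= entries summarize per-function significance tests over the independent runs of each algorithm pair. The paper does not spell out the exact procedure here, so as a hedged illustration only, a stdlib-only sketch of one common way such a comparison can be computed; Welch's t statistic and the ~2.0 critical value (approximate 5% two-sided threshold for about 30 runs per side) are our assumptions:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def compare(a, b, t_crit=2.0):
    # '+' : a significantly smaller (better on minimization)
    # '-' : a significantly larger; '=' : no significant difference
    t = welch_t(a, b)
    if abs(t) < t_crit:
        return '='
    return '+' if t < 0 else '-'
```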
Table 16. The t-test results of the compared algorithms (QUATRE1-S2, QUATRE1-V2, QUATRE1-U2, QUATRE1-Z2, QUATRE2-S2i, QUATRE2-V2i, QUATRE2-U2i, and QUATRE2-Z2i) on BDE.
Function | QUATRE1-S2 | QUATRE1-V2 | QUATRE1-U2 | QUATRE1-Z2 | QUATRE2-S2i | QUATRE2-V2i | QUATRE2-U2i | QUATRE2-Z2i
f1 | = | + | + | + | + | + | + | +
f2 | = | + | + | + | + | + | + | +
f3 | = | + | + | + | + | + | + | +
f4 | = | = | = | = | = | = | = | =
f5 | = | = | = | =
f6 | = | + | + | + | + | + | + | +
f7 | = | + | + | + | + | + | + | +
f8 | = | = | = | = | =
f9 | = | + | + | + | + | + | + | +
f10 | = | + | + | + | + | + | + | +
f11 | + | + | + | + | + | + | + | +
f12 | = | + | + | + | + | + | + | +
f13 | = | = | = | = | =
f14 | = | = | = | = | = | = | = | =
f15 | = | = | = | = | = | = | = | =
f16 | = | = | = | = | = | = | = | =
f17 | = | = | = | = | = | = | = | =
f18 | = | = | = | = | = | = | = | =
f19 | = | = | = | = | = | = | = | =
f20 | = | = | = | = | = | = | = | =
f21 | = | = | = | = | = | = | = | =
f22 | = | = | = | = | = | = | = | =
f23 | = | = | = | = | = | = | = | =
Table 17. The time consumption of the algorithms (BDE, ABGWO-V3a, BQUATE1-S2, BQUATE1-V2, BQUATE1-U2, BQUATE1-Z2, BQUATE2-S4i, BQUATE2-V4i, BQUATE2-U4i, and BQUATE2-Z4i).
Function | BDE | ABGWO-V3a | BQUATE1-S2 | BQUATE1-V2 | BQUATE1-U2 | BQUATE1-Z2 | BQUATE2-S4i | BQUATE2-V4i | BQUATE2-U4i | BQUATE2-Z4i
f1 | 0.6274 | 416.2509 | 3.4680 | 3.4155 | 5.3539 | 3.6765 | 6.7833 | 6.3142 | 5.9397 | 6.6418
f2 | 0.7099 | 316.7380 | 2.5546 | 3.2635 | 5.2737 | 3.7985 | 6.6544 | 6.3128 | 6.0067 | 6.5947
f3 | 2.1763 | 17.2174 | 3.1963 | 3.8897 | 6.1809 | 4.6047 | 7.6322 | 7.0619 | 6.1531 | 7.3415
f4 | 0.6807 | 16.7796 | 2.6008 | 3.3246 | 5.3231 | 3.9412 | 6.8011 | 6.2822 | 5.6701 | 6.5871
f5 | 0.8110 | 17.6103 | 2.7293 | 3.3003 | 5.4441 | 4.2954 | 6.9998 | 6.3329 | 5.7365 | 6.5454
f6 | 0.7057 | 15.4679 | 2.5756 | 3.2538 | 5.3656 | 4.4473 | 7.2255 | 6.2649 | 5.6854 | 6.6404
f7 | 0.7183 | 15.7941 | 2.5958 | 3.2864 | 5.4784 | 4.2904 | 6.7943 | 6.2623 | 5.6009 | 6.6486
f8 | 0.7162 | 14.9921 | 2.5957 | 3.4029 | 5.4349 | 4.3990 | 7.5326 | 6.2789 | 5.7329 | 6.4835
f9 | 0.7161 | 17.7780 | 2.5965 | 3.2933 | 5.3374 | 4.0391 | 8.2188 | 6.3012 | 6.6056 | 6.6817
f10 | 0.7351 | 15.4774 | 2.6108 | 3.2774 | 5.4029 | 3.7106 | 7.8801 | 6.3021 | 6.4067 | 6.6676
f11 | 0.7716 | 14.5106 | 2.6574 | 3.2962 | 5.3558 | 3.7291 | 8.3399 | 6.5746 | 6.3531 | 6.6952
f12 | 1.2479 | 14.7338 | 3.0666 | 3.5307 | 5.6124 | 4.0569 | 7.9435 | 6.4522 | 6.6786 | 7.2990
f13 | 1.2632 | 14.7007 | 3.1742 | 3.6226 | 5.9234 | 4.0865 | 8.1302 | 6.4171 | 6.5651 | 6.8091
f14 | 0.6115 | 1.8261 | 1.4992 | 1.0958 | 1.4449 | 1.2067 | 1.8882 | 1.5145 | 1.5213 | 1.5724
f15 | 0.6694 | 2.0374 | 0.5284 | 0.6029 | 1.1020 | 0.7056 | 1.3289 | 1.0319 | 1.0575 | 1.0909
f16 | 0.5553 | 1.0338 | 0.4202 | 0.3506 | 0.6607 | 0.4244 | 0.9298 | 0.5916 | 0.6031 | 0.6352
f17 | 0.5428 | 1.0266 | 0.3377 | 0.3545 | 0.5409 | 0.4188 | 0.5847 | 0.5956 | 0.5930 | 0.6113
f18 | 0.5592 | 1.0166 | 1.5641 | 0.3535 | 0.5985 | 0.4313 | 0.5706 | 0.5909 | 0.5966 | 0.6220
f19 | 0.7770 | 1.6017 | 1.1716 | 0.5437 | 1.0079 | 0.6612 | 0.8813 | 0.9154 | 0.9052 | 0.9389
f20 | 0.7973 | 3.0584 | 2.4736 | 0.8634 | 1.4021 | 1.0917 | 1.4618 | 1.5124 | 1.5285 | 1.5924
f21 | 0.8798 | 2.1233 | 1.6826 | 0.6974 | 1.0730 | 0.8095 | 1.1294 | 1.1596 | 1.1616 | 1.2014
f22 | 0.9692 | 2.1611 | 1.3334 | 0.7354 | 1.2336 | 0.8218 | 1.1837 | 1.2024 | 1.2066 | 1.2449
f23 | 1.0245 | 2.2511 | 1.1589 | 0.8068 | 1.1845 | 0.8869 | 1.4182 | 1.2899 | 1.2973 | 1.3347
Table 18. The classification performance of BQUATE2-S1i-SVM, BQUATE2-V1i-SVM, BQUATE1-U1-SVM, BQUATE1-Z1-SVM, SVM, PCA-SVM, and LDA-SVM on Indian Pines.
Class | BQUATE2-S1i-SVM | BQUATE2-V1i-SVM | BQUATE1-U1-SVM | BQUATE1-Z1-SVM | SVM | PCA-SVM | LDA-SVM
 | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD | AVG / STD
Alfalfa | 0.5281 / 0.1412 | 0.4837 / 0.1407 | 0.5484 / 0.1489 | 0.5368 / 0.1400 | 0.4913 / 0.1319 | 0.6820 / 0.0403 | 0.7228 / 0.1128
Corn-no till | 0.7477 / 0.0447 | 0.7496 / 0.0406 | 0.7408 / 0.0401 | 0.7457 / 0.0374 | 0.6766 / 0.0420 | 0.6065 / 0.0399 | 0.7836 / 0.0389
Corn-min till | 0.6954 / 0.1016 | 0.7006 / 0.0855 | 0.7016 / 0.0886 | 0.7060 / 0.0836 | 0.5952 / 0.0443 | 0.3659 / 0.0801 | 0.6062 / 0.0400
Corn | 0.4708 / 0.0757 | 0.4636 / 0.0552 | 0.4780 / 0.0685 | 0.4710 / 0.0594 | 0.4530 / 0.0526 | 0.3118 / 0.0575 | 0.4452 / 0.0341
Grass/pasture | 0.8923 / 0.0303 | 0.8909 / 0.0265 | 0.8883 / 0.0295 | 0.8863 / 0.0276 | 0.8561 / 0.0265 | 0.8380 / 0.0302 | 0.8804 / 0.0169
Grass/trees | 0.9217 / 0.0184 | 0.9192 / 0.0139 | 0.9172 / 0.0156 | 0.9231 / 0.0180 | 0.9087 / 0.0156 | 0.9272 / 0.0324 | 0.9605 / 0.0170
Grass/pasture-mowed | 0.7663 / 0.1371 | 0.8224 / 0.1325 | 0.8261 / 0.1291 | 0.7637 / 0.1574 | 0.6573 / 0.1286 | 0.3899 / 0.0549 | 0.5845 / 0.1107
Hay-windrowed | 0.9915 / 0.0049 | 0.9912 / 0.0029 | 0.9933 / 0.0035 | 0.9908 / 0.0057 | 0.9925 / 0.0039 | 0.9938 / 0.0049 | 0.9949 / 0.0045
Oats | 0.4923 / 0.0940 | 0.4808 / 0.1046 | 0.4912 / 0.0916 | 0.4931 / 0.1025 | 0.4082 / 0.0677 | 0.3652 / 0.1060 | 0.6181 / 0.1453
Soybean-no till | 0.6434 / 0.0359 | 0.6432 / 0.0365 | 0.6477 / 0.0363 | 0.6463 / 0.0370 | 0.6373 / 0.0443 | 0.5449 / 0.0569 | 0.6072 / 0.0346
Soybean-min till | 0.8350 / 0.0250 | 0.8328 / 0.0245 | 0.8286 / 0.0258 | 0.8337 / 0.0262 | 0.8161 / 0.0280 | 0.7026 / 0.0480 | 0.8039 / 0.0178
Soybean-clean till | 0.6566 / 0.0528 | 0.6578 / 0.0550 | 0.6540 / 0.0530 | 0.6635 / 0.0538 | 0.6012 / 0.0411 | 0.5602 / 0.0567 | 0.6359 / 0.0758
Wheat | 0.9110 / 0.0433 | 0.9138 / 0.0397 | 0.9093 / 0.0424 | 0.9067 / 0.0450 | 0.9018 / 0.0493 | 0.8012 / 0.0438 | 0.9862 / 0.0061
Woods | 0.9436 / 0.0165 | 0.9409 / 0.0183 | 0.9419 / 0.0170 | 0.9418 / 0.0180 | 0.9405 / 0.0165 | 0.9325 / 0.0103 | 0.9753 / 0.0086
Bldg-Grass-Tree-Drives | 0.6699 / 0.0845 | 0.6571 / 0.0856 | 0.6547 / 0.0818 | 0.6637 / 0.0745 | 0.6052 / 0.0794 | 0.6559 / 0.0879 | 0.6724 / 0.0379
Stones-steel towers | 0.8387 / 0.0424 | 0.8486 / 0.0540 | 0.8391 / 0.0589 | 0.8414 / 0.0529 | 0.8176 / 0.0685 | 0.7817 / 0.1337 | 0.9727 / 0.0205
OA | 0.7815 / 0.0146 | 0.7805 / 0.0151 | 0.7799 / 0.0147 | 0.7827 / 0.0141 | 0.7473 / 0.0112 | 0.6717 / 0.0205 | 0.7739 / 0.0085
AA | 0.7503 / 0.0160 | 0.7498 / 0.0158 | 0.7538 / 0.0169 | 0.7508 / 0.0158 | 0.7099 / 0.0121 | 0.6537 / 0.0192 | 0.7656 / 0.0134
Kappa | 0.7522 / 0.0158 | 0.7510 / 0.0163 | 0.7504 / 0.0160 | 0.7535 / 0.0153 | 0.7139 / 0.0121 | 0.6289 / 0.0219 | 0.7442 / 0.0090
Chu, S.-C.; Zhuang, Z.; Li, J.; Pan, J.-S. A Novel Binary QUasi-Affine TRansformation Evolutionary (QUATRE) Algorithm. Appl. Sci. 2021, 11, 2251. https://doi.org/10.3390/app11052251
