Article

Historical Elite Differential Evolution Based on Particle Swarm Optimization Algorithm for Texture Optimization with Application in Particle Physics

by Emmanuel Martínez-Guerrero 1,†, Pedro Lagos-Eulogio 2, Pedro Miranda-Romagnoli 2,*,†, Roberto Noriega-Papaqui 2 and Juan Carlos Seck-Tuoh-Mora 3,*

1 Área Académica de Computación y Electrónica, Universidad Autónoma del Estado de Hidalgo, Pachuca 42184, Mexico
2 Área Académica de Matemáticas y Física, Universidad Autónoma del Estado de Hidalgo, Pachuca 42184, Mexico
3 Área Académica de Ingeniería y Arquitectura, Universidad Autónoma del Estado de Hidalgo, Pachuca 42184, Mexico
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2024, 14(19), 9110; https://doi.org/10.3390/app14199110
Submission received: 12 August 2024 / Revised: 19 September 2024 / Accepted: 20 September 2024 / Published: 9 October 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
Within the phenomenology of particle physics, the theoretical model of 4-zero textures is validated using a chi-square criterion that compares experimental data with the computational results of the model. Traditionally, analytical methods that often imply simplifications, combined with computational analysis, have been used to validate texture models. In this paper, we propose a new meta-heuristic variant of the differential evolution algorithm that incorporates aspects of the particle swarm optimization algorithm, called HE-DEPSO, to obtain chi-square values below a bound that exhaustive and traditional algorithms cannot reach. The results show that the proposed algorithm can optimize the chi-square function according to the required criteria. We compare simulated data with experimental data in the allowed search region, thereby validating the 4-zero texture model.

1. Introduction

Broadly speaking, physics can be classified into three areas: the theoretical part, which, through mathematical formalism, explains and predicts physical phenomena; the experimental part, which groups the disciplines related to data acquisition and the design and performance of experiments; and, finally, the aspect that links these two, known as phenomenology, which deals with (1) validating theoretical models, (2) investigating how to measure their parameters, (3) investigating how to distinguish one model from another, and (4) studying the experimental consequences of these models. This paper takes a phenomenological approach by providing a solution to the validation of a theoretical model of particle physics whose aim is to unveil the mechanisms of fermion mass generation and to reproduce the elements of the Cabibbo–Kobayashi–Maskawa unitary matrix $V_{CKM}$, known as the mixing matrix, which describes the mixing of the different types of quarks during weak interactions by encapsulating the probabilities of transitions between these quark types (see, for example, the pertinent introductory reviews [1,2,3]).
The beginning of these models goes back to the first years of the 1970s, shortly after the establishment of the standard model (SM) of particle physics. Since then, different approaches have been developed in the context of theoretical and phenomenological models, which can be broadly classified as follows: radiative mechanisms, where fermion masses come from contributions of processes generated by new particles added to the SM [4,5]; textures, where the mass matrix has zeros in some entries [6,7,8]; symmetries between families, where a mathematical group symmetry is added to the theory [9,10]; and Seesaw mechanisms, an elegant way of generating small masses for neutrinos [11,12,13]. These approaches are phenomenologically interrelated; for example, when an extra group symmetry is added to the theory, it is common to arrive at a texture structure for the mass matrices [14,15], and in linear and inverse Seesaw models a texture structure is used to obtain the neutrino masses [16].
The texture formalism was born by considering that certain entries of the mass matrix are zero, such that the matrix that diagonalizes it, and hence the matrix $V_{CKM}$, can be computed analytically. In 1977, Harald Fritzsch created this formalism by using 6-zero Hermitian textures as a viable model [17]; in 2005, with the experimental data of that year, he found that 4-zero Hermitian textures were viable for generating the quark masses and the mixing matrix [18]. The analytical approach is very complex, so certain simplifying assumptions are made; however, the results obtained are not accurate, which is why numerical approaches have recently been used to obtain more precise solutions. Given the precision of the current experimental data, it is worthwhile to reassess the feasibility of these texture models, building on early studies of the 4-zero mass matrix [19] and further exploring their potential to align with experimental data.
There are numerical works on 4-zero texture models; however, the detailed numerical descriptions of the techniques and algorithms used are not provided by their authors [20]. The numerical analysis of texture models requires a $\chi^2$ criterion that establishes when the theoretical part of the model agrees with the experimental part [21]. That is, to validate the model, a function (which we will call $\chi^2(X)$) is built, and permissible values of the free parameters $X$ of the model must be found such that this function takes the minimum possible value greater than 0 but less than 1. In other words, it is necessary to optimize said function under a certain threshold. For the particular case of the function $\chi^2(X)$ obtained from the texture formalism, the difficulty comes from the cumbersome algebraic manipulation of the expressions involved, which complicates the application of classical optimization techniques; thus, alternative optimization techniques are required.
To the best of the authors’ knowledge, the works where bio-inspired optimization algorithms have been used within particle physics are as follows: in experimental contexts, where particle swarm optimization algorithms (PSOs), as well as genetic algorithms (GAs), have been implemented within the hyperparameter optimization of machine-learning (ML) models used in the analysis of data obtained in high-energy physics experiments [22]; in the optimization of the design of particle accelerators, where the differential evolution (DE) algorithm is quite effective [23,24]; and in phenomenology, where genetic algorithms have been used to discriminate models from supersymmetric theories [25].
As can be seen, the incursion of bio-inspired optimization algorithms in particle physics has been very limited, even when the results are favorable. For this reason, further application of these techniques and algorithms in this type of particle physics area, especially in texture formalism, is of great interest.
The DE algorithm is one of the evolutionary algorithms that has stood out the most in recent times due to its simplicity, power, and efficiency [26,27]; however, like other evolutionary algorithms, it is likely to suffer premature convergence and stagnation in local minima [26,28]. The strategies implemented to solve these problems can be classified as follows [29]:
  • Change the mutation strategy. The mutation phase of the DE algorithm is important since it allows the integration of new individuals into the existing population, thus influencing its performance. Algorithms such as CoDE [30] have introduced new mutation strategies and combined several existing ones in order to improve the efficiency of the DE algorithm.
  • Change the parameter control. The DE algorithm is sensitive to the configuration of its main parameters: the scale factor F and the crossover probability C r  [26,31]. Self-adaptive control of these parameters has been shown to improve the performance of the DE algorithm significantly. In this sense, the SHADE [32] and SaDE [33] algorithms represent two fairly well-known variants.
  • Incorporation of population mechanisms. The way in which the population is handled in the DE algorithm can improve its performance. Techniques such as genotypic topology [34], opposition-based initialization [35], and external population archives, as seen in JADE [36], have shown positive effects.
  • Hybridization with other optimization algorithms. One way to improve the performance of the DE algorithm is to take advantage of the operators’ strengths from other algorithms and incorporate them into the structure of the DE algorithm through a hybridization process [27,37]. Hybridization with other computational intelligence algorithms, such as artificial neural networks (ANNs), ant colony optimization (ACO), and particle swarm optimization (PSO), has marked a trend within the last decade [27,38].
The self-adaptive differential evolution algorithm based on particle swarm optimization (DEPSO) [39] is a recent variant of the DE algorithm that integrates the PSO mutation strategy within its structure. DEPSO employs a probabilistic selection technique to choose between two mutation strategies: a new mutation strategy with an elite file called DE/e-rand/1, a modification of the DE/rand/1 strategy, and the mutation strategy of the PSO algorithm. This probabilistic selection technique enables DEPSO to improve the balance between exploitation and exploration, resulting in significantly better performance compared to both DE and PSO on various single-objective optimization problems.
This paper proposes a new variant of the DEPSO algorithm called historical elite differential evolution based on particle swarm optimization (HE-DEPSO) to improve optimization performance in solving complex single-objective problems, with a specific focus on optimizing the $\chi^2$ criterion for the 4-zero texture model of high-energy physics. The proposed variant aims to exploit the areas of opportunity found in DEPSO, specifically introducing a new mutation strategy named DE/current-to-EHE/1, which utilizes information from the elite individuals of the population and incorporates historical data from the evolutionary process to improve the balance between exploration and exploitation, particularly during the early stages of the evolutionary process. Additionally, HE-DEPSO employs the self-adaptive parameter control mechanism of the SHADE algorithm to reduce the sensitivity of the algorithm's parameters. To test the HE-DEPSO algorithm's performance, it was compared against other optimization algorithms, including DE, PSO, CoDE, SHADE, and DEPSO, using the CEC 2017 single-objective benchmark function set; HE-DEPSO outperformed the other algorithms in terms of solution quality. Finally, the validation of the 4-zero texture model was conducted, optimizing the $\chi^2$ function and, at the same time, comparing the performance of our proposal against the DE, PSO, CoDE, SHADE, and DEPSO algorithms for this particular application. The results are encouraging and expand the use of bio-inspired methods in high-energy physics by integrating a metaheuristic approach.
The remainder of the paper is structured as follows: Section 2 contains the definition of the problem to be solved, including the definition of the χ 2 criterion; Section 3 reviews the versions of the PSO and DEPSO algorithms used here. Section 4 explains our proposal, the HE-DEPSO algorithm; Section 5 subjects our algorithm to benchmark tests, while Section 6 presents the validation problem of the 4-zero texture model. Finally, in Section 7, the conclusions are presented.

2. Problem Definition

Within the scope of the SM, the quark masses come from Hermitian 3 × 3 matrices known as mass matrices (one for u-type quarks and another for d-type quarks) [40,41,42]: the masses are the absolute values of their eigenvalues, and the matrix $V_{CKM}$ is the product of the matrix that diagonalizes the u-type quark mass matrix and the matrix that diagonalizes the d-type quark mass matrix. However, due to the mathematical formalism used, the mass matrices remain entirely unknown and, consequently, the quark masses and the mixing matrix $V_{CKM}$ cannot be theoretically predicted; the experiment provides the numerical values of these quantities.
The mixing matrix V C K M can be written in a general way as:
$$V_{CKM} = \begin{pmatrix} (V_{CKM})_{ud} & (V_{CKM})_{us} & (V_{CKM})_{ub} \\ (V_{CKM})_{cd} & (V_{CKM})_{cs} & (V_{CKM})_{cb} \\ (V_{CKM})_{td} & (V_{CKM})_{ts} & (V_{CKM})_{tb} \end{pmatrix}$$
and is a unitary matrix containing information about the probability of transition between u-type quarks (left-hand index) and d-type quarks (right-hand index) through the weak interaction [43]. That is, $|(V_{CKM})_{us}|$ quantifies the transition probability between the quark u and the quark s through the interaction with the boson $W^\pm$. Over many years and different collaborations, the magnitudes of the elements of the mixing matrix $V_{CKM}$ have been experimentally measured with an accuracy of up to $10^{-5}$, and it is well known that four quantities (three angles and one phase) are needed to have a parametrization that adequately describes the matrix $V_{CKM}$ [3,44]. In this work, we choose the Chau–Keung parametrization [3]; the three corresponding angles $\theta_{13}$, $\theta_{12}$, and $\theta_{23}$ are obtained from the three magnitudes $|(V_{CKM})_{us}|$, $|(V_{CKM})_{ub}|$, and $|(V_{CKM})_{cb}|$, and the phase $\delta_{13}$ is obtained from the Jarlskog invariant ($J$) through the following relations:
$$\sin\theta_{13} = |(V_{CKM})_{ub}|$$
$$\sin\theta_{12} = \frac{|(V_{CKM})_{us}|}{\sqrt{1 - |(V_{CKM})_{ub}|^2}}$$
$$\sin\theta_{23} = \frac{|(V_{CKM})_{cb}|}{\sqrt{1 - |(V_{CKM})_{ub}|^2}}$$
$$\sin\delta_{13} = \frac{J}{\sqrt{1 - \sin^2\theta_{12}}\,\sin\theta_{12}\,\big[1 - \sin^2\theta_{13}\big]\sin\theta_{13}\,\sqrt{1 - \sin^2\theta_{23}}\,\sin\theta_{23}}$$
where $J = \mathrm{Im}(V_{us} V_{cb} V_{ub}^* V_{cs}^*)$. Hence, we choose $|(V_{CKM})_{us}|$, $|(V_{CKM})_{ub}|$, $|(V_{CKM})_{cb}|$, and $J$ as independent quantities.
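To make the mapping concrete, the following Python sketch recovers the three angles and the phase from the four independent quantities via Equations (2)–(5). It is a minimal illustration: the input values are rounded placeholders, not the Appendix A data.

```python
import math

# Illustrative inputs (placeholders, not the Appendix A values)
V_us, V_ub, V_cb, J = 0.2245, 0.00382, 0.0408, 3.08e-5

s13 = V_ub                                  # Eq. (2)
s12 = V_us / math.sqrt(1.0 - V_ub**2)       # Eq. (3)
s23 = V_cb / math.sqrt(1.0 - V_ub**2)       # Eq. (4)

# Eq. (5): sin(delta_13) in terms of the Jarlskog invariant J
sin_d13 = J / (math.sqrt(1.0 - s12**2) * s12
               * (1.0 - s13**2) * s13
               * math.sqrt(1.0 - s23**2) * s23)

theta12, theta13, theta23 = math.asin(s12), math.asin(s13), math.asin(s23)
delta13 = math.asin(sin_d13)
print(theta12, theta13, theta23, delta13)
```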
Within the SM context, the mixing matrix, V C K M , is defined by [3]:
$$V_{CKM} = U_u^\dagger U_d$$
where U u is the matrix that diagonalizes the mass matrix of the u-type quarks and U d is the matrix that diagonalizes the d-type quarks. It is here where the texture formalism is born, which consists of proposing structures with zeros in some entries of the mass matrix in such a way that we can find the matrices that diagonalize it and be able to calculate analytically the mixing matrix V C K M and validate the chosen texture structure.
Without loss of generality, the mass matrices $M_u$ and $M_d$ are considered Hermitian, so that the general mass structure has the form:
$$M_q = \begin{pmatrix} E_q & D_q & F_q \\ D_q^* & C_q & B_q \\ F_q^* & B_q^* & A_q \end{pmatrix}$$
where the index $q$ runs over the labels $u, d$. The elements $A_q$, $C_q$, and $E_q$ are real, while $B_q$, $D_q$, and $F_q$ are complex and are usually written in polar form $Z_q = |Z_q|\, e^{i\phi_{Z_q}}$, where $|Z_q|$ is the magnitude and $\phi_{Z_q}$ is the angular phase ($Z = B, D, F$).
A matrix of the type 4-zero texture [18] is formed from the above matrix by taking the entries ( 1 , 1 ) , ( 1 , 3 ) , and ( 3 , 1 ) equal to zero. Thus, we arrive at the following matrix structure:
$$M_q = \begin{pmatrix} 0 & D_q & 0 \\ D_q^* & C_q & B_q \\ 0 & B_q^* & A_q \end{pmatrix}$$
In references [6,18], it is shown that this matrix can be diagonalized by a unitary matrix as follows:
$$U_q^\dagger M_q U_q = O_q^T P_q M_q P_q^\dagger O_q = \mathrm{diag}\{\lambda_1^q, \lambda_2^q, \lambda_3^q\}$$
where $\mathrm{diag}\{\lambda_1^q, \lambda_2^q, \lambda_3^q\}$ denotes a diagonal matrix and $\lambda_i^q$ denotes each of the three eigenvalues of $M_q$. The matrices $P_q$ and $O_q$ are given by:
$$P_q = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{i\phi_{D_q}} & 0 \\ 0 & 0 & e^{i(\phi_{D_q} + \phi_{B_q})} \end{pmatrix}$$
and
$$O_q = \begin{pmatrix}
\sqrt{\frac{\lambda_2^q \lambda_3^q (A_q - \lambda_1^q)}{A_q (\lambda_2^q - \lambda_1^q)(\lambda_3^q - \lambda_1^q)}} &
\eta_q \sqrt{\frac{\lambda_1^q \lambda_3^q (\lambda_2^q - A_q)}{A_q (\lambda_2^q - \lambda_1^q)(\lambda_3^q - \lambda_2^q)}} &
\sqrt{\frac{\lambda_1^q \lambda_2^q (A_q - \lambda_3^q)}{A_q (\lambda_3^q - \lambda_1^q)(\lambda_3^q - \lambda_2^q)}} \\[2ex]
\eta_q \sqrt{\frac{\lambda_1^q (\lambda_1^q - A_q)}{(\lambda_2^q - \lambda_1^q)(\lambda_3^q - \lambda_1^q)}} &
\sqrt{\frac{\lambda_2^q (A_q - \lambda_2^q)}{(\lambda_2^q - \lambda_1^q)(\lambda_3^q - \lambda_2^q)}} &
\sqrt{\frac{\lambda_3^q (\lambda_3^q - A_q)}{(\lambda_3^q - \lambda_1^q)(\lambda_3^q - \lambda_2^q)}} \\[2ex]
\eta_q \sqrt{\frac{\lambda_1^q (A_q - \lambda_2^q)(A_q - \lambda_3^q)}{A_q (\lambda_2^q - \lambda_1^q)(\lambda_3^q - \lambda_1^q)}} &
\sqrt{\frac{\lambda_2^q (A_q - \lambda_1^q)(\lambda_3^q - A_q)}{A_q (\lambda_2^q - \lambda_1^q)(\lambda_3^q - \lambda_2^q)}} &
\sqrt{\frac{\lambda_3^q (A_q - \lambda_1^q)(A_q - \lambda_2^q)}{A_q (\lambda_3^q - \lambda_1^q)(\lambda_3^q - \lambda_2^q)}}
\end{pmatrix}$$
Taking λ 3 q > 0 and A q > 0 , the relations between λ i q and the physical masses of the quarks are:
$$(\lambda_1^u, \lambda_2^u, \lambda_3^u) = (\eta_u m_u, -\eta_u m_c, m_t)$$
$$(\lambda_1^d, \lambda_2^d, \lambda_3^d) = (\eta_d m_d, -\eta_d m_s, m_b)$$
where $m_u$, $m_c$, $m_t$, $m_d$, $m_s$, and $m_b$ are the masses of the u, c, t, d, s, and b quarks, respectively; their experimental values are presented in Appendix A. In this work, the same 4-zero texture structure is considered for both the u-type and d-type quark mass matrices (a parallel mass structure), and the index $q$ takes the two values $q = u$ and $q = d$.
From Equation (9), it can be seen that the elements of the matrix $O_q$ depend on the free parameters $\eta_q$ and $A_q$. The first takes the values $+1$ and $-1$ and indicates which of the first two eigenvalues of the mass matrix is negative: when $\eta_q = -1$, the first eigenvalue $\lambda_1^q$ is negative and the second eigenvalue $\lambda_2^q$ is positive, and when $\eta_q = +1$, $\lambda_1^q$ is positive and $\lambda_2^q$ is negative. The combinations of signs of the $\eta_u$ and $\eta_d$ parameters define the different cases of study to be considered (see Table 1). The second pair of free parameters are $A_u$ and $A_d$, whose values are restricted to the intervals $m_c < A_u < m_t$ and $m_s < A_d < m_b$ to ensure that the elements of the $O_u$ and $O_d$ matrices are real.
From the above, the mixing matrix predicted by the 4-zero texture model is given by:
$$V_{CKM}^{th} = O_u^T P_u P_d^\dagger O_d$$
in an explicit form:
$$(V_{CKM}^{th})_{ij} = (O_u)_{1i}(O_d)_{1j} + (O_u)_{2i}(O_d)_{2j}\, e^{i\phi_1} + (O_u)_{3i}(O_d)_{3j}\, e^{i(\phi_1 + \phi_2)}$$
where the phases ϕ 1 and ϕ 2 are defined as:
$$\phi_1 = \phi_{D_u} - \phi_{D_d}, \qquad \phi_2 = \phi_{B_u} - \phi_{B_d}$$
These phases are measured in radians, with principal argument in $[0, 2\pi]$, and the indices $i$ and $j$ run over $(u, c, t)$ and $(d, s, b)$, respectively.
The magnitude of the elements of the mixing matrix is given by:
$$|(V_{CKM}^{th})_{ij}| = \Big\{ [(O_u)_{1i}(O_d)_{1j}]^2 + [(O_u)_{2i}(O_d)_{2j}]^2 + [(O_u)_{3i}(O_d)_{3j}]^2 + 2\,(O_u)_{1i}(O_d)_{1j}\,(O_u)_{2i}(O_d)_{2j}\cos\phi_1 + 2\,(O_u)_{1i}(O_d)_{1j}\,(O_u)_{3i}(O_d)_{3j}\cos(\phi_1 + \phi_2) + 2\,(O_u)_{2i}(O_d)_{2j}\,(O_u)_{3i}(O_d)_{3j}\cos\phi_2 \Big\}^{1/2}$$
At this point, it is essential to emphasize that analytical expressions are obtained for the elements of the mixing matrix predicted by the 4-zero texture formalism; with this information, it is possible to construct a more complete theory of the standard model where the origin of the mixing matrix V C K M is explained.
It only remains to validate the theoretical model of 4-zero textures, that is, to find the ranges of the free parameters $A_u$, $A_d$, $\phi_1$, and $\phi_2$ that agree with the experimental values of the matrix $V_{CKM}$. For this, we define a function $\chi^2(X)$ and use a chi-square criterion [45,46] established by:
$$0 < \frac{\chi^2(X)}{4} < 1$$
where $X = (A_u, A_d, \phi_1, \phi_2)$ and
$$\chi^2(X) = \frac{\big(|(V_{CKM}^{th})_{us}(X)| - |V_{us}|\big)^2}{\sigma_{V_{us}}^2} + \frac{\big(|(V_{CKM}^{th})_{ub}(X)| - |V_{ub}|\big)^2}{\sigma_{V_{ub}}^2} + \frac{\big(|(V_{CKM}^{th})_{cb}(X)| - |V_{cb}|\big)^2}{\sigma_{V_{cb}}^2} + \frac{\big(J^{th}(X) - J\big)^2}{\sigma_{J}^2}$$
The superscript "th" denotes the theoretical quantities given by Equation (15), and the quantities without superscript are the experimental data with uncertainties $\sigma_{V_{kl}}^2$ (see Equations (A1) and (A2)).
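To make the construction explicit, the following Python sketch assembles $O_q$ from Equation (9), the complex mixing matrix from Equation (14), and the $\chi^2$ function of Equation (17). It is a direct reading of the formulas above under stated assumptions: $\eta_u = \eta_d = -1$ is fixed for illustration, and the experimental central values and uncertainties are placeholders for the Appendix A data, so the printed number is not a physics result.

```python
import numpy as np

# Placeholder experimental inputs (central value, sigma); see Appendix A.
EXP = {'Vus': (0.2245, 8e-4), 'Vub': (3.82e-3, 2.4e-4),
       'Vcb': (4.08e-2, 1.4e-3), 'J': (3.08e-5, 1.5e-6)}

# Placeholder quark masses (GeV); eta_u = eta_d = -1 assumed for illustration.
mu, mc, mt = 2.16e-3, 1.27, 172.7
md, ms, mb = 4.67e-3, 9.34e-2, 4.18
ETA = -1.0

def O_matrix(A, lam, eta):
    """Orthogonal matrix of Eq. (9); lam = (l1, l2, l3) are the eigenvalues."""
    l1, l2, l3 = lam
    d21, d31, d32 = l2 - l1, l3 - l1, l3 - l2
    return np.array([
        [np.sqrt(l2*l3*(A - l1) / (A*d21*d31)),
         eta*np.sqrt(l1*l3*(l2 - A) / (A*d21*d32)),
         np.sqrt(l1*l2*(A - l3) / (A*d31*d32))],
        [eta*np.sqrt(l1*(l1 - A) / (d21*d31)),
         np.sqrt(l2*(A - l2) / (d21*d32)),
         np.sqrt(l3*(l3 - A) / (d31*d32))],
        [eta*np.sqrt(l1*(A - l2)*(A - l3) / (A*d21*d31)),
         np.sqrt(l2*(A - l1)*(l3 - A) / (A*d21*d32)),
         np.sqrt(l3*(A - l1)*(A - l2) / (A*d31*d32))]])

def chi2(X):
    """Eq. (17) with X = (A_u, A_d, phi_1, phi_2)."""
    Au, Ad, p1, p2 = X
    Ou = O_matrix(Au, (ETA*mu, -ETA*mc, mt), ETA)   # Eq. (10)
    Od = O_matrix(Ad, (ETA*md, -ETA*ms, mb), ETA)   # Eq. (11)
    phases = np.array([1.0, np.exp(1j*p1), np.exp(1j*(p1 + p2))])
    V = np.einsum('k,ki,kj->ij', phases, Ou, Od)    # Eq. (14)
    Jth = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
    terms = [(abs(V[0, 1]) - EXP['Vus'][0]) / EXP['Vus'][1],
             (abs(V[0, 2]) - EXP['Vub'][0]) / EXP['Vub'][1],
             (abs(V[1, 2]) - EXP['Vcb'][0]) / EXP['Vcb'][1],
             (Jth - EXP['J'][0]) / EXP['J'][1]]
    return float(sum(x*x for x in terms))

# A_u in (m_c, m_t), A_d in (m_s, m_b), phases in [0, 2*pi)
print(chi2((100.0, 2.0, 1.6, 5.0)))
```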
Although at first sight the function $\chi^2(X)$ has a simple structure and the number of free parameters is small, the difficulty of finding the numerical ranges that fulfill the condition given in Equation (16) comes from the cumbersome composition of functions that constitute it. To give a notion of the topographic relief of this function, projections of $\chi^2(X)$ onto different planes are shown in Figure 1. Figure 1a shows the projection onto the $(A_u, A_d)$ plane (with fixed values $\phi_1 = c_1 = \mathrm{const}$ and $\phi_2 = c_2 = \mathrm{const}$); that is, the dependence of $\chi^2(X)$ on $A_u$ and $A_d$ is shown (right graph) together with the corresponding contour lines (left graph). Similarly, Figure 1b corresponds to the projection onto $(A_u, \phi_1)$ (setting $A_d = c_3 = \mathrm{const}$ and $\phi_2 = c_4 = \mathrm{const}$). Analogously, we show the behavior of $\chi^2(X)$ in the $(A_u, \phi_2)$ plane (Figure 1c), the $(A_d, \phi_1)$ plane (Figure 1d), the $(A_d, \phi_2)$ plane (Figure 1e), and the $(\phi_1, \phi_2)$ plane (Figure 1f).
The following color code is used: regions toward intense red correspond to larger values of $\chi^2(X)$, while regions toward intense blue correspond to smaller values. We point out that the function to optimize, $\chi^2(X): \mathbb{R}^4 \to \mathbb{R}$, is defined in Equation (17), with the variables $A_u$, $A_d$, $\phi_1$, and $\phi_2$ bounded by the intervals $[m_c, m_t]$, $[m_s, m_b]$, $[0, 2\pi]$, and $[0, 2\pi]$, respectively. As can be observed from the graphs in Figure 1, the landscape to be optimized is rugged, which motivates exploring alternative optimization techniques such as bio-inspired algorithms.

3. Review of PSO, DE, and DEPSO

3.1. Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm [47] is one of the most popular swarm intelligence-based algorithms. It models the behavior of groups of individuals, such as flocks, to solve complex optimization problems in $D$-dimensional space. The algorithm searches for the global optimum using a swarm of $NP$ particles. For the $i$-th particle of the swarm $X_{i,t}$ ($i = 1, 2, \ldots, NP$), its position and velocity at iteration $t$ are represented by $X_{i,t} = \{x_{i,t}^1, x_{i,t}^2, \ldots, x_{i,t}^D\}$ and $V_{i,t} = \{v_{i,t}^1, v_{i,t}^2, \ldots, v_{i,t}^D\}$, respectively. At each iteration, the position and velocity of each particle are updated, taking into account the best solution found by the swarm, $X_{gbest,t}$, and the best solution found individually by each particle, $X_{pbest,t}$. In the standard version of the PSO algorithm, the position and velocity of each particle are updated according to the following expressions [48]:
$$V_{i,t+1} = \omega \cdot V_{i,t} + c_1 \cdot r_1 \cdot (X_{gbest,t} - X_{i,t}) + c_2 \cdot r_2 \cdot (X_{pbest,t} - X_{i,t})$$
$$X_{i,t+1} = X_{i,t} + V_{i,t+1}$$
where $c_1$ and $c_2$ are the social and cognitive coefficients, respectively, which commonly take the values $c_1 = c_2 = 2$; $r_1$ and $r_2$ are two values randomly selected within the interval $[0, 1]$; and $\omega$ is the inertia parameter, which aims to provide a better balance between global and local search. The inertia parameter $\omega$ is updated at each iteration as follows:
$$\omega = \omega_{max} - \frac{t \cdot (\omega_{max} - \omega_{min})}{t_{max}}$$
where $t$ refers to the current iteration, $t_{max}$ is the maximum number of iterations, and $\omega_{min}$ and $\omega_{max}$ are the minimum and maximum values of the inertia factor, generally set to $\omega_{min} = 0.4$ and $\omega_{max} = 0.9$.
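A minimal sketch of one PSO iteration, implementing Equations (18)–(20) as written above, is shown below; the sphere function and all numeric settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(X, V, pbest, gbest, t, t_max,
             c1=2.0, c2=2.0, w_min=0.4, w_max=0.9):
    """One PSO iteration on (NP, D) arrays, following Eqs. (18)-(20)."""
    w = w_max - t * (w_max - w_min) / t_max               # Eq. (20)
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V_new = w*V + c1*r1*(gbest - X) + c2*r2*(pbest - X)   # Eq. (18)
    return X + V_new, V_new                               # Eq. (19)

# Usage: minimize the sphere function in D = 4
NP, D, t_max = 20, 4, 100
f = lambda P: np.sum(P**2, axis=1)
X = rng.uniform(-5, 5, (NP, D))
V = np.zeros((NP, D))
pbest, pval = X.copy(), f(X)
for t in range(t_max):
    gbest = pbest[np.argmin(pval)]
    X, V = pso_step(X, V, pbest, gbest, t, t_max)
    fx = f(X)
    better = fx < pval
    pbest[better], pval[better] = X[better], fx[better]
print(pval.min())
```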

3.2. Differential Evolution

The DE algorithm proposed by Storn and Price [49,50] is one of the most representative evolutionary algorithms, which, due to its ease of implementation, effectiveness, and robustness, has been widely used in the solution of several complex optimization problems [51,52]. This algorithm makes use of a population P constituted from N P individuals or parameter vectors:
$$P_t = \{X_{1,t}, X_{2,t}, \ldots, X_{NP,t}\}$$
where $t$ refers to the current iteration and each individual $X_{i,t} = \{x_{i,t}^1, x_{i,t}^2, \ldots, x_{i,t}^D\}$ has $D$ dimensions. At the beginning of the execution of this algorithm, the population is initialized randomly within the search space of the problem.
The DE algorithm consists mainly of three operators: mutation, crossover, and selection. These operators allow the algorithm to perform the search for the global optimum. Mutation, crossover, and selection are applied consecutively to each individual X i , t of the population P t in each of the t iterations until a certain stopping criterion is satisfied. Within the mutation, in each iteration and for each individual, a mutated vector V i , t is generated by means of the information of the current population P t and the application of a mutation scheme. In the standard version of the DE algorithm, the mutated vector is generated following the DE/rand/1 mutation scheme, described as follows:
$$V_{i,t} = X_{r_1,t} + F \cdot (X_{r_2,t} - X_{r_3,t})$$
where the indices r 1 , r 2 , and r 3 are randomly selected within the range [ 1 , N P ] such that they are different from each other and different from the index i. F > 0 is the scaling factor and is one of the parameters of the algorithm. The value of parameter F is typically within the interval [ 0 , 1 ] .
The next stage within the DE algorithm consists of the crossover, in which a test vector U i , t is generated; that is, once the mutated vector V i , t is generated for the individual X i , t , the information crossover between the vector X i , t and the vector V i , t is performed. This crossover operation is performed consistently following the binomial crossover:
$$U_{i,t}^j = \begin{cases} V_{i,t}^j & \text{if } rand[0,1] < C_r \text{ or } j = j_{rand} \\ X_{i,t}^j & \text{otherwise} \end{cases}$$
where r a n d [ 0 , 1 ] is a number uniformly selected within the interval [ 0 , 1 ] and j rand is an index corresponding to a variable that is uniformly selected in the interval [ 1 , D ] . Here, C r [ 0 , 1 ] is known as the crossover probability, and like F, it is another parameter of the algorithm.
Once the corresponding test vector U i , t is generated for the individual X i , t , a selection procedure is carried out from which the population for iteration t + 1 is constructed. The standard way to perform this selection procedure consists of comparing the fit value of the test vector U i , t against the fit value of the individual X i , t , always keeping the individual with the best fit value. This procedure is performed as follows:
$$X_{i,t+1} = \begin{cases} U_{i,t} & \text{if } f(U_{i,t}) < f(X_{i,t}) \\ X_{i,t} & \text{otherwise} \end{cases}$$
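Putting Equations (22)–(24) together, one generation of the standard DE/rand/1/bin scheme can be sketched as follows; the sphere function is used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_generation(P, fP, f, F=0.5, Cr=0.9):
    """One DE/rand/1/bin generation over the (NP, D) population P."""
    NP, D = P.shape
    for i in range(NP):
        # Mutation, Eq. (22): three mutually distinct indices, all != i
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
        v = P[r1] + F * (P[r2] - P[r3])
        # Binomial crossover, Eq. (23)
        j_rand = rng.integers(D)
        mask = (rng.random(D) < Cr) | (np.arange(D) == j_rand)
        u = np.where(mask, v, P[i])
        # Greedy selection, Eq. (24)
        fu = f(u)
        if fu < fP[i]:
            P[i], fP[i] = u, fu
    return P, fP

# Usage: minimize the sphere function
f = lambda x: float(np.sum(x**2))
P = rng.uniform(-5, 5, (30, 4))
fP = np.array([f(x) for x in P])
for _ in range(200):
    P, fP = de_generation(P, fP, f)
print(fP.min())
```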

3.2.1. Improvements on DE Parameters Control

One of the components that influence the performance of the DE algorithm is the control of the F and C r parameters [32,53]. The standard version of the DE algorithm uses fixed values for these two parameters; however, the selection of these parameters can be done deterministically (following a certain rule that modifies these values after a certain number of iterations), adaptively (according to the feedback of the search process), and in a self-adaptive way (by means of the evolution information provided by the individuals of the population) [37]. One of the most representative variants concerning the improvement of the control of the parameters F and C r of the DE algorithm is proposed in the SHADE algorithm [32].
Precisely, in each iteration of the SHADE algorithm, the parameters $F$ and $C_r$ that are associated with test vectors with a better fit than their parent vector are stored in two sets, $S_F$ and $S_{Cr}$, respectively. Two archives or memories, $M_F$ and $M_{Cr}$, of size $H$, whose contents $M_{Cr,k}$ and $M_{F,k}$ ($k = 1, \ldots, H$) are initialized to $0.5$, serve to generate the parameters $F_i$ and $Cr_i$ of the $i$-th individual $X_{i,t}$ at each iteration $t$. Generating these values requires the uniform selection of an index $r_i \in [1, H]$ and is carried out according to the following expressions:
$$Cr_i = randn_i(M_{Cr,r_i}, 0.1)$$
$$F_i = randc_i(M_{F,r_i}, 0.1)$$
where $M_{Cr,r_i}$ is the element selected from $M_{Cr}$ to generate the value of $Cr_i$, and $M_{F,r_i}$ is the element selected from $M_F$ to generate the value of $F_i$; $randn_i(M_{Cr,r_i}, 0.1)$ represents a normal distribution with mean $M_{Cr,r_i}$ and standard deviation $0.1$, while $randc_i(M_{F,r_i}, 0.1)$ represents a Cauchy distribution with location parameter $M_{F,r_i}$ and scale factor $0.1$. When generating the value of $Cr_i$, it must be verified that it lies within the range $[0, 1]$: if $Cr_i < 0$ it is truncated to 0, and if $Cr_i > 1$ it is truncated to 1. For the parameter $F_i$, if $F_i > 1$ it is truncated to 1; if $F_i \le 0$, it is regenerated until $F_i > 0$.
At the end of each iteration, the contents of the M C r and M F memories are updated as follows:
$$M_{Cr,k,t+1} = \begin{cases} mean_{WL}(S_{Cr}) & \text{if } S_{Cr} \neq \emptyset \\ M_{Cr,k,t} & \text{otherwise} \end{cases}$$
$$M_{F,k,t+1} = \begin{cases} mean_{WL}(S_{F}) & \text{if } S_{F} \neq \emptyset \\ M_{F,k,t} & \text{otherwise} \end{cases}$$
where $mean_{WL}(S)$ is the weighted Lehmer mean defined by Equation (29), and $S$ refers to $S_{Cr}$ or $S_F$.
$$mean_{WL}(S) = \frac{\sum_{k=1}^{|S|} w_k \cdot S_k^2}{\sum_{k=1}^{|S|} w_k \cdot S_k}$$
$$w_k = \frac{\Delta f_k}{\sum_{l=1}^{|S|} \Delta f_l}$$
$$\Delta f_k = |f(X_{p,k,t}) - f(X_{k,t})|$$
In Equations (27) and (28), $k \in [1, H]$ determines the position in memory to be updated. At iteration $t$, the $k$-th element in memory is updated. At the beginning of the optimization process, $k = 1$, and it is incremented each time a new element is added to memory; if $k > H$, $k$ is reset to 1.
Due to the good performance obtained by the SHADE algorithm when solving optimization problems using this strategy, this paper takes advantage of the adaptive control of the parameters provided by the SHADE algorithm.
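A sketch of this parameter control (sampling per Equations (25) and (26), truncation as described above, and the memory update of Equations (27)–(31)) could look as follows; the module-level memories, the slot-index handling, and the demo values are illustrative scaffolding.

```python
import numpy as np

rng = np.random.default_rng(2)
H = 5
M_F = np.full(H, 0.5)    # memories initialized to 0.5
M_Cr = np.full(H, 0.5)

def sample_F_Cr():
    """Per-individual F_i and Cr_i, Eqs. (25)-(26), with truncation rules."""
    r = rng.integers(H)
    Cr = float(np.clip(rng.normal(M_Cr[r], 0.1), 0.0, 1.0))
    F = M_F[r] + 0.1 * rng.standard_cauchy()   # Cauchy(location, scale = 0.1)
    while F <= 0.0:                            # regenerate until positive
        F = M_F[r] + 0.1 * rng.standard_cauchy()
    return min(F, 1.0), Cr

def lehmer_mean(S, w):
    """Weighted Lehmer mean, Eq. (29), with weights from Eq. (30)."""
    S, w = np.asarray(S), np.asarray(w)
    return float(np.sum(w * S**2) / np.sum(w * S))

def update_memories(k, S_F, S_Cr, delta_f):
    """End-of-iteration update, Eqs. (27)-(28); returns the next slot index."""
    if S_F:                                    # successful parameters recorded
        w = np.asarray(delta_f) / np.sum(delta_f)
        M_F[k] = lehmer_mean(S_F, w)
        M_Cr[k] = lehmer_mean(S_Cr, w)
        k = (k + 1) % H                        # circular slot index
    return k

# Demo with made-up success records from one iteration
F_i, Cr_i = sample_F_Cr()
k = update_memories(0, S_F=[0.6, 0.8], S_Cr=[0.9, 0.5], delta_f=[0.2, 0.05])
print(F_i, Cr_i, M_F, M_Cr, k)
```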

3.2.2. DEPSO Algorithm

The self-adaptive differential evolution algorithm based on particle swarm optimization (DEPSO) [39] is a recent method that incorporates characteristics of the PSO algorithm within the structure of the DE algorithm for solving numerical optimization problems. In this algorithm, the top $\alpha \cdot NP$ ($\alpha \in [0.1, 0.9]$) best particles in the current iteration form an elite sub-swarm ($P$), while the rest form a non-elite sub-swarm ($Q$).
The DEPSO algorithm follows a scheme similar to the standard version of the DE algorithm, i.e., it uses mutation, crossover, and selection operators within the evolutionary process of the algorithm. Within the mutation, a mutation strategy is implemented that, in a self-adaptive way, selects between two mutation schemes to generate in each iteration a mutated vector V i , t + 1 for the i-th particle X i , t , as follows:
$$V_{i,t+1} = \begin{cases} X_{r_1,t}^P + F_i \cdot (X_{r_2,t}^P - X_{r_3,t}^Q) & \text{if } rand[0,1] < SP_t \\ \omega \cdot X_{i,t} + c_1 \cdot r_1 \cdot (X_{gbest,t} - X_{i,t}) + c_2 \cdot r_2 \cdot (X_{pbest,t} - X_{i,t}) & \text{otherwise} \end{cases}$$
where $rand[0,1]$ is a random number selected uniformly within the interval $[0, 1]$ and $SP_t$ represents the probability of selecting one of the two mutation schemes. In Equation (32), the case in which $rand[0,1] < SP_t$ represents the form in which a novel mutation scheme, denoted DE/e-rand/1, generates a mutated vector $V_{i,t+1}$. In this scheme, two solutions from the elite sub-swarm ($X_{r_1,t}^P$ and $X_{r_2,t}^P$) and one solution from the non-elite sub-swarm ($X_{r_3,t}^Q$) are required, and a scaling factor $F_i$ acting at the level of each particle is used. The remaining case represents the way in which the mutation scheme of the standard PSO algorithm is used to generate the mutated vector $V_{i,t+1}$.
The selection probability S P t of Equation (32) changes adaptively within the evolution of the algorithm, as follows:
$$SP_t = \frac{1}{1 + e^{\left(1 - \frac{t_{max}}{t+1}\right)\tau}}$$
where t m a x is the maximum number of iterations and t is the current iteration. τ is a positive constant.
Within the crossover operator, a test vector U i , t is generated by combining the information of the current particle X i , t and the mutated vector V i , t + 1 , following the binomial crossover (see Equation (23)). The selection of the surviving particle to the next iteration is carried out by the competition between the current particle X i , t and the test vector U i , t ; the one with the best fit value according to the objective function is selected to remain within the next iteration.
When a particle remains stagnant over a maximum number of iterations (e.g., 5), the values of its corresponding scale factor F i and crossover probability C r i are reset in order to increase diversity. The way in which the values of these parameters are reset is as follows:
$$F_{i,t+1} = \begin{cases} F_{i,t} & \text{if } NS_i < NS_{max} \\ F_l + rand[0,1] \cdot (F_u - F_l) & \text{otherwise} \end{cases}$$
$$Cr_{i,t+1} = \begin{cases} Cr_{i,t} & \text{if } NS_i < NS_{max} \\ Cr_l + rand[0,1] \cdot (Cr_u - Cr_l) & \text{otherwise} \end{cases}$$
where F i and C r i are the scaling factor and the crossover probability, respectively, of the particle X i , t at iteration t. F l = 0.1 and F u = 0.8 are the lower and upper bounds respectively of the scaling factor. C r l = 0.3 and C r u = 1.0 are the lower and upper bounds, respectively, of the crossover probability and r a n d [ 0 , 1 ] is a random number selected within the interval [ 0 , 1 ] . N S i is a stagnation counter for each particle and N S m a x is the maximum number of iterations with stagnation.
To avoid stagnation, the DEPSO algorithm randomly updates a sub-dimension of individuals within the non-elite population ( Q ) , in which N S i > N S m a x , to reset them as follows:
$$X_{i,t}^j = \begin{cases} X_{min}^j + rand[0,1] \cdot (X_{max}^j - X_{min}^j) & \text{if } rand[0,1]_j < \gamma, \; j = 1, 2, \ldots, D \\ X_{i,t}^j & \text{otherwise} \end{cases}$$
where r a n d [ 0 , 1 ] j is a random number selected within the interval [ 0 , 1 ] , γ is a probability fixed value, and X m i n j and X m a x j are the lower and upper limits, respectively, of the variable j.
In general, the DEPSO algorithm manages to be superior to other variants of the DE algorithm in different optimization problems. The performance of this algorithm is due to the fact that it maintains a good balance between exploration and exploitation, thanks to the use of the self-adaptive mutation strategy, in which the “DE/e-rand/1” scheme has better exploration abilities, while the mutation scheme of the PSO algorithm achieves better convergence abilities. With this strategy, the population manages to maintain a good diversity in the first stages of the evolutionary process and a faster convergence towards the last stages of the process.
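The following sketch condenses the self-adaptive choice of Equations (32) and (33); `elite` and `nonelite` stand for the sub-swarms P and Q stored as row arrays, and the fixed inertia and acceleration constants are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(3)

def selection_probability(t, t_max, tau=1.8):
    """Eq. (33): probability of picking DE/e-rand/1 over the PSO scheme."""
    return 1.0 / (1.0 + np.exp((1.0 - t_max / (t + 1)) * tau))

def depso_mutant(X_i, elite, nonelite, gbest, pbest_i, F_i, t, t_max,
                 w=0.7, c1=2.0, c2=2.0):
    """Eq. (32): mutated vector via DE/e-rand/1 or the PSO-style scheme."""
    if rng.random() < selection_probability(t, t_max):
        # DE/e-rand/1: two distinct elite members and one non-elite member
        x_r1, x_r2 = elite[rng.choice(len(elite), 2, replace=False)]
        x_r3 = nonelite[rng.integers(len(nonelite))]
        return x_r1 + F_i * (x_r2 - x_r3)
    # PSO-style mutant (fixed inertia w kept for brevity)
    r1, r2 = rng.random(X_i.shape), rng.random(X_i.shape)
    return w*X_i + c1*r1*(gbest - X_i) + c2*r2*(pbest_i - X_i)

# Usage on a toy population of 10 particles in D = 4
pop = rng.uniform(-1, 1, (10, 4))
elite, nonelite = pop[:3], pop[3:]          # e.g., alpha = 0.3
v = depso_mutant(pop[5], elite, nonelite, gbest=pop[0], pbest_i=pop[5],
                 F_i=0.5, t=10, t_max=100)
print(v)
```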

4. Proposed HE-DEPSO Algorithm

Despite the development of several advanced versions of the DE algorithm in recent years, its performance still needs to improve in optimization problems with multiple local minima. To address these issues, the design of effective mutation operators and parameter control are two key aspects to improve the performance of the DE algorithm. In the proposed HE-DEPSO algorithm, an adaptive hybrid mutation operator is developed, which takes the self-adaptive mutation strategy from the DEPSO algorithm [39] as a basis and incorporates the parameter control mechanism from the SHADE algorithm [32]. This adaptive hybrid mutation operator introduces a new mutation strategy called “DE/current-to-EHE/1”, which utilizes the historical information of the elite individuals in the population to enhance the optimization of the χ 2 function.

4.1. Adaptive Hybrid Mutation Operator

In order to achieve a better balance between exploration and exploitation, the mutation operator of the HE-DEPSO algorithm adopts a dual mechanism that adaptively selects between two mutation strategies. First, a new mutation strategy called "DE/current-to-EHE/1" is presented, which is oriented to improve the balance between exploration and exploitation within the early stages of the evolutionary process. On the other hand, to improve the exploitation capacity within the more advanced stages of the evolutionary process, the mutation scheme of the PSO algorithm (Equation (18)) is incorporated in a way similar to the self-adaptive mutation strategy of the DEPSO algorithm (Equation (32)).
Unlike the “DE/e-rand/1” scheme used within the self-adaptive mutation strategy of the DEPSO algorithm, in which only the information of the individuals of the current iteration is used, the “DE/current-to-EHE/1” strategy uses the historical information of the elite and obsolete individuals of the evolutionary process to improve the exploration capability of the algorithm. This strategy is described below.

4.1.1. DE/Current-to-EHE/1 Mutation Strategy

Within evolutionary algorithms, the best individuals in the population, also known as elite individuals, retain valuable evolutionary information to guide the population to promising regions [39,54,55,56]. However, many proposals make use only of the information from the elite individuals of the current iteration, completely discarding the information from previous elite individuals. This absence of information in subsequent iterations may limit the ability of new individuals to explore the search space. For this reason, historical evolution information is used here to improve the exploration ability of new individuals.
Before applying the mutation operator, all individuals in the current population $P_t$ are sorted in ascending order based on their fitness values. After reordering the population, two partitions of individuals are created. The first, denoted by $E_t$, consists of the top $p_b\% \in [0.1, 1]$ best individuals, the elite individuals. The second, denoted by $NE_t$, groups the $NP - NP \cdot p_b\%$ non-elite individuals. It is important to note that $E_t \cup NE_t = P_t$.
Unlike other mutation strategies [36,39,57], in which only the elite individuals of the current iteration are used, this strategy makes use of the evolution history by incorporating two external archives of size $NP$. The first archive, denoted by $HE$, stores at each iteration the elite individuals belonging to the $E_t$ partition; if the size of $HE$ exceeds $NP$, it is readjusted. The second archive, denoted by $HL$, stores the obsolete individuals (those discarded in the selection process) and is updated at each iteration; similarly, if the size of $HL$ exceeds $NP$, it is readjusted. With the individuals belonging to $E_t$ and $HE$, a group of candidates $E_t \cup HE$ can be formed to mutate individuals in the population. On the other hand, the individuals of $NE_t$ and $HL$ make up a group $NE_t \cup HL$, whose information contributes to the exploration of the search space. A mutated vector is generated by the "DE/current-to-EHE/1" strategy as follows:
$$V_{i,t} = X_{i,t} + F_i \cdot (X_{Er,t} - X_{i,t}) + F_i \cdot (X_{Pr,t} - X_{HLr,t})$$
where $X_{i,t}$ is the $i$-th individual at iteration $t$, $X_{Er,t}$ is an individual randomly selected from the group $E_t \cup HE$, $X_{Pr,t}$ is an individual randomly selected from the current population $P_t$, $X_{HLr,t}$ is an individual randomly selected from the group $NE_t \cup HL$, and $F_i$ is the scaling factor corresponding to the $i$-th individual.
Following Equation (37), it is possible to observe that the “DE/current-to-EHE/1” strategy can help the HE-DEPSO algorithm to maintain a good exploration capability and direct the mutated individuals to promising regions without leading to stagnation in local minima. This can be explained as follows:
  • The use of a randomly selected individual from the group $E_t \cup HE$, i.e., $X_{Er,t}$, helps to guide mutated individuals to more promising regions. At the same time, due to the presence of the historical information of the elite individuals ($HE$), the mutated individuals are prevented from being directed solely towards the best regions found so far, thus maintaining good diversity in the population and increasing the chances of reaching optimal regions.
  • The participation of two individuals, $X_{Pr,t}$ and $X_{HLr,t}$, randomly selected from $P_t$ and $NE_t \cup HL$, respectively, promotes the diversity of mutated individuals. Consequently, the search diversity of the HE-DEPSO algorithm is considerably improved, which is beneficial for escaping from possible local minima.
At the beginning of the execution of the HE-DEPSO algorithm, the archives HE and HL are defined as empty. Through the evolutionary process, they store the elite and obsolete individuals, respectively. When analyzing the effect of the parameter p b % , which determines the percentage of individuals within the participation of Et, it was found that values close to 0.9 promote the participation of a more significant number of elite individuals, causing a greater diversity in the population. However, this diversity increase may affect the HE-DEPSO algorithm’s convergence capability. On the other hand, values close to 0.1 may restrict the number of elite individuals to be considered, which improves the convergence of the algorithm but may cause stagnation at local minima. Due to the above, within the HE-DEPSO algorithm we choose to update in each iteration the value of p b % following a linear decrement, given as follows:
$$p_b\% = p_{max} - \frac{t \cdot (p_{max} - p_{min})}{t_{max}}$$
where t refers to the current iteration, t m a x is the maximum number of iterations, and p m i n and p m a x are the minimum and maximum values of the interval assigned for the percentage of individuals within the partition Et. In this work, we have chosen to use the values p m a x = 0.4 and p m i n = 0.1 . In this way, the sensitivity of the p b % parameter is reduced and, at the same time, a good balance between exploitation and exploration is maintained.
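Combining Equations (37) and (38), the mutant generation of "DE/current-to-EHE/1" with the two historical archives can be sketched as follows; HE and HL are assumed to be kept trimmed to at most NP entries by the caller, as described above.

```python
import numpy as np

rng = np.random.default_rng(4)

def pb_percent(t, t_max, p_min=0.1, p_max=0.4):
    """Eq. (38): linearly decreasing elite fraction."""
    return p_max - t * (p_max - p_min) / t_max

def current_to_EHE_1(i, P, fitness, HE, HL, F_i, t, t_max):
    """Eq. (37): mutant for individual i; HE and HL are lists of archived rows."""
    NP = len(P)
    order = np.argsort(fitness)                        # ascending fitness
    n_e = max(1, int(round(pb_percent(t, t_max) * NP)))
    pool_E = list(P[order[:n_e]]) + HE                 # Et U HE
    pool_L = list(P[order[n_e:]]) + HL                 # NEt U HL
    x_E = pool_E[rng.integers(len(pool_E))]
    x_P = P[rng.integers(NP)]
    x_HL = pool_L[rng.integers(len(pool_L))]
    return P[i] + F_i * (x_E - P[i]) + F_i * (x_P - x_HL)

# Usage on a toy population (archives start empty, as in the algorithm)
pop = rng.uniform(-1, 1, (20, 4))
fit = np.sum(pop**2, axis=1)
v = current_to_EHE_1(0, pop, fit, HE=[], HL=[], F_i=0.5, t=0, t_max=100)
print(v)
```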

4.1.2. Selection of Mutation Strategy for Adaptive Hybrid Mutation Operator

The adaptive hybrid mutation operator implemented in the HE-DEPSO algorithm uses two mutation strategies that aim to improve the exploration and exploitation abilities of the HE-DEPSO algorithm at different stages of the evolutionary process. Concretely, for each individual X i , t in the population, this operator generates a mutated vector V i , t as follows:    
$$V_{i,t} = \begin{cases} X_{i,t} + F_i \cdot (X_{Er,t} - X_{i,t}) + F_i \cdot (X_{Pr,t} - X_{HLr,t}) & \text{if } rand[0,1] < \alpha_t \\ \omega \cdot X_{i,t} + c_1 \cdot r_1 \cdot (X_{gbest,t} - X_{i,t}) + c_2 \cdot r_2 \cdot (X_{pbest,t} - X_{i,t}) & \text{otherwise} \end{cases}$$
where r a n d [ 0 , 1 ] is a random number selected uniformly within the interval [ 0 , 1 ] and α t represents the probability of selecting between the “DE/current-to-EHE/1” mutation strategy and the adopted mutation strategy of the PSO algorithm (Equation (18)).
The selection probability $\alpha_t$ is updated at each iteration according to Equation (33), taking a value of $\tau = 1.8$. Figure 2 shows the mutation strategy selection curve followed by the adaptive hybrid mutation operator of the HE-DEPSO algorithm. The probability selection curve described by Equation (33) with $\tau = 1.8$, for a maximum number of iterations $t_{max}$, is illustrated in red in Figure 2a. Figure 2b shows, as an example, a possible distribution of the occurrences in which the "DE/current-to-EHE/1" mutation strategy (blue squares) or the strategy adopted from the PSO algorithm (magenta circles) is selected. According to these graphs, within the first stages of evolution of the HE-DEPSO algorithm, the selection probability $\alpha_t$ tends towards values close to 1, which increases the probability that the "DE/current-to-EHE/1" strategy is selected and its balance between exploration and exploitation is taken advantage of. Towards the advanced stages of evolution, the strategy adopted from the PSO algorithm is selected more frequently; this implies that, during these stages, the HE-DEPSO algorithm increases its exploitation ability due to more frequent use of the information of the best position of each individual, as well as the position of the best individual in the population.

4.2. The Complete HE-DEPSO Algorithm

The HE-DEPSO algorithm follows the usual structure of the DE algorithm in general, in which we have the stages of mutation, crossover, and selection. After generating a mutated vector V i , t for the i-th individual of the population, as described in the previous section, we proceed to generate a test vector U i , t for the individual X i , t using the binomial crossover (Equation (23)). In order to reduce the sensitivity of the F and C r parameter controls, the HE-DEPSO algorithm takes advantage of the individual-level adaptive parameter control of the SHADE algorithm to generate the parameter configuration of F and C r for each individual. Finally, in the selection stage, the individuals that would make up the population for the next iteration are identified; this is done according to Equation (24). Based on the above description, the pseudocode of the complete HE-DEPSO is reported in Algorithm 1.
Algorithm 1: The pseudocode of HE-DEPSO
(Algorithm 1 is reproduced as an image in the published article.)
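Since Algorithm 1 is available only as an image, the following self-contained Python sketch reconstructs the main loop from the description in Sections 3 and 4: hybrid mutation (Equation (39)), binomial crossover (Equation (23)), SHADE-style parameter control (Equations (25)–(31)), and greedy selection (Equation (24)). It is a hedged reading of the text, not the authors' reference implementation; in particular, the stagnation-reset mechanism of DEPSO is omitted, and with greedy selection each individual coincides with its own personal best.

```python
import numpy as np

rng = np.random.default_rng(5)

def he_depso(f, bounds, NP=100, t_max=1000, H=5, tau=1.8,
             c1=2.0, c2=2.0, w_min=0.4, w_max=0.9, p_min=0.1, p_max=0.4):
    """Sketch of the HE-DEPSO main loop (see the caveats in the text above)."""
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    D = len(bounds)
    X = lo + rng.random((NP, D)) * (hi - lo)
    fX = np.array([f(x) for x in X])
    M_F, M_Cr, k = np.full(H, 0.5), np.full(H, 0.5), 0
    HE, HL = [], []                                    # elite / obsolete archives
    for t in range(t_max):
        w = w_max - t * (w_max - w_min) / t_max        # Eq. (20)
        alpha = 1.0 / (1.0 + np.exp((1.0 - t_max / (t + 1)) * tau))  # Eq. (33)
        order = np.argsort(fX)
        n_e = max(1, int((p_max - t * (p_max - p_min) / t_max) * NP))  # Eq. (38)
        elite_rows = X[order[:n_e]]                    # fancy indexing copies
        nonelite_rows = X[order[n_e:]]
        HE.extend(elite_rows); HE[:] = HE[-NP:]        # trim archive to NP
        gbest = X[order[0]].copy()
        S_F, S_Cr, dF = [], [], []
        for i in range(NP):
            r = rng.integers(H)                        # Eqs. (25)-(26)
            Cr = float(np.clip(rng.normal(M_Cr[r], 0.1), 0.0, 1.0))
            F = M_F[r] + 0.1 * rng.standard_cauchy()
            while F <= 0.0:
                F = M_F[r] + 0.1 * rng.standard_cauchy()
            F = min(F, 1.0)
            if rng.random() < alpha:                   # DE/current-to-EHE/1, Eq. (37)
                pool_E = list(elite_rows) + HE
                pool_L = list(nonelite_rows) + HL
                x_E = pool_E[rng.integers(len(pool_E))]
                x_P = X[rng.integers(NP)]
                x_L = pool_L[rng.integers(len(pool_L))]
                v = X[i] + F * (x_E - X[i]) + F * (x_P - x_L)
            else:                                      # PSO-style mutant, Eq. (18);
                r1 = rng.random(D)                     # the pbest term vanishes
                v = w*X[i] + c1*r1*(gbest - X[i])      # since X[i] is its own pbest
            j_rand = rng.integers(D)                   # binomial crossover, Eq. (23)
            mask = (rng.random(D) < Cr) | (np.arange(D) == j_rand)
            u = np.clip(np.where(mask, v, X[i]), lo, hi)
            fu = f(u)
            if fu < fX[i]:                             # greedy selection, Eq. (24)
                HL.append(X[i].copy()); HL[:] = HL[-NP:]
                S_F.append(F); S_Cr.append(Cr); dF.append(fX[i] - fu)
                X[i], fX[i] = u, fu
        if S_F:                                        # memory update, Eqs. (27)-(31)
            wgt = np.array(dF) / np.sum(dF)
            sf, scr = np.array(S_F), np.array(S_Cr)
            M_F[k] = np.sum(wgt * sf**2) / np.sum(wgt * sf)
            M_Cr[k] = np.sum(wgt * scr**2) / max(np.sum(wgt * scr), 1e-12)
            k = (k + 1) % H
    b = int(np.argmin(fX))
    return X[b], float(fX[b])

# Usage: minimize a 4-D sphere function
xb, fb = he_depso(lambda x: float(np.sum(x**2)), [(-100, 100)] * 4, NP=30, t_max=200)
print(fb)
```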

4.3. Complexity Analysis of the HE-DEPSO Algorithm

The logical operations of HE-DEPSO are simple, so its computational complexity is low. The analysis uses big-$O$ notation. The initialization of the $NP$ individuals has a complexity of $O(NP \cdot D)$. The evaluation of the cost function is $O(NP)$, and the sorting of the $NP$ individuals has a complexity of $O(NP \log NP)$. Updating each individual depends on the mutation and crossover operations, each with complexity $O(D)$, so updating the population is $O(NP(1 + \log NP + 2D))$. Thus, the total complexity of HE-DEPSO is $O(t_{max} \cdot NP(1 + \log NP + 2D))$. This is similar to the computational cost of recent DE variants (CoDE, SHADE, and DEPSO) proposed for global optimization; therefore, HE-DEPSO has execution times similar to these variants [30,32,39].

5. Experimental Results and Analysis

This section verifies the performance of the proposed HE-DEPSO algorithm in solving different optimization problems. First, the set of test problems used is presented, then the algorithms selected to carry out comparisons are presented, as well as the configuration of their parameters. Then, the performance of the HE-DEPSO algorithm is compared against the selected algorithms within the test problem set. Finally, a discussion of the results obtained is presented.

5.1. Benchmark Functions

In order to validate the performance of the HE-DEPSO algorithm in solving different optimization problems, the set of single-objective test functions with boundary constraints CEC 2017 [58] was used. This set consists of 29 test functions whose global optimum is known: two unimodal functions (F1 and F3), seven simple multimodal functions (F4–F10), ten hybrid functions (F11–F20), and ten composite functions (F21–F30). In all these optimization problems, we seek the global minimum ($F_{min}$) within the search space bounded in each dimension ($D$) by the interval $[-100, 100]$. Information about this set of test functions is briefly presented in Table 2.

5.2. Algorithms and Parameter Settings

In this subsection, HE-DEPSO is compared with five algorithms, including the PSO and DE algorithms, as well as three representative and advanced variants of the DE algorithm, named CoDE [30], SHADE [32], and DEPSO [39]. Among these algorithms, CoDE and DEPSO incorporate modifications mainly within the mutation operator of the DE algorithm, while SHADE modifies the control of the F and C r parameters. The performance comparison of the HE-DEPSO algorithm with the previously mentioned algorithms was performed in two dimensions, D = 10 and D = 30 , on the CEC 2017 test function set.
In order to ensure a fair comparison, the parameter settings in common to all algorithms were assigned identically: the maximum number of iterations, t m a x , was set to 1000, the population size, N P , to 100, and 31 independent runs were performed. The configuration of each algorithm’s parameters followed the values suggested by their respective authors, as presented in Table 3. The proposed HE-DEPSO algorithm was implemented in Python version 3.11, and all algorithms were executed on the same computer with 8 GB of RAM and a six-core 3.6 GHz processor.

5.3. Comparison with DE, PSO, and Three State-of-the-Art DE Variants

In order to evaluate the performance of the HE-DEPSO algorithm, its optimization results and convergence properties are compared and analyzed with respect to those obtained by the algorithms DE, PSO, CoDE, SHADE, and DEPSO, on the set of CEC 2017 test functions.
Table 4 and Table 5 report the statistical results for each algorithm at $D = 10$ and $D = 30$, respectively. The solution error measure $|F_{min} - F(X^*)|$, where $F_{min}$ is the known solution of the problem and $X^*$ is the best solution found by each algorithm, was used to obtain these results. These tables show the mean and standard deviation (Std) of the solution error measure for each function and algorithm over the 31 independent runs, and the ranking achieved based on the mean value obtained. The best results are highlighted in bold.
A non-parametric Wilcoxon rank sum test, with a significance level of α = 0.05, was performed to identify statistically significant differences between the results obtained by the HE-DEPSO algorithm and those obtained by the other algorithms. In these tables, the statistical significance related to the performance of the HE-DEPSO algorithm is represented by the symbols "+", "≈", and "−", which indicate that the performance of the HE-DEPSO algorithm is better, similar, or worse than that of the algorithm to which it is compared. The row "W/T/L" counts the total number of "+", "≈", and "−", respectively.
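The per-function statistical comparison described above can be reproduced with SciPy's rank-sum test; the sketch below uses synthetic error samples in place of the actual 31-run results, so the printed mark is only illustrative.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(6)

# Stand-ins for the per-run solution errors |F_min - F(X*)| of two algorithms
errs_hedepso = np.abs(rng.normal(0.0, 1e-3, 31))
errs_other = np.abs(rng.normal(5e-3, 1e-3, 31))

stat, p = ranksums(errs_hedepso, errs_other)
if p < 0.05:   # significance level alpha = 0.05
    mark = '+' if errs_hedepso.mean() < errs_other.mean() else '-'
else:
    mark = '~'  # no statistically significant difference
print(f"p = {p:.3g} -> {mark}")
```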

5.3.1. Optimization Results

According to the results reported in Table 4, for D = 10 the proposed HE-DEPSO algorithm obtains the global optimal solution for functions F1–F4, F6, and F9. For the unimodal function F3, DEPSO and SHADE also obtain the global optimal solution. On the other hand, SHADE, CoDE, and DE obtain the global optimal solution in the simple multimodal function F9. For functions F5, F7–F9, F11, F13–F20, F23, and F30, the HE-DEPSO algorithm achieves the best result among all algorithms. DEPSO is the best in functions F12, F21, F24, and F27–F29. SHADE obtains the best results on function F10, while CoDE obtains the best results on functions F22, F25, and F26. PSO does not show superior results in any function. The results of this table indicate that, for these tests, the proposed HE-DEPSO algorithm obtains the best ranking according to the mean value among all the algorithms. Moreover, the results of the Wilcoxon rank sum test show that HE-DEPSO is superior to DEPSO, SHADE, CoDE, DE, and PSO in 17, 20, 24, 25, and 29 functions, respectively, out of a total of 29 test functions.
For D = 30, the statistical results presented in Table 5 show that the HE-DEPSO algorithm achieves the best results on functions F1–F5, F7–F8, F10–F18, F20–F24, F26–F27, and F29. On the other hand, DEPSO, SHADE, CoDE, and DE also achieve the best results in function F27. DEPSO is the best in functions F6, F9, F25, F28, and F30, while SHADE is the best in function F19 compared to the rest of the algorithms. With these results, it is possible to identify that HE-DEPSO obtains the best ranking among all the algorithms for these tests. According to the Wilcoxon rank sum test results, among the 29 test functions, HE-DEPSO is better than DEPSO, SHADE, CoDE, DE, and PSO in 19, 20, 26, 25, and 27 functions, respectively.

5.3.2. Convergence Properties

The convergence properties can be summarized into four types, which are represented by the graphs presented in Figure 3 and Figure 4.
  • The convergence properties of functions F1–F4, F6, and F9 can be identified within the same class, as can be observed in Figure 3a–d. In this type, the HE-DEPSO algorithm shows faster convergence than the other algorithms at D = 10, while at D = 30 it obtains a better mean error value.
  • In Figure 3e–h, the convergence curves of functions F5 and F8 are presented, which are similar to the corresponding curves for functions F10, F16, F17, and F20. All algorithms show different degrees of evolutionary stagnation or exhibit slow evolution for these functions.
  • The convergence curves of functions F12–F15, F18–F19, F23, and F29–F30 are similar to the curves of functions F7 and F24 (see Figure 4a–d) obtained for the HE-DEPSO algorithm. In this type, HE-DEPSO presents fast convergence at the beginning and subsequently keeps evolving downward.
  • The evolutionary process in the convergence curves of functions F22 and F27 follows the same trend as that of functions F25 and F28, as can be seen in Figure 4e–h. Here, most of the algorithms stagnate quickly, although some of them manage to keep evolving downward.
The experiments performed above prove the superiority of the proposed HE-DEPSO algorithm in solving different optimization problems. The reasons why the HE-DEPSO algorithm obtains such superior performance can be summarized as follows:
  • The adaptive hybrid mutation operator implemented in HE-DEPSO is based on the self-adaptive mutation strategy of the DEPSO algorithm, which has proven to be helpful in tackling several complex optimization problems. On the other hand, the collaborative work between the “DE/current-to-EHE/1” mutation strategy with historical information of the elite individuals and the PSO algorithm strategy generates a good balance between exploration and exploitation at different stages of the evolutionary process.
  • The self-adaptive control of the F and C r parameters adopted from the SHADE algorithm allows the mitigation of parameter sensitivity. In this way, the crossover probability and the scale factor are dynamically adjusted during the evolutionary process at the level of each individual, which can make the proposed algorithm suitable for a wider variety of optimization problems.

6. Four-Zero Texture Model Validation

In order to verify the physical feasibility of the 4-zero texture model, it is essential to find sets of numerical values for the parameters $A_u$, $A_d$, $\phi_1$, and $\phi_2$ that minimize the $\chi^2(X)$ function to a value of less than one (see Equation (16)), and also to verify that the numerical values of all the $|V_{CKM}|$ elements predicted by the 4-zero texture model (evaluated at these sets of values) agree with the experimental data. Firstly, an analysis of the HE-DEPSO algorithm's behavior when optimizing the $\chi^2(X)$ function will be conducted. The study will assess the algorithm's optimization, convergence, and stability properties, providing insights into its performance on the $\chi^2$ optimization problem.

6.1. HE-DEPSO Performance in χ 2 Optimization

The given optimization problem seeks to minimize the function $\chi^2(X)$ given by Equation (17). The search space is constrained by the possible signs of the parameters $\eta_u$ and $\eta_d$, as shown in Table 1. The phases $\phi_1$ and $\phi_2$ can take values between 0 and $2\pi$, while the ranges of the parameters $A_u$ and $A_d$ depend on the signs of $\eta_u$ and $\eta_d$. Specifically, considering case of study 1, $A_u$ can take values in the interval $[m_c, m_t]$, while $A_d$ can take values in the interval $[m_s, m_b]$.
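In code, the case-1 search space reduces to box constraints on $X = (A_u, A_d, \phi_1, \phi_2)$; the mass values below are placeholder central values standing in for the Appendix A data.

```python
import numpy as np

# Placeholder quark masses (GeV)
mc, mt = 1.27, 172.7
ms, mb = 9.34e-2, 4.18

# Case 1: X = (A_u, A_d, phi_1, phi_2)
bounds = [(mc, mt),          # A_u
          (ms, mb),          # A_d
          (0.0, 2*np.pi),    # phi_1
          (0.0, 2*np.pi)]    # phi_2
```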

6.1.1. Experiment Configuration

Through an experiment, an evaluation of the performance of the HE-DEPSO algorithm in the optimization of the χ 2 ( X ) function will be carried out in order to determine its ability to find the global minimum. This study will be carried out in the search space defined in case 1 previously mentioned. The performance of HE-DEPSO will be compared with the PSO and DE algorithms, as well as with advanced variants of the DE algorithm, such as CoDE, SHADE, and DEPSO. To ensure a fair comparison, standard parameters have been set for all algorithms ( t m a x = 1000 and N P = 100 ) and 31 independent runs have been performed. The settings of the other parameters of each algorithm have been made following the indications in Table 3.

6.1.2. Experiment Results

To evaluate the effectiveness of the proposed HE-DEPSO algorithm, its optimization results on the χ 2 ( X ) function were compared with those of the DE and PSO algorithms and three advanced DE variants. Using an approach similar to that described in Section 5, Table 6 presents the statistical data for each algorithm.
According to the data presented in Table 6, the HE-DEPSO algorithm obtains the best results in optimizing the χ 2 ( X ) function, outperforming DEPSO. Although DEPSO comes close to the results of HE-DEPSO, it does not match its accuracy. The DE algorithm also shows acceptable accuracy, although lower than that achieved by HE-DEPSO and DEPSO, while the SHADE, CoDE, and PSO algorithms obtain the poorest results. In terms of average ranking, HE-DEPSO leads all the analyzed algorithms. Furthermore, the results of the Wilcoxon test confirm the superiority of HE-DEPSO over DEPSO, SHADE, CoDE, DE, and PSO in the optimization of the χ 2 ( X ) function; a minimal sketch of this test is shown below.
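The pairwise comparison reported here is a Wilcoxon signed-rank test over the per-run best values; a minimal sketch with SciPy (the two arrays are random stand-ins for the 31 per-run bests of two algorithms, not the paper's data):

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(1)
    best_hedepso = rng.uniform(1e-28, 5e-28, size=31)  # stand-in data
    best_depso = rng.uniform(2e-28, 9e-28, size=31)    # stand-in data

    stat, p = wilcoxon(best_hedepso, best_depso)
    print(f"W = {stat:.1f}, p = {p:.4g}")  # a small p indicates a significant difference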
Figure 5 presents the curves obtained over 31 independent runs by the HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO algorithms. The HE-DEPSO algorithm exhibits excellent convergence behavior, consistently outperforming SHADE, CoDE, DE, and PSO and achieving a lower mean error value. Although DEPSO converges faster from roughly iteration 300 onward, HE-DEPSO stands out for its consistency and exploration capability, which may explain its more extended convergence compared to DEPSO.
In Figure 6, box and whisker plots summarize the best global fits to the chi-square function obtained by the HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO algorithms over 31 independent runs. These graphs illustrate the distribution of these values by quartiles, with the median highlighted by a red horizontal line within a blue box. The box boundaries correspond to the upper (Q3) and lower (Q1) quartiles. Lines extending from the box indicate the maximum and minimum values, excluding outliers, which are represented by blue “+” symbols. The green circles represent the mean of the best global fits over the 31 independent runs.
According to the box and whisker plots, the HE-DEPSO algorithm shows superior stability and solution quality compared with the other evaluated algorithms over the 31 independent runs. The distribution of the best global fitness values achieved by HE-DEPSO is more tightly concentrated around a lower optimum, reflected in a lower average than the other algorithms. Consequently, HE-DEPSO offers a more robust and stable performance. A plotting sketch reproducing these conventions is given below.
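A minimal matplotlib sketch of the plotting conventions described above (the data are random stand-ins, not the paper's results):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    labels = ["HE-DEPSO", "DEPSO", "SHADE", "CoDE", "DE", "PSO"]
    # Stand-ins for the 31 best global fits of each algorithm.
    data = [rng.lognormal(mean=m, sigma=0.5, size=31) for m in range(-3, 3)]

    plt.boxplot(
        data,
        labels=labels,
        showmeans=True,
        boxprops=dict(color="blue"),
        medianprops=dict(color="red"),
        flierprops=dict(marker="+", markeredgecolor="blue"),
        meanprops=dict(marker="o", markerfacecolor="green",
                       markeredgecolor="green"),
    )
    plt.ylabel("best global fit")
    plt.yscale("log")
    plt.show()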

6.2. Valid Regions for Parameters A u , A d , ϕ 1 , and ϕ 2

In this subsection, we present the results obtained after applying the HE-DEPSO algorithm to search for valid regions of the A u , A d , ϕ 1 , and ϕ 2 parameters. This process was carried out using the most recent experimental values of the V C K M matrix elements, the quark masses, and the Jarlskog invariant (see Appendix A).
Figure 7 and Figure 8 present the allowed regions for the parameters A u , A d , ϕ 1 , and ϕ 2 , scaled as A u / m t , A d / m b , ϕ 1 / π , and ϕ 2 / π , respectively, in the first case study of Table 1. These regions are classified by the solutions found using the HE-DEPSO algorithm and are represented by black, blue, and orange dots, which correspond to different levels of precision of the χ 2 ( X ) function. The orange region, the smallest, indicates the highest precision, where the experimental data agree best with the theoretical predictions. These solutions were found by executing the algorithm iteratively until a total of 5000 solutions had been collected at each precision level ( χ 2 < 1 , χ 2 < 1 × 10⁻¹ , and χ 2 < 1 × 10⁻² ); a sketch of this loop is given below.
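A minimal sketch of this collection loop, assuming a hypothetical interface run_hedepso(seed) that returns the best parameter vector and its χ 2 value for one run:

    def collect_solutions(run_hedepso, threshold, target=5000):
        """Re-run the optimizer with fresh seeds until `target`
        solutions below `threshold` have been gathered."""
        solutions, seed = [], 0
        while len(solutions) < target:
            x_best, chi2_best = run_hedepso(seed=seed)
            if chi2_best < threshold:
                solutions.append(x_best)
            seed += 1
        return solutions

    # One pool per precision level (black, blue, and orange regions).
    # regions = {t: collect_solutions(run_hedepso, t) for t in (1.0, 1e-1, 1e-2)}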
It is worth noting that, for the rest of the case studies (see Table 1), the regions found for the parameters A u , A d , ϕ 1 , and ϕ 2 were similar.

6.3. Predictions for the V C K M Elements

The second part of the model validation is as follows: with the data obtained from the valid regions of the four free parameters of the 4-zero texture model, A u , A d , ϕ 1 , and ϕ 2 , found with the help of HE-DEPSO, we can evaluate the feasibility of the model against the latest experimental values of the V C K M matrix and the Jarlskog invariant. This is achieved by checking the model's predictive power on the remaining elements of the V C K M matrix that were not used within the χ 2 fit, i.e., | V c d | , | V u d | , | V c s | , | V t b | , | V t d | , and | V t s | . Using Equations (2)–(5), the Chau–Keung parametrization [3], and the solutions found, we calculate numerical predictions for these elements and present them as scatter plots. Each graph includes the experimental central value of each element together with its associated experimental error. Points that fall within the region delimited by the experimental errors are considered good predictions, while those outside it are considered poor predictions. A minimal sketch of this acceptance check follows.
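The sketch handles asymmetric errors; the function name is ours, and the example uses the | V c d | central value and error quoted below:

    def within_error(pred, central, err_minus, err_plus):
        """True if a predicted |V_ij| lies inside the experimental band."""
        return central - err_minus <= pred <= central + err_plus

    # Example with |V_cd| = 0.22636 +/- 0.00048:
    print(within_error(0.22640, 0.22636, 0.00048, 0.00048))  # True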
In Figure 9, the predictions for the elements | V u d | and | V c d | in the first case study are presented. The black, blue, and orange points indicate a precision of χ 2 < 1 , χ 2 < 1 × 10⁻¹ , and χ 2 < 1 × 10⁻² , respectively. For | V c d | , the experimental central value ( 0.22636 ) is indicated with a solid red vertical line on the corresponding axis, while its experimental error ( ± 0.00048 ) is shown with two red dashed vertical lines on either side of the central value. In the case of | V u d | , the experimental central value ( 0.97401 ) is represented with a solid blue horizontal line on the | V u d | axis, and its experimental error ( ± 0.00011 ) is indicated with two blue dashed horizontal lines on either side of the central value. The predictions for the two elements show that the data with a precision of χ 2 < 1 can fall outside the experimental error region, suggesting a poor prediction. On the other hand, the predictions with χ 2 < 1 × 10⁻¹ and χ 2 < 1 × 10⁻² remain within this region, indicating accurate predictions. This pattern is repeated in the rest of the analyzed case studies.
The predictions for the elements | V c s | and | V t b | are shown in Figure 10 for the first case study. The experimental central value of | V c s | is represented by a solid red vertical line at the center of the | V c s | axis at 0.97320 , while its experimental error of ± 0.00011 is indicated by red dashed vertical lines on either side. The experimental central value of | V t b | is shown by a solid blue horizontal line at the center of the | V t b | axis at 0.999172 , and its experimental error of ± 0.000024 is represented by blue dashed horizontal lines on either side. According to the predictions shown in Figure 10, the data obtained with χ 2 < 1 do not represent good predictions, as they are outside the experimental errors. On the other hand, the predictions with χ 2 < 1 × 10⁻¹ and χ 2 < 1 × 10⁻² remain within the error region, indicating that they are good predictions. This pattern is repeated in the four case studies.
Finally, in Figure 11, the predictions for the elements | V t d | and | V t s | are shown in the four case studies. For | V t d | , the experimental central value is represented by a solid vertical line at | V t d | = 0.00854 , and its experimental error with two red dashed vertical lines at ± 0.00016 . For | V t s | , the experimental central value is shown with a solid blue horizontal line at | V t s | = 0.03978 , and its experimental error with two blue dashed horizontal lines at ± 0.00060 . The predictions for these two elements indicate that the data obtained with precisions of χ 2 < 1 and χ 2 < 1 × 10⁻¹ may be outside the experimental errors, so they are not good predictions. In contrast, the predictions with χ 2 < 1 × 10⁻² remain within the errors, making them good predictions. These characteristics are observed in the four case studies.
When analyzing the predictions of the elements of the V C K M matrix ( | V u d | , | V c d | , | V c s | , | V t b | , | V t d | , and | V t s | ), it was observed that only the solutions of the HE-DEPSO algorithm with an accuracy of χ 2 < 1 × 10⁻² fit within the limits of experimental error. These solutions are considered valid and capable of reproducing the V C K M matrix in all case studies and elements. Thus, it can be established that the 4-zero texture model is physically viable. It would be interesting, however, to explore the implications of this information obtained within the texture formalism.

7. Conclusions

This paper explored the feasibility of the 4-zero texture model using the most recent experimental values. To gather data that can inform decisions on the model’s viability, we employed a chi-square fit to compare the theoretical expressions of the model with the experimental values of well-measured physical observables. We developed a single-objective optimization model by defining a χ 2 ( X ) function to identify the allowed regions for the free parameters of the 4-zero texture model that are consistent with the experimental data. To address the challenge of optimizing the χ 2 ( X ) function within the chi-square fit, we proposed a new DEPSO algorithm variant, HE-DEPSO.
The proposed algorithm has demonstrated its ability to efficiently optimize different functions, particularly those of the CEC 2017 single-objective benchmark set. The convergence properties of HE-DEPSO show a good balance between solution precision and convergence speed in most cases. Regarding the optimization of the χ 2 ( X ) function, our findings indicate that HE-DEPSO and DEPSO are more than adequate for solving the problem, with HE-DEPSO providing the best solution quality and consistency while DEPSO maintains a faster convergence rate. This part of the investigation also highlights the difficulty that some algorithms, such as SHADE and CoDE, can face when optimizing the χ 2 ( X ) function despite exhibiting good performance on the CEC 2017 test set. Even so, optimization algorithms such as SHADE, CoDE, and DE remain worth considering for optimization problems in high-energy physics. Finally, using the HE-DEPSO algorithm, we show that the 4-zero texture model is compatible with current experimental data, and thus we can affirm that this model is physically feasible. Importantly, our proposed approach, with the help of HE-DEPSO, allows a more accurate exploration of the full parameter space of the elements of the 4-zero texture quark mass matrix, an aspect that is often overlooked in analytical approaches. While aligning with the latest experimental data, this approach could enhance the structural features of 4-zero quark mass matrices and may serve as a valuable starting point for future model building.
Future work will focus on enhancing the convergence speed of the HE-DEPSO algorithm and evaluating its effectiveness across diverse problem domains, exploring its ability to tackle more complex optimization challenges. An extended comparative analysis against other bio-inspired algorithms or other advanced DE variants could also be conducted. Furthermore, this research could be expanded by analyzing additional texture models, such as 2- and 5-zero textures, and determining their validity against current experimental data. Finally, the method could be applied to any phenomenological fermion-mass model extending the SM, for example, nearest-neighbor textures [59], neutrino mass models [60], and discrete flavor symmetry models [14,15].

Author Contributions

Conceptualization, P.M.-R. and R.N.-P.; methodology, P.L.-E., P.M.-R. and R.N.-P.; software, E.M.-G.; validation, E.M.-G. and P.L.-E.; formal analysis, P.L.-E. and P.M.-R.; investigation, E.M.-G. and P.L.-E.; resources, P.M.-R. and P.L.-E.; writing—original draft preparation, E.M.-G., P.M.-R. and R.N.-P.; writing—review and editing, P.M.-R. and J.C.S.-T.-M.; visualization, P.M.-R. and J.C.S.-T.-M.; supervision, P.M.-R. and J.C.S.-T.-M.; funding acquisition, J.C.S.-T.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Autonomous University of Hidalgo (UAEH) and the National Council for Humanities, Science, and Technology (CONAHCYT) with project number F003-320109.

Data Availability Statement

The chi-square function code is available on GitHub: https://github.com/EmX-Mtz/Chi-Square_Function, accessed on 5 August 2024.

Acknowledgments

The first author acknowledges the National Council for Humanities, Science, and Technology (CONAHCYT) for the financial support received in pursuing graduate studies, which has been essential for the completion of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this work, the following experimental values for the elements of V C K M , the Jarlskog invariant, and the quark masses (at the M Z scale) are considered [61]:

$$ |V_{CKM}| = \begin{pmatrix} 0.97401 \pm 0.00011 & 0.22650 \pm 0.00048 & 0.00361^{+0.00011}_{-0.00009} \\ 0.22636 \pm 0.00048 & 0.97320 \pm 0.00011 & 0.04053^{+0.00083}_{-0.00061} \\ 0.00854^{+0.00023}_{-0.00016} & 0.03978^{+0.00082}_{-0.00060} & 0.999172^{+0.000024}_{-0.000035} \end{pmatrix} $$

$$ J = \left( 3.00^{+0.15}_{-0.09} \right) \times 10^{-5} $$

$$ m_u = 1.23 \pm 0.21~\text{MeV}, \quad m_d = 2.67 \pm 0.19~\text{MeV}, \quad m_s = 53.16 \pm 4.61~\text{MeV}, $$
$$ m_c = 0.620 \pm 0.017~\text{GeV}, \quad m_b = 2.839 \pm 0.026~\text{GeV}, \quad m_t = 168.26 \pm 0.75~\text{GeV} $$
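For numerical work, these inputs can be encoded directly; a minimal sketch (the dictionary layout and names are ours; each entry stores the central value with its lower and upper errors):

    # Experimental inputs from Ref. [61]: (central, -err, +err).
    VCKM_EXP = {
        "V_ud": (0.97401, 0.00011, 0.00011),
        "V_us": (0.22650, 0.00048, 0.00048),
        "V_ub": (0.00361, 0.00009, 0.00011),
        "V_cd": (0.22636, 0.00048, 0.00048),
        "V_cs": (0.97320, 0.00011, 0.00011),
        "V_cb": (0.04053, 0.00061, 0.00083),
        "V_td": (0.00854, 0.00016, 0.00023),
        "V_ts": (0.03978, 0.00060, 0.00082),
        "V_tb": (0.999172, 0.000035, 0.000024),
    }
    J_EXP = (3.00e-5, 0.09e-5, 0.15e-5)
    QUARK_MASSES_GEV = {  # (central, +/- err), at the M_Z scale
        "m_u": (1.23e-3, 0.21e-3), "m_d": (2.67e-3, 0.19e-3),
        "m_s": (53.16e-3, 4.61e-3), "m_c": (0.620, 0.017),
        "m_b": (2.839, 0.026), "m_t": (168.26, 0.75),
    }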

References

  1. Battaglia, M.; Buras, A.J.; Gambino, P.; Stocchi, A.; Abbaneo, D.; Ali, A.; Amaral, P.; Andreev, V.; Artuso, M.; Barberio, E.; et al. The CKM Matrix and the Unitarity Triangle. In Proceedings of the First Workshop on the CKM Unitarity Triangle, CERN, Geneva, Switzerland, 13–16 February 2002.
  2. Fleischer, R. Flavor physics and CP violation. In Proceedings of the European School on High-Energy Physics, Tsakhkadzor, Armenia, 24 August–6 September 2003; pp. 81–150.
  3. Branco, G.C.; Lavoura, L.; Silva, J.P. CP Violation; Clarendon Press: Oxford, UK, 1999; Volume 103.
  4. Altarelli, G.; Feruglio, F. Neutrino masses and mixings: A theoretical perspective. Phys. Rept. 1999, 320, 295–318.
  5. Altarelli, G.; Feruglio, F. Models of neutrino masses and mixings. New J. Phys. 2004, 6, 106.
  6. Fritzsch, H.; Xing, Z.Z. Mass and flavor mixing schemes of quarks and leptons. Prog. Part. Nucl. Phys. 2000, 45, 1–81.
  7. Xing, Z.Z. Flavor structures of charged fermions and massive neutrinos. Phys. Rept. 2020, 854, 1–147.
  8. Gupta, M.; Ahuja, G. Flavor mixings and textures of the fermion mass matrices. Int. J. Mod. Phys. A 2012, 27, 1230033.
  9. Froggatt, C.D.; Nielsen, H.B. Hierarchy of Quark Masses, Cabibbo Angles and CP Violation. Nucl. Phys. B 1979, 147, 277–298.
  10. King, S.F.; Luhn, C. Neutrino Mass and Mixing with Discrete Symmetry. Rept. Prog. Phys. 2013, 76, 056201.
  11. Borzumati, F.; Nomura, Y. Low scale seesaw mechanisms for light neutrinos. Phys. Rev. D 2001, 64, 053005.
  12. Lindner, M.; Ohlsson, T.; Seidl, G. Seesaw mechanisms for Dirac and Majorana neutrino masses. Phys. Rev. D 2002, 65, 053014.
  13. Flieger, W.; Gluza, J. General neutrino mass spectrum and mixing properties in seesaw mechanisms. Chin. Phys. C 2021, 45, 023106.
  14. Aranda, A.; Bonilla, C.; Rojas, A.D. Neutrino masses generation in a Z4 model. Phys. Rev. D 2012, 85, 036004.
  15. Ahn, Y.H.; Kang, S.K. Non-zero θ13 and CP violation in a model with A4 flavor symmetry. Phys. Rev. D 2012, 86, 093003.
  16. Sinha, R.; Samanta, R.; Ghosal, A. Maximal Zero Textures in Linear and Inverse Seesaw. Phys. Lett. B 2016, 759, 206–213.
  17. Fritzsch, H. Calculating the Cabibbo angle. Phys. Lett. B 1977, 70, 436–440.
  18. Fritzsch, H.; Xing, Z.Z. Four zero texture of Hermitian quark mass matrices and current experimental tests. Phys. Lett. B 2003, 555, 63–70.
  19. Kang, K.; Kang, S.K. New class of quark mass matrix and calculability of the flavor mixing matrix. Phys. Rev. D 1997, 56, 1511–1514.
  20. Xing, Z.-Z.; Zhao, Z.-H. On the four-zero texture of quark mass matrices and its stability. Nucl. Phys. B 2015, 897, 302–325.
  21. Gómez-Ávila, S.; López-Lozano, L.; Miranda-Romagnoli, P.; Noriega-Papaqui, R.; Lagos-Eulogio, P. 2-zeroes texture and the Universal Texture Constraint. arXiv 2021, arXiv:2105.01554.
  22. Tani, L.; Rand, D.; Veelken, C.; Kadastik, M. Evolutionary algorithms for hyperparameter optimization in machine learning for application in high energy physics. Eur. Phys. J. C 2021, 81, 170.
  23. Zhang, Y.; Zhou, D. Application of differential evolution algorithm in future collider optimization. In Proceedings of the 7th International Particle Accelerator Conference (IPAC'16), Busan, Republic of Korea, 8–13 May 2016; pp. 1025–1027.
  24. Wu, J.; Zhang, Y.; Qin, Q.; Wang, Y.; Yu, C.; Zhou, D. Dynamic aperture optimization with diffusion map analysis at CEPC using differential evolution algorithm. Nucl. Instrum. Methods Phys. Res. Sect. A 2020, 959, 163517.
  25. Allanach, B.C.; Grellscheid, D.; Quevedo, F. Genetic algorithms and experimental discrimination of SUSY models. J. High Energy Phys. 2004, 2004, 069.
  26. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479.
  27. Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential evolution: A recent review based on state-of-the-art works. Alex. Eng. J. 2022, 61, 3831–3872.
  28. Pan, J.S.; Liu, N.; Chu, S.C. A Hybrid Differential Evolution Algorithm and Its Application in Unmanned Combat Aerial Vehicle Path Planning. IEEE Access 2020, 8, 17691–17712.
  29. Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558.
  30. Wang, Y.; Cai, Z.; Zhang, Q. Differential Evolution With Composite Trial Vector Generation Strategies and Control Parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66.
  31. Peñuñuri, F.; Cab, C.; Carvente, O.; Zambrano-Arjona, M.; Tapia, J. A study of the Classical Differential Evolution control parameters. Swarm Evol. Comput. 2016, 26, 86–96.
  32. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78.
  33. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417.
  34. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential Evolution Using a Neighborhood-Based Mutation Operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553.
  35. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
  36. Zhang, J.; Sanderson, A. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
  37. Chakraborty, S.; Saha, A.K.; Sharma, S.; Sahoo, S.K.; Pal, G. Comparative Performance Analysis of Differential Evolution Variants on Engineering Design Problems. J. Bionic Eng. 2022, 19, 1140–1160.
  38. Lagos-Eulogio, P.; Miranda-Romagnoli, P.; Seck-Tuoh-Mora, J.C.; Hernández-Romero, N. Improvement in Sizing Constrained Analog IC via Ts-CPD Algorithm. Computation 2023, 11, 230.
  39. Wang, S.; Li, Y.; Yang, H. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization. Appl. Soft Comput. 2019, 81, 105496.
  40. Novaes, S.F. Standard model: An Introduction. In Proceedings of the 10th Jorge Andre Swieca Summer School: Particle and Fields, São Paulo, Brazil, 6–12 February 1999; pp. 5–102.
  41. Langacker, P. Introduction to the Standard Model and Electroweak Physics. In Proceedings of the Theoretical Advanced Study Institute in Elementary Particle Physics: The Dawn of the LHC Era, Boulder, CO, USA, 1 January 2010; pp. 3–48.
  42. Isidori, G.; Nir, Y.; Perez, G. Flavor Physics Constraints for Physics Beyond the Standard Model. Ann. Rev. Nucl. Part. Sci. 2010, 60, 355.
  43. Antonelli, M.; Asner, D.M.; Bauer, D.; Becher, T.; Beneke, M.; Bevan, A.J.; Blanke, M.; Bloise, C.; Bona, M.; Bondar, A.; et al. Flavor Physics in the Quark Sector. Phys. Rept. 2010, 494, 197–414.
  44. Eidelman, S. Review of particle physics. Particle Data Group. Phys. Lett. B 2004, 592, 1.
  45. Félix-Beltrán, O.; González-Canales, F.; Hernández-Sánchez, J.; Moretti, S.; Noriega-Papaqui, R.; Rosado, A. Analysis of the quark sector in the 2HDM with a four-zero Yukawa texture using the most recent data on the CKM matrix. Phys. Lett. B 2015, 742, 347–352.
  46. Barranco, J.; Delepine, D.; Lopez-Lozano, L. Neutrino mass determination from a four-zero texture mass matrix. Phys. Rev. D 2012, 86, 053012.
  47. Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), Nagoya, Japan, 4–6 October 1995; pp. 39–43.
  48. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
  49. Storn, R.; Price, K. Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces; Technical Report TR-95-012; ICSI: Berkeley, CA, USA, 1995. Available online: https://cse.engineering.nyu.edu/~mleung/CS909/s04/Storn95-012.pdf (accessed on 1 July 2024).
  50. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
  51. Sánchez Vargas, O.; De León Aldaco, S.E.; Aguayo Alquicira, J.; Vela Valdés, L.G.; Mina Antonio, J.D. Differential Evolution Applied to a Multilevel Inverter—A Case Study. Appl. Sci. 2022, 12, 9910.
  52. Batool, R.; Bibi, N.; Muhammad, N.; Alhazmi, S. Detection of Primary User Emulation Attack Using the Differential Evolution Algorithm in Cognitive Radio Networks. Appl. Sci. 2023, 13, 571.
  53. Elsayed, S.M.; Sarker, R.A.; Ray, T. Differential evolution with automatic parameter configuration for solving the CEC2013 competition on Real-Parameter Optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1932–1937.
  54. Zhong, X.; Cheng, P. An elite-guided hierarchical differential evolution algorithm. Appl. Intell. 2021, 51, 4962–4983.
  55. Yang, Q.; Guo, X.; Gao, X.D.; Xu, D.D.; Lu, Z.Y. Differential Elite Learning Particle Swarm Optimization for Global Numerical Optimization. Mathematics 2022, 10, 1261.
  56. Huang, Y.; Yu, Y.; Guo, J.; Wu, Y. Self-adaptive Artificial Bee Colony with a Candidate Strategy Pool. Appl. Sci. 2023, 13, 10445.
  57. Wang, S.; Li, Y.; Yang, H.; Liu, H. Self-adaptive differential evolution algorithm with improved mutation strategy. Soft Comput. 2017, 22, 3433–3447.
  58. Kumar, A.; Price, K.V.; Mohamed, A.W.; Hadi, A.A.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016; pp. 1–34.
  59. Branco, G.C.; Emmanuel-Costa, D.; Simoes, C. Nearest-Neighbour Interaction from an Abelian Symmetry and Deviations from Hermiticity. Phys. Lett. B 2010, 690, 62–67.
  60. Centelles Chuliá, S.; Herrero-Brocal, A.; Vicente, A. The Type-I Seesaw family. J. High Energy Phys. 2024, 2024, 60.
  61. Fritzsch, H.; Xing, Z.Z.; Zhang, D. Correlations between quark mass and flavor mixing hierarchies. Nucl. Phys. B 2022, 974, 115634.
Figure 1. χ 2 function projections over the variables A u , A d , ϕ 1 , ϕ 2 . (a) Dependence of χ 2 on A u and A d (right-hand graph); from the contour lines (left-hand graph), we note that the region minimizing χ 2 lies around a straight line at 45°. (b) Dependence of χ 2 on A u and ϕ 1 (right-hand graph) and corresponding contour lines (left-hand graph); the maxima of χ 2 are periodic in ϕ 1 . (c) Dependence of χ 2 on A u and ϕ 2 (right-hand graph) and corresponding contour lines (left-hand graph); the maxima and minima of χ 2 are periodic in ϕ 2 . (d) Dependence of χ 2 on A d and ϕ 1 (right-hand graph) and corresponding contour lines (left-hand graph); the maxima of χ 2 are periodic in ϕ 1 . (e) Dependence of χ 2 on A d and ϕ 2 (right-hand graph) and corresponding contour lines (left-hand graph); the maxima and minima of χ 2 are periodic in ϕ 2 . (f) Dependence of χ 2 on ϕ 1 and ϕ 2 (right-hand graph) and corresponding contour lines (left-hand graph).
Figure 2. Strategy selection curves: (a) selection probability α t ; (b) distribution of strategy selection.
Figure 3. Convergence curves for the functions F 1 , F 3 , F 5 , and F 8 , with D = 10 and 30. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent runs.
Figure 4. Convergence curves for the functions F 7 , F 24 , F 25 , and F 28 , with D = 10 and 30. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent runs.
Figure 5. Convergence curves of the error measure in the solution for the χ 2 ( X ) function. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent runs.
Figure 6. Box and whisker plots of the best global fit values obtained by the HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO algorithms over the 31 independent runs on the χ 2 ( X ) function. The horizontal axis shows the algorithms compared, and the vertical axis shows the global fit values.
Figure 7. Allowed regions for A u and A d constrained by current experimental data at different levels of precision: χ 2 < 1 (black dots), χ 2 < 1 × 10⁻¹ (blue dots), and χ 2 < 1 × 10⁻² (orange dots).
Figure 8. Allowed regions for ϕ 1 and ϕ 2 constrained by current experimental data at different levels of precision: χ 2 < 1 (black dots), χ 2 < 1 × 10⁻¹ (blue dots), and χ 2 < 1 × 10⁻² (orange dots).
Figure 9. Predictions for | V c d | and | V u d | for η u = + 1 and η d = + 1 .
Figure 10. Predictions for | V c s | and | V t b | for η u = + 1 and η d = + 1 .
Figure 11. Predictions for | V t d | and | V t s | for η u = + 1 and η d = + 1 .
Table 1. Cases of study.
Case | η u | η d
1 | +1 | +1
2 | +1 | −1
3 | −1 | +1
4 | −1 | −1
Table 2. Details of benchmark functions used in the experiments.
Function Type | Index | Function Name | Optimum ( F min )
Unimodal Functions | 1 | Shifted and Rotated Bent Cigar | 100
 | 3 | Shifted and Rotated Zakharov | 300
Simple Multimodal Functions | 4 | Shifted and Rotated Rosenbrock | 400
 | 5 | Shifted and Rotated Rastrigin | 500
 | 6 | Shifted and Rotated Expanded Schaffer F6 | 600
 | 7 | Shifted and Rotated Lunacek Bi-Rastrigin | 700
 | 8 | Shifted and Rotated Non-Continuous Rastrigin | 800
 | 9 | Shifted and Rotated Levy | 900
 | 10 | Shifted and Rotated Schwefel | 1000
Hybrid Functions | 11 | Hybrid Problem 1 (N = 3) | 1100
 | 12 | Hybrid Problem 2 (N = 3) | 1200
 | 13 | Hybrid Problem 3 (N = 3) | 1300
 | 14 | Hybrid Problem 4 (N = 4) | 1400
 | 15 | Hybrid Problem 5 (N = 4) | 1500
 | 16 | Hybrid Problem 6 (N = 4) | 1600
 | 17 | Hybrid Problem 7 (N = 5) | 1700
 | 18 | Hybrid Problem 8 (N = 5) | 1800
 | 19 | Hybrid Problem 9 (N = 5) | 1900
 | 20 | Hybrid Problem 10 (N = 6) | 2000
Composition Functions | 21 | Composition Problem 1 (N = 3) | 2100
 | 22 | Composition Problem 2 (N = 3) | 2200
 | 23 | Composition Problem 3 (N = 4) | 2300
 | 24 | Composition Problem 4 (N = 4) | 2400
 | 25 | Composition Problem 5 (N = 5) | 2500
 | 26 | Composition Problem 6 (N = 5) | 2600
 | 27 | Composition Problem 7 (N = 6) | 2700
 | 28 | Composition Problem 8 (N = 6) | 2800
 | 29 | Composition Problem 9 (N = 9) | 2900
 | 30 | Composition Problem 10 (N = 3) | 3000
Table 3. Parameter settings of all algorithms.
Algorithm | Parameters
HE-DEPSO | p min = 0.1, p max = 0.4, τ = 1.8, c 1 = c 2 = 2, ω ∈ [0.4, 0.9], H = NP, M F (1:H) = 0.5, M Cr (1:H) = 0.8
DEPSO | c 1 = c 2 = 2, ω ∈ [0.4, 0.9], Cr ∈ [0.3, 1.0], F ∈ [0.1, 0.8], NS max = 5, γ = 0.001, τ = 1.8, SEP = 0.3 · NP
SHADE | H = NP, M F (1:H) = M Cr (1:H) = 0.5
CoDE | (1) [F = 1.0, Cr = 0.1], (2) [F = 1.0, Cr = 0.9], (3) [F = 0.8, Cr = 0.2]
DE | Cr = 0.9, F = 0.5
PSO | ω ∈ [0.4, 0.9], c 1 = c 2 = 2
Table 4. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the set of CEC 2017 test functions with D = 10 .
Function | Metrics | HE-DEPSO | DEPSO | SHADE | CoDE | DE | PSO
F 1 Mean0.00001.8337 ×   10 15 0.00001.1580 ×   10 10 3.5103 ×   10 5 1.1366 ×   10 10
Std0.00004.8427 ×   10 15 0.00006.0507 ×   10 11 2.0793 ×   10 5 9.7631 ×   10 9
Rank141325
Significance +++
F 3 Mean9.1683 ×   10 16 1.3752 ×   10 14 1.6503 ×   10 14 2.0001 ×   10 9 1.7359 ×   10 4 1.5886 ×   10 10
Std0.00000.00000.00002.7390 ×   10 11 4.8441 ×   10 4 6.2297 ×   10 2
Rank111234
Significance +++
F 4 Mean0.00002.3838 ×   10 14 2.0805 ×   10 0 2.3103 ×   10 9 1.0409 ×   10 3 5.8071 ×   10 1
Std0.00002.8513 ×   10 14 6.7544 ×   10 1 1.2601 ×   10 9 6.4578 ×   10 4 6.9704 ×   10 1
Rank125346
Significance +++++
F 5 Mean2.8439 ×   10 0 4.0119 ×   10 0 3.8489 ×   10 0 8.5594 ×   10 0 2.9710 ×   10 1 3.0728 ×   10 1
Std7.2094 ×   10 1 1.6940 × 10 0 1.0401 ×   10 0 1.4246 ×   10 0 3.9355 ×   10 0 1.2073 ×   10 1
Rank132456
Significance +++++
F 6 Mean0.00004.6321 ×   10 3 2.7530 ×   10 4 4.6895 ×   10 3 4.5748 ×   10 5 1.1970 ×   10 1
Std0.00002.5790 ×   10 2 5.5026 ×   10 4 2.4954 ×   10 3 2.2095 ×   10 5 9.6167 ×   10 0
Rank143526
Significance +++++
F 7 Mean1.2678 ×   10 1 2.1067 ×   10 1 1.3953 ×   10 1 2.0678 ×   10 1 3.9143 ×   10 1 3.0657 ×   10 1
Std7.0313 ×   10 1 6.7134 ×   10 0 7.6551 ×   10 1 2.2560 ×   10 0 3.7389 ×   10 0 8.6791 ×   10 0
Rank142365
Significance +++++
F 8 Mean2.2853 ×   10 0 4.2045 ×   10 0 3.8070 ×   10 0 8.6884 ×   10 0 2.6037 ×   10 1 2.4833 ×   10 1
Std8.1114 ×   10 1 1.9346 ×   10 0 7.3957 ×   10 1 1.8765 ×   10 0 5.5290 ×   10 0 1.1493 ×   10 1
Rank132465
Significance +++++
F 9 Mean0.00007.3346 ×   10 15 0.00000.00000.00006.8233 ×   10 1
Std0.00002.8391 ×   10 14 0.00000.00000.00001.9812 ×   10 2
Rank121113
Significance ++
F 10 Mean2.4499 ×   10 2 8.9742 ×   10 2 1.5812 ×   10 2 7.4438 ×   10 2 1.4730 ×   10 3 9.1356 ×   10 2
Std1.1864 ×   10 2 1.6890 ×   10 2 7.6565 ×   10 1 1.1745 ×   10 2 1.7509 ×   10 2 3.3829 ×   10 2
Rank241365
Significance +++
F 11 Mean2.5687 ×   10 7 8.6658 ×   10 1 2.3208 ×   10 0 1.2502 ×   10 0 6.7909 ×   10 0 8.1368 ×   10 1
Std1.4103 ×   10 6 6.1559 ×   10 1 4.7869 ×   10 1 4.6467 ×   10 1 9.9142 ×   10 1 8.3317 ×   10 1
Rank124356
Significance +++++
F 12 Mean8.1966 ×   10 1 1.1535 ×   10 1 3.5298 ×   10 1 2.7652 ×   10 2 5.3458 ×   10 2 8.7444 ×   10 7
Std7.0615 ×   10 1 2.8045 ×   10 1 1.4666 ×   10 1 6.4786 ×   10 1 1.2345 ×   10 2 3.8318 ×   10 8
Rank312456
Significance +++
F 13 Mean3.8882 ×   10 0 4.6716 ×   10 0 5.9948 ×   10 0 1.1231 ×   10 1 1.2796 ×   10 1 6.3448 ×   10 3
Std2.4042 ×   10 0 9.7586 ×   10 1 6.0682 ×   10 1 2.0901 ×   10 0 1.7030 ×   10 0 1.2009 ×   10 4
Rank123456
Significance ++++
F 14 Mean2.4091 ×   10 0 6.8868 ×   10 0 1.9065 ×   10 1 2.0295 ×   10 1 2.6185 ×   10 1 1.7551 ×   10 2
Std5.1573 ×   10 0 8.0249 ×   10 0 3.0340 ×   10 0 1.2717 ×   10 1 1.2365 ×   10 0 9.5615 ×   10 1
Rank123456
Significance +++++
F 15 Mean3.6646 ×   10 1 5.7132 ×   10 1 1.0838 ×   10 0 1.3990 ×   10 0 2.1239 ×   10 0 4.6041 ×   10 2
Std2.3145 ×   10 1 4.9456 ×   10 1 3.3723 ×   10 1 5.4455 ×   10 1 1.2934 ×   10 0 1.0508 ×   10 3
Rank123456
Significance +++++
F 16 Mean1.4450 ×   10 0 1.6357 ×   10 0 4.5365 ×   10 0 4.3009 ×   10 0 1.9028 ×   10 1 1.2496 ×   10 2
Std3.2202 ×   10 1 3.2942 ×   10 0 2.2215 ×   10 0 2.5242 ×   10 0 1.6484 ×   10 1 1.0780 ×   10 2
Rank124356
Significance ++++
F 17 Mean7.2016 ×   10 0 1.2075 ×   10 1 2.1001 ×   10 1 2.0112 ×   10 1 2.8168 ×   10 1 5.7710 ×   10 1
Std3.0221 ×   10 0 8.5311 ×   10 0 3.2211 ×   10 0 2.8326 ×   10 0 1.8397 ×   10 0 2.4017 ×   10 1
Rank124356
Significance +++++
F 18 Mean3.8770 ×   10 1 8.2949 ×   10 0 1.9823 ×   10 1 2.0117 ×   10 1 2.0542 ×   10 1 2.7241 ×   10 4
Std1.4583 ×   10 1 6.7399 ×   10 0 1.9348 ×   10 0 3.4569 ×   10 2 1.0532 ×   10 1 1.8731 ×   10 4
Rank123456
Significance +++++
F 19 Mean9.7895 ×   10 2 5.4625 ×   10 1 1.0349 ×   10 0 5.9210 ×   10 1 1.2208 ×   10 0 5.0978 ×   10 3
Std1.2871 ×   10 1 3.7564 ×   10 1 1.5715 ×   10 1 4.4716 ×   10 2 2.0310 ×   10 1 1.1584 ×   10 4
Rank124356
Significance +++++
F 20 Mean5.2046 ×   10 0 6.9310 ×   10 0 1.7787 ×   10 1 2.0000 ×   10 1 2.5137 ×   10 1 5.9825 ×   10 1
Std3.1153 ×   10 0 6.8469 ×   10 0 3.9314 ×   10 0 1.4932 ×   10 4 1.9501 ×   10 0 4.0129 ×   10 1
Rank123456
Significance ++++
F 21 Mean1.7676 ×   10 2 1.0695 ×   10 2 1.5575 ×   10 2 1.2554 ×   10 2 1.9598 ×   10 2 1.9045 ×   10 2
Std4.6029 ×   10 1 2.0972 ×   10 1 4.2795 ×   10 1 4.8087 ×   10 1 5.7606 ×   10 1 6.2148 ×   10 1
Rank413265
Significance ++
F 22 Mean1.0000 ×   10 2 9.7428 ×   10 1 9.9647 ×   10 1 7.4280 ×   10 1 1.0070 ×   10 2 1.2711 ×   10 2
Std0.00001.5939 ×   10 1 2.1997 ×   10 0 4.4524 ×   10 1 5.7442 ×   10 1 5.6792 ×   10 1
Rank423156
Significance +++
F 23 Mean3.0613 ×   10 2 3.0845 ×   10 2 3.0859 ×   10 2 3.2072 ×   10 2 3.3600 ×   10 2 3.4479 ×   10 2
Std1.4783 ×   10 0 2.1842 ×   10 0 1.4096 ×   10 0 3.4098 ×   10 0 5.2784 ×   10 0 1.6592 ×   10 1
Rank123456
Significance +++++
F 24 Mean3.2397 ×   10 2 2.6066 ×   10 2 3.0017 ×   10 2 3.0685 ×   10 2 3.6119 ×   10 2 3.7743 ×   10 2
Std4.1615 ×   10 1 1.1813 ×   10 2 6.5109 ×   10 1 9.2286 ×   10 1 4.7582 ×   10 0 1.5025 ×   10 1
Rank412356
Significance ++
F 25 Mean4.1869 ×   10 2 4.2783 ×   10 2 4.0980 ×   10 2 3.9783 ×   10 2 4.0247 ×   10 2 4.7998 ×   10 2
Std2.3095 ×   10 1 2.2521 ×   10 1 2.0463 ×   10 1 1.5694 ×   10 1 1.3981 ×   10 1 6.1710 ×   10 1
Rank453126
Significance ++
F 26 Mean3.0304 ×   10 2 3.0849 ×   10 2 3.0085 ×   10 2 2.9625 ×   10 2 3.8419 ×   10 2 5.6150 ×   10 2
Std1.1775 ×   10 1 3.2854 ×   10 1 1.7405 ×   10 1 6.3643 ×   10 1 1.6020 ×   10 2 2.5242 ×   10 2
Rank342156
Significance +++
F 27 Mean3.7584 ×   10 2 3.7153 ×   10 2 3.7683 ×   10 2 4.1575 ×   10 2 3.9004 ×   10 2 4.1060 ×   10 2
Std2.3098 ×   10 1 5.3891 ×   10 1 7.7935 ×   10 0 1.8031 ×   10 1 3.4010 ×   10 1 2.0006 ×   10 1
Rank213546
Significance ++++
F 28 Mean4.7251 ×   10 2 4.7218 ×   10 2 4.7338 ×   10 2 4.7816 ×   10 2 4.7253 ×   10 2 6.1933 ×   10 2
Std3.0463 ×   10 2 1.8112 ×   10 0 2.7252 ×   10 0 6.1120 ×   10 0 1.6134 ×   10 1 1.5566 ×   10 2
Rank214536
Significance ++++
F 29 Mean2.3616 ×   10 2 2.3508 ×   10 2 2.4658 ×   10 2 2.5569 ×   10 2 2.8284 ×   10 2 3.4235 ×   10 2
Std6.1991 ×   10 0 8.2538 ×   10 0 6.1477 ×   10 0 9.0010 ×   10 0 1.4685 ×   10 1 9.4838 ×   10 1
Rank213456
Significance ++++
F 30 Mean2.0061 ×   10 2 2.0113 ×   10 2 2.0227 ×   10 2 2.0313 ×   10 2 2.0320 ×   10 2 1.6044 ×   10 6
Std1.5718 ×   10 1 5.2187 ×   10 1 3.8772 ×   10 1 4.7938 ×   10 1 8.5716 ×   10 1 2.1981 ×   10 6
Rank123456
Significance +++++
Average rank | 1.68 | 2.27 | 2.75 | 3.24 | 4.48 | 5.48
Final rank | 1 | 2 | 3 | 4 | 5 | 6
W/T/L | −/−/− | 17/9/3 | 20/4/5 | 24/3/2 | 25/4/0 | 29/0/0
Table 5. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the CEC 2017 test function set with D = 30 .
Function | Metrics | HE-DEPSO | DEPSO | SHADE | CoDE | DE | PSO
F 1 Mean1.6045 ×   10 14 2.0034 ×   10 4 2.3598 ×   10 3 1.2860 ×   10 8 2.0817 ×   10 9 6.3600 ×   10 10
Std6.0758 ×   10 15 4.9086 ×   10 4 1.2985 ×   10 3 3.5599 ×   10 7 7.3074 ×   10 8 5.8100 ×   10 10
Rank123456
Significance +++++
F 3 Mean1.4486 ×   10 13 1.3343 ×   10 2 6.1189 ×   10 4 7.0539 ×   10 4 4.1473 ×   10 5 2.1200 ×   10 4
Std4.1092 ×   10 14 8.8712 ×   10 1 2.2749 ×   10 4 2.6622 ×   10 4 1.3053 ×   10 5 2.9800 ×   10 4
Rank124563
Significance +++++
F 4 Mean2.8405 ×   10 0 1.2605 ×   10 1 7.7040 ×   10 1 9.6061 ×   10 1 3.5736 ×   10 1 1.7300 ×   10 3
Std2.1842 ×   10 0 8.6176 ×   10 0 3.8595 ×   10 0 2.0503 ×   10 1 3.2288 ×   10 0 1.3000 ×   10 3
Rank124536
Significance +++++
F 5 Mean2.3850 ×   10 1 1.8175 ×   10 2 9.3463 ×   10 1 1.8418 ×   10 2 2.4169 ×   10 2 1.8500 ×   10 2
Std5.0701 ×   10 0 1.2845 ×   10 1 1.0871 ×   10 1 1.2630 ×   10 1 1.2302 ×   10 1 4.4900 ×   10 1
Rank132465
Significance +++++
F 6 Mean4.6793 ×   10 1 5.7971 ×   10 3 3.7760 ×   10 1 5.8751 ×   10 1 2.4651 ×   10 1 5.3800 ×   10 1
Std3.5101 ×   10 1 2.9636 ×   10 2 5.8394 ×   10 0 5.2672 ×   10 0 3.3305 ×   10 0 1.2000 ×   10 1
Rank214635
Significance ++++
F 7 Mean5.2487 ×   10 1 2.3377 ×   10 2 1.1601 ×   10 2 2.1572 ×   10 2 2.6887 ×   10 2 2.9100 ×   10 2
Std3.3716 ×   10 0 1.4172 ×   10 1 8.0551 ×   10 0 1.2272 ×   10 1 1.3070 ×   10 1 1.1300 ×   10 2
Rank142356
Significance +++++
F 8 Mean2.0726 ×   10 1 1.7779 ×   10 2 9.3771 ×   10 1 1.8161 ×   10 2 2.3844 ×   10 2 1.7100 ×   10 2
Std4.6475 ×   10 0 1.1810 ×   10 1 9.9764 ×   10 0 1.3211 ×   10 1 1.6657 ×   10 1 3.8700 ×   10 1
Rank142563
Significance +++++
F 9 Mean1.1236 ×   10 0 2.8880 ×   10 3 7.6008 ×   10 2 4.1017 ×   10 3 4.8783 ×   10 2 2.8400 ×   10 3
Std9.0322 ×   10 1 1.6080 ×   10 2 2.7907 ×   10 2 8.3155 ×   10 2 1.5998 ×   10 2 1.2400 ×   10 3
Rank214635
Significance ++++
F 10 Mean2.6526 ×   10 3 6.7636 ×   10 3 3.4600 ×   10 3 6.3166 ×   10 3 7.9631 ×   10 3 5.2400 ×   10 3
Std3.0794 ×   10 2 2.5866 ×   10 2 2.3363 ×   10 2 2.0917 ×   10 2 3.0010 ×   10 2 8.1500 ×   10 2
Rank152463
Significance +++++
F 11 Mean3.2932 ×   10 1 7.9240 ×   10 1 7.3961 ×   10 1 2.7578 ×   10 2 2.1665 ×   10 2 9.3600 ×   10 2
Std1.7540 ×   10 1 9.4859 ×   10 0 1.6453 ×   10 1 6.0425 ×   10 1 4.1872 ×   10 1 1.7000 ×   10 3
Rank132546
Significance +++++
F 12 Mean6.0704 ×   10 3 7.4374 ×   10 4 1.7001 ×   10 6 5.9649 ×   10 8 2.5500 ×   10 9 1.1600 ×   10 10
Std5.2411 ×   10 3 7.3297 ×   10 4 5.7892 ×   10 5 1.8156 ×   10 8 8.9672 ×   10 8 1.5200 ×   10 10
Rank123456
Significance +++++
F 13 Mean5.3737 ×   10 1 2.3200 ×   10 3 7.2862 ×   10 3 3.3010 ×   10 8 1.0201 ×   10 8 1.0300 ×   10 10
Std4.9535 ×   10 1 5.3868 ×   10 3 2.4057 ×   10 4 8.7976 ×   10 7 3.9539 ×   10 7 1.5000 ×   10 10
Rank123546
Significance +++++
F 14 Mean5.1892 ×   10 1 9.7833 ×   10 1 7.5446 ×   10 1 1.5942 ×   10 4 4.0951 ×   10 3 2.2100 ×   10 5
Std1.3639 ×   10 1 2.2150 ×   10 1 1.0230 ×   10 1 4.5198 ×   10 3 8.1599 ×   10 2 4.9000 ×   10 5
Rank132546
Significance +++++
F 15 Mean6.7166 ×   10 1 7.9018 ×   10 1 1.0487 ×   10 2 3.0409 ×   10 7 3.1028 ×   10 6 2.9100 ×   10 8
Std4.6957 ×   10 1 3.4973 ×   10 1 2.4704 ×   10 1 9.9730 ×   10 6 1.2198 ×   10 6 1.6200 ×   10 9
Rank123546
Significance +++
F 16 Mean2.7544 ×   10 2 1.4323 ×   10 3 8.5936 ×   10 2 1.6794 ×   10 3 2.3141 ×   10 3 1.7100 ×   10 3
Std1.4178 ×   10 2 1.6353 ×   10 2 1.1941 ×   10 2 1.7833 ×   10 2 1.8579 ×   10 2 6.0700 ×   10 2
Rank132465
Significance +++++
F 17 Mean6.6918 ×   10 1 3.8170 ×   10 2 1.9226 ×   10 2 6.8169 ×   10 2 1.1646 ×   10 3 8.4800 ×   10 2
Std1.3274 ×   10 1 1.0402 ×   10 2 6.2540 ×   10 1 9.4653 ×   10 1 1.4710 ×   10 2 3.2100 ×   10 2
Rank132465
Significance +++++
F 18 Mean6.1722 ×   10 1 2.4655 ×   10 2 2.3829 ×   10 5 3.4473 ×   10 5 7.9235 ×   10 5 1.9200 ×   10 6
Std2.7440 ×   10 1 2.4587 ×   10 2 1.4060 ×   10 5 1.1610 ×   10 5 1.9796 ×   10 5 4.8700 ×   10 6
Rank123456
Significance +++++
F 19 Mean4.0473 ×   10 1 3.7316 ×   10 1 3.2834 ×   10 1 5.0014 ×   10 3 3.7873 ×   10 4 4.2500 ×   10 8
Std2.2587 ×   10 1 7.4498 ×   10 0 6.1811 ×   10 0 2.4562 ×   10 3 2.1473 ×   10 4 7.1600 ×   10 8
Rank321456
Significance +++
F 20 Mean8.7890 ×   10 1 2.6052 ×   10 2 3.3149 ×   10 2 4.8347 ×   10 2 1.0591 ×   10 3 6.0400 ×   10 2
Std3.2291 ×   10 1 9.0720 ×   10 1 7.0732 ×   10 1 7.4079 ×   10 1 1.1750 ×   10 2 2.3100 ×   10 2
Rank123465
Significance +++++
F 21 Mean2.2047 ×   10 2 3.7820 ×   10 2 2.7893 ×   10 2 3.9662 ×   10 2 4.4351 ×   10 2 3.8400 ×   10 2
Std4.7309 ×   10 0 1.3317 ×   10 1 3.7140 ×   10 1 7.1300 ×   10 0 1.3511 ×   10 1 5.0800 ×   10 1
Rank132564
Significance +
F 22 Mean1.0186 ×   10 2 2.9760 ×   10 3 2.2144 ×   10 2 6.5363 ×   10 3 8.0766 ×   10 3 4.5500 ×   10 3
Std2.0196 ×   10 0 3.4464 ×   10 3 2.9380 ×   10 1 5.3749 ×   10 2 2.9236 ×   10 2 2.0400 ×   10 3
Rank132564
Significance ++++
F 23 Mean3.7543 ×   10 2 5.4924 ×   10 2 4.4917 ×   10 2 5.5419 ×   10 2 5.8732 ×   10 2 7.2800 ×   10 2
Std8.3139 ×   10 0 1.3235 ×   10 1 9.7459 ×   10 0 1.2816 ×   10 1 1.9032 ×   10 1 7.9500 ×   10 1
Rank132456
Significance +++++
F 24 Mean4.4893 ×   10 2 6.2854 ×   10 2 5.3810 ×   10 2 6.9034 ×   10 2 6.7462 ×   10 2 8.2800 ×   10 2
Std6.6585 ×   10 0 1.3484 ×   10 1 1.6953 ×   10 1 1.5747 ×   10 1 1.2401 ×   10 1 1.0000 ×   10 2
Rank132546
Significance ++
F 25 Mean3.8051 ×   10 2 3.7837 ×   10 2 3.7882 ×   10 2 4.1044 ×   10 2 3.9396 ×   10 2 5.6400 ×   10 2
Std1.1308 ×   10 1 6.0195 ×   10 2 1.1887 ×   10 0 6.6423 ×   10 0 4.4183 ×   10 0 2.0000 ×   10 2
Rank312546
Significance ++
F 26 Mean1.3469 ×   10 3 2.6899 ×   10 3 1.4149 ×   10 3 2.6587 ×   10 3 3.0652 ×   10 3 4.9700 ×   10 3
Std9.6715 ×   10 1 2.2460 ×   10 2 5.7298 ×   10 2 9.0222 ×   10 1 1.2139 ×   10 2 9.0600 ×   10 2
Rank142356
Significance ++++
F 27 Mean5.0001 ×   10 2 5.0001 ×   10 2 5.0001 ×   10 2 5.0001 ×   10 2 5.0001 ×   10 2 7.1600 ×   10 2
Std1.5786 ×   10 4 1.2917 ×   10 4 9.3150 ×   10 5 5.9917 ×   10 5 8.1534 ×   10 5 1.0700 ×   10 2
Rank111112
Significance
F 28 Mean4.9902 ×   10 2 4.9059 ×   10 2 4.9845 ×   10 2 5.0001 ×   10 2 5.0001 ×   10 2 1.2300 ×   10 3
Std3.0735 ×   10 0 4.2116 ×   10 0 1.3995 ×   10 0 6.6494 ×   10 5 6.6473 ×   10 5 8.4400 ×   10 2
Rank312445
Significance +
F 29 Mean3.8546 ×   10 2 9.3919 ×   10 2 4.6431 ×   10 2 1.6187 ×   10 3 2.0724 ×   10 3 1.9200 ×   10 3
Std4.5529 ×   10 1 2.1194 ×   10 2 4.2813 ×   10 1 1.8521 ×   10 2 1.8256 ×   10 2 6.1300 ×   10 2
Rank132465
Significance +++++
F 30 Mean2.8659 ×   10 2 2.3933 ×   10 2 1.7439 ×   10 4 1.4650 ×   10 7 2.4426 ×   10 6 4.7800 ×   10 7
Std5.8034 ×   10 1 6.7166 ×   10 0 2.1431 ×   10 4 6.8820 ×   10 6 1.5971 ×   10 6 8.9800 ×   10 7
Rank213546
Significance ++++
Average rank | 1.31 | 2.44 | 2.44 | 4.37 | 4.75 | 5.13
Final rank | 1 | 2 | 2 | 3 | 4 | 5
W/T/L | −/−/− | 19/5/5 | 20/9/0 | 26/3/0 | 25/4/0 | 27/2/0
Table 6. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the function χ 2 ( X ) .
Function | Metrics | HE-DEPSO | DEPSO | SHADE | CoDE | DE | PSO
χ 2 ( X ) | Mean | 1.7267 × 10⁻²⁸ | 3.4971 × 10⁻²⁸ | 7.2296 × 10⁻³ | 2.3720 × 10⁻⁵ | 6.1610 × 10⁻¹³ | 7.4247 × 10⁰
 | Std | 1.4496 × 10⁻²⁸ | 5.6400 × 10⁻²⁸ | 8.7102 × 10⁻³ | 7.7748 × 10⁻⁵ | 3.4294 × 10⁻¹² | 1.6815 × 10¹
 | Classification | 1 | 2 | 5 | 4 | 3 | 6
 | Significance | | + | + | + | + | +
Final classification | | 1 | 2 | 5 | 4 | 3 | 6
W/T/L | | −/−/− | 1/0/0 | 1/0/0 | 1/0/0 | 1/0/0 | 1/0/0