Article

Quantum Dynamical Interpretation of the Mean Strategy

1 Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu 610213, China
2 School of Computer Science and Engineering, Southwest Minzu University, Chengdu 610041, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 Sichuan Digital Transportation Technology Co., Ltd., Chengdu 610095, China
* Author to whom correspondence should be addressed.
Current address: Tianfu Yongxing Laboratory, No. 168, Wenxing Section, Dajian Road, Airport Development Zone, Shuangliu District, Chengdu 610225, China.
Entropy 2024, 26(9), 719; https://doi.org/10.3390/e26090719
Submission received: 29 June 2024 / Revised: 13 August 2024 / Accepted: 20 August 2024 / Published: 23 August 2024
(This article belongs to the Section Quantum Information)

Abstract

The method of quantum dynamics is employed to investigate the mean strategy in swarm intelligence algorithms. The physical significance of the population mean point is explained as the location where the optimal solution can be found with the highest likelihood once a quantum system has reached a ground state. Through the use of the double well function and the CEC2013 test suite, controlled experiments are conducted to perform a comprehensive performance analysis of the mean strategy. The empirical results indicate that implementing the mean strategy not only enhances solution diversity but also yields accurate, efficient, stable, and effective outcomes for finding the optimal solution.

1. Introduction

The mean strategy is a common operation in Swarm Intelligence (SI) algorithms. Many well-known algorithms such as Particle Swarm Optimization (PSO) [1], Quantum-behaved Particle Swarm Optimization (QPSO) [2], Cuckoo Search (CS) [3], and others incorporate this operation. The advantages of employing the mean strategy are manifold: firstly, it is easy to implement and parallelize; secondly, it helps avoid being trapped in local optima; thirdly, it accelerates the convergence speed of the algorithm; and fourthly, it enables a better balance between global and local search in the algorithm, leading to improved performance.
However, despite its widespread use, a precise theoretical explanation for why the mean strategy is effective is currently lacking. This knowledge gap presents a significant challenge for SI algorithms. A multitude of optimization algorithm models, particularly those based on natural systems, offer explanations for the functioning of optimization algorithms from their own perspectives, often without a complete mathematical framework [4]. The absence of a rigorous and comprehensive mathematical foundation has hindered the transformation of optimization algorithms into a formal scientific discipline [5]. The verification of the effectiveness and high efficiency of computational intelligence algorithms through numerical experimental methods and specific application means remains an important method for studying computational intelligence algorithms [6].
Fortunately, Feynman has provided us with clear guidance. He stated, “Nature is not classical, and if you want to simulate nature, you’d better turn it into quantum mechanics”. During our team’s research on the SI algorithm with quantum dynamics, we discovered that when the objective function of the optimization problem is interpreted as the potential energy within a quantum system, the optimization problem can be reformulated as the task of determining the ground state of a constrained quantum system. Consequently, it becomes feasible to develop and analyze SI algorithms by simulating the dynamic evolution process of the physical optimization system toward its ground state, thus establishing a theoretical model for SI algorithms. Table 1 shows the similarities between the quantum dynamics and the SI algorithm.
Additionally, we investigate the mean strategy in the SI algorithm using a quantum dynamic frame (QDF). This article presents several significant contributions, which are summarized as follows:
(1) Utilizing a novel theoretical method, quantum dynamics, to study and analyze the mean strategy of the optimization algorithm, offering a fresh perspective for optimization algorithm research.
(2) Employing the double well function and the CEC2013 test suite, conducting controlled experiments, performing comprehensive analyses of the mean strategy, and demonstrating its effectiveness.
(3) Analyzing the convergence process of the optimization algorithm using the wave function.
The remainder of this article is organized as follows. Section 2 reviews related work on the mean strategy in SI algorithms, and Section 3 introduces the optimization problem from the viewpoint of quantum dynamics. The theoretical underpinning of the mean strategy is expounded in Section 4, while Section 5 presents the experimental design and results. Conclusions and future research directions are outlined in Section 6.

2. Related Work

This section summarizes related work on the mean strategy in SI algorithms.
In 2000, Kennedy demonstrated the benefits of individual particle learning from the center of the group it was currently located in, which significantly improved the algorithm’s performance [7]. In 2013, Xu utilized average velocity information in PSO to devise a feedback control strategy. By introducing average velocity into the particle position update formula, the convergence speed and accuracy of PSO were significantly enhanced [8]. During the same year, Beheshti et al. introduced Median-oriented Particle Swarm Optimization (MPSO) to conduct a global search over the entire search space, accelerating convergence speed while avoiding local optima [9]. This was achieved by incorporating the median position of particles and the worst and median fitness values of the swarm into the standard PSO. The results demonstrated that MPSO substantially improved the performance of the PSO paradigm in terms of convergence speed and the ability to find global or good near-global optima. In 2016, Yu et al. discovered that introducing the mean coordinate vector in the update formula or adding the individual average velocity term effectively enhanced the algorithm’s performance [10].
In 2004, Sun et al. discovered that in the revised QPSO, the evaluation of the parameter L depended on the mean best position, which demonstrated relative stability as the population evolved [11]. In 2008, Cai et al. introduced a weighting strategy for the average position of particles. This strategy involved weighting the average optimal position based on the fitness value of particles, aiming to achieve a better balance between global search and local search within the Quantum-behaved Particle Swarm Optimization (QPSO) and consequently improve its performance [12,13]. Furthermore, a QPSO algorithm based on a truncated mean stabilization strategy has been proposed, which aims to enhance population diversification and increase convergence efficiency. Experimental results have demonstrated that the search accuracy and convergence of the QPSO algorithm with the truncated mean stabilization strategy outperform those of other typical PSO variants [14].
Many other algorithms, apart from PSO and QPSO, also incorporate the mean strategy. At the 2015 International Computing and Business Intelligence Conference, Wang et al. proposed the Elephant Herding Optimization algorithm (EHO) [15], in which the elephant migration behavior with the best fitness was entirely influenced by the mean vector of elephants. In 2017, Cheung et al. enhanced the CS algorithm [16] by integrating a quantum model into the CS algorithm and introducing a non-homogeneous search strategy based on the quantum mechanism. Under this strategy, there was a probability that the cuckoo’s coordinate updating would align with the mean coordinate of birds, resulting in a significantly improved algorithm compared to the original CS algorithm. During the same year, the MGWO algorithm was developed by modifying the position update (encircling behavior) equations of the Gray Wolf Optimizer (GWO) [17], and the results illustrate that the performance of the modified variant is capable of achieving the best solutions with a high level of accuracy in classification and improved avoidance of local optima [18]. The accuracy and convergence performances of the modified variant were tested on several well-known classical sine datasets and cantilever beam design functions [19]. In 2019, Huang et al. improved the bat algorithm and introduced the Gaussian Quantum Bat Algorithm (GQBA) with quantum behavior. This algorithm utilized an average position direction flight strategy to assist bats in escaping local optimal solutions, thereby enhancing the optimization capability of the algorithm [20]. Also in 2019, Hiba et al. proposed a center-based mutation for the SHADE algorithm (CSHADE). In its mutation scheme, the base vector for SHADE’s mutation was replaced with center-based sampled candidate solutions using the normal distribution. 
Experimental results showed that CSHADE outperformed the Success-History Based Parameter Adaptation Differential Evolution (SHADE) algorithm across a majority of benchmark functions in terms of solution accuracy [21].
The various algorithms and their enhanced versions mentioned above appear to integrate a fundamental operation to ensure the optimization algorithm’s performance—namely, the utilization of mean information. The effective utilization of mean information can enhance the solution accuracy of algorithms. However, to our knowledge, these algorithms have not provided a clear explanation of the specific significance of mean information in the SI algorithm.
In 2019, our team’s Ye et al. proposed a new Multi-scale Quantum Harmonic Oscillator Algorithm (MQHOA) with a Truncated Mean Stabilization policy (TS-MQHOA). Theoretical and experimental analyses indicate that the truncated mean stabilization strategy helps diversify populations and improve convergence efficiency. Furthermore, experimental results on complex test problems demonstrate that the proposed TS-MQHOA, in most function evaluations, achieves better convergence toward the global optimum compared with several renowned heuristic algorithms based on swarm intelligence [22]. While truncating the mean has some effect, it leads to a loss of group diversity, and we believe that the authors’ theory and experimental methods are incomplete. Subsequently, in the same year, the mean strategy of the MQHOA algorithm was explained from the perspective of local optimal solutions, and various replacement strategies were compared. Experimental results showed that the algorithm’s performance was not significantly improved by the random migration strategy [23]. Then, in 2024, we conducted a parameter estimation of the ground state wave function through the maximum likelihood estimation method to explain the mathematical significance of the population mean point from the dynamics perspective [24]. Building on this work, we now aim to elucidate the essential significance of the mean strategy from the perspective of quantum dynamics and demonstrate the effectiveness of the mean strategy using a quantum harmonic oscillator model.

3. Quantum Dynamic Frame for Swarm Intelligence Algorithms

The quantum dynamics frame (QDF) for swarm intelligence algorithms is a paradigm that transforms the iterative evolution process of an optimization algorithm into the evolution of a quantum system from a constrained state to its ground state (Figure 1).

3.1. Quantum Dynamic Equation of Optimization Problem

From a physics perspective, the iterative process of optimization algorithms operating on a classical computer can be likened to dynamics evolving over time. Some scholars have begun to recognize the connection between optimization algorithms and dynamical systems [25]. To establish a mathematical–physical model of the optimization algorithm, the most direct approach is to seek a time evolution equation that describes the optimization algorithm. This equation is known as the kinetic equation. Therefore, we naturally think of the Schrödinger equation. The objective function in the optimization problem is considered as the constrained potential energy in the quantum system, thereby transforming the solution of the optimization problem into the solution of the constrained ground state wave function problem. The Schrödinger equation of a single particle moving in one dimension in quantum mechanics is as follows:
$$ i\hbar \frac{\partial \Psi(x,t)}{\partial t} = \left[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) \right] \Psi(x,t). \tag{1} $$
The dynamic equation of the optimization algorithm is not directly related to the microscopic particle dynamics equation. In order to obtain the Quantum Dynamic Equation (QDE) of the optimization algorithm, the objective function f ( x ) in the optimization algorithm is utilized as the constraint for the quantum system’s potential energy, expressed as V ( x ) = f ( x ) . By establishing this equivalence, the time-dependent Schrödinger equation is transformed into the QDE of the optimization algorithm
$$ i\hbar \frac{\partial \Psi(x,t)}{\partial t} = \left[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + f(x) \right] \Psi(x,t). \tag{2} $$
Since the iterative process of the optimization algorithm is a process of continuous motion evolution, this project regards the iterative evolution process of the optimization algorithm as a dynamic process and establishes the time-dependent dynamic equation of the optimization algorithm.

3.2. Diffusion Model of QDF

In optimization problems, the objective function f ( x ) is considered as a black box, and the analytic expression of the objective function cannot be directly obtained in optimization problems. Therefore, Taylor expansion has been used to study the influence of the objective function f ( x ) on the dynamic process.
In mathematics, the Taylor series expresses a function f(x), whose derivatives of all orders exist at a point x₀ in its domain, as a power series. Let the estimate of the objective function under the black box be f_B(x); then,
$$ f(x) \approx f_B(x) = \sum_{j=0}^{n} \frac{f^{(j)}(x_0)}{j!} (x - x_0)^j. \tag{3} $$
Then QDE is transformed into
$$ i \frac{\partial \Psi(x,t)}{\partial t} = \left[ -D \frac{\partial^2}{\partial x^2} + f_B(x) \right] \Psi(x,t). \tag{4} $$
Since the Taylor zero-order approximation of the objective function is a constant, it can be scaled and translated to 0; letting D = ℏ/2m, the dynamic equation reduces to the free-particle equation:
$$ i \frac{\partial \Psi(x,t)}{\partial t} = -D \frac{\partial^2 \Psi(x,t)}{\partial x^2}. \tag{5} $$
The free-particle equation in quantum mechanics is a diffusion equation with an imaginary diffusion coefficient. To facilitate analysis and computation, the real time in Equation (5) is converted into an imaginary time (the Wick rotation t → −iτ), which yields the standard diffusion equation (6):
$$ \frac{\partial \Psi(x,\tau)}{\partial \tau} = D \frac{\partial^2 \Psi(x,\tau)}{\partial x^2}. \tag{6} $$
Its corresponding Green’s function is a Gaussian:
$$ G(x,\tau; x_0, 0) = \frac{1}{\sqrt{4\pi D \tau}} \, e^{-\frac{(x - x_0)^2}{4 D \tau}}. \tag{7} $$
The Taylor zero-order approximation of the objective function corresponds to the black-box model of the objective function. This shows that the optimization algorithm’s population is normally sampled when the objective function is a black box. The sampling formula is
$$ x' = x + \sigma N(0,1), \tag{8} $$
where $\sigma = \sqrt{2D\Delta\tau} = \sqrt{\hbar\Delta\tau/m}$ is the scale of the sampling.
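The sampling step in Equation (8) can be sketched in a few lines of Python; the function name and the list-based population representation are illustrative choices of this sketch, not the paper's:

```python
import random

rng = random.Random(0)  # fixed seed for reproducibility

def normal_sampling(positions, sigma):
    """One diffusion step of Equation (8): each particle moves to
    x' = x + sigma * N(0, 1), a draw from the Gaussian kernel of Equation (7)."""
    return [x + rng.gauss(0.0, sigma) for x in positions]

# usage: resample a small population at sampling scale sigma = 0.1
new_pop = normal_sampling([0.0] * 5, sigma=0.1)
```

As the scale σ shrinks over the iterations, the same kernel turns from a global search into a local one.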

3.3. Reaction–Diffusion Model of QDF

The objective function is expanded in a first-order Taylor series; the QDE of the optimization problem is then a diffusion equation with a drift term:
$$ i \frac{\partial \Psi(x,t)}{\partial t} = \left[ -D \frac{\partial^2}{\partial x^2} + f_B(x) \right] \Psi(x,t) = \left[ -D \frac{\partial^2}{\partial x^2} + f'(x_0)(x - x_0) \right] \Psi(x,t). \tag{9} $$
This is a reaction–diffusion equation with an imaginary diffusion coefficient.
Equation (9) shows that the slope is obtained by comparing the values of two successive samples. Under the QDF, the values of two adjacent samples are expressed by the energies E_a and E_b (E_a being the low-energy state and E_b the high-energy state); following the energy superposition in Equation (15), the superposition of n energy states is approximated by the superposition of the two energy states E_a and E_b. This is the two-energy-level approximation (TELA). The energy superposition formula is then approximately expressed as a two-level formula:
$$ \Psi(x,t) = c_a \psi_a \exp(-i E_a t) + c_b \psi_b \exp(-i E_b t). \tag{10} $$
The probability of two energy levels is mainly determined by the exponential term of Equation (15), so the worse solution acceptance probability can be approximated as
$$ P_b \approx \frac{\exp(-i E_b t)}{\exp(-i E_a t) + \exp(-i E_b t)} \approx e^{-(E_b - E_a) i t}. \tag{11} $$
Through the TELA, the algorithm introduces a priori knowledge based on assumptions about the objective problem, so as to orient the movement direction of the particles and determine the worse-solution acceptance probability:
$$ P_b = \begin{cases} 1, & \text{if } f(x_b) < f(x_a) \\ e^{-(E_b - E_a) i t}, & \text{if } f(x_b) \ge f(x_a). \end{cases} \tag{12} $$
After the Wick rotation, the probability of worse solution acceptance is
$$ P_b = \begin{cases} 1, & \text{if } f(x_b) < f(x_a) \\ e^{-(E_b - E_a) \tau}, & \text{if } f(x_b) \ge f(x_a). \end{cases} \tag{13} $$
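A minimal sketch of the acceptance rule in Equation (13). Identifying the energy gap E_b − E_a with the difference of the sampled objective values is a simplifying assumption of this sketch:

```python
import math

def accept_prob(f_a, f_b, tau):
    """Worse-solution acceptance probability of Equation (13).
    f_a: objective value of the current sample, f_b: objective value of the
    new sample, tau: imaginary time. The energy gap E_b - E_a is
    approximated here by f_b - f_a (an assumption for illustration)."""
    if f_b < f_a:
        return 1.0                       # better solutions are always accepted
    return math.exp(-(f_b - f_a) * tau)  # worse ones decay exponentially in tau

# as tau grows, accepting a worse solution becomes increasingly unlikely
p_early = accept_prob(0.5, 1.0, tau=0.1)
p_late = accept_prob(0.5, 1.0, tau=10.0)
```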

3.4. Quantum Harmonic Oscillator Model of QDF

If the sampling point is the extreme point, the QDE of the objective function under Taylor’s second-order approximation near the global best point is isomorphic to the quantum harmonic oscillator equation:
$$ i \frac{\partial \Psi(x,t)}{\partial t} = \left[ -D \frac{\partial^2}{\partial x^2} + f_B(x) \right] \Psi(x,t) = \left[ -D \frac{\partial^2}{\partial x^2} + \frac{1}{2} f''(x_0)(x - x_0)^2 \right] \Psi(x,t). \tag{14} $$
In this case, the complicated objective function is approximated by the harmonic oscillator potential function of a single peak. Then, solving the optimization problem is transformed into solving the ground state wave function of the quantum harmonic oscillator.
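This local harmonic picture is easy to verify numerically: near an extreme point, an objective function agrees closely with the quadratic potential of Equation (14). The test function below is an arbitrary choice made for this sketch, not one from the paper:

```python
import math

def f(x):
    # arbitrary smooth multimodal function with an extreme point at x = 0
    return x ** 2 - math.cos(3 * x)

x0 = 0.0                                            # extreme point: f'(x0) = 0
h = 1e-4
f2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h ** 2   # numerical f''(x0), about 11

def harmonic(x):
    """Second-order Taylor approximation around x0, i.e. the harmonic
    oscillator potential of Equation (14) plus the constant f(x0)."""
    return f(x0) + 0.5 * f2 * (x - x0) ** 2

# near x0 the harmonic approximation tracks the objective very closely
xs = [i / 200.0 - 0.05 for i in range(21)]          # grid on [-0.05, 0.05]
err = max(abs(f(x) - harmonic(x)) for x in xs)
```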

3.5. Basic Iteration Process of Optimization Algorithm

When the iterative process of swarm intelligence algorithms is explained from the perspective of quantum dynamics, the core iterative operations of these algorithms become fundamental requirements within the theoretical framework of quantum dynamics. The three models derived from the QDF provide a general basic iterative process (BIP) (see Algorithm 1).
Algorithm 1 Pseudocode of BIP
  • population initialization within the defined domain
  • while does not meet the algorithm termination conditions do
  •     while does not meet the ground state condition do
  •         current scale group normal random sampling
  •         adjusting particle density based on the sampling of the objective function
  •     end while
  •     scale reduction
  • end while
  • output
The BIP of this swarm intelligence algorithm can serve as the foundation for the design and improvement of the swarm intelligence algorithm.

3.6. Ground State Convergence Theory

The general solution of Equation (2) is
$$ \Psi(x,t) = \sum_n c_n \psi_n(x) \exp(-i E_n t). \tag{15} $$
In this paper, the reduced Planck constant is set to 1 in natural units. The solution after the Wick rotation [26] is
$$ \Psi(x,\tau) = \sum_n c_n \psi_n(x) \exp(-E_n \tau). \tag{16} $$
Following Equation (16), the BIP [27] inevitably converges to the optimal solution as the iterations progress [28]. The solution $|\Psi(\tau)\rangle$ is approximated by the trial state $|\Psi(\phi(\tau))\rangle$ with parameters $\phi(\tau) = (\phi_1(\tau), \phi_2(\tau), \ldots, \phi_N(\tau))$. As τ approaches infinity, the ground state is obtained through the evolution of an arbitrary initial state $\Psi_{\mathrm{arb}}(x, \tau)$:
$$ \frac{dE_\tau}{d\tau} = -\sum_{j,k} C_j A_{j,k}^{-1} C_k \le 0 \;\Longrightarrow\; \lim_{\tau \to \infty} \Psi(x,\tau) = c_0 \psi_0(x), \tag{17} $$
where $A_{j,k} = \mathrm{Re}\,\langle \partial\phi/\partial\phi_j \,|\, \partial\phi/\partial\phi_k \rangle$ and $C_j = \mathrm{Re}\,\langle \partial\phi/\partial\phi_j \,|\, H \,|\, \phi \rangle$ [28]. This is the convergence theory of the quantum dynamic frame (QDF).

4. Theory of Mean Strategy

4.1. Analysis of the Characteristics of the Quantum Harmonic Oscillator Model

The quantum harmonic oscillator is a well-known model in quantum mechanics, extending the classical harmonic oscillator. It is particularly valuable for approximating the small vibrations of microscopic systems near equilibrium points. Owing to its simple analytical solutions, it finds a wide application in physics for modeling the complex vibrations of crystals around their equilibrium positions.
Furthermore, in optimization problems, the objective function is often complex, but only the probability distribution near the global optimal solution is essential. Therefore, the quantum harmonic oscillator potential well model can be employed to approximate the complicated objective function. The central position of the potential well represents the mean point position, where the maximum probability of finding the global optimal solution occurs [24].
As the image of the quantum harmonic oscillator (Figure 2) and its ground state wave function (Figure 3) are centrosymmetric, we regard the central position as the targeted solution. Placing emphasis on the center within the population leads us towards the optimal solution.
In conclusion, selecting the population’s center as the position with the highest probability of the optimal solution can enhance our comprehension of the underlying physical principles and yield improved outcomes in practical applications.

4.2. The Physical Implications of the Mean Strategy

The lowest energy state of the quantum harmonic oscillator is known as the ground state, and the corresponding wave function is denoted by
$$ \Psi(x) = \left( \frac{1}{2\pi\sigma^2} \right)^{1/4} \exp\!\left( -\frac{x^2}{4\sigma^2} \right), \tag{18} $$
where $\sigma = \frac{1}{2\sqrt{a}}$. This expression represents the standard form of the Gaussian function. When the equivalent subsystem reaches the ground state, the probability distribution of the solution follows a normal distribution $N(\bar{x}, \sigma^2)$, where $\bar{x}$ represents the mean of the particles.
Based on the properties of the normal distribution, if the solution error is represented by ε , the integral x ε x + ε N x ¯ , σ 2 d x can be maximized when the sampling point x = x ¯ ; thus, the probability of finding the optimal solution near the mean point is maximized. In optimization problems, although the objective function is often complicated, it suffices to know the probability distribution around the global optimal solution. Therefore, we can use the quantum harmonic oscillator potential well model to approximate the objective function, and its central position (mean point position) serves as the position with the maximum probability of finding the global optimal solution.
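This maximization can be checked directly with the normal CDF. The helper below (illustrative names, built on the error function) computes the probability that a draw from N(x̄, σ²) lands within ε of a sampling point x, and confirms that the probability peaks at x = x̄:

```python
import math

def norm_cdf(t):
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def prob_near(x, mean, sigma, eps):
    """Integral of N(mean, sigma^2) over [x - eps, x + eps]."""
    return norm_cdf((x + eps - mean) / sigma) - norm_cdf((x - eps - mean) / sigma)

# the probability of landing within eps of the sampling point is largest
# when the sampling point coincides with the population mean
p_mean = prob_near(0.0, mean=0.0, sigma=1.0, eps=0.1)
p_off = prob_near(0.5, mean=0.0, sigma=1.0, eps=0.1)
```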
In Figure 2, we project the actual sampling points of the objective function f(x) onto its approximating function. Since the approximating function is an upward-opening quadratic, Equation (19) obviously holds:
$$ f_\sigma(\bar{x}) < f_\sigma(x_{\mathrm{worst}}), \quad \bar{x} = \frac{1}{k} \sum_{i=1}^{k} x_i, \quad x_{\mathrm{worst}} = \arg\max_{x_i} \{ f(x_i) \}, \; i \in \{1, \ldots, k\}. \tag{19} $$
Here, k denotes the number of sampling points. In the process of approximating the objective function f(x) by an upward-opening quadratic, the fitness f_σ(x̄) of the sampling-point mean x̄ is always better than that of the worst solution. Therefore, we can use a simple strategy based on mean information, i.e., generate a new solution at the coordinate mean of the population to replace the current worst solution, completing the migration of the entire population towards the mean coordinate of the group. This is called the “mean strategy”.
The complexity of the objective function makes it impossible to achieve accurate approximations using a single-scale harmonic oscillator potential energy. Therefore, a multi-scale harmonic oscillator potential well can be used to estimate the position of the minimum value of the complex function. As depicted in Figure 4a, at a larger scale, the potential well’s constraint on the harmonic oscillator is minimal, resulting in only slight initial positioning of the minimum value. Conversely, Figure 4b shows that at a smaller scale, the potential well constrains the motion of the harmonic oscillator within a limited range, allowing for precise localization of the minimum value’s position.

4.3. Mean Strategy of SI Algorithm

If the sampling point is the extreme point, the QDE of the objective function is the quantum harmonic oscillator equation. The corresponding pseudo code (Algorithm 2) of BIP with the mean strategy is as follows:
Algorithm 2 Pseudocode of BIP with mean strategy
Input: particle number k, left boundary L B , right boundary R B , domain [ L B , R B ] , scale σ = R B − L B
Output:  arg min x ( f ( x ) )
  • Generate k copies of free particle in the domain
  • while termination condition is not met do
  •     while not reached ground state do
  •         Normal sampling according to Equation (8)
  •         Value comparing according to Equation (9)
  •         Worse solution accepting according to Equation (13)
  •     end while
  •     Mean value replacing (Mean strategy) according to Equation (19)
  •     Scale reducing
  • end while
This algorithm consists of two loops, one inner and one outer. The inner loop is used to find the ground state at the current scale: it performs normal sampling, compares the current sampled value with the value from the previous sampling, retains the better solution or accepts a worse one according to Equation (13), and then determines whether the ground state has been reached at the current scale. If the ground state has not been reached, the process continues as described above. If the ground state is reached, the algorithm enters the outer loop, applies the mean strategy, and performs the scale reduction operation. Meanwhile, the outer loop controls the entire iteration process of the algorithm based on the requirements for solution accuracy and the number of iterations.
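The two-loop structure can be sketched as follows. This is an illustrative simplification rather than the authors' reference implementation: a fixed inner-iteration budget stands in for the ground-state test, the imaginary time is taken as τ = 1/σ, and the energies in Equation (13) are identified with objective values; all three are assumptions of this sketch.

```python
import math
import random

def bip_mean(f, lb, rb, k=30, shrink=2.0, sigma_min=1e-6, inner_iters=50, seed=1):
    """Sketch of Algorithm 2 (BIP with the mean strategy) in one dimension."""
    rng = random.Random(seed)
    xs = [rng.uniform(lb, rb) for _ in range(k)]    # population initialization
    best = min(xs, key=f)
    sigma = rb - lb                                 # initial sampling scale
    while sigma > sigma_min:                        # outer loop
        tau = 1.0 / sigma                           # imaginary time (assumption)
        for _ in range(inner_iters):                # inner loop: approach ground state
            for i in range(k):
                x_new = xs[i] + rng.gauss(0.0, sigma)           # Equation (8)
                d = f(x_new) - f(xs[i])                         # Equation (9)
                if d < 0 or rng.random() < math.exp(-d * tau):  # Equation (13)
                    xs[i] = x_new
                if f(xs[i]) < f(best):
                    best = xs[i]
        mean = sum(xs) / k                          # mean strategy, Equation (19)
        worst = max(range(k), key=lambda i: f(xs[i]))
        xs[worst] = mean                            # mean replaces the worst particle
        sigma /= shrink                             # scale reduction
    return best

# usage: a simple quadratic with its minimum at x = 1
best = bip_mean(lambda x: (x - 1.0) ** 2, -100.0, 100.0)
```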

5. Experiment

The experiment aimed to investigate the impact of the mean strategy on the algorithm from two perspectives: process and results. The former analyzed the role of the mean strategy in the BIP-mean algorithm by treating the objective function (the double well function) as a white box; theoretical analysis was conducted to understand how the mean strategy affects the algorithm’s performance. The latter treated the objective function (the CEC2013 test suite) as a black box and selected three algorithms from the BIP cluster to observe and analyze the impact of the mean strategy on algorithm performance. These algorithms are the BIP algorithm, which does not use the mean strategy; the BIP-mean algorithm, which uses the mean strategy; and the MQHOA-wmn algorithm, which uses the mean strategy with incomplete mean information.

5.1. Test Functions

In this article, the test function refers to the objective function selected for evaluating the algorithm.

5.1.1. Double Well Function

The experiment used a test function known as the double well function (DWF), which is an important model originating from quantum physics. The DWF has been used for decades to analyze energy, wave function, and tunnel effect, making it an ideal model for optimization algorithm research [29,30]. It has also been applied to test quantum-inspired optimization algorithms and demonstrate the principle of quantum annealing for optimization problems [31,32,33].
The DWF can be accurately controlled by adjusting its function parameters [34], including the potential well depth, potential well distance, and barrier height. This allows for precise tracking of the physical image of the iterative process of the algorithm, especially in studying the dynamic characteristics of the wave function.
The one-dimensional asymmetric DWF is expressed as follows:
$$ f(x) = \frac{V (x^2 - a^2)^2}{a^4} + \delta x. \tag{20} $$
The DWF attains a minimum near x ≈ ±a, and the difference between the two minima is determined by the parameter δ: the smaller δ is, the smaller the gap between the two minima. The parameter V determines the barrier between the minima: the larger V is, the higher the barrier. By adjusting the three parameters a, δ, and V, the position and difficulty of the optimal solution of the objective function can be changed. Since the optimal solution of the DWF is not at the center of all extreme points, it avoids the sampling advantage that optimization algorithms have for the center position.
The high-dimensional asymmetric DWF can be realized by superimposing multiple one-dimensional DWFs. Its expression is as follows:
$$ f_n(x) = \sum_{i=1}^{n} \left[ \frac{V (x_i^2 - a^2)^2}{a^4} + \delta x_i \right]. \tag{21} $$
As a standard test function, the high-dimensional asymmetric DWF allows a comprehensive analysis of the convergence process and performance of an algorithm through the adjustment of its parameters and dimensionality. The high-dimensional asymmetric DWF has one global optimum and 2^n extreme points.
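Equations (20) and (21) translate directly into code; the default parameter values below are illustrative choices, not the experimental settings of the paper:

```python
def dwf(x, V=1.0, a=1.0, delta=0.1):
    """One-dimensional asymmetric double well function, Equation (20)."""
    return V * (x ** 2 - a ** 2) ** 2 / a ** 4 + delta * x

def dwf_n(xs, V=1.0, a=1.0, delta=0.1):
    """High-dimensional asymmetric DWF, Equation (21): a sum of
    independent one-dimensional wells, one per coordinate."""
    return sum(dwf(x, V, a, delta) for x in xs)

# the two wells lie near x = +-a; the tilt delta makes the well near
# x = -a the deeper (global) one, while V sets the barrier height at x = 0
left = dwf(-1.0)     # deeper well
right = dwf(1.0)     # shallower well
barrier = dwf(0.0)   # barrier top, equal to V
```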
Overall, the DWF is a versatile and relevant test function for optimization algorithm research due to its clear physical background and controllable parameters. Its one-dimensional and high-dimensional forms can provide valuable insights into the convergence and performance of optimization algorithms.

5.1.2. CEC2013 Test Suite

Table 2 lists the CEC2013 test suite, where n represents the dimensionality of the problem. This test set comprises a total of 28 functions that are categorized into unimodal functions ( f 1 – f 5 ), multimodal functions ( f 6 – f 20 ), and composite functions ( f 21 – f 28 ) based on their functional structures. The domain of definition for all functions is [−100, 100]^n.

5.2. Parameter Settings

To examine the effectiveness of the mean strategy, this study selected three representative BIP-like algorithms for experimental observation: BIP, BIP-mean (BIP with the mean strategy), and MQHOA-wmn, in which the worst particle is replaced by the mean position computed without the best and worst particles. While BIP is a bare-bones algorithm without a mean strategy, BIP-mean adds only the mean strategy to BIP. MQHOA-wmn, in turn, is a BIP-like algorithm with a mean strategy but incomplete mean information, as it excludes the optimal and worst solutions. To ensure consistency, we set the number of sampling points (k) in the experimental group to 30, while the scale reduction coefficient (also known as the scale coefficient) was set to 2. All control experiments were repeated 51 times, and the maximum number of iterations for a single run was 10,000 × d, where d represents the dimensionality. The experimental dimension was set to 30 (Table 3).

5.3. Convergence Analysis

Through the analysis of the wave function evolution process, this paper investigates the convergence of the algorithm. Figure 5 displays a two-dimensional schematic diagram, particle distribution state, and top view of the ground state wave function on the objective function from left to right, with t representing the number of iterations. The results indicate that as the scale gradually decreases, the wave function converges towards the global optimal region, and the algorithm transitions from a global search to a local search.

5.4. Exploration and Exploitation

Exploration and exploitation are two abilities of SI algorithms. The process of accelerating convergence is also a conversion between exploration and exploitation. The mean-oriented learning strategy based on multi-scale ground state wave functions effectively promotes the transformation of sampling populations to the exploitation process. Exploration can be understood as the process of completely searching for new areas in the solution space, while exploitation can be understood as the process of searching for solutions near previously visited points [35]. Reasonably adjusting the relationship between exploration and exploitation is beneficial to improving the performance of the algorithm. Ref. [36] indicates that the key to the optimization process is to balance exploration and exploitation. However, the two are not only opposed to each other but are, sometimes, mutually reinforcing, and promoting exploitation does not necessarily harm exploration. In BIP, the iterative process of particles is affected by the scale σ . In the early iteration stage, the scale σ is large. When the system reaches the ground state, the ground state wave function at this scale is widely distributed, and many particles are in previously unknown regions and in the exploration process. As the scale decreases, the ground state wave function gradually converges, and a large number of particles turn from global to optimal solutions. The transformation between the exploration process and the exploitation process is analyzed by tracking the iterative process of particles and the convergence process of the wave function.
Figure 5 illustrates that in the early stages of iteration, the search space is large, particles are widely dispersed, and many sampling points are in the exploration state. As the algorithm progresses and the scale decreases, particles gradually converge towards the global optimal solution and transition into the exploitation state. Concurrently, the ground state wave function also converges towards the global optimal solution, increasing certainty and enabling a more precise search. The mean strategy facilitates this process by promoting exploitation.
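The scale-driven shift from exploration to exploitation described above can be sketched with a toy multi-scale sampler. This is an illustrative reconstruction, not the authors' BIP implementation: the one-dimensional `double_well` function, the geometric shrink schedule, and the population size are all assumptions, and the Gaussian stands in for the harmonic-oscillator ground-state probability density.

```python
import numpy as np

def double_well(x):
    # Simple 1-D double well used as a stand-in objective; its global minimum
    # lies near x = -1 (the +0.1x term breaks the symmetry between the wells).
    return x**4 - 2 * x**2 + 0.1 * x

rng = np.random.default_rng(0)

def multiscale_sampling(sigma0=2.0, shrink=0.5, n_scales=6, pop=50):
    """At each scale, sample the population from a Gaussian (ground-state-like
    density) centred on the current best point, then shrink the scale sigma."""
    centre, sigma = 0.0, sigma0
    history = []
    for _ in range(n_scales):
        x = rng.normal(centre, sigma, size=pop)   # wide sigma -> exploration
        centre = x[np.argmin(double_well(x))]     # recentre on the best sample
        history.append((sigma, x.std()))          # sample spread tracks the scale
        sigma *= shrink                           # scale reduction -> exploitation
    return centre, history

best, history = multiscale_sampling()
```

As `sigma` shrinks, the sample spread recorded in `history` contracts and the centre settles into a well, mirroring the wave-function convergence in Figure 5.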
Figure 6 demonstrates that the mean strategy helps improve the evaluation accuracy and solution accuracy of the algorithm. Sampling points with poor fitness have a higher likelihood of entering the exploitation state by learning towards the mean value. This mechanism also promotes convergence by helping sampling points transition from exploration to exploitation, thereby enhancing the algorithm's performance. Furthermore, mean-direction learning is effective in escaping local optima, which are otherwise a challenge for traditional sampling methods.
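A minimal sketch may make learning towards the mean concrete. The update below, which pulls the worst-fitness points part of the way towards the population mean, is an illustrative form rather than the paper's exact operator; the `sphere` objective and the 50% replacement fraction are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Simple convex test objective (minimum at the origin).
    return np.sum(x**2, axis=-1)

def mean_direction_step(pop, frac=0.5):
    """Move the worst-fitness fraction of the population a random step
    towards the population mean point (illustrative mean-direction learning)."""
    fitness = sphere(pop)
    mean = pop.mean(axis=0)                            # population mean point
    worst = np.argsort(fitness)[-int(len(pop) * frac):]
    r = rng.uniform(0, 1, size=(len(worst), 1))
    pop = pop.copy()
    pop[worst] = pop[worst] + r * (mean - pop[worst])  # pull towards the mean
    return pop

pop = rng.normal(0, 5, size=(40, 2))
before = sphere(pop).mean()
for _ in range(30):
    pop = mean_direction_step(pop)
after = sphere(pop).mean()
```

Repeating the step contracts the population around its mean, so the average fitness `after` improves over `before`: poor points enter the exploitation state by moving towards the mean.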

5.5. Solution Accuracy Discussion of the Mean Strategy

To validate the effectiveness of the mean strategy, we conducted a control experiment. The results presented in Table 4 demonstrate that BIP-mean outperforms both BIP and MQHOA-wmn in terms of solving error, solving error mean, and optimal solution. Although MQHOA-wmn also performs well [23], it excludes the optimal and worst solutions when calculating the mean coordinates, which yields a larger evaluation error for the optimal solution under the current ground state than BIP-mean. Additionally, because fewer samples enter the maximum likelihood estimation process, MQHOA-wmn may be at a disadvantage at this stage compared to BIP-mean.
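The sampling argument can be checked with a small Monte Carlo experiment: when samples are drawn symmetrically around the optimum, the mean over all samples estimates the optimum location with lower error than a mean that discards the best and worst samples. This sketch only illustrates the statistical point under a Gaussian assumption; it does not reproduce either algorithm, and the sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
true_opt, sigma, n, trials = 0.0, 1.0, 20, 50000

# Samples drawn around the (known) optimum location.
s = rng.normal(true_opt, sigma, size=(trials, n))
s_sorted = np.sort(s, axis=1)

full_mean = s.mean(axis=1)                    # all samples (BIP-mean style)
trunc_mean = s_sorted[:, 1:-1].mean(axis=1)   # smallest and largest excluded

mse_full = np.mean((full_mean - true_opt) ** 2)
mse_trunc = np.mean((trunc_mean - true_opt) ** 2)
```

For Gaussian samples the full-sample mean is the minimum-variance unbiased estimator of the centre, so `mse_full` comes out below `mse_trunc`: discarding extreme samples costs estimation accuracy here (trimming only helps under heavy-tailed noise).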

5.6. Stability Discussion of the Mean Strategy

To capture the characteristics of the solving error distribution, we created boxplot diagrams from the data of each experiment (Figure 7). These diagrams display the maximum, minimum, median, and upper and lower quartiles of the algorithm's optimization results over repeated experiments, allowing us to directly assess the stability of the algorithm's solutions. A flatter boxplot whose ordinate tends towards zero indicates higher stability.
Our boxplot diagrams clearly demonstrate that the mean strategy experimental group, which utilized complete population coordinate information, achieved superior accuracy and stability on the CEC2013 test. This confirms the stability of the mean strategy in quantum systems constrained by the objective function potential energy.
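The five-number summary behind such boxplots, and the reading of a "flatter" box as higher stability, can be sketched as follows. The error samples here are hypothetical stand-ins, not the CEC2013 results.

```python
import numpy as np

def box_summary(errors):
    """Five-number summary drawn by a boxplot: min, Q1, median, Q3, max."""
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return {"min": errors.min(), "q1": q1, "median": med, "q3": q3, "max": errors.max()}

rng = np.random.default_rng(3)
# Hypothetical solving errors over 51 repeated runs of two optimizers.
stable = rng.normal(1.0, 0.1, 51)     # tight spread -> flat box -> stable
unstable = rng.normal(1.0, 2.0, 51)   # wide spread -> tall box -> unstable

iqr_stable = box_summary(stable)["q3"] - box_summary(stable)["q1"]
iqr_unstable = box_summary(unstable)["q3"] - box_summary(unstable)["q1"]
```

The interquartile range (box height) is the direct numeric counterpart of the visual "flatness" criterion: the smaller it is, the more stable the optimizer's repeated results.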

5.7. Effectiveness Discussion of the Mean Strategy

To assess the effectiveness of the mean strategy on the optimization algorithm, we conducted a Wilcoxon signed rank test on the results obtained from the control-group experiments. If adding the mean strategy reduces the mean value over repeated runs of the algorithm, compared to the control group without this behavior, and the Wilcoxon signed rank test is significant (p-value < 0.05 at the 0.05 significance level), then we consider the results of the algorithm with the mean strategy to be significantly better than those of the original algorithm, and vice versa.
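For illustration, a two-sided Wilcoxon signed rank test with the normal approximation can be implemented in a few lines. This is a sketch: in practice a library routine such as `scipy.stats.wilcoxon` would be used, and the tie-variance correction is omitted here.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed rank test for paired samples x, y (two-sided,
    normal approximation); zero differences are discarded."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    if n == 0:
        return 0.0, 1.0
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)  # sum of positive ranks
    mu = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

When one algorithm's paired errors are consistently smaller, the positive-rank sum collapses towards 0 (or its maximum) and the p-value falls below 0.05, matching the decision rule above.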
The experimental results are presented in Table 5. The optimization results achieved by BIP-mean on 23 test functions are significantly better than those of the original BIP algorithm in the series of control experiments. Although the mean strategy adopted by MQHOA-wmn is based on fewer samples, it still exhibits significant advantages over BIP on 21 test functions. Moreover, BIP-mean outperforms MQHOA-wmn on 19 test functions. These results demonstrate the effectiveness of mean replacement and suggest that a larger sample size leads to a more accurate estimation of the optimal solution.

5.8. Solving Speed Discussion of the Mean Strategy

To visualize the average iteration time of our experiments, we calculated and plotted the data in Figure 8. For a clear representation of the results, we normalized the time data and applied color coding; darker areas indicate that less time was consumed during the execution of the algorithm. Our findings show that BIP-mean outperforms BIP and MQHOA-wmn in terms of time efficiency, exhibiting a significant advantage in minimizing time consumption across all test functions studied.
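One plausible form of the normalization described for Figure 8 (taking the logarithm of each time relative to the fastest algorithm on that function, so that 0 marks the fastest) can be sketched as follows. The timing values here are hypothetical, not the measured results.

```python
import math

# Hypothetical average per-function run times (seconds) for three algorithms.
times = {
    "BIP":       [12.0, 30.0, 18.0],
    "BIP-mean":  [10.0, 24.0, 15.0],
    "MQHOA-wmn": [11.0, 28.0, 17.0],
}

def log_relative(times):
    """Normalise each function's column by its fastest algorithm and take the
    log, so 0 marks the fastest entry (an assumed reading of Figure 8's scale)."""
    n_funcs = len(next(iter(times.values())))
    mins = [min(times[a][j] for a in times) for j in range(n_funcs)]
    return {a: [math.log(t / m) for t, m in zip(ts, mins)] for a, ts in times.items()}

rel = log_relative(times)
```

With this scheme, the fastest algorithm's row is exactly 0 on every function, and larger values correspond to the lighter (slower) cells of the heatmap.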

6. Conclusions and Future Work

In this paper, the physical significance of the mean strategy in SI algorithms is revealed and explained with the quantum harmonic oscillator model of quantum dynamics. We prove that the probability of obtaining the optimal solution is highest when the mean value of the population coincides with the lowest potential energy point under the current constraint. This approach strikes an excellent balance between exploration and exploitation by increasing population diversity at the onset of the iteration and promoting faster convergence during the middle and later stages. Our experimental results demonstrate that employing the mean strategy yields superior accuracy, speed, stability, and efficiency in finding the optimal solution. Future research should focus on developing more accurate approximations of the objective function under complex potential well constraints.

Author Contributions

Conceptualization, F.W. and P.W.; methodology, F.W.; software, F.W. and Y.J.; validation, F.W. and Y.J.; formal analysis, F.W. and Y.J.; investigation, F.W. and Y.J.; resources, F.W. and Y.J.; data curation, F.W. and Y.J.; writing—original draft preparation, F.W. and Y.J.; writing—review and editing, F.W.; visualization, F.W. and Y.J.; supervision, P.W.; project administration, P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

Author Yuwei Jiao was employed by the company Sichuan Digital Transportation Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SI | Swarm Intelligence
PSO | Particle Swarm Optimization
QPSO | Quantum-behaved Particle Swarm Optimization
WQPSO | Weighted Quantum-behaved Particle Swarm Optimization
MPSO | Median-oriented Particle Swarm Optimization
MGWO | Mean Gray Wolf Optimization
QMBA | Quantum-behaved Bat Algorithm with Mean Best Position Directed
EHO | Elephant Herding Optimization
CS | Cuckoo Search
GQBA | Gaussian Quantum Bat Algorithm
SHADE | Success-History Based Parameter Adaptation Differential Evolution
CSHADE | Center-based mutation for Success-History Based Parameter Adaptation Differential Evolution
MQHOA | Multi-scale Quantum Harmonic Oscillator Algorithm
QDF | Quantum Dynamic Frame
QDE | Quantum Dynamic Equation
BIP | Basic Iteration Process
DWF | Double Well Function
TELA | Two Energy Level Approximation

References

  1. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Citeseer: Princeton, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  2. Sun, J.; Feng, B.; Xu, W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), Portland, OR, USA, 19–23 June 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 1, pp. 325–331. [Google Scholar]
  3. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 210–214. [Google Scholar]
  4. Wang, P.; Wang, F. Overview of Intelligent Optimization Algorithms from the Perspective of Quantum. J. Univ. Electron. Sci. Technol. China 2022, 51, 14. [Google Scholar]
  5. Wang, P.; Chen, Y.; Xin, G.; Jiao, Y.; Yin, X.; Yang, G.; Zhou, Y.; Mu, L.; Wang, F. A brief study on quantum dynamics of optimization algorithm. J. Southwest Minzu Univ. (Nat. Sci. Ed.) 2021, 47, 288–296. [Google Scholar]
  6. Zhang, J.; Zhan, Z. Computational Intelligence; Tsinghua University Press: Beijing, China, 2009. [Google Scholar]
  7. Kennedy, J. Stereotyping: Improving particle swarm performance with cluster analysis. In Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No. 00TH8512), La Jolla, CA, USA, 16–19 July 2000; IEEE: Piscataway, NJ, USA, 2000; Volume 2, pp. 1507–1512. [Google Scholar]
  8. Xu, G. An adaptive parameter tuning of particle swarm optimization algorithm. Appl. Math. Comput. 2013, 219, 4560–4569. [Google Scholar] [CrossRef]
  9. Beheshti, Z.; Shamsuddin, S.M.H.; Hasan, S. MPSO: Median-oriented particle swarm optimization. Appl. Math. Comput. 2013, 219, 5817–5836. [Google Scholar] [CrossRef]
  10. Yu, K.; Wang, X.; Wang, Z. Multiple learning particle swarm optimization with space transformation perturbation and its application in ethylene cracking furnace optimization. Knowl.-Based Syst. 2016, 96, 156–170. [Google Scholar] [CrossRef]
  11. Sun, J.; Xu, W.; Feng, B. A global search strategy of quantum-behaved particle swarm optimization. In Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, The Hague, Netherlands, 10–13 October 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 1, pp. 111–116. [Google Scholar]
  12. Xi, M.; Sun, J.; Xu, W. An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Appl. Math. Comput. 2008, 205, 751–759. [Google Scholar] [CrossRef]
  13. Cai, Y.; Sun, J.; Wang, J.; Ding, Y.; Tian, N.; Liao, X.; Xu, W. Optimizing the codon usage of synthetic gene with QPSO algorithm. J. Theor. Biol. 2008, 254, 123–127. [Google Scholar] [CrossRef]
  14. Zhou, N.R.; Xia, S.H.; Ma, Y.; Zhang, Y. Quantum particle swarm optimization algorithm with the truncated mean stabilization strategy. Quantum Inf. Process. 2022, 21, 42. [Google Scholar] [CrossRef]
  15. Wang, G.G.; Deb, S.; Coelho, L.d.S. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–5. [Google Scholar]
  16. Cheung, N.J.; Ding, X.M.; Shen, H.B. A nonhomogeneous cuckoo search algorithm based on quantum mechanism for real parameter optimization. IEEE Trans. Cybern. 2016, 47, 391–402. [Google Scholar] [CrossRef]
  17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  18. Singh, N.; Singh, S. A modified mean gray wolf optimization approach for benchmark and biomedical problems. Evol. Bioinform. 2017, 13, 1176934317729413. [Google Scholar] [CrossRef] [PubMed]
  19. Singh, N. A modified variant of grey wolf optimizer. Sci. Iran. 2020, 27, 1450–1466. [Google Scholar] [CrossRef]
  20. Huang, X.; Li, C.; Pu, Y.; He, B. Gaussian quantum bat algorithm with direction of mean best position for numerical function optimization. Comput. Intell. Neurosci. 2019, 2019. [Google Scholar] [CrossRef]
  21. Hiba, H.; El-Abd, M.; Rahnamayan, S. Improving SHADE with center-based mutation for large-scale optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1533–1540. [Google Scholar]
  22. Ye, X.; Wang, P.; Xin, G.; Jin, J.; Huang, Y. Multi-scale quantum harmonic oscillator algorithm with truncated mean stabilization strategy for global numerical optimization problems. IEEE Access 2019, 7, 18926–18939. [Google Scholar] [CrossRef]
  23. Ye, X.; Wang, P. Impact of migration strategies and individual stabilization on multi-scale quantum harmonic oscillator algorithm for global numerical optimization problems. Appl. Soft Comput. 2019, 85, 105800. [Google Scholar] [CrossRef]
  24. Wang, F.; Wang, P.; Jiao, Y. Research on the Utilization of Mean Value Information in Optimization Algorithm. J. Northeast. Univ. (Nat. Sci.) 2024, 45, 49. [Google Scholar]
  25. Li, Y.; Xiang, Z.; Xia, J. Dynamical System Models and Convergence Analysis for Simulated Annealing Algorithm. Chin. J. Comput. 2019, 42, 1161–1173. [Google Scholar]
  26. D’Andrea, F.; Kurkov, M.A.; Lizzi, F. Wick rotation and fermion doubling in noncommutative geometry. Phys. Rev. D 2016, 94, 025030. [Google Scholar] [CrossRef]
  27. Wang, P.; Xin, G.; Wang, F. Investigation of bare-bones algorithms from quantum perspective: A quantum dynamical global optimizer. arXiv 2021, arXiv:2106.13927. [Google Scholar]
  28. Wang, F.; Wang, P. Convergence of the quantum dynamics framework for optimization algorithm. Quantum Inf. Process. 2024, 23. [Google Scholar] [CrossRef]
  29. Kohn, R.V. The relaxation of a double-well energy. Contin. Mech. Thermodyn. 1991, 3, 193–236. [Google Scholar] [CrossRef]
  30. Griffiths, D.J.; Schroeter, D.F. Introduction to Quantum Mechanics; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  31. Stella, L.; Santoro, G.E.; Tosatti, E. Optimization by quantum annealing: Lessons from simple cases. Phys. Rev. B 2005, 72, 014303. [Google Scholar] [CrossRef]
  32. Finnila, A.B.; Gomez, M.; Sebenik, C.; Stenson, C.; Doll, J.D. Quantum annealing: A new method for minimizing multidimensional functions. Chem. Phys. Lett. 1994, 219, 343–348. [Google Scholar] [CrossRef]
  33. Johnson, M.W.; Amin, M.H.; Gildert, S.; Lanting, T.; Hamze, F.; Dickson, N.; Harris, R.; Berkley, A.J.; Johansson, J.; Bunyk, P.; et al. Quantum annealing with manufactured spins. Nature 2011, 473, 194–198. [Google Scholar] [CrossRef]
  34. Wang, P.; Yang, G. Using double well function as a benchmark function for optimization algorithm. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 886–892. [Google Scholar]
  35. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. (CSUR) 2013, 45, 1–33. [Google Scholar] [CrossRef]
  36. Li, J.; Tan, Y. Loser-out tournament-based fireworks algorithm for multimodal function optimization. IEEE Trans. Evol. Comput. 2017, 22, 679–691. [Google Scholar] [CrossRef]
Figure 1. The quantum dynamic framework for the swarm intelligence algorithm.
Figure 2. The approximation of the target function (double well function) by the two-dimensional double well harmonic oscillator potential. The dashed line represents the target function after a second-order approximation, which is the harmonic oscillator potential, while the solid line represents the target function, and the dotted arrows represent the mapping from the sampling point to the harmonic oscillator.
Figure 3. The ground state wave function evolution of the quantum harmonic oscillator.
Figure 4. The approximation process of a two-dimensional double well potential of the harmonic oscillator for the objective function (double well function): (a) approximation at a large scale, (b) approximation at a small scale.
Figure 5. Convergence process of the wave function; the red dots represent sampling points.
Figure 6. The promotion of the exploitation process by the mean-direction learning strategy; the red dots represent sampling points.
Figure 7. Boxplots of the algorithms on CEC2013; the experiment dimension was 30, each experiment was repeated 51 times, and the horizontal black line marks the confidence interval of the median.
Figure 8. The average time taken by the algorithm for the corresponding test function, measured in seconds (s); the legend on the right is the relative time after taking the logarithm; the closer it is to 0, the faster the algorithm is.
Table 1. The basic correspondence between the SI algorithm and quantum dynamics.
SI Algorithm | Quantum Mechanics
Objective Function | Potential Field
Solution to Optimization Problem | Wave Function
Optimal Solution | Ground State Wave Function
Local Optimal Solution | Excited State Wave Function
Dynamic Equation of SI Algorithm | Schrödinger Equation
Iteration Process | Evolution of Wave Function
Probability Distribution of the Solution | Modulus Square of the Wave Function
Acceptance of Worse Solutions | Two Energy Level Approximation
Population Sampling | Kinetic Energy
Table 2. Test functions of the CEC2013 single-objective optimization benchmark suite.
No. | Function
Unimodal Functions
1 | Sphere Function
2 | Rotated High Conditioned Elliptic Function
3 | Rotated Bent Cigar Function
4 | Rotated Discus Function
5 | Different Powers Function
Basic Multimodal Functions
6 | Rotated Rosenbrock's Function
7 | Rotated Schaffer's F7 Function
8 | Rotated Ackley's Function
9 | Rotated Weierstrass Function
10 | Rotated Griewank's Function
11 | Rastrigin's Function
12 | Rotated Rastrigin's Function
13 | Non-Continuous Rotated Rastrigin's Function
14 | Schwefel's Function
15 | Rotated Schwefel's Function
16 | Rotated Katsuura Function
17 | Lunacek Bi-Rastrigin Function
18 | Rotated Lunacek Bi-Rastrigin Function
19 | Expanded Griewank's plus Rosenbrock's Function
20 | Expanded Schaffer's F6 Function
Composition Functions
21 | Composition Function 1 (Rotated)
22 | Composition Function 2 (Unrotated)
23 | Composition Function 3 (Rotated)
24 | Composition Function 4 (Rotated)
25 | Composition Function 5 (Rotated)
26 | Composition Function 6 (Rotated)
27 | Composition Function 7 (Rotated)
28 | Composition Function 8 (Rotated)
Table 3. Parameter setting.
Parameter | k | σ | Repeat Times | Max Iterations
Setting | 30 | 2 | 51 | 300,000
Table 4. The average error, standard deviation, and minimum error of each algorithm; the bold part is the optimal solution.
f(x) | BIP: MEAN-ERR / STD-ERR / MIN-ERR | BIP-Mean: MEAN-ERR / STD-ERR / MIN-ERR | MQHOA-wmn: MEAN-ERR / STD-ERR / MIN-ERR
f1 | 1.02×10^2 / 1.09×10^2 / 1.24×10^1 | 3.34×10^-13 / 1.15×10^-13 / 2.27×10^-13 | 2.19×10^1 / 1.03×10^1 / 6.64×10^0
f2 | 2.34×10^8 / 5.19×10^7 / 1.02×10^8 | 1.43×10^6 / 2.45×10^5 / 1.02×10^6 | 3.06×10^7 / 4.20×10^6 / 2.41×10^7
f3 | 3.57×10^10 / 2.32×10^10 / 1.25×10^10 | 1.54×10^7 / 1.66×10^7 / 8.51×10^5 | 1.22×10^9 / 3.84×10^8 / 5.98×10^8
f4 | 5.44×10^4 / 7.80×10^3 / 3.61×10^4 | 3.11×10^4 / 3.86×10^3 / 2.25×10^4 | 1.37×10^4 / 2.19×10^3 / 9.78×10^3
f5 | 3.87×10^3 / 7.35×10^2 / 1.51×10^3 | 2.02×10^-3 / 3.04×10^-4 / 1.42×10^-3 | 1.75×10^2 / 9.78×10^1 / 1.21×10^2
f6 | 5.23×10^2 / 6.53×10^1 / 3.25×10^2 | 2.33×10^1 / 2.14×10^1 / 4.41×10^-1 | 9.90×10^1 / 1.81×10^1 / 5.57×10^1
f7 | 1.62×10^2 / 4.90×10^1 / 1.15×10^2 | 1.41×10^1 / 9.58×10^0 / 2.49×10^0 | 4.57×10^1 / 8.71×10^0 / 3.15×10^1
f8 | 2.10×10^1 / 4.54×10^-2 / 2.08×10^1 | 2.10×10^1 / 4.77×10^-2 / 2.08×10^1 | 2.09×10^1 / 5.13×10^-2 / 2.08×10^1
f9 | 3.92×10^1 / 1.01×10^0 / 3.65×10^1 | 3.96×10^1 / 1.09×10^0 / 3.53×10^1 | 3.94×10^1 / 1.44×10^0 / 3.63×10^1
f10 | 9.29×10^2 / 9.79×10^1 / 7.07×10^2 | 2.45×10^-1 / 1.89×10^-1 / 6.16×10^-2 | 5.83×10^1 / 6.90×10^0 / 4.24×10^1
f11 | 3.09×10^2 / 2.03×10^1 / 2.67×10^2 | 1.66×10^2 / 5.32×10^1 / 6.96×10^0 | 2.14×10^2 / 1.36×10^1 / 1.83×10^2
f12 | 3.15×10^2 / 1.31×10^1 / 2.86×10^2 | 1.56×10^2 / 5.47×10^1 / 2.19×10^1 | 2.12×10^2 / 1.28×10^1 / 1.82×10^2
f13 | 3.12×10^2 / 1.61×10^1 / 2.66×10^2 | 1.84×10^2 / 1.95×10^1 / 1.07×10^2 | 2.16×10^2 / 1.40×10^1 / 1.73×10^2
f14 | 7.40×10^3 / 2.46×10^2 / 6.90×10^3 | 7.29×10^3 / 3.02×10^2 / 6.56×10^3 | 7.37×10^3 / 2.20×10^2 / 6.89×10^3
f15 | 7.38×10^3 / 2.89×10^2 / 6.51×10^3 | 7.35×10^3 / 2.82×10^2 / 6.63×10^3 | 7.27×10^3 / 3.12×10^2 / 6.31×10^3
f16 | 2.51×10^0 / 2.68×10^-1 / 1.91×10^0 | 2.53×10^0 / 2.92×10^-1 / 1.77×10^0 | 2.40×10^0 / 2.85×10^-1 / 1.85×10^0
f17 | 3.48×10^2 / 1.21×10^1 / 3.19×10^2 | 2.10×10^2 / 1.24×10^1 / 1.79×10^2 | 2.39×10^2 / 1.21×10^1 / 1.99×10^2
f18 | 3.49×10^2 / 2.61×10^1 / 3.12×10^2 | 2.12×10^2 / 1.18×10^1 / 1.88×10^2 | 2.41×10^2 / 9.65×10^0 / 2.25×10^2
f19 | 3.08×10^2 / 2.43×10^2 / 3.75×10^1 | 1.52×10^1 / 1.40×10^0 / 1.13×10^1 | 2.02×10^1 / 1.29×10^0 / 1.76×10^1
f20 | 1.45×10^1 / 2.25×10^-1 / 1.39×10^1 | 1.19×10^1 / 2.69×10^-1 / 1.12×10^1 | 1.26×10^1 / 4.78×10^-1 / 1.17×10^1
f21 | 1.25×10^3 / 9.05×10^1 / 1.04×10^3 | 3.46×10^2 / 9.09×10^1 / 2.00×10^2 | 4.12×10^2 / 9.46×10^1 / 3.39×10^2
f22 | 7.74×10^3 / 3.07×10^2 / 6.85×10^3 | 7.65×10^3 / 3.39×10^2 / 6.80×10^3 | 7.67×10^3 / 3.44×10^2 / 6.74×10^3
f23 | 7.91×10^3 / 2.80×10^2 / 7.10×10^3 | 7.76×10^3 / 3.24×10^2 / 7.09×10^3 | 7.82×10^3 / 3.16×10^2 / 7.10×10^3
f24 | 3.15×10^2 / 1.13×10^1 / 2.96×10^2 | 2.30×10^2 / 6.72×10^0 / 2.16×10^2 | 2.45×10^2 / 2.51×10^0 / 2.37×10^2
f25 | 3.47×10^2 / 5.10×10^0 / 3.31×10^2 | 2.89×10^2 / 2.81×10^1 / 2.51×10^2 | 2.92×10^2 / 4.08×10^1 / 2.35×10^2
f26 | 2.24×10^2 / 5.86×10^0 / 2.11×10^2 | 2.10×10^2 / 3.08×10^1 / 2.00×10^2 | 2.22×10^2 / 5.02×10^1 / 2.01×10^2
f27 | 1.40×10^3 / 2.94×10^1 / 1.32×10^3 | 6.58×10^2 / 7.29×10^1 / 5.12×10^2 | 7.92×10^2 / 3.58×10^1 / 6.96×10^2
f28 | 2.19×10^3 / 2.10×10^2 / 1.00×10^3 | 3.00×10^2 / 1.29×10^-13 / 3.00×10^2 | 4.28×10^2 / 3.64×10^1 / 3.75×10^2
Table 5. Wilcoxon signed rank test results (p-values) for 51 repeated experiments at 30 dimensions, the bold part indicating the cases where the test was found to be true.
f(x) | BIP-Mean vs. BIP | MQHOA-wmn vs. BIP | BIP-Mean vs. MQHOA-wmn
f1 | 9.49×10^-19 | 9.7555×10^-10 | 1.46343×10^-11
f2 | 3.30×10^-18 | 3.01986×10^-11 | 3.01986×10^-11
f3 | 3.30×10^-18 | 3.01986×10^-11 | 3.01986×10^-11
f4 | 4.70×10^-18 | 3.01986×10^-11 | 3.01986×10^-11
f5 | 3.30×10^-18 | 3.01986×10^-11 | 3.01986×10^-11
f6 | 3.30×10^-18 | 3.01986×10^-11 | 1.20567×10^-10
f7 | 3.30×10^-18 | 3.01986×10^-11 | 1.77691×10^-10
f8 | 7.13×10^-1 | 0.122352926 | 0.923442132
f9 | 1.75×10^-2 | 0.482516904 | 0.325526587
f10 | 3.30×10^-18 | 3.01986×10^-11 | 3.01608×10^-11
f11 | 3.30×10^-18 | 3.01986×10^-11 | 7.08811×10^-8
f12 | 3.30×10^-18 | 3.01986×10^-11 | 2.0338×10^-9
f13 | 3.30×10^-18 | 3.01986×10^-11 | 1.84999×10^-8
f14 | 1.41×10^-1 | 0.911708975 | 0.363222313
f15 | 5.12×10^-1 | 0.067868861 | 0.982307053
f16 | 6.54×10^-1 | 0.200948897 | 0.059427915
f17 | 3.30×10^-18 | 3.01986×10^-11 | 4.1825×10^-9
f18 | 3.30×10^-18 | 3.01986×10^-11 | 4.97517×10^-11
f19 | 3.30×10^-18 | 3.01986×10^-11 | 3.01986×10^-11
f20 | 3.30×10^-18 | 3.01986×10^-11 | 3.96477×10^-8
f21 | 4.09×10^-19 | 3.01986×10^-11 | 9.45134×10^-5
f22 | 1.43×10^-1 | 0.311187643 | 0.853381737
f23 | 1.63×10^-2 | 0.728265296 | 0.233988916
f24 | 3.30×10^-18 | 3.01986×10^-11 | 4.07716×10^-11
f25 | 3.30×10^-18 | 1.95678×10^-10 | 0.970516051
f26 | 1.63×10^-14 | 1.10772×10^-6 | 0.371077032
f27 | 3.30×10^-18 | 3.01986×10^-11 | 5.96731×10^-9
f28 | 1.39×10^-20 | 3.01986×10^-11 | 1.21178×10^-12

Citation: Wang, F.; Wang, P.; Jiao, Y. Quantum Dynamical Interpretation of the Mean Strategy. Entropy 2024, 26, 719. https://doi.org/10.3390/e26090719

