1. Introduction
Heuristic algorithms [1] were proposed to complement exact optimization algorithms. In general, an optimization problem can be cast as a mathematical program: different combinations of variables are searched within a specified range to obtain the best response to the problem, and an exact optimization algorithm guarantees an optimal solution provided the running time is allowed to be long enough [2,3]. A heuristic algorithm differs from an exact optimization algorithm [4]: it is constructed from experience or intuition and, within an acceptable cost (usually computation time and space), returns a feasible solution whose deviation from the optimum generally cannot be estimated in advance. Heuristic algorithms can be classified into traditional heuristic algorithms and metaheuristic algorithms [5]. Traditional heuristics include the relaxation method, solution-space reduction, construction methods, local search [6], and so on. Metaheuristics enhance basic heuristics: by combining randomization with local search, they can find optimal or satisfactory solutions to complicated optimization problems. Because metaheuristics solve problems through mechanisms based on computational intelligence, they are also called intelligent optimization algorithms. Common metaheuristics include artificial neural network algorithms [7,8], simulated annealing [9], genetic algorithms (GA) [10,11], ant colony optimization (ACO) [12], particle swarm optimization (PSO) [13,14,15], the artificial fish swarm algorithm [16,17], the artificial bee colony algorithm [18,19,20], tabu search [21], differential evolution [22], etc. Jeng-Shyang Pan et al. proposed two novel algorithms based on the State of Matter Search (SMS) algorithm [23]. Pei Hu et al. proposed a multi-surrogate-assisted binary particle swarm optimizer named MS-assisted DBPSO [24]. Shu-Chuan Chu et al. proposed a parallel fish migration optimization algorithm combined with compact technology (PCFMO) [25]. Trong-The Nguyen et al. proposed an improved swarm algorithm (ISA) that works well for image segmentation, achieving remarkable global convergence and resilience while preventing trapping in local optima [26]. Xingsi Xue et al. proposed a compact hybrid Evolutionary Algorithm (chEA) [27]. Huang Y. et al. proposed a multiobjective particle swarm optimizer with a diversity-enhancing strategy (MOPSO-DE) [28]. Wang G.G. proposed a new metaheuristic called the moth search (MS) algorithm [29]. Yu H. et al. proposed a surrogate-assisted hierarchical particle swarm optimizer [30]. In response to the proliferation of bio-inspired optimization approaches in recent years, Molina et al. [31] present two comprehensive, principle-based taxonomies and review more than three hundred papers on nature-inspired and bio-inspired algorithms, providing a critical summary of design trends and of the similarities between them. Sörensen et al. [32] argue that most “novel” metaphor-based metaheuristics are a step backwards rather than forwards, call for a more rigorous evaluation of these approaches, and point to some of the most promising research avenues in the field of metaheuristics.
Metaheuristic algorithms are versatile optimization techniques, while entropy is a concept from information theory that gauges uncertainty and disorder in data. Despite their apparent disconnect, the two can be combined to tackle problems effectively. In certain optimization scenarios, particularly within metaheuristic algorithms, entropy can quantify “diversity” or “exploration”, preventing fixation on local optima. The concepts intersect in three ways:
1. Diversity gauge. In metaheuristic algorithms, entropy quantifies the diversity of the solution set. Higher entropy signals a wider range of solutions, indicating thorough exploration. Tracking entropy changes allows algorithm parameters to be fine-tuned, balancing exploration and exploitation.
2. Trade-off between exploration and exploitation. Metaheuristic algorithms juggle exploration and exploitation. Entropy gauges the uncertainty of the solution space: higher entropy implies more uncharted areas, pushing the algorithm to explore, whereas lower entropy suggests familiarity, guiding the algorithm to exploit existing insights.
3. Multi-objective optimization. Entropy measures how evenly solutions are distributed across objectives in multi-objective problems. Uniformly distributed solutions offer a balance between objectives, benefiting specific multi-objective optimization challenges.
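As a concrete illustration (not taken from the paper), the diversity-gauge idea in point 1 can be sketched by binning a one-dimensional population and measuring its Shannon entropy; the function name and binning scheme below are our own assumptions:

```python
import math
from collections import Counter

def population_entropy(population, bins=10, lo=-100.0, hi=100.0):
    """Shannon entropy (in bits) of a 1-D population binned over [lo, hi].

    Higher entropy means individuals are spread over more bins (more
    exploration); lower entropy means they cluster (more exploitation).
    """
    width = (hi - lo) / bins
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in population)
    n = len(population)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A spread-out population has higher entropy than a clustered one.
spread = [-90.0, -50.0, -10.0, 10.0, 50.0, 90.0]
clustered = [1.0, 1.1, 0.9, 1.2, 1.05, 0.95]
```

An adaptive scheme could, for instance, strengthen exploration moves whenever this entropy falls below a threshold.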
The bamboo forest growth optimization (BFGO) algorithm [33], a recent metaheuristic, combines the growth characteristics of bamboo forests [34] with the optimization process of the algorithm. Bamboo is a tall, tree-like grass that grows very fast, and its rapid growth occurs mainly during its germination period [35]. For the first four years, a bamboo grows only 3 cm; from the fifth year onwards, however, it can grow at a rate of 30 cm per day, reaching 15 m in six weeks. In the soil, the root system of bamboo can stretch over hundreds of square meters, and during the growth of bamboo shoots, bamboo exhibits rapid growth within a short time. Thus, the whole process of bamboo forest growth can be divided into the period when the bamboo whip expands underground and the period when the bamboo shoots grow upward. A bamboo forest is made up of several bamboo whips, the underground stems of bamboo, which are usually relatively long and thin; the bamboo on one whip belongs to one group. A whip supplies its own energy by absorbing nutrients from the soil, thereby carrying out cell division and differentiation. The shoots growing on a whip develop in two directions: one part grows out of the ground and becomes new bamboo shoots, while the other part grows horizontally into new whips. When a metaheuristic is used to solve a problem, the two periods of the bamboo growth process, underground whip expansion and shoot growth, can be mapped to the global exploration stage and the local exploitation stage of the algorithm, respectively. The BFGO algorithm is highly competitive on optimization problems, but its exploitation ability is not outstanding; we therefore improve BFGO to enhance its exploitation ability.
The BFGO algorithm differs significantly from GA, PSO, and ACO in its basic inspiration and simulated objects, its iterative approach, and its parallelism. In terms of basic inspiration and simulated objects, BFGO draws on the growth process of bamboo: the underground expansion of bamboo whips and the growth of bamboo shoots correspond to global exploration and local exploitation, respectively. GA emulates natural evolution, using genetic encoding and genetic operators to search for the best solution. PSO mimics the collective behavior of groups such as bird flocks or fish schools, where particles update their positions and velocities based on their own and the group’s information. ACO imitates the foraging behavior of ants, which choose paths based on pheromone information and heuristic knowledge. In the iterative approach, BFGO uses the growth characteristics of bamboo forests and differential equations of bamboo growth to adjust the positions of bamboo shoots on the whips; GA generates new individuals in each generation through selection, crossover, and mutation; PSO has each particle update its position and velocity based on its personal best and the population best; and ACO has ants select paths from pheromone and heuristic information while depositing pheromone on the paths they traverse. Regarding parallelism, BFGO employs a parallel strategy that allows individuals to communicate effectively with each other; GA can process multiple individuals in parallel, with each individual undergoing crossover and mutation independently; PSO naturally exhibits parallelism, as each particle can be updated independently; and ACO has a certain degree of parallelism, as multiple ants can explore different paths concurrently.
The Quasi-Affine Transformation Evolution (QUATRE) algorithm is a population-based algorithm built on a quasi-affine transformation scheme [36,37]. In parameter optimization and large-scale optimization, it outperforms many other algorithms. In addition, the algorithm exhibits good cooperative behavior, which can reduce its time complexity to a certain extent. With the total number of fitness evaluations held fixed, it can achieve better performance by increasing the population size so as to reduce the number of generations required for objective optimization. In general, the algorithm performs well on unimodal functions, multimodal functions, and high-dimensional optimization problems.
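To make the idea concrete, one QUATRE-style generation can be sketched as follows. This is a simplified reconstruction, not the paper’s exact scheme: the donor formula `gbest + F * (X[r1] - X[r2])` and the way the binary evolution matrix M is sampled per row are our own assumptions.

```python
import random

def quatre_step(X, gbest, F=0.7):
    """One QUATRE-style generation (a sketch, not the published scheme).

    Row i of the binary evolution matrix M keeps (i mod D) + 1 randomly
    chosen coordinates of X[i]; the remaining coordinates are taken from
    a DE-like donor vector B[i] = gbest + F * (X[r1] - X[r2]).
    """
    N, D = len(X), len(X[0])
    new_X = []
    for i in range(N):
        r1, r2 = random.sample(range(N), 2)          # two distinct rows
        donor = [gbest[d] + F * (X[r1][d] - X[r2][d]) for d in range(D)]
        keep = set(random.sample(range(D), (i % D) + 1))  # row i of M
        new_X.append([X[i][d] if d in keep else donor[d] for d in range(D)])
    return new_X
```

Each new individual therefore mixes its own coordinates with donor coordinates, which is what widens the particle distribution during search.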
Experimental design is a mathematical theory and method based on probability theory and mathematical statistics: it develops experimental plans economically and scientifically and conducts effective statistical analysis of experimental data. The basic idea of orthogonal learning was proposed by Dr. Genichi Taguchi in Japan. The orthogonal learning strategy is widely used in production line design and for setting process conditions, because it can yield high-quality products while using few computing resources. Orthogonal arrays are the central tool of the orthogonal learning strategy, and the Taguchi method uses them to improve its performance. In Taguchi’s method, only two-level orthogonal arrays join the optimization process. An orthogonal array is first defined; each column of the array represents the value of one factor under consideration, and the factors in the orthogonal array can be manipulated independently.
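A minimal sketch of using a two-level orthogonal array: the L4(2^3) array below prescribes four trials that mix the coordinates of two parent solutions factor by factor, and the best trial is kept. A full Taguchi-style procedure would additionally estimate the best level of each factor from the trial results; the function names here are our own.

```python
# L4(2^3): 4 trials, 3 two-level factors. In every pair of columns, each
# of the level pairs (0,0), (0,1), (1,0), (1,1) appears exactly once.
L4 = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]

def orthogonal_combine(a, b, f):
    """Mix two parent solutions a and b per dimension as prescribed by the
    two-level orthogonal array, then return the best trial under objective
    f (to be minimised). len(a) == len(b) == number of columns of L4."""
    trials = [[(a if bit == 0 else b)[d] for d, bit in enumerate(row)]
              for row in L4]
    return min(trials, key=f)

# Toy objective: the sphere function.
f = lambda x: sum(v * v for v in x)
best = orthogonal_combine([1.0, -2.0, 0.5], [0.0, 0.0, 3.0], f)
```

Only 4 of the 2^3 = 8 possible coordinate combinations are evaluated, which is the source of the strategy’s economy.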
Based on the bamboo forest growth optimization algorithm, this work proposes a new metaheuristic named the orthogonal learning bamboo forest growth optimization algorithm with quasi-affine transformation evolution (OQBFGO). The algorithm incorporates the quasi-affine transformation evolution algorithm to expand the range over which particles are distributed, a process of entropy increase that can greatly improve the particles’ search ability. It also uses an orthogonal learning strategy to accurately aggregate particles out of a chaotic state, a process of entropy decrease that makes global exploitation more accurate. A balance between exploration and exploitation is thus achieved through alternating entropy increase and entropy decrease.
Finally, the improved algorithm is used to solve the capacitated vehicle routing problem (CVRP) [38,39], the fundamental model of the vehicle routing problem. The model usually constrains only the load and the driving distance (or time) of the vehicles, with almost no other constraints. Many algorithms have been applied to find optimal solutions to this problem [40,41], and most solution algorithms for the other vehicle routing models are also derived from it [42]. At the end of this paper, the proposed algorithm is applied to this problem and achieves good results. The main contributions are as follows.
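For reference, evaluating a candidate CVRP solution reduces to checking each route’s load against the vehicle capacity and summing the route distances. A minimal sketch, assuming a symmetric distance matrix with the depot at index 0 (all names are our own):

```python
def route_cost(routes, dist, demand, capacity):
    """Total distance of a CVRP solution.

    routes   -- list of routes, each a list of customer indices (no depot)
    dist     -- symmetric distance matrix, depot at index 0
    demand   -- demand per node (demand[0] is the depot, i.e. 0)
    capacity -- vehicle capacity; returns None if any route exceeds it
    """
    total = 0.0
    for route in routes:
        if sum(demand[c] for c in route) > capacity:
            return None                      # infeasible: load too large
        stops = [0] + route + [0]            # vehicles start/end at depot
        total += sum(dist[stops[i]][stops[i + 1]]
                     for i in range(len(stops) - 1))
    return total

# Tiny instance: depot 0 and three customers.
dist = [[0, 2, 4, 3],
        [2, 0, 1, 5],
        [4, 1, 0, 2],
        [3, 5, 2, 0]]
demand = [0, 3, 4, 2]
```

A discretized metaheuristic then searches over assignments of customers to routes and orderings within routes, minimizing this cost over feasible solutions.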
1. For the first time, we combined the QUATRE algorithm with the BFGO algorithm. The new algorithm utilizes the evolutionary matrix from the Quasi-Affine Transformation Evolution algorithm to update particle positions, making particle movement more scientifically grounded, expanding the search space, and significantly improving the algorithm’s local search capabilities.
2. Innovatively, within the BFGO algorithm, we incorporated the use of an orthogonal learning strategy, enhancing the algorithm’s precision in global exploration, consequently improving its global development efficiency.
3. We tested the improved algorithm on both the CEC2013 and CEC2017 benchmark sets, comparing it with the original algorithm, various modifications of the original algorithm, and three other established algorithms, thus demonstrating the excellent performance of the new algorithm.
4. Building on the strong evidence of the enhanced algorithm’s effectiveness, we discretized the continuous OQBFGO algorithm and achieved success in solving the CVRP problem.
Other parts of the article are structured as follows.
Section 2 will briefly introduce the theoretical basis of BFGO, QUATRE and Orthogonal Learning.
Section 3 will introduce the specific process of the new algorithm OQBFGO in detail.
Section 4 tests the algorithm on the CEC2017 benchmark functions and presents the test results.
Section 5 applies the new algorithm to the CVRP and compares its performance with that of five other algorithms.
Section 6 summarizes the work of this paper and discusses future directions.
3. The Proposed OQBFGO Algorithm
Combined with the growth characteristics of bamboo, the BFGO algorithm performs well on complex problems and can balance exploitation against exploration. The OQBFGO algorithm proposed in this work adds an orthogonal learning strategy on top of the bamboo forest growth algorithm, which makes global exploitation more accurate and reduces the convergence time of the algorithm. Moreover, OQBFGO incorporates the QUATRE algorithm, which greatly expands the search range. When improving metaheuristic algorithms, it is essential to choose strategies suited to the optimization characteristics of each stage. In the early stage, the entire population should quickly explore a wide decision space in a distributed manner. In the mid-stage, perturbation and fine-tuning should be applied near potentially optimal positions to exploit the more promising extremal points. In the later stage, a balance between exploration and exploitation is crucial, avoiding both excessive dispersion and over-concentration. An appropriate combination of exploration and exploitation strategies is essential for achieving the best results. Algorithm 1 shows the pseudocode of OQBFGO.
Algorithm 1 Pseudocode of OQBFGO
Initialize the population size N, the dimension D, and the number of bamboo whips K.
Initialize the bamboo positions and divide the population into K groups according to fitness.
Update the global best and the best within each group.
while t < T do
    if t > 10 and the global best has not been updated then
        Select some individuals from the elite library for updating, and update the global best and the group bests.
    end if
    if no group best has been updated then
        Reshuffle the individuals and regroup them.
    end if
    Update the whip positions according to Equations (1)–(5); sort, and update the global best and the group bests.
    Update the shoot positions according to Equations (6)–(12), and update the global best and the group bests.
    Carry out orthogonal learning between the average of the group bests obtained at this point and the global best.
    Update the global best and the group bests.
    Update the current coordinates of the particles according to Equations (13)–(16).
end while
Step 1: Initialization. As in the original BFGO, the particle positions are first initialized to generate a population of size N × D, whose specific form is shown in Equation (17), where N denotes the number of particles and D the number of dimensions.
Step 2: Compute the fitness values of all particles and divide them equally into K groups, that is, K bamboo whips with n bamboo shoots each. Traverse the K whips, update each group’s best as needed, and then update the global best.
Step 3: Enter the iteration. First judge whether the algorithm has run for enough iterations without the global best position being updated; if so, select some individuals from the elite library for updating, and update the global best and the group bests.
Step 4: Determine whether the best within every group has failed to update; if so, reshuffle the individuals and regroup them.
Step 5: Bamboo whip extension stage. Update the whip positions according to Equations (1)–(5) above, then sort and update the global best and the group bests.
Step 6: Bamboo forest growth stage. Update the positions of the bamboo shoots according to Equations (6)–(10) above, and then update the best within each group and the global best.
Step 7: Orthogonal learning is carried out between the global best position obtained from the preceding steps and the average of the group best positions, and the global best and group bests are updated.
Step 8: Update the current coordinates of the particles according to Equations (13)–(16) and use them as the initial positions of the next iteration.
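Putting Steps 1–8 together, the overall control flow can be sketched as below. This is a structural skeleton only: the whip-extension, growth, orthogonal-learning, and QUATRE updates (Equations (1)–(16)) are abstracted as a placeholder Gaussian perturbation, and all names are our own.

```python
import random

def oqbfgo_skeleton(f, N, D, K, T, lo, hi):
    """High-level sketch of the OQBFGO loop (Steps 1-8), minimising f."""
    # Steps 1-2: initialise positions, sort by fitness, and deal the
    # sorted population round-robin into K groups (bamboo whips).
    X = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(N)]
    X.sort(key=f)
    groups = [X[k::K] for k in range(K)]
    gbest = min(X, key=f)
    for t in range(T):
        for g in groups:
            for i, x in enumerate(g):
                # Steps 5-6: whip extension and shoot growth, abstracted
                # here as a greedy Gaussian perturbation of each shoot.
                cand = [v + random.gauss(0.0, (hi - lo) * 0.01) for v in x]
                if f(cand) < f(x):
                    g[i] = cand
            gbest = min(gbest, min(g, key=f), key=f)
        # Steps 3-4 (elite-library restart, regrouping), Step 7 (orthogonal
        # learning between gbest and the mean group best), and Step 8
        # (QUATRE coordinate update) would be inserted here.
    return gbest
```

Because candidate moves are only accepted when they improve fitness, the returned position can never be worse than the best initial sample.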