Article

A Novel Nature-Inspired Optimization Algorithm: Grizzly Bear Fat Increase Optimizer

1 Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, Bellavista 7, Santiago 8420524, Chile
2 School of Information Systems and Management, Muma College of Business, University of South Florida, Tampa, FL 33620, USA
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(6), 379; https://doi.org/10.3390/biomimetics10060379
Submission received: 21 April 2025 / Revised: 1 June 2025 / Accepted: 2 June 2025 / Published: 7 June 2025

Abstract

This paper introduces a novel nature-inspired optimization algorithm called the Grizzly Bear Fat Increase Optimizer (GBFIO). The GBFIO algorithm mimics the natural behavior of grizzly bears as they accumulate body fat in preparation for winter, drawing on their strategies of hunting, fishing, and eating grass, honey, etc. Hence, three mathematical steps are modeled and considered in the GBFIO algorithm to solve the optimization problem: (1) finding food sources (e.g., vegetables, fruits, honey, oysters) based on past experiences and olfactory cues; (2) hunting animals and protecting offspring from predators; and (3) fishing. Thirty-one standard benchmark functions and thirty CEC2017 test benchmark functions, including unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and rotated and shifted benchmark functions, are applied to evaluate the performance of the GBFIO. In addition, four constrained engineering design problems, namely the tension/compression spring design, welded beam design, pressure vessel design, and speed reducer design problems, are considered to show the efficiency of the proposed GBFIO algorithm in solving constrained problems. As the optimization results show, the GBFIO can successfully solve diverse kinds of optimization problems, especially high-dimensional objective functions, in comparison to other algorithms. Additionally, the performance of the GBFIO algorithm has been compared with the ability and efficiency of other popular optimization algorithms in finding solutions, and the GBFIO yields superior or competitive quasi-optimal solutions relative to other well-known optimization algorithms.

1. Introduction

For single or multiple objectives, optimization involves determining the best state for decision/design variables. Prior to the heuristic era of optimization, mathematical problems were solved using analytical approaches. Analytical methods require derivatives of either the sole objective function or the constraint-penalized objective function, along with the values and violations of constraints. Using such data, they can find the optimal solution for linear and convex non-linear problems effectively. The downside is that they are vulnerable to entrapment in local optima in complicated problems with numerous local optima, and they are not applicable to problems with stochastic or indeterminate search spaces. Real-world problems are characterized by stochastic behaviors and unidentified search spaces. Metaheuristic algorithms were developed as a result. In contrast with derivative-based algorithms, metaheuristic algorithms are derivative-free and do not need any limitations or assumptions, which makes them suitable for solving a wide range of problems [1,2,3,4,5].
Metaheuristic algorithms have grown in popularity as effective methods for solving complex optimization problems. Several factors drive this popularity. First and foremost, metaheuristic algorithms are very simple: they rest on fundamental concepts or mathematical schemes derived from nature, and because they are simple and straightforward, they are generally easy to implement. This simplicity also makes metaheuristics applicable to real-world problems, and current techniques can be used to develop new variants. Secondly, optimization methods act as black boxes, providing a group of outputs for a particular problem given a particular group of inputs; they can also be easily modified to obtain desirable solutions by altering their parameters and structures. Thirdly, metaheuristic algorithms are characterized by randomness. By exploring the whole search space, metaheuristic algorithms prevent themselves from falling into local optima, which makes them successful on problems with complex search spaces and, in particular, multiple local optima. The flexibility and versatility of these metaheuristics imply their applicability to a wide range of optimization problems, particularly non-differentiable and non-linear problems, as well as complex numerical problems with many local minima. Metaheuristic algorithms have been applied successfully in many domains.
There are two types of metaheuristic algorithms: local search algorithms and population-based algorithms. Neighborhood structures are used to enhance local search-based algorithms by taking one solution at a time [6], for example, simulated annealing [7], Variable Neighborhood Search [8], Greedy Randomized Adaptive Search Procedure [9], b-hill climbing [10], Stochastic Local Search [11], tabu searches [12], Guided Local Search [13], and recursive Tabu search [14].
The major benefit of local search-driven algorithms is their rapid search speeds, but their major disadvantage is that they tend to emphasize exploitation over exploration, which increases the risk of becoming trapped in local optima. A population-based algorithm, on the other hand, considers a population at a time, recombining the available solutions and generating a new solution at all iterations. In spite of their effectiveness in detecting useful regions in the search space, the techniques are inefficient at exploiting the searched area [15].
Metaheuristics based on populations generate a group of candidate solutions called ‘populations’. A better set of solutions is generated to replace the initially generated ones. In contrast to single-solution candidates, the new candidates are a collection of solutions rather than a single solution. Some examples of population-based algorithms inspired by nature are the following: Differential Evolution (DE) [16], Particle Swarm Optimization (PSO) [17], Harris Hawk Optimization [18], Grey Wolf Optimizer (GWO) [19], and Whale Optimization Algorithm (WOA) [20].
There are five main categories of population-based metaheuristics, including swarm intelligence-based, evolutionary-based, physics-based, event-based, and plant-based. Based on the principles of biological evolution, evolutionary algorithms evolve iteratively so that they become progressively better as they evolve. Some examples of popular evolutionary algorithms include DE [16], Genetic Algorithm [21], Biogeography-Based Optimizer [22], Evolution Strategy [23], and Evolutionary Programming [24].
Swarm intelligence schemes inspired by the behaviors of bees, wolves, ants, bats, etc. in nature, are the second group. Some examples from this group include PSO [17], Bat Algorithm [25], Crow Search Algorithm [26], Fruit Fly Optimization algorithm [27], Firefly Algorithm [28], Dragonfly Algorithm [29], Ant Lion Optimization [30], GWO [19], Salp Swarm Algorithm [31], Cuckoo Search Algorithm [32], Grasshopper Optimization Algorithm [33], WOA [20], and Dolphin Echolocation [34].
The third group, called event-based metaheuristics, is inspired by the beliefs, lifestyles, and social behaviors of humans and animals. Some popular algorithms from this category include Harmony Search [35], Deep Sleep Optimizer [36], Teaching Learning-Based Optimizer (TLBO) [37], Group Search Optimizer [38], and Imperialist Competitive Algorithm [39].
Physics-based algorithms are the fourth category of metaheuristics schemes. Physics-inspired metaheuristics have been developed using laws like gravity, explosions, relativity, and Brownian motion. Some examples in this category are Big Bang Big Crunch [40], Electromagnetic Field Optimization [41], Sine Cosine Algorithm [42], Thermal Exchange Optimization [43], Arithmetic Optimization Algorithm [44], Gravitational Search Algorithm [45], Water Evaporation Optimization [46], Magnetic Charged System Search [47], Central Force Optimization [48], Henry Gas Solubility Optimization [49], Solar System Algorithm [50], and Optics Inspired Optimization [51].
Plant-based algorithms are the fifth category of metaheuristics schemes. Algorithms such as the Tree–Seed Algorithm [52,53], Carnivorous Plant Algorithm (CPA) [54], Improved Carnivorous Plant Algorithm [55], Flower Pollination Algorithm [56], Sunflower Optimization Algorithm [57], Walking Palm Tree Algorithm [58], Bamboo Forest Growth Algorithm [59], Lotus Effect Algorithm [60], and Flower Pollination Algorithm [61] are inspired by biological processes and reproduction mechanisms in plants.
A note should be made about the No Free Lunch (NFL) theorem [62]. According to this theorem, no single metaheuristic can solve all optimization problems efficiently; a metaheuristic may perform well on one set of problems while showing inadequate results on a different set. The NFL theorem keeps research in this area highly active, with new metaheuristics proposed yearly and current approaches enhanced. The metaheuristic developed here is inspired by how grizzly bears increase their body fat to gain fitness for hibernation.
Optimization algorithms do not necessarily produce global optimal solutions, which is the key to understanding them. As a result, quasi-optimal solutions to optimization problems can be achieved through the use of optimization algorithms [53,55,63,64,65,66,67,68].
Quasi-optimal solutions are best when they equal the global optimal solutions. If they are not equal, they should be near them. A better algorithm to solve optimization problems is therefore one that can provide a quasi-optimal solution near the global optimal solution when comparing various optimization algorithms. In addition, it is important to keep in mind that although an optimization algorithm may be highly effective at solving a particular optimization problem, it is likely to be ineffective at solving other optimization problems. Thus, many optimization algorithms have been designed to generate quasi-optimal solutions that can be considered more accurate and closer to the global optimal solution.
Optimization algorithm performance is determined by applying them to standard optimization problems that already have a known optimal solution in order to determine how well they perform in providing quasi-optimal results. Optimization algorithms are ranked according to their ability to provide solutions that are as close to the global optimum as possible. This means that new optimization algorithms can always be developed that outperform current algorithms for solving optimization problems.
A new metaheuristic optimization algorithm named the Grizzly Bear Fat Increase Optimizer (GBFIO) is described and illustrated in this paper; it mimics the behavior of grizzly bears as they increase the fat in their bodies to survive winter hibernation. The bear behaviors used to design the proposed optimization algorithm were inspired by the authors' viewing of the documentary Bears, produced in 2014 by Alastair Fothergill and Keith Scholey [69,70]. To the best of the authors' knowledge, there are no previous studies in the optimization literature on this subject. This work differs from recently published papers in that it simulates a range of bear behaviors: finding vegetables, fruits, honey, oysters, and other food resources using memory and the sense of smell; hunting animals while taking care of cubs to avoid their being hunted by other animals; and fishing. These behaviors are modeled to incorporate both the advantages of local search-based algorithms and population-based algorithms within a unified framework.
A total of 31 mathematical optimization problems and 4 constrained engineering design problems, namely the tension/compression spring design (TCSD), welded beam design (WBD), pressure vessel design (PVD), and speed reducer design (SRD) problems, are solved in this study to appraise the GBFIO algorithm's effectiveness. Based on the optimization outcomes, the GBFIO's performance is superior in comparison with state-of-the-art optimization methods.
The rest of the paper is organized as follows: the theory of the proposed GBFIO algorithm and its mathematical model is presented in Section 2. In Section 3, 31 mathematical optimization problems and thirty CEC2017 test benchmark functions are presented and solved with the proposed GBFIO algorithm and also compared with four well-known optimization algorithms—PSO, DE, TLBO, and GWO. In Section 4, the proposed GBFIO algorithm is used to solve four engineering problems: TCSD, WBD, PVD, and SRD problems. Finally, the main conclusion is presented in Section 5.

2. Grizzly Bear Fat Increase Optimization Algorithm

Firstly, the GBFIO algorithm will be described in this section, and next its mathematical model will be illustrated to optimize diverse optimization problems.

2.1. GBFIO Algorithm Theory

Bears must store enough fat in their bodies during the warm months of the year to sustain themselves during the cold months when they hibernate and are inactive. Bears with cubs must also store enough fat to nurse their cubs during hibernation; if they cannot store enough fat, the cubs will die [69,70,71,72]. Figure 1 shows a grizzly bear with her two cubs, feeding and trying to increase her fat to survive the winter.
Grizzly bears are omnivores, and their diet depends on the available food sources. In addition to fishing and hunting, brown bears feed on plant materials such as fruits, roots, shellfish, honey, etc. [69,70,71,72]. Therefore, the increase in fat in grizzly bears can be classified into the following three stages, with each step storing some fat in the bear until it reaches the amount needed for hibernation (see Figure 1) [69,70,71,72]: (1) finding the location of vegetables, fruits, shellfish, ponds, rivers for fishing and also following the movement of fish based on the memory of previous years and the sense of smell; (2) hunting other animals and also taking care of the offspring to avoid being hunted; (3) fishing (this is a local search). Therefore, the proposed optimization algorithm based on the increase in fat in grizzly bears is modeled as follows:

2.1.1. Phase 1: Finding Plants, Honey, Shellfish, Corpses, and Fishing Rivers

The main diet of grizzly bears for gaining fat is fish, but until the fish arrive from the sea to the spawning grounds and grizzly bears find a suitable place for fishing, they eat other things, including vegetables, fruits, honey, shellfish, and dead animal carcasses. Therefore, gaining fat by eating vegetables, fruits, shellfish, etc., and also finding fish is modeled as follows:
x_bear-search(t+1) = x_bear-search(t) + r_1s·Δx_fish + r_2s·Δx_honey + r_3s·Δx_shell + r_4s·Δx_corpse + r_5s·Δx_plants,  (1)
Δx_fish = x_fish − x_bear-search(t),  (2)
Δx_honey = x_honey − x_bear-search(t),  (3)
Δx_shell = x_shell − x_bear-search(t),  (4)
Δx_corpse = x_corpse − x_bear-search(t),  (5)
Δx_plants = x_plants − x_bear-search(t),  (6)
where x_fish, x_honey, x_shell, x_corpse, and x_plants are the fish, honey, shells, corpses, and plants that bears try to find in order to eat and increase their fat for surviving the winter. x_bear-search(t) is the current population member, and the top five best members are selected as x_fish, x_honey, x_shell, x_corpse, and x_plants, respectively. The current iteration number is denoted by t.
Also, since each of these food sources is difficult for the bears to find and is found randomly, the coefficients r_1s to r_5s are defined to reflect the random state of each food source, as follows:
r_1s = 2·rand·√rand,  (7)
r_2s = rand·√rand,  (8)
r_3s = (rand/2)·√rand,  (9)
r_4s = (rand/4)·√rand,  (10)
r_5s = (rand/8)·√rand,  (11)
where rand is a value selected randomly in the range [0, 1]. The r_1s value is larger than the other random coefficients because fish is the most attractive food for bears and has the highest nutritional value in terms of fat gain. As the nutritional value of a food source and its effect on fat gain decrease, its effect coefficient also decreases, so r_1s > r_2s > r_3s > r_4s > r_5s. In addition, the r_1s, r_2s, r_3s, r_4s, and r_5s values are in the ranges [0, 2], [0, 1], [0, 0.5], [0, 0.25], and [0, 0.125], respectively.
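As a concrete illustration, the following MATLAB sketch applies this search update for a single bear. It is a minimal fragment consistent with the implementation in Appendix A; the function and variable names (gbfio_search_step, best5) are illustrative, and a single random vector is shared by the five terms, as in the appendix code.
function xNew = gbfio_search_step(x, best5, Xmin, Xmax)
% Phase 1 (search) update of one bear, following Equations (1)-(11).
% x: 1-by-n current position; best5: 5-by-n matrix holding the current
% five best members, ordered as fish, honey, shell, corpse, plants.
n = numel(x);
r = rand(1, n);                  % shared random vector, as in Appendix A
w = [2, 1, 0.5, 0.25, 0.125];    % nutritional weights of Equations (7)-(11)
xNew = x;
for k = 1:5
    rk = w(k) .* r .* sqrt(r);             % r_ks = w_k * rand * sqrt(rand)
    xNew = xNew + rk .* (best5(k,:) - x);  % step toward the k-th food source
end
xNew = min(max(xNew, Xmin), Xmax);         % clamp to the search bounds
end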

2.1.2. Phase 2: Hunting Phase and Safeguarding Cubs from Being Hunted

One way that grizzly bears gain fat is by hunting other animals. Female bears must remain vigilant during the hunt to protect their cubs from potential predators, including coyotes and other bears, which affects the hunting process. If the cubs are killed, the bear needs less food and less fat for the winter, which is also modeled. The hunting stage of a bear is therefore modeled as follows.
As a first step, bears identify where their prey is and move towards it. By simulating this behavior, the proposed GBFIO explores the search space to discover various regions. A key feature of the GBFIO is that the prey's location within the search space is determined at random. Equations (12) and (13) simulate how the bear moves towards its target.
Δx_bear = |2·r_1h·x_bear-hunt(t) − r_2h·x_prey(t)|,  (12)
x_bear-hunt(t+1) = x_bear-hunt(t) − A·Δx_bear,  (13)
where r_1h and r_2h are randomly selected values in the range [0, 1]. The larger the prey, the more fat the bear accumulates; hence, to maximize fat accumulation, the best member obtained after the previous update step is selected as the prey. So, x_bear-hunt is the current population member, selected as the bear that tries to hunt the best prey to increase its fat level (x_prey(t) is the best member obtained from the updated population of the previous step). A is the coefficient vector computed by Equation (14).
A = α·(2·r_3h − 1),  (14)
in which the α value lies in [0, 2.5] and decreases linearly from 2.5 to zero over the iterations, and r_3h is a random value in the range [0, 1].
In the second step, the predation of the cubs by other animals, including coyotes, is modeled. In this step, it is assumed that the bear has two cubs that she must protect during the hunt. If the cubs are hunted and killed, the mother bear stores more fat because she no longer feeds the cubs. Hence, the current population member, which represents the mother bear, can accumulate more fat if the cubs are hunted by the coyote. Three members of the population are randomly selected as the cubs and the coyote; since the coyote is a predator and stronger than the cubs, the best member among the three is selected as the coyote, and the other two are selected as the cubs. Hunting of the cubs by the coyote is modeled as follows:
Δx_coyote-cub1 = |2·r_4h·x_cub(1) − r_5h·x_coyote|,
Δx_coyote-cub2 = |2·r_6h·x_cub(2) − r_7h·x_coyote|,  (15)
x_bear-care(t+1) = x_bear-care(t) − B_1·Δx_coyote-cub1 − B_2·Δx_coyote-cub2,  (16)
B_1 = B_2 = ρ·(2·r_8h − 1),  (17)
where x_bear-care(t) is the current population member, selected as the mother bear that should save and increase the fat in her body, and x_coyote, x_cub(1), and x_cub(2) are three random members of the updated population after Phase 1 (x_coyote is the best member among the three, and the other two are chosen as x_cub(1) and x_cub(2)). r_4h, r_5h, r_6h, r_7h, and r_8h are random vectors in [0, 1]. B_1 and B_2 are the coefficient vectors computed by Equation (17), in which the ρ value lies in [0, 1.25] and decreases linearly from 1.25 to zero over the iterations.
In this phase, it is assumed that the bear either hunts and gains fat or stores more fat by losing her cubs. Since the bear takes care of the cubs and can survive some attacks by running away or fighting, the hunting state is more likely than the loss of the cubs to coyotes or other bears. Taking the above into account, either the hunting state or the cub-loss state is considered, so we have the following:
x_bear-hunting-care(t+1) = x_bear-hunt(t+1)  if β ≤ 0.7,
x_bear-hunting-care(t+1) = x_bear-care(t+1)  if β > 0.7,  (18)
where β is a random value in the range [0, 1].
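A minimal MATLAB sketch of this combined hunt/care update for one bear is given below; it mirrors the Appendix A implementation. The names gbfio_hunt_care_step, prey, and trio are illustrative, and, as in the appendix code, B1 and B2 are drawn independently, whereas Equation (17) sets them equal.
function xNew = gbfio_hunt_care_step(x, prey, trio, alpha, rho, Xmin, Xmax)
% Phase 2 update of one bear, following Equations (12)-(18).
% prey: best member; trio: 3-by-n matrix with row 1 the coyote (best of
% three random members) and rows 2-3 the cubs; alpha and rho decay
% linearly from 2.5 and 1.25 to zero over the iterations.
n = numel(x);
if rand <= 0.7                                      % hunting state, Eq. (18)
    A = alpha .* (2.*rand(1,n) - 1);                % Eq. (14)
    D = abs(2.*rand(1,n).*x - rand(1,n).*prey);     % Eq. (12)
    xNew = x - A.*D;                                % Eq. (13)
else                                                % cub-care state
    B1 = rho .* (2.*rand(1,n) - 1);                 % Eq. (17)
    B2 = rho .* (2.*rand(1,n) - 1);
    D1 = abs(2.*rand(1,n).*trio(2,:) - rand(1,n).*trio(1,:));  % Eq. (15)
    D2 = abs(2.*rand(1,n).*trio(3,:) - rand(1,n).*trio(1,:));
    xNew = x - B1.*D1 - B2.*D2;                     % Eq. (16)
end
xNew = min(max(xNew, Xmin), Xmax);                  % clamp to the search bounds
end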

2.1.3. Phase 3: Fishing

Grizzly bears have a strong preference for fish. Every year, thousands of salmon migrate upstream to spawn. These fish provide the bears with the rich fats and proteins they need to survive. The abundance of fish helps the bears gain the weight they need to survive the winter.
Grizzly bears position themselves along the migratory path of salmon, catching fish as they leap and ascend the river. Each bear occupies a specific location and is capable of catching a certain number of fish per day within a circular fishing area defined by a radius r . With each successful catch, the bear increases the amount of fat in its body. As the cold season progresses and approaches its end, the number of fish decreases and the bear’s fat is closer to the amount needed for hibernation. Therefore, we model the following:
x_bear-fishing(t+1) = (1 + F)·x_bear-fishing(t),  (19)
F = η·γ·cos(2·π·r_1f),  (20)
where x_bear-fishing(t) is the current population member updated after Phase 2, corresponding to a bear engaged in fishing to increase its fat reserves. The term cos(2·π·r_1f) models the circular fishing area, where r_1f is a random value in [0, 1]. The parameter γ lies in [0, 1] and denotes a decay factor that decreases linearly from 1 to 0 over the iterations. The constant η is set to 0.3.
An adult grizzly bear catches about 25 fish per day. To account for the daily fish catch, f denotes the number of fishing attempts (repetitions) per day for each bear to increase its fat. Therefore, we express the fishing phase as follows:
x_bear-fishing^(t+1)(f) = (1 + F)·x_bear-fishing^(t)(f),  f = {1, 2, …, 25},  (21)
where the maximum value of f equals 25. In general, in the fishing phase, each population member is updated 25 times around its current position in each iteration. After each fishing attempt, if the newly generated solution (position) yields a better fitness value, it replaces the previous one.
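The fishing phase for one bear can be sketched in MATLAB as follows, consistent with Appendix A; the names gbfio_fishing_step and costFcn are illustrative. Each of the 25 attempts is a greedy local perturbation around the current position.
function [x, fx] = gbfio_fishing_step(x, fx, costFcn, eta, gamma, Xmin, Xmax)
% Phase 3 (fishing) update of one bear, following Equations (19)-(21).
% eta = 0.3; gamma decays linearly from 1 to 0 over the iterations.
for f = 1:25
    F = eta * gamma * cos(2*pi*rand);   % Eq. (20): circular fishing area
    xTry = (1 + F) .* x;                % Eq. (19): perturb the position
    xTry = min(max(xTry, Xmin), Xmax);  % clamp to the search bounds
    fTry = costFcn(xTry);
    if fTry < fx                        % keep the catch only if it improves
        x = xTry;
        fx = fTry;
    end
end
end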

2.2. Pseudo-Code and Execution Procedure of the Proposed GBFIO Algorithm

In this section, the execution procedure of the proposed GBFIO algorithm is presented step by step as follows:
Step 1: Set the input values, including the number of variables (n), the maximum and minimum values of the variables, the maximum iteration number (Iter_max), and the population size (N). Initialize the GBFIO algorithm parameters, including the fishing number (f = 25) and η = 0.3. Finally, set the maximum values of the following parameters: α = 2.5, ρ = 1.25, γ = 1.
Step 2: Generate the initial population randomly within the range of variable values as follows:
X̄_pop = rand(1, n)·(X_max − X_min) + X_min,  (22)
where X_max and X_min are the maximum and minimum values of the variables, and rand is a random vector in [0, 1]. The objective function is then computed for each population member. This procedure is repeated for each member to generate the initial population.
Step 3: Start the iteration and set iteration = iteration + 1.
Step 4: Start the searching phase by selecting the top five best members, based on their objective function values, as fish, honey, shells, corpses, and plants, respectively. Compute the entire new searching population based on Equations (1)–(11) (each population member acts as a bear searching for food).
Step 5: Compute the objective function for the entire new computed population in Step 4. If the objective function of each new searching population is better than the initial population, then update the initial population by the new searching population that is computed in Step 4.
Step 6: Start the Hunt and Care Phases.
Step 6.1—Hunt Phase: Select the best-performing member of the population as the prey, which will provide the most fat for the bears. Compute the entire new hunting population based on Equations (12)–(14).
Step 6.2—Care Phase: Select three random members of the population. Among them, identify the best-performing one as the coyote and the other two as bear cubs. Compute the entire new care population based on Equations (15)–(17).
Step 6.3—Select the new population of the Hunt and Care Phases based on Equation (18). If the generated random number is lower than 0.7, then the new hunt and care population ( x b e a r h u n t i n g c a r e ) is selected from the hunting phase (Step 6.1); if the generated random number is more than 0.7, then the new Hunt and Care population ( x b e a r h u n t i n g c a r e ) is selected from the care phase (Step 6.2).
Step 7: Compute the objective function for the entire new computed population in Step 6.3. If the objective function of each new Hunt and Care population ( x b e a r h u n t i n g c a r e ) is better than the updated initial population, then replace the population with the new Hunt and Care population ( x b e a r h u n t i n g c a r e ) that is computed in Step 6.
Step 8: Start the fishing phase. Each updated population represents a bear that performs fishing 25 times each day. Set the fishing number as f = 1 .
Step 8.1: Compute the new fishing population by Equation (21) and then compute the objective function.
Step 8.2: If the objective function of each fishing population ( x b e a r f i s h i n g ) is better than the updated initial population, then replace the population with the new fishing population ( x b e a r f i s h i n g ).
Step 8.3: If the number of fishing times is less than the maximum fishing number (25), then increment the fishing counter f = f + 1 and return to Step 8.1. Otherwise, go to Step 9.
Step 9: If the iteration number is less than the maximum number of iterations, go to Step 4; otherwise, go to Step 10.
Step 10: Select the best member of the updated population as a solution and then end the procedure.
The pseudo-code of the proposed GBFIO algorithm is described in Table 1. In addition, the flowchart of the GBFIO algorithm is shown in Figure 2.

3. Results and Discussion

3.1. Benchmark Functions

Several experiments are carried out to evaluate the suggested algorithm's effectiveness and provide numerical evidence for the theoretical claims. In total, 31 benchmark functions have been applied to assess the GBFIO; the first 23 test functions are classical benchmark functions of various types [73].
As shown in Table 2, Table 3, Table 4 and Table 5, the benchmark functions are categorized into three groups: unimodal benchmark (UB), high-dimensional multimodal (HDMM), and fixed-dimensional multimodal (FDMM), as well as rotated and shifted benchmark (RSB) functions.
The UB functions, with a single optimum, can be applied to test the exploitation abilities of optimization algorithms. As shown in Table 2, there are seven fixed-dimension and scalable UB functions (CF1 to CF7).
The multimodal benchmark functions present greater challenges compared to UB functions because they contain more than one minimum. Multimodal benchmark functions have been applied to test the exploration abilities of optimizers and their evasion of local minima. The 16 HDMM and FDMM test functions used to test the GBFIO (CF8 to CF23) are listed in Table 3 and Table 4.
In the final group, the RSB functions exhibit higher complexity and follow the composition function paradigm [74]. Table 5 shows the mathematical models for each benchmark function (CF24 to CF31).
In these tables, Dimension indicates the size of the problem, F_min indicates the minimum found in the research, and Range indicates the boundaries of the problem's search space.

3.2. Results

In this subsection, the performance of the proposed GBFIO in solving optimization problems is investigated. Hence, 31 objective functions of diverse kinds (unimodal, HDMM, FDMM, and RSB functions) have been solved using the GBFIO. Table 2, Table 3, Table 4 and Table 5 show the details of the benchmark functions used. Additionally, a comparison is made between the proposed GBFIO optimization algorithm and four popular algorithms: PSO, DE, TLBO, and GWO. Each objective function was run with a maximum of 500 iterations and a population size of 150. The parameter values of the optimization algorithms are as follows: GBFIO (α = 2.5; ρ = 1.25; η = 0.3; fishing number f = 25), PSO (cognitive constant C_1 = 2; social constant C_2 = 2; inertia weight W linearly reduced from 0.9 to 0.1), DE (crossover rate c_r = 0.7; mutation coefficient F = 0.2), TLBO (teaching factor F randomly selected in [1, 2]).
These tests are run on a Windows 10 Pro system with an Intel Core i7-4600U (2.1–2.7 GHz), 12 GB RAM, and Matlab R2019a. The algorithms were run 20 times on each mentioned benchmark function. The statistical outcomes (Average (Ave), Standard Deviation (SD), and the rank of each algorithm in comparison to the others, where Rank-1 is defined based on the average solution and Rank-2 according to the best solution achieved in 20 independent runs) are expressed in Table 6, Table 7, Table 8 and Table 9.
Mean and SD indexes are useful to demonstrate algorithms’ ability to avoid local minima. As the Mean index decreases, the algorithm becomes more likely to find a solution that is close to the global optimum. Despite equal Mean values, each algorithm may perform differently in determining the global optimum. As a result, SD allows for a more accurate comparative analysis. A small SD will result in a lower degree of dispersion of results.
There are three criteria for reporting simulation outcomes: (i) the average of the optimal solutions achieved (Mean), (ii) the standard deviation of the optimal solutions achieved (SD), and (iii) the optimal candidate solution (Best). Equations (23) and (24) are used to calculate Mean and SD.
Mean = (1/N_i)·Σ_{l=1}^{N_i} BCF_l,  (23)
SD = √[(1/N_i)·Σ_{l=1}^{N_i} (BCF_l − Mean)^2],  (24)
in which N_i denotes the number of independent runs and BCF_l is the best candidate solution acquired in the l-th independent run.
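For instance, these two indexes can be computed in MATLAB as follows, where bcf is a placeholder vector holding the best cost of each independent run:
bcf = [0.012, 0.010, 0.015, 0.011];      % placeholder best costs of Ni runs
Ni = numel(bcf);                         % number of independent runs
Mean = sum(bcf) / Ni;                    % Eq. (23)
SD = sqrt(sum((bcf - Mean).^2) / Ni);    % Eq. (24), dividing by Ni as stated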

3.2.1. Assessment of UB Functions

According to Table 2, CF1 to CF7 are unimodal objective functions. The proposed GBFIO algorithm is compared with the four rival algorithms on these functions, and the results for CF1 to CF7 are given in Table 6. Based on the table, the suggested algorithm converges to zero, the global optimum, for CF6 with 100 dimensions. The GBFIO is also the most efficient optimizer for the CF1, CF2, CF3, CF4, CF5, and CF7 functions. The comparisons show that the GBFIO produces considerably better outcomes than its competitors and gets nearer to the global optimum. Figure 3 depicts the 2D versions of selected UB functions, the search history, and the convergence curves for the 100-dimensional versions of the UB functions for the proposed GBFIO algorithm in comparison to the other selected algorithms.

3.2.2. Assessment of HDMM Functions

The proposed GBFIO and the other algorithms are analyzed on the HDMM functions. Six objective functions, CF8 to CF13, are chosen, as described in Table 3. The results of running the GBFIO and the other optimization algorithms on the HDMM functions are reported in Table 7. The proposed GBFIO converges to zero, the global optimum, for CF9 to CF13. The proposed algorithm comes out on top in terms of quasi-optimal solutions for CF8 to CF13, except for CF12, where it achieves the second rank with a solution very close to zero; DE is the best algorithm for CF12, while the solution of the GBFIO is very close to that of DE. According to the simulation results, the proposed GBFIO algorithm has high ability and proficiency in solving this kind of optimization problem.
Figure 4 depicts the 2D versions of selected HDMM functions, the search history, and the convergence curves for the 100-dimensional versions of the HDMM functions for the suggested GBFIO algorithm in comparison to the other selected algorithms.

3.2.3. Assessment of FDMM Functions

As shown in Table 4, CF14 to CF23 evaluate the ability of optimization algorithms to deal with FDMM problems. Table 8 shows the optimization outcomes for each objective function using the GBFIO and the four competing methods. The GBFIO is shown to converge to the global optimum for CF14 to CF23. The GBFIO is also the most efficient optimizer (based on the mean of the best solutions) for solving CF14 to CF18, CF20, and CF21. (For CF19 and CF23, the GBFIO algorithm converges to the solution and achieves the second rank only on the SD criterion.) Consequently, the proposed GBFIO solves the objective functions more efficiently. The simulation outcomes show that the GBFIO performed better than the four rival algorithms in solving FDMM optimization problems. Figure 5 depicts the 2D versions of chosen FDMM functions, the search history, and the convergence curves for the proposed GBFIO algorithm in comparison to the other selected algorithms.

3.2.4. Assessment of RSB Functions

A detailed evaluation is conducted here using the recently introduced RSB functions presented in Table 5 [73,74]. Table 5 describes benchmark functions CF24 to CF31, which follow a composition paradigm. Table 9 shows the outcomes of the GBFIO and the four rival methods in optimizing these objective functions. The GBFIO converges close to the global optimum for CF24 to CF31. The GBFIO is the best optimizer (based on the mean of the best solutions) in solving CF26, CF27, CF29, and CF31. (For CF24 and CF25, the GBFIO algorithm converges to the solution and loses rank only on the SD criterion.) Furthermore, the DE algorithm takes the first rank for CF28 and CF30, while the solution of the GBFIO converges close to it. Consequently, the suggested GBFIO solves the objective functions more efficiently. The simulation outcomes show that the GBFIO performed better than the four rival algorithms in solving RSB optimization problems.
Figure 6 depicts the 2D versions of selected RSB functions, the search history, and the convergence curve for the proposed GBFIO algorithm in comparison to other selected algorithms.

3.3. Statistical Analysis

The Mean and SD indexes can be used to compare and evaluate optimization techniques based on the optimization results of the objective functions. However, it is still possible for one algorithm to appear better than others by chance, even over a number of different executions. This subsection presents a statistical analysis based on the Wilcoxon rank-sum test [75] to demonstrate to what extent the GBFIO is superior to the four rival algorithms. The Wilcoxon rank-sum test is a non-parametric statistical method for comparing two samples; it tests whether the differences between the two samples are statistically significant. The p-value indicator is used in the Wilcoxon rank-sum test to determine the statistical significance of the differences between two algorithms in optimizing the various objective functions. Table 10 presents the outcomes of the simulation for the proposed GBFIO against the four rival algorithms. In the table, a p-value below 0.05 indicates that the proposed GBFIO is significantly superior to the rival algorithm for that objective function.
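As an illustration, such a pairwise test can be run in MATLAB with the ranksum function from the Statistics and Machine Learning Toolbox; bestGBFIO and bestRival below are placeholder vectors standing in for the best costs from the independent runs of the two algorithms.
bestGBFIO = rand(1,20) * 1e-3;   % placeholder results of 20 runs
bestRival = rand(1,20) * 1e-1;   % placeholder results of 20 runs
p = ranksum(bestGBFIO, bestRival);   % Wilcoxon rank-sum p-value
if p < 0.05
    disp('The difference is statistically significant at the 5% level.');
end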

3.4. Comparison-Based Achieved Rank

Here, the proposed GBFIO is compared with GWO, TLBO, DE, and PSO based on the ranks achieved for the average of the best solutions and for the best of the best solutions. The average values of the mentioned ranks in Table 6, Table 7, Table 8 and Table 9 are computed for this comparison. Figure 7 depicts the rank of the optimization algorithms according to the average of the best solutions over the 31 test functions, and Figure 8 depicts the rank based on the best of the best solutions. As can be seen, the GBFIO algorithm achieved the best average ranks (1.273 and 1.227) in this comparison. Hence, the proposed GBFIO is more efficient than the other algorithms in solving the mentioned 31 test functions. From these outcomes, it can be concluded that the suggested GBFIO has high ability and capability in solving optimization problems.

3.5. Sensitivity Analysis

The purpose of the following subsection is to introduce sensitivity evaluations of the proposed GBFIO for two parameters: the population size and the maximum number of iterations.
The sensitivity of the GBFIO's efficiency to the population size is assessed on all thirty-one objective functions for populations of 20, 30, 40, 50, and 60 members (the maximum iteration number is 500). Table 11 presents the results of the GBFIO under the various population sizes, obtained in 20 independent runs. As can be seen in Table 11, the proposed GBFIO tends to converge to more appropriate quasi-optimal solutions as the population size increases, resulting in a decrease in the objective function values as the number of members increases.
The suggested algorithm was also tested on all thirty-one objective functions to examine the sensitivity of its performance to the maximum number of iterations, selected as 50, 100, 200, 500, and 1000 iterations (the population size is 80). The assessment outcomes for the different maximum numbers of iterations over 20 independent runs are presented in Table 12. As can be seen in Table 12, by increasing the number of iterations, the proposed GBFIO converges to solutions that are closer to the global optimum.

3.6. Comparison-Based Maximum Number of Objective Function Calculation Times

To ensure a fair comparison, in this subsection the GBFIO is compared to the four well-known algorithms GWO, TLBO, DE, and PSO with a maximum of 250,000 objective function evaluations. The results of the comparison are presented in Table 13. As can be seen, the suggested GBFIO algorithm reached the first overall rank in both the best of the best solutions and the average of the best solutions over 20 independent runs.

3.7. Comparison of the Proposed GBFIO Algorithm with Other Optimization Algorithms for Shifted and Rotated Unconstrained CEC2017 Test Functions

In this subsection, to assess the performance of the proposed GBFIO algorithm on the CEC2017 test functions, the GBFIO is compared to seven optimization algorithms, GWO, TLBO, DE, PSO, CPA, Tunicate Swarm Algorithm (TSA) [76], and WOA, with a maximum of 300,000 objective function evaluations over 10 independent runs under the same conditions.
As shown in Table 14, the CEC2017 benchmark test functions include thirty functions: three shifted and rotated (S&R) unimodal, seven S&R multimodal, ten S&R hybrid, and ten composition functions [53,55,59]. For all thirty CEC2017 functions, the dimension is set to thirty and the range is [−100, 100].
The results of comparison on CEC2017 test functions are presented in Table 15. As can be seen, the suggested GBFIO algorithm reached the first total rank in the best of the best solutions and average of the best solutions in 10 independent runs.

4. GBFIO in Engineering Problems

This section investigates the algorithm's efficiency using four engineering design problems. A comparison of the GBFIO algorithm with the four popular algorithms GWO, TLBO, DE, and PSO is also conducted to verify the outcomes.
The following subsections use four constrained engineering design problems: TCSD, WBD, PVD, and SRD. Due to the different inequality and equality constraints in these problems, the GBFIO must also include a constraint-handling strategy for optimizing constrained problems. Therefore, in the GBFIO, the simplest constraint-handling strategy, penalty functions, is used to handle the constraints efficiently. When constraints are violated, the search agents receive large objective function values. Simple, scalar penalty functions are used for all the problems.
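The paper states only that simple scalar penalty functions are used; a minimal MATLAB sketch of such handling is shown below, where the quadratic penalty form and the weight lambda are illustrative assumptions (conFcn returns the vector of inequality constraint values g(x) ≤ 0).
function fp = penalized_cost(costFcn, conFcn, x, lambda)
% Scalar static-penalty handling: infeasible solutions receive a large
% objective value proportional to their summed constraint violations.
g = conFcn(x);                                 % inequality constraints g(x) <= 0
fp = costFcn(x) + lambda * sum(max(g, 0).^2);  % quadratic penalty on violations
end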

4.1. TCSD Problem

The optimization considers a spring construction that acts under tension and compression from the load illustrated in Figure 9 [20,77,78]. The weight of a spring reacting to tension or compression under a load should be minimized. Consistent with the formulation below, the design variables x_1, x_2, and x_3 represent the wire diameter, the winding diameter, and the number of active coils, respectively. Optimization is accomplished by minimizing the spring weight described as follows:
Minimize:
f(x) = x_1^2·x_2·(x_3 + 2),  (25)
Subject to:
g_1(x) = 1 − (x_2^3·x_3)/(71785·x_1^4) ≤ 0,
g_2(x) = (4·x_2^2 − x_1·x_2)/(12566·(x_2·x_1^3 − x_1^4)) + 1/(5108·x_1^2) − 1 ≤ 0,
g_3(x) = 1 − (140.45·x_1)/(x_2^2·x_3) ≤ 0,
g_4(x) = (x_1 + x_2)/1.5 − 1 ≤ 0,
Variable range:
0.05 ≤ x_1 ≤ 2.0,  0.25 ≤ x_2 ≤ 1.3,  2.00 ≤ x_3 ≤ 15.0,
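As an example, the TCSD objective of Equation (25) combined with the penalty handling described above can be written in MATLAB as follows (the function name tcsd_cost and the penalty weight lambda are illustrative); this penalized cost can then be minimized by the GBFIO within the stated variable ranges.
function fp = tcsd_cost(x, lambda)
% Penalized TCSD objective of Eq. (25).
% x = [wire diameter, winding diameter, number of active coils].
f = x(1)^2 * x(2) * (x(3) + 2);                                              % spring weight
g = [1 - (x(2)^3 * x(3)) / (71785 * x(1)^4);                                 % g1: deflection
     (4*x(2)^2 - x(1)*x(2)) / (12566*(x(2)*x(1)^3 - x(1)^4)) + 1/(5108*x(1)^2) - 1;  % g2: shear stress
     1 - (140.45 * x(1)) / (x(2)^2 * x(3));                                  % g3: surge frequency
     (x(1) + x(2)) / 1.5 - 1];                                               % g4: outer diameter
fp = f + lambda * sum(max(g, 0).^2);                                         % penalized cost
end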
Table 16 describes the results of the GBFIO, GWO, TLBO, DE, and PSO algorithms, which achieved the best solutions for the TCSD variables. The outcomes show that GBFIO, DE, and PSO achieve the best solution for the TCSD problem, with the objective function equal to 0.012665. The statistical results of the GBFIO algorithm in comparison to the other algorithms on the TCSD problem are shown in Table 17, which shows that the GBFIO achieves better values for the statistical indicators than its competitors. Figure 10 shows the convergence curve of the GBFIO towards the optimal values for the TCSD problem. A similar penalty function is used for all algorithms to make the comparisons accurate.

4.2. WBD Problem

A construction consisting of two parts, a beam and a weld, is considered as an optimization problem, as shown in Figure 11 [19,79,80]. The design variables define the dimensions of the welded part and are optimized to minimize the cost of the weld and the beam material without violating the constraints during optimization. There are four variables in the problem: the weld thickness (x_1), the length of the attached part of the bar (x_2), the bar height (x_3), and the bar thickness (x_4). The mathematical formulation is as follows:
Minimize:
f(x) = 1.10471·x_1^2·x_2 + 0.04811·x_3·x_4·(14 + x_2),  (26)
Subject to:
g_1(x) = τ(x) − τ_max ≤ 0,
g_2(x) = σ(x) − σ_max ≤ 0,
g_3(x) = δ(x) − δ_max ≤ 0,
g_4(x) = x_1 − x_4 ≤ 0,
g_5(x) = P − P_c(x) ≤ 0,
g_6(x) = 0.125 − x_1 ≤ 0,
g_7(x) = 1.10471·x_1^2 + 0.04811·x_3·x_4·(14 + x_2) − 5 ≤ 0,
Variable range:
0.1 ≤ x_1 ≤ 2.0,  0.1 ≤ x_2 ≤ 10,  0.1 ≤ x_3 ≤ 10,  0.1 ≤ x_4 ≤ 2.0,
where τ, δ, σ, and P_c represent the shear stress, the end deflection of the beam, the bending stress in the beam, and the buckling load on the bar, respectively. The other parameters are given in the following:
P = 6000 lb,  L = 14 in,  E = 30×10^6 psi,  G = 12×10^6 psi,
τ_max = 13,600 psi,  σ_max = 3×10^4 psi,  δ_max = 0.25 in,
τ(x) = √((τ′)^2 + 2·τ′·τ″·(x_2/(2·R)) + (τ″)^2),
τ′ = P/(√2·x_1·x_2),  τ″ = (M·R)/J,  M = P·(L + x_2/2),
R = √(x_2^2/4 + ((x_1 + x_3)/2)^2),
J = 2·{√2·x_1·x_2·[x_2^2/12 + ((x_1 + x_3)/2)^2]},
σ(x) = (6·P·L)/(x_4·x_3^2),  δ(x) = (4·P·L^3)/(E·x_3^3·x_4),
P_c(x) = (4.013·E·√(x_3^2·x_4^6/36)/L^2)·(1 − (x_3/(2·L))·√(E/(4·G))),  (27)
Table 18 describes the results of the GBFIO, GWO, TLBO, DE, and PSO algorithms, which achieved the best solutions for the WBD variables. The outcomes show that GBFIO, DE, and PSO achieve the best solution for the WBD problem, with the objective function equal to 1.668085. The statistical results of the GBFIO algorithm in comparison to the other algorithms on the WBD problem are shown in Table 19, which shows that the GBFIO achieves better values for the statistical indicators than its competitors. Figure 12 shows the convergence curve of the GBFIO towards the optimal values for the WBD problem. A similar penalty function is used for all algorithms to make the comparisons accurate.

4.3. PVD Problem

As can be seen in Figure 13, this optimization problem considers a gas storage container with hemispherical heads at either end of a cylindrical body [19,79,80]. Such containers are employed for liquefied gases that need to be kept under pressure to maintain their properties. A maximum pressure of 1000 psi and a minimum volume of 750 ft^3 must be maintained while minimizing the construction weight. There are four variables in the PVD optimization problem: the shell thickness (x_1), the head thickness (x_2), the inner radius (x_3), and the length of the cylindrical section excluding the head (x_4). The mathematical formulation of the PVD problem is described below:
Minimize:
f(x) = 0.6224·x_1·x_3·x_4 + 1.7781·x_2·x_3^2 + 3.1661·x_1^2·x_4 + 19.84·x_1^2·x_3,  (28)
Subject to:
g_1(x) = −x_1 + 0.0193·x_3 ≤ 0,
g_2(x) = −x_2 + 0.00954·x_3 ≤ 0,
g_3(x) = −π·x_3^2·x_4 − (4/3)·π·x_3^3 + 1,296,000 ≤ 0,
g_4(x) = x_4 − 240 ≤ 0,
Variable range:
0 ≤ x_1 ≤ 99,  0 ≤ x_2 ≤ 99,  10 ≤ x_3 ≤ 200,  10 ≤ x_4 ≤ 200,
Table 20 presents the results of the GBFIO, GWO, TLBO, DE, and PSO algorithms, which achieved the best solutions for the PVD variables. The outcomes show that the PSO and GBFIO algorithms achieve the first and second ranks in finding the best solution for the PVD problem, respectively. As can be seen, the GBFIO achieved the second rank by only a small margin; as shown in Table 21, the GBFIO algorithm has a low SD, highlighting its robustness and consistency in finding optimal solutions. The statistical results of the GBFIO algorithm in comparison with the other algorithms on the PVD problem are shown in Table 21, which shows that the GBFIO achieves better values for the statistical indicators than its competitors. Figure 14 shows the convergence curve of the GBFIO towards the optimal values for the PVD problem. A similar penalty function is used for all algorithms to make the comparisons accurate.

4.4. SRD Problem

There are many applications for speed reducers, which are parts of gearboxes within mechanical systems. The SRD problem involves seven design variables, which makes it more difficult [81,82]. Figure 15 presents a diagram of a speed reducer, including its design variables: the face width (x_1), the tooth module (x_2), the number of teeth on the pinion (x_3), the distance between bearings on the first shaft (x_4), the distance between bearings on the second shaft (x_5), the diameter of the first shaft (x_6), and the diameter of the second shaft (x_7).
The goal is to minimize the speed reducer's overall weight whilst meeting eleven constraints. These constraints consist of restrictions on the bending stress of the gear teeth, the surface stress, the transverse deflections of the first and second shafts caused by the transmitted force, and the stresses in the first and second shafts. The mathematical formulation of the SRD problem is presented in the following:
Minimize:
f(x) = 0.7854·x_1·x_2^2·(3.3333·x_3^2 + 14.9334·x_3 − 43.0934) − 1.508·x_1·(x_6^2 + x_7^2) + 7.4777·(x_6^3 + x_7^3) + 0.7854·(x_4·x_6^2 + x_5·x_7^2),  (29)
Subject to:
g_1(x) = 27/(x_1·x_2^2·x_3) − 1 ≤ 0,
g_2(x) = 397.5/(x_1·x_2^2·x_3^2) − 1 ≤ 0,
g_3(x) = (1.93·x_4^3)/(x_2·x_3·x_6^4) − 1 ≤ 0,
g_4(x) = (1.93·x_5^3)/(x_2·x_3·x_7^4) − 1 ≤ 0,
g_5(x) = (1/(110·x_6^3))·√((745·x_4/(x_2·x_3))^2 + 16.9×10^6) − 1 ≤ 0,
g_6(x) = (1/(85·x_7^3))·√((745·x_5/(x_2·x_3))^2 + 157.5×10^6) − 1 ≤ 0,
g_7(x) = (x_2·x_3)/40 − 1 ≤ 0,
g_8(x) = (5·x_2)/x_1 − 1 ≤ 0,
g_9(x) = x_1/(12·x_2) − 1 ≤ 0,
g_10(x) = (1.5·x_6 + 1.9)/x_4 − 1 ≤ 0,
g_11(x) = (1.1·x_7 + 1.9)/x_5 − 1 ≤ 0,
Variable range:
2.6 ≤ x_1 ≤ 3.6,  0.7 ≤ x_2 ≤ 0.8,  17 ≤ x_3 ≤ 28,  7.3 ≤ x_4 ≤ 8.3,  7.3 ≤ x_5 ≤ 8.3,  2.9 ≤ x_6 ≤ 3.9,  5.0 ≤ x_7 ≤ 5.5,
Table 22 shows the results of the GBFIO, GWO, TLBO, DE, and PSO algorithms, which achieved the best solutions for the SRD variables. The outcomes show that the GBFIO and DE algorithms jointly achieve the first rank in finding the best solution for the SRD problem. As shown in Table 23, the GBFIO algorithm has a low SD equal to 3.02268 × 10^−10 and is robust in finding the best solution. The statistical results of the GBFIO algorithm in comparison to the other algorithms on the SRD problem are shown in Table 23, which shows that the GBFIO achieves better values for the statistical indicators than its competitors. Figure 16 shows the convergence curve of the GBFIO towards the optimal values for the SRD problem. A similar penalty function is used for all algorithms to make the comparisons accurate.

5. Conclusions

This paper presents and illustrates a novel nature-inspired optimization algorithm inspired by the behavior of grizzly bears, which increase their fat by hunting, fishing, and eating grass, honey, etc. to survive the winter. An evaluation on 31 benchmark functions was conducted to assess the GBFIO's efficiency in optimization. A comparison is made between the GBFIO's performance and that of four widely used algorithms: GWO, TLBO, DE, and PSO. A comparative analysis of the achieved ranks, based on both the average of the best solutions and the best of the best solutions, shows the superior performance of the proposed GBFIO algorithm. For the 31 test functions, the average rank based on the best solutions achieved by GBFIO, GWO, TLBO, DE, and PSO is 1.273, 2.659, 3.818, 2.977, and 3.591, respectively. Also, the ranks of the optimization algorithms based on the best of the best solutions for the 31 test functions for GBFIO, GWO, TLBO, DE, and PSO are 1.227, 2.000, 2.864, 2.295, and 3.068, respectively. Furthermore, for the thirty functions of the CEC2017 benchmark, the ranks of the optimization algorithms according to the average of the best solutions for GBFIO, GWO, TLBO, DE, PSO, CPA, TSA, and WOA are 2.233, 3.933, 6.733, 2.933, 4.733, 4, 7.8, and 3.633, respectively. Also, the ranks based on the best of the best solutions for GBFIO, GWO, TLBO, DE, PSO, CPA, TSA, and WOA are 2.533, 4.1, 6.767, 2.833, 4.9, 3.533, 7.967, and 3.367, respectively. Hence, the proposed GBFIO is more efficient than the other algorithms in solving the abovementioned 61 test functions.
The proposed GBFIO tends to converge to more appropriate quasi-optimal solutions as the population size increases. Also, by increasing the number of iterations, the proposed GBFIO converges to solutions closer to the global optimum. Hence, the suggested algorithm is sensitive to the population size and the iteration number; increasing either also increases the computation time.
According to the outcomes of the simulations, the GBFIO generates effective optimization solutions through a combination of exploration in global search and exploitation in local search. In the optimization applications, the GBFIO demonstrated superior performance over the other algorithms. Additionally, the GBFIO was implemented to solve four engineering design optimization problems, demonstrating that the GBFIO can handle constrained optimization problems in practice with a low standard deviation, which is indicative of its ability to reach quasi-optimal solutions.
For further research and future studies, diverse research aspects could be considered, consisting of the design of the multi-objective and binary version of the GBFIO, especially in solving optimization problems in the energy management of smart grids and smart cities; it can also be used for the PMU placement. This paper also proposes using the GBFIO in solving optimization problems in diverse areas of science, suggesting a variety of practical applications.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biomimetics10060379/s1.

Author Contributions

Conceptualization, data curation, formal analysis, software, investigation, resources, and writing—original draft carried out by M.D.; project administration and supervision carried out by M.A. and J.R.; writing—review and editing, validation, visualization, and methodology carried out by M.D., M.A., J.R., E.S. and G.J. Funding acquisition carried out by E.S. All authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the Research Office at the University of South Florida, Sarasota-Manatee Campus, from the Pioneer and IRG Interdisciplinary Research Grants awarded to Ehsan Sheybani.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the first and corresponding authors. The Matlab source code for the GBFIO algorithm is provided in Appendix A and the Supplementary File.

Acknowledgments

This work was supported by La Agencia Nacional de Investigación y Desarrollo (ANID), Chile Fondo Nacional de Desarrollo Científico y Tecnológico (FONDECYT) de Postdoctorado 2025 under Grant 3250347; in part by ANID, Chile FONDECYT Iniciación under Grant 11230430; and in part by the Solar Energy Research Center (SERC), Chile, under Grant ANID/FONDAP/1523A0006. The work of Jose Rodriguez was supported by the Project ANID under Grant AFB240002.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this appendix, the Matlab source code of the GBFIO algorithm for the first objective function is presented as follows:
clc
clear all
pack
%%%%%%%%%% n is dimension of objective function
n = 30;
%%%%%%%%%% Define the Range of Variables
Xmax = ones(1,n)*100;
Xmin = ones(1,n)*(-100);
%%%%%%%%%% N is the Population Size
N = 150;
%%%%%%%%%% Maximum Number of Iterations
Iter_max = 500;
%%%%%%%%%% Define the GBFIO parameters
f = 2.5;
eta = 1.25;
Zeta = 0.3;
fishing_number = 25;
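%%%%%%%%%% Note: the script variables f, eta, and Zeta correspond to the
%%%%%%%%%% paper's symbols alpha (2.5), rho (1.25), and eta (0.3), respectively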
%%%%%%%%%% Generate the Initial Populations
for i = 1:N
  Ipop(i,:) = rand(1,n).*(Xmax - Xmin) + Xmin;
  cost(i,:) = sum(Ipop(i,:).^2);
end
%%%%%%%%%% Sort the Initial Populations Based on Cost Function
Ipop_mix = [Ipop,cost];
Ipop_sort = sortrows(Ipop_mix,n + 1);
%%%%%%%%%% Specify the Fish, Honey, Body, Shell, and Plants
Fish = Ipop_sort(1,:);
Honey = Ipop_sort(2,:);
Body = Ipop_sort(3,:);
Shell = Ipop_sort(4,:);
Plants = Ipop_sort(5,:);
%%%%%%%%%% Start the Iterations for Updating Populations
for iter = 1:Iter_max
  for i = 1:N
    %%%%%%%%%% Start the Searching Phase
    r1 = rand(1,n);
    deltax_fish = (2.*r1.*((r1).^0.5)).*(Fish(1,[1:n]) - Ipop(i,:));
    deltax_honey = (r1.*((r1).^0.5)).*(Honey(1,[1:n]) - Ipop(i,:));
    deltax_body = (0.5.*r1.*((r1).^0.5)).*(Body(1,[1:n]) - Ipop(i,:));
    deltax_shell = (0.25.*r1.*((r1).^0.5)).*(Shell(1,[1:n]) - Ipop(i,:));
    deltax_plants = (0.125.*r1.*((r1).^0.5)).*(Plants(1,[1:n]) - Ipop(i,:));
    Xnew_search = Ipop(i,:) + deltax_fish + deltax_honey + deltax_body + deltax_shell + deltax_plants;
    Xnew_search = min(Xnew_search,Xmax);
    Xnew_search = max(Xnew_search,Xmin);
    cost_S = sum(Xnew_search.^2);
    %%%%%%%%%% Update the Population
    if cost_S < cost(i,1)
      Ipop(i,:) = Xnew_search;
      cost(i,1) = cost_S;
    end
    %%%%%%%%%% End the Searching Phase
    %%%%%%%%%% Start the Hunting Phase
    %%%%% Start the Hunting Phase: Bear Hunting
    Ipop_mix = [Ipop,cost];
    Ipop_sort = sortrows(Ipop_mix,n + 1);
    Prey = Ipop_sort(1,:);
    A = (f*(1 - iter/Iter_max)).*(2.*rand(1,n) - 1);
    D = abs(((2.*rand(1,n)).*Ipop(i,:)) - (rand(1,n).*Prey(1,[1:n])));
    X_hunt = Ipop(i,:) - A.*D;
    X_hunt = min(X_hunt,Xmax);
    X_hunt = max(X_hunt,Xmin);
    cost_hunt = sum(X_hunt.^2);
    %%%%% Start the Hunting Phase: Coyote Hunting
    L = randperm(N);
    LLL = L(L ~= i); % candidate indices, excluding the current bear i
    J1 = LLL(1);
    J2 = LLL(2);
    J3 = LLL(3);
    I_child = [Ipop(J1,:),cost(J1,1);Ipop(J2,:),cost(J2,1);Ipop(J3,:),cost(J3,1)];
    I_chsort = sortrows(I_child,n + 1);
    A1 = (eta*(1 - iter/Iter_max)).*(2.*rand(1,n) - 1);
    A2 = (eta*(1 - iter/Iter_max)).*(2.*rand(1,n) - 1);
    D1 = abs(((2.*rand(1,n)).*I_chsort(2,1:n)) - (rand(1,n).*I_chsort(1,1:n)));
    D2 = abs(((2.*rand(1,n)).*I_chsort(3,1:n)) - (rand(1,n).*I_chsort(1,1:n)));
    X_ch = Ipop(i,:) - (A1.*D1 + A2.*D2);
    X_ch = min(X_ch,Xmax);
    X_ch = max(X_ch,Xmin);
    cost_ch = sum(X_ch.^2);
    %%%%%%%%%% Update the Population
    G = rand(1,1);
    if G < 0.75
      if cost_hunt < cost(i,1)
        Ipop(i,:) = X_hunt;
        cost(i,1) = cost_hunt;
      end
    else
      if cost_ch < cost(i,1)
        Ipop(i,:) = X_ch;
        cost(i,1) = cost_ch;
      end
    end
    %%%%%%%%%% End the Hunting Phase
    %%%%%%%%%% Start the Fishing Phase
    for kk = 1:fishing_number
      Z = Zeta*(1 - iter/Iter_max)*cos(2*pi*rand(1,1));
      X_fish = (1 + Z).*Ipop(i,:);
      X_fish = min(X_fish,Xmax);
      X_fish = max(X_fish,Xmin);
      cost_fishing = sum(X_fish.^2);
      %%%%%%%%%% Update the Population
      if cost_fishing < cost(i,1)
        Ipop(i,:) = X_fish;
        cost(i,1) = cost_fishing;
      end
    end
    %%%%%%%%%% Specify the Fish, Honey, Body, Shell, and Plants for the Next Iteration
    Ipop_mix = [Ipop,cost];
    Ipop_sort = sortrows(Ipop_mix,n + 1);
    Fish = Ipop_sort(1,:);
    Honey = Ipop_sort(2,:);
    Body = Ipop_sort(3,:);
    Shell = Ipop_sort(4,:);
    Plants = Ipop_sort(5,:);
  end
  %%%%%%%%%% Specify the Parameters of Convergence Curve
  Convergence_Curve(iter,1) = iter;
  Convergence_Curve(iter,2) = Ipop_sort(1,n + 1);
end
%%%%%%%%%% Print the Best Solution and Plot the Convergence Curve
Best = Ipop_sort(1,:)
plot(Convergence_Curve(:,1),Convergence_Curve(:,2))
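The listing above evaluates CF1 inline at each of the three update phases. To reuse the script for another benchmark, the cost evaluations can be routed through a single function handle; the fragment below is a minimal sketch under our own naming (the handle costfun is not part of the original listing), shown for CF9 (Rastrigin) from Table 3:
% Minimal sketch: replace every "sum(X.^2)" in the script with costfun(X).
% costfun is a hypothetical handle, here set to CF9 (Rastrigin).
costfun = @(X) sum(X.^2 - 10*cos(2*pi*X) + 10);
costfun(zeros(1,30))   % returns 0, the global minimum of CF9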

References

  1. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Chapter 10—Metaheuristic algorithms: A comprehensive review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Elsevier: Amsterdam, The Netherlands, 2018; pp. 185–231. [Google Scholar] [CrossRef]
  2. Dehghani, M.; Hubálovský, Š.; Trojovský, P. Cat and mouse based optimizer: A new nature-inspired optimization algorithm. Sensors 2021, 21, 5214. [Google Scholar] [CrossRef] [PubMed]
  3. Theodorakatos, N.P.; Babu, R.; Theodoridis, C.A.; Moschoudis, A.P. Mathematical models for the single-channel and multi-channel PMU allocation problem and their solution algorithms. Algorithms 2024, 17, 191. [Google Scholar] [CrossRef]
  4. Theodorakatos, N.P.; Moschoudis, A.P.; Lytras, M.D.; Kantoutsis, K.T. Research on optimization procedure of PMU positioning problem achieving maximum observability based on heuristic algorithms. AIP Conf. Proc. 2023, 2872, 120032. [Google Scholar]
  5. GhasemiGarpachi, M.; Dehghani, M.; Aly, M.; Rodriguez, J. Multi-Objective Improved Differential Evolution Algorithm-Based Smart Home Energy Management System Considering Energy Storage System, Photovoltaic, and Electric Vehicle. IEEE Access. 2025, 13, 89946–89966. [Google Scholar] [CrossRef]
  6. Zamani, H.; Nadimi-Shahraki, M.H.; Mirjalili, S.; Soleimanian Gharehchopogh, F.; Oliva, D. A critical review of moth-flame optimization algorithm and its variants: Structural reviewing, performance evaluation, and statistical analysis. Arch. Comput. Methods Eng. 2024, 31, 2177–2225. [Google Scholar] [CrossRef]
  7. Pardalos, P.M.; Mavridou, T.D. Simulated annealing. In Encyclopedia of Optimization; Springer International Publishing: Cham, Switzerland, 2024; pp. 1–3. [Google Scholar]
  8. Hansen, P.; Mladenović, N.; Todosijević, R.; Hanafi, S. Variable neighborhood search: Basics and variants. EURO J. Comput. Optim. 2017, 5, 423–454. [Google Scholar] [CrossRef]
  9. Resende, M.G.; Ribeiro, C.C. Greedy randomized adaptive search procedures: Advances and extensions. In Handbook of Metaheuristics, Part of the Book Series: International Series in Operations Research & Management Science; Springer: Cham, Switzerland, 2019; Volume 272, pp. 169–220. [Google Scholar]
  10. Al-Betar, M.A.; Aljarah, I.; Awadallah, M.A.; Faris, H.; Mirjalili, S. Adaptive β-hill climbing for optimization. Soft Comput. 2019, 23, 13489–13512. [Google Scholar] [CrossRef]
  11. Hoos, H.H.; Stϋtzle, T. Stochastic local search. In Handbook of Approximation Algorithms and Metaheuristics; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018; pp. 297–307. [Google Scholar]
  12. Prajapati, V.K.; Jain, M.; Chouhan, L. Tabu search algorithm (TSA): A comprehensive survey. In Proceedings of the 2020 3rd International Conference on Emerging Technologies in Computer Engineering: Machine Learning and Internet of Things (ICETCE), Jaipur, India, 7–8 February 2020; pp. 1–8. [Google Scholar]
  13. Voudouris, C.; Tsang, E.P.; Alsheddy, A. Guided local search. In Handbook of Metaheuristics; Springer: Boston, MA, USA, 2010; pp. 321–361. [Google Scholar]
  14. Koutsoukis, N.C.; Manousakis, N.M.; Georgilakis, P.S.; Korres, G.N. Numerical observability method for optimal phasor measurement units placement using recursive Tabu search method. IET Gener. Transm. Distrib. 2013, 7, 347–356. [Google Scholar] [CrossRef]
  15. Shehab, M.; Abualigah, L.; Al Hamad, H.; Alabool, H.; Alshinwan, M.; Khasawneh, A.M. Moth–flame optimization algorithm: Variants and applications. Neural Comput. Appl. 2020, 32, 9859–9884. [Google Scholar] [CrossRef]
  16. Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558. [Google Scholar] [CrossRef]
  17. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
  18. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  19. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  21. Kramer, O.; Kramer, O. Genetic Algorithms; Springer International Publishing: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  22. Guo, W.; Chen, M.; Wang, L.; Mao, Y.; Wu, Q. A survey of biogeography-based optimization. Neural Comput. Appl. 2017, 28, 1909–1926. [Google Scholar] [CrossRef]
  23. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020, 32, 12363–12379. [Google Scholar] [CrossRef]
  24. Jin, Y.; Wang, H.; Chugh, T.; Guo, D.; Miettinen, K. Data-driven evolutionary optimization: An overview and case studies. IEEE Trans. Evol. Comput. 2018, 23, 442–458. [Google Scholar] [CrossRef]
  25. Yang, X.S.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  26. Hussien, A.G.; Amin, M.; Wang, M.; Liang, G.; Alsanad, A.; Gumaei, A.; Chen, H. Crow search algorithm: Theory, recent advances, and applications. IEEE Access 2020, 8, 173548–173565. [Google Scholar] [CrossRef]
  27. Ranjan, R.K.; Kumar, V. A systematic review on fruit fly optimization algorithm and its applications. Artif. Intell. Rev. 2023, 56, 13015–13069. [Google Scholar] [CrossRef]
  28. Kumar, V.; Kumar, D. A systematic review on firefly algorithm: Past, present, and future. Arch. Comput. Methods Eng. 2021, 28, 3269–3291. [Google Scholar] [CrossRef]
  29. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  30. Assiri, A.S.; Hussien, A.G.; Amin, M. Ant lion optimization: Variants, hybrids, and applications. IEEE Access 2020, 8, 77746–77764. [Google Scholar] [CrossRef]
  31. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  32. Guerrero-Luis, M.; Valdez, F.; Castillo, O. A review on the cuckoo search algorithm. In Fuzzy Logic Hybrid Extensions of Neural and Optimization Algorithms: Theory and Applications, Studies in Computational Intelligence; Castillo, O., Melin, P., Eds.; Springer: Cham, Switzerland, 2021; Volume 940, pp. 113–124. [Google Scholar]
  33. Meraihi, Y.; Gabis, A.B.; Mirjalili, S.; Ramdane-Cherif, A. Grasshopper optimization algorithm: Theory, variants, and applications. IEEE Access 2021, 9, 50001–50024. [Google Scholar] [CrossRef]
  34. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  35. Dubey, M.; Kumar, V.; Kaur, M.; Dao, T.P. A systematic review on harmony search algorithm: Theory, literature, and applications. Math. Probl. Eng. 2021, 2021, 5594267. [Google Scholar] [CrossRef]
  36. Oladejo, S.O.; Ekwe, S.O.; Akinyemi, L.A.; Mirjalili, S.A. The deep sleep optimiser: A human-based metaheuristic approach. IEEE Access. 2023, 11, 83639–83665. [Google Scholar] [CrossRef]
  37. Zou, F.; Chen, D.; Xu, Q. A survey of teaching–learning-based optimization. Neurocomputing 2019, 335, 366–383. [Google Scholar] [CrossRef]
  38. He, S.; Wu, Q.H.; Saunders, J.R.. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Trans. Evol. Comput. 2009, 13, 973–990. [Google Scholar] [CrossRef]
  39. Kaveh, A.; Kaveh, A. Imperialist competitive algorithm. In Advances in Metaheuristic Algorithms for Optimal Design of Structures; Springer: Cham, Switzerland, 2017; pp. 353–373. [Google Scholar]
  40. Erol, O.K.; Eksin, I. A new optimization method: Big bang–big crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  41. Abedinpourshotorban, H.; Shamsuddin, S.M.; Beheshti, Z.; Jawawi, D.N. Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm. Swarm Evol. Comput. 2016, 26, 8–22. [Google Scholar] [CrossRef]
  42. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  43. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  44. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  45. Hashemi, A.; Dowlatshahi, M.B.; Nezamabadi-Pour, H. Gravitational search algorithm: Theory, literature review, and applications. In Handbook of AI-Based Metaheuristics; CRC Press: Boca Raton, FL, USA, 2021; pp. 119–150. [Google Scholar]
  46. Kaveh, A.; Kaveh, A. Water evaporation optimization algorithm. In Advances in Metaheuristic Algorithms for Optimal Design of Structures; Springer: Cham, Switzerland, 2017; pp. 489–509. [Google Scholar]
  47. Kaveh, A.; Motie Share, M.A.; Moslehi, M. Magnetic charged system search: A new meta-heuristic algorithm for optimization. Acta Mech. 2013, 224, 85–107. [Google Scholar] [CrossRef]
  48. Formato, R.A. Central force optimization: A new deterministic gradient-like optimization metaheuristic. Opsearch 2009, 46, 25–51. [Google Scholar] [CrossRef]
  49. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  50. Zitouni, F.; Harous, S.; Maamri, R. The solar system algorithm: A novel metaheuristic method for global optimization. IEEE Access 2020, 9, 4542–4565. [Google Scholar] [CrossRef]
  51. Kashan, A.H. A new metaheuristic for optimization: Optics inspired optimization (OIO). Comput. Oper. Res. 2015, 55, 99–125. [Google Scholar] [CrossRef]
  52. Kiran, M.S. TSA: Tree-seed algorithm for continuous optimization. Expert Syst. Appl. 2015, 42, 6686–6698. [Google Scholar] [CrossRef]
  53. Beşkirli, A.; Dağ, İ.; Kiran, M.S. A tree seed algorithm with multi-strategy for parameter estimation of solar photovoltaic models. Appl. Soft Comput. 2024, 167, 112220. [Google Scholar] [CrossRef]
  54. Ong, K.M.; Ong, P.; Sia, C.K. A carnivorous plant algorithm for solving global optimization problems. Appl. Soft Comput. 2021, 98, 106833. [Google Scholar] [CrossRef]
  55. Beşkirli, A.; Dağ, İ. I-CPA: An improved carnivorous plant algorithm for solar photovoltaic parameter identification problem. Biomimetics 2023, 8, 569. [Google Scholar] [CrossRef] [PubMed]
  56. Yang, X.S. Flower pollination algorithm for global optimization. In International Conference on Unconventional Computing and Natural Computation; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249. [Google Scholar]
  57. Ehteram, M.; Seifi, A.; Banadkooki, F.B. Sunflower optimization algorithm. In Hellenic Conference on Artificial Intelligence; Springer Nature Singapore: Singapore, 2022; pp. 43–47. [Google Scholar]
  58. Zitouni, F.; Harous, S.; Mirjalili, S.; Mohamed, A.; Bouaicha, H.A.; Mohamed, A.W.; Limane, A.; Lakbichi, R.; Ferhat, A. The Walking Palm Tree algorithm: A new Metaheuristic Algorithm for Solving Optimization Problems. In Proceedings of the International Conference on Advanced Intelligent Systems for Sustainable Development (ICEIS 2024), Aflou, Algeria, 26–27 June 2024. [Google Scholar]
  59. Chu, S.C.; Feng, Q.; Zhao, J.; Pan, J.S. BFGO: Bamboo forest growth optimization algorithm. J. Internet Technol. 2023, 24, 314. [Google Scholar]
  60. Dalirinia, E.; Jalali, M.; Yaghoobi, M.; Tabatabaee, H. Lotus effect optimization algorithm (LEA): A lotus nature-inspired algorithm for engineering design optimization. J. Supercomput. 2024, 80, 761–799. [Google Scholar] [CrossRef]
  61. Mergos, P.E.; Yang, X.S. Flower pollination algorithm with pollinator attraction. Evol. Intell. 2023, 16, 873–889. [Google Scholar] [CrossRef]
  62. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  63. Dehghani, M.; Montazeri, Z.; Hubálovský, Š. GMBO: Group mean-based optimizer for solving various optimization problems. Mathematics 2021, 9, 1190. [Google Scholar] [CrossRef]
  64. Dehghani, M.; Bornapour, S.M.; Sheybani, E. Enhanced Energy Management System in Smart Homes Considering Economic, Technical, and Environmental Aspects: A Novel Modification-Based Grey Wolf Optimizer. Energies 2025, 18, 1071. [Google Scholar] [CrossRef]
  65. Yang, X.S. Engineering Optimization: An Introduction with Metaheuristic Applications; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  66. Dehghani, M.; Bornapour, S.M. Smart Homes Energy Management: Optimal Multi-Objective Appliance Scheduling Model Considering Electrical Energy Storage and Renewable Energy Resources. Heliyon 2025, 11, e42417. [Google Scholar] [CrossRef] [PubMed]
  67. Shokri, M.; Niknam, T.; Mohammadi, M.; Dehghani, M.; Siano, P.; Ouahada, K.; Sarvarizade-Kouhpaye, M. A novel stochastic framework for optimal scheduling of smart cities as an energy hub. IET Gener. Transm. Distrib. 2024, 18, 2421–2434. [Google Scholar] [CrossRef]
  68. Shokri, M.; Niknam, T.; Sarvarizade-Kouhpaye, M.; Pourbehzadi, M.; Javidi, G.; Sheybani, E.; Dehghani, M. A Novel Optimal Planning and Operation of Smart Cities by Simultaneously Considering Electric Vehicles, Photovoltaics, Heat Pumps, and Batteries. Processes 2024, 12, 1816. [Google Scholar] [CrossRef]
  69. Alastair Fothergill and Keith Scholey (Directors), Bears, 2014, Disney Nature. Available online: https://www.imdb.com/title/tt2458776/ (accessed on 1 March 2024).
  70. MJLR McCoy (Directors). Grizzly Bears: The Drama of the Alaska Salmon Run. 2015. Available online: https://www.youtube.com/watch?v=HeEyJo838PA (accessed on 1 October 2024).
  71. Gunther, K.A.; Shoemaker, R.R.; Frey, K.L.; Haroldson, M.A.; Cain, S.L.; Van Manen, F.T.; Fortin, J.K. Dietary breadth of grizzly bears in the Greater Yellowstone Ecosystem. Ursus 2014, 25, 60–72. [Google Scholar] [CrossRef]
  72. Munro, R.H.; Nielsen, S.E.; Price, M.H.; Stenhouse, G.B.; Boyce, M.S. Seasonal and diel patterns of grizzly bear diet and activity in west-central Alberta. J. Mammal. 2006, 87, 1112–1121. [Google Scholar] [CrossRef]
  73. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  74. Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization. RMIT University, Evolutionary Computation and Machine Learning Group, Australia, Tech. Rep. 2013 Mar. Available online: https://www.epitropakis.co.uk/content/benchmark-functions-cec2013-special-session-and-competition-niching-methods-multimodal (accessed on 1 June 2024).
  75. de Barros, R.S.; Hidalgo, J.I.; de Lima Cabral, D.R. Wilcoxon rank sum test drift detector. Neurocomputing 2018, 275, 1954–1963. [Google Scholar] [CrossRef]
  76. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  77. Coello, C.A.; Montes, E.M. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 2002, 16, 193–203. [Google Scholar] [CrossRef]
  78. Trojovský, P.; Dehghani, M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef]
  79. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  80. Deb, K. Optimal design of a welded beam via genetic algorithms. AIAA J. 1991, 29, 2013–2015. [Google Scholar] [CrossRef]
  81. Gandomi, A.H.; Yang, X.S. Benchmark problems in structural optimization. In Computational Optimization, Methods and Algorithms; Springer: Berlin/Heidelberg, Germany, 2011; pp. 259–281. [Google Scholar]
  82. Lin, M.H.; Tsai, J.F.; Hu, N.Z.; Chang, S.C. Design optimization of a speed reducer using deterministic techniques. Math. Probl. Eng. 2013, 2013, 419043. [Google Scholar] [CrossRef]
Figure 1. A grizzly bear with two cubs and the process of increasing fat.
Figure 2. Flowchart of the suggested GBFIO algorithm.
Figure 3. Results of selected UB functions (100-dimensional): (i) 2D versions of UB functions; (ii) search history; (iii) convergence curves.
Figure 4. Results of the selected HDMM (100-dimensional): (i) 2D versions of UB functions; (ii) search history; (iii) convergence curves.
Figure 5. Results of the selected FDMM: (i) 2D versions of UB functions; (ii) search history; (iii) convergence curves.
Figure 6. Results of the selected RSB functions: (i) 2D versions of UB functions; (ii) search history; (iii) convergence curves.
Figure 7. Rank of optimization methods based on the average of best solutions.
Figure 8. Ranking of optimization methods based on the best of the best solution.
Figure 9. Schematic of TCSD.
Figure 10. Convergence curve of the GBFIO on the TCSD problem.
Figure 11. Schematic of WBD.
Figure 12. Convergence curve of the GBFIO on the WBD problem.
Figure 13. Schematic of PVD.
Figure 14. Convergence curve of the GBFIO on the PVD problem.
Figure 15. Schematic of SRD.
Figure 16. Convergence curve of the GBFIO on the SRD problem.
Table 1. Pseudo code of the proposed GBFIO algorithm.
Start the GBFIO algorithm
Input initial parameters, variables, and constraints
Generate the initial population randomly within the range of variable values based on Equation (22)
Compute the objective function
For iteration = 1 : iteration_max
  For each population member
    Searching Phase: x_fish, x_honey, x_shell, x_corpse, and x_plants are selected as the top five members, respectively
      Calculate the new population based on the searching phase by Equations (1)–(11)
      Compute the objective function
      If the objective function of the new population < prior objective function
        Update the population
      End if
    Hunt and Care Phases:
      Hunt Phase: x_prey is chosen as the best member of the population
        Compute the new hunting population based on the hunting phase by Equations (12)–(14)
      Care Phase: x_coyote, x_cub(1), and x_cub(2) are chosen as three random members (the best of these three members is the coyote, and the other two are bear cubs)
        Compute the entire new care population based on Equations (15)–(17)
      Select Phase: select the new population of the hunt and care phases based on Equation (18):
        x_bear-hunting-care(t+1) = x_bear-hunt(t+1) if β ≤ 0.7
        x_bear-hunting-care(t+1) = x_bear-care(t+1) if β > 0.7
      If the objective function of the new population < prior objective function
        Update the population
      End if
    Fishing Phase: each updated population member is a bear that fishes 25 times each day
      For i = 1 : 25
        Compute the new fishing population by Equation (21)
        Compute the new objective function
        If the objective function of the new population < prior objective function
          Update the population
        End if
      End for
  End for
End for
Select the best member of the updated population as the solution
End the GBFIO algorithm
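The select phase in Table 1 reduces to a single uniform random draw; written out as executable MATLAB (a minimal sketch, where x_hunt and x_care are assumed to hold the hunt- and care-phase candidates from the preceding steps):
% Select phase (Equation (18)): keep the hunt or the care candidate by a random draw.
beta = rand(1,1);
if beta <= 0.7
    x_new = x_hunt;   % keep the bear-hunting candidate
else
    x_new = x_care;   % keep the cub-care candidate
end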
Table 2. Unimodal benchmark functions.
Cost Function | Dimension | Range | $F_{min}$
$CF_1(x)=\sum_{l=1}^{k}x_l^2$ | 30, 100 | [−100, 100] | 0
$CF_2(x)=\sum_{l=1}^{k}|x_l|+\prod_{l=1}^{k}|x_l|$ | 30, 100 | [−10, 10] | 0
$CF_3(x)=\sum_{l=1}^{k}\left(\sum_{q=1}^{l}x_q\right)^2$ | 30, 100 | [−100, 100] | 0
$CF_4(x)=\max_l\{|x_l|,\ 1\le l\le k\}$ | 30, 100 | [−100, 100] | 0
$CF_5(x)=\sum_{l=1}^{k-1}\left[100\left(x_{l+1}-x_l^2\right)^2+\left(x_l-1\right)^2\right]$ | 30, 100 | [−30, 30] | 0
$CF_6(x)=\sum_{l=1}^{k}\left|x_l+0.5\right|^2$ | 30, 100 | [−100, 100] | 0
$CF_7(x)=\sum_{l=1}^{k}l\,x_l^4+\mathrm{random}(0,1)$ | 30, 100 | [−1.28, 1.28] | 0
Table 3. HDMM test functions.
Cost Function | Dimension | Range | $F_{min}$
$CF_8(x)=\sum_{l=1}^{k}x_l\sin\left(\sqrt{|x_l|}\right)$ | 30, 100 | [−500, 500] | −418.9829 × Dimension
$CF_9(x)=\sum_{l=1}^{k}\left[x_l^2-10\cos\left(2\pi x_l\right)+10\right]$ | 30, 100 | [−5.12, 5.12] | 0
$CF_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{k}\sum_{l=1}^{k}x_l^2}\right)-\exp\left(\tfrac{1}{k}\sum_{l=1}^{k}\cos\left(2\pi x_l\right)\right)+20+e$ | 30, 100 | [−32, 32] | 0
$CF_{11}(x)=\tfrac{1}{4000}\sum_{l=1}^{k}x_l^2-\prod_{l=1}^{k}\cos\left(\tfrac{x_l}{\sqrt{l}}\right)+1$ | 30, 100 | [−600, 600] | 0
$CF_{12}(x)=\tfrac{\pi}{k}\left\{10\sin^2\left(\pi y_1\right)+\sum_{l=1}^{k-1}\left(y_l-1\right)^2\left[1+10\sin^2\left(\pi y_{l+1}\right)\right]+\left(y_k-1\right)^2\right\}+\sum_{l=1}^{k}u\left(x_l,10,100,4\right)$, where $y_l=1+\tfrac{x_l+1}{4}$ and $u\left(x_l,\alpha,p,q\right)=\begin{cases}p\left(x_l-\alpha\right)^q, & x_l>\alpha\\ 0, & -\alpha\le x_l\le\alpha\\ p\left(-x_l-\alpha\right)^q, & x_l<-\alpha\end{cases}$ | 30, 100 | [−50, 50] | 0
$CF_{13}(x)=0.1\left\{\sin^2\left(3\pi x_1\right)+\sum_{l=1}^{k}\left(x_l-1\right)^2\left[1+\sin^2\left(3\pi x_l+1\right)\right]+\left(x_k-1\right)^2\left[1+\sin^2\left(2\pi x_k\right)\right]\right\}+\sum_{l=1}^{k}u\left(x_l,5,100,4\right)$ | 30, 100 | [−50, 50] | 0
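As a concreteness check, CF10 (Ackley) from Table 3 can be written as a one-line MATLAB handle and evaluated at its known minimizer (the handle name cf10 is ours):
% CF10 (Ackley) as reconstructed in Table 3; global minimum 0 at the origin.
cf10 = @(x) -20*exp(-0.2*sqrt(mean(x.^2))) - exp(mean(cos(2*pi*x))) + 20 + exp(1);
cf10(zeros(1,30))   % ~0 up to floating-point error, matching F_min = 0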
Table 4. FDMM test functions.
Cost Function | Dimension | Range | $F_{min}$
$CF_{14}(x)=\left(\tfrac{1}{500}+\sum_{l=1}^{25}\tfrac{1}{l+\sum_{k=1}^{2}\left(x_k-a_{kl}\right)^6}\right)^{-1}$, where $a_{kl}$ is the standard 2 × 25 Shekel's foxholes matrix with entries cycling over {−32, −16, 0, 16, 32} | 2 | [−65.53, 65.53] | 0.998
$CF_{15}(x)=\sum_{l=1}^{11}\left[a_l-\tfrac{x_1\left(b_l^2+b_l x_2\right)}{b_l^2+b_l x_3+x_4}\right]^2$, with $a_l=(0.1957, 0.1947, 0.1735, 0.16, 0.0844, 0.0627, 0.0456, 0.0342, 0.0323, 0.0235, 0.0246)$ and $b_l=\left(4, 2, 1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{6}, \tfrac{1}{8}, \tfrac{1}{10}, \tfrac{1}{12}, \tfrac{1}{14}, \tfrac{1}{16}\right)$ | 4 | [−5, 5] | 0.0003
$CF_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$CF_{17}(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$CF_{18}(x)=\left[1+\left(x_1+x_2+1\right)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\left[30+\left(2x_1-3x_2\right)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−5, 5] | 3
$CF_{19}(x)=-\sum_{l=1}^{4}c_l\exp\left(-\sum_{k=1}^{3}a_{lk}\left(x_k-P_{lk}\right)^2\right)$, with $a_{lk}=\begin{bmatrix}3&10&30\\0.1&10&35\\3&10&30\\0.1&10&35\end{bmatrix}$, $c_l=(1, 1.2, 3, 3.2)$, and $P_{lk}=\begin{bmatrix}0.3689&0.117&0.2673\\0.4699&0.4387&0.747\\0.1091&0.8732&0.5547\\0.03815&0.5743&0.8828\end{bmatrix}$ | 3 | [0, 1] | −3.86
$CF_{20}(x)=-\sum_{l=1}^{4}c_l\exp\left(-\sum_{k=1}^{6}a_{lk}\left(x_k-P_{lk}\right)^2\right)$, with $a_{lk}=\begin{bmatrix}10&3&17&3.5&1.7&8\\0.05&10&17&0.1&8&14\\3&3.5&1.7&10&17&8\\17&8&0.05&10&0.1&14\end{bmatrix}$, $c_l=(1, 1.2, 3, 3.2)$, and $P_{lk}=\begin{bmatrix}0.1312&0.1696&0.5569&0.0124&0.8283&0.5886\\0.2329&0.4135&0.8307&0.3736&0.1004&0.9991\\0.2348&0.1451&0.3522&0.2883&0.3047&0.6650\\0.4047&0.8828&0.8732&0.5743&0.1091&0.0381\end{bmatrix}$ | 6 | [0, 1] | −3.22
$CF_{21}(x)=-\sum_{l=1}^{5}\left[\left(X-a_l\right)\left(X-a_l\right)^T+c_l\right]^{-1}$ | 4 | [0, 10] | −10.1532
$CF_{22}(x)=-\sum_{l=1}^{7}\left[\left(X-a_l\right)\left(X-a_l\right)^T+c_l\right]^{-1}$ | 4 | [0, 10] | −10.4029
$CF_{23}(x)=-\sum_{l=1}^{10}\left[\left(X-a_l\right)\left(X-a_l\right)^T+c_l\right]^{-1}$ | 4 | [0, 10] | −10.5364
where the rows $a_l$ of the 10 × 4 Shekel matrix are $(4,4,4,4)$, $(1,1,1,1)$, $(8,8,8,8)$, $(6,6,6,6)$, $(3,7,3,7)$, $(2,9,2,9)$, $(5,5,3,3)$, $(8,1,8,1)$, $(6,2,6,2)$, $(7,3.6,7,3.6)$, and $c_l=(0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5)$.
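The reconstructed Shekel functions can likewise be spot-checked in MATLAB; in this sketch the names a, c, and cf21 are ours, and the point (4, 4, 4, 4) is the commonly cited minimizer:
% CF21 (Shekel, five terms) as reconstructed in Table 4.
a = [4 4 4 4; 1 1 1 1; 8 8 8 8; 6 6 6 6; 3 7 3 7];
c = [0.1 0.2 0.2 0.4 0.4];
cf21 = @(x) -sum(1 ./ (sum((repmat(x,5,1) - a).^2, 2)' + c));
cf21([4 4 4 4])   % approx. -10.1532, matching F_min in Table 4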
Table 5. The RSB functions.
Cost Function | Dimension | Range | $F_{min}$
$CF_{24}(x)=\sin\left(x_1\right)e^{\left(1-\cos x_2\right)^2}+\cos\left(x_2\right)e^{\left(1-\sin x_1\right)^2}+\left(x_1-x_2\right)^2$ | 2 | [−2π, 2π] | −106.764537
$CF_{25}(x)=0.5+\frac{\sin^2\left(x_1^2+x_2^2\right)}{\left[1+1\times10^{-3}\left(x_1^2+x_2^2\right)\right]^2}$ | 2 | [−100, 100] | 0.5
$CF_{26}(x)=\sum_{i=1}^{k-1}\left[0.5+\frac{\sin^2\left(x_i^2+x_{i+1}^2\right)-0.5}{\left[1+1\times10^{-3}\left(x_i^2+x_{i+1}^2\right)\right]^2}\right]$ | 20 | [−100, 100] | 0
$CF_{27}(x)=-8\left|\sin\left(x_1\right)\cos\left(x_2\right)e^{\left|\cos\left(x_1^2+x_2^2\right)\right|/200}\right|$ | 2 | [−10, 10] | −8.03985
$CF_{28}(x)=1\times10^{-4}\left[\left|\sin\left(x_1\right)\sin\left(x_2\right)e^{\left|100-\sqrt{x_1^2+x_2^2}/\pi\right|}\right|+1\right]^{0.1}$ | 2 | [−10, 10] | 1.00 × 10−4
$CF_{29}(x)=\left[1\times10^{-17}\left|\sin\left(x_1\right)\sin\left(x_2\right)e^{\left|100-\sqrt{x_1^2+x_2^2}/\pi\right|}\right|\right]^{0.4}$ | 40 | [−10, 10] | 0
$CF_{30}(x)=\sum_{i=1}^{k-1}\exp\left(-\left|\cos\left(x_i\right)\cos\left(x_{i+1}\right)e^{\left|1-\sqrt{x_i^2+x_{i+1}^2}/\pi\right|}\right|^{-1}\right)$ | 50 | [−11, 11] | 0
$CF_{31}(x)=-\left(x_2+47\right)\sin\left(\sqrt{\left|x_2+\tfrac{x_1}{2}+47\right|}\right)-x_1\sin\left(\sqrt{\left|x_1-\left(x_2+47\right)\right|}\right)$ | 40 | [−512, 512] | −955.6087
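CF24 is the Bird function, and the minimum reported in Table 5 can be verified directly; in this sketch the handle name cf24 is ours, and the point tried is one of the function's two commonly cited minimizers, not a value taken from this paper:
% CF24 (Bird function) as reconstructed in Table 5.
cf24 = @(x) sin(x(1))*exp((1-cos(x(2)))^2) + cos(x(2))*exp((1-sin(x(1)))^2) + (x(1)-x(2))^2;
cf24([4.70104, 3.15294])   % approx. -106.7645, matching F_min in Table 5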
Table 6. Assessment results of UB functions.
Dimension = 30 | Dimension = 100
GBFIO | GWO | TLBO | DE | PSO | GBFIO | GWO | TLBO | DE | PSO
C F 1 Mean06.7809 × 10−2171.3516 × 10−496.0160 × 10−180.3571003.5371 × 10−992.384 × 10−30.83159701.973
SD005.6372 × 10−492.6217 × 10−180.1764703.6119 × 10−990.01042.5849135.75
Best06.77 × 10−2228.52 × 10−751.77 × 10−180.1105209.02544 × 10−1012.51064 × 10−556.86496 × 10−5458.821
Rank-11234512345
Rank-21234512345
C F 2 Mean1.1907 × 10−3161.1846 × 10−1071.0374 × 10−72.9275 × 10−111.29994.57384 × 10−3048.57316 × 10−552.34113 × 10−80.0024233.54858
SD02.1357 × 10−1074.4653 × 10−76.7068 × 10−120.689401.1311 × 10−548.0351 × 10−84.0872 × 10−44.5074
Best4.9766 × 10−3192.3700 × 10−11043.2400 × 10−351.2200 × 10−110.49342.26229 × 10−3051.37055 × 10−557.09498 × 10−340.0018825.94422
Rank-11234512345
Rank-21234512345
C F 3 Mean01.5762 × 10−280.0048276680.2736049.00446250.7293,331.113,741.9
SD06.6136 × 10−280.01481.0682 × 10+342.7337096.89529837.426,3243896.1
Best03.1200 × 10−354.4300 × 10−12117029.941402.0327612.08248,065.98082.3
Rank-11235412354
Rank-21235412354
C F 4 Mean5.4237 × 10−2971.1256 × 10−303.93352.43194.50261.84971 × 10−2890.0465510.4449429.26018.678
SD03.0835 × 10−3015.74551.67451.791400.064522.82535.27282.529
Best7.6100 × 10−2992.1400 × 10−332.3900 × 10−200.01131.43928.62841 × 10−2930.00121.82061 × 10−1021.31213.140
Rank-11234512354
Rank-21234513254
C F 5 Mean23.9525.3926.56556.00104.548795.19595.98116210.6819.2837,611.21
SD0.21560.51660.936629.805990.80580.64370.8265693511133.616,267
Best23.524.225.52229.101794.65594.74197.74243193.3814,952
Rank-11234512435
Rank-22341512345
C F 6 Mean6.8518 × 10−143.6905 × 10−62.22105.1145 × 10−190.02501.482472.9763117.58780.19797875.23681
SD1.3656 × 10−131.0026 × 10−60.37362.1176 × 10−190.01610.56050.89610.92100.523816.0249
Best8.8800 × 10−162.2100 × 10−61.34001.7900 × 10−190.00280.409821.7440115.499784.10507 × 10−645.91405
Rank-12351423415
Rank-22351423415
C F 7 Mean3.2825 × 10−57.3435 × 10−40.00370.00860.01833.23462 × 10−57.33905 × 10−40.002030.009530.016165
SD1.6191 × 10−52.8128 × 10−40.00670.00140.00679.7962 × 10−63.2896 × 10−40.001050.00250.0079
Best1.1800 × 10−53.4100 × 10−43.9900 × 10−40.00630.00431.424679 × 10−52.95192 × 10−40.000490.004640.00688
Rank-11235412345
Rank-21235412345
Table 7. Assessment results of HDMM functions.
Dimension = 30 | Dimension = 100
GBFIO | GWO | TLBO | DE | PSO | GBFIO | GWO | TLBO | DE | PSO
C F 8 Mean−9523.5−6978.5−4889.5−8886.4−6224−26,531−19,559−8781−12,239.7−18,962.5
SD17541012.2458.0441480.3838.16045.34135966.5640.72194
Best−11,400−8620−5770−11,800−7889.6−33,375.9−22,837.4−12,274−13,645.3−21,910.3
Rank-11352412543
Rank-22351412543
C F 9 Mean09.2655158.01586.58529.3828012.641635.0103697.644258.951
SD06.468238.448213.03176.785108.46656323.5329.46930.445
Best0060.358.118.09500.997591.52908641.77199.166
Rank-11254312453
Rank-21143212354
C F 10 Mean4.4400 × 10−157.1025 × 10−158.91555.2165 × 10−103.02164.44089 × 10−152.32703 × 10−1412.85573.32459 × 10−27.9474
SD7.8886 × 10−311.5372 × 10−158.13851.7844 × 10−100.737504.06627 × 10−158.852458.03283 × 10−20.65451
Best4.4400 × 10−154.4400 × 10−154.4400 × 10−152.9400 × 10−101.51854.44089 × 10−151.50990 × 10−147.99361 × 10−159.87967 × 10−46.72938
Rank-11253412534
Rank-21112313245
C F 11 Mean00.00550.019700.290601.08646 × 10−31.84587 × 10−62.87534 × 10−38.41713
SD00.00920.079300.145603.26004 × 10−38.04594 × 10−66.18704 × 10−31.44086
Best00000.06170003.57892 × 10−56.13197
Rank-11321413245
Rank-21111211123
C F 12 Mean5.5608 × 10−160.00130.14605.8625 × 10−191.43256.52623 × 10−35.37489 × 10−22.02809 × 10+71.5101412.39863
SD5.4263 × 10−160.00270.05594.7541 × 10−191.14142.77643 × 10−31.58948 × 10−28.83913 × 10+75.231484.24249
Best4.9600 × 10−171.1400 × 10−70.07851.1700 × 10−190.34642.19108 × 10−33.30271 × 10−25.10567 × 10−12.53465 × 10−56.641798
Rank-12341512534
Rank-22341523415
C F 13 Mean3.3100 × 10−50.02521.51500.05303.95082.982392.9385212.63706396.468240.727
SD1.4428 × 10−40.04810.25050.22652.90040.51320.575587.564171228.85131.559
Best9.9300 × 10−152.8800 × 10−61.07002.4600 × 10−180.62602.07081.878169.775150.315757156.342
Rank-11243521354
Rank-22351432415
Table 8. Assessment results of FDMM functions.
GBFIO | GWO | TLBO | DE | PSO
C F 14 Mean0.9980038380.9980038380.998004670.9980038381.14710789
SD06.1799 × 10−122.7057 × 10−600.354938656
Best0.9980038380.9980038380.9980038380.9980038380.998003838
Rank-112314
Rank-211111
C F 15 Mean3.07 × 10−40.00335.7905 × 10−45.8005 × 10−44.4484 × 10−4
SD00.00722.7999 × 10−41.3945 × 10−43.2697 × 10−4
Best3.07 × 10−43.07 × 10−43.09 × 10−43.07 × 10−43.0749 × 10−4
Rank-115342
Rank-211312
C F 16 Mean−1.0316−1.0316−1.0316−1.0316−1.0316
SD2.1642 × 10−167.5654 × 10−107.1017 × 10−52.2204 × 10−162.1642 × 10−16
Best−1.0316−1.0316−1.0316−1.0316−1.0316
Rank-113421
Rank-211111
C F 17 Mean0. 397887360.397887380.397887380.397887360.39788736
SD02.6038 × 10−84.3017 × 10−800
Best0.397887360.397887360.397887360.397887360.39788736
Rank-112311
Rank-211111
C F 18 Mean2.999999999999923.000000058128663.000170614417582.999999999999922.99999999999992
SD09.8523 × 10−84.4211 × 10−49.0468 × 10−162.2204 × 10−16
Best2.999999999999923.000000000254392.999999999999952.999999999999922.99999999999992
Rank-114532
Rank-213211
C F 19 Mean−3.862782−3.862779−3.862460−3.862782−3.862782
SD2.2204 × 10−156.0993 × 10−60.00122.2204 × 10−152.2004 × 10−15
Best−3.862782−3.862782−3.862782−3.862782−3.862782
Rank-123421
Rank-211111
C F 20 Mean−3.207927−3.196545−3.168953−3.202468−3.204332
SD0.00870.0222830.0342730.0065740.008985
Best−3.222190−3.222189−3.207691−3.222190−3.222190
Rank-114532
Rank-212311
C F 21 Mean−10.153200−9.053145−8.791638−9.895080−8.272518
SD2.8087 × 10−152.22711.49551.10023.2574
Best−10.153200−10.153193−10.121451−10.153200−10.153200
Rank-113425
Rank-212311
C F 22 Mean−10.402915−10.139136−9.467246−10.402915−10.020248
SD1.7764 × 10−151.14951.22491.7764 × 10−151.6680
Best−10.402915−10.402912−10.400771−10.402915−10.402915
Rank-112413
Rank-212311
C F 23 Mean−10.536443−10.536363−9.500209−9.945075−10.536443
SD1.1916 × 10−157.7539 × 10−51.36651.78271.1235 × 10−15
Best−10.536443−10.536440−10.535909−10.536443−10.536443
Rank-123541
Rank-212311
Table 9. Assessment results of RSB functions.
GBFIO | GWO | TLBO | DE | PSO
C F 24 Mean−106.764537−106.764532−106.75358−106.764537−106.764537
SD1.5723 × 10−97.3496 × 10−60.0327292.7335 × 10−142.9296 × 10−14
Best−106.764537−106.764537−106.764537−106.764537−106.764537
Rank-134512
Rank-211111
C F 25 Mean0.50000.50000.50000.50000.5000
SD1.6784 × 10−141.0430 × 10−112.5923 × 10−1200
Best0.50000.50000.50000.50000.5000
Rank-124311
Rank-211111
C F 26 Mean3.9937394.2427616.7571905.4190604.747401
SD0.8501131.134083724941180.5581520.2704710.761412
Best1.8054342.406375.6980894.9330103.215879
Rank-112543
Rank-212543
C F 27 Mean−8.03985−8.03922−8.03937−8.03983−8.03898
SD8.76466 × 10−63.89416 × 10−44.94818 × 10−48.60975 × 10−51.40674 × 10−3
Best−8.03985−8.03985−8.03985−8.03985−8.03945
Rank-114325
Rank-211111
C F 28 Mean0.034640.377380.314500.000370.00055
SD0.050450.091280.069640.000470.00055
Best1.00 × 10−40.2344370.1735171.00 × 10−41.00 × 10−4
Rank-135412
Rank-213211
C F 29 Mean0.017954.145197.852204.502034.23656
SD0.010914.216390.314220.429070.80989
Best0.00510.706686.784263.558292.97515
Rank-112543
Rank-212543
C F 30 Mean0.1176870.4073701.0561830.0065660.386806
SD0.0864180.1601981.1868940.0065260.184927
Best6.492597 × 10−50.1650070.2975621.677507 × 10−50.110005
Rank-124513
Rank-224513
C F 31 Mean−955.608723−953.013547−955.089361−955.089692−884.819663
SD1.68625 × 10−134.494982.262512.26240283.64259
Best−955.608723−955.608723−955.608723−955.608723−955.608723
Rank-114325
Rank-211111
Table 10. p-values acquired from the Wilcoxon sum rank test.
Function (Dimension = 100) | CF1 | CF2 | CF3 | CF4 | CF5 | CF6 | CF7 | CF8
GBFIO vs. GWO | 8.0065 × 10−9 | 6.7860 × 10−8 | 7.9919 × 10−9 | 6.7956 × 10−8 | 9.0289 × 10−4 | 2.3531 × 10−6 | 6.7860 × 10−8 | 9.7106 × 10−6
GBFIO vs. TLBO | 8.0065 × 10−9 | 6.7860 × 10−8 | 8.0065 × 10−9 | 6.7956 × 10−8 | 6.1179 × 10−8 | 6.7765 × 10−8 | 6.7860 × 10−8 | 9.1222 × 10−8
GBFIO vs. DE | 7.9919 × 10−9 | 6.7860 × 10−8 | 7.9626 × 10−9 | 6.7860 × 10−8 | 6.4949 × 10−8 | 1.8030 × 10−6 | 6.7860 × 10−8 | 1.5750 × 10−5
GBFIO vs. PSO | 8.0065 × 10−9 | 6.7860 × 10−8 | 8.0065 × 10−9 | 6.7956 × 10−8 | 6.4949 × 10−8 | 6.7956 × 10−8 | 6.7860 × 10−8 | 1.5983 × 10−5
Function (Dimension = 100) | CF9 | CF10 | CF11 | CF12 | CF13 | CF14 | CF15 | CF16
GBFIO vs. GWO | 7.9480 × 10−9 | 3.8352 × 10−9 | 0.1626 | 6.7765 × 10−8 | 0.8711 | N/A | 0.0804 | N/A
GBFIO vs. TLBO | 8.0065 × 10−9 | 7.6327 × 10−9 | 0.1626 | 6.7860 × 10−8 | 6.7288 × 10−8 | N/A | 7.9919 × 10−9 | N/A
GBFIO vs. DE | 7.9919 × 10−9 | 7.9626 × 10−9 | 8.0065 × 10−9 | 2.9223 × 10−5 | 0.0017 | N/A | 2.9868 × 10−8 | N/A
GBFIO vs. PSO | 8.0065 × 10−9 | 8.0065 × 10−9 | 8.0065 × 10−9 | 6.7956 × 10−8 | 6.7956 × 10−8 | 1.5427 × 10−9 | 1.5427 × 10−9 | 4.6827 × 10−10
Function | CF17 | CF18 | CF19 | CF20 | CF21 | CF22 | CF23 | CF24
GBFIO vs. GWO | N/A | N/A | N/A | 0.0393 | 0.0400 | 0.3421 | N/A | N/A
GBFIO vs. TLBO | N/A | N/A | N/A | 6.9726 × 10−5 | 7.9480 × 10−9 | 1.0381 × 10−7 | 1.0968 × 10−6 | N/A
GBFIO vs. DE | N/A | N/A | N/A | 0.0236 | 0.1626 | N/A | 0.1626 | N/A
GBFIO vs. PSO | 4.6827 × 10−10 | N/A | 4.6827 × 10−10 | 0.3481 | 2.5780 × 10−9 | 3.0335 × 10−8 | 4.6827 × 10−10 | 4.6827 × 10−10
Function | CF25 | CF26 | CF27 | CF28 | CF29 | CF30 | CF31
GBFIO vs. GWO | N/A | 0.3792 | N/A | 5.8435 × 10−8 | 6.7860 × 10−8 | 4.2490 × 10−6 | 0.0195
GBFIO vs. TLBO | N/A | 6.7765 × 10−8 | N/A | 5.8519 × 10−8 | 6.7765 × 10−8 | 1.0602 × 10−7 | 0.3421
GBFIO vs. DE | N/A | 4.5110 × 10−7 | N/A | 5.5870 × 10−4 | 6.7669 × 10−8 | 9.1601 × 10−8 | 0.3421
GBFIO vs. PSO | N/A | 0.0133 | 3.1997 × 10−9 | 0.0114 | 6.7956 × 10−8 | 1.5961 × 10−4 | 2.1875 × 10−8
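Each entry in Table 10 compares two samples of per-run results between GBFIO and a competitor. In MATLAB, such a p-value can be obtained with ranksum from the Statistics and Machine Learning Toolbox; the vectors below are placeholders for illustration, not the paper's raw data:
% Placeholder result vectors for one benchmark function (illustrative only).
res_gbfio = randn(20,1);       % 20 independent runs of GBFIO
res_other = randn(20,1) + 1;   % 20 independent runs of a competitor
p = ranksum(res_gbfio, res_other)   % two-sided Wilcoxon rank sum p-value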
Table 11. Results of the GBFIO algorithm sensitivity analysis with respect to population sizes.
Number of Population: 20 | 30 | 40 | 50 | 60
C F 1 Mean00000
SD00000
Best00000
C F 2 Mean9.488376 × 10−3088.961751 × 10−3124.425393 × 10−3122.964866 × 10−3124.131339 × 10−314
SD00000
Best4.970226 × 10−3131.223130 × 10−3132.260542 × 10−3145.041268 × 10−3152.445212 × 10−316
C F 3 Mean00000
SD00000
Best00000
C F 4 Mean3.155977 × 10−2907.658761 × 10−2912.641380 × 10−2931.301568 × 10−2931.090979 × 10−293
SD00000
Best5.607299 × 10−2964.390067 × 10−2954.779610 × 10−2961.083217 × 10−2965.658902 × 10−297
C F 5 Mean24.84730124.59257124.50481524.35924624.409804
SD0.4053460.3047910.2978450.3285750.286390
Best24.27747623.94038024.04542423.90913523.919656
C F 6 Mean2.174950 × 10−91.427397 × 10−104.364226 × 10−111.444414 × 10−118.657999 × 10−12
SD2.416979 × 10−91.508869 × 10−103.702102 × 10−111.369896 × 10−118.307039 × 10−12
Best6.53199 × 10−116.958819 × 10−126.127176 × 10−129.838662 × 10−138.529919 × 10−13
C F 7 Mean7.968518 × 10−57.081454 × 10−55.588143 × 10−56.343727 × 10−54.857298 × 10−5
SD4.791609 × 10−53.430162 × 10−52.361767 × 10−52.313969 × 10−52.589811 × 10−5
Best1.383589 × 10−51.308217 × 10−52.123137 × 10−52.875731 × 10−51.143129 × 10−5
C F 8 Mean−7650.140234−8264.838722−8280.885116−8209.417095−8964.229479
SD2173.5697131805.6210091952.3019282257.5073542127.146066
Best−10,376.851123−10,428.494237−11,143.987174−11,067.745203−11,403.276763
C F 9 Mean8.7545986.7656425.66247100
SD23.20294017.05040117.41937500
Best00000
C F 10 Mean4.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−15
SD00000
Best4.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−15
C F 11 Mean00000
SD00000
Best00000
C F 12 Mean6.271662 × 10−118.523194 × 10−122.468391 × 10−126.364377 × 10−131.200745 × 10−13
SD7.522505 × 10−111.295646 × 10−115.984179 × 10−128.595888 × 10−131.360803 × 10−13
Best8.262464 × 10−132.042665 × 10−136.735877 × 10−143.604206 × 10−145.514638 × 10−15
C F 13 Mean0.1970810.1120270.1065910.0779950.038914
SD0.1322520.1115670.0989500.0756630.065934
Best6.583484 × 10−102.716825 × 10−102.515858 × 10−111.976762 × 10−111.399126 × 10−12
C F 14 Mean0.9980038380.9980038380.9980038380.9980038380.998003838
SD9.930137 × 10−1707.021667 × 10−1700
Best0.9980038380.9980038380.9980038380.9980038380.998003838
C F 15 Mean3.461836 × 10−43.152664 × 10−43.113838 × 10−43.168474 × 10−43.084805 × 10−4
SD7.297051 × 10−51.975967 × 10−51.318464 × 10−53.184629 × 10−53.802653 × 10−6
Best3.074859 × 10−43.074859 × 10−43.074859 × 10−43.074859 × 10−43.074859 × 10−4
C F 16 Mean−1.031628−1.031628−1.031628−1.031628−1.031628
SD3.729863 × 10−91.876679 × 10−92.220446 × 10−162.220446 × 10−162.220446 × 10−16
Best−1.031628−1.031628−1.031628−1.031628−1.031628
C F 17 Mean0.397887360.397887360.397887360.397887360.39788736
SD00000
Best0.397887360.397887360.397887360.397887360.39788736
C F 18 Mean2.9999999999999222.9999999999999222.9999999999999222.9999999999999222.99999999999992
SD8.992121 × 10−164.965068 × 10−165.063396 × 10−164.550560 × 10−163.140185 × 10−16
Best2.9999999999999212.9999999999999222.9999999999999212.9999999999999212.999999999999921
C F 19 Mean−3.862782−3.862782−3.862782−3.862782−3.862782
SD2.220446 × 10−152.220446 × 10−152.220446 × 10−152.220446 × 10−152.220446 × 10−15
Best−3.862782−3.862782−3.862782−3.862782−3.862782
C F 20 Mean−3.204730−3.203979−3.207953−3.204986−3.204392
SD0.0188670.0184510.0104470.0087710.018489
Best−3.222190−3.222190−3.222190−3.222190−3.222190
C F 21 Mean−9.286335−9.583326−9.894665−10.132708−10.141487
SD1.7892321.5187921.1102770.0306050.035946
Best−10.1531996790582−10.1531996790582−10.1531996790582−10.1531996790582−10.1531996790582
C F 22 Mean−9.809420−10.381395−10.384025−10.401542−10.402913
SD1.5926990.0571630.0510330.0059851.152257 × 10−5
Best−10.402915−10.402915−10.402915−10.402915−10.402915
C F 23 Mean−9.474190−10.207301−10.264706−10.529081−10.536443
SD2.1802211.4082491.1787940.0306078.881784 × 10−16
Best−10.536443−10.536443−10.536443−10.536443−10.536443
C F 24 Mean−106.764537−106.764537−106.764537−106.764537−106.764537
SD2.313359 × 10−142.377931 × 10−143.759839 × 10−142.561898 × 10−143.552714 × 10−14
Best−106.764537−106.764537−106.764537−106.764537−106.764537
C F 25 Mean0.5000000000003110.5000000000002090.5000000000001820.5000000000000430.500000000000040
SD3.939481 × 10−133.403677 × 10−134.809765 × 10−136.064783 × 10−144.258445 × 10−14
Best0.5000000000000010.5000000000000000.5000000000000000.5000000000000000.500000000000000
C F 26 Mean4.7090925.1055724.8971504.6625654.565587
SD0.7000810.5454400.5471820.7125270.738187
Best3.1500803.9015353.8445433.3372493.091730
C F 27 Mean−8.039597−8.039755−8.039829−8.039806−8.039810
SD0.0002270.0001584.728803 × 10−59.869128 × 10−57.925843 × 10−5
Best−8.03985−8.03985−8.03985−8.03985−8.03985
C F 28 Mean0.1213970.0534920.0534530.0525450.055992
SD0.0864110.0636820.0642240.0709960.056458
Best0.00010.00010.00010.00010.0001
C F 29 Mean0.1658590.1249240.0653250.0720150.041180
SD0.1980430.1565240.0401860.1175560.022127
Best0.0186430.0127140.0131050.0058660.005531
C F 30 Mean0.3009390.2710660.1821620.2312110.196458
SD0.1190960.1084160.0927690.1116510.093854
Best0.1098660.1150390.0623850.0070160.004542
C F 31 Mean−955.608723−955.608723−955.608723−955.608723−955.608723
SD1.368971 × 10−131.219156 × 10−131.296229 × 10−131.368971 × 10−131.296229 × 10−13
Best−955.608723−955.608723−955.608723−955.608723−955.608723
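The population-size sensitivity in Table 11 amounts to re-running the Appendix A script with N swept over {20, 30, 40, 50, 60}; a minimal sketch, assuming the script is wrapped as a function gbfio(N, Iter_max) returning the best cost (that wrapper is ours, not part of the paper):
% Sweep the population size as in Table 11 (gbfio is a hypothetical wrapper).
for N = [20 30 40 50 60]
    best_cost = gbfio(N, 500);   % 500 iterations, as in the appendix
    fprintf('N = %d: best cost = %g\n', N, best_cost);
end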
Table 12. Results of the GBFIO algorithm sensitivity analysis with respect to the maximum number of iterations.
Number of Maximum Iterations: 50 | 100 | 200 | 500 | 1000
C F 1 Mean7.236219 × 10−604.888781 × 10−1222.723329 × 10−24500
SD1.329818 × 10−595.664636 × 10−122000
Best8.282774 × 10−621.502527 × 10−1233.714608 × 10−24800
C F 2 Mean8.077162 × 10−313.011036 × 10−622.954380 × 10−1251.095445 × 10−3140
SD5.150460 × 10−313.159078 × 10−626.131570 × 10−12500
Best1.799495 × 10−314.422890 × 10−631.000961 × 10−1265.244914 × 10−3170
C F 3 Mean2.663880 × 10−567.335968 × 10−1152.753337 × 10−23000
SD3.661737 × 10−561.226888 × 10−114000
Best1.716129 × 10−581.095871 × 10−1172.055096 × 10−23600
C F 4 Mean5.417876 × 10−299.849815 × 10−591.081251 × 10−1179.603995 × 10−2951.000000 × 10−323
SD6.091838 × 10−291.687004 × 10−589.333747 × 10−11800
Best3.866891 × 10−301.421012 × 10−616.562393 × 10−1191.265138 × 10−2971.000000 × 10−323
C F 5 Mean26.81301026.26941625.53754324.21936123.006166
SD0.2292450.1663450.22525780.2588550.393699
Best26.39613825.89883225.08146823.79048622.436206
C F 6 Mean0.1744250.0092922.603807 × 10−51.459834 × 10−128.525488 × 10−25
SD0.0957310.0044691.775197 × 10−51.347648 × 10−121.707910 × 10−24
Best0.0533480.0034148.222699 × 10−61.073916 × 10−135.523253 × 10−27
C F 7 Mean4.157443 × 10−42.363886 × 10−41.135765 × 10−44.609243 × 10−52.655697 × 10−5
SD2.696449 × 10−41.036415 × 10−44.512791 × 10−51.712571 × 10−51.169044 × 10−5
Best8.646435 × 10−52.857222 × 10−52.400965 × 10−52.204244 × 10−55.296813 × 10−6
C F 8 Mean−5522.811083−6649.882116−7979.289336−8349.210891−9603.189247
SD1618.1432812108.5372712282.8005752202.5442871486.878032
Best−10,333.978992−10,281.485338−11,297.097530−11,403.322725−11,621.979941
C F 9 Mean12.2760214.3775824.23681200
SD36.82914619.08143718.46783600
Best00000
C F 10 Mean5.684342 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−15
SD1.694536 × 10−150000
Best4.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−15
C F 11 Mean00000
SD00000
Best00000
C F 12 Mean0.0061452.496981 × 10−46.758787 × 10−74.177336 × 10−141.819623 × 10−26
SD0.0030861.323707 × 10−43.414239 × 10−79.785659 × 10−144.362844 × 10−26
Best0.0016928.998801 × 10−52.097865 × 10−71.334239 × 10−152.218079 × 10−28
C F 13 Mean0.2372840.0931050.0275420.0348770.011916
SD0.0859850.1012940.0455620.0635060.040562
Best0.0850860.0090212.653115 × 10−53.509161 × 10−132.273020 × 10−25
C F 14 Mean0.9980038380.9980038380.9980038380.9980038380.998003838
SD2.106500 × 10−161.216188 × 10−16000
Best0.9980038380.9980038380.9980038380.9980038380.998003838
C F 15 Mean3.749509 × 10−43.111576 × 10−43.076168 × 10−43.075054 × 10−43.074910 × 10−4
SD2.001892 × 10−41.428813 × 10−54.882946 × 10−75.363649 × 10−82.195724 × 10−8
Best3.074867 × 10−43.074861 × 10−43.074860 × 10−43.074860 × 10−43.074860 × 10−4
C F 16 Mean−1.031628−1.031628−1.031628−1.031628−1.031628
SD1.140715 × 10−71.453566 × 10−97.645066 × 10−112.220446 × 10−162.220446 × 10−16
Best−1.031628−1.031628−1.031628−1.031628−1.031628
C F 17 Mean0.397887360.397887360.397887360.397887360.39788736
SD00000
Best0.397887360.397887360.397887360.397887360.39788736
C F 18 Mean2.9999999999999302.9999999999999232.9999999999999222.9999999999999222.999999999999922
SD5.504736 × 10−151.731378 × 10−158.770059 × 10−164.094300 × 10−165.063396 × 10−16
Best2.9999999999999232.9999999999999222.9999999999999212.9999999999999212.999999999999920
C F 19 Mean−3.862782−3.862782−3.862782−3.862782−3.862782
SD2.180112 × 10−151.756821 × 10−152.220446 × 10−152.220446 × 10−152.220446 × 10−15
Best−3.862782−3.862782−3.862782−3.862782−3.862782
C F 20 Mean−3.202283−3.203573−3.205763−3.205044−3.203534
SD6.689631 × 10−37.820802 × 10−39.484115 × 10−38.443659 × 10−37.729254 × 10−3
Best−3.222190−3.222190−3.222190−3.222190−3.222190
C F 21 Mean−8.777263−9.876641−10.093695−10.147108−10.152762
SD1.8007730.6548180.2325262.524644 × 10−21.909355 × 10−3
Best−10.1531996790505−10.1531996790582−10.1531996790582−10.1531996790582−10.1531996790582
C F 22 Mean−10.015114−10.054852−10.374473−10.402915−10.402915
SD1.0252741.1764980.1239771.863059 × 10−151.685200 × 10−15
Best−10.402915−10.402915−10.402915−10.402915−10.402915
C F 23 Mean−9.776116−10.162331−10.536443−10.536443−10.53644
SD1.8387041.2079343.445696 × 10−101.432145 × 10−151.191616 × 10−15
Best−10.536443−10.536443−10.536443−10.536443−10.536443
C F 24 Mean−106.764536−106.764536−106.764537−106.764537−106.764537
SD2.747015 × 10−62.350338 × 10−61.903775 × 10−73.177644 × 10−142.929643 × 10−14
Best−106.764537−106.764537−106.764537−106.764537−106.764537
C F 25 Mean0.5000000000041320.5000000000007300.5000000000001370.5000000000000840.500000000000009
SD6.984784 × 10−129.168767 × 10−132.589352 × 10−131.275293 × 10−131.343753 × 10−14
Best0.5000000000000030.5000000000000020.5000000000000010.5000000000000000.500000000000000
C F 26 Mean5.9105135.4697694.9741044.6420323.924982
SD0.6026210.6012450.5550170.4460790.536122
Best4.7085754.2327503.7710433.6802682.808316
C F 27 Mean−8.039649−8.039721−8.039795−8.039843−8.039849
SD1.857984 × 10−41.600838 × 10−46.850032 × 10−51.438728 × 10−51.628580 × 10−6
Best−8.039846−8.039850−8.039850−8.039850−8.039850
C F 28 Mean0.1143510.0617620.0667100.0363620.0474869
SD0.0965350.0846520.0762520.0506050.048523
Best0.00010.00010.00010.00010.0001
C F 29 Mean0.1000300.0822770.0562720.0241950.021766
SD0.0566370.0414470.0477740.018180.009736
Best0.0334100.01443340.0102676.687262 × 10−36.000812 × 10−3
C F 30 Mean0.4468570.3749400.2660770.1175280.132555
SD0.1542620.1685320.1168470.0412540.077086
Best0.1238450.0726050.0798890.0575794.514708 × 10−4
C F 31 Mean−955.608723−955.608723−955.608723−955.608723−955.608723
SD6.283705 × 10−131.219156 × 10−131.219156 × 10−131.368971 × 10−131.850688 × 10−13
Best−955.608723−955.608723−955.608723−955.608723−955.608723
Table 13. Comparison of the GBFIO algorithm with other optimization algorithms.
GBFIO | GWO | TLBO | DE | PSO
C F 1 Mean001.294179 × 10−810.1446744.966553 × 10−2
SD003.882538 × 10−810.2342500.057479
Best009.879461 × 10−1741.595589 × 10−368.364170 × 10−3
C F 2 Mean1.218656 × 10−28701.0513195.177139 × 10−440.128009
SD003.1539561.553142 × 10−435.629631 × 10−2
Best1.118360 × 10−29007.386626 × 10−903.227326 × 10−630.025325
C F 3 Mean03.192145 × 10−1059.184483 × 10−5166.61124715.313370
SD09.5764341 × 10−1052.755345 × 10−4205.24549210.381268
Best03.203688 × 10−1321.279402 × 10−1932.2392597.060690
C F 4 Mean6.351895 × 10−2673.568757 × 10−1211.337250 × 10−257.9763983.098161
SD08.394264 × 10−1214.009598 × 10−254.8412870.960820
Best9.529171 × 10−2726.329564 × 10−1272.708885 × 10−502.4366361.641832
C F 5 Mean24.65722625.00045825.47333182.62727967.935807
SD0.3444940.5287511.01483154.94620950.904897
Best24.00774924.14231424.71841827.05634528.198511
C F 6 Mean2.254566 × 10−97.538724 × 10−22.0052641.574209 × 10−34.139476 × 10−3
SD2.433088 × 10−90.1151570.2789414.594092 × 10−32.617503 × 10−3
Best9.271979 × 10−111.580530 × 10−71.5216746.430025 × 10−268.398675 × 10−4
C F 7 Mean7.987476 × 10−51.731456 × 10−41.152873 × 10−32.018705 × 10−37.780638 × 10−3
SD2.957382 × 10−55.302466 × 10−57.507599 × 10−48.608555 × 10−43.925921 × 10−3
Best2.590416 × 10−58.293191 × 10−52.602298 × 10−47.265199 × 10−43.881654 × 10−3
C F 8 Mean−7145.318974−7765.884441−5415.742450−12,533.951559−6618.143350
SD1888.294358423.738554817.16957875.838506715.725130
Best−9400.506153−8317.183028−7547.511021−12,569.486618−8027.302686
C F 9 Mean1.534772 × 10−134.998223147.65731611.83969628.485021
SD4.574788 × 10−136.10703945.8416919.2253384.849056
Best0062.5068480.99495919.965157
C F 10 Mean4.440892 × 10−155.151435 × 10−1513.5523290.1037962.803250
SD01.421085 × 10−157.6945480.1680570.420610
Best4.440892 × 10−154.440892 × 10−154.440892 × 10−154.440892 × 10−151.792227
C F 11 Mean01.786773 × 10−39.573472 × 10−31.105455 × 10−28.989487 × 10−2
SD03.611421 × 10−32.220695 × 10−32.513669 × 10−24.414168 × 10−2
Best00005.255478 × 10−2
C F 12 Mean5.497947 × 10−112.470617 × 10−89.135967 × 10−24.669100 × 10−21.327286
SD4.249546 × 10−111.011952 × 10−82.430115 × 10−28.734069 × 10−21.247796
Best7.307790 × 10−121.197376 × 10−86.475929 × 10−21.570545 × 10−320.111829
C F 13 Mean0.1884016.580893 × 10−21.3802412.3623450.494989
SD0.1532636.743503 × 10−20.2390083.1121450.598903
Best4.302022 × 10−43.078688 × 10−71.1359535.242489 × 10−109.204132 × 10−2
C F 14 Mean0.9980038377944500.9980038377947170.9980038472088951.491087966555021.29581667593930
SD7.021667 × 10−172.011807 × 10−132.623667 × 10−81.4792520.635439
Best0.9980038377944500.9980038377944830.9980038377944500.9980038377944500.998003837794450
C F 15 Mean4.62448857988186 × 10−43.99054908144924 × 10−44.058168 × 10−46.984581 × 10−44.906235 × 10−4
SD2.745643 × 10−42.747062 × 10−42.736655 × 10−42.716092 × 10−43.662750 × 10−4
Best3.07485988525647 × 10−43.07485989805093 × 10−43.07662563819038 × 10−43.10985749911466 × 10−43.07485987805605 × 10−4
C F 16 Mean−1.03162845348988−1.03162845340201−1.03162841182056−1.03162845348988−1.03162845348988
SD07.100377 × 10−119.916618 × 10−800
Best−1.03162845348988−1.03162845348443−1.03162845347704−1.03162845348988−1.03162845348988
C F 17 Mean0.3978873577297380.3978873586879810.3978873577299130.3978873577297380.397887357729738
SD08.737636 × 10−104.454266 × 10−1300
Best0.3978873577297380.3978873577298410.3978873577297380.3978873577297380.397887357729738
C F 18 Mean2.999999999999923.000000015356713.000209125715692.999999999999922.99999999999992
SD5.063396 × 10−162.426698 × 10−85.874210 × 10−48.188600 × 10−160
Best2.999999999999923.000000000001572.999999999999922.999999999999922.99999999999992
C F 19 Mean−3.86278214782076−3.86278209251003−3.86278214065809−3.86278214782076−3.86278214782076
SD8.881784 × 10−164.353004 × 10−81.224830 × 10−88.881784 × 10−168.881784 × 10−16
Best−3.86278214782076−3.86278214667246−3.86278214782076−3.86278214782076−3.86278214782076
C F 20 Mean−3.191775−3.202244−3.198292−3.200285−3.189303
SD2.92343 × 10−22.459642 × 10−28.855541 × 10−37.268656 × 10−62.755499 × 10−2
Best−3.22219007647393−3.22219006463194−3.21360383726563−3.20028745044100−3.22219007647393
C F 21 Mean−9.601349−9.142709−9.031591−9.142714−8.390441
SD1.5160762.0209701.1751952.0209712.767170
Best−10.1531996790582−10.1531994004619−10.1312343981895−10.1531996790582−10.1531996790582
C F 22 Mean−10.242894−9.339864−9.674596−9.871391−8.872245
SD0.1997212.1260960.7294541.5945733.061341
Best−10.4029153367777−10.4029149121881−10.3983960082644−10.4029153367777−10.4029153367777
C F 23 Mean−10.5364431534835−10.5364381295375−9.38574003225806−10.5364431534835−10.5364431534835
SD7.944109 × 10−164.450583 × 10−61.96777601.375960 × 10−15
Best−10.5364431534835−10.5364429443837−10.5345358768162−10.5364431534835−10.5364431534835
C F 24 Mean−106.764536749265−106.764536670269−106.764433302178−106.764536749265−106.764536749265
SD2.107810 × 10−141.062099 × 10−72.393181 × 10−41.421085 × 10−142.733512 × 10−14
Best−106.764536749265−106.764536749056−106.764536747555−106.764536749265−106.764536749265
C F 25 Mean0.5000000000001530.5000000000007790.5000000000003140.5000000000000000.500000000000000
SD2.008861 × 10−131.627484 × 10−128.786089 × 10−1300
Best0.5000000000000050.5000000000000010.5000000000000000.5000000000000000.500000000000000
C F 26 Mean4.6498712.7012626.5226913.6488674.588877
SD0.8003091.1514741.0314410.2685960.723199
Best2.7878160.9316534.4426813.0386573.453176
C F 27 Mean−8.03977154555592−8.03934047668692−8.03955534121555−8.03983806190992−8.03919251751744
SD1.185843 × 10−41.742601 × 10−42.361555 × 10−42.857435 × 10−56.104576 × 10−4
Best−8.039850−8.039829−8.039836−8.039849−8.039850
C F 28 Mean6.097802 × 10−20.2845770.1907077.502779 × 10−42.107188 × 10−4
SD8.192341 × 10−20.1139068.350326 × 10−25.310118 × 10−43.321565 × 10−4
Best0.00010.0150020.0447820.00010.0001
C F 29 Mean0.2570363.3066007.7022671.1582013.942612
SD0.3339404.1877440.4356061.1875340.603973
Best1.453989 × 10−20.3014836.6477987.549659 × 10−122.797453
C F 30 Mean0.3361270.3632620.6032177.050466 × 10−80.245494
SD0.1374850.1828220.2178431.619793 × 10−70.133411
Best5.559156 × 10−25.500235 × 10−20.31313106.083741 × 10−33
C F 31 Mean−955.608723−950.418413−955.608721−954.570662−891.830018
SD3.595093 × 10−145.1903103.862493 × 10−63.11418391.055620
Best−955.608722698531−955.608722698531−955.608722698531−955.608722698531−955.608722698531
Rank Based on Mean of the Results: Sum Rank | 53 | 88 | 122 | 84 | 107
Mean Rank | 1.709677 | 2.838710 | 3.935484 | 2.709677 | 3.451613
Total Rank | 1 | 3 | 5 | 2 | 4
Rank Based on Best of the Results: Sum Rank | 53 | 97 | 124 | 81 | 100
Mean Rank | 1.709677 | 3.129032 | 4.000000 | 2.612903 | 3.225806
Total Rank | 1 | 3 | 5 | 2 | 4
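The mean ranks in Table 13 follow directly from the sum ranks over the 31 benchmark functions; for example, for GBFIO:
\[
\text{Mean Rank} = \frac{\text{Sum Rank}}{31}, \qquad \text{Mean Rank}_{\text{GBFIO}} = \frac{53}{31} \approx 1.7097 .
\]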
Table 14. Unconstrained CEC2017 benchmark test functions (S&R denotes shifted and rotated).
No. | Function Name | F* | Type
CEC01 | S&R Bent Cigar Function | 100 | Unimodal
CEC02 | S&R Sum of Different Power Function | 200 | Unimodal
CEC03 | S&R Zakharov Function | 300 | Unimodal
CEC04 | S&R Rosenbrock's Function | 400 | Multimodal
CEC05 | S&R Rastrigin's Function | 500 | Multimodal
CEC06 | S&R Expanded Scaffer's F6 Function | 600 | Multimodal
CEC07 | S&R Lunacek Bi_Rastrigin Function | 700 | Multimodal
CEC08 | S&R Non-Continuous Rastrigin's Function | 800 | Multimodal
CEC09 | S&R Levy Function | 900 | Multimodal
CEC10 | S&R Schwefel's Function | 1000 | Multimodal
CEC11 | CEC03, CEC04, and CEC05 | 1100 | Hybrid
CEC12 | CEC01, CEC10, and S&R High-Conditioned Elliptic Function | 1200 | Hybrid
CEC13 | CEC01, CEC04, and CEC07 | 1300 | Hybrid
CEC14 | CEC05, S&R High-Conditioned Elliptic, S&R Ackley, and S&R Expanded Scaffer's F7 Functions | 1400 | Hybrid
CEC15 | CEC01, CEC04, CEC05, and S&R HGBat Function | 1500 | Hybrid
CEC16 | CEC04, CEC06, CEC10, and S&R HGBat Function | 1600 | Hybrid
CEC17 | CEC05, CEC10, S&R Ackley, and S&R Expanded Griewank plus Rosenbrock Functions | 1700 | Hybrid
CEC18 | CEC05, S&R High-Conditioned Elliptic, S&R Ackley, S&R HGBat, and S&R Discus Functions | 1800 | Hybrid
CEC19 | CEC01, CEC05, CEC06, S&R Expanded Griewank plus Rosenbrock, and S&R Weierstrass Functions | 1900 | Hybrid
CEC20 | CEC05, S&R HappyCat, S&R Katsuura, S&R Ackley, S&R Schwefel, and S&R Expanded Scaffer's F7 Functions | 2000 | Hybrid
CEC21 | Rosenbrock's, High-Conditioned Elliptic, and Rastrigin's Functions | 2100 | Composition
CEC22 | Rastrigin's, Griewank, and Modified Schwefel Functions | 2200 | Composition
CEC23 | Rosenbrock's, Ackley, Modified Schwefel, and Rastrigin's Functions | 2300 | Composition
CEC24 | Ackley, High-Conditioned Elliptic, Griewank, and Rastrigin's Functions | 2400 | Composition
CEC25 | Rastrigin's, HappyCat, Ackley, Discus, and Rosenbrock's Functions | 2500 | Composition
CEC26 | Expanded Scaffer's F6, Modified Schwefel, Griewank, Rosenbrock's, and Rastrigin's Functions | 2600 | Composition
CEC27 | HappyCat, Rastrigin's, Modified Schwefel, Bent Cigar, High-Conditioned Elliptic, and Expanded Scaffer's F6 Functions | 2700 | Composition
CEC28 | Ackley, Griewank, Discus, Rosenbrock's, HappyCat, and Expanded Scaffer's F6 Functions | 2800 | Composition
CEC29 | Expanded Scaffer's F6, Ackley, Expanded Griewank plus Rosenbrock, Bent Cigar, two HGBat, two Rosenbrock's, two Rastrigin's, and two Schwefel's Functions | 2900 | Composition
CEC30 | Expanded Griewank plus Rosenbrock, Weierstrass, Expanded Scaffer's F6, High-Conditioned Elliptic, Ackley, Discus, two HGBat, two Rosenbrock's, two Bent Cigar, and three Rastrigin's Functions | 3000 | Composition
Table 15. Comparison based on shifted and rotated CEC2017 benchmark test functions.

| Function | Metric | GBFIO | GWO | TLBO | DE | PSO | CPA | TSA | WOA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CEC01 | Mean | 3.0427 × 10+4 | 5.2425 × 10+7 | 1.0787 × 10+10 | 3.2906 × 10+6 | 5.8511 × 10+8 | 5.5862 × 10+4 | 7.4852 × 10+10 | 1.7534 × 10+8 |
| CEC01 | SD | 2.8249 × 10+4 | 6.8858 × 10+7 | 2.7243 × 10+9 | 7.0151 × 10+6 | 2.3514 × 10+8 | 8.6168 × 10+4 | 6.0034 × 10+9 | 2.7801 × 10+8 |
| CEC01 | Best | 1.0006 × 10+2 | 2.2646 × 10+6 | 7.3081 × 10+9 | 1.0473 × 10+2 | 3.1247 × 10+8 | 1.3209 × 10+2 | 6.5858 × 10+10 | 1.2616 × 10+2 |
| CEC02 | Mean | 1.3459 × 10+16 | 1.8283 × 10+21 | 1.0306 × 10+30 | 2.4881 × 10+20 | 6.5639 × 10+16 | 5.2363 × 10+22 | 3.3919 × 10+38 | 4.5171 × 10+32 |
| CEC02 | SD | 3.7829 × 10+16 | 3.6753 × 10+21 | 1.6537 × 10+30 | 7.1219 × 10+20 | 1.9650 × 10+17 | 1.3691 × 10+23 | 8.5134 × 10+38 | 1.3551 × 10+33 |
| CEC02 | Best | 3.0591 × 10+8 | 1.2396 × 10+13 | 2.3330 × 10+26 | 6.6152 × 10+12 | 1.0099 × 10+10 | 5.0366 × 10+12 | 4.0212 × 10+36 | 4.6817 × 10+11 |
| CEC03 | Mean | 7.9853 × 10+3 | 1.2979 × 10+3 | 5.1089 × 10+4 | 1.1540 × 10+4 | 6.6428 × 10+2 | 1.7898 × 10+4 | 8.5956 × 10+4 | 6.6356 × 10+2 |
| CEC03 | SD | 3.5092 × 10+3 | 1.0247 × 10+3 | 1.5404 × 10+4 | 3.5303 × 10+3 | 1.0050 × 10+3 | 7.8294 × 10+3 | 9.5798 × 10+3 | 4.6279 × 10+2 |
| CEC03 | Best | 3.0623 × 10+3 | 3.5710 × 10+2 | 3.3715 × 10+4 | 7.1014 × 10+3 | 3.1513 × 10+2 | 7.0414 × 10+3 | 7.6401 × 10+4 | 3.0118 × 10+2 |
| CEC04 | Mean | 8.4462 × 10+3 | 2.3143 × 10+5 | 4.3737 × 10+8 | 9.1427 × 10+3 | 5.9680 × 10+5 | 1.2831 × 10+4 | 8.5166 × 10+9 | 9.7340 × 10+3 |
| CEC04 | SD | 9.1738 × 10+3 | 3.0453 × 10+5 | 2.1974 × 10+8 | 7.0608 × 10+3 | 1.4865 × 10+6 | 1.2426 × 10+4 | 1.7105 × 10+9 | 6.5563 × 10+3 |
| CEC04 | Best | 6.1169 × 10+2 | 9.8791 × 10+3 | 1.5995 × 10+8 | 6.0642 × 10+2 | 7.8244 × 10+2 | 6.0885 × 10+2 | 6.7843 × 10+9 | 7.4653 × 10+2 |
| CEC05 | Mean | 6.9705 × 10+2 | 9.8453 × 10+2 | 1.0757 × 10+4 | 6.5860 × 10+2 | 1.0612 × 10+3 | 6.9874 × 10+2 | 5.9355 × 10+4 | 4.8812 × 10+3 |
| CEC05 | SD | 2.3280 × 10+1 | 2.8484 × 10+2 | 2.0250 × 10+3 | 8.8627 | 3.9530 × 10+2 | 3.6326 × 10+1 | 4.0418 × 10+3 | 9.4759 × 10+2 |
| CEC05 | Best | 6.5335 × 10+2 | 6.0076 × 10+2 | 7.7623 × 10+3 | 6.4341 × 10+2 | 7.3606 × 10+2 | 6.0023 × 10+2 | 5.0626 × 10+4 | 3.4009 × 10+3 |
| CEC06 | Mean | 6.1199 × 10+2 | 6.1076 × 10+2 | 6.1221 × 10+2 | 6.1204 × 10+2 | 6.1142 × 10+2 | 6.1229 × 10+2 | 6.1274 × 10+2 | 6.1225 × 10+2 |
| CEC06 | SD | 0.2505 | 0.39161 | 0.2545 | 0.1725 | 0.4042 | 0.2568 | 0.1937 | 0.4033 |
| CEC06 | Best | 6.1159 × 10+2 | 6.0998 × 10+2 | 6.1156 × 10+2 | 6.1181 × 10+2 | 6.1076 × 10+2 | 6.1156 × 10+2 | 6.1243 × 10+2 | 6.1157 × 10+2 |
| CEC07 | Mean | 9.4332 × 10+2 | 9.7733 × 10+2 | 2.6965 × 10+3 | 8.9814 × 10+2 | 9.6824 × 10+2 | 9.4150 × 10+2 | 1.0159 × 10+4 | 3.3524 × 10+3 |
| CEC07 | SD | 2.3829 × 10+1 | 3.1959 × 10+1 | 3.8496 × 10+2 | 9.4581 | 8.5816 × 10+1 | 8.8598 | 8.3824 × 10+2 | 9.7905 × 10+2 |
| CEC07 | Best | 9.0639 × 10+2 | 9.4075 × 10+2 | 2.2887 × 10+3 | 8.7647 × 10+2 | 8.1326 × 10+2 | 9.2597 × 10+2 | 8.3332 × 10+3 | 2.3470 × 10+3 |
| CEC08 | Mean | 1.0170 × 10+3 | 1.1686 × 10+3 | 2.0555 × 10+4 | 9.5874 × 10+2 | 2.5350 × 10+3 | 1.0173 × 10+3 | 9.0818 × 10+4 | 4.6828 × 10+3 |
| CEC08 | SD | 1.6858 × 10+1 | 1.3060 × 10+2 | 3.0799 × 10+3 | 8.3148 | 7.6991 × 10+2 | 1.4184 × 10+1 | 9.1209 × 10+3 | 1.4188 × 10+3 |
| CEC08 | Best | 9.8354 × 10+2 | 9.4557 × 10+2 | 1.5471 × 10+4 | 9.3628 × 10+2 | 1.1926 × 10+3 | 9.8525 × 10+2 | 7.2518 × 10+4 | 2.7142 × 10+3 |
| CEC09 | Mean | 9.3282 × 10+2 | 1.0107 × 10+3 | 7.6405 × 10+3 | 9.0001 × 10+2 | 4.1650 × 10+3 | 9.1484 × 10+2 | 1.2363 × 10+4 | 6.0580 × 10+3 |
| CEC09 | SD | 1.4807 × 10+1 | 7.4294 × 10+1 | 2.1172 × 10+3 | 3.7424 × 10−2 | 1.3628 × 10+3 | 9.7802 | 1.0314 × 10+3 | 1.8479 × 10+3 |
| CEC09 | Best | 9.1759 × 10+2 | 9.0972 × 10+2 | 4.7597 × 10+3 | 9.0000 × 10+2 | 1.8404 × 10+3 | 9.0107 × 10+2 | 1.0089 × 10+4 | 3.5584 × 10+3 |
| CEC10 | Mean | 1.1011 × 10+4 | 1.1079 × 10+4 | 1.1455 × 10+4 | 1.0722 × 10+4 | 1.1401 × 10+4 | 1.0961 × 10+4 | 1.2451 × 10+4 | 1.1813 × 10+4 |
| CEC10 | SD | 1.8808 × 10+2 | 1.5078 × 10+2 | 2.3859 × 10+2 | 1.5673 × 10+2 | 1.2725 × 10+2 | 1.284 × 10+2 | 1.3989 × 10+2 | 4.1831 × 10+2 |
| CEC10 | Best | 1.0590 × 10+4 | 1.0908 × 10+4 | 1.1136 × 10+4 | 1.0548 × 10+4 | 1.1195 × 10+4 | 1.0724 × 10+4 | 1.2189 × 10+4 | 1.1129 × 10+4 |
| CEC11 | Mean | 1.7670 × 10+3 | 2.0221 × 10+6 | 3.1151 × 10+9 | 2.5291 × 10+3 | 1.1730 × 10+7 | 8.7801 × 10+5 | 4.0390 × 10+10 | 1.3320 × 10+3 |
| CEC11 | SD | 6.7221 × 10+2 | 3.4853 × 10+6 | 8.2243 × 10+8 | 3.0633 × 10+3 | 2.0238 × 10+7 | 1.6759 × 10+6 | 4.4325 × 10+9 | 2.6880 × 10+2 |
| CEC11 | Best | 1.1290 × 10+3 | 4.2580 × 10+3 | 1.5136 × 10+9 | 1.1365 × 10+3 | 1.0634 × 10+4 | 2.4347 × 10+3 | 3.2131 × 10+10 | 1.1290 × 10+3 |
| CEC12 | Mean | 5.9288 × 10+4 | 9.7257 × 10+7 | 7.1215 × 10+9 | 1.6104 × 10+4 | 1.3794 × 10+8 | 5.7313 × 10+6 | 4.8662 × 10+10 | 1.3130 × 10+9 |
| CEC12 | SD | 9.0807 × 10+4 | 1.3485 × 10+8 | 2.0435 × 10+9 | 2.5779 × 10+3 | 2.4600 × 10+8 | 1.1492 × 10+7 | 5.4498 × 10+9 | 3.1497 × 10+9 |
| CEC12 | Best | 1.3827 × 10+4 | 4.7340 × 10+6 | 4.0387 × 10+9 | 1.3849 × 10+4 | 1.6120 × 10+4 | 1.4564 × 10+4 | 3.7038 × 10+10 | 1.3874 × 10+4 |
| CEC13 | Mean | 8.3733 × 10+4 | 1.4049 × 10+8 | 1.4700 × 10+10 | 3.9304 × 10+5 | 5.8942 × 10+8 | 8.1524 × 10+4 | 8.5046 × 10+10 | 2.1004 × 10+3 |
| CEC13 | SD | 2.6233 × 10+4 | 9.9947 × 10+7 | 3.6972 × 10+9 | 4.4340 × 10+5 | 2.9850 × 10+8 | 4.4050 × 10+4 | 1.3211 × 10+10 | 1.3268 |
| CEC13 | Best | 3.3367 × 10+4 | 2.7440 × 10+7 | 1.0374 × 10+10 | 1.8847 × 10+4 | 8.4376 × 10+7 | 2.2178 × 10+3 | 5.9840 × 10+10 | 2.0984 × 10+3 |
| CEC14 | Mean | 5.8750 × 10+6 | 1.5426 × 10+7 | 9.8656 × 10+7 | 6.3622 × 10+6 | 2.1211 × 10+6 | 9.4752 × 10+6 | 9.1320 × 10+8 | 9.3891 × 10+5 |
| CEC14 | SD | 1.7853 × 10+6 | 8.1897 × 10+6 | 2.1676 × 10+7 | 2.8854 × 10+6 | 2.2163 × 10+6 | 1.0014 × 10+7 | 2.4638 × 10+8 | 6.2615 × 10+5 |
| CEC14 | Best | 2.4620 × 10+6 | 5.0749 × 10+6 | 5.8264 × 10+7 | 2.2981 × 10+6 | 2.5551 × 10+5 | 1.5089 × 10+6 | 5.2074 × 10+8 | 2.9271 × 10+5 |
| CEC15 | Mean | 8.2975 × 10+4 | 1.3341 × 10+8 | 1.0189 × 10+10 | 4.0740 × 10+5 | 3.0637 × 10+8 | 8.9142 × 10+4 | 7.6362 × 10+10 | 1.5357 × 10+3 |
| CEC15 | SD | 3.9857 × 10+4 | 1.1885 × 10+8 | 2.4488 × 10+9 | 3.0513 × 10+5 | 3.0808 × 10+8 | 5.7921 × 10+4 | 3.9859 × 10+9 | 7.9048 |
| CEC15 | Best | 1.5673 × 10+3 | 4.2886 × 10+6 | 6.8067 × 10+9 | 1.5325 × 10+3 | 3.2873 × 10+7 | 1.5468 × 10+3 | 6.9687 × 10+9 | 1.5299 × 10+3 |
| CEC16 | Mean | 1.4487 × 10+4 | 2.5661 × 10+5 | 1.2361 × 10+9 | 5.8814 × 10+4 | 2.7922 × 10+6 | 3.6117 × 10+4 | 2.9758 × 10+10 | 3.2885 × 10+5 |
| CEC16 | SD | 5.8891 × 10+2 | 4.0008 × 10+5 | 3.0577 × 10+8 | 7.6916 × 10+4 | 4.1507 × 10+6 | 6.4341 × 10+4 | 5.1031 × 10+9 | 8.6244 × 10+5 |
| CEC16 | Best | 1.4194 × 10+4 | 1.4372 × 10+4 | 7.0565 × 10+8 | 1.4231 × 10+4 | 6.4617 × 10+4 | 1.4196 × 10+4 | 1.9522 × 10+10 | 1.4195 × 10+4 |
| CEC17 | Mean | 1.4351 × 10+4 | 7.1725 × 10+7 | 1.6760 × 10+14 | 1.4279 × 10+4 | 9.7976 × 10+8 | 1.4331 × 10+4 | 5.4148 × 10+16 | 7.9218 × 10+7 |
| CEC17 | SD | 2.0286 × 10+1 | 1.6587 × 10+8 | 1.3323 × 10+14 | 7.2173 | 2.2737 × 10+9 | 5.5655 × 10+1 | 1.4025 × 10+16 | 2.3744 × 10+8 |
| CEC17 | Best | 1.4310 × 10+4 | 1.4474 × 10+4 | 2.4595 × 10+13 | 1.4270 × 10+4 | 4.7082 × 10+6 | 1.4297 × 10+4 | 3.2314 × 10+16 | 1.4310 × 10+4 |
| CEC18 | Mean | 7.6719 × 10+6 | 1.2565 × 10+7 | 8.4578 × 10+7 | 1.7995 × 10+7 | 2.3636 × 10+6 | 1.1445 × 10+7 | 4.9030 × 10+8 | 3.9831 × 10+7 |
| CEC18 | SD | 3.9782 × 10+6 | 5.1616 × 10+6 | 2.1103 × 10+7 | 1.3468 × 10+7 | 1.1146 × 10+6 | 6.8927 × 10+6 | 6.7935 × 10+7 | 1.8158 × 10+7 |
| CEC18 | Best | 1.7226 × 10+6 | 5.8824 × 10+6 | 5.4018 × 10+7 | 6.7495 × 10+6 | 1.2019 × 10+6 | 5.2488 × 10+6 | 3.2780 × 10+8 | 1.4917 × 10+7 |
| CEC19 | Mean | 4.7801 × 10+3 | 1.8763 × 10+8 | 5.7415 × 10+13 | 3.3488 × 10+3 | 8.8622 × 10+8 | 3.4119 × 10+3 | 2.5284 × 10+16 | 1.9531 × 10+3 |
| CEC19 | SD | 1.3835 × 10+3 | 1.6251 × 10+8 | 8.8069 × 10+13 | 2.6690 × 10+3 | 1.3862 × 10+9 | 2.0317 × 10+3 | 1.2974 × 10+16 | 3.0686 × 10+1 |
| CEC19 | Best | 1.9145 × 10+3 | 7.8634 × 10+6 | 3.5591 × 10+12 | 1.9074 × 10+3 | 2.7479 × 10+7 | 1.9135 × 10+3 | 7.0023 × 10+15 | 1.9237 × 10+3 |
| CEC20 | Mean | 5.9888 × 10+102 | 1.6665 × 10+84 | 7.6404 × 10+111 | 1.5548 × 10+19 | 1.4427 × 10+100 | 1.0005 × 10+91 | 8.4624 × 10+116 | 7.1701 × 10+109 |
| CEC20 | SD | 1.7792 × 10+103 | 4.9606 × 10+84 | 2.2625 × 10+112 | 4.6645 × 10+19 | 4.1334 × 10+100 | 2.2225 × 10+91 | 1.1522 × 10+117 | 1.9814 × 10+110 |
| CEC20 | Best | 6.5653 × 10+65 | 1.4713 × 10+63 | 5.2358 × 10+106 | 1.4579 × 10+4 | 1.8321 × 10+86 | 2.6695 × 10+85 | 5.4253 × 10+113 | 4.2488 × 10+103 |
| CEC21 | Mean | 2.1135 × 10+3 | 3.8159 × 10+3 | 9.5991 × 10+3 | 6.4467 × 10+3 | 3.3722 × 10+3 | 2.1853 × 10+3 | 5.8262 × 10+4 | 2.1123 × 10+3 |
| CEC21 | SD | 5.9729 | 7.7727 × 10+2 | 1.8415 × 10+3 | 1.3007 × 10+4 | 5.9383 × 10+2 | 1.2686 × 10+2 | 1.5694 × 10+4 | 5.3425 |
| CEC21 | Best | 2.1097 × 10+3 | 2.7056 × 10+3 | 7.0339 × 10+3 | 2.1097 × 10+3 | 2.4574 × 10+3 | 2.1098 × 10+3 | 3.1666 × 10+4 | 2.1097 × 10+3 |
| CEC22 | Mean | 6.4059 × 10+3 | 6.4015 × 10+3 | 6.4382 × 10+3 | 6.4009 × 10+3 | 6.4497 × 10+3 | 6.4216 × 10+3 | 6.4082 × 10+3 | 6.4113 × 10+3 |
| CEC22 | SD | 5.0426 × 10−1 | 1.3617 | 1.7149 × 10+1 | 2.1676 | 1.4282 × 10+1 | 2.6376 × 10+1 | 3.9606 × 10−1 | 6.2780 |
| CEC22 | Best | 6.4047 × 10+3 | 6.3984 × 10+3 | 6.4068 × 10+3 | 6.3971 × 10+3 | 6.4235 × 10+3 | 6.4008 × 10+3 | 6.4073 × 10+3 | 6.4072 × 10+3 |
| CEC23 | Mean | 5.4514 × 10+3 | 5.4583 × 10+3 | 5.8620 × 10+3 | 5.4702 × 10+3 | 5.6456 × 10+3 | 5.4948 × 10+3 | 6.5117 × 10+3 | 5.4470 × 10+3 |
| CEC23 | SD | 3.5528 | 1.3281 × 10+1 | 1.3220 × 10+2 | 1.6258 × 10+1 | 2.6447 × 10+2 | 3.4469 × 10+1 | 1.6655 × 10+2 | 7.6078 |
| CEC23 | Best | 5.4496 × 10+3 | 5.4496 × 10+3 | 5.7378 × 10+3 | 5.4496 × 10+3 | 5.4781 × 10+3 | 5.4567 × 10+3 | 6.2514 × 10+3 | 5.4445 × 10+3 |
| CEC24 | Mean | 2.4049 × 10+3 | 3.7702 × 10+3 | 8.1763 × 10+3 | 2.4003 × 10+3 | 4.4037 × 10+3 | 9.8171 × 10+3 | 4.4376 × 10+4 | 2.4112 × 10+3 |
| CEC24 | SD | 3.5938 | 7.0818 × 10+2 | 1.9869 × 10+3 | 9.3908 × 10−1 | 1.6095 × 10+3 | 2.2245 × 10+4 | 1.2943 × 10+4 | 4.8607 |
| CEC24 | Best | 2.4000 × 10+3 | 2.9135 × 10+3 | 5.8856 × 10+3 | 2.4000 × 10+3 | 2.5242 × 10+3 | 2.4000 × 10+3 | 2.6345 × 10+4 | 2.4047 × 10+3 |
| CEC25 | Mean | 2.5115 × 10+3 | 2.5259 × 10+3 | 2.8632 × 10+3 | 2.5346 × 10+3 | 2.6545 × 10+3 | 2.5449 × 10+3 | 3.6612 × 10+3 | 2.5209 × 10+3 |
| CEC25 | SD | 1.3832 × 10+1 | 2.1977 × 10+1 | 5.0626 × 10+1 | 6.1919 × 10+1 | 9.3843 × 10+1 | 2.5118 × 10+1 | 1.4584 × 10+2 | 1.5459 × 10+1 |
| CEC25 | Best | 2.5069 × 10+3 | 2.5069 × 10+3 | 2.7889 × 10+3 | 2.5069 × 10+3 | 2.5790 × 10+3 | 2.5071 × 10+3 | 3.4479 × 10+3 | 2.5069 × 10+3 |
| CEC26 | Mean | 5.1221 × 10+3 | 5.1271 × 10+3 | 5.4637 × 10+3 | 5.1439 × 10+3 | 5.1698 × 10+3 | 5.1538 × 10+3 | 5.9227 × 10+3 | 5.1204 × 10+3 |
| CEC26 | SD | 5.2682 | 1.5539 × 10+1 | 6.5653 × 10+1 | 4.3448 × 10+1 | 4.2873 × 10+1 | 1.8060 × 10+1 | 1.9952 × 10+2 | 1.0649 × 10+1 |
| CEC26 | Best | 5.1197 × 10+3 | 5.1197 × 10+3 | 5.3657 × 10+3 | 5.1197 × 10+3 | 5.1228 × 10+3 | 5.1211 × 10+3 | 5.6535 × 10+3 | 5.1150 × 10+3 |
| CEC27 | Mean | 4.7961 × 10+3 | 5.0083 × 10+3 | 1.3037 × 10+6 | 4.8113 × 10+3 | 5.2446 × 10+3 | 4.8678 × 10+3 | 2.3729 × 10+6 | 4.7963 × 10+3 |
| CEC27 | SD | 5.2592 × 10−1 | 1.9426 × 10+2 | 2.6522 × 10+5 | 4.6254 × 10+1 | 3.0010 × 10+2 | 1.3515 × 10+2 | 2.5672 × 10+5 | 6.7444 × 10−1 |
| CEC27 | Best | 4.7958 × 10+3 | 4.7968 × 10+3 | 9.5446 × 10+5 | 4.7958 × 10+3 | 4.9123 × 10+3 | 4.7959 × 10+3 | 2.0097 × 10+6 | 4.7958 × 10+3 |
| CEC28 | Mean | 2.8057 × 10+3 | 2.8141 × 10+3 | 3.1012 × 10+3 | 2.8232 × 10+3 | 2.8946 × 10+3 | 2.8122 × 10+3 | 3.7470 × 10+3 | 2.8057 × 10+3 |
| CEC28 | SD | 2.4908 × 10−13 | 1.0768 × 10+1 | 6.2779 × 10+1 | 1.8079 × 10+1 | 1.2846 × 10+2 | 1.3537 × 10+1 | 1.8942 × 10+2 | 2.0667 × 10−8 |
| CEC28 | Best | 2.8057 × 10+3 | 2.8057 × 10+3 | 3.0038 × 10+3 | 2.8057 × 10+3 | 2.8143 × 10+3 | 2.8057 × 10+3 | 3.5139 × 10+3 | 2.8057 × 10+3 |
| CEC29 | Mean | 4.9999 × 10+3 | 5.0411 × 10+3 | 5.2807 × 10+5 | 8.6630 × 10+3 | 6.3626 × 10+3 | 2.4707 × 10+7 | 1.1877 × 10+6 | 4.9999 × 10+3 |
| CEC29 | SD | 8.1348 × 10−13 | 5.4386 × 10+1 | 4.0488 × 10+4 | 1.0984 × 10+4 | 1.5954 × 10+3 | 6.0227 × 10+7 | 8.1164 × 10+4 | 1.9679 × 10−7 |
| CEC29 | Best | 4.9999 × 10+3 | 5.0005 × 10+3 | 4.8404 × 10+5 | 4.9999 × 10+3 | 5.0830 × 10+3 | 5.0104 × 10+3 | 1.0129 × 10+6 | 4.9999 × 10+3 |
| CEC30 | Mean | 3.0022 × 10+3 | 3.0024 × 10+3 | 1.0060 × 10+6 | 7.1836 × 10+5 | 3.2555 × 10+3 | 9.2082 × 10+6 | 2.0871 × 10+6 | 3.0023 × 10+3 |
| CEC30 | SD | 9.2714 × 10−4 | 3.0809 × 10−2 | 2.6791 × 10+5 | 2.1461 × 10+6 | 2.4063 × 10+2 | 2.7196 × 10+7 | 3.0555 × 10+5 | 3.7938 × 10−2 |
| CEC30 | Best | 3.0022 × 10+3 | 3.0023 × 10+3 | 7.1959 × 10+5 | 3.0022 × 10+3 | 3.0712 × 10+3 | 3.1290 × 10+3 | 1.6273 × 10+6 | 3.0023 × 10+3 |
| Rank Based on Mean of the Results | Mean Rank | 2.233 | 3.933 | 6.733 | 2.933 | 4.733 | 4 | 7.8 | 3.633 |
| | Total Rank | 1 | 4 | 7 | 2 | 6 | 5 | 8 | 3 |
| Rank Based on Best of the Results | Mean Rank | 2.533 | 4.1 | 6.767 | 2.833 | 4.900 | 3.533 | 7.967 | 3.367 |
| | Total Rank | 1 | 5 | 7 | 2 | 6 | 4 | 8 | 3 |
Table 16. Performance of the GBFIO and other algorithms on the TCSD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| x1 | 0.051733 | 0.051131 | 0.051559 | 0.051689 | 0.051691 |
| x2 | 0.357782 | 0.343415 | 0.353596 | 0.356717 | 0.356762 |
| x3 | 11.226838 | 12.119772 | 11.475790 | 11.289034 | 11.28640 |
| g1 | −2.220446 × 10−16 | −1.137609 × 10−4 | −7.716146 × 10−5 | 0 | −2.220446 × 10−16 |
| g2 | 0 | −2.930045 × 10−5 | −3.314506 × 10−5 | 0 | 0 |
| g3 | −4.049367 | −4.047656 | −4.047007 | −4.053783 | −4.053872 |
| g4 | −0.729275 | −0.729614 | −0.729896 | −0.727730 | −0.727698 |
| Optimum objective function | 0.012665 | 0.012667 | 0.012667 | 0.012665 | 0.012665 |
Table 17. Statistical results of the GBFIO algorithm in comparison with other algorithms on the TCSD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| Best | 0.012665 | 0.012667 | 0.012667 | 0.012665 | 0.012665 |
| Mean | 0.012668 | 0.012702 | 0.012698 | 0.012744 | 0.012705 |
| Worst | 0.012676 | 0.012725 | 0.012763 | 0.013090 | 0.013193 |
| SD | 3.05638 × 10−6 | 2.07286 × 10−5 | 2.44736 × 10−5 | 1.24856 × 10−4 | 1.14067 × 10−4 |
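The objective and constraint columns of Tables 16 and 17 can be spot-checked against the tension/compression spring design formulation that is standard in the literature. The paper does not restate the equations at this point, so the sketch below assumes that standard form, with x1 the wire diameter, x2 the mean coil diameter, and x3 the number of active coils:

```python
# Standard TCSD formulation (an assumption): minimize spring weight subject
# to four constraints; a design is feasible when every g <= 0.

def tcsd(x1, x2, x3):
    f = (x3 + 2.0) * x2 * x1**2                          # spring weight
    g1 = 1.0 - (x2**3 * x3) / (71785.0 * x1**4)          # minimum deflection
    g2 = ((4.0 * x2**2 - x1 * x2)
          / (12566.0 * (x2 * x1**3 - x1**4))
          + 1.0 / (5108.0 * x1**2) - 1.0)                # shear stress
    g3 = 1.0 - 140.45 * x1 / (x2**2 * x3)                # surge frequency
    g4 = (x1 + x2) / 1.5 - 1.0                           # outside diameter
    return f, (g1, g2, g3, g4)

# GBFIO column of Table 16; prints f ~ 0.012665. Small discrepancies in the
# g values are expected because the table reports x only to six decimals.
f, g = tcsd(0.051733, 0.357782, 11.226838)
print(f, g)
```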
Table 18. Performance of the GBFIO and other algorithms on the WBD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| x1 | 0.203662 | 0.203663 | 0.203726 | 0.203671 | 0.203669 |
| x2 | 2.657678 | 2.656573 | 2.662157 | 2.659399 | 2.659002 |
| x3 | 9.474038 | 9.475445 | 9.467296 | 9.472102 | 9.472551 |
| x4 | 0.203662 | 0.203667 | 0.203735 | 0.203671 | 0.203669 |
| g1 | −0.000427 | −0.640565 | −0.173804 | 0 | 0 |
| g2 | −2429.184048 | −2437.957204 | −2399.718172 | −2419.093017 | −2421.428131 |
| g3 | −0.241422 | −0.241425 | −0.241413 | −0.241419 | −0.241420 |
| g4 | −6.790706 × 10−9 | −3.767273 × 10−7 | −9.238739 × 10−6 | 0 | 0 |
| g5 | −0.009883 | −0.960555 | −3.702637 | −0.004736 | −9.094947 × 10−13 |
| g6 | −0.078662 | −0.078663 | −0.078726 | −0.078671 | −0.078669 |
| g7 | −3.407872 | −3.407712 | −3.407979 | −3.407958 | −3.407938 |
| Optimum objective function | 1.668085 | 1.668196 | 1.668231 | 1.668085 | 1.668085 |
Table 19. Statistical results of the GBFIO algorithm in comparison to other algorithms on the WBD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| Best | 1.668085 | 1.668196 | 1.668231 | 1.668085 | 1.668085 |
| Mean | 1.668139 | 1.668613 | 1.673312 | 1.750973 | 1.670535 |
| Worst | 1.668261 | 1.670510 | 1.714264 | 2.277017 | 1.692367 |
| SD | 4.02387 × 10−5 | 0.000491 | 0.011483 | 0.139742 | 0.007278 |
Table 20. Performance of the GBFIO and other algorithms on the PVD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| x1 | 0.778171 | 0.778334 | 0.778177 | 0.799488 | 0.778169 |
| x2 | 0.384650 | 0.384867 | 0.384655 | 0.396177 | 0.384649 |
| x3 | 40.319716 | 40.32289 | 40.31969 | 41.42425 | 40.31962 |
| x4 | 199.998644 | 199.9564541 | 200 | 185.17393 | 200 |
| g1 | −3.474887 × 10−12 | −1.026458 × 10−4 | −6.906605 × 10−6 | 0 | −1.110223 × 10−16 |
| g2 | −7.984169 × 10−13 | −1.867439 × 10−4 | −5.317988 × 10−6 | −9.897130 × 10−4 | 0 |
| g3 | −8.302741 × 10−7 | −10.026897 | −4.945010 | 0 | −4.656613 × 10−10 |
| g4 | −40.001356 | −40.043546 | −40 | −54.826069 | −40 |
| Optimum objective function | 5885.335987 | 5886.766716 | 5885.421274 | 5925.80197 | 5885.332774 |
Table 21. Statistical results of the GBFIO algorithm in comparison to other algorithms on the PVD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| Best | 5885.335987 | 5886.766716 | 5885.421274 | 5925.80197 | 5885.332774 |
| Mean | 5885.729959 | 5906.428534 | 5957.584751 | 6268.301804 | 5987.870488 |
| Worst | 5887.656004 | 6159.109518 | 6413.063728 | 7107.412167 | 6409.398064 |
| SD | 0.504172297 | 58.24275335 | 131.2308551 | 294.4901147 | 165.3137231 |
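Tables 20 and 21 can be checked the same way against the standard four-variable pressure vessel design formulation (again an assumption, since the equations are not restated here: x1 is the shell thickness, x2 the head thickness, x3 the inner radius, and x4 the length of the cylindrical section):

```python
import math

# Standard PVD formulation (an assumption): minimize material and forming
# cost; a design is feasible when every g <= 0.

def pvd(x1, x2, x3, x4):
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)         # total cost
    g1 = -x1 + 0.0193 * x3                                   # shell thickness
    g2 = -x2 + 0.00954 * x3                                  # head thickness
    g3 = (-math.pi * x3**2 * x4
          - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0)       # working volume
    g4 = x4 - 240.0                                          # length limit
    return f, (g1, g2, g3, g4)

# GBFIO column of Table 20; prints f ~ 5885.34 with g1-g3 ~ 0 (active) and
# g4 ~ -40, matching the table up to the rounding of the printed variables.
f, g = pvd(0.778171, 0.384650, 40.319716, 199.998644)
print(f, g)
```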
Table 22. Performance of the GBFIO and other algorithms on the SRD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| x1 | 3.5 | 3.501233 | 3.5 | 3.5 | 3.5 |
| x2 | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 |
| x3 | 17 | 17 | 17 | 17 | 17 |
| x4 | 7.3 | 7.307395699 | 7.3 | 7.3 | 7.3 |
| x5 | 7.715319911 | 7.787380246 | 7.715385227 | 7.715319911 | 8.3 |
| x6 | 3.350215 | 3.350762 | 3.350221 | 3.350215 | 3.350215 |
| x7 | 5.286654 | 5.287165 | 5.28666 | 5.286654 | 5.286859 |
| g1 | −7.391528 × 10−2 | −7.424131 × 10−2 | −7.391585 × 10−2 | −7.391528 × 10−2 | −7.391528 × 10−2 |
| g2 | −0.197999 | −0.198281 | −0.197999 | −0.197999 | −0.197999 |
| g3 | −0.499172 | −0.497977 | −0.499176 | −0.499172 | −0.499172 |
| g4 | −0.904644 | −0.901985 | −0.904642 | −0.904644 | −0.881299 |
| g5 | −7.771561 × 10−16 | −4.776360 × 10−4 | −5.500447 × 10−6 | −7.771561 × 10−16 | −2.220446 × 10−16 |
| g6 | 0 | −2.755579 × 10−4 | −3.228823 × 10−6 | 0 | −2.220446 × 10−16 |
| g7 | −0.702500 | −0.702500 | −0.702500 | −0.702500 | −0.702500 |
| g8 | −3.330669 × 10−16 | −3.520475 × 10−4 | −6.164804 × 10−7 | −2.220446 × 10−16 | −2.220446 × 10−16 |
| g9 | −0.583333 | −0.583187 | −0.583333 | −0.583333 | −0.583333 |
| g10 | −5.132575 × 10−2 | −5.217353 × 10−2 | −5.132449 × 10−2 | −5.132575 × 10−2 | −5.132575 × 10−2 |
| g11 | −7.771561 × 10−16 | −9.181416 × 10−3 | −7.651281 × 10−6 | −9.992007 × 10−16 | −7.041622 × 10−2 |
| Optimum objective function | 2994.471066 | 2997.066108 | 2994.478541 | 2994.471066 | 3007.436552 |
Table 23. Statistical results of the GBFIO algorithm in comparison to other algorithms on the SRD problem.

| | GBFIO | GWO | TLBO | DE | PSO |
| --- | --- | --- | --- | --- | --- |
| Best | 2994.471066 | 2997.06611 | 2994.478541 | 2994.471066 | 3007.436552 |
| Mean | 2994.471066 | 3001.723014 | 3002.476476 | 2995.286717 | 3049.074191 |
| Worst | 2994.471066 | 3009.972106 | 3037.194378 | 2999.867867 | 3180.009085 |
| SD | 3.02268 × 10−10 | 3.759029713 | 11.22155472 | 1.496503751 | 34.49904775 |
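The SRD columns check out the same way under the standard seven-variable speed reducer formulation (assumed here: x1 face width, x2 tooth module, x3 number of pinion teeth, x4 and x5 bearing span lengths, x6 and x7 shaft diameters); evaluating the GBFIO column of Table 22 with it reproduces the reported objective and all eleven constraint values up to rounding:

```python
import math

# Standard SRD formulation (an assumption): minimize gearbox weight; a
# design is feasible when every g <= 0.

def srd(x1, x2, x3, x4, x5, x6, x7):
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    g = (
        27.0 / (x1 * x2**2 * x3) - 1.0,                      # g1 bending stress
        397.5 / (x1 * x2**2 * x3**2) - 1.0,                  # g2 surface stress
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1.0,              # g3 shaft 1 deflection
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1.0,              # g4 shaft 2 deflection
        math.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6)
        / (110.0 * x6**3) - 1.0,                             # g5 shaft 1 stress
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6)
        / (85.0 * x7**3) - 1.0,                              # g6 shaft 2 stress
        x2 * x3 / 40.0 - 1.0,                                # g7 gear ratio bound
        5.0 * x2 / x1 - 1.0,                                 # g8 width/module lower
        x1 / (12.0 * x2) - 1.0,                              # g9 width/module upper
        (1.5 * x6 + 1.9) / x4 - 1.0,                         # g10 shaft 1 geometry
        (1.1 * x7 + 1.9) / x5 - 1.0,                         # g11 shaft 2 geometry
    )
    return f, g

# GBFIO column of Table 22; prints f ~ 2994.471066 with all g <= 0.
f, g = srd(3.5, 0.7, 17, 7.3, 7.715319911, 3.350215, 5.286654)
print(f, [round(v, 6) for v in g])
```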