Article

A Nonlinear Convex Decreasing Weights Golden Eagle Optimizer Technique Based on a Global Optimization Strategy

College of Big Data & Information Engineering, Guizhou University, Guiyang 550025, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(16), 9394; https://doi.org/10.3390/app13169394
Submission received: 27 June 2023 / Revised: 6 August 2023 / Accepted: 14 August 2023 / Published: 18 August 2023
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract:
A novel approach called the nonlinear convex decreasing weights golden eagle optimization technique based on a global optimization strategy is proposed to overcome the limitations of the original golden eagle algorithm, which include slow convergence and low search accuracy. To enhance the diversity of the golden eagle, the algorithm is initialized with the Arnold chaotic map. Furthermore, nonlinear convex weight reduction is incorporated into the position update formula of the golden eagle, improving the algorithm’s ability to perform both local and global searches. Additionally, a final global optimization strategy is introduced, allowing the golden eagle to position itself in the best possible location. The effectiveness of the enhanced algorithm is evaluated through simulations using 12 benchmark test functions, demonstrating improved optimization performance. The algorithm is also tested using the CEC2021 test set to assess its performance against other algorithms. Several statistical tests are conducted to compare the efficacy of each method, with the enhanced algorithm consistently outperforming the others. To further validate the algorithm, it is applied to the cognitive radio spectrum allocation problem after discretization, and the results are compared to those obtained using traditional methods. The results indicate the successful operation of the updated algorithm. The effectiveness of the algorithm is further evaluated through five engineering design tasks, which provide additional evidence of its efficacy.

1. Introduction

The earliest approaches to optimization problems were mathematical or numerical [1], where the goal was to find a zero-derivative point that yields the final answer. The search space expands exponentially as the number of dimensions rises, making it nearly impossible to use these techniques to solve nonlinear and non-convex problems with many variables and constraints. Additionally, numerical approaches may settle at a local optimum, where the derivative is also zero. Such numerical approaches cannot guarantee the discovery of a globally optimal solution, since real-world problems usually exhibit stochastic behavior and have unexplored search areas [2].
To overcome the limitations of numerical approaches, more sophisticated meta-heuristic algorithms are routinely utilized to address difficult optimization issues. These techniques have the benefits of being easily applied to continuous and discrete functions, not requiring any extra complicated mathematical operations such as derivatives, and having a low incidence of falling into local optimality points. Single-solution-based methods and population-based methods are the two categories of meta-heuristic techniques. A solution (typically random) is generated and iteratively improved until the stopping criterion is reached in single-solution-based methods. Based on the interaction of information between the solutions, population-based methods create a random set of solutions in a predefined search space and are iteratively updated to find the (near) optimal solution. Deep learning is also a form of optimization technique that is frequently used in many different fields [3].
The most prevalent metaheuristic algorithms are those that draw their inspiration from evolutionary theory or common social animal behaviors such as foraging, mating, hunting, and memory. This strategy can be applied in several different ways to search the feasible regions of a problem more efficiently. Numerous swarm intelligence systems serve as examples. Particle Swarm Optimization (PSO) [4], inspired by the foraging behavior of birds, is the birthplace of swarm intelligence algorithms. Ant Colony Optimization (ACO) [5] is an algorithm inspired by simulating the collective routing behaviors of ants in nature. Cuckoo Search (CS) [6] is an algorithm inspired by the brood parasitism through which cuckoos raise their young. Firefly Algorithms (FA) [7] are algorithms inspired by the flickering behavior of fireflies. Artificial Bee Colony (ABC) [8] is an algorithm inspired by imitating the honey collection mechanism of bees. New swarm intelligence algorithms have also been put forward in recent years. The Whale Optimization Algorithm (WOA) [9] is an algorithm inspired by the predatory behavior of whales. The Grey Wolf Optimizer (GWO) [10] is an algorithm inspired by the hunting behavior of grey wolf packs. The Seagull Optimization Algorithm (SOA) [11] is an algorithm inspired by simulating the migration of seagulls in nature and their attack (foraging) behavior during migration. The Salp Swarm Algorithm (SSA) [12] is an algorithm inspired by the aggregation behavior of salp swarms and their formation into a chain for predation and movement. The Butterfly Optimization Algorithm (BOA) [13] is an algorithm inspired by the behavior whereby butterflies receive, perceive and analyze smells in the air to determine food sources and the potential direction of mating partners.
Swarm intelligence algorithms do, however, have significant shortcomings. They usually require many parameters to be tuned, including the population size, number of iterations, inertia weights, etc. The choice of these parameters has a considerable impact on the algorithm, because different problems necessitate different parameter values, which are difficult to regulate. Due to their dispersed nature, swarm intelligence algorithms must iterate repeatedly in order to discover the optimum solution, which slows them down. At the same time, an algorithm may enter a local optimum and continually search a small region that does not contain the best solution, causing it to become stuck.
Researchers have recently presented some enhanced swarm intelligence algorithms and used them in various industries. Nafees et al. [14] proposed improved variants of Particle Swarm Optimization that incorporate novel modifications and demonstrate superior performance compared to conventional approaches in solving optimization problems and training artificial neural networks. A Fusion Multi-Strategy Marine Predator Algorithm for mobile robot path planning (FMMPA) was proposed by Luxian Yang [15]. An improved marine predator algorithm (MPA-DMA) was proposed by Li Shouyu et al. [16] and used for feature selection with positive outcomes. However, its neighborhood clustering learning technique can easily push the population beyond its bounds, which is likely to affect how practical problems are resolved. A marine predator method based on mirror reflection urban and rural learning (PRIL-MPA), also applied to feature selection, was proposed by Xuming et al. [17]. In contrast to the previous method, in this approach an optimal individual is identified to serve as a search guide for the others, which reduces population diversity, shrinks the search space, and makes it easy to fall into local optima. A marine predator algorithm (MSIMPA) combining chaotic opposition and collective learning, which also adopted the best-individual guidance approach, was proposed by Machi et al. [18]. The same issue remains, and group learning also has the potential to result in extended running times.
The use of cognitive radio has benefited from the work of numerous academics. The whale algorithm (IWOA) was enhanced by Xu Hang [19] and others to allocate the cognitive radio spectrum. The enhanced whale method outperforms competing algorithms thanks to a nonlinear convergence factor and a hybrid reverse learning strategy. Similar to this, the same authors enhanced the Grey Wolf Algorithm (IBGWO) [20] for usage in this field and employed three strategies: the nonlinear convergence factor, the Cauchy perturbation approach, and adaptive weight, achieving improved outcomes. Yin Dexin et al. [21] adopted a novel strategy in which the industrial internet of things was recognized as an environment, and applied the improved sparrow search algorithm (IBSSA) to the problem model. Similar to this, Wang Yi et al. [22] applied the enhanced mayfly algorithm (GSWBMA) to the cognitive heterogeneous cellular network spectrum allocation problem, offering helpful guidance for future researchers. Numerous researchers have also enhanced numerous swarm intelligence algorithms in the context of traditional engineering applications [23,24,25].
A novel algorithm called the golden eagle optimizer (GEO) [26] was put forward in 2021, motivated by two hunting behaviors of golden eagles. The golden eagle shifts between the cruising and attacking phases in order to catch prey more quickly, and this shift from cruise behavior to attack behavior has a direct impact on the end result. Cruise behavior mostly reflects the algorithm's exploration function, while attack behavior primarily reflects its exploitation function. Because the original algorithm cannot balance the exploration and exploitation phases, it randomly switches from cruise behavior to attack behavior, which gives GEO a slow convergence speed, poor search accuracy, and weak robustness. Several strategies exist to improve this. Few researchers, however, have previously used an improved approach to simultaneously take into account cognitive radio allocation and traditional engineering design constraints, as described in this paper. This algorithm model is proposed based on the advantages of the golden eagle approach, which include fewer optimization parameters, robustness to local optima, and superior handling of high-dimensional problems.
This paper proposes a nonlinear convex decreasing weights golden eagle optimization technique based on a global optimization strategy to handle this problem. The improved algorithm comprises three components: Arnold chaotic map initialization, nonlinear convex weight reduction, and a global optimization strategy. An Arnold chaotic map with good ergodicity is used to initialize the golden eagle population. On the one hand, after examining several inertia weight types, nonlinear convex decreasing weights are utilized to help the golden eagle achieve improved convergence during the optimization process. On the other hand, in order to better achieve optimization, the golden eagle location update formula now includes the global optimization strategy, which connects each golden eagle to the most optimized individual of its generation. On 12 benchmark functions and CEC2021, IGEO is contrasted with other sophisticated optimization algorithms, and the numerical results demonstrate that IGEO has better optimization performance than the other techniques. We also assess single-strategy variants of the improvement against the original GEO on the 12 benchmark functions; the numerical results show how well these variants perform compared with the original GEO. Finally, IGEO is transformed into a binary algorithm and applied to cognitive radio spectrum allocation, where it achieves higher spectral efficiency than other improved algorithms. Five classic engineering cases are also used to further test the performance of the algorithm.
The main contributions of this paper are as follows:
(i)
The Arnold chaotic map strategy is employed to help the golden eagle population obtain a better initial position.
(ii)
The nonlinear convex reduction strategy is used to coordinate the cruising and attacking behavior of the golden eagle, thereby improving its local and global search capabilities.
(iii)
The global optimization strategy is utilized to assist the golden eagle population in complex environmental optimization, enhancing the possibility of escaping from local optima.
(iv)
Twelve benchmark test functions and the CEC2021 test set are employed to assess the capabilities of the improved algorithm, including comparisons with eight other algorithms.
(v)
Various statistical tests, including the Wilcoxon rank-sum test and Friedman ranking, as well as Holm’s subsequent verification, are conducted to determine the overall ranking of the comparison algorithms and the improved algorithm on the twelve benchmark test functions and CEC2021, thereby validating the final performance of the algorithm.
(vi)
The individual effects of the nonlinear convex reduction strategy and the last global optimization strategy are tested separately.
(vii)
Algorithm complexity analysis is performed.
(viii)
The improved algorithm is applied to the classical problem of cognitive radio spectrum allocation, and the results are compared with those of commonly used algorithms.
(ix)
Five common engineering applications are utilized to test the search capabilities of the algorithm.
The remainder of this article is organized as follows. The second section reviews the fundamental ideas and the current state of GEO research. The third section offers a thorough explanation of the improved technique proposed in this article, and an analysis of its complexity. The fourth section, which mostly focuses on application components, contains simulation experiments and engineering applications. The simulation experiments include comparisons with single-improvement variants as well as comparisons between the improved algorithm and alternative algorithms. These tests are based on the CEC2021 test set and 12 benchmark test functions, and the experimental findings undergo statistical analysis. In the engineering experiments, the enhanced algorithm is used to solve the well-known cognitive radio spectrum allocation problem, and the results are compared with those of many other algorithms. Five classic engineering application cases are also used to verify the feasibility of the algorithm in this paper. Finally, the conclusions of this article are presented.

2. Related Works

2.1. Basic Work on GEO

2.1.1. Prey Selection

The GEO method comprises two phases: exploration and exploitation. During the exploration phase, the golden eagle can be seen to fly around its target, before attacking it during the exploitation phase. As a result, each golden eagle must select a prey item before cruising around and attacking it. The best solution found so far is treated as the location of the prey, and every golden eagle is assumed to remember the best solution found thus far. Each golden eagle hunts and flies in search of better prey. The golden eagle's attack and cruise vectors are depicted in a two-dimensional trajectory diagram in Figure 1. The golden eagle's attack direction is shown by the left arrow, while its cruise direction is indicated by the downward arrow.

2.1.2. Attack and Cruise

The golden eagle's attack behavior is the procedure of catching the prey once the eagle gets close to it. The attack behavior can be modeled by a vector that extends from the golden eagle's current location to the location of the prey in its memory. Equation (1) describes how the golden eagle attacks.
$A_i = X_f - X_i$, (1)
where $A_i$ is the attack vector of the $i$th golden eagle, $X_f$ is the best position found by golden eagle $f$ so far, and $X_i$ is the current position vector of golden eagle $i$.
The cruise behavior of the golden eagle is a process that starts at the eagle's current location, circles around the prey, and continues to move toward it. This procedure involves the attack vector, the hyperplane, and the cruise vector. The cruise vector lies in the tangent hyperplane of the golden eagle's cruise trajectory, perpendicular to the attack vector. To compute the cruise vector, the hyperplane equation must first be obtained. A crucial reference plane in cruise behavior, the hyperplane is a linear subset with n − 1 dimensions in an n-dimensional linear space. It can be identified by an arbitrary point on the hyperplane and its normal vector. Equation (2) displays the scalar form of the hyperplane equation in n-dimensional space.
$h_1 x_1 + h_2 x_2 + \cdots + h_n x_n = d$, (2)
where $H$ is the hyperplane's normal vector, $X$ is a variable vector, and $d$ is a constant. Therefore, for an arbitrary point $P$ on the hyperplane, Equation (3) can be deduced.
$d = H \cdot P = \sum_{j=1}^{n} h_j p_j$, (3)
If the attack vector A is regarded as a normal vector and the location of the eagle X is regarded as an arbitrary point, Equation (4) can clearly be obtained.
$d = A \cdot X = \sum_{j=1}^{n} a_j x_j$, (4)
The starting point of the cruise vector C is the position of the golden eagle. The cruise vector can be chosen freely in n − 1 dimensions, but the hyperplane equation fixes the last dimension, as illustrated in Equation (4). There must be free variables and one fixed variable, which are determined as follows: randomly select one variable as the fixed variable, denoted as $k$, and assign random values to all variables except the $k$th. The cruise vector of golden eagle $i$ in an n-dimensional space can then be expressed using Equations (5) and (6).
$c_k = \dfrac{d - \sum_{j, j \neq k}^{n} a_j}{a_k}$, (5)
$C_i = \{c_1, c_2, \ldots, c_k, \ldots, c_n\}, \quad c_i = \mathrm{random} \ \ \forall i \neq k$, (6)
where $a_j$ is the $j$th element of attack vector $A$. The relationship between the golden eagle's cruise behavior, attack behavior and the hyperplane is shown in Figure 2. $A$ represents the golden eagle's attack vector, while $C$ represents a potential cruise vector on this hyperplane.
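As a sketch, the cruise-vector construction described above (fix one component $k$, draw the remaining components at random, then solve the hyperplane equation $A \cdot C = d$ for the fixed component) might look as follows in Python. The function name is mine, and using the randomly drawn components inside the solve, so that $C$ lands exactly on the hyperplane, is my reading of the prose, not the paper's code:

```python
import numpy as np

def cruise_vector(attack, position, rng=None):
    """Construct a cruise vector C on the hyperplane with normal A
    passing through the eagle's position X (so that A . C = A . X = d).

    attack   : attack vector A_i (assumed nonzero)
    position : current position X_i of the golden eagle
    """
    rng = np.random.default_rng() if rng is None else rng
    n = attack.size
    d = float(attack @ position)            # d = A . X, cf. the hyperplane relation
    nonzero = np.flatnonzero(attack)        # fixed index must have a_k != 0
    k = int(rng.choice(nonzero))            # randomly chosen fixed variable
    c = rng.random(n)                       # random free variables
    mask = np.arange(n) != k
    # solve a_k * c_k = d - sum_{j != k} a_j * c_j for the fixed component
    c[k] = (d - np.sum(attack[mask] * c[mask])) / attack[k]
    return c
```

By construction the returned vector satisfies the hyperplane equation, so it is a valid cruise direction candidate for the eagle.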

2.1.3. Move to New Location

The golden eagle i cruises around the prey to obtain the cruise vector C i , and when it reaches an appropriate position, it pounces on the prey to obtain the attack vector A i . After the golden eagle has performed its cruise and attack, the displacement of golden eagle i can be expressed as shown in Equation (7), while the position of golden eagle i at the t + 1 th generation can be expressed as shown in Equation (8).
$\Delta x_i = r_1 P_a \dfrac{A_i}{\|A_i\|} + r_2 P_c \dfrac{C_i}{\|C_i\|}$, (7)
$x_i^{t+1} = x_i^t + \Delta x_i^t$, (8)
where $r_1$ and $r_2$ are random vectors in [0,1], $P_c$ is the cruise coefficient and $P_a$ is the attack coefficient. $\|C_i\|$ and $\|A_i\|$ are the Euclidean norms of the cruise and attack vectors of golden eagle $i$. $x_i^t$ is the old position of golden eagle $i$ at the $t$th generation, while $x_i^{t+1}$ is the new position of golden eagle $i$ at the $(t+1)$th generation. $P_c$ and $P_a$ can be calculated using Equation (9).
$P_c = P_c^0 + \dfrac{t}{T}\left(P_c^T - P_c^0\right), \quad P_a = P_a^0 + \dfrac{t}{T}\left(P_a^T - P_a^0\right)$, (9)
where $P_c^0$ and $P_c^T$ are the initial and final values of $P_c$, and $P_a^0$ and $P_a^T$ are the initial and final values of $P_a$. $t$ and $T$ are the current and maximum generations, respectively.
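A hedged sketch of one golden eagle's move, combining the attack vector and the displacement and coefficient updates of Equations (1) and (7)–(9). The coefficient endpoint defaults and the orthogonalized random stand-in for the cruise direction are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def geo_step(x, x_best, t, T, pa0=0.5, paT=2.0, pc0=1.0, pcT=0.5, rng=None):
    """One GEO-style position update for a single eagle.

    x      : current position of the golden eagle
    x_best : prey, i.e. the best position in this eagle's memory
    t, T   : current and maximum generation numbers
    """
    rng = np.random.default_rng() if rng is None else rng
    pa = pa0 + (t / T) * (paT - pa0)          # Eq. (9): attack coefficient
    pc = pc0 + (t / T) * (pcT - pc0)          # Eq. (9): cruise coefficient
    A = x_best - x                            # Eq. (1): attack vector
    # stand-in cruise direction: random vector projected orthogonal to A
    C = rng.standard_normal(x.size)
    if np.linalg.norm(A) > 0:
        C -= (C @ A) / (A @ A) * A
    r1, r2 = rng.random(x.size), rng.random(x.size)
    dx = np.zeros_like(x, dtype=float)
    if np.linalg.norm(A) > 0:
        dx += r1 * pa * A / np.linalg.norm(A)  # Eq. (7): attack term
    if np.linalg.norm(C) > 0:
        dx += r2 * pc * C / np.linalg.norm(C)  # Eq. (7): cruise term
    return x + dx                              # Eq. (8)
```

Iterating this step over the whole population, while tracking each eagle's best-found prey, yields the basic GEO loop.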

2.2. Related Works on GEO

GEO is widely utilized by researchers, as it is an outstanding meta-heuristic method. Aijaz et al. [27] presented a two-stage photovoltaic residential system with electric vehicle charging capability, where the performance of the system was improved by an optimized proportional and integral gain selection bidirectional DC/DC converter (BDC) proportional–integral controller via the GEO algorithm. The golden eagle algorithm also outperformed the particle swarm and genetic algorithms in terms of performance. Magesh et al. [28] presented improved grid-connected wind turbine performance using PI control strategies based on GEO to improve the dynamic and transient stability of grid-connected PMSG-VSWT. In addition, GEO has also been applied to fuzzy control. Kumar et al. [29] set out to improve the performance of a nonlinear power system using a hierarchical golden eagle architecture with a self-evolving intelligent fuzzy controller (HGE-SIFC). GEO seems to be popular in wind and power systems. Sun et al. [30] accurately predicted wind power generation by means of the GEO algorithm, which they first improved, before proposing an extreme learning machine model that combined the improved GEO with an extreme learning machine for the prediction of wind power from numerical weather forecast data. In solar photovoltaic power systems, the GEO algorithm has another predictive feature. Boriratrit et al. [31] addressed the instability of machine learning and combined the GEO with a machine learning model, proposing a new machine learning model that yielded a smaller minimum root mean square error than the comparison model. Huge electricity demands have proved to be a tough challenge for power companies and system operators due to the increasing number of consumers in the electricity system and the unpredictability of electricity loads. Therefore, Mallappa et al. [32] proposed an effective energy management system (EMS) named golden eagle optimization with incremental electricity conductivity (GEO-INC) to meet demands with respect to load. Zhang et al. [33] proposed an improved GEO with improved strategies including an individual sample learning strategy, a decentralized foraging strategy and a random perturbation strategy, and these improved strategies were applied to a hybrid energy storage system with a complementary wind and solar power storage system for energy optimization. GEO has also been used in forecasting using learning machines; for example, a meta-learning extreme learning machine (MGEL-ELM) based on GEO and logistic mapping, and a same-date time-interval-averaging interpolation algorithm (SAME) [34] have been proposed in the literature to improve the forecasting performance of incomplete solar irradiance time series datasets. Profit Load Distribution (PLD) is a typical multi-constrained nonlinear optimization problem, and is an important part of energy saving and consumption reduction in power systems. Group intelligent optimization algorithms are an effective method for solving nonlinear optimization problems such as PLD. For the actual operational constraints of the power system in the PLD model, a novel GEO-based solution was proposed in [35]. A series of segmented quadratic polynomial sums were modelled for the fitness function as the cost function used by GEO to calculate the optimization. The results showed that the GEO was able to effectively solve the power system PLD problem. In addition to the prediction and power system aspects, GEO has also been applied in the field of image research. Al-Gburi et al. [36] applied GEO to the disc segmentation of human retinal images using pre-processing based on the golden eagle algorithm-guided geometric active Shannon contours and post-processing based on regular interval cross-sectional segmentation. Dwivedi et al. [37] proposed a novel medical image processing technique for analyzing different peripheral blood cells such as monocytes, lymphocytes, neutrophils, eosinophils, basophils and macrophages, using fuzzy c-mean clustering (MLWIFCM) based on GEO to perform cell nucleus segmentation. Justus et al. [38] proposed a hybrid multilayer perceptron (MLP)–convolutional neural network (CNN) (MLP-CNN) technique to provide services to SUs even under active TCS constraints in order to address the spectrum scarcity problem in cognitive radio. Eluri et al. [39] improved the GEO algorithm by using a transfer function to transform GEO into discrete space, and used time-varying flight duration to balance the cruise and attack parts of GEO. Moreover, their improved GEO was applied to feature selection, achieving good results. GEO has been used for multi-objective optimization in the context of the problem of reducing robot power consumption, obtaining a better Pareto frontier solution [40]. Xiang et al. [41] proposed PELGEO with personal example learning in combination with the grey wolf algorithm. Pan et al. [42] proposed GEO_DLS with personal example learning to enhance the search ability and mirror reflection learning to improve the optimization accuracy. Both PELGEO and GEO_DLS were applied for the 3D path planning of a UAV during power inspection. Ilango R et al. [43] proposed S2NA-GEO combined with a neural network learning algorithm. Later, a model for the uncertainty associated with renewable energy based on GEO was developed to relate the negative effects of variations in RES output for electric vehicles and intelligent charging [44].

2.3. Cognitive Radio Model

The demand for spectrum resources has significantly increased as a result of the quick expansion of wireless communication services brought on by the introduction of 5G communications. However, low spectrum utilization and the wastage of important resources have been caused by the fixed spectrum allocation mechanism and the limited supply of spectrum resources. In order to address these issues, Ghasemi et al. proposed Cognitive Radio (CR) [45] as a means to improve spectrum utilization.
Cognitive radio allows wireless communication systems to be aware of their surroundings, to secondarily utilize the unused spectrum holes of authorized users, to adapt to environmental changes and to automatically adjust system parameters, and to utilize the spectrum in a more flexible and efficient way. Existing spectrum allocation models include graph-theoretic coloring models [46,47], bargaining mechanism models [48], game theory models [49], etc.
Spectrum allocation models can optimize overall system benefit, but they struggle to manage user fairness precisely, and so cannot rule out flaws such as unfair allocations and malicious competition among users. Scholars have used different swarm intelligence algorithms to optimize spectrum allocation, such as genetic algorithms (GA) [50], particle swarm algorithms (PSO) [4], butterfly optimization algorithms (BOA) [13], cat swarm algorithms (CSO) [51] and other intelligent optimization algorithms, for spectrum allocation problems in cognitive radio. Most of the above algorithms suffer from premature convergence, stagnation and slow iterative solution when solving the problem. To overcome these problems and maintain population diversity, an improved golden eagle algorithm based on a graph-theoretic coloring model is applied to the optimization of the spectrum allocation model in this paper.
A conversion function is used to convert the continuous search space into a discrete one. A suitable conversion function must satisfy two requirements to preserve the performance of the binary algorithm: first, it is algorithm-independent and does not affect the search capability of the algorithm; second, it does not change the computational complexity of the algorithm. For the continuous-to-discrete conversion, this work employs the Sigmoid function, which can be written as follows:
$S(x_j^i(t)) = \dfrac{1}{1 + e^{-x_j^i(t)}}$, (10)
where $x_j^i(t)$ is the position of the $j$th dimension of the $i$th individual in the golden eagle population at the $t$th generation.
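A minimal sketch of the continuous-to-binary conversion via the Sigmoid function. The thresholding rule used here (set a bit to 1 when a uniform random draw falls below S(x)) is a common convention for binary metaheuristics and is my assumption, not stated in the text:

```python
import numpy as np

def binarize(positions, rng=None):
    """Map continuous golden-eagle positions to binary values.

    positions : array of continuous position components x_j^i(t)
    Returns an integer array of the same shape with entries in {0, 1}.
    """
    rng = np.random.default_rng() if rng is None else rng
    s = 1.0 / (1.0 + np.exp(-positions))          # Sigmoid transfer, Eq. (10)
    return (rng.random(positions.shape) < s).astype(int)
```

Large positive components are almost always mapped to 1 and large negative components to 0, while values near zero remain stochastic, which preserves exploration in the binary space.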
The function of spectrum allocation is to build a reasonable allocation of available spectrum (channels) for cognitive users, which can satisfy the resource demand of cognitive users while effectively avoiding interference with primary users and maximizing the use of resources. Assuming that there are N secondary users (SUs) and M available channels in a certain region, the relevant definition of the graph-theoretic coloring model is as follows:
Theorem 1.
Idle matrix $L$: $L = \{l_{n,m} \mid l_{n,m} \in \{0,1\}\}_{N \times M}$, where $l_{n,m} = 1$ indicates that spectrum $m$ is available to SU $n$ and $l_{n,m} = 0$ indicates that it is not.
Theorem 2.
Spectrum efficiency matrix $B$: $B = \{b_{n,m} \mid b_{n,m} \in \mathbb{R}^+\}_{N \times M}$, where $b_{n,m}$ is a positive real number representing the network benefit that SU $n$ can receive after obtaining spectrum $m$.
Theorem 3.
Interference constraint matrix $C$: $C = \{c_{n,k,m} \mid c_{n,k,m} \in \{0,1\}\}_{N \times N \times M}$, where $C$ represents pairwise user conflicts on the $m$th spectrum band; $c_{n,k,m} = 1$ indicates that simultaneous use of channel $m$ by SU $n$ and SU $k$ causes interference, while $c_{n,k,m} = 0$ means no interference. When $n = k$, $c_{n,k,m} = 1 - l_{n,m}$.
Theorem 4.
Non-interference allocation matrix $A$: $A = \{a_{n,m} \mid a_{n,m} \in \{0,1\}\}_{N \times M}$, where $A$ represents the final allocation of $M$ spectrums to $N$ users; $a_{n,m} = 1$ means SU $n$ can use spectrum $m$, while $a_{n,m} = 0$ means it cannot. When $c_{n,k,m} = 1$, $a_{n,m} + a_{k,m} \leq 1$.
Theorem 5.
Maximum total system benefit $R$: $R = \max \sum_{n=1}^{N} \sum_{m=1}^{M} a_{n,m} b_{n,m}$.
Theorem 6.
Maximum proportional fairness $\mathit{fair}$: $\mathit{fair} = \left( \prod_{n=1}^{N} \left( \sum_{m=1}^{M} a_{n,m} b_{n,m} + 10^{-4} \right) \right)^{1/N}$.
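The benefit and fairness objectives and the interference constraint defined above can be evaluated directly from the matrices. A minimal sketch (function names are mine, not the paper's):

```python
import numpy as np

def network_benefit(A, B):
    """Total system benefit: sum over all users and channels of a[n,m] * b[n,m]."""
    return float(np.sum(A * B))

def proportional_fairness(A, B):
    """Proportional fairness: geometric mean of per-user benefits, with a
    1e-4 offset so that users with zero allocation do not zero the product."""
    per_user = np.sum(A * B, axis=1) + 1e-4
    return float(np.prod(per_user) ** (1.0 / A.shape[0]))

def is_interference_free(A, C):
    """Check the non-interference constraint: if c[n,k,m] = 1, SU n and SU k
    may not both occupy channel m."""
    N, M = A.shape
    for m in range(M):
        for n in range(N):
            for k in range(N):
                if n != k and C[n, k, m] == 1 and A[n, m] + A[k, m] > 1:
                    return False
    return True
```

In a binary metaheuristic these functions would serve as the fitness (benefit or fairness) and the feasibility check applied to each candidate allocation matrix.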

3. Improved Golden Eagle Optimizer

3.1. Algorithm Model

3.1.1. Population Initialization Using the Arnold Chaotic Map

In the golden eagle optimization method, each individual's position is generated at random. The resulting initial population is therefore rather arbitrary and weak in diversity, which hampers the algorithm's subsequent exploration and exploitation. In this paper, the initial locations of the golden eagles are generated using an Arnold chaotic map to address this issue. The Arnold chaotic map was proposed by the Russian mathematician Vladimir Igorevich Arnold. This chaotic mapping technique repeatedly folds and stretches points within a constrained space. The precise procedure is to first map the variables to the chaotic variable space using the cat mapping relationship, and then to use a linear transformation to map the resulting chaotic variables into the space to be optimized. Equation (11) describes the golden eagle's starting position in the search space.
$X_0 = X_{\min} + \mathrm{Arnold} \cdot (X_{\max} - X_{\min})$, (11)
where $X_{\max}$ and $X_{\min}$ are the upper and lower bounds of the variables, $X_0$ is the initial position of the golden eagle, and Arnold is an Arnold chaotic map value in the range 0 to 1. The Arnold chaotic map has a simple structure and great ergodic uniformity. Its expression is shown in Equation (12).
$\begin{bmatrix} x_{n+1} \\ y_{n+1} \end{bmatrix} = \begin{bmatrix} 1 & a \\ b & ab+1 \end{bmatrix} \begin{bmatrix} x_n \\ y_n \end{bmatrix} \bmod 1$, (12)
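A sketch of the Arnold-map-based initialization described above. The seed values (x0, y0) and the map parameters (a, b) are illustrative assumptions; the paper does not state which values it uses:

```python
import numpy as np

def arnold_sequence(length, x0=0.1, y0=0.3, a=1, b=1):
    """Generate a chaotic sequence in [0, 1) by iterating the Arnold cat map:
    x' = (x + a*y) mod 1,  y' = (b*x + (a*b + 1)*y) mod 1."""
    seq = np.empty(length)
    x, y = x0, y0
    for i in range(length):
        x, y = (x + a * y) % 1.0, (b * x + (a * b + 1) * y) % 1.0
        seq[i] = x
    return seq

def init_population(pop_size, dim, lower, upper):
    """Scatter the initial golden eagles over [lower, upper] using Arnold
    chaotic values in place of uniform random numbers."""
    chaos = arnold_sequence(pop_size * dim).reshape(pop_size, dim)
    return lower + chaos * (upper - lower)
```

Compared with plain uniform sampling, the ergodic uniformity of the chaotic sequence tends to spread the initial eagles more evenly across the search space.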

3.1.2. Nonlinear Convex Decreasing Weight

The nonlinear convex decreasing weight represents the degree to which the current location is inherited. When this value is large, the algorithm has strong global optimization capability (i.e., better exploration), because individuals inherit a large portion of their positions from the previous generation; when this value is small, individuals inherit little, and the algorithm has strong local optimization capability (i.e., better exploitation). To balance exploration and exploitation, a nonlinear convex decreasing weight is added to GEO. The greater inertia weight enables the golden eagle to cruise and explore more effectively in the earlier phases of GEO, while the reduced inertia weight in the later stages allows it to attack prey more effectively. Equation (8) shows that the golden eagle's updated position is determined by the previous position and the moving step. A nonlinear convex decreasing weight is added so that the previous position influences the current individual. The nonlinear convex decreasing weight is introduced in Equation (13).
\omega(t) = \omega_{end} + (\omega_{ini} - \omega_{end})\left(1 - \frac{t}{T}\right)^m,
where \omega_{ini} and \omega_{end} are the initial and final weight values, reached at the first and last generations, respectively, with [\omega_{ini}\ \omega_{end}] = [0.9\ 0.4]; t and T are the current and maximum generation numbers. Different values of m produce different decreasing profiles of the inertia weight. Equation (14) gives the position update formula of the algorithm after introducing the nonlinear convex decreasing weight.
x_i^{t+1} = \omega(t)\,x_i^t + \Delta x_i,
The value of m influences the weight's decreasing profile and thus the algorithm's performance during optimization, most notably its rate of convergence. Figure 3 depicts the nonlinear convex decreasing weight as m rises from 1 to 7. The following tests investigate which value of m is most advantageous to the algorithm. Six benchmark functions are chosen as experimental objects, 30 independent experiments are carried out, and the seven m values define seven algorithm variants. The evaluation is based on the average convergence curves over these six benchmark functions: for each function, the variant that reaches the best convergence accuracy in the fewest iterations receives seven points, and so on down to one point for the variant requiring the most iterations. The scores of the seven variants on the six benchmark functions are displayed in Table 1. Figure 3 and Table 1 show that m = 1, which corresponds to a linearly decreasing inertia weight, obtains the lowest score and the slowest convergence. In this paper, m is set to 4, since it has the highest score and exhibits the fastest convergence speed.
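The weight schedule can be sketched as follows; the closed form below is a reconstruction of Equation (13) under the stated boundary values (\omega(0) = \omega_{ini}, \omega(T) = \omega_{end}):

```python
def weight(t, T, m=4, w_ini=0.9, w_end=0.4):
    """Nonlinear convex decreasing inertia weight (Eq. 13); m = 4 per Table 1.
    With m = 1 the schedule is linear; larger m decays faster early on,
    shifting the algorithm toward exploitation sooner."""
    return w_end + (w_ini - w_end) * (1 - t / T) ** m
```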

3.1.3. Global Optimization Strategy

In the original golden eagle optimizer, the position update formula refers only to the golden eagle's historical memory of its position and move step from the previous iteration, without meaningful social interaction. Such a blind random search in space not only fails to speed up the population's search, but also frequently prevents it from quickly identifying the best solution. The position update formula is therefore enhanced with a global optimization strategy: in each iteration, the fittest individual in the population is selected to interact with the current individual, helping it find the optimal solution quickly. The improved formula is shown in Equation (15).
x_i^{t+1} = r_1\,\omega(t)\,x_i^t + \Delta x_i + r_2\,\omega(t)\left(x_{best}^t - x_i^t\right),
where r_1 and r_2 are random numbers between 0 and 2, and x_{best}^t is the global best position of the golden eagles.
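A minimal sketch of the update in Equation (15); the exact grouping of terms is reconstructed from the extracted text, so treat the parenthesization as an assumption:

```python
import numpy as np

def update_position(x, dx, x_best, w, rng):
    """Golden eagle position update with the global optimization strategy
    (Eq. 15). r1, r2 ~ U(0, 2); the second term pulls the eagle toward the
    current global best individual x_best."""
    r1, r2 = rng.uniform(0.0, 2.0, size=2)
    return r1 * w * x + dx + r2 * w * (x_best - x)
```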

3.2. Detailed Steps for the Improved Golden Eagle Optimizer Algorithm

The three strategies described above can significantly increase the algorithm's convergence speed and search precision, balance global exploration with local exploitation, and improve the performance of the original GEO. The IGEO implementation is depicted in Figure 4, and Algorithm 1 presents the pseudo-code of the proposed IGEO. The golden eagle's ideal hunting position is depicted in Figure 4 by x.
Algorithm 1 Pseudo-Code of IGEO
1 Set the population size N, current iteration t = 0 and maximum generations T = 1000
2 Initialize the population position by Arnold chaotic map
3 Evaluate fitness function
4 Initialize other parameters: Pc and Pa and golden eagle’s memory
5 While t < T
6  Update Pc and Pa based on Equation (9)
7    For i = 1:N
8      Calculate A based on Equation (1)
9        If the length of A is not equal to zero
10          Calculate C based on Equations (5) and (6)
11          Calculate ω(t) based on Equation (13)
12          Calculate x based on Equation (7)
13          Calculate the population fitness and selected fitness optimal individual xbest
14          Update new position xt+1 based on Equation (15) and calculate its fitness
15            If fitness(xt+1) is better than fitness of position in golden eagle’s memory
16              Replace the new position with position in golden eagle’s memory
17            End If
18        End If
19     End For
20 End While

3.3. Algorithm Complexity Analysis

The running process of the IGEO is divided into two parts: population initialization and the main loop. For the initialization part, the Arnold chaotic map is used to initialize the golden eagle population: N golden eagles are initialized in D dimensions, with a complexity of O(N × D). In the main loop, boundary handling for the N golden eagle individuals costs O(N), and the golden eagle position update costs O(N × D) per iteration. Over T iterations, the complexity of the whole main loop is O(N) + O(N × D) + O(N × D × T). The total complexity of the algorithm is therefore the sum of the initialization and main-loop complexities, which is O(N × D × T). The improved algorithm is thus no more complex than the original, while its optimization performance is much better.

4. Analysis of Experimental Results

4.1. Simulation Experiment

4.1.1. Experimental Settings and Test Functions

The experimental operating environment was made up of the Windows 10 64-bit operating system, an Intel(R) Core (TM) i5-7200U processor, 2.5 GHz main frequency, and 4.00 GB of RAM. The algorithm was created using MATLAB 2021a as a basis.
Experiments were conducted on 12 benchmark functions in order to confirm the usefulness of the IGEO in solving diverse optimization challenges. Two continuous unimodal benchmark functions (F1 and F2) were used to assess the algorithm’s speed and accuracy of convergence. To assess the algorithm’s ability to perform a global search and the likelihood of leaving the local optimum, six complex multimodal benchmark functions (F3–F8) were utilized. The comprehensiveness of the approach was assessed using four fixed low-dimensional benchmark functions (F9–F12). Table 2 displays the specifications of the benchmarking functions.

4.1.2. Comparative Analysis of Performance with Other Algorithms

In order to verify the overall performance of the IGEO, eight algorithms were selected for comparison, including the butterfly optimization algorithm (BOA) [13], the grey wolf optimizer (GWO) [10], particle swarm optimization (PSO) [4], the sine cosine algorithm (SCA) [52], the salp swarm algorithm (SSA) [12], the whale optimization algorithm (WOA) [9], the golden eagle optimizer algorithm (GEO) [26], and the golden eagle optimizer with double learning strategies (GEO_DLS) [42]. To ensure fairness, the population size for all algorithms was 50, and the maximum number of iterations was 1000. In addition, the parameters of each algorithm were set to their recommended values, as shown in Table 3.
Table 4 displays the results relative to the other eight algorithms, including the mean, standard deviation (std), and average running time, which reflect the algorithms' convergence accuracy and optimization capability. The mean and standard deviation show that IGEO has the best search accuracy: IGEO obtained an advantage matched by no other algorithm on F1, F2, F4, F5, F7, F8, and F11, and the only algorithms achieving the same grades as IGEO were GEO_DLS on F3, WOA on F6 and F9, GWO on F10, and PSO on F12.
The performance on the continuous unimodal functions F1 and F2 demonstrates the superior optimization capability of IGEO. The performance on the challenging multimodal functions (F3–F8) demonstrates IGEO's capacity to escape local optima; only WOA on F6 and GEO_DLS on F3 achieve an optimization precision as good as IGEO's. Finally, IGEO's performance on the fixed low-dimensional functions F9–F12 demonstrates its comprehensive capabilities, although it is marginally weaker here: the optimization accuracies of WOA on F9, GWO on F10, and PSO on F12 all equal that of IGEO. Due to the characteristics of the method itself, GEO, GEO_DLS, and IGEO have the longest mean running times, while IGEO performs the best overall.
An MAE ranking [53] on the best outcomes of 50 experiments was performed in this work in order to more accurately assess the optimization performance of each method on the test function.
In a quantitative analysis of all algorithms, the average absolute error of 12 benchmark functions is used to rank all algorithms. The algorithm’s performance increases as MAE decreases. The formula for determining the MAE ranking of these benchmark functions is given in Equation (16), and Table 5 displays the ranking of these algorithms.
MAE = \frac{\sum_{i=1}^{N_f} \left| m_i - o_i \right|}{N_f},
where N f is the number of benchmark functions, m i is the average of the optimal results generated by the algorithm, and o i is the corresponding theoretical optimal value.
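The MAE ranking of Equation (16) can be sketched as:

```python
import numpy as np

def mae_rank(results, optima):
    """Rank algorithms by mean absolute error over the benchmark set (Eq. 16).
    results: dict mapping algorithm name -> list of mean best values m_i;
    optima: list of theoretical optima o_i for the same functions."""
    o = np.asarray(optima, dtype=float)
    mae = {name: float(np.mean(np.abs(np.asarray(m, dtype=float) - o)))
           for name, m in results.items()}
    # best algorithm (lowest MAE) first
    return sorted(mae.items(), key=lambda kv: kv[1])
```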
Table 5 shows that IGEO has the lowest MAE value and the best performance.
In Figure 5, the average convergence curve is also presented in logarithmic form for a better and more precise comparison of the experimental data. The absolute values of all findings are plotted, because −1 is the ideal value of the function F8.
In Figure 5, the test results for the convergence speed and search accuracy of the algorithms are presented. The results demonstrate that IGEO outperforms the comparison algorithms on F1 to F8. Specifically, for F1, F2, F3, F6, and F7, IGEO shows significant advantages in terms of average value, indicating an overwhelmingly high convergence speed and search accuracy. For F4, F5, and F8, IGEO exhibits better performance than the comparison algorithms and is able to accurately capture the theoretical optimal values of 0.9 and −1 for F5 and F8, respectively. While IGEO shows inferior performance on F9, F10, F11, and F12, its standard deviation is 0, demonstrating superior robustness compared to the other algorithms. Regarding F9, IGEO's performance is second only to WOA: the convergence speed of GWO and the search accuracy of WOA on F9 are comparable to those of IGEO. For F10, GWO outperforms both IGEO and WOA. For F11 and F12, although the average convergence curves of PSO and IGEO are similar, the results in Table 4 clearly indicate that IGEO achieved a higher search accuracy than PSO.
Table 4 and Figure 5 present the optimization results for the benchmark functions F1 to F8 in general dimensions (dim = 30), where it is evident that IGEO achieved the best results in terms of both convergence speed and search accuracy. To verify the comprehensive performance and robustness of IGEO, experiments were conducted on the eight benchmark functions under 100-dimensional conditions using the comparison algorithms. The conditional parameters of each algorithm were consistent with those shown in Table 3, and each algorithm was independently run 50 times for each function. Table 6 presents the results of the comparison with the other seven original algorithms, including mean and standard deviation (std). Table 6 clearly demonstrates that IGEO outperformed the other algorithms in terms of both average value and standard deviation, indicating its superior exploration and exploitation ability.
Here, the optimization outcomes for the CEC2021 test set are also presented to further demonstrate IGEO's efficacy, with recent strong performers used for comparison. The comparison algorithms include both classical improvements to particle swarm optimization, such as EIW_PSO [54], and recent swarm intelligence algorithms, such as FMMPA [15]. Additionally, IGEO is compared with other improved algorithms, including GEO_DLS [42], CMA-ES-RIS [55], and L-SHADE [56]. Table 7 displays the specific outcomes. Because none of the 10 functions in the CEC2021 test set has an ideal value of 0, the tabulated data were obtained by subtracting the corresponding theoretical optima. The table includes the minimum, maximum, standard deviation, and average of the optimization results, and shows that IGEO's overall performance remains fairly strong, even though it is not the best on every function.

4.1.3. Statistical Significance Testing

In this work, the Wilcoxon statistical test and the Friedman test were performed on the best results of the 50 experiments, with follow-up verification using the Holm method, to more accurately evaluate the optimization performance of each method on the test functions.
(i)
Wilcoxon rank-sum test. IGEO is evaluated against the eight comparison algorithms using the rank-sum test at the 5% significance level. The best results from 50 independent runs make up the data vectors for comparison.
When p is less than 0.05, the null hypothesis is rejected at the 5% significance level, indicating that the enhanced algorithm produces significantly better results than the comparison algorithm, and vice versa. Table 8 displays the rank-sum test p-values for each algorithm. Among these, "NAN" denotes that the numerical comparison is meaningless because both procedures reach the ideal value of 0. "+", "−", and "=" signify that IGEO's performance is better than, worse than, or equal to that of the comparison algorithm, respectively.
Table 8 shows that, compared to the original algorithms GEO and SSA, IGEO achieves a 100% optimization rate; compared to BOA, PSO, and SCA, IGEO achieves a 91.7% optimization rate; and compared to GWO, WOA, and GEO_DLS, IGEO achieves an 83.3% optimization rate. These results illustrate IGEO's thorough performance across the 12 benchmark functions.
(ii)
Friedman test. The Friedman test is a differential analysis technique for comparing multiple approaches simultaneously. The merits of the algorithms are compared by computing their average ranks. In contrast to the rank-sum test, this approach further assesses the comparison algorithms' overall performance. The formula for the Friedman ranking is as follows:
Rank_i = \frac{1}{N_f}\sum_{j=1}^{N_f} R_j^i,
where Rank_i denotes the final ranking of algorithm i, N_f denotes the number of test functions, and R_j^i denotes the ranking of algorithm i on test function j. Using Equation (17), the Friedman rankings for the benchmark functions and CEC2021 are determined, and the results are reported in Table 9 and Table 10. To rank the algorithms on a given test function fairly, the average of 30 independent runs is used. Table 9 shows that IGEO's comprehensive rating comes in top place, and Table 10 shows that IGEO ranks second only to FMMPA.
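The average-rank computation of Equation (17) can be sketched as follows (tie handling by average ranks is a standard convention, assumed here):

```python
import numpy as np

def friedman_ranks(scores):
    """Average Friedman rank per algorithm (Eq. 17).
    scores: 2-D array, rows = test functions, columns = algorithms,
    lower score = better. Ties receive the average of the tied ranks."""
    scores = np.asarray(scores, dtype=float)
    n_funcs, n_algs = scores.shape
    ranks = np.empty_like(scores)
    for j, row in enumerate(scores):
        order = np.argsort(row, kind="stable")
        r = np.empty(n_algs)
        r[order] = np.arange(1, n_algs + 1)     # provisional ranks 1..n
        for v in np.unique(row):                # average ranks over ties
            tied = row == v
            r[tied] = r[tied].mean()
        ranks[j] = r
    return ranks.mean(axis=0)                   # one average rank per algorithm
```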
(iii)
Holm verification. Table 11 and Table 12 present the data obtained after further Holm verification, comparing the results of the original and improved algorithms. Under the assumption that each algorithm's result distribution equals that of the classical original algorithm, a bidirectional rank variance analysis (Friedman) is performed on the relevant samples at a significance level of 0.05.
After further verification using the Holm method, the average chi-square value is 34.296, the critical chi-square value of the system is 15.507, the degrees of freedom obtained through data testing is 9, and the p-value is well below 0.05. The original presumption, i.e., that all optimization results follow the same distribution, is therefore rejected. Table 11 demonstrates that WOA, GEO_DLS, PSO, and GWO reject the null hypothesis and deviate significantly from the enhanced method, while SCA, BOA, GEO, and SSA retain the null hypothesis and differ little from it. The final ranking of the algorithms is IGEO > WOA > GEO_DLS > PSO > GWO > SCA > BOA > GEO > SSA.
Similarly, in the experiment using CEC2021 as the test object, the critical chi-square value of the system is 19.943, and the original hypothesis is likewise rejected. Table 12 demonstrates that L-SHADE, GEO_DLS, CMA-ES-RIS, and FMMPA reject the null hypothesis and deviate significantly from the enhanced method; only EIW_PSO retains it, differing little from IGEO. The final ranking of the algorithms is FMMPA > IGEO > L-SHADE > GEO_DLS > CMA-ES-RIS > EIW_PSO.

4.1.4. Comparative Analysis of Different Strategy Algorithms

To explore the effectiveness of the proposed algorithm and study the impact of different strategies on the golden eagle optimizer algorithm, ablation experiments were conducted. GEO with a single strategy includes GEO with nonlinear weights (wGEO) and GEO with global optimization strategy (pGEO). The experimental parameter settings were consistent with those presented in Table 3. The experimental algorithms included GEO, wGEO, pGEO, and IGEO. Each algorithm was independently run 50 times on each of the 12 test functions selected in this article. The experimental results are presented in Table 13.
The results presented in Table 13 indicate that the optimization accuracy when using a nonlinear convex decreasing weight strategy or a global optimization strategy individually is inferior to that when using the mixed strategy, with a significant difference in optimization accuracy despite having the same running time. For F1, F2, F7, F9, and F11, IGEO demonstrated absolute superiority, achieving the highest optimization accuracy. For F3 to F6, wGEO, pGEO, and IGEO exhibited similar search accuracy, which was significantly better than that of GEO. For F8, F10, and F12, pGEO and IGEO exhibited similar performance.
To verify the performance of IGEO in terms of convergence speed and search accuracy, Figure 6 presents the average convergence curves of the benchmark functions. Similarly, since the optimal value of function F8 is −1, the absolute value of all results is taken when drawing the visualization. Compared with wGEO and pGEO, IGEO exhibits faster convergence on F3 to F6, although the convergence precision is the same. IGEO and pGEO achieve identical performance on F8. For F10 and F12, IGEO demonstrates a slight advantage in terms of convergence speed, while the average optimal value and standard deviation are close to the theoretical optimal value. Furthermore, IGEO performs optimally on the other functions.
Combining the exploration of the value of m described in Section 3.1.2 with the two numerical experiments reported in Section 4.1.2 and Section 4.1.3, it can be concluded that the nonlinear convex decreasing weight mainly affects the convergence speed of the algorithm, reflecting its exploration ability, while the global optimization strategy mainly affects the search accuracy, reflecting its exploitation ability. In this article, these two capacities are effectively balanced by combining the two strategies to improve GEO, leading to improved algorithm performance. This section presented the numerical experiments for the revised algorithm; Section 4.2 and Section 4.3 provide specific application experiments.

4.2. Cognitive Radio Application

4.2.1. IGEO Problem Solving Model

In the proposed model for the spectrum allocation problem, each golden eagle's binary-coded position represents a feasible spectrum allocation strategy. Determining the optimal allocation scheme requires solving for the interference-free allocation matrix A that maximizes the total system benefit. If l_{n,m} = 0 in the idle matrix, secondary user (SU) n cannot utilize channel m, so the corresponding element a_{n,m} must be zero; conversely, if l_{n,m} = 1, then a_{n,m} can be either 0 or 1. Following the processing strategy proposed in the literature [45] for the idle matrix L, only the elements equal to 1 are numbered. The number of such elements, which equals the coding length D of a golden eagle individual, is computed as follows:
D = \sum_{n=1}^{N}\sum_{m=1}^{M} l_{n,m},
Figure 7 illustrates the mapping relationship between the location encoding and the assignment matrix. In the current cognitive wireless network environment, assuming N = 6 cognitive users and M = 4 channels, the idle matrix L is calculated based on the network topology. The locations of 1 in matrix L are identified, and the location x of an individual golden eagle’s binary encoding is mapped to the allocation matrix A in increasing order.
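A minimal sketch of this encoding: `code_length` implements Equation (18), and `decode` the position-to-matrix mapping described for Figure 7 (names are illustrative):

```python
import numpy as np

def code_length(L):
    """Coding length D of a golden eagle individual (Eq. 18): the number
    of (n, m) entries with l_{n,m} = 1 in the idle matrix."""
    return int(np.sum(L))

def decode(position, L):
    """Map a binary position vector onto the allocation matrix A: bits are
    placed at the 1-entries of L in increasing (n, m) order, so A respects
    the idle constraint (a_{n,m} = 0 wherever l_{n,m} = 0)."""
    A = np.zeros_like(L)
    idx = np.argwhere(L == 1)          # row-major, i.e. increasing (n, m)
    for bit, (n, m) in zip(position, idx):
        A[n, m] = bit
    return A
```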
The pseudo-code for applying IGEO to the cognitive radio (CR) spectrum allocation model is shown in Algorithm 2.
Algorithm 2 Pseudo-Code of IGEO for Solving CR
1 Initialize the idle matrix L, the benefit matrix B, the non-interference distribution matrix C
2 Count the elements of the idle matrix L that equal 1, recording the corresponding (n, m) positions in increasing order of n and m; their number gives the dimension D of the optimization problem, i.e., the coding length of a golden eagle individual
3 Set algorithm parameters N, t = 0, T = 1000
4 Initialize Arnold chaotic map
5 According to the problem dimension D, the number N, and Arnold map, the individual position is initialized, and the golden eagles’ position is mapped to the allocation matrix A.
6  If c n , k , m = 1 for users n and k in the interference constraint matrix and the values of a n , m and a k , m in the assignment matrix A are both 1
7   Set one of them to 0 and keep the other unchanged
8  End If
9 Obtain the binary coding of each golden eagle individual after processing the non-interference constraint matrix
10 According to the distribution matrix A and the benefit matrix B, the individual fitness value is calculated.
11 Initialize other parameters: P c , P a and golden eagle’s memory
12 While t < T
13   Update P c and P a based on Equation (9)
14    For i = 1:N
15      Calculate A(attack vector) based on Equation (1)
16        If the length of A is not equal to zero
17          Calculate C based on Equations (5) and (6)
18          Calculate ω(t) based on Equation (13)
19          Calculate x based on Equation (7)
20          Calculate the population fitness and selected fitness optimal individual xbest
21          Update new position xt+1 based on Equation (15) and calculate its fitness
22          Discretize the current position of the golden eagle individual into binary based on Equation (7)
23            If fitness(xt+1) is better than fitness of position in golden eagle’s memory
24              Replace the new position with position in golden eagle’s memory
25            End If
26        End If
27    End For
28 End While

4.2.2. The Solution Results of the Problem Model

In this experiment, the population size is set to 50, the number of iterations to 1000, and the numbers of secondary users and available channels (spectra) to N and M, respectively. The average of all outcomes over 30 experiments is calculated. The genetic algorithm (GA) [50], particle swarm optimization (PSO) [4], the butterfly optimization algorithm (BOA) [13], and cat swarm optimization (CSO) [51] were used as comparison methods.
Figure 8 shows the system solution’s average maximum system benefit for N = 10 and M = 10. The figure indicates that GA exhibits the best system performance, followed by the IGEO algorithm, while the GEO algorithm exhibits the worst performance. This observation highlights the effectiveness of the improved strategy when utilized in the spectrum allocation model. Figure 9 presents the fairness of each algorithm in 30 distinct channel environments. The results demonstrate that the IGEO algorithm yields the highest user fairness, whereas the GA algorithm results in the worst user fairness. Overall, the IGEO algorithm achieves the best maximization of both system benefits and user fairness. These results suggest that IGEO can more effectively be used to resolve the spectrum allocation problem, compared to the other algorithms.
To investigate the impact of different numbers of users on the average system benefit, in this study, the number of available channels is maintained at a constant M = 10, while the number of SUs N is increased from 10 to 30 in increments of 2. The relationship between the number of users and the average benefit is analyzed, as shown in Figure 10. The results indicate that the average benefit of the cognitive radio system gradually decreases with increasing numbers of SUs. However, the IGEO algorithm outperforms the GA, PSO, BOA, CSO, and GEO algorithms in terms of the average system benefit obtained.
On the other hand, this study also examines how different channel counts affect the overall system benefit. The average system benefit for different numbers of channels is obtained while increasing M from 10 to 30 in increments of 2, as shown in Figure 11. The results show that as the number of available spectra in the region rises, the average benefit of the channel gradually increases. Notably, with the exception of BOA, the average benefit of the IGEO algorithm is higher than that of the other methods. This again underlines how well the improved algorithm allocates spectrum.

4.3. Engineering Applications

To further validate the performance and practical effectiveness of the improved algorithm, in this article, five classic engineering problems were selected, and comparative experiments were conducted with other algorithms. All five engineering problems are static single-objective constrained optimization problems, which can be generally expressed as follows:
\min F(x) \quad \mathrm{s.t.}\ \ g_i(x) \le 0,\ i = 1, 2, \ldots, m; \quad h_j(x) = 0,\ j = 1, 2, \ldots, n,
where F(x) represents the objective function, and g_i(x) and h_j(x) represent the inequality and equality constraint conditions.
To more effectively handle the constraint conditions, in this article, penalty functions are employed, which can be expressed as:
\Phi(x) = F(x) \pm \left[\sum_{i=1}^{m} l_i \times \max\left(0, g_i(x)\right)^{\alpha} + \sum_{j=1}^{n} o_j \times \left| h_j(x) \right|^{\beta}\right],
where Φ x represents the final objective function, and l i and o j represent the penalty coefficients. All algorithms are independently tested 30 times.
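A minimal sketch of the static penalty transformation for minimization (the coefficients l_i, o_j and exponents α, β below are illustrative assumptions; the paper does not report its values):

```python
def penalized(F, gs, hs, l=1e6, o=1e6, alpha=2, beta=2):
    """Wrap objective F with a static penalty (minimization form of the
    penalty equation): inequality violations (g > 0) and equality
    violations (h != 0) are added to F, so infeasible points score worse."""
    def phi(x):
        p = sum(l * max(0.0, g(x)) ** alpha for g in gs)
        p += sum(o * abs(h(x)) ** beta for h in hs)
        return F(x) + p
    return phi
```

Feasible points are left untouched, while the violation terms grow quickly (quadratically here) outside the feasible region.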

4.3.1. Piston Rod Design Problem

The piston rod optimization problem is a less commonly encountered static, single-objective constrained problem. Its primary goal is to reduce fuel consumption by optimizing the positions of the piston components H, B, D, and X as the piston rod lifts from 0 degrees to 45 degrees, as indicated in Appendix A. The basic model is expressed in Appendix B.
In the piston rod design problem, the IGEO algorithm is compared with GWO, PSO, SCA, SSA, WOA, GEO, and GEO_DLS. The minimum cost and corresponding optimal variable values obtained by the above algorithms can be found in Table 14. Among the algorithms, IGEO achieves the best performance in the piston rod design problem, with the lowest cost of 0.0003717.

4.3.2. I Beam Design Problem

The goal in the I beam structural design problem is to minimize vertical deflection by optimizing the length, height, and two thicknesses. Appendix A contains a structural schematic diagram, which depicts the left and major views of the I beam. For ease of computation, let X = x 1   x 2   x 3   x 4 = h   b   t w   t f . The mathematical model is expressed in Appendix B.
The performance of IGEO when designing I beams is compared to that of other algorithms, including GWO, PSO, SCA, SSA, WOA, GEO, and GEO_DLS. The comparison results are presented in Table 15, where it can be observed that both GEO and GEO_DLS yield an optimal value of 0.001644. However, the key difference lies in the very small standard deviation of IGEO, indicating its high stability in solving the problem.

4.3.3. Car Side Impact Design Problem

The car side impact design problem is a static constrained optimization problem with a single objective and multiple variables. Its main objective is to minimize the total weight of the vehicle. The mathematical model is presented in Appendix B, and the meanings of the specific parameters can be found in [57].
The performances of different algorithms when designing car side collisions are compared in Table 16. Among the compared algorithms, which included GWO, PSO, SCA, SSA, WOA, GEO, and GEO_DLS, IGEO achieved the best result, with the minimum value of 21.8829.

4.3.4. Cantilever Beam Design Problem

The cantilever beam optimization problem involves five variables and a vertical displacement constraint, where each variable has a constant thickness, and the objective is to minimize the weight of the beam. A schematic diagram for this problem is presented in Appendix A. The mathematical model for this problem is given in Appendix B.
The performance of IGEO on the cantilever beam design problem is compared with that of GWO, PSO, SCA, SSA, WOA, GEO, and GEO_DLS. The results obtained by these algorithms are presented in Table 17, where it can be observed that IGEO achieved the best result, with a value of 1.179635.

4.3.5. Three-Bar Truss Design Problem

In the classic three-bar truss engineering design problem, the aim is to minimize the weight of a symmetrical light bar structure that is subject to constraints on stress, deflection, and buckling. The mathematical model for this problem is presented in Appendix B, and the structural diagram can be found in Appendix A.
Table 18 presents the results of various optimization algorithms, including GWO, PSO, SCA, SSA, WOA, GEO, GEO_DLS, and IGEO, for the three-bar truss design problem. The table indicates that IGEO achieved the best result, with a minimum value of 259.8111.

5. Conclusions

The principle and position update formula of the original golden eagle optimization method were explored, and a hybrid golden eagle optimization algorithm (IGEO) based on Arnold mapping, nonlinear convex decreasing weight, and a global optimization strategy was proposed. Using 12 benchmark functions, CEC2021, and the Wilcoxon, Friedman, and Holm tests, it was confirmed that the proposed IGEO has better search performance and stronger robustness. The impact of each single strategy on the algorithm was also studied. After the simulation tests, the enhanced IGEO was applied to the traditional cognitive radio spectrum allocation model, where it achieved the best overall performance and allocated the spectrum well compared to GA, PSO, and other methods. To test the proposed IGEO's problem-solving abilities, five real-world engineering design problems were also addressed, and the solutions were contrasted with those of other methods. All of the findings and analyses support IGEO as a superior approach for tackling challenging engineering optimization problems. In future research, IGEO is expected to be applied to more practical spectrum allocation problems in order to investigate the algorithm's further capabilities.

Author Contributions

Conceptualization, J.D.; methodology, J.D.; software, J.D.; validation, J.D., D.Z. and Q.H.; formal analysis, J.D.; writing—original draft preparation, J.D.; writing—review and editing, J.D. and L.L.; resources, D.Z. and Q.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by the National Natural Science Foundation of China, grant numbers 62062021, 61872034, and 62166006; the Natural Science Foundation of Guizhou Province, grant number [2020]1Y254; and the Guizhou Provincial Science and Technology Projects, grant number Guizhou Science Foundation-ZK [2021] General 335.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study did not report any data.

Acknowledgments

The authors are thankful to the anonymous referees and the editor for their valuable comments toward improving the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Piston rod structure.
Figure A2. I beam structure.
Figure A3. Cantilever beam structure.
Figure A4. Three-bar truss structure.

Appendix B

Appendix B.1. Piston Rod Design Problem

Minimize $F(x) = \frac{1}{4}\pi x_3^2 (L_2 - L_1)$
Subject to
$g_1 = QL\cos\theta - RF \le 0$
$g_2 = Q(L - x_4) - M_{\max} \le 0$
$g_3 = 1.2(L_2 - L_1) - L_1 \le 0$
$g_4 = \frac{x_3}{2} - x_2 \le 0$
where
$R = \frac{\left| -x_4(x_4\sin\theta + x_1) + x_1(x_2 - x_4\cos\theta) \right|}{\sqrt{(x_4 - x_2)^2 + x_1^2}}$, $F = \frac{\pi P x_3^2}{4}$,
$L_1 = \sqrt{(x_4 - x_2)^2 + x_1^2}$, $L_2 = \sqrt{(x_4\sin\theta + x_1)^2 + (x_2 - x_4\cos\theta)^2}$,
$\theta = 45^{\circ}$, $Q = 10{,}000\ \mathrm{lbs}$, $L = 240\ \mathrm{in}$, $M_{\max} = 1.8 \times 10^{6}\ \mathrm{lbs\cdot in}$, $P = 1500\ \mathrm{psi}$
where $0.05 \le x_1, x_2, x_4 \le 500$, $0.05 \le x_3 \le 120$.

Appendix B.2. I Beam Design Problem

Minimize $F(X) = \frac{5000}{\frac{x_3 (x_1 - 2x_4)^3}{12} + \frac{x_2 x_4^3}{6} + 2 x_2 x_4 \left(\frac{x_1 - x_4}{2}\right)^2}$
Subject to
$g_1 = 2 x_2 x_4 + x_3 (x_1 - 2 x_4) \le 300$
$g_2 = \frac{1.8 x_1 \times 10^4}{x_3 (x_1 - 2 x_4)^3 + 2 x_2 x_4 \left(4 x_4^2 + 3 x_1 (x_1 - 2 x_4)\right)} + \frac{15 x_2 \times 10^3}{(x_1 - 2 x_4) x_3^3 + 2 x_4 x_2^3} \le 56$
where $10 \le x_1 \le 80$; $10 \le x_2 \le 50$; $0.9 \le x_3, x_4 \le 5$

Appendix B.3. Car Side Impact Design Problem

Minimize $F(X) = 1.98 + 4.9 x_1 + 6.67 x_2 + 6.98 x_3 + 4.01 x_4 + 1.78 x_5 + 2.73 x_7$
Subject to
$g_1 = 1.16 - 0.3717 x_2 x_4 - 0.00931 x_2 x_{10} - 0.484 x_3 x_9 + 0.01343 x_6 x_{10} \le 1$
$g_2 = 0.261 - 0.0159 x_1 x_2 - 0.188 x_1 x_8 - 0.019 x_2 x_7 + 0.0144 x_3 x_5 + 0.0008757 x_5 x_{10} + 0.080405 x_6 x_9 + 0.00139 x_8 x_{11} + 0.00001575 x_{10} x_{11} \le 0.32$
$g_3 = 0.214 + 0.00817 x_5 - 0.131 x_1 x_8 - 0.0704 x_1 x_9 + 0.03099 x_2 x_6 - 0.018 x_2 x_7 + 0.0208 x_3 x_8 + 0.121 x_3 x_9 - 0.00364 x_5 x_6 + 0.0007715 x_5 x_{10} - 0.0005354 x_6 x_{10} + 0.00121 x_8 x_{11} \le 0.32$
$g_4 = 0.074 - 0.061 x_2 - 0.163 x_3 x_8 + 0.001232 x_3 x_{10} - 0.166 x_7 x_9 + 0.227 x_2^2 \le 0.32$
$g_5 = 28.98 + 3.818 x_3 - 4.2 x_1 x_2 + 0.0207 x_5 x_{10} + 6.63 x_6 x_9 - 7.7 x_7 x_8 + 0.32 x_9 x_{10} \le 32$
$g_6 = 33.86 + 2.95 x_3 + 0.1792 x_{10} - 5.05 x_1 x_2 - 11 x_2 x_8 - 0.0215 x_5 x_{10} - 9.98 x_7 x_8 + 22 x_8 x_9 \le 32$
$g_7 = 46.36 - 9.9 x_2 - 12.9 x_1 x_8 + 0.1107 x_3 x_{10} \le 32$
$g_8 = 4.72 - 0.5 x_4 - 0.19 x_2 x_3 - 0.0122 x_4 x_{10} + 0.009325 x_6 x_{10} + 0.000191 x_{11}^2 \le 4$
$g_9 = 10.58 - 0.647 x_1 x_2 - 1.95 x_2 x_8 + 0.02054 x_3 x_{10} - 0.0198 x_4 x_{10} + 0.028 x_6 x_{10} \le 9.9$
$g_{10} = 16.45 - 0.489 x_3 x_7 - 0.843 x_5 x_6 + 0.0432 x_9 x_{10} - 0.0556 x_9 x_{11} - 0.000786 x_{11}^2 \le 15.7$
where $0.5 \le x_1, \ldots, x_7 \le 1.5$; $x_8, x_9 \in [0.192, 0.345]$; $-30 \le x_{10}, x_{11} \le 30$

Appendix B.4. Cantilever Beam Design Problem

Minimize $F(X) = 0.06224 (x_1 + x_2 + x_3 + x_4 + x_5)$
Subject to $g(X) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} \le 1$
where $0.01 \le x_j \le 100$, $j = 1, 2, \ldots, 5$
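As an illustration of how the five-variable beam problem above (minimize $0.06224\sum_j x_j$ subject to $61/x_1^3 + 37/x_2^3 + 19/x_3^3 + 7/x_4^3 + 1/x_5^3 \le 1$) can be handed to an unconstrained metaheuristic, the constraint can be folded into the objective with a static penalty. The quadratic penalty form and the coefficient `rho` are assumptions for illustration, not part of the paper's formulation.

```python
import numpy as np

def cantilever_objective(x):
    # Objective from the appendix: F(x) = 0.06224 * (x1 + x2 + x3 + x4 + x5)
    return 0.06224 * np.sum(x)

def cantilever_penalized(x, rho=1e6):
    # Single constraint: g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0
    g = 61 / x[0]**3 + 37 / x[1]**3 + 19 / x[2]**3 + 7 / x[3]**3 + 1 / x[4]**3 - 1.0
    # Assumed static quadratic penalty: feasible points are unchanged,
    # infeasible points are pushed back toward the feasible region.
    return cantilever_objective(x) + rho * max(g, 0.0) ** 2
```

A feasible point (e.g., all variables at 10) pays no penalty, while an infeasible point (all variables at 1 gives g = 124 > 0) is heavily penalized.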

Appendix B.5. Three Bar Truss Design Problem

Minimize $F(X) = (x_2 + 2\sqrt{2}\, x_1) L$
Subject to
$g_1 = \frac{x_2}{2 x_1 x_2 + \sqrt{2}\, x_1^2} P - \sigma \le 0$
$g_2 = \frac{x_2 + \sqrt{2}\, x_1}{2 x_1 x_2 + \sqrt{2}\, x_1^2} P - \sigma \le 0$
$g_3 = \frac{1}{x_1 + \sqrt{2}\, x_2} P - \sigma \le 0$
$P = 2\ \mathrm{kN/cm^2}$, $L = 100\ \mathrm{cm}$, $\sigma = 2\ \mathrm{kN/cm^2}$
where $0 \le x_1, x_2 \le 1$
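The three-bar truss objective and constraints translate directly into code. The sketch below evaluates a candidate $(x_1, x_2)$ using the stated constants; the constraint ordering follows the appendix.

```python
import math

def three_bar_truss(x1, x2, P=2.0, sigma=2.0, L=100.0):
    """Return the objective value and the list [g1, g2, g3];
    a candidate is feasible when every g <= 0."""
    f = (x2 + 2.0 * math.sqrt(2.0) * x1) * L
    denom = 2.0 * x1 * x2 + math.sqrt(2.0) * x1 ** 2
    g = [
        (x2 / denom) * P - sigma,
        ((x2 + math.sqrt(2.0) * x1) / denom) * P - sigma,
        (1.0 / (x1 + math.sqrt(2.0) * x2)) * P - sigma,
    ]
    return f, g
```

For example, the upper-bound corner $(1, 1)$ is feasible but far from optimal, since the objective grows with both cross-sectional areas.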

References

  1. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar] [CrossRef]
  2. Nematollahi, F.A.; Rahiminejad, A.; Vahidi, B. A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization. Appl. Soft Comput. 2017, 59, 596–621. [Google Scholar] [CrossRef]
  3. Haider, W.B.; Ur, N.R.; Asma, N.; Nisar, K.; Ibrahim, A.; Shakir, R.; Rawat, D.B. Constructing Domain Ontology for Alzheimer Disease Using Deep Learning Based Approach. Electronics 2022, 11, 1890. [Google Scholar]
  4. Ji, Z.; Chang, J.; Guo, X.; Wang, J.; Yang, H.; Wang, L.; Jiang, H. Ultrawide coverage receiver based on compound eye structure for free space optical communication. Opt. Commun. 2023, 545, 129740. [Google Scholar] [CrossRef]
  5. Zhu, C.; Ji, Q.; Guo, X.; Zhang, J. Mmwave massive MIMO: One joint beam selection combining cuckoo search and ant colony optimization. EURASIP J. Wirel. Commun. Netw. 2023, 2023, 65. [Google Scholar] [CrossRef]
  6. Yang, X.-S.; Deb, S. Engineering optimisation by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343. [Google Scholar] [CrossRef]
  7. Majid, M.; Goel, L.; Saxena, A.; Srivastava, A.K.; Singh, G.K.; Verma, R.; Bhutto, J.K.; Hussein, H.S. Firefly Algorithm and Neural Network Employment for Dilution Analysis of Super Duplex Stainless Steel Clads over AISI 1020 Steel Using Gas Tungsten Arc Process. Coatings 2023, 13, 841. [Google Scholar] [CrossRef]
  8. Dervis, K.; Bahriye, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar]
  9. Seyedali, M.; Andrew, L. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar]
  10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  11. Gaurav, D.; Vijay, K. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl. Based Syst. 2018, 165, 169–196. [Google Scholar]
  12. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  13. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  14. Hassan, N.; Bangyal, W.H.; Khan, S.M.A.; Nisar, K.; Ibrahim, A.A.A.; Rawat, D.B. Improved Opposition-Based Particle Swarm Optimization Algorithm for Global Optimization. Symmetry 2021, 13, 2280. [Google Scholar] [CrossRef]
  15. Yang, L.; He, Q.; Yang, L.; Luo, S. A Fusion Multi-Strategy Marine Predator Algorithm for Mobile Robot Path Planning. Appl. Sci. 2022, 12, 9170. [Google Scholar] [CrossRef]
  16. Li, S.; He, Q. Improved Feature Selection for Marine Predator Algorithm. Comput. Eng. Appl. 2023, 59, 168–179. [Google Scholar]
  17. Xu, M.; Long, W.; Yang, Y. Planar-mirror reflection imaging learning based marine predators algorithm and feature selection. Comput. Appl. Res. 2023, 40, 394–398 + 444. [Google Scholar]
  18. Ma, C.; Zeng, G.; Huang, B.; Liu, J. Marine Predator Algorithm Based on Chaotic Opposition Learning and Group Learning. Comput. Eng. Appl. 2022, 58, 271–283. [Google Scholar]
  19. Xu, H.; Zhang, D.; Wang, Y.; Song, T.T. Application of Improved Whale Algorithm in Cognitive Radio Spectrum Allocation. Comput. Simul. 2021, 38, 431–436. [Google Scholar]
  20. Xu, H.; Zhang, D.; Wang, Y.; Song, T.T. Spectrum allocation based on improved binary grey wolf optimizer. Comput. Eng. Des. 2021, 42, 1353–1359. [Google Scholar]
  21. Yin, D.; Zhang, D.; Zhang, L.; Cai, P.; Qin, W. Spectrum Allocation Strategy Based on Sparrow Algorithm in Cognitive Industrial Internet of Things. Data Acquis. Process. 2022, 37, 371–382. [Google Scholar]
  22. Zhang, D.; Wang, Y.; Zou, C.; Zhao, P.; Zhang, L. Resource allocation strategies for improved mayfly algorithm in cognitive heterogeneous cellular network. J. Commun. 2022, 43, 156–167. [Google Scholar]
  23. Meng, K.O.; Pauline, O.; Kiong, C.S. A new flower pollination algorithm with improved convergence and its application to engineering optimization. Decis. Anal. J. 2022, 5, 100144. [Google Scholar]
  24. Yang, X.; Wang, R.; Zhao, D.; Yu, F.; Huang, C.; Heidari, A.A.; Chen, H. An adaptive quadratic interpolation and rounding mechanism sine cosine algorithm with application to constrained engineering optimization problems. Expert Syst. Appl. 2023, 213, 119041. [Google Scholar] [CrossRef]
  25. Sabry, A.E.; Muhammed, M.H.; Khalaf, W.A. Letter: Application of optimization algorithms to engineering design problems and discrepancies in mathematical formulas. Appl. Soft Comput. J. 2023, 140, 110252. [Google Scholar]
  26. Mohammadi-Balani, A.; Nayeri, M.D.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050. [Google Scholar] [CrossRef]
  27. Aijaz, M.; Hussain, I.; Lone, S.A. Golden Eagle Optimized Control for a Dual Stage Photovoltaic Residential System with Electric Vehicle Charging Capability. Energy Sources Part A Recovery Util. Environ. Eff. 2022, 44, 4525–4545. [Google Scholar] [CrossRef]
  28. Magesh, T.; Devi, G.; Lakshmanan, T. Improving the performance of grid connected wind generator with a PI control scheme based on the metaheuristic golden eagle optimization algorithm. Electr. Power Syst. Res. 2023, 214, 108944. [Google Scholar] [CrossRef]
  29. Kumar, A.G.D.; Vengadachalam, N.; Madhavi, S.V. A Novel Optimized Golden Eagle Based Self-Evolving Intelligent Fuzzy Controller to Enhance Power System Performance. In Proceedings of the 2022 IEEE 2nd International Conference on Sustainable Energy and Future Electric Transportation (SeFeT), Hyderabad, India, 4–6 August 2022; pp. 1–6. [Google Scholar]
  30. Sun, H. An extreme learning machine model optimized based on improved golden eagle algorithm for wind power forecasting. In Proceedings of the 2022 37th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Beijing, China, 19–20 November 2022; pp. 86–91. [Google Scholar]
  31. Boriratrit, S.; Chatthaworn, R. Golden Eagle Extreme Learning Machine for Hourly Solar Irradiance Forecasting. In Proceedings of the International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Maldives, Maldives, 16–18 November 2022; p. 9988106. [Google Scholar]
  32. Bandahalli Mallappa, P.K.; Velasco Quesada, G.; Martínez García, H. Energy Management of Grid Connected Hybrid Solar/Wind/Battery System using Golden Eagle Optimization with Incremental Conductance. Renew. Energy Power Qual. J. 2022, 20, 342–347. [Google Scholar] [CrossRef]
  33. Zhang, Z.-K.; Li, P.-Q.; Zeng, J.-J. Capacity Optimization of Hybrid Energy Storage System Based on Improved Golden Eagle Optimization. J. Netw. Intell. 2022, 7, 943–959. [Google Scholar]
  34. Boriratrit, S.; Fuangfoo, P.; Srithapon, C.; Chatthaworn, R. Adaptive meta-learning extreme learning machine with golden eagle optimization and logistic map for forecasting the incomplete data of solar irradiance. Energy AI 2023, 13, 100243. [Google Scholar] [CrossRef]
  35. Guo, J.-F.; Zhang, Y.-Q.; Xu, S.-B.; Lin, J.-Y. A Power System Profitable Load Dispatch Based on Golden Eagle Optimizer. J. Comput. 2022, 33, 145–158. [Google Scholar]
  36. Al-Gburi, Z.D.S.; Kurnaz, S. Optical disk segmentation in human retina images with golden eagle optimizer. Optik 2022, 271, 170103. [Google Scholar] [CrossRef]
  37. Dwivedi, A.; Rai, V.; Amrita; Joshi, S.; Kumar, R.; Pippal, S.K. Peripheral blood cell classification using modified local-information weighted fuzzy C-means clustering-based golden eagle optimization model. Soft Comput. 2022, 26, 13829–13841. [Google Scholar] [CrossRef]
  38. Justus, J.J.; Anuradha, M. A golden eagle optimized hybrid multilayer perceptron convolutional neural network architecture-based three-stage mechanism for multiuser cognitive radio network. Int. J. Commun. Syst. 2021, 35, 5054. [Google Scholar] [CrossRef]
  39. Eluri, R.K.; Devarakonda, N. Binary Golden Eagle Optimizer with Time-Varying Flight Length for feature selection. Knowl.-Based Syst. 2022, 247, 108771. [Google Scholar] [CrossRef]
  40. Zarkandi, S. Dynamic modeling and power optimization of a 4RPSP+PS parallel flight simulator machine. Robotica 2021, 40, 646–671. [Google Scholar] [CrossRef]
  41. Lv, J.X.; Yan, L.J.; Chu, S.C.; Cai, Z.M.; Pan, J.S.; He, X.K. A new hybrid algorithm based on golden eagle optimizer and grey wolf optimizer for 3D path planning of multiple UAVs in power inspection. Neural Comput. Appl. 2022, 34, 11911–11936. [Google Scholar] [CrossRef]
  42. Pan, J.-S.; Lv, J.-X.; Yan, L.-J.; Weng, S.-W.; Chu, S.-C.; Xue, J.-K. Golden eagle optimizer with double learning strategies for 3D path planning of UAV in power inspection. Math. Comput. Simul. 2022, 193, 509–532. [Google Scholar] [CrossRef]
  43. Ilango, R.; Rajesh, P.; Shajin Francis, H. S2NA-GEO method–based charging strategy of electric vehicles to mitigate the volatility of renewable energy sources. Int. Trans. Electr. Energy Syst. 2021, 31, 13125. [Google Scholar] [CrossRef]
  44. Jagadish Kumar, N.; Balasubramanian, C. Hybrid Gradient Descent Golden Eagle Optimization (HGDGEO) Algorithm-Based Efficient Heterogeneous Resource Scheduling for Big Data Processing on Clouds. Wirel. Pers. Commun. 2023, 129, 1175–1195. [Google Scholar] [CrossRef]
  45. Ghasemi, A.; Sousa, E.S. Spectrum sensing in cognitive radio networks: Requirements, challenges and design trade-offs. IEEE Commun. Mag. 2008, 46, 32–39. [Google Scholar] [CrossRef]
  46. Peng, C.; Zheng, H.; Zhao, B.Y. Utilization and fairness in spectrum assignment for opportunistic spectrum access. Mob. Netw. Appl. 2006, 11, 555–576. [Google Scholar] [CrossRef]
  47. Wang, W.; Liu, X. List-coloring based channel allocation for open spectrum wireless networks. In Proceedings of the IEEE Vehicular Technology Conference, Stockholm, Sweden, 30 May–1 June 2005; pp. 690–694. [Google Scholar]
  48. Gandhi, S.; Buragohain, C.; Cao, L.; Zheng, H.; Suri, S. Towards real time dynamic spectrum auctions. Comput. Netw. 2009, 52, 879–897. [Google Scholar] [CrossRef]
  49. Ji, Z.; Liu, K.J.R. Dynamic spectrum sharing: A game theoretical overview. IEEE Commun. Mag. 2007, 45, 88–94. [Google Scholar] [CrossRef]
  50. Liu, Q.; Wang, C.; Li, X.; Gao, L. An improved genetic algorithm with modified critical path-based searching for integrated process planning and scheduling problem considering automated guided vehicle transportation task. J. Manuf. Syst. 2023, 70, 127–136. [Google Scholar] [CrossRef]
  51. Ahmed, A.M.; Rashid, T.A.; Saeed, S.A.M. Cat Swarm Optimization Algorithm: A Survey and Performance Evaluation. Comput. Intell. Neurosci. 2020, 2020, 4854895. [Google Scholar] [CrossRef]
  52. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  53. Nabil, E. A Modified Flower Pollination Algorithm for Global Optimization. Expert Syst. Appl. 2016, 57, 192–203. [Google Scholar] [CrossRef]
  54. Dong, H.B.; Li, D.J.; Zhang, X.P. Particle Swarm Optimization Algorithm with Dynamically Adjusting Inertia Weight. Comput. Sci. 2018, 45, 98–102, 139. [Google Scholar]
  55. Caraffini, F.; Iacca, G.; Neri, F.; Picinali, L.; Mininno, E. A CMA-ES Super-fit Scheme for the Re-sampled Inheritance Search. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1123–1130. [Google Scholar]
  56. Tanabe, R.; Fukunaga, A.S. Improving the Search Performance of SHADE Using Linear Population Size Reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  57. Kumar, S.A. Multi-population-based adaptive sine cosine algorithm with modified mutualism strategy for global optimization. Knowl. Based Syst. 2022, 251, 109326. [Google Scholar]
Figure 1. Golden eagle’s cruise behavior.
Figure 2. The relationship between A, C and the hyperplane.
Figure 3. Different degrees of nonlinear convex decreasing weights.
Figure 4. Flowchart of the improved golden eagle optimizer algorithm.
Figure 5. IGEO and other algorithms’ convergence curves on the test functions.
Figure 6. IGEO and various single-strategy algorithms' convergence curves on the benchmark functions.
Figure 7. Mapping association between position coding and allocation matrix.
Figure 8. Maximize the system benefit of the algorithms.
Figure 9. Fairness under different parameter environments.
Figure 10. The association between average benefit and user count.
Figure 11. The association between average benefit and frequency spectrum count.
Table 1. The influence of different values of parameter m on the algorithm.
m     | 1  | 2  | 3  | 4  | 5  | 6  | 7
F1    | 2  | 4  | 7  | 6  | 5  | 3  | 1
F2    | 2  | 3  | 4  | 7  | 6  | 5  | 1
F3    | 2  | 4  | 3  | 5  | 6  | 7  | 1
F4    | 2  | 6  | 7  | 5  | 4  | 3  | 1
F6    | 2  | 4  | 5  | 7  | 6  | 3  | 1
F7    | 2  | 3  | 5  | 7  | 6  | 4  | 1
Total | 12 | 24 | 31 | 37 | 33 | 25 | 26
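The exact expression of the nonlinear convex decreasing weight is given in the main text; purely as an illustration, one common convex schedule controlled by an exponent m, using the ω_ini = 0.9 and ω_end = 0.4 endpoints from Table 3, can be written as follows. The (1 − t/T)^m form is an assumption for this sketch, not the paper's exact formula.

```python
def convex_decreasing_weight(t, T, m=2, w_ini=0.9, w_end=0.4):
    """One plausible nonlinear convex decreasing schedule (assumed form):
    the weight falls from w_ini at t = 0 to w_end at t = T, with the
    exponent m controlling how sharply it drops early in the run."""
    return w_end + (w_ini - w_end) * (1.0 - t / T) ** m
```

Larger m shrinks the weight faster, shifting the balance from global exploration toward local exploitation earlier; Table 1 reports how this choice affects the ranking across test functions.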
Table 2. Benchmark test functions information.
Fun No. | Name | Equation | Features | D | Bounds | f*
F1 | Sphere | $F_1(x) = \sum_{i=1}^{n} x_i^2$ | Unimodal | 30 | $[-100, 100]^D$ | 0
F2 | Schwefel 2.22 | $F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | Unimodal | 30 | $[-10, 10]^D$ | 0
F3 | Griewank | $F_3(x) = 1 + \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right)$ | Multimodal | 30 | $[-600, 600]^D$ | 0
F4 | Ackley 1 | $F_4(x) = -20 e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)} + 20 + e$ | Multimodal | 30 | $[-32, 32]^D$ | 0
F5 | Periodic | $F_5(x) = 1 + \sum_{i=1}^{n} \sin^2(x_i) - 0.1 e^{-\sum_{i=1}^{n} x_i^2}$ | Multimodal | 30 | $[-50, 50]^D$ | 0.9
F6 | Rastrigin | $F_6(x) = 10n + \sum_{i=1}^{n} \left(x_i^2 - 10\cos(2\pi x_i)\right)$ | Multimodal | 30 | $[-5.12, 5.12]^D$ | 0
F7 | Salomon | $F_7(x) = 1 - \cos\left(2\pi\sqrt{\sum_{i=1}^{n} x_i^2}\right) + 0.1\sqrt{\sum_{i=1}^{n} x_i^2}$ | Multimodal | 30 | $[-100, 100]^D$ | 0
F8 | Yang 4 | $F_8(x) = \left(\sum_{i=1}^{n} \sin^2(x_i) - e^{-\sum_{i=1}^{n} x_i^2}\right) e^{-\sum_{i=1}^{n} \sin^2\sqrt{|x_i|}}$ | Multimodal | 30 | $[-10, 10]^D$ | −1
F9 | Matyas | $F_9(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$ | Fixed low-dim | 2 | $[-10, 10]^D$ | 0
F10 | Three-hump camel | $F_{10}(x) = 2 x_1^2 - 1.05 x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$ | Fixed low-dim | 2 | $[-5, 5]^D$ | 0
F11 | Himmelblau | $F_{11}(x) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2$ | Fixed low-dim | 2 | $[-5, 5]^D$ | 0
F12 | Levi 13 | $F_{12}(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left(1 + \sin^2(3\pi x_2)\right) + (x_2 - 1)^2\left(1 + \sin^2(2\pi x_2)\right)$ | Fixed low-dim | 2 | $[-10, 10]^D$ | 0
f* represents the theoretical optimal value of each test function.
D represents the dimension of the test function.
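Several of the benchmark functions in Table 2 can be implemented in a few lines each; the following NumPy sketches of F1 (Sphere), F4 (Ackley 1), and F6 (Rastrigin) illustrate how a candidate solution vector is scored.

```python
import numpy as np

def sphere(x):
    # F1: unimodal, theoretical optimum f* = 0 at the origin
    return np.sum(x ** 2)

def ackley(x):
    # F4: multimodal, f* = 0 at the origin
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def rastrigin(x):
    # F6: highly multimodal, f* = 0 at the origin
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))
```

Each function attains its theoretical optimum f* from Table 2 at the zero vector, which is the standard sanity check before running an optimizer on them.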
Table 3. Parameter settings of each algorithm.
Algorithm | Parameter Settings
BOA | p = 0.8, c = 0.01, a = 0.1
GWO | a_max = 2, a_min = 0
PSO | ω_max = 0.9, ω_min = 0.2, c1 = c2 = 2
SCA | a = 2
SSA | –
WOA | b = 1
GEO | Pc = [Pc0, PcT] = [1, 0.5], Pa = [Pa0, PaT] = [0.5, 2]
GEO_DLS | Pc = [Pc0, PcT] = [1, 0.5], Pa = [Pa0, PaT] = [0.5, 2], ε = 3, μ ∈ [0, 1], Q ∈ [0, 1]
IGEO | ω_ini = 0.9, ω_end = 0.4, Pc = [Pc0, PcT] = [1, 0.5], Pa = [Pa0, PaT] = [0.5, 2]
Table 4. Test results for the comparison of IGEO and other advanced algorithms.
Fun No. | Algorithm | Mean | Std | Time/s || Fun No. | Algorithm | Mean | Std | Time/s
F1 | BOA | 1.66 × 10−14 | 8.70 × 10−16 | 0.3235 || F2 | BOA | 9.17 × 10−12 | 1.11 × 10−12 | 0.2990
   | GWO | 2.92 × 10−70 | 1.03 × 10−69 | 0.4156 ||    | GWO | 1.77 × 10−80 | 5.91 × 10−80 | 0.2193
   | PSO | 7.77 × 10−12 | 1.44 × 10−11 | 0.1773 ||    | PSO | 3.89 × 10−27 | 1.01 × 10−26 | 0.1158
   | SCA | 3.48 × 10−3 | 1.26 × 10−2 | 0.3297 ||    | SCA | 3.39 × 10−21 | 1.02 × 10−20 | 0.1774
   | SSA | 8.70 × 10−9 | 1.59 × 10−9 | 0.3864 ||    | SSA | 6.89 × 10−6 | 1.52 × 10−6 | 0.3061
   | WOA | 5.37 × 10−172 | 0 | 0.1798 ||    | WOA | 5.08 × 10−112 | 3.36 × 10−111 | 0.1298
   | GEO | 5.05 × 10−12 | 4.35 × 10−12 | 0.8909 ||    | GEO | 2.58 × 10−5 | 9.24 × 10−5 | 0.7959
   | GEO_DLS | 1.15 × 10−17 | 8.00 × 10−17 | 0.8448 ||    | GEO_DLS | 4.66 × 10−12 | 1.28 × 10−11 | 0.7581
   | IGEO | 0 | 0 | 0.7498 ||    | IGEO | 0 | 0 | 0.6049
F3 | BOA | 2.80 × 10−16 | 1.98 × 10−15 | 0.3983 || F4 | BOA | 5.50 × 10−13 | 4.35 × 10−13 | 0.3228
   | GWO | 1.60 × 10−2 | 2.11 × 10−2 | 0.2654 ||    | GWO | 4.58 × 10−15 | 7.03 × 10−16 | 0.2250
   | PSO | 1.28 × 10−1 | 8.38 × 10−2 | 0.1658 ||    | PSO | 4.87 × 10−15 | 1.17 × 10−15 | 0.1165
   | SCA | 1.52 × 10−2 | 6.12 × 10−2 | 0.2395 ||    | SCA | 4.73 × 10−15 | 1.73 × 10−15 | 0.1896
   | SSA | 2.43 × 10−1 | 1.31 × 10−1 | 0.3889 ||    | SSA | 4.55 × 10−1 | 8.01 × 10−1 | 0.3061
   | WOA | 4.31 × 10−2 | 8.59 × 10−2 | 0.1794 ||    | WOA | 4.01 × 10−15 | 1.98 × 10−15 | 0.1323
   | GEO | 4.93 × 10−3 | 5.67 × 10−3 | 0.8387 ||    | GEO | 4.44 × 10−15 | 0 | 0.8499
   | GEO_DLS | 0 | 0 | 0.8421 ||    | GEO_DLS | 1.32 × 10−11 | 7.77 × 10−11 | 0.7699
   | IGEO | 0 | 0 | 0.8812 ||    | IGEO | 8.88 × 10−16 | 0 | 0.8077
F5 | BOA | 8.38 × 100 | 4.70 × 10−1 | 0.4023 || F6 | BOA | 2.23 × 101 | 1.90 × 101 | 0.3149
   | GWO | 5.98 × 100 | 1.70 × 100 | 0.5043 ||    | GWO | 2.65 × 10−1 | 1.33 × 100 | 0.2052
   | PSO | 1.00 × 100 | 9.74 × 10−10 | 0.2299 ||    | PSO | 2.75 × 100 | 1.55 × 100 | 0.1120
   | SCA | 3.94 × 100 | 3.88 × 10−1 | 0.4031 ||    | SCA | 5.86 × 10−1 | 3.14 × 100 | 0.1917
   | SSA | 2.39 × 100 | 2.68 × 10−1 | 0.4361 ||    | SSA | 1.41 × 101 | 6.96 × 100 | 0.3525
   | WOA | 9.46 × 10−1 | 6.79 × 10−2 | 0.1927 ||    | WOA | 0 | 0 | 0.1428
   | GEO | 2.60 × 100 | 4.06 × 10−1 | 0.9985 ||    | GEO | 2.79 × 100 | 1.63 × 100 | 0.8278
   | GEO_DLS | 2.01 × 100 | 2.86 × 10−1 | 0.9769 ||    | GEO_DLS | 4.26 × 10−16 | 3.01 × 10−15 | 0.7984
   | IGEO | 9.00 × 10−1 | 2.24 × 10−16 | 0.9742 ||    | IGEO | 0 | 0 | 0.8386
F7 | BOA | 3.03 × 10−1 | 6.85 × 10−3 | 0.3426 || F8 | BOA | 4.56 × 10−12 | 1.05 × 10−12 | 0.5778
   | GWO | 1.44 × 10−1 | 5.01 × 10−2 | 0.4574 ||    | GWO | 1.40 × 10−16 | 3.28 × 10−17 | 0.4964
   | PSO | 3.68 × 10−1 | 5.87 × 10−2 | 0.2078 ||    | PSO | 1.49 × 10−22 | 9.74 × 10−22 | 0.2717
   | SCA | 2.47 × 10−1 | 1.41 × 10−1 | 0.3542 ||    | SCA | 1.57 × 10−10 | 8.96 × 10−11 | 0.4437
   | SSA | 7.68 × 10−1 | 1.32 × 10−1 | 0.3978 ||    | SSA | 3.11 × 10−23 | 1.43 × 10−23 | 0.5102
   | WOA | 1.14 × 10−1 | 6.70 × 10−2 | 0.1881 ||    | WOA | −1.80 × 10−1 | 3.88 × 10−1 | 0.2832
   | GEO | 4.14 × 10−1 | 5.35 × 10−2 | 0.8920 ||    | GEO | 6.93 × 10−21 | 1.61 × 10−20 | 0.9595
   | GEO_DLS | 6.24 × 10−2 | 4.84 × 10−2 | 0.9006 ||    | GEO_DLS | −2.38 × 10−1 | 4.28 × 10−1 | 0.9696
   | IGEO | 0 | 0 | 0.7675 ||    | IGEO | −1.00 × 100 | 0 | 0.9908
F9 | BOA | 1.13 × 10−15 | 4.81 × 10−16 | 0.2707 || F10 | BOA | 5.43 × 10−18 | 5.67 × 10−18 | 0.2804
   | GWO | 1.38 × 10−271 | 0 | 0.1262 ||    | GWO | 0 | 0 | 0.1281
   | PSO | 9.27 × 10−92 | 6.19 × 10−91 | 0.0724 ||    | PSO | 2.30 × 10−125 | 1.39 × 10−124 | 0.0783
   | SCA | 1.81 × 10−122 | 1.28 × 10−121 | 0.1161 ||    | SCA | 1.12 × 10−149 | 7.25 × 10−149 | 0.1306
   | SSA | 2.90 × 10−16 | 3.90 × 10−16 | 0.2668 ||    | SSA | 7.90 × 10−16 | 8.78 × 10−16 | 0.2439
   | WOA | 0 | 0 | 0.1143 ||    | WOA | 4.00 × 10−184 | 0 | 0.1150
   | GEO | 3.37 × 10−93 | 1.37 × 10−92 | 0.8084 ||    | GEO | 5.40 × 10−125 | 2.01 × 10−124 | 0.7735
   | GEO_DLS | 5.22 × 10−48 | 2.99 × 10−47 | 0.7655 ||    | GEO_DLS | 4.11 × 10−60 | 2.38 × 10−59 | 0.7380
   | IGEO | 0 | 0 | 0.7445 ||    | IGEO | 0 | 0 | 0.5507
F11 | BOA | 2.38 × 10−3 | 2.80 × 10−3 | 0.4347 || F12 | BOA | 2.11 × 10−4 | 2.41 × 10−4 | 0.3884
   | GWO | 2.18 × 10−6 | 3.05 × 10−6 | 0.1891 ||    | GWO | 9.94 × 10−8 | 1.12 × 10−7 | 0.1877
   | PSO | 2.52 × 10−31 | 3.72 × 10−31 | 0.0952 ||    | PSO | 1.35 × 10−31 | 2.21 × 10−47 | 0.1359
   | SCA | 5.38 × 10−3 | 6.27 × 10−3 | 0.1640 ||    | SCA | 3.40 × 10−4 | 3.28 × 10−4 | 0.1755
   | SSA | 2.87 × 10−14 | 4.69 × 10−14 | 0.4635 ||    | SSA | 4.12 × 10−14 | 5.21 × 10−14 | 0.3833
   | WOA | 2.85 × 10−7 | 7.96 × 10−7 | 0.1760 ||    | WOA | 4.62 × 10−8 | 1.21 × 10−7 | 0.1804
   | GEO | 9.60 × 10−3 | 9.17 × 10−3 | 1.3803 ||    | GEO | 5.56 × 10−4 | 5.08 × 10−4 | 1.4123
   | GEO_DLS | 1.87 × 10−6 | 3.89 × 10−6 | 1.2409 ||    | GEO_DLS | 3.94 × 10−7 | 7.86 × 10−7 | 1.3110
   | IGEO | 0 | 0 | 0.6167 ||    | IGEO | 1.35 × 10−31 | 2.21 × 10−47 | 0.7423
Table 5. Comparison of MAE ranking between IGEO and other advanced algorithms.
Algorithm | MAE | Ranking || Algorithm | MAE | Ranking
IGEO | 9.25 × 10−17 | 1 || GEO | 4.93 × 10−1 | 6
WOA | 8.53 × 10−2 | 2 || GWO | 5.42 × 10−1 | 7
GEO_DLS | 1.61 × 10−1 | 3 || SSA | 1.50 × 100 | 8
PSO | 3.62 × 10−1 | 4 || BOA | 2.59 × 100 | 9
SCA | 4.09 × 10−1 | 5
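The MAE ranking in Table 5 averages, for each algorithm, the absolute error between its mean results and the theoretical optima f* of the test functions, then ranks algorithms by that average (lower is better). A minimal sketch, with illustrative input shapes assumed:

```python
import numpy as np

def mae_rank(results, optima):
    """results: dict {algorithm: sequence of mean values, one per function};
    optima: array of theoretical optima f*, in the same function order.
    Returns (MAE per algorithm, rank per algorithm); rank 1 = lowest MAE."""
    mae = {alg: float(np.mean(np.abs(np.asarray(v) - optima)))
           for alg, v in results.items()}
    order = sorted(mae, key=mae.get)
    return mae, {alg: i + 1 for i, alg in enumerate(order)}
```

For example, an algorithm whose means sit exactly on f* for every function gets MAE 0 and rank 1.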
Table 6. Comparison of the test results of IGEO and other advanced algorithms (dim = 100).
Fun No. | BOA Mean | BOA Std | GWO Mean | GWO Std | PSO Mean | PSO Std | SCA Mean | SCA Std
F1 | 1.80 × 10−14 | 7.59 × 10−16 | 2.55 × 10−34 | 5.00 × 10−34 | 9.66 × 10−1 | 9.73 × 10−1 | 4.16 × 103 | 2.98 × 103
F2 | 2.52 × 1049 | 1.08 × 1050 | 7.94 × 10−21 | 4.65 × 10−21 | 6.07 × 100 | 2.53 × 100 | 1.42 × 100 | 1.98 × 100
F3 | 1.11 × 10−14 | 5.64 × 10−15 | 1.32 × 10−3 | 5.92 × 10−3 | 1.31 × 10−2 | 7.07 × 10−3 | 3.78 × 101 | 2.83 × 101
F4 | 1.14 × 10−11 | 3.99 × 10−13 | 7.01 × 10−14 | 5.09 × 10−15 | 2.29 × 100 | 3.26 × 10−1 | 1.90 × 101 | 4.51 × 100
F5 | 3.32 × 101 | 2.45 × 100 | 3.00 × 101 | 6.44 × 100 | 1.16 × 100 | 9.13 × 10−2 | 1.87 × 101 | 1.13 × 100
F6 | 2.36 × 10−8 | 1.67 × 10−7 | 3.66 × 10−1 | 1.88 × 100 | 4.02 × 102 | 5.38 × 101 | 2.22 × 102 | 1.15 × 102
F7 | 3.29 × 10−1 | 2.96 × 10−2 | 2.60 × 10−1 | 4.95 × 10−2 | 1.30 × 100 | 8.69 × 10−2 | 8.08 × 100 | 2.86 × 100
F8 | 1.06 × 10−29 | 2.55 × 10−30 | 2.87 × 10−41 | 6.96 × 10−41 | 1.19 × 10−42 | 8.93 × 10−44 | 9.62 × 10−29 | 3.19 × 10−28
Fun No. | SSA Mean | SSA Std | WOA Mean | WOA Std | GEO Mean | GEO Std | IGEO Mean | IGEO Std
F1 | 9.86 × 10−3 | 7.53 × 10−3 | 5.55 × 10−170 | 0 | 6.94 × 10−1 | 1.47 × 10−1 | 0 | 0
F2 | 1.38 × 101 | 4.25 × 100 | 8.93 × 10−108 | 4.34 × 10−107 | 7.88 × 100 | 1.77 × 100 | 0 | 0
F3 | 1.18 × 10−1 | 2.42 × 10−2 | 4.34 × 10−3 | 2.17 × 10−2 | 4.14 × 10−1 | 6.60 × 10−2 | 0 | 0
F4 | 5.25 × 100 | 1.42 × 100 | 4.01 × 10−15 | 2.55 × 10−15 | 3.95 × 100 | 4.67 × 10−1 | 8.88 × 10−16 | 0
F5 | 1.06 × 101 | 3.70 × 100 | 9.50 × 10−1 | 9.03 × 10−2 | 7.14 × 100 | 1.13 × 100 | 9.00 × 10−1 | 2.24 × 10−16
F6 | 1.34 × 102 | 3.27 × 101 | 0 | 0 | 4.01 × 101 | 5.67 × 100 | 0 | 0
F7 | 5.92 × 100 | 6.35 × 10−1 | 1.38 × 10−1 | 7.79 × 10−2 | 2.41 × 100 | 2.26 × 10−1 | 0 | 0
F8 | 1.82 × 10−42 | 1.54 × 10−42 | −3.60 × 10−1 | 4.85 × 10−1 | 1.18 × 10−42 | 1.42 × 10−44 | −1.00 × 100 | 0
Table 7. Comparison of IGEO and other comparison algorithms on CEC2021.
Fun No. | Items | Min | Max | Std | Mean || Fun No. | Items | Min | Max | Std | Mean
F1 | CMA-ES-RIS | 3.68 × 10−1 | 9.72 × 103 | 2.27 × 103 | 1.34 × 103 || F2 | CMA-ES-RIS | 2.17 × 102 | 1.17 × 103 | 2.66 × 102 | 7.09 × 102
   | FMMPA | 9.72 × 100 | 5.73 × 101 | 1.20 × 101 | 1.35 × 101 ||    | FMMPA | 7.01 × 100 | 6.89 × 102 | 1.65 × 102 | 3.15 × 102
   | EIW_PSO | 2.18 × 103 | 5.99 × 106 | 1.42 × 104 | 3.47 × 104 ||    | EIW_PSO | 1.21 × 101 | 1.86 × 102 | 1.62 × 102 | 5.44 × 102
   | L-SHADE | 1.46 × 104 | 2.69 × 104 | 9.95 × 103 | 1.83 × 104 ||    | L-SHADE | −1.40 × 103 | 1.13 × 103 | 7.39 × 102 | −5.18 × 102
   | GEO_DLS | 8.21 × 104 | 2.30 × 106 | 5.39 × 105 | 5.66 × 105 ||    | GEO_DLS | 9.57 × 101 | 6.43 × 102 | 1.37 × 102 | 3.84 × 102
   | IGEO | 2.73 × 10−1 | 1.33 × 103 | 2.76 × 102 | 2.01 × 102 ||    | IGEO | 3.54 × 100 | 7.04 × 102 | 2.07 × 102 | 2.42 × 102
F3 | CMA-ES-RIS | 1.37 × 101 | 3.84 × 101 | 7.38 × 100 | 2.59 × 101 || F4 | CMA-ES-RIS | 9.87 × 10−3 | 1.74 × 100 | 4.32 × 10−1 | 8.68 × 10−1
   | FMMPA | 1.35 × 101 | 2.92 × 101 | 4.90 × 100 | 2.14 × 101 ||    | FMMPA | 2.66 × 10−1 | 1.70 × 100 | 3.41 × 10−1 | 7.93 × 10−1
   | EIW_PSO | 2.10 × 101 | 3.93 × 101 | 4.26 × 100 | 3.14 × 101 ||    | EIW_PSO | 9.50 × 101 | 3.49 × 102 | 2.28 × 102 | 2.24 × 102
   | L-SHADE | −1.79 × 103 | 6.15 × 102 | 7.39 × 102 | −9.18 × 102 ||    | L-SHADE | −6.00 × 102 | 1.80 × 103 | 7.39 × 102 | 2.70 × 102
   | GEO_DLS | 1.67 × 101 | 3.02 × 101 | 3.51 × 100 | 2.26 × 101 ||    | GEO_DLS | 9.69 × 10−1 | 2.75 × 100 | 4.81 × 10−1 | 1.71 × 100
   | IGEO | 1.08 × 101 | 1.85 × 101 | 1.81 × 100 | 1.33 × 101 ||    | IGEO | 9.89 × 10−2 | 1.08 × 100 | 2.20 × 10−1 | 5.52 × 10−1
F5 | CMA-ES-RIS | 1.48 × 102 | 8.90 × 103 | 2.50 × 103 | 2.53 × 103 || F6 | CMA-ES-RIS | 1.46 × 100 | 5.93 × 102 | 1.21 × 102 | 2.02 × 102
   | FMMPA | 1.41 × 100 | 4.08 × 101 | 9.35 × 100 | 1.33 × 101 ||    | FMMPA | 8.61 × 10−1 | 1.25 × 101 | 3.76 × 100 | 2.90 × 100
   | EIW_PSO | 4.87 × 103 | 8.83 × 103 | 1.94 × 103 | 1.68 × 103 ||    | EIW_PSO | 2.24 × 102 | 5.07 × 102 | 5.80 × 101 | 3.75 × 102
   | L-SHADE | −8.00 × 102 | 1.61 × 103 | 7.39 × 102 | 7.09 × 101 ||    | L-SHADE | −9.00 × 102 | 1.50 × 103 | 7.41 × 102 | −2.96 × 101
   | GEO_DLS | 8.83 × 102 | 1.11 × 104 | 2.23 × 103 | 2.97 × 103 ||    | GEO_DLS | 3.16 × 100 | 1.21 × 102 | 2.07 × 101 | 1.52 × 101
   | IGEO | 9.33 × 102 | 5.72 × 103 | 1.20 × 103 | 2.36 × 103 ||    | IGEO | 8.44 × 10−1 | 1.24 × 102 | 5.70 × 101 | 4.31 × 101
F7 | CMA-ES-RIS | 4.42 × 101 | 1.79 × 103 | 4.22 × 102 | 5.52 × 102 || F8 | CMA-ES-RIS | 1.01 × 102 | 1.16 × 103 | 2.28 × 102 | 1.61 × 102
   | FMMPA | 1.02 × 10−1 | 3.43 × 100 | 6.27 × 10−1 | 9.16 × 10−1 ||    | FMMPA | 1.16 × 101 | 1.01 × 102 | 2.20 × 101 | 9.46 × 101
   | EIW_PSO | 1.51 × 103 | 7.64 × 105 | 2.14 × 105 | 1.32 × 105 ||    | EIW_PSO | 1.16 × 101 | 1.87 × 103 | 5.75 × 102 | 3.35 × 102
   | L-SHADE | −3.99 × 102 | 2.00 × 103 | 7.22 × 102 | 4.70 × 102 ||    | L-SHADE | −2.00 × 102 | 2.20 × 103 | 7.39 × 102 | 6.70 × 102
   | GEO_DLS | 1.49 × 102 | 1.89 × 103 | 3.76 × 102 | 1.18 × 103 ||    | GEO_DLS | 5.93 × 101 | 1.11 × 102 | 8.96 × 100 | 1.06 × 102
   | IGEO | 5.40 × 102 | 4.61 × 103 | 9.38 × 102 | 2.06 × 103 ||    | IGEO | 1.00 × 102 | 1.06 × 102 | 1.50 × 100 | 1.02 × 102
F9 | CMA-ES-RIS | 1.00 × 102 | 5.05 × 102 | 1.11 × 102 | 3.63 × 102 || F10 | CMA-ES-RIS | 3.98 × 102 | 4.46 × 102 | 2.32 × 101 | 4.24 × 102
   | FMMPA | 1.00 × 102 | 1.00 × 102 | 2.28 × 10−4 | 1.00 × 102 ||    | FMMPA | 3.98 × 102 | 3.98 × 102 | 8.90 × 10−2 | 3.98 × 102
   | EIW_PSO | 3.90 × 102 | 4.24 × 102 | 8.01 × 100 | 4.11 × 102 ||    | EIW_PSO | 4.29 × 102 | 7.49 × 102 | 4.17 × 101 | 5.26 × 102
   | L-SHADE | −1.00 × 102 | 2.64 × 103 | 7.43 × 102 | 1.07 × 103 ||    | L-SHADE | 3.98 × 102 | 2.85 × 103 | 7.39 × 102 | 1.28 × 103
   | GEO_DLS | 1.02 × 102 | 3.43 × 102 | 1.06 × 102 | 1.79 × 102 ||    | GEO_DLS | 3.99 × 102 | 4.46 × 102 | 1.97 × 101 | 4.19 × 102
   | IGEO | 1.00 × 102 | 3.35 × 102 | 4.22 × 101 | 3.23 × 102 ||    | IGEO | 3.98 × 102 | 4.49 × 102 | 2.14 × 101 | 4.33 × 102
Table 8. Comparison of rank-sum test values between IGEO and other advanced algorithms.
Fun No. | p (IGEO–BOA) | p (IGEO–GWO) | p (IGEO–PSO) | p (IGEO–SCA) | p (IGEO–SSA) | p (IGEO–WOA) | p (IGEO–GEO) | p (IGEO–GEO_DLS)
F1 | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 4.22 × 10−12 (+)
F2 | 3.31 × 10−20 (+) | 2.29 × 10−16 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | NAN (=) | 3.21 × 10−20 (+) | 4.22 × 10−12 (+)
F3 | 3.31 × 10−20 (+) | NAN (=) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 4.22 × 10−12 (+)
F4 | 3.31 × 10−20 (+) | 6.73 × 10−23 (+) | 3.22 × 10−22 (+) | 6.62 × 10−22 (+) | 3.31 × 10−20 (+) | 4.39 × 10−15 (+) | 2.63 × 10−23 (+) | 4.22 × 10−12 (+)
F5 | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 7.77 × 10−8 (+) | 3.31 × 10−20 (+) | 4.22 × 10−12 (+)
F6 | 5.97 × 10−18 (+) | 0.1594 (−) | 1.54 × 10−18 (+) | 0.0822 (−) | 3.31 × 10−20 (+) | NAN (=) | 1.37 × 10−18 (+) | NAN (=)
F7 | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 2.53 × 10−20 (+) | 3.31 × 10−20 (+) | 3.28 × 10−20 (+) | 3.14 × 10−20 (+) | 3.31 × 10−20 (+) | 4.22 × 10−12 (+)
F8 | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 2.29 × 10−16 (+) | 3.31 × 10−20 (+) | 4.22 × 10−12 (+)
F9 | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 4.22 × 10−12 (+)
F10 | 0.3271 (−) | 4.58 × 10−10 (+) | 3.31 × 10−20 (+) | 3.81 × 10−7 (+) | 3.31 × 10−20 (+) | 7.75 × 10−6 (+) | 3.40 × 10−8 (+) | NAN (=)
F11 | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 1.44 × 10−5 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+)
F12 | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | NAN (=) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+) | 3.31 × 10−20 (+)
+/=/− | 11/0/1 | 10/1/1 | 11/1/0 | 11/0/1 | 12/0/0 | 10/2/0 | 12/0/0 | 10/2/0
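The +/=/− verdicts in Table 8 come from a two-sided Wilcoxon rank-sum test at α = 0.05 on the per-run results of two algorithms. A sketch using SciPy follows; the tie-breaking convention (for minimization, "+" means the first sample has the significantly lower mean) is an assumption of this sketch.

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(runs_a, runs_b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test on two sets of independent runs.
    Returns (p-value, verdict): '=' when not significant, otherwise
    '+' if runs_a has the lower (better, for minimization) mean, else '-'."""
    stat, p = ranksums(runs_a, runs_b)
    if p >= alpha:
        return p, "="
    return p, "+" if np.mean(runs_a) < np.mean(runs_b) else "-"
```

Comparing an algorithm's runs against themselves yields p = 1 and "=", which mirrors the NAN/"=" cells that appear when two algorithms produce identical results.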
Table 9. Friedman ranking under benchmark function comparison.
Algorithm | BOA | GWO | PSO | SCA | SSA | WOA | GEO | GEO_DLS | IGEO
Value | 7.08 | 4.50 | 5.00 | 6.17 | 6.92 | 2.83 | 6.50 | 4.58 | 1.42
Rank | 9 | 3 | 5 | 6 | 8 | 2 | 7 | 4 | 1
Table 10. Friedman ranking under CEC2021 comparison.
Algorithm | CMA-ES-RIS | FMMPA | EIW_PSO | L-SHADE | GEO_DLS | IGEO
Value | 4.10 | 1.60 | 5.10 | 3.50 | 3.80 | 2.90
Rank | 5 | 1 | 6 | 3 | 4 | 2
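The Friedman values in Tables 9 and 10 are average ranks: on each test function the algorithms are ranked by their mean result (rank 1 = best for minimization), and the ranks are averaged over all functions. A sketch using SciPy, with the input layout assumed:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_ranking(scores):
    """scores: (n_functions, n_algorithms) matrix of mean results
    (minimization, so lower is better). Returns the average Friedman
    rank per algorithm and the Friedman test p-value."""
    ranks = np.apply_along_axis(rankdata, 1, scores)  # rank algorithms per function
    avg_rank = ranks.mean(axis=0)
    stat, p = friedmanchisquare(*scores.T)            # one sample per algorithm
    return avg_rank, p
```

A small p-value indicates that at least one algorithm's distribution of ranks differs, which is what justifies the follow-up Holm procedure.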
Table 11. Subsequent Holm verification results for all algorithms under the benchmark testing functions.
Null hypothesis (Ho): all algorithms have similar distributions with little difference.
i | Sample 1 vs. Sample 2 | z̄ | p̄ | α/i | Conclusion
1 | IGEO–WOA | 1.250 | 0.263552 | 0.05 | Ho accepted for α = 5% since p̄ > α/i
2 | IGEO–GEO_DLS | 1.708 | 0.126518 | 0.025 | Ho accepted
3 | IGEO–PSO | 2.250 | 0.044171 | 0.016667 | Ho accepted
4 | IGEO–GWO | 2.375 | 0.033648 | 0.0125 | Ho accepted
5 | IGEO–SCA | 3.625 | 0.001186 | 0.01 | Ho rejected for α = 5% since p̄ < α/i
6 | IGEO–BOA | 4.000 | 0.000347 | 0.008333 | Ho rejected
7 | IGEO–GEO | 4.000 | 0.000347 | 0.007143 | Ho rejected
8 | IGEO–SSA | 4.042 | 0.000300 | 0.00625 | Ho rejected
N = 12; χ² = 34.296; degrees of freedom = 8; p-value < 0.05
Table 12. Subsequent Holm verification results for all algorithms under CEC2021.
Null hypothesis (Ho): all algorithms have similar distributions with little difference.
i | Sample 1 vs. Sample 2 | z̄ | p̄ | α/i | Conclusion
1 | IGEO–L-SHADE | 0.600 | 0.473289 | 0.05 | Ho accepted for α = 5% since p̄ > α/i
2 | IGEO–GEO_DLS | 0.900 | 0.282059 | 0.025 | Ho accepted
3 | IGEO–CMA-ES-RIS | 1.200 | 0.151494 | 0.016667 | Ho accepted
4 | IGEO–FMMPA | 1.300 | 0.120233 | 0.0125 | Ho accepted
5 | IGEO–EIW_PSO | 2.200 | 0.008551 | 0.01 | Ho rejected for α = 5% since p̄ < α/i
N = 10; χ² = 19.943; degrees of freedom = 5; p-value < 0.05
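The Holm post hoc procedure behind Tables 11 and 12 is a step-down correction: the hypotheses are ordered by p-value and each is tested against a shrinking threshold of the form α divided by the number of hypotheses not yet decided. A minimal standalone sketch:

```python
def holm_correction(pvals, alpha=0.05):
    """Holm step-down procedure. Tests the smallest remaining p-value
    against alpha/(m - step); once one test fails, all hypotheses with
    larger p-values are accepted. Returns reject/accept flags in the
    original order of `pvals`."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # stop: every remaining (larger) p-value is accepted
    return reject
```

This is why, in the tables, the "rejected" rows form a contiguous block at the most significant end: the step-down procedure cannot reject a hypothesis after it has accepted one.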
Table 13. Comparison of test results for IGEO and various single-strategy algorithms.
Fun No. | Algorithm | Mean | Std | Time/s || Fun No. | Algorithm | Mean | Std | Time/s
F1 | GEO | 6.60 × 10−12 | 6.80 × 10−12 | 0.8638 || F2 | GEO | 5.72 × 10−6 | 1.52 × 10−5 | 0.8149
   | wGEO | 3.37 × 10−169 | 0 | 0.9309 ||    | wGEO | 5.11 × 10−86 | 1.21 × 10−85 | 0.7754
   | pGEO | 1.08 × 10−292 | 0 | 0.9195 ||    | pGEO | 4.81 × 10−154 | 3.12 × 10−153 | 0.7696
   | IGEO | 0 | 0 | 0.7280 ||    | IGEO | 0 | 0 | 0.5939
F3 | GEO | 4.34 × 10−3 | 6.04 × 10−3 | 0.8464 || F4 | GEO | 4.30 × 10−15 | 7.03 × 10−16 | 0.8155
   | wGEO | 0 | 0 | 0.8268 ||    | wGEO | 8.88 × 10−16 | 0 | 0.8186
   | pGEO | 0 | 0 | 0.8384 ||    | pGEO | 8.88 × 10−16 | 0 | 0.8310
   | IGEO | 0 | 0 | 0.8457 ||    | IGEO | 8.88 × 10−16 | 0 | 0.8407
F5 | GEO | 2.62 × 100 | 3.85 × 10−1 | 0.9738 || F6 | GEO | 1.77 × 100 | 2.69 × 100 | 0.8146
   | wGEO | 9.00 × 10−1 | 2.24 × 10−16 | 0.9290 ||    | wGEO | 0 | 0 | 0.8040
   | pGEO | 9.00 × 10−1 | 2.24 × 10−16 | 0.9693 ||    | pGEO | 0 | 0 | 0.8284
   | IGEO | 9.00 × 10−1 | 2.24 × 10−16 | 0.9770 ||    | IGEO | 0 | 0 | 0.8406
F7 | GEO | 4.16 × 10−1 | 7.92 × 10−2 | 0.8811 || F8 | GEO | 8.18 × 10−21 | 1.47 × 10−20 | 0.9774
   | wGEO | 8.03 × 10−3 | 1.37 × 10−2 | 0.9259 ||    | wGEO | −3.75 × 10−1 | 3.62 × 10−1 | 1.0186
   | pGEO | 2.95 × 10−138 | 2.09 × 10−137 | 0.9063 ||    | pGEO | −1.00 × 100 | 0 | 0.9664
   | IGEO | 0 | 0 | 0.7688 ||    | IGEO | −1.00 × 100 | 0 | 1.0303
F9 | GEO | 1.30 × 10−93 | 3.43 × 10−93 | 0.7909 || F10 | GEO | 1.50 × 10−125 | 5.30 × 10−125 | 0.8167
   | wGEO | 1.22 × 10−182 | 0 | 0.8646 ||    | wGEO | 1.79 × 10−211 | 0 | 0.8387
   | pGEO | 1.59 × 10−301 | 0 | 0.8199 ||    | pGEO | 0 | 0 | 0.7478
   | IGEO | 0 | 0 | 0.7652 ||    | IGEO | 0 | 0 | 0.5730
F11 | GEO | 7.32 × 10−3 | 7.56 × 10−3 | 1.3881 || F12 | GEO | 5.82 × 10−4 | 6.30 × 10−4 | 1.4014
   | wGEO | 8.70 × 10−3 | 8.62 × 10−3 | 1.1919 ||    | wGEO | 6.06 × 10−4 | 4.79 × 10−4 | 1.2037
   | pGEO | 3.47 × 10−31 | 3.96 × 10−31 | 0.7713 ||    | pGEO | 1.35 × 10−31 | 2.21 × 10−47 | 0.8097
   | IGEO | 0 | 0 | 0.5874 ||    | IGEO | 1.35 × 10−31 | 2.21 × 10−47 | 0.7579
Table 14. Comparison of the best results and the statistical results obtained using different optimizers for the piston rod design problem.

| | GWO | PSO | SCA | SSA | WOA | GEO | GEO_DLS | IGEO |
|---|---|---|---|---|---|---|---|---|
| x1 | 500 | 48.72684 | 500 | 499.6521 | 500 | 133.2549 | 55.28091 | 0.05 |
| x2 | 500 | 23.28697 | 500 | 499.5754 | 500 | 247.8902 | 446.3305 | 360.2898 |
| x3 | 0.638347 | 0.965432 | 0.638543 | 0.638593 | 0.638346 | 84.3626 | 120 | 120 |
| x4 | 0.05 | 136.826 | 0.05 | 0.05 | 0.05 | 332.6104 | 379.5125 | 500 |
| Best | 0.011315 | 28.85755 | 0.011322 | 0.011324 | 0.011315 | 0.183212 | 0.050632 | 0.000317 |
| Mean | 0.012312 | 1087.541 | 0.01136 | 0.016363 | 0.021033 | 35.1961 | 45.201041 | 0.000317 |
| Std | 0.005458 | 1438.07 | 2.34 × 10^−5 | 0.005626 | 0.015908 | 41.3291 | 47.953324 | 5.51 × 10^−20 |
Table 15. Comparison of the best results and the statistical results obtained using different optimizers for the I-beam design problem.

| | GWO | PSO | SCA | SSA | WOA | GEO | GEO_DLS | IGEO |
|---|---|---|---|---|---|---|---|---|
| x1 | 50 | 56.41491 | 50 | 50 | 50 | 10 | 10 | 10 |
| x2 | 80 | 57.61318 | 80 | 80 | 80 | 57.15807 | 10 | 34.67102 |
| x3 | 1.499999 | 1.821976 | 1.499985 | 1.5 | 1.5 | 1.494375 | 5 | 2.340662 |
| x4 | 5 | 6.412176 | 5 | 5 | 5 | 5 | 0.9 | 0.9 |
| Best | 0.012295 | 0.010767 | 0.012295 | 0.012295 | 0.012295 | 0.001644 | 0.001644 | 0.001644 |
| Mean | 0.012295 | 0.025111 | 0.012295 | 0.012295 | 0.012295 | 0.00195 | 0.002334 | 0.001644 |
| Std | 3.11 × 10^−12 | 0.008798 | 4.62 × 10^−10 | 1.68 × 10^−13 | 5.62 × 10^−16 | 0.000216 | 0.001078 | 8.82 × 10^−19 |
Table 16. Comparison of the best results and the statistical results obtained using different optimizers for the car side impact design problem.

| | GWO | PSO | SCA | SSA | WOA | GEO | GEO_DLS | IGEO |
|---|---|---|---|---|---|---|---|---|
| x1 | 1.5 | 0.969826 | 1.5 | 1.5 | 1.5 | 0.5 | 0.5 | 0.5 |
| x2 | 0.5 | 1.159815 | 0.5 | 0.5 | 0.5 | 0.5 | 1.5 | 0.5 |
| x3 | 0.5 | 0.752953 | 0.5 | 0.5 | 0.5 | 0.5 | 1.5 | 0.5 |
| x4 | 1.5 | 1.188875 | 1.5 | 2 | 1.5 | 1.5 | 1.5 | 0.5 |
| x5 | 0.5 | 0.650562 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| x6 | 2 | 1.4745 | 2 | 2 | 2 | 0.5 | 1.5 | 0.5 |
| x7 | 0.5 | 0.613342 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| x8 | 0.537 | 0.238309 | 0.537 | 0.537 | 0.537 | 0.345 | 0.192 | 0.192 |
| x9 | 0.192 | 0.272645 | 0.345 | 0.192 | 0.345 | 0.192 | 0.345 | 0.192 |
| x10 | 0 | −1.22153 | 0 | 0 | 0 | −30 | −30 | −30 |
| x11 | 0 | −4.66199 | 0 | 0 | 0 | −30 | −30 | −30 |
| Best | 24.425 | 27.32353 | 24.425 | 26.43 | 24.425 | 22.255 | 22.255 | 21.8829 |
| Mean | 27.7745 | 33.02376 | 24.425 | 28.02457 | 27.27683 | 23.388 | 23.058 | 22.67033 |
| Std | 1.629772 | 11.50943 | 7.23 × 10^−15 | 1.185854 | 1.377264 | 0.950917 | 1.132364 | 0.765726 |
Table 17. Comparison of the best results and the statistical results obtained using different optimizers for the cantilever beam design problem.

| | GWO | PSO | SCA | SSA | WOA | GEO | GEO_DLS | IGEO |
|---|---|---|---|---|---|---|---|---|
| x1 | 6.021203 | 13.33283 | 5.925464 | 6.018755 | 5.817309 | 5.043958 | 5.733949 | 4.539906 |
| x2 | 5.311574 | 10.6711 | 5.730895 | 5.306471 | 5.746442 | 7.588327 | 3.837676 | 17.7145 |
| x3 | 4.487085 | 36.93033 | 4.41001 | 4.494158 | 4.487412 | 0.01 | 5.845848 | 7.484595 |
| x4 | 3.500028 | 7.705549 | 3.544291 | 3.500494 | 3.493616 | 35.90359 | 3.614627 | 4.238047 |
| x5 | 2.153853 | 26.01948 | 1.968378 | 2.153787 | 2.023312 | 16.6398 | 0.038911 | 5.066114 |
| Best | 1.336526 | 5.891594 | 1.343079 | 1.336521 | 1.342398 | 1.479208 | 1.374305 | 1.179635 |
| Mean | 1.336557 | 7.772976 | 1.380032 | 1.336529 | 1.433402 | 1.642724 | 1.515431 | 1.370618 |
| Std | 2.76 × 10^−5 | 0.969485 | 0.022838 | 6.10 × 10^−6 | 0.07762 | 0.072901 | 0.066534 | 0.073602 |
Table 18. Comparison of the best results and the statistical results obtained using different optimizers for the three-bar truss design problem.

| | GWO | PSO | SCA | SSA | WOA | GEO | GEO_DLS | IGEO |
|---|---|---|---|---|---|---|---|---|
| x1 | 0.788599 | 0.768879 | 0.788357 | 0.788649 | 0.789717 | 0.799781 | 0.79896 | 0.765336 |
| x2 | 0.408375 | 0.381687 | 0.408994 | 0.408235 | 0.405222 | 0.269902 | 0.626887 | 0.674236 |
| Best | 263.8915 | 259.8307 | 263.8942 | 263.8915 | 263.8923 | 264.1117 | 264.1828 | 259.8111 |
| Mean | 263.892 | 261.3095 | 265.215 | 263.8915 | 264.3333 | 265.4653 | 266.1265 | 259.8717 |
| Std | 0.000989 | 1.432297 | 4.791442 | 4.60 × 10^−7 | 0.683376 | 1.090409 | 1.748286 | 0.035903 |
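As a sanity check on the three-bar truss results, the standard formulation from the engineering-optimization literature can be evaluated directly. The sketch below assumes the usual constants (l = 100 cm, P = σ = 2 kN/cm²) and the textbook objective and stress constraints; the paper's exact implementation may differ. Evaluating GWO's reported design variables reproduces its tabulated best value up to rounding:

```python
import math

# Standard three-bar truss design problem (common literature constants,
# assumed here): bar length l = 100 cm, load P = 2 kN/cm^2, allowable
# stress sigma = 2 kN/cm^2. Design variables x1, x2 are cross-sections.
l, P, sigma = 100.0, 2.0, 2.0

def weight(x1, x2):
    """Objective: structural weight f(x) = (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

def constraints(x1, x2):
    """Stress constraints g_i(x) <= 0 of the standard formulation."""
    s2 = math.sqrt(2.0)
    g1 = (s2 * x1 + x2) / (s2 * x1**2 + 2 * x1 * x2) * P - sigma
    g2 = x2 / (s2 * x1**2 + 2 * x1 * x2) * P - sigma
    g3 = 1.0 / (s2 * x2 + x1) * P - sigma
    return g1, g2, g3

# GWO's reported solution from Table 18 evaluates to ~263.89, matching
# the tabulated best value of 263.8915 up to rounding of the variables.
f_gwo = weight(0.788599, 0.408375)
print(f"f(GWO) = {f_gwo:.4f}")
```

This kind of re-evaluation is a quick way to confirm that a tabulated "Best" column is consistent with its reported design variables.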
Deng, J.; Zhang, D.; Li, L.; He, Q. A Nonlinear Convex Decreasing Weights Golden Eagle Optimizer Technique Based on a Global Optimization Strategy. Appl. Sci. 2023, 13, 9394. https://doi.org/10.3390/app13169394

