1. Introduction
In recent years, information technology has had a deep impact on human civilization [
1]. Due to this advancement, a massive amount of data needs to be analyzed, more complicated real-world problems need to be solved, and enhancement of the computing efficiency of computers is needed [
2]. Artificial Intelligence (AI) has been a persistently hot topic to deal with this development. AI refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions [
3]. The term may also be applied to any machine that exhibits traits associated with the human mind, such as learning and problem-solving. AI methods in MG strategies include Reasoning and Learning (RL) and Swarm Intelligence (SI) methods [4]. SI-based algorithms have attained significant popularity among researchers and are considered one of the most promising categories of AI, especially metaheuristic algorithms [
5]. In general, metaheuristic optimization algorithms try to imitate physical, biological, or even chemical processes that take place in nature, although some algorithms are based on mathematical theories [6]. The most widely known ones are: the Genetic Algorithm (GA), which simulates Darwin's theory of evolution [7]; the Simulated Annealing (SA) algorithm, which is derived from the thermodynamic annealing process [8]; Particle Swarm Optimization (PSO), which simulates the behaviour of fish schools and bird flocks [9]; the Differential Evolution (DE) algorithm, which solves problems by iteratively improving candidate solutions through an evolutionary process [10]; Teaching Learning-Based Optimization (TLBO), which is based on the teaching-learning process [11]; the Jaya algorithm, which is based on the idea of moving toward the best solution while moving away from the worst one [12]; the Cuckoo Search (CS) algorithm, which imitates the brood parasitism of some cuckoo species together with the Levy-flight behaviour of certain birds and fruit flies [13]; the Flower Pollination Algorithm (FPA), which draws its metaphor from the pollination cycle of flowering plants in nature [14]; and the African Vultures Optimization Algorithm (AVOA), which imitates the foraging and navigation behaviour of African vultures [15].
Gradient-Based Optimization (GBO) is a recently developed metaheuristic for solving optimization problems. Its search directions are defined by the gradient of the function at the current point. The GBO algorithm is motivated by Newton's gradient method and includes two principal processes: the gradient search rule and the local escaping operator. The gradient search rule improves exploration and accelerates the convergence rate of GBO toward the optimal position within the search space, while the local escaping operator prevents GBO from getting stuck in local optima [16].
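To make these two processes concrete, the following is a minimal, hedged Python sketch of a GBO-style iteration rather than the exact formulation of [16]: a gradient-search-rule-like move pulls each agent using the gap between the best and worst solutions, and a local-escaping-style perturbation is applied with probability 0.5. The objective sphere, the function gbo_style_sketch, and all step constants are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Illustrative objective (minimization)."""
    return np.sum(x ** 2)

def gbo_style_sketch(obj, dim=10, pop=30, iters=200, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.array([obj(x) for x in X])
    for _ in range(iters):
        best, worst = X[fit.argmin()], X[fit.argmax()]
        for i in range(pop):
            # Gradient-search-rule-like move: step scaled by the gap between
            # the worst and best agents (illustrative form, not the exact GSR).
            rho = 2.0 * rng.random() * np.abs(best - X[i])
            step = rho * (X[i] - best) / (np.abs(worst - best) + 1e-12)
            cand = X[i] - rng.standard_normal(dim) * step
            # Local-escaping-style perturbation, applied with probability 0.5.
            if rng.random() < 0.5:
                cand = cand + 0.1 * rng.standard_normal(dim) * (ub - lb)
            cand = np.clip(cand, lb, ub)
            f_cand = obj(cand)
            if f_cand < fit[i]:  # greedy replacement
                X[i], fit[i] = cand, f_cand
    return X[fit.argmin()], fit.min()

if __name__ == "__main__":
    x_best, f_best = gbo_style_sketch(sphere)
    print(f_best)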
In swarm-based algorithms, inertia weight is a concept used to balance the influence of the current position and the attraction toward the best-known position in the search space. This helps the algorithm avoid getting trapped in local optima and explore the search space more effectively. The value of the inertia weight is typically decreased over time so that the search gradually shifts from exploration toward exploitation [17].
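As an illustration of this schedule (a generic sketch, not a formulation taken from [17]), the inertia weight at iteration t can be computed as w(t) = w_max − (w_max − w_min)·t/T and applied in a PSO-style velocity update; the values w_max = 0.9 and w_min = 0.4 are conventional but assumed here.

```python
import numpy as np

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: exploration early, exploitation late."""
    return w_max - (w_max - w_min) * t / t_max

def pso_velocity_update(v, x, p_best, g_best, t, t_max, c1=2.0, c2=2.0, rng=None):
    """PSO-style velocity update using the scheduled inertia weight (sketch)."""
    rng = rng or np.random.default_rng()
    w = inertia_weight(t, t_max)
    return (w * v
            + c1 * rng.random(x.shape) * (p_best - x)
            + c2 * rng.random(x.shape) * (g_best - x))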
In general, the procedure of an optimization algorithm consists of parameter selection, definition of the variables (search agents), initialization, exploration, exploitation, the randomization formula for the search step, step selection, and the termination condition [18]. Each search agent in the population interacts with the other agents to locate the optimal solution [19]. Generally, swarm-based algorithms require some common control parameters, such as population size and number of iterations [20]. In addition, some algorithms have specific control parameters besides the general ones, known as hyper-parameters. These parameters improve the performance of the algorithm when their values are tuned properly [21].
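For example, the common control parameters and the algorithm-specific hyper-parameters can be grouped as in the sketch below; the names and default values are illustrative assumptions only, not settings used in the experiments of this paper.

```python
from dataclasses import dataclass

@dataclass
class CommonParams:
    """Control parameters shared by most swarm-based algorithms."""
    pop_size: int = 50      # number of search agents
    max_iters: int = 1000   # number of iterations
    dim: int = 30           # number of decision variables

@dataclass
class PSOHyperParams:
    """Example of algorithm-specific hyper-parameters (PSO)."""
    w_max: float = 0.9      # initial inertia weight
    w_min: float = 0.4      # final inertia weight
    c1: float = 2.0         # cognitive coefficient
    c2: float = 2.0         # social coefficient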
The search agents in an SI system are designed to follow simple rules, and there is no central control dictating how individual agents should behave [22]. Each agent's behaviour is local and includes a degree of randomness; however, the interactions between the agents and the other parameters of the algorithm give rise to "intelligent" global behaviour that the individual agents could not achieve alone [23].
The major contributions of this paper are listed as follows:
A modified inertia weight is utilized in the original version of GBO to adjust the accuracy of the best solution. In an optimization algorithm, the inertia weight gives more weight to previous solutions in order to converge faster while still allowing the exploration of new solutions.
Modified parameters are utilized in GBO to boost the convergence speed and provide a proper balance between global and local search capabilities.
A novel operator (G) is introduced to support diversity in the search space. G is applied to move search agents toward better solutions using a newly developed formula, leading to suitable performance in both the global and local search.
To prove the superiority of the proposed IGBO, its performance is compared with GBO, CS, DE, FPA, PSO, TLBO, and AVOA using a wide range of benchmark functions and a few real-world problems.
The rest of the article is organized as follows: in Section 2, the background and related works are presented. Section 3 presents a brief description of the GBO, while the proposed IGBO is described in Section 4. Section 5 describes the benchmark functions and Section 6 explains the real-world problems. The obtained results and the performance comparison against the different optimization algorithms using the benchmark functions are presented in Section 7. In Section 8, the proposed IGBO is employed to solve a challenging real-world optimization problem in the field of engineering. Finally, Section 9 summarizes the study.
2. Background and Related Works
The improvement of an optimization algorithm relies on enhancing its procedure steps or on developing hybrid algorithms [24]. Every year, algorithms compete on benchmark functions and real-world problems, judged on time, accuracy, and results, to determine which algorithm is the most efficient [25]. There are different approaches to improving algorithms, such as parameter tuning, the opposite strategy, inertia weight, chaos strategies, fuzzy logic strategies, adding new operators, or multi-objective theory [26].
One approach is adding an inertia weight to adjust the accuracy and the convergence speed toward the optimal solution [27]; inertia can be implemented in different ways, such as fixed, arbitrary, and adaptive inertia weights [28]. Another approach is tuning the parameters to enhance the convergence rate. For example, some approaches use fixed values that suit the search process, some gradually change the parameters of the operators during the search, and some update the parameters only for certain instances of problems; others aim to develop an adaptive mechanism that changes the parameter value according to the state of the search process [29]. A further approach is adding extra operators to balance exploration and exploitation and to enhance diversity; there are different kinds of operators, such as comparison/relational operators, stream operators, subscript operators, function operators, and arithmetic operators [30].
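As a small illustration of the adaptive-mechanism idea (a hedged sketch with assumed names, not a method taken from the cited works), a step-size parameter can be enlarged or shrunk according to the recent success rate of the search:

```python
def adapt_step_size(step, success_rate, target=0.2, factor=1.1):
    """Increase the step size when improvements are frequent (explore more),
    and shrink it when they are rare (exploit the current region)."""
    return step * factor if success_rate > target else step / factor

# Example: 6 improvements in the last 20 evaluations -> success_rate = 0.3
step = adapt_step_size(step=0.5, success_rate=0.3)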
Because of its robustness, GBO has been applied to solve quite complex real-world problems. The authors in [31] used GBO to find the optimal design of an automatic voltage regulator (AVR). In [
32], GBO is used to solve the reliability redundancy allocation problem of a series-parallel framework. In [
33], GBO is employed in the estimation of the parameters of solar cells and photovoltaic (PV) panels as an effective and precise method. GBO in [
34] is utilized in the calculation of the Economic Load Dispatch (ELD) problem for different situations such as ELD with transmission losses, and mixed economic and emission dispatch. In [
35], GBO was combined with the Proton Exchange Membrane (PEM) fuel cell model to estimate the optimal parameters of three distinct kinds of PEM fuel cells. The scholars in [36] used an ensemble random vector functional link model (ERVFL) incorporated with GBO to model the ultrasonic welding of a polymeric material blend; ERVFL-GBO produced the best outcome, which indicates its higher accuracy compared with the other tested methods.
Despite its effective performance, GBO can become trapped in local optima when handling complicated non-linear functions, which can decrease its accuracy. To overcome these drawbacks, different variants of GBO have been introduced. The researchers in [37] used a multi-objective GBO-based weighted multi-view clustering method (MO-GBO-WMV) to find the consensus clustering among the different partitionings generated from individual views, and compared this approach with other methods to demonstrate its advanced ability. The authors in [38] introduced a multi-objective gradient-based optimizer (MO-GBO) to find the best solution when more than one objective is needed. In [
39], the GBO is improved using new chaotic numbers generated by various chaos maps; the resulting IGBO is then used to derive the parameters of PV modules. Similarly, in [
40] a novel random learning mechanism is designed to improve the performance of GBO. After that, IGBO was utilized to extract parameters of four photovoltaic models. In [
41], an improved gradient-based optimizer denoted CL-GBO, which incorporates the Cauchy and Levy mutation operators, is proposed to build DNA coding sets that are utilized as readers and addresses of DNA libraries. In [42], the goal was to solve single and multi Economic Emission Dispatch problems using an elegant method that mixes Manta Ray Foraging Optimization (MRFO) with GBO, named MRFO–GBO, to avoid being trapped in local optima and to accelerate the solution process. In [43], the performance of the Grey Wolf Optimizer (GWO) is enhanced by a new procedure that uses GBO, producing a new algorithm called G-GWO. This combination improves the algorithm's exploitation and exploration and also adds the Gaussian walk and Levy flight, two arbitrary operators utilized to increase the diversity of exploration in G-GWO. In [44], the GBO algorithm is modified using an operator D to improve the balance between the exploration and exploitation phases of the search process. Similarly, [45] used binary search, an advanced type of search that finds and fetches data from a sorted list of items, to construct the binary GBO (B-GBO) algorithm; B-GBO is then used for feature selection problems in machine learning and data mining.
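A common way to obtain a binary variant of a continuous optimizer for feature selection is to map each coordinate to {0, 1} through a sigmoid transfer function; the sketch below illustrates this generic idea and is not necessarily the exact mechanism of [45].

```python
import numpy as np

def binarize(position, rng=None):
    """Map a continuous position to a binary feature mask via a sigmoid
    transfer function: 1 means the corresponding feature is selected."""
    rng = rng or np.random.default_rng()
    prob = 1.0 / (1.0 + np.exp(-position))
    return (rng.random(position.shape) < prob).astype(int)

mask = binarize(np.array([-2.0, 0.0, 3.5]))  # e.g. array([0, 1, 1])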
Optimization algorithms have gained a surge in popularity and have attracted significant attention from both academic and industrial fields. Here, some recent optimization algorithms are reviewed. The Arithmetic Optimization Algorithm (AOA) was developed based on well-known arithmetic theory [46]. In [47], the scholars introduced the Aquila Optimizer (AO), which mathematically models and mimics the behaviour of the aquila during the process of hunting prey. The Dwarf Mongoose Optimization (DMO) algorithm is introduced in [48], which simulates the teamwork behaviours of the dwarf mongoose. The authors in [49] developed the Ebola Optimization Search Algorithm (EOSA) based on the propagation behaviour of the Ebola virus disease. The Gazelle Optimization Algorithm (GOA) is presented in [50], inspired by the survival ability of gazelles in their predator-dominated environment. The authors in [51] developed the Prairie Dog Optimization (PDO) algorithm, which mimics four prairie dog behaviours in foraging and burrow building. The Reptile Search Algorithm (RSA) is presented in [52], inspired by the hunting behaviour of crocodiles. The authors in [53] introduced an oppositional unified particle swarm gradient-based optimizer, a mix of oppositional learning, the unified particle swarm algorithm, and the GBO algorithm, to solve the complex inverse analysis of structural damage problems. In [54], the scholars developed the social engineering particle swarm optimization algorithm, a combination of the social engineering optimizer and particle swarm optimization, to deal with structural health monitoring as the objective function. However, combining more than one algorithm may lead to slower convergence and inaccuracy in optimization problems where speed is compulsory.
Summarizing the previous studies, the GBO algorithm has shown superiority over its modern counterparts in solving problems in different fields. However, the ordinary GBO has some limitations, such as:
GBO can still become trapped in local optima and, under some circumstances, suffers from an imbalance between exploitation and exploration, premature convergence, and a slow convergence speed, owing to incomplete judgment criteria and operators.
The main function of the local escaping operator (LEO) phase is to avoid local-optimum stagnation, but the algorithm enters the LEO phase only when a random number is less than 0.5.
The original GBO cannot identify the optimal solution for discrete functions, which have discrete search spaces and decision variables, such as in feature selection problems.
There is only one guide toward the best solution during the updating process, which limits the exploitation capability and can lead to a propensity to fall into local optima.
The original GBO does not have enough internal memory to save the optimal solutions across generations, which leads to a lack of population diversity. Moreover, the performance of the algorithm is affected significantly by the space domain of the objective function, and an intensive searching process may lead to deteriorated performance on multimodal objective functions.
7. Results and Analysis
The parameters of IGBO and the other algorithms are selected according to the parameter-settings subsection. The algorithms were then tested and executed in MATLAB (R2021a) on a desktop computer running Windows 10 Enterprise 64-bit with an Intel® Core™ i5-8500 CPU (Santa Clara, CA, USA) and 8.00 GB of RAM. All results are based on a population size of 50 and 50 independent runs of 1000 iterations each, and the obtained results are then compared.
On the benchmark test functions, the results found by IGBO are compared with seven well-known algorithms (GBO, CS, DE, PSO, FPA, TLBO, and AVOA). In addition, the results of IGBO on the real-world problems are compared with the results of the same counterpart algorithms.
7.1. Benchmark Test Functions Results
This section describes the results of IGBO and the chosen algorithms on the various benchmark functions. Furthermore, descriptive statistics, in terms of the best, worst, median, and mean values and the standard deviation, are computed and compared for all the algorithms. The best result for each function is highlighted in boldface.
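These descriptive statistics can be reproduced directly from the per-run best fitness values; the sketch below assumes a hypothetical runs array holding the 50 best-of-run values of one algorithm on one function.

```python
import numpy as np

def describe(runs):
    """Best, worst, median, mean and standard deviation over independent runs."""
    runs = np.asarray(runs)
    return {
        "best":   runs.min(),
        "worst":  runs.max(),
        "median": np.median(runs),
        "mean":   runs.mean(),
        "std":    runs.std(ddof=1),  # sample standard deviation
    }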
The results of IGBO for unimodal benchmark functions are compared with other algorithms in
Table 4. The IGBO algorithm obtained the lowest values for functions f1 and f2, but not for f3. This shows that IGBO has the least variation in results compared with the competitor algorithms; hence, IGBO is the better choice.
Table 5 illustrates the p-values obtained from the Wilcoxon rank-sum statistical test at the 5% significance level. Looking at the results, it is evident that IGBO achieves excellent results, with significant differences between the proposed IGBO approach and the other optimization methods.
Examining the results for the unimodal functions using the Friedman mean ranking test, it is obvious that the proposed IGBO algorithm performed better than the other counterpart algorithms, achieving a score of 3.826 in the Friedman test, as is clear from
Figure 5.
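Both tests can be reproduced with standard statistical tooling. The sketch below uses SciPy's rank-sum and Friedman implementations on placeholder best-of-run arrays (igbo_runs, gbo_runs, and pso_runs are assumed inputs; in practice they come from the 50 independent runs).

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(0)
# Placeholder best-of-run values; in practice these come from the 50 runs.
igbo_runs = rng.normal(0.0, 1e-3, 50)
gbo_runs  = rng.normal(5e-3, 1e-3, 50)
pso_runs  = rng.normal(1e-2, 2e-3, 50)

# Pairwise Wilcoxon rank-sum test at the 5% significance level.
stat, p_value = ranksums(igbo_runs, gbo_runs)
print("IGBO vs GBO significant:", p_value < 0.05)

# Friedman test across all compared algorithms (lower mean rank is better).
f_stat, f_p = friedmanchisquare(igbo_runs, gbo_runs, pso_runs)
print("Friedman p-value:", f_p)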
Multimodal benchmark functions are complicated tasks that test the exploration ability of optimization algorithms in locating the main optimal region of the search domain.
The results of IGBO for multimodal benchmark functions are compared with other algorithms in
Table 6. Analysis of the results in this table shows that IGBO, with its high exploration power, finds the global optimum for f4, f5, f6, and f8, which indicates the more effective performance of IGBO.
The non-parametric Wilcoxon signed-rank test achieved significant results when applied to the multimodal benchmark functions, as shown in Table 7. The post hoc analysis confirms that the proposed method is effective and statistically significant.
The Friedman test results obtained for the multimodal functions are shown in Figure 6. The overall ranking demonstrates that the IGBO algorithm is superior to its counterparts, obtaining the lowest value of 3.193.
To illustrate the nature of the algorithms on these functions, Figure 7 shows the 3D surface plots, which allow a more visual and intuitive understanding of each function's behaviour and properties. The convergence curves are demonstrated in Figure 8; the evaluation of the convergence capability shows the robustness of IGBO against the different algorithms.
7.2. Comparison of Computational Time
The modifications made to the original GBO enable it to find the optimum over the entire feasible range with a proper balance of global and local search capabilities. Moreover, they boost the convergence speed of IGBO compared with its counterparts. Each algorithm runs for 1000 iterations, and the average elapsed time, measured in seconds, is taken as the criterion for computational time. The proposed algorithm needs less time to find the best solution.
Table 8 illustrates the comparison of computational time between IGBO and other algorithms.
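The timing criterion can be reproduced as in the sketch below, assuming a hypothetical optimize callable that performs one complete 1000-iteration run.

```python
import time

def average_elapsed_time(optimize, runs=50):
    """Average wall-clock time (seconds) per complete optimization run."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        optimize()  # one run of 1000 iterations
        total += time.perf_counter() - start
    return total / runs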
7.3. Result of Real-World Problems
This part shows the results, in terms of the parameter values and the maximum function evaluations (MFEs), comparing the proposed IGBO algorithm against the counterpart algorithms. All algorithms are used to solve the real-world problems mentioned before, with a population size of 50 and 30 runs of 1000 iterations each.
7.3.1. Three-Bar Truss Design Results
Table 9 shows the results of the comparative algorithms for solving the three-bar truss design problem, and
Figure 9 shows the convergence curve and best positions of the three-bar truss design using IGBO.
The non-parametric Wilcoxon signed-rank test achieved significant results when applied to the three-bar truss problem, as demonstrated in Table 10. Moreover, in the Friedman test, the IGBO algorithm achieved the smallest rank compared with the other intelligent optimization algorithms. These tests confirm that the proposed method is effective and statistically significant.
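For reference, the widely used literature formulation of the three-bar truss problem is sketched below, assuming the standard values l = 100 cm and P = σ = 2 kN/cm²; the exact formulation used in Section 6 may differ in details.

```python
import numpy as np

L, P, SIGMA = 100.0, 2.0, 2.0  # cm, kN/cm^2, kN/cm^2 (assumed standard values)

def three_bar_truss(x):
    """Truss weight; x = [A1, A2] are cross-sectional areas with 0 <= Ai <= 1."""
    a1, a2 = x
    return (2.0 * np.sqrt(2.0) * a1 + a2) * L

def constraints(x):
    """Stress constraints g_i(x) <= 0 as commonly stated in the literature."""
    a1, a2 = x
    g1 = (np.sqrt(2.0) * a1 + a2) / (np.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P - SIGMA
    g2 = a2 / (np.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P - SIGMA
    g3 = 1.0 / (a1 + np.sqrt(2.0) * a2) * P - SIGMA
    return np.array([g1, g2, g3])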
7.3.2. I-Beam Design Results
Table 11 shows the results of the comparative algorithms for solving the I-beam design problem, and
Figure 10 shows the convergence curve and best positions of I-beam design using IGBO.
Table 12 shows the Wilcoxon signed-rank test and the Friedman mean rank test for the I-beam problem. The Wilcoxon test produced significant outcomes on the I-beam design results. Furthermore, the overall ranking using the Friedman test shows that the IGBO algorithm is superior to the other algorithms.
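For reference, a commonly cited form of the I-beam problem minimizes the vertical deflection subject to a cross-sectional area limit of 300 cm²; the sketch below uses this form and may omit constraints (e.g., a stress limit) included in the formulation of Section 6.

```python
def i_beam_deflection(x):
    """Vertical deflection objective; x = [b, h, t_w, t_f] in cm with typical
    bounds 10 <= b <= 50, 10 <= h <= 80, 0.9 <= t_w <= 5, 0.9 <= t_f <= 5."""
    b, h, tw, tf = x
    inertia_term = (tw * (h - 2.0 * tf) ** 3) / 12.0 \
                   + (b * tf ** 3) / 6.0 \
                   + 2.0 * b * tf * ((h - tf) / 2.0) ** 2
    return 5000.0 / inertia_term

def area_constraint(x):
    """Cross-sectional area constraint g(x) <= 0 (area limited to 300 cm^2)."""
    b, h, tw, tf = x
    return 2.0 * b * tf + tw * (h - 2.0 * tf) - 300.0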
7.3.3. Automatic Voltage Regulator Design Results
Owing to its novel performance and advanced tuning ability, the proposed IGBO is used to optimize an AVR system containing a FOPID controller.
Figure 11 shows the step response of the AVR with the IGBO-tuned FOPID controller.
The FOPID parameters estimated at the end of the IGBO search process are shown in
Table 13.
The transfer function model of the proposed IGBO-based AVR system, incorporating the optimized variables, is given below.
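Only the symbolic closed-loop structure is reproduced here as a hedged reference, assuming the standard first-order amplifier, exciter, generator, and sensor blocks commonly used in the AVR literature; the optimized FOPID coefficients are those reported in Table 13.

```latex
% FOPID controller and closed-loop AVR structure (symbolic form only;
% the optimized K_p, K_i, K_d, \lambda, \mu values are those of Table 13).
\begin{align}
  C(s) &= K_p + \frac{K_i}{s^{\lambda}} + K_d \, s^{\mu}, \\
  G_{fw}(s) &= \frac{K_a}{1+T_a s}\cdot\frac{K_e}{1+T_e s}\cdot\frac{K_g}{1+T_g s}, \qquad
  H(s) = \frac{K_s}{1+T_s s}, \\
  \frac{\Delta V_t(s)}{\Delta V_{ref}(s)} &= \frac{C(s)\,G_{fw}(s)}{1 + C(s)\,G_{fw}(s)\,H(s)}.
\end{align}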
To validate the effectiveness of the proposed optimal AVR design, its dynamic response is compared with that of the previously designed AVRs under identical operating conditions as shown in
Figure 12.
Table 14 provides a quantitative evaluation of the dynamic response based on important indicators such as the percentage overshoot, settling time, and peak time.
Conversely,
Table 15 describes the stability criterion results for the best AVR designs obtained by the algorithms. The stability indicators are the Phase Margin (PM), Delay Margin (DM), Bandwidth (BW), and Phase Gain (PG). As can be seen, the proposed IGBO-tuned FOPID-AVR provides the most stable design among the considered AVRs, with the highest PM and BW values.
In summary, the results on the real-world problems demonstrate that IGBO can deal with different challenging problems and various combinatorial optimization problems. Thus, among the compared algorithms, IGBO is the most powerful, with the lowest computational cost and a high convergence speed toward the optimal solution.