Article

Pressure Vessel Design Problem Using Improved Gray Wolf Optimizer Based on Cauchy Distribution

1 College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing 210023, China
2 National–Local Joint Project Engineering Laboratory of RF Integration & Micropackage, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(22), 12290; https://doi.org/10.3390/app132212290
Submission received: 11 October 2023 / Revised: 10 November 2023 / Accepted: 11 November 2023 / Published: 13 November 2023

Abstract

The Gray Wolf Optimizer (GWO) is an established algorithm for addressing complex optimization tasks. Despite its effectiveness, enhancing its precision and circumventing premature convergence is crucial to extending its scope of application. In this context, our study presents the Cauchy Gray Wolf Optimizer (CGWO), a modified version of GWO that leverages Cauchy distributions for key algorithmic improvements. The innovation of CGWO lies in several areas: First, it adopts a Cauchy distribution-based strategy for initializing the population, thereby broadening the global search potential. Second, the algorithm integrates a dynamic inertia weight mechanism, modulated non-linearly in accordance with the Cauchy distribution, to ensure a balanced trade-off between exploration and exploitation throughout the search process. Third, it introduces a Cauchy mutation concept, using inertia weight as a probability determinant, to preserve diversity and bolster the capability for escaping local optima during later search phases. Furthermore, a greedy strategy is employed to incrementally enhance solution accuracy. The performance of CGWO was rigorously evaluated using 23 benchmark functions, demonstrating significant improvements in convergence rate, solution precision, and robustness when contrasted with conventional algorithms. The deployment of CGWO in solving the engineering challenge of pressure vessel design illustrated its superiority over traditional methods, highlighting its potential for widespread adoption in practical engineering contexts.

1. Introduction

Swarm intelligence algorithms comprise a variety of computing methods inspired by group behavior in nature; their central premise is to solve diverse problems by simulating the cooperative work among the individuals of a given population [1]. In recent years, following the introduction of the particle swarm algorithm [2] and related methods, many scholars have focused their energy on this research field. Swarm intelligence algorithms are now widely used in academia and in various engineering disciplines.
The Gray Wolf Optimizer (GWO), an intelligent global optimization method, was proposed by Mirjalili in 2014 [3]. This algorithm, predicated on the cooperative hunting behavior and hierarchical structure of gray wolves in nature, has since garnered substantial academic interest. It has seen widespread application in computer science [4], engineering science [5], energy science [6], biomedical science [7], and other fields.
GWO has been utilized as an optimization tool in diverse areas. For instance, Zaid et al. [8] employed it for intelligent fractional integral control to achieve load frequency control in a two-zone interconnected modern power system. Azizi et al. [9] leveraged a multi-objective variant of GWO to determine the optimal operating conditions of a novel cogeneration system. Ullah et al. [10] adopted GWO when optimizing the parameters of an electric vehicle charging time prediction model. Moreover, Wang et al. [11] utilized it to optimize parameters in an energy prediction model to predict the energy development trend in the next few years. Elsisi [12] utilized an improved version of GWO for an adaptive predictive model for the control of autonomous vehicles, and Shaheen et al. [13] employed it for grid-wide optimal reactive power scheduling. Furthermore, Hu et al. [14] and Boursianis et al. [15] demonstrated GWO’s utility in wind speed prediction and in antenna design and synthesis, respectively. Liu et al. [16] applied GWO to robotic path planning; by integrating a suite of enhancement techniques, their approach achieved notable practical advances and offered a more effective optimization strategy for research in this field.
Although the GWO algorithm has many advantages, some problems remain. Because the hunt is centralized under the leadership of three wolves, when a leading wolf falls into a local extremum, the other individuals are affected as well. Therefore, when solving some large-scale problems, the algorithm is hindered by a lack of diversity and can easily fall into a local optimum [17]. To this end, many GWO-related studies have been proposed. Meidani et al. [18] proposed an adaptive GWO, which adaptively adjusts the convergence factor based on fitness during the search to improve the performance of the algorithm. Zhang et al. [19] proposed two dynamic GWO algorithms based on the standard GWO, in which the position update of the current search wolf does not need to wait for the comparison between the other search wolves and the three leading wolves; it can therefore update its position in time and improve the speed of iterative convergence. Sun et al. [20] proposed an equilibrium Gray Wolf Optimization algorithm with refracted reverse learning, which overcame the low late-stage population diversity of GWO and reduced the possibility of falling into local extremes. Li et al. [21] introduced a differential evolution algorithm and a nonlinear convergence factor into the traditional GWO to address its tendency to fall into local optima. Zhou et al. [22] proposed a nonlinear convergence factor and a search mechanism in which, during hunting, the update of the wolf pack is affected not only by the three leading wolves but also by the positions of the surrounding wolves.
The optimization ability of the above improved GWO algorithms has been enhanced to some extent, but some limitations remain. For example, Dereli [23] proposed an improved convergence factor, which improved the accuracy of the algorithm; however, the problem of insufficient diversity in the later stage remained. Heidari and Pahlavani [24] adopted Lévy flight and greedy strategies, which improved the GWO algorithm’s ability to jump out of local optima on many multimodal problems but did not accelerate the convergence speed. Motivated by these shortcomings, this paper proposes a series of strategies to improve GWO based on the long-tail property of the Cauchy distribution. First, an initialization strategy that follows a Cauchy distribution is used to increase the diversity of the initial population. Second, a dynamic nonlinear inertia weighting strategy based on the Cauchy distribution and a logarithmic function is proposed to further improve the search performance of the algorithm. Finally, in the late iterations of the algorithm, the inertia weight is taken as the key index for measuring mutation probability, and a Cauchy mutation operator is introduced into the position update to improve population diversity and greatly improve the algorithm’s ability to jump out of local optima. Introducing the Cauchy distribution into GWO enhances the algorithm’s global search ability, maintains diversity within the population, and provides a dynamic balance between exploration and exploitation, which collectively address the issues of premature convergence and limited late-stage search diversity. Simulation results show that CGWO has great advantages in terms of solution accuracy, convergence speed, and stability.
The quest for optimal solutions in diverse engineering design scenarios has long stood as a pivotal challenge in the realm of industrial production. With the ongoing evolution of optimization technology, significant advancements have been achieved in addressing complex, real-world engineering conundrums [25]. Furthermore, a plethora of novel optimization methodologies have been effectively integrated into a myriad of engineering design processes [26]. Among these, the design of pressure vessels is recognized as a quintessential problem [27], with the primary objective of minimizing overall design costs. In this context, selecting an optimization technique that aptly balances exploratory and developmental aspects, while, concurrently, circumventing local optima, is of paramount importance. This paper introduces the application of the newly proposed Cauchy Gray Wolf Optimizer (CGWO) algorithm to this specific design challenge. The results pertaining to pressure vessel design underscore the formidable potential of CGWO in navigating the complexities inherent in practical engineering issues, thereby heralding a novel avenue for resolving intricate challenges in engineering design optimization.
The rest of the paper is structured as follows: Section 2 describes the related work. Section 3 describes the details of the proposed CGWO algorithm and its flow. Section 4 presents the results and analysis of the simulation experiments on 23 standard test functions. Section 5 applies CGWO to a fundamental engineering task: the design of pressure vessels. Section 6 provides the conclusions.

2. Related Work

2.1. An Overview of the Gray Wolf Optimizer

The Gray Wolf Optimization algorithm simulates the hierarchical structure of the gray wolf population and its hunting behavior in nature. Hierarchy is the main characteristic of the gray wolf pack, and, to maintain order in the wolf pack, the gray wolf population is divided into four levels, namely, α, β, δ, and ω wolves, as shown in Figure 1.
In Figure 1, each wolf plays a different role in the group, among which α, β, and δ are considered to be the leader wolves with better ability, representing the optimal value, the second-best value, and the third-best value, respectively.
The gray wolf’s hunting process consists of three main steps, namely, finding the prey, surrounding and hunting until it stops moving, and, finally, attacking it. The process of encirclement can be modeled by updating the position of each wolf relative to that of the prey, as in Equation (1):
$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|$ (1)
where $\vec{X}_p$ and $\vec{X}$ are the position vectors of the prey and the gray wolf, respectively, $\vec{D}$ is the distance vector, and $t$ is the current iteration number.
The wolf’s position at the next iteration is updated as shown in Equation (2):
$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$ (2)
$\vec{A}$ and $\vec{C}$ in Equations (1) and (2) are coefficient vectors, calculated as shown in Equation (3):
$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}, \qquad \vec{C} = 2\vec{r}_2$ (3)
where $\vec{r}_1$ and $\vec{r}_2$ are random vectors in the range [0, 1], and $a$ is the convergence factor, which decreases linearly from 2 to 0 over the iterations according to Equation (4):
$a = 2\left(1 - t/T_{\max}\right)$ (4)
where $T_{\max}$ is the maximum number of iterations.
In the decision space of the optimization problem, the best solution (the position of the prey) is not known. Therefore, to simulate the hunting behavior of the gray wolf, the α, β, and δ wolves are assumed to have the best knowledge of the prey’s potential position, and their positions are used to estimate it; the other gray wolves then update their positions according to the three best wolves and gradually approach the prey, as shown in Figure 2.
The mathematical formula for calculating the distance vector of the three leading wolves during the gray wolf hunting the target prey is shown in Equation (5):
$\vec{D}_\alpha = \left|\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}\right|, \qquad \vec{D}_\beta = \left|\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}\right|, \qquad \vec{D}_\delta = \left|\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}\right|$ (5)
where $\vec{D}_\alpha$, $\vec{D}_\beta$, and $\vec{D}_\delta$ signify the respective distances from the α, β, and δ wolves to the other members of the pack. Correspondingly, $\vec{X}_\alpha$, $\vec{X}_\beta$, and $\vec{X}_\delta$ denote the present position vectors of the α, β, and δ wolves, and $\vec{X}$ designates the current position vector of the gray wolf under consideration. Moreover, $\vec{C}_1$, $\vec{C}_2$, and $\vec{C}_3$ are vectors composed of random coefficients.
The formula for the position update of the ω wolf is shown in Equation (6):
$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \qquad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \qquad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta, \qquad \vec{X}(t+1) = \dfrac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$ (6)
where $\vec{X}_1$, $\vec{X}_2$, and $\vec{X}_3$ denote the candidate positions derived from the α, β, and δ wolves, respectively, and $\vec{A}_1$, $\vec{A}_2$, and $\vec{A}_3$ are coefficient vectors computed as in Equation (3).
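As a concrete illustration, the following is a minimal NumPy sketch of the update described by Equations (1)–(6); the function name, array shapes, and vectorized layout are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def gwo_position_update(X, leaders, a):
    """One GWO position update following Equations (1)-(6).

    X       : (N, dim) array of current wolf positions.
    leaders : sequence of the alpha, beta, and delta position vectors.
    a       : current convergence factor from Equation (4).
    """
    N, dim = X.shape
    candidates = []
    for X_lead in leaders:                  # alpha, beta, delta in turn
        r1 = np.random.rand(N, dim)
        r2 = np.random.rand(N, dim)
        A = 2.0 * a * r1 - a                # Equation (3)
        C = 2.0 * r2
        D = np.abs(C * X_lead - X)          # Equation (5)
        candidates.append(X_lead - A * D)   # X1, X2, X3 of Equation (6)
    return sum(candidates) / 3.0            # average of the three estimates

# The convergence factor of Equation (4) at iteration t is
# a = 2.0 * (1.0 - t / T_max).
```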

2.2. Cauchy Distribution

The Cauchy distribution, also known as the Cauchy–Lorentz distribution, is a continuous probability distribution [28]. It has some special properties, such as heavy tails, which make it better able to capture the occurrence of extreme values and therefore suitable for describing abnormal situations or rare events [29]. The probability density function of the Cauchy distribution is shown in Equation (7):
$f(x; x_0, \gamma) = \dfrac{1}{\pi} \cdot \dfrac{\gamma}{(x - x_0)^2 + \gamma^2}$ (7)
where $x_0$ is the location parameter, which controls the location of the distribution, and $\gamma$ is the scale parameter [30], which controls the shape of the distribution. To illustrate the Cauchy distribution more intuitively, Figure 3 plots its density curves for different parameters, together with a comparison against the Gaussian distribution.
As can be seen from Figure 3, the Cauchy density is a symmetric bell-shaped curve that decreases slowly from its peak toward both ends. Relative to the Gaussian, its peak at the origin is lower while its tails are longer. In comparison to the Gaussian distribution, the Cauchy distribution exhibits a broader spread and a more pronounced propensity for dispersion [31], endowing it with the ability to produce outlier values at considerable distances from the center. Luo [32] has observed that, while the Gray Wolf Optimizer (GWO) demonstrates exemplary efficacy in resolving optimization problems with global optima situated at the coordinate origin, its performance is marred by a marked search bias in scenarios where the optima deviate from this central point. This observation underscores the efficacy and logical foundation of employing the Cauchy distribution as a strategic enhancement of the GWO algorithm.
Building on the foregoing analysis, the incorporation of the Cauchy distribution within the Gray Wolf Optimizer (GWO) algorithm is hypothesized to bolster its explorative prowess while simultaneously diminishing the tendency for premature convergence to local optima. Consequently, the heavy-tailed nature of the Cauchy distribution is of considerable significance in the sophisticated augmentation of the established GWO paradigm.
The cumulative distribution function of the Cauchy distribution is shown in Equation (8):
$F(x; x_0, \gamma) = \dfrac{1}{2} + \dfrac{1}{\pi} \arctan\left(\dfrac{x - x_0}{\gamma}\right)$ (8)
The formula [33] for generating random numbers using Cauchy distribution is shown in Equation (9):
$\mathrm{Cauchy}(x_0, \gamma) = x_0 + \gamma \cdot \tan\left(\pi \cdot (u - 0.5)\right)$ (9)
where $u$ is a random number in the range (0, 1).
Because the slope of the tangent function grows very quickly near its asymptotes, Cauchy random numbers have a non-negligible probability of taking extreme values, giving them greater randomness. If the optimal solution of the problem lies in such an extreme region, the use of Cauchy random numbers is advantageous. When the algorithm falls into a local optimum, a mutation driven by these random numbers has a greater probability of jumping out of the local optimal solution. Based on the above characteristics of the Cauchy distribution and Cauchy random numbers, the basic GWO algorithm is improved.
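A minimal sketch of Equation (9) in Python follows; the helper name is ours, and NumPy is assumed.

```python
import numpy as np

def cauchy_random(x0=0.0, gamma=1.0, size=None):
    """Cauchy-distributed random numbers via Equation (9):
    x0 + gamma * tan(pi * (u - 0.5)), with u uniform on (0, 1)."""
    u = np.random.random(size)
    return x0 + gamma * np.tan(np.pi * (u - 0.5))

# The heavy tails are easy to see empirically: a standard Cauchy sample
# routinely contains values hundreds of units from the origin, whereas a
# standard normal sample of the same size rarely strays past about 5.
samples = cauchy_random(size=10_000)
print(np.max(np.abs(samples)))   # typically very large
```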

3. The Proposed Algorithm

3.1. Introduction of the CGWO

To address the search bias that the traditional GWO exhibits when the optimal solution is not located at the origin, this paper makes a series of improvements to the traditional GWO algorithm, exploiting the Cauchy distribution’s greater probability, compared with other distributions, of generating extreme values far from the origin.
Improvements are mainly made from the following three aspects:
(1)
An initialization strategy predicated on the Cauchy distribution is implemented. Leveraging the heavy-tailed nature of the Cauchy distribution, the scope of distribution for the initial individuals is broadened, thereby circumventing an overly concentrated distribution of initial individuals that could lead to entrapment in local optima. This enhancement in the initialization process, achieved through the augmentation of the initial population’s diversity, enables the algorithm to conduct a more extensive search in its initial stages, thereby significantly bolstering its global search capabilities;
(2)
Employing a dynamic, nonlinear hybrid weighting strategy, which integrates the Cauchy distribution and logarithmic functions and facilitates a time-responsive adaptation of the algorithm’s search mechanism. This adaptation aligns with the distinct characteristics prevalent in different phases of the search process. In the initial stages, this strategy significantly enhances the algorithm’s capability for global exploration, whereas, in the latter stages, it effectively accelerates the convergence rate. Such a strategic approach ensures a more nuanced balance between global and local search methodologies at each stage of the algorithm’s execution, thereby optimizing overall performance;
(3)
A Cauchy mutation strategy based on dynamic inertia weights is adopted to improve the position update: the Cauchy distribution is introduced in the late search period to apply perturbations to individuals, while a survival-of-the-fittest rule keeps those perturbations controllable. This improves the diversity of the late population within a controllable range and enhances the algorithm’s ability to break out of local optima.
The specific improvement methods are described in Section 3.2, Section 3.3 and Section 3.4.

3.2. Cauchy Initialization Strategy

The GWO algorithm adopts a random method to initialize the population, and the evolution of the population is guided only by the better solutions within it, which makes the algorithm prone to falling into local optima [34].
In order to avoid the excessive concentration of initialized individuals and poor global search ability, combined with Cauchy random numbers [35], this paper proposes a Cauchy initialization strategy, as shown in Equation (10):
$X = lb + (ub - lb) \cdot \mathrm{Cauchy}(x_0, \gamma)$ (10)
where $lb$ is the lower bound of the variable, $ub$ is the upper bound, and $\mathrm{Cauchy}(x_0, \gamma)$ is a random number drawn from the Cauchy distribution.
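The initialization of Equation (10) can be sketched as below. Since the Cauchy distribution is unbounded, some samples fall outside the search range; the paper does not state its boundary handling, so clipping to [lb, ub] is our assumption.

```python
import numpy as np

def cauchy_init(N, dim, lb, ub, x0=0.0, gamma=1.0):
    """Cauchy population initialization, Equation (10)."""
    u = np.random.random((N, dim))
    c = x0 + gamma * np.tan(np.pi * (u - 0.5))   # Cauchy(x0, gamma), Equation (9)
    X = lb + (ub - lb) * c
    # Clip back into the search bounds (an assumption; the paper does not
    # specify how out-of-range initial samples are treated).
    return np.clip(X, lb, ub)
```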
The distribution of the initial population in the search space plays a key and decisive role in the global search ability of the algorithm. Compared with the initial population distribution of the conventional random method, the introduction of the random numbers generated by the Cauchy distribution can improve the initial diversity of particles, and this diversity helps the algorithm avoid a premature concentration in the search space to obtain more exploration in the search process. The comparison between random initialization and Cauchy random initialization is shown in Figure 4.
Figure 4 shows scatter plots of the conventional random initialization and the Cauchy random initialization proposed in this paper. As can be seen, conventional random point selection may concentrate the initial individuals, making the algorithm more likely to fall into a local optimum. Under Cauchy initialization, individuals are more likely to be created far from the center and can therefore cover most of the search space [36], which improves the diversity of the initial population and reduces the possibility of the algorithm falling into a local optimum.

3.3. Dynamic Nonlinear Inertia Weights Based on Cauchy Distribution

Shi and Eberhart [37] proposed an inertial weight method in particle swarm optimization that made outstanding contributions to the balance between algorithm exploration ability and development ability. Since GWO can easily fall into local optima, the idea of inertia weight has been introduced into GWO to improve its performance [38,39].
To improve the optimization ability of GWO, and inspired by the improved inertia weights of the PSO algorithm [40], this study designs a dynamic nonlinear inertia weight that combines a Cauchy random number with a logarithmic function. To balance exploration and exploitation, previous research shows that the inertia weight should decrease over the run [41]: a large early value of ω improves the global search ability of the algorithm, while a decreasing ω in the later period favors local search. However, when ω becomes very small late in the run, population diversity is lost and the algorithm easily falls into a local optimum. To solve this problem, this paper introduces the Cauchy random value and proposes a dynamic, more flexible weight that not only realizes a nonlinear reduction in inertia weight but also improves population diversity and prevents premature convergence. The proposed inertia weight is shown in Equation (11):
$\omega(t) = \omega_{\min} + \dfrac{\omega_{\max} - \omega_{\min}}{2} \cdot \dfrac{1}{\ln\left(e + k \cdot \tau^2\right)} + \dfrac{\omega_{\max} - \omega_{\min}}{2} \cdot \mathrm{Cauchy}(x_0, \gamma)$ (11)
where $\omega_{\max}$ and $\omega_{\min}$ represent the maximum and minimum inertia weights, respectively; $\tau = t/T_{\max}$, i.e., the current iteration number divided by the maximum number of iterations; and $k$ is a constant that regulates the rate of weight reduction.
This formula combines a logarithmic function with a Cauchy random number. It not only realizes an overall trend of nonlinear decline but also, thanks to the Cauchy random number, increases the population diversity of the algorithm in the later stage, satisfying the necessary conditions for convergence without converging prematurely. The first term ensures that the inertia weight decreases nonlinearly through the logarithmic factor $1/\ln(e + k \cdot \tau^2)$, so the weight decreases gradually as $\tau$ increases. The second term introduces Cauchy random numbers; owing to the strong disturbance of the Cauchy distribution, the diversity of the population in the later stage is improved, local optima are avoided, the accuracy of the algorithm is improved, and late-stage convergence is accelerated.
The position update formula based on inertia weight is shown in Equation (12):
$\vec{X}(t+1) = \omega \cdot \dfrac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$ (12)
Equation (12) modifies the foundational position update of Equation (6) by applying the nonlinear inertia weight ω governed by the Cauchy distribution, going beyond a simple average. During the position update procedure, the dynamic nonlinear decay of the inertia weight enhances the equilibrium between exploration and exploitation, thereby expediting the algorithm’s iteration speed, while the incorporation of Cauchy random numbers augments the algorithm’s diversity in the later stages, amplifying the likelihood of escaping local optima. Consequently, this formula brings the advantageous properties of the Cauchy inertia weight into the position-updating process. Figure 5 compares uniformly generated random numbers with Cauchy random numbers over 100 iterations.
It can be seen from Figure 5 that, compared with random numbers generated randomly, Cauchy random numbers have greater fluctuations and have a certain probability of producing some extreme values far from the origin, which is conducive to improving diversity and is better equipped to solve the problem that GWO can easily fall into the local optimum.
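Equations (11) and (12) can be sketched as follows. The values of $\omega_{\min}$ and $\omega_{\max}$ are illustrative assumptions on our part; the paper fixes only k = 4.5 (Section 4.2.2).

```python
import numpy as np

def cauchy_inertia_weight(t, T_max, w_min=0.2, w_max=0.9, k=4.5,
                          x0=0.0, gamma=1.0):
    """Dynamic nonlinear inertia weight of Equation (11).
    w_min and w_max are illustrative; the paper sets k = 4.5."""
    tau = t / T_max
    cauchy = x0 + gamma * np.tan(np.pi * (np.random.random() - 0.5))
    half_span = (w_max - w_min) / 2.0
    return w_min + half_span / np.log(np.e + k * tau**2) + half_span * cauchy

# Equation (12) then scales the averaged leader estimate, e.g.:
#   w = cauchy_inertia_weight(t, T_max)
#   X_next = w * gwo_position_update(X, leaders, a)
```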

3.4. Cauchy Mutation Strategy Based on Dynamic Inertia Weight

Cauchy mutation, as an update strategy, has been applied to optimization algorithms by many scholars, and it has been proven to be an effective technique for improving algorithms [42]. Cauchy mutation can improve the exploration or development ability of the algorithm [43]. In this paper, the Cauchy nonlinear inertia weight is used as the mutation probability, and a new Cauchy mutation position update strategy combined with the inertia weight is proposed. Compared with the original position update strategy, the improved method can generate a wider range of individual mutations. Compared with the conventional Cauchy mutation strategy, the proposed mutation strategy is more flexible and can adapt to each stage of the search.
If the leading wolf is trapped in the local optimal value, other individuals will also be greatly affected. Therefore, the objects of the Cauchy mutation are the three leading wolves α, β, and δ. The Cauchy mutation strategy based on inertia weight is shown in Equation (13):
$X_{\alpha,\beta,\delta}^{i+1}(t) = X_{\alpha,\beta,\delta}^{i}(t) + \mathrm{Cauchy}(x_0, \gamma) \cdot X_{\alpha,\beta,\delta}^{i}(t)$ (13)
where $X_{\alpha,\beta,\delta}^{i}(t)$ is the individual before the mutation and $X_{\alpha,\beta,\delta}^{i+1}(t)$ is the individual after the mutation.
The mutation probability, which determines whether an individual needs mutation, is calculated using Equation (14).
$P_s = \omega = \omega_{\min} + \dfrac{\omega_{\max} - \omega_{\min}}{2} \cdot \dfrac{1}{\ln\left(e + k \cdot \tau^2\right)} + \dfrac{\omega_{\max} - \omega_{\min}}{2} \cdot \mathrm{Cauchy}(x_0, \gamma)$ (14)
In Equation (14), the inertia weight serves as the mutation probability $P_s$, the basis for judging whether a mutation operation is needed. In the early stage, the inertia weight remains large and the global search ability of the algorithm is strong. Cauchy perturbations are added to the current position to generate new candidate solutions. Because the Cauchy distribution is heavy-tailed and possesses neither a finite mean nor a finite variance, it introduces greater randomness, which helps the algorithm jump out of local optima.
Due to the heavy-tail nature, the Cauchy mutation may also produce large disturbances. In order to prevent large disturbances from affecting the convergence speed, this paper first compares the fitness values of the mutant and the current optimal individual and then uses the survival of the fittest strategy [44] to determine the final new individual—this ensures that the perturbations generated by the Cauchy distribution are controllable. The specific position update formula is shown in Equation (15):
$X_{\alpha,\beta,\delta}(t) = \begin{cases} X_{\alpha,\beta,\delta}^{i}(t), & rand < P_s \\ X_{\alpha,\beta,\delta}^{i+1}(t), & rand > P_s \text{ and } f\left(X_{\alpha,\beta,\delta}^{i}(t)\right) > f\left(X_{\alpha,\beta,\delta}^{i+1}(t)\right) \\ X_{\alpha,\beta,\delta}^{i}(t), & rand > P_s \text{ and } f\left(X_{\alpha,\beta,\delta}^{i}(t)\right) < f\left(X_{\alpha,\beta,\delta}^{i+1}(t)\right) \end{cases}$ (15)
where $f\left(X_{\alpha,\beta,\delta}^{i}(t)\right)$ is the fitness value of the α, β, or δ individual before the mutation, and $f\left(X_{\alpha,\beta,\delta}^{i+1}(t)\right)$ is the fitness value after the mutation. $P_s$ is the mutation probability, i.e., the Cauchy inertia weight proposed in this paper, and it is a dynamically changing value.
Equation (15) applies a greedy strategy: it not only exploits the long-tail characteristic of the Cauchy mutation to increase late-stage population diversity and prevent falling into local optima but also, following the idea of natural selection, makes the mutant compete with the pre-mutation individual, preventing the decline in accuracy that extreme Cauchy values could otherwise cause.
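Equations (13)–(15) combine into the per-leader routine sketched below (minimization assumed); as in Algorithm 1, the mutation fires only when rand > $P_s$. The function name is our own.

```python
import numpy as np

def cauchy_mutate_leader(X_lead, fitness_fn, P_s, x0=0.0, gamma=1.0):
    """Cauchy mutation with greedy selection, Equations (13) and (15)."""
    if np.random.random() < P_s:
        return X_lead                                   # no mutation this round
    u = np.random.random(X_lead.shape)
    cauchy = x0 + gamma * np.tan(np.pi * (u - 0.5))     # Equation (9)
    X_mut = X_lead + cauchy * X_lead                    # Equation (13)
    # Survival-of-the-fittest (greedy) selection, Equation (15): keep the
    # mutant only if it improves the fitness (minimization assumed).
    return X_mut if fitness_fn(X_mut) < fitness_fn(X_lead) else X_lead
```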

3.5. The Pseudo-Code of the Proposed CGWO Algorithm

The pseudo-code of the proposed algorithm is shown in Algorithm 1.
Algorithm 1 Pseudo-code of the CGWO
Initialize using Equation (10) to generate the gray wolf population $X_i\ (i = 1, 2, \ldots, N)$
Calculate the fitness value   f ( X i )   of each individual in the population
Select     X α =   the best individual
       X β =   the second individual
       X δ =   the third individual
While  t < maximum number of iterations
   for   i = 1   t o   N
      calculate the values of parameter   a ,   A ,   C   according to Equations (3) and (4)
      calculate the values of Inertia weight   ω   according to Equation (11)
      update the position of individuals   X i   by Equations (6) and (12)
   end
   Update the leader wolf α, β and δ
   If   rand > Ps
      use Cauchy mutation and survival of the fittest strategies to the leader wolves using Equations (13) and (15)
   end
end
In Algorithm 1, the initialization part is an improved initialization method based on the Cauchy random number, and the Cauchy nonlinear inertia weight is introduced in the position update part. In the later stage of the algorithm, the Cauchy mutation strategy and survival of the fittest law are used to increase the diversity of the population to avoid being trapped in the local optimum.
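Putting the pieces together, a compact driver corresponding to Algorithm 1 might look as follows. It assumes the helper sketches from Sections 2.1 and 3.2–3.4 above (cauchy_init, gwo_position_update, cauchy_inertia_weight, cauchy_mutate_leader) are in scope; the boundary handling and the way mutated leaders are written back into the population are our assumptions, since the paper does not specify them.

```python
import numpy as np

def cgwo(fitness_fn, dim, lb, ub, N=30, T_max=500):
    """Minimal CGWO loop following Algorithm 1 (minimization)."""
    X = cauchy_init(N, dim, lb, ub)                     # Equation (10)
    for t in range(T_max):
        fit = np.apply_along_axis(fitness_fn, 1, X)
        order = np.argsort(fit)                         # best individuals first
        leaders = [X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()]
        a = 2.0 * (1.0 - t / T_max)                     # Equation (4)
        w = cauchy_inertia_weight(t, T_max)             # Equation (11); also P_s, Equation (14)
        X = w * gwo_position_update(X, leaders, a)      # Equations (6) and (12)
        X = np.clip(X, lb, ub)                          # assumed boundary handling
        # Cauchy mutation of the three leader wolves, Equations (13)-(15)
        for j in range(3):
            leaders[j] = cauchy_mutate_leader(leaders[j], fitness_fn, w)
        X[:3] = leaders                                 # assumed write-back of leaders
    fit = np.apply_along_axis(fitness_fn, 1, X)
    best = X[np.argmin(fit)]
    return best, fitness_fn(best)
```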

4. Simulation Experiment and Results Analysis

This paper includes five main experiments:
(1)
The first experiment was conducted to find appropriate values for the scale parameter of the Cauchy distribution and for the inertia weight parameter k in the proposed algorithm, which are then used as the algorithm’s parameter settings;
(2)
The second experiment is a comparison experiment between the proposed CGWO algorithm and several other optimization algorithms as well as traditional GWO, which is used to test the performance of CGWO in low and high dimensions;
(3)
The third experiment is a comparison between the inertia weight improvement strategy of this paper and the inertia weight strategy of several other proposed GWOs, which is used to test the effectiveness and superiority of the Cauchy inertia weight strategy;
(4)
The fourth experiment is a comparison of the proposed CGWO algorithm with several other improved GWO algorithms. It is used to verify the improvement in algorithm performance achieved by the improvements suggested in this paper;
(5)
The fifth experiment is a rank-sum test of the proposed algorithm against several algorithms in the first experiment. Through this test, one can test whether there is a significant difference between CGWO and other algorithms.

4.1. Experiment Set Up

The 23 benchmark function sets [45] employed in this investigation are of a classical nature and were rigorously selected to evaluate the Gray Wolf Optimizer (GWO) algorithm at its inception. The suite encompasses a tripartite categorization of test functions: 7 unimodal, 6 multimodal, and 10 fixed, low-dimensional multimodal benchmark functions. These functions are instrumental in assessing the multifaceted performance of an algorithm, amalgamating both its global optimization capabilities and local search proficiencies. The unimodal benchmark functions are typically harnessed to gauge the algorithm’s global search prowess and the performance on multimodal benchmarks is indicative of its local search efficiency and its susceptibility to premature convergence in local optima. To provide a holistic appraisal of the algorithm’s performance, the present study conducts evaluations across both lower and higher dimensions, thereby scrutinizing the algorithm’s scalability.
All experiments were conducted on a PC with an AMD Ryzen 7 5800H CPU running Windows 10; all algorithms were implemented in MATLAB R2021a.
The unimodal benchmark functions are presented in Table 1, which lists the function expressions, dimensions, search ranges, and optimal solutions.
The multimodal benchmark functions are shown in Table 2.
The fixed-dimension multimodal benchmark functions are shown in Table 3.

4.2. Selection of Parameters

4.2.1. Selection of the Scale Parameter γ in the Cauchy Distribution

The scale parameter of the Cauchy distribution controls its shape, which has a certain impact on the exploration and exploitation abilities of the algorithm; selecting an appropriate scale parameter is therefore very important for the improvement effect. To obtain a reasonable value of $\gamma$, a series of extensive experiments was carried out on the test functions in the tables, and relatively consistent results were obtained. For brevity, only the mean values of the solution results for different values of $\gamma$ on five representative functions are given here. Among these five functions, $f_1$ and $f_7$ are unimodal benchmark functions, $f_{10}$ and $f_{12}$ are multimodal benchmark functions, and $f_{15}$ is a fixed-dimension multimodal benchmark function. The experimental results are shown in Table 4, with the best results in bold.
From the analysis of the results in Table 4, for $f_1$ and $f_{10}$, the algorithm achieves consistently good results regardless of the parameter value. For functions $f_7$, $f_{12}$, and $f_{15}$, the results differ little across parameter values. When $\gamma = 1$, the results on each function are good and consistent. Therefore, the scale parameter of the Cauchy distribution is chosen as 1, which is also the parameter of the standard Cauchy distribution.
In addition, to show the influence of different values of $\gamma$ on CGWO performance intuitively, line charts of the mean solution results for $f_{12}$ and $f_{15}$ are presented in Figure 6.
As can be seen from Figure 6, for f 12 , when γ [ 0.8 ,   1.3 ] , the results of CGWO are more stable and accurate. For f 15 , when γ = 0.6 and γ = 1 , the results are good. Therefore, it is appropriate to set γ = 1 in the CGWO algorithm in this paper as the algorithm can better balance the abilities of exploration and development. It is also the parameter of the standard Cauchy distribution.

4.2.2. Selection of Inertia Weight Parameter k

The parameter $k$ of the logarithmic function in the dynamic inertia weight of CGWO controls the decreasing trend of the inertia weight. To obtain an appropriate value of $k$, a series of extensive experiments was conducted on the test functions in the tables. For brevity, the results of four typical functions are shown: $f_1$ and $f_7$ are unimodal benchmark functions, $f_{12}$ is a multimodal benchmark function, and $f_{15}$ is a fixed-dimension multimodal benchmark function. The test results for parameter $k$ are shown in Table 5, with the best results in bold.
From the test results in Table 5, it can be seen that, for function $f_1$, the algorithm achieves the optimal result regardless of the value of $k$. For $f_7$, the performance of the algorithm is relatively good once $k$ reaches 4.0, and the effect is optimal when $k = 4.5$. For functions $f_{12}$ and $f_{15}$, the algorithm performs well across the tested range of $k$ values; therefore, the value of $k$ is set to 4.5.
In order to see the influence of different k values more intuitively on the performance of the algorithm, a line diagram of the mean value of the algorithm solution results changing with the k value is shown in Figure 7.
According to Figure 7, the following conclusions can be drawn. For function $f_7$, the mean value solved by the algorithm shows a downward trend as $k$ increases, and the results are better in the second half of the curve, especially when $k = 4.5$. For $f_{12}$ and $f_{15}$, the results fluctuate somewhat with the value of $k$; however, when $k$ is within the range $[4, 5]$, the results are relatively better and more stable. Based on the above analysis, a value of 4.5 for $k$ is appropriate.

4.3. Effectiveness Test of CGWO: Comparison with Other Optimization Algorithms

To verify the performance of CGWO, this paper compares it with the intelligent algorithms PSO, FA [46], FOA [47], and GWO. The performance of all algorithms was evaluated according to the best value, mean value, and standard deviation of the results on the 23 test functions; for functions with non-fixed dimensions, performance was tested at dimensions 30 and 100. These test statistics are presented below, with the best experimental results in Table 6, Table 7 and Table 8 shown in bold. The test results on the seven unimodal benchmark functions are shown in Table 6.
It can be clearly seen from Table 6 that, for unimodal benchmark functions, the CGWO algorithm performs very well in both low and high dimensions. Specifically, for functions $f_1$ to $f_4$, CGWO achieves the theoretical optimal value at dimensions 30 and 100; the best value, mean value, and standard deviation are all 0, and the solution performance far exceeds that of the other algorithms. For $f_5$, the performance of CGWO and GWO is ahead of the other algorithms; there is little difference between CGWO and GWO in solution accuracy, but the standard deviation of CGWO is better, indicating that CGWO is more stable. For $f_6$ and $f_7$, the performance of CGWO is significantly better than that of the other four algorithms in both low and high dimensions, with results better by up to 10 orders of magnitude, and the best value, mean value, and standard deviation of CGWO are the best among the algorithms. Unimodal benchmark functions are usually used to test global search performance, and the above analysis shows that CGWO has a strong global search ability in both low and high dimensions.
The test results of the multi-modal benchmark function are shown in Table 7.
The test results in Table 7 show that CGWO has strong solution performance in both low and high dimensions for multimodal benchmark functions. For functions $f_9$ to $f_{12}$, the optimal value, mean value, and standard deviation of the CGWO results in low and high dimensions are significantly better than those of the other four algorithms. In particular, for $f_9$ and $f_{11}$, CGWO reaches the theoretical optimal value, with a standard deviation and mean value of 0. For $f_8$, CGWO has better solution accuracy, and its optimal value is closest to the theoretical optimum. For $f_{13}$, CGWO has the highest solution accuracy, with results 13 orders of magnitude better than those of the other algorithms. This analysis shows that CGWO performs strongly on multimodal benchmark functions, whose results usually reflect an algorithm’s local search performance and its susceptibility to local optima. From the test results, CGWO has good local search ability and the strongest ability to jump out of local optima, a result of fully exploiting the long-tail characteristics of the Cauchy distribution, which proves the effectiveness of CGWO.
The test results of the fixed, low-dimensional multimodal benchmark function are shown in Table 8.
According to Table 8, the CGWO algorithm also performs well on fixed-dimension multimodal benchmark functions. For each test function, the optimal value of the CGWO results is closest to the theoretical optimum and reaches it in most cases. In particular, for functions $f_{16}$ to $f_{20}$, the optimal value, mean value, and standard deviation of the CGWO results are the best. This excellent performance on fixed-dimension multimodal benchmark functions demonstrates that CGWO can also conduct a global search and jump out of local optima in fixed, low-dimensional spaces.
By analyzing the results of the above three categories of test functions (Table 6, Table 7 and Table 8), it can be concluded that CGWO has significant advantages in the accuracy, stability, and scalability of different types of test functions. In order to reflect the advantages of CGWO more intuitively, the convergence curves of the five algorithms for all test functions are shown in Figure 8.
By analyzing Figure 8, it can be found that the convergence speed of the CGWO algorithm is fast, especially for unimodal benchmark functions f 1 to f 4 and multi-modal benchmark functions f 9 to f 11 ; CGWO convergence speed is much faster than other algorithms: its accuracy is the highest, and it can quickly converge to the theoretical optimal value or the closest to the theoretical optimal value. It can be clearly seen from the iteration graphs of f 6 , f 8 , f 12 , and f 13 that, in the iteration process, CGWO jumps out of the local optimum value when other algorithms have an obvious tendency to fall into the local optimum, which reflects the excellent performance of CGWO in jumping out of the local optimum solution. Moreover, it fully reflects the advantages of the long tail characteristics of the Cauchy distribution. For f 7 , f 10 , and f 15 , although CGWO, like other algorithms, tends to fall into the local optimum, it has obvious advantages in solution accuracy and convergence speed compared with other algorithms, and the optimal value obtained is also closest to the theoretical optimal value. For f 21 , f 22 , and f 23 , CGWO, GWO, and FOA all have the ability to jump out of the local optimum; however, the accuracy and convergence speed of CGWO are better.
To sum up, CGWO has great advantages in accuracy, stability, scalability, and convergence, which fully verifies the feasibility and performance of the CGWO algorithm.

4.4. Validity Test of CGWO Inertia Weights: Comparison with Other Inertia Weight Strategies

In order to prove the effectiveness of the dynamic inertia weight based on the Cauchy distribution and logarithmic function proposed in this paper, three different types of inertia weights are selected from other improved Gray Wolf Optimizers (GWOs) in the literature for comparison. This paper calls the GWO algorithm improved by these three inertia weight strategies GWO1 [48], GWO2 [49], and GWO3 [50], respectively. The comparative test results of the algorithm improved by the Cauchy nonlinear weighting strategy and the GWO algorithm improved by these three weighting strategies are shown in Table 9, and the best experimental results are shown in bold.
As can be seen from Table 9, for the vast majority of test functions, the CGWO algorithm improved by the Cauchy nonlinear inertia weight has the best optimization performance. For each test function, the optimal value of the CGWO results is the closest to the theoretical optimum; in particular, functions $f_1$ to $f_4$, $f_9$, $f_{11}$, $f_{14}$, and $f_{15}$ to $f_{23}$ all reach the theoretical optimal value. For the vast majority of functions, the mean value and standard deviation of the CGWO results are also the best. Specifically, for $f_2$ and $f_3$, only CGWO converges to the theoretical optimal value, while the other algorithms cannot achieve it. For $f_6$, $f_7$, $f_{12}$, and $f_{13}$, CGWO outperforms the other algorithms by up to seven orders of magnitude. From the experimental results of the Cauchy weight strategy on the unimodal, multimodal, and fixed-dimension multimodal benchmark functions, it can be concluded that the proposed Cauchy nonlinear weight strategy improves the global search ability of the algorithm, enhances its local search performance, and reduces the possibility of falling into a local optimum. This fully demonstrates the advantage of the proposed CGWO inertia weight strategy in terms of solution performance.
In order to compare the optimization effects of different inertia weight strategies more intuitively on the algorithm, the convergence curves of the four inertia weight strategies on all test functions are shown in Figure 9.
It can be seen from Figure 9 that the performance of CGWO under most test functions has obvious advantages over other algorithms, especially functions f 1 to f 4 , f 9 , f 11 , f 14 , and f 15 . For these functions, the convergence speed and solution accuracy of the CGWO algorithm are much higher than those of other algorithms. To be specific, for f 6 , f 12 , and f 13 , it can be clearly seen that, when the other three algorithms fall into the local optimum, CGWO breaks out of the local optimum in the iteration process, thus making the solution accuracy much higher than that of the other three algorithms. This fully reflects the advantages brought by introducing the long tail characteristic of the Cauchy distribution into the inertia weight, which improves the population diversity of the algorithm in the later stage. It also greatly improves the ability of the algorithm to jump out of the local optimum. For f 7 , f 10 , f 14 , and f 15 , although the four algorithms all tend to fall into the local optimum, the optimization ability of CGWO is still the best. For the multi-mode benchmark function f 8 and the fixed dimension multi-mode benchmark functions f 21 to f 23 , CGWO also has excellent performance, and the convergence speed is much faster than other algorithms, allowing for high solution accuracy and the ability to jump out of the local optimum.
To sum up, CGWO outperforms the other three algorithms in terms of accuracy, stability, and convergence, and CGWO has a strong ability to jump out of the local optimum, which shows that, for GWO, the Cauchy nonlinear dynamic inertia weight strategy has better performance than the other three inertia weights, reflecting the effectiveness and improvement of the proposed weight strategy.

4.5. Performance Comparison of CGWO with Several Other Improved GWOs

To further study the performance of CGWO, three improved GWO algorithms, MGWO [51], HGWO [52], and LGWO [53], were selected in this section for comparison with the CGWO proposed in this paper. The implementation results are shown in Table 10, and the best experimental results are shown in bold.
As can be seen from Table 10, the CGWO algorithm proposed in this paper has the best performance on most of the unimodal, multimodal, and fixed-dimension multimodal benchmark functions. Specifically, for unimodal benchmark functions $f_1$ to $f_7$, the optimal value, average value, and standard deviation of the CGWO results are the best; for functions $f_1$ to $f_4$, only CGWO achieves the theoretical optimal value, with a mean value and standard deviation of 0. For $f_7$, CGWO, like MGWO, HGWO, and LGWO, fails to reach the theoretical optimal value; however, its optimal value is the closest to the theoretical optimum and the most stable, reflecting the excellent global search ability of CGWO. For the multimodal benchmark functions $f_8$ to $f_{13}$, the three evaluation indexes of the CGWO results are also optimal, which reflects its strong local search ability and its resistance to falling into local optima. For functions $f_9$ to $f_{11}$, CGWO reaches the theoretical optimal value. For several other multimodal benchmark functions, the CGWO results are up to five orders of magnitude better than those of the other algorithms, being not only accurate but also the most stable. For fixed-dimension multimodal benchmark functions $f_{14}$ to $f_{23}$, CGWO also has obvious advantages, with the highest solution accuracy and high stability, which proves that its solution ability is also excellent in multimodal problems with fixed low dimensions. In general, the proposed CGWO outperforms the other three improved GWO algorithms in solving the unimodal, multimodal, and fixed-dimension multimodal benchmark functions.
In order to further compare the performance of the four improved GWO algorithms, this paper draws the iterative convergence curves of the four improved algorithms under different test functions, as shown in Figure 10.
From Figure 10, it can be found that, for f 1 to f 4 , f 9 , and f 11 , CGWO has the fastest iteration speed and highest accuracy, and its performance far exceeds that of the other three improved algorithms. For f 6 , f 12 , and f 13 , CGWO can jump out of the local optimum when the other three improved GWOs fall into the local optimum solution, which fully reflects the ability of the Cauchy distribution to reduce the possibility of falling into the local optimum and proves that the improved idea in this paper can solve the problem that the traditional GWO algorithm can easily fall into the local optimum to a certain extent. For f 7 , f 8 , f 14 , f 15 , f 18 , and f 20 , although the change trends of the four improved algorithms are very similar, it can be seen that CGWO has a faster iteration speed and higher accuracy. For functions f 21 to f 23 , the four algorithms all have the ability to jump out of the local optimum under fixed low dimensions; however, the advantages of CGWO are more obvious and are reflected in its better iteration speed and accuracy. From the above analysis, it can be concluded that, among the four improved GWO algorithms, CGWO has a wider and better improvement effect, stronger search performance, faster iteration speed, and more stability.

4.6. Wilcoxon Rank Sum Test

To further evaluate the effectiveness and optimization performance of CGWO, this paper uses the Wilcoxon rank-sum test to verify whether the running results of CGWO differ significantly from those of the other algorithms at a significance level of α = 5% [54]. When p < α, the H0 hypothesis is rejected, indicating a significant difference between the two algorithms; when p > α, the H0 hypothesis is accepted, indicating no significant difference. The test results of the CGWO, PSO, FA, FOA, and GWO algorithms at a significance level of α = 5% are shown in Table 11.
As can be seen from Table 11, for most test functions, the p values of CGWO are less than the significance level α, so its results differ significantly from those of the other four algorithms.
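As an illustration of the procedure, the following sketch runs the same test with SciPy on stand-in data; the per-run values are synthetic placeholders, not the paper’s results.

```python
import numpy as np
from scipy.stats import ranksums

# Synthetic stand-ins for 30 independent runs of two algorithms on one
# benchmark function (NOT the paper's data).
rng = np.random.default_rng(0)
cgwo_runs = rng.normal(loc=1e-8, scale=1e-9, size=30)
gwo_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)

stat, p = ranksums(cgwo_runs, gwo_runs)
alpha = 0.05
verdict = "significant difference" if p < alpha else "no significant difference"
print(f"rank-sum statistic = {stat:.3f}, p = {p:.3e}: {verdict}")
```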

5. Solving a Pressure Vessel Design Problem

The design of pressure vessels represents a time-honored engineering challenge. The principal objective here is the cost optimization of cylindrical pressure vessels. The focus is on minimizing the manufacturing costs, which include processes such as pairing, forming, and welding. A schematic representation of a pressure vessel is presented in Figure 11.
As shown in Figure 11, the pressure vessel is capped by hemispherical heads, and the design optimization variables include the length of the cylindrical section ($L$), the inner radius ($R$), the shell thickness ($T_s$), and the head thickness ($T_h$). These four variables are critical in pressure vessel design. The mathematical model of this problem is expressed as follows.
$\min f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$ (16)
$x = (x_1, x_2, x_3, x_4) = (T_s, T_h, R, L)$
$\text{s.t.} \quad g_1(x) = -x_1 + 0.0193 x_3 \le 0$
$g_2(x) = -x_2 + 0.00954 x_3 \le 0$
$g_3(x) = -\pi x_3^2 x_4 - \dfrac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0$
$g_4(x) = x_4 - 240 \le 0$
$0 \le x_1 \le 99, \quad 0 \le x_2 \le 99, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200$
Equation (16) constitutes the objective function of the classical pressure vessel design problem, delineating the minimization objective: finding an optimal configuration of the four design variables, namely, shell thickness $T_s$, head thickness $T_h$, inner radius $R$, and cylinder length $L$. In the literature, this problem has been solved with classical mathematical techniques such as the augmented Lagrange multiplier method [55] and branch and bound [56]. This study applies intelligent optimization algorithms, namely, CGWO, PSO, FA, FOA, GWO, SCA, and WDO, to the problem; the resulting solutions are displayed in Table 12.
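For reference, the objective and constraints of Equation (16) translate directly into code. The static-penalty wrapper below is an assumption on our part, since the paper does not state its constraint-handling scheme; the penalty form and coefficient are illustrative.

```python
import numpy as np

def pressure_vessel_cost(x):
    """Objective of Equation (16); x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def penalized_cost(x, penalty=1e6):
    """Static-penalty wrapper so an unconstrained optimizer such as CGWO
    can be applied (assumed constraint handling, not from the paper)."""
    x1, x2, x3, x4 = x
    g = np.array([
        -x1 + 0.0193 * x3,                                               # g1
        -x2 + 0.00954 * x3,                                              # g2
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1_296_000,   # g3
        x4 - 240.0,                                                      # g4
    ])
    return pressure_vessel_cost(x) + penalty * np.sum(np.maximum(g, 0.0) ** 2)

# Usage with the CGWO sketch from Section 3.5 (bounds per Equation (16)):
# best, cost = cgwo(penalized_cost, dim=4,
#                   lb=np.array([0, 0, 10, 10]),
#                   ub=np.array([99, 99, 200, 200]))
```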
Table 12 shows that CGWO achieved the best results, which fully demonstrates its ability to obtain favorable outcomes in practical engineering problems.

6. Conclusions

In this study, we introduce a refined Gray Wolf Optimization algorithm, CGWO, innovatively enhanced using the Cauchy distribution. This advancement is primarily aimed at overcoming the limitations of traditional GWO, such as its susceptibility to local optima and subpar optimization accuracy. Capitalizing on the long-tail characteristic of the Cauchy distribution, the CGWO algorithm incorporates an initialization strategy to broaden the search space from the outset. Moreover, by integrating a dynamic, nonlinear inertia weight strategy derived from both the Cauchy distribution and logarithmic functions, CGWO adeptly adapts to the distinct stages of the optimization process. This approach ensures a more effective balance between global and local searches, thereby mitigating the likelihood of converging on local optima and facilitating a swifter rate of convergence.
A key feature of CGWO is the implementation of a Cauchy mutation strategy, guided by dynamically changing inertia weights that serve as probabilities for mutation. In the algorithm’s later stages, this strategy enhances population diversity, harnessing a survival-of-the-fittest approach to control the disturbances induced by Cauchy mutations. This aspect significantly aids the algorithm in escaping local optima, thus improving both the accuracy and convergence speed.
To validate the enhancements proposed in this paper, CGWO was rigorously tested against 23 standard test functions from various perspectives. Comparative analysis with other optimization algorithms and improved GWO variants revealed that CGWO exhibits superior optimization capabilities and faster convergence, with the Cauchy dynamic strategy outperforming other inertial weight strategies. Further substantiation through the Wilcoxon rank-sum test established CGWO’s significant improvements over several competing algorithms, highlighting the efficacy and potential of our proposed enhancements in refining the basic Gray Wolf Optimizer and reducing the risk of entrapment in local optima.
Applying CGWO to a classical real-world engineering problem (specifically, pressure vessel design), the algorithm demonstrated notable effectiveness, suggesting its viability for complex engineering challenges. This indicates promising prospects for CGWO in addressing practical optimization problems, underscoring the feasibility and impact of our improvement strategy.
Future work will focus on a more comprehensive and in-depth performance evaluation of CGWO. Additionally, exploring its application in areas such as power system energy load prediction, autonomous vehicle fault diagnosis, and optimal path planning presents intriguing avenues for further research.

Author Contributions

Conceptualization, K.S.; Data curation, J.L.; Formal analysis, J.L.; Funding acquisition, K.S.; Methodology, J.L. and K.S.; Project administration, K.S.; Resources, K.S.; Software, J.L.; Supervision, K.S.; Validation, J.L.; Visualization, J.L.; Writing—original draft, J.L.; Writing—review and editing, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National College Students’ Innovation and Entrepreneurship training program (202310293152E), and National-Local Joint Project Engineering Lab of RF Integration & Micropackage, Nanjing 210023, China.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [IEEE Xplore] at [doi:10.1109/4235.771163], ref. [45].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  2. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  3. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  4. Zamfirache, I.A.; Precup, R.-E.; Roman, R.-C.; Petriu, E.M. Policy Iteration Reinforcement Learning-Based Control Using a Grey Wolf Optimizer Algorithm. Inf. Sci. 2022, 585, 162–175. [Google Scholar] [CrossRef]
  5. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An Improved Grey Wolf Optimizer for Solving Engineering Problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  6. Zhang, Z.; Hong, W.-C. Application of Variational Mode Decomposition and Chaotic Grey Wolf Optimizer with Support Vector Regression for Forecasting Electric Loads. Knowl.-Based Syst. 2021, 228, 107297. [Google Scholar] [CrossRef]
  7. Chakraborty, C.; Kishor, A.; Rodrigues, J.J.P.C. Novel Enhanced-Grey Wolf Optimization Hybrid Machine Learning Technique for Biomedical Data Computation. Comput. Electr. Eng. 2022, 99, 107778. [Google Scholar] [CrossRef]
  8. Zaid, S.A.; Bakeer, A.; Magdy, G.; Albalawi, H.; Kassem, A.M.; El-Shimy, M.E.; AbdelMeguid, H.; Manqarah, B. A New Intelligent Fractional-Order Load Frequency Control for Interconnected Modern Power Systems with Virtual Inertia Control. Fractal Fract. 2023, 7, 62. [Google Scholar] [CrossRef]
  9. Azizi, S.; Shakibi, H.; Shokri, A.; Chitsaz, A.; Yari, M. Multi-Aspect Analysis and RSM-Based Optimization of a Novel Dual-Source Electricity and Cooling Cogeneration System. Appl. Energy 2023, 332, 120487. [Google Scholar] [CrossRef]
  10. Ullah, I.; Liu, K.; Yamamoto, T.; Shafiullah, M.; Jamal, A. Grey Wolf Optimizer-Based Machine Learning Algorithm to Predict Electric Vehicle Charging Duration Time. Transp. Lett. 2023, 15, 889–906. [Google Scholar] [CrossRef]
  11. Wang, Y.; He, X.; Zhang, L.; Ma, X.; Wu, W.; Nie, R.; Chi, P.; Zhang, Y. A Novel Fractional Time-Delayed Grey Bernoulli Forecasting Model and Its Application for the Energy Production and Consumption Prediction. Eng. Appl. Artif. Intell. 2022, 110, 104683. [Google Scholar] [CrossRef]
  12. Elsisi, M. Improved Grey Wolf Optimizer Based on Opposition and Quasi Learning Approaches for Optimization: Case Study Autonomous Vehicle Including Vision System. Artif. Intell. Rev. 2022, 55, 5597–5620. [Google Scholar] [CrossRef]
  13. Shaheen, M.A.M.; Hasanien, H.M.; Alkuhayli, A. A Novel Hybrid GWO-PSO Optimization Technique for Optimal Reactive Power Dispatch Problem Solution. Ain Shams Eng. J. 2021, 12, 621–630. [Google Scholar] [CrossRef]
  14. Hu, H.; Li, Y.; Zhang, X.; Fang, M. A Novel Hybrid Model for Short-Term Prediction of Wind Speed. Pattern Recognit. 2022, 127, 108623. [Google Scholar] [CrossRef]
  15. Boursianis, A.D.; Papadopoulou, M.S.; Salucci, M.; Polo, A.; Sarigiannidis, P.; Psannis, K.; Mirjalili, S.; Koulouridis, S.; Goudos, S.K. Emerging Swarm Intelligence Algorithms and Their Applications in Antenna Design: The GWO, WOA, and SSA Optimizers. Appl. Sci. 2021, 11, 8330. [Google Scholar] [CrossRef]
  16. Liu, L.; Li, L.; Nian, H.; Lu, Y.; Zhao, H.; Chen, Y. Enhanced Grey Wolf Optimization Algorithm for Mobile Robot Path Planning. Electronics 2023, 12, 4026. [Google Scholar] [CrossRef]
  17. Long, W.; Jiao, J.; Liang, X.; Tang, M. An Exploration-Enhanced Grey Wolf Optimizer to Solve High-Dimensional Numerical Optimization. Eng. Appl. Artif. Intell. 2018, 68, 63–80. [Google Scholar] [CrossRef]
  18. Meidani, K.; Hemmasian, A.; Mirjalili, S.; Barati Farimani, A. Adaptive Grey Wolf Optimizer. Neural Comput. Appl. 2022, 34, 7711–7731. [Google Scholar] [CrossRef]
  19. Zhang, X.; Zhang, Y.; Ming, Z. Improved Dynamic Grey Wolf Optimizer. Front. Inf. Technol. Electron. Eng. 2021, 22, 877–890. [Google Scholar] [CrossRef]
  20. Sun, L.; Feng, B.; Chen, T.; Zhao, D.; Xin, Y. Equalized Grey Wolf Optimizer with Refraction Opposite Learning. Comput. Intell. Neurosci. 2022, 2022, 2721490. [Google Scholar] [CrossRef]
  21. Li, C.; Peng, T.; Zhu, Y. A Cutting Pattern Recognition Method for Shearers Based on ICEEMDAN and Improved Grey Wolf Optimizer Algorithm-Optimized SVM. Appl. Sci. 2021, 11, 9081. [Google Scholar] [CrossRef]
  22. Zhou, Y.; Yang, X.; Tao, L.; Yang, L. Transformer Fault Diagnosis Model Based on Improved Gray Wolf Optimizer and Probabilistic Neural Network. Energies 2021, 14, 3029. [Google Scholar] [CrossRef]
  23. Dereli, S. A New Modified Grey Wolf Optimization Algorithm Proposal for a Fundamental Engineering Problem in Robotics. Neural Comput. Appl. 2021, 33, 14119–14131. [Google Scholar] [CrossRef]
  24. Heidari, A.A.; Pahlavani, P. An Efficient Modified Grey Wolf Optimizer with Lévy Flight for Optimization Tasks. Appl. Soft Comput. 2017, 60, 115–134. [Google Scholar] [CrossRef]
  25. Mahdy, A.; Shaheen, A.; El-Sehiemy, R.; Ginidi, A. Artificial Ecosystem Optimization by Means of Fitness Distance Balance Model for Engineering Design Optimization. J. Supercomput. 2023, 79, 18021–18052. [Google Scholar] [CrossRef]
  26. Özbay, F.A. A Modified Seahorse Optimization Algorithm Based on Chaotic Maps for Solving Global Optimization and Engineering Problems. Eng. Sci. Technol. Int. J. 2023, 41, 101408. [Google Scholar] [CrossRef]
  27. Assiri, A.S. On the Performance Improvement of Butterfly Optimization Approaches for Global Optimization and Feature Selection. PLoS ONE 2021, 16, e0242612. [Google Scholar] [CrossRef]
  28. Pekgör, A. A Novel Goodness-of-Fit Test for Cauchy Distribution. J. Math. 2023, 2023, 9200213. [Google Scholar] [CrossRef]
  29. Wang, C.; Yang, Y.; Shu, Q.; Yu, C.; Cui, Z. Point Cloud Registration Algorithm Based on Cauchy Mixture Model. IEEE Photonics J. 2021, 13, 6900213. [Google Scholar] [CrossRef]
  30. Akaoka, Y.; Okamura, K.; Otobe, Y. Bahadur Efficiency of the Maximum Likelihood Estimator and One-Step Estimator for Quasi-Arithmetic Means of the Cauchy Distribution. Ann. Inst. Stat. Math. 2022, 74, 895–923. [Google Scholar] [CrossRef]
  31. Li, L.; Qian, S.; Li, Z.; Li, S. Application of Improved Satin Bowerbird Optimizer in Image Segmentation. Front. Plant Sci. 2022, 13, 915811. [Google Scholar] [CrossRef]
  32. Luo, K. Enhanced Grey Wolf Optimizer with a Model for Dynamically Estimating the Location of the Prey. Appl. Soft Comput. 2019, 77, 225–235. [Google Scholar] [CrossRef]
  33. Gupta, S.; Deep, K. Cauchy Grey Wolf Optimiser for Continuous Optimisation Problems. J. Exp. Theor. Artif. Intell. 2018, 30, 1051–1075. [Google Scholar] [CrossRef]
  34. Li, J.; Yang, F. Task Assignment Strategy for Multi-Robot Based on Improved Grey Wolf Optimizer. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 6319–6335. [Google Scholar] [CrossRef]
  35. Ma, W.; Wang, M.; Zhu, X. Improved Particle Swarm Optimization Based Approach for Bilevel Programming Problem—An Application on Supply Chain Model. Int. J. Mach. Learn. Cybern. 2014, 5, 281–292. [Google Scholar] [CrossRef]
  36. Bajer, D.; Martinović, G.; Brest, J. A Population Initialization Method for Evolutionary Algorithms Based on Clustering and Cauchy Deviates. Expert Syst. Appl. 2016, 60, 294–310. [Google Scholar] [CrossRef]
  37. Shi, Y.; Eberhart, R. A Modified Particle Swarm Optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  38. Yang, X.; Qiu, Y. Research on Improving Gray Wolf Algorithm Based on Multi-Strategy Fusion. IEEE Access 2023, 11, 66135–66149. [Google Scholar] [CrossRef]
  39. Li, S.; Xu, K.; Xue, G.; Liu, J.; Xu, Z. Prediction of Coal Spontaneous Combustion Temperature Based on Improved Grey Wolf Optimizer Algorithm and Support Vector Regression. Fuel 2022, 324, 124670. [Google Scholar] [CrossRef]
  40. Wang, J.; Wang, X.; Li, X.; Yi, J. A Hybrid Particle Swarm Optimization Algorithm with Dynamic Adjustment of Inertia Weight Based on a New Feature Selection Method to Optimize SVM Parameters. Entropy 2023, 25, 531. [Google Scholar] [CrossRef]
  41. Gu, Y.; Lu, H.; Xiang, L.; Shen, W. Adaptive Simplified Chicken Swarm Optimization Based on Inverted S-Shaped Inertia Weight. Chin. J. Electron. 2022, 31, 367–386. [Google Scholar] [CrossRef]
  42. Ali, M.; Pant, M. Improving the Performance of Differential Evolution Algorithm Using Cauchy Mutation. Soft Comput. 2011, 15, 991–1007. [Google Scholar] [CrossRef]
  43. Yu, H.; Song, J.; Chen, C.; Heidari, A.A.; Liu, J.; Chen, H.; Zaguia, A.; Mafarja, M. Image Segmentation of Leaf Spot Diseases on Maize Using Multi-Stage Cauchy-Enabled Grey Wolf Algorithm. Eng. Appl. Artif. Intell. 2022, 109, 104653. [Google Scholar] [CrossRef]
  44. Saremi, S.; Mirjalili, S.Z.; Mirjalili, S.M. Evolutionary Population Dynamics and Grey Wolf Optimizer. Neural Comput. Appl. 2015, 26, 1257–1263. [Google Scholar] [CrossRef]
  45. Yao, X.; Liu, Y.; Lin, G. Evolutionary Programming Made Faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  46. Yang, X.-S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Bristol, UK, 2010. [Google Scholar]
  47. Pan, W.-T. A New Fruit Fly Optimization Algorithm: Taking the Financial Distress Model as an Example. Knowl. Based Syst. 2012, 26, 69–74. [Google Scholar] [CrossRef]
  48. Lu, Y.; Li, S. Green Transportation Model in Logistics Considering the Carbon Emissions Costs Based on Improved Grey Wolf Algorithm. Sustainability 2023, 15, 11090. [Google Scholar] [CrossRef]
  49. Liang, B.; Zhang, T. Fractional Order Nonsingular Terminal Sliding Mode Cooperative Fault-Tolerant Control for High-Speed Trains With Actuator Faults Based on Grey Wolf Optimization. IEEE Access 2023, 11, 63932–63946. [Google Scholar] [CrossRef]
  50. Luo, Y.; Qin, Q.; Hu, Z.; Zhang, Y. Path Planning for Unmanned Delivery Robots Based on EWB-GWO Algorithm. Sensors 2023, 23, 1867. [Google Scholar] [CrossRef]
  51. Kumar, R.; Singh, L.; Tiwari, R. Path Planning for the Autonomous Robots Using Modified Grey Wolf Optimization Approach. J. Intell. Fuzzy Syst. 2021, 40, 9453–9470. [Google Scholar] [CrossRef]
  52. Wang, Y.; Zhu, Q.; Ma, H.; Yu, H. A Hybrid Gray Wolf Optimizer for Hyperspectral Image Band Selection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5527713. [Google Scholar] [CrossRef]
  53. Tang, M.; Yi, J.; Wu, H.; Wang, Z. Fault Detection of Wind Turbine Electric Pitch System Based on IGWO-ERF. Sensors 2021, 21, 6215. [Google Scholar] [CrossRef]
  54. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry Gas Solubility Optimization: A Novel Physics-Based Algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  55. Kannan, B.K.; Kramer, S.N. An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and Its Applications to Mechanical Design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
  56. Sandgren, E. Nonlinear Integer and Discrete Programming in Mechanical Design Optimization. J. Mech. Des. 1990, 112, 223–229. [Google Scholar] [CrossRef]
Figure 1. Chart of ranks.
Figure 2. Schematic diagram.
Figure 3. Cauchy distribution graph: (a) graphs for different parameter settings; (b) comparison of the Cauchy and Gaussian distributions. The red curve is the standard Cauchy distribution.
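For reference, the curves in Figure 3 follow the Cauchy probability density with location parameter $x_0$ and scale parameter $\gamma$; the standard Cauchy curve corresponds to $x_0 = 0$ and $\gamma = 1$:

$$ f(x; x_0, \gamma) = \frac{1}{\pi \gamma \left[ 1 + \left( \frac{x - x_0}{\gamma} \right)^{2} \right]} $$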
Figure 4. Comparison of random initialization and Cauchy random initialization: (a) random initialization; (b) Cauchy random initialization.
Figure 5. Random numbers versus Cauchy random numbers.
Figure 6. Line plots of the results with different scale parameters.
Figure 7. Line plots of the results with different values of k.
Figure 8. Comparison of different algorithms.
Figure 9. Comparison of different weight strategies.
Figure 10. Comparison chart of different improvements.
Figure 11. Schematic diagram of the pressure vessel structure.
Table 1. Unimodal reference functions.

Function | Dim | Search Ranges | Optimal Solutions
$f_1(x) = \sum_{i=1}^{n} x_i^{2}$ | 30 | [−100, 100] | 0
$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
$f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^{2}$ | 30 | [−100, 100] | 0
$f_4(x) = \max_i \{ |x_i|, \; 1 \le i \le n \}$ | 30 | [−100, 100] | 0
$f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^{2})^{2} + (x_i - 1)^{2} \right]$ | 30 | [−30, 30] | 0
$f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^{2}$ | 30 | [−100, 100] | 0
$f_7(x) = \sum_{i=1}^{n} i x_i^{4} + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28] | 0
Table 2. Multimodal reference functions.

Function | Dim | Search Ranges | Optimal Solutions
$f_8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|})$ | 30 | [−500, 500] | −12,569.5
$f_9(x) = \sum_{i=1}^{n} \left[ x_i^{2} - 10 \cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^{2}} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$ | 30 | [−32, 32] | 0
$f_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^{2} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 30 | [−600, 600] | 0
$f_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^{2}(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^{2} \left[ 1 + 10 \sin^{2}(\pi y_{i+1}) \right] + (y_n - 1)^{2} \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, with $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^{m}, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^{m}, & x_i < -a \end{cases}$ | 30 | [−50, 50] | 0
$f_{13}(x) = 0.1 \left\{ \sin^{2}(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^{2} \left[ 1 + \sin^{2}(3\pi x_{i+1}) \right] + (x_n - 1)^{2} \left[ 1 + \sin^{2}(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0
Table 3. Fixed-dimension multimodal reference functions.

Function | Dim | Search Ranges | Optimal Solutions
$f_{14}(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^{6}} \right]^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^{2} + b_i x_2)}{b_i^{2} + b_i x_3 + x_4} \right]^{2}$ | 4 | [−5, 5] | 0.0003
$f_{16}(x) = 4 x_1^{2} - 2.1 x_1^{4} + \frac{1}{3} x_1^{6} + x_1 x_2 - 4 x_2^{2} + 4 x_2^{4}$ | 2 | [−5, 5] | −1.03162
$f_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^{2}} x_1^{2} + \frac{5}{\pi} x_1 - 6 \right)^{2} + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$ | 2 | [−5, 5] | 0.398
$f_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^{2} (19 - 14 x_1 + 3 x_1^{2} - 14 x_2 + 6 x_1 x_2 + 3 x_2^{2}) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^{2} (18 - 32 x_1 + 12 x_1^{2} + 48 x_2 - 36 x_1 x_2 + 27 x_2^{2}) \right]$ | 2 | [−2, 2] | 3
$f_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left[ -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^{2} \right]$ | 3 | [0, 1] | −3.86
$f_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left[ -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^{2} \right]$ | 6 | [0, 1] | −3.32
$f_{21}(x) = -\sum_{i=1}^{5} \left[ (x - a_i)(x - a_i)^{T} + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
$f_{22}(x) = -\sum_{i=1}^{7} \left[ (x - a_i)(x - a_i)^{T} + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
$f_{23}(x) = -\sum_{i=1}^{10} \left[ (x - a_i)(x - a_i)^{T} + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363
Table 4. The test results of CGWO with different values of γ.

γ | Results | f1 | f7 | f10 | f12 | f15
0.1 | Mean | 0 | 9.522 × 10^−5 | 8.882 × 10^−16 | 1.6169 × 10^−2 | 2.5387 × 10^−3
0.2 | Mean | 0 | 9.105 × 10^−5 | 8.882 × 10^−16 | 4.1501 × 10^−3 | 1.1452 × 10^−3
0.3 | Mean | 0 | 9.037 × 10^−5 | 8.882 × 10^−16 | 3.2343 × 10^−3 | 1.2585 × 10^−3
0.4 | Mean | 0 | 1.170 × 10^−4 | 8.882 × 10^−16 | 1.4396 × 10^−3 | 1.8597 × 10^−3
0.5 | Mean | 0 | 1.098 × 10^−4 | 8.882 × 10^−16 | 1.2138 × 10^−3 | 1.1940 × 10^−3
0.6 | Mean | 0 | 9.788 × 10^−5 | 8.882 × 10^−16 | 1.9674 × 10^−3 | 4.4491 × 10^−4
0.7 | Mean | 0 | 9.865 × 10^−5 | 8.882 × 10^−16 | 8.2372 × 10^−4 | 3.8147 × 10^−3
0.8 | Mean | 0 | 1.262 × 10^−4 | 8.882 × 10^−16 | 6.5960 × 10^−4 | 1.0151 × 10^−3
0.9 | Mean | 0 | 1.226 × 10^−4 | 8.882 × 10^−16 | 4.2559 × 10^−4 | 1.0260 × 10^−3
1.0 | Mean | 0 | 9.131 × 10^−5 | 8.882 × 10^−16 | 2.2409 × 10^−4 | 4.3320 × 10^−4
1.1 | Mean | 0 | 1.044 × 10^−4 | 8.882 × 10^−16 | 2.2394 × 10^−4 | 1.0942 × 10^−3
1.2 | Mean | 0 | 1.331 × 10^−4 | 8.882 × 10^−16 | 5.5001 × 10^−4 | 1.0882 × 10^−3
1.3 | Mean | 0 | 9.928 × 10^−5 | 8.882 × 10^−16 | 8.7611 × 10^−4 | 1.7111 × 10^−3
Those in bold are the cases where the result is the best.
Table 5. The test results of CGWO with different values of k.

k | Results | f1 | f7 | f12 | f15
0 | Mean | 0 | 4.922 × 10^−4 | 5.9110 × 10^−3 | 3.7223 × 10^−3
0.5 | Mean | 0 | 5.417 × 10^−4 | 3.3055 × 10^−3 | 3.0463 × 10^−3
1.0 | Mean | 0 | 2.720 × 10^−4 | 5.1541 × 10^−3 | 1.7624 × 10^−3
1.5 | Mean | 0 | 3.230 × 10^−4 | 4.4289 × 10^−3 | 3.0011 × 10^−3
2.0 | Mean | 0 | 2.429 × 10^−4 | 3.3075 × 10^−3 | 1.0698 × 10^−3
2.5 | Mean | 0 | 3.157 × 10^−4 | 4.4238 × 10^−3 | 1.7514 × 10^−3
3.0 | Mean | 0 | 2.907 × 10^−4 | 6.6250 × 10^−3 | 1.7448 × 10^−3
3.5 | Mean | 0 | 3.646 × 10^−4 | 3.2158 × 10^−3 | 1.6956 × 10^−3
4.0 | Mean | 0 | 2.353 × 10^−5 | 2.2255 × 10^−3 | 1.0748 × 10^−3
4.5 | Mean | 0 | 1.843 × 10^−5 | 2.1837 × 10^−3 | 4.1145 × 10^−4
5.0 | Mean | 0 | 2.154 × 10^−4 | 2.1836 × 10^−3 | 1.0103 × 10^−3
5.5 | Mean | 0 | 2.112 × 10^−4 | 5.3878 × 10^−3 | 1.7956 × 10^−3
6.0 | Mean | 0 | 2.112 × 10^−4 | 4.4375 × 10^−3 | 1.0385 × 10^−3
Those in bold are the cases where the result is the best.
Table 6. Test results for unimodal benchmark functions.

Functions | Dim | Results | PSO | FA | FOA | GWO | CGWO
f1 | 30 | Best | 1.2110 × 10^3 | 7.1812 × 10^2 | 2.1618 × 10^4 | 3.6446 × 10^−37 | 0
f1 | 30 | Mean | 1.6135 × 10^3 | 2.6262 × 10^3 | 2.9903 × 10^4 | 5.2291 × 10^−35 | 0
f1 | 30 | Std | 2.0078 × 10^2 | 1.8835 × 10^3 | 5.2243 × 10^3 | 1.5900 × 10^−34 | 0
f1 | 100 | Best | 1.5010 × 10^4 | 2.0700 × 10^4 | 1.2612 × 10^5 | 1.2832 × 10^−19 | 0
f1 | 100 | Mean | 1.7413 × 10^4 | 5.0758 × 10^4 | 1.5794 × 10^5 | 2.6498 × 10^−18 | 0
f1 | 100 | Std | 2.1027 × 10^3 | 1.6339 × 10^4 | 1.3272 × 10^4 | 3.2259 × 10^−18 | 0
f2 | 30 | Best | 1.6057 × 10^7 | 32.5437 | 1.7440 × 10^2 | 4.0221 × 10^−22 | 0
f2 | 30 | Mean | 1.4651 × 10^13 | 62.4849 | 2.9393 × 10^6 | 3.3652 × 10^−21 | 0
f2 | 30 | Std | 6.0585 × 10^13 | 23.0419 | 6.1964 × 10^6 | 2.9677 × 10^−21 | 0
f2 | 100 | Best | 1.9403 × 10^44 | 1.7270 × 10^12 | 7.5431 × 10^23 | 5.2952 × 10^−12 | 0
f2 | 100 | Mean | 4.6752 × 10^52 | 5.9032 × 10^33 | 6.6341 × 10^34 | 1.3235 × 10^−11 | 0
f2 | 100 | Std | 2.5087 × 10^53 | 3.2334 × 10^34 | 2.1916 × 10^35 | 5.0888 × 10^−12 | 0
f3 | 30 | Best | 3.4135 × 10^3 | 1.9285 × 10^4 | 3.8240 × 10^4 | 1.2897 × 10^−10 | 0
f3 | 30 | Mean | 5.7925 × 10^3 | 3.4836 × 10^4 | 6.5574 × 10^4 | 3.3327 × 10^−5 | 0
f3 | 30 | Std | 2.2659 × 10^3 | 1.3002 × 10^4 | 1.3840 × 10^4 | 1.5905 × 10^−4 | 0
f3 | 100 | Best | 8.4662 × 10^4 | 2.5906 × 10^5 | 3.8038 × 10^5 | 11.0536 | 0
f3 | 100 | Mean | 1.2955 × 10^5 | 3.8818 × 10^5 | 9.2044 × 10^5 | 4.2216 × 10^2 | 0
f3 | 100 | Std | 2.8592 × 10^4 | 1.1679 × 10^5 | 3.2277 × 10^5 | 4.6099 × 10^2 | 0
f4 | 30 | Best | 12.5449 | 54.8374 | 54.8327 | 7.5164 × 10^−10 | 0
f4 | 30 | Mean | 16.9799 | 77.3924 | 62.2328 | 1.3451 × 10^−8 | 0
f4 | 30 | Std | 1.2794 | 8.1602 | 3.6718 | 1.6314 × 10^−8 | 0
f4 | 100 | Best | 37.2184 | 91.8108 | 75.5407 | 2.6104 × 10^−2 | 0
f4 | 100 | Mean | 46.4379 | 96.3814 | 79.8232 | 1.2104 | 0
f4 | 100 | Std | 5.2803 | 1.8427 | 2.4394 | 1.2159 | 0
f5 | 30 | Best | 1.1218 × 10^7 | 7.9773 × 10^5 | 2.1455 × 10^7 | 26.0577 | 26.3554
f5 | 30 | Mean | 2.2226 × 10^7 | 1.0845 × 10^7 | 4.9234 × 10^7 | 27.7243 | 27.6375
f5 | 30 | Std | 4.5716 × 10^6 | 1.1956 × 10^7 | 1.5647 × 10^7 | 0.8793 | 0.6257
f5 | 100 | Best | 4.6114 × 10^8 | 1.9222 × 10^8 | 3.2673 × 10^8 | 97.0488 | 96.9381
f5 | 100 | Mean | 6.5344 × 10^8 | 4.0775 × 10^8 | 4.77034 × 10^8 | 98.2656 | 98.3083
f5 | 100 | Std | 7.9990 × 10^7 | 1.6320 × 10^8 | 8.2852 × 10^7 | 0.4232 | 0.3999
f6 | 30 | Best | 1.2093 × 10^3 | 3.4252 × 10^2 | 2.0372 × 10^4 | 0.5023 | 9.4244 × 10^−7
f6 | 30 | Mean | 1.6729 × 10^3 | 1.7824 × 10^3 | 2.9820 × 10^4 | 1.6860 | 1.0569 × 10^−4
f6 | 30 | Std | 2.0151 × 10^2 | 1.8743 × 10^3 | 3.7119 × 10^3 | 0.5057 | 1.4880 × 10^−4
f6 | 100 | Best | 1.2944 × 10^4 | 1.5678 × 10^4 | 1.3426 × 10^5 | 11.2247 | 6.9877 × 10^−4
f6 | 100 | Mean | 1.7326 × 10^4 | 4.8463 × 10^4 | 1.5674 × 10^5 | 13.6838 | 0.1442
f6 | 100 | Std | 2.6984 × 10^3 | 1.8092 × 10^4 | 1.0750 × 10^4 | 1.0306 | 0.1805
f7 | 30 | Best | 72.7686 | 14.6170 | 15.4236 | 7.1640 × 10^−4 | 1.5969 × 10^−6
f7 | 30 | Mean | 1.3792 × 10^2 | 45.3399 | 26.7255 | 2.7571 × 10^−3 | 1.2706 × 10^−4
f7 | 30 | Std | 26.7501 | 17.4291 | 6.5299 | 1.5707 × 10^−3 | 2.4114 × 10^−4
f7 | 100 | Best | 1.7034 × 10^3 | 1.0441 × 10^3 | 4.3114 × 10^2 | 2.0947 × 10^−3 | 1.0841 × 10^−5
f7 | 100 | Mean | 2.1359 × 10^3 | 1.4901 × 10^3 | 7.3886 × 10^2 | 6.8152 × 10^−3 | 1.5433 × 10^−4
f7 | 100 | Std | 2.1355 × 10^2 | 1.7529 × 10^2 | 1.2140 × 10^2 | 2.7294 × 10^−3 | 1.9447 × 10^−4
Those in bold are the cases where the result is the best.
Table 7. Multimodal benchmark function test results.

Functions | Dim | Results | PSO | FA | FOA | GWO | CGWO
f8 | 30 | Best | −7.7963 × 10^3 | −8.5822 × 10^3 | −6.3676 × 10^3 | −6.8795 × 10^3 | −1.0661 × 10^4
f8 | 30 | Mean | −6.2455 × 10^3 | −6.6183 × 10^3 | −5.3782 × 10^3 | −5.6016 × 10^3 | −7.1879 × 10^3
f8 | 30 | Std | 8.1381 × 10^2 | 1.3152 × 10^3 | 4.1550 × 10^2 | 6.1556 × 10^2 | 1.3569 × 10^3
f8 | 100 | Best | −2.2110 × 10^4 | −2.3127 × 10^4 | −1.0540 × 10^4 | −1.9276 × 10^4 | −2.8872 × 10^4
f8 | 100 | Mean | −1.8511 × 10^4 | −1.6545 × 10^4 | −9.3053 × 10^3 | −1.5222 × 10^4 | −1.7414 × 10^4
f8 | 100 | Std | 2.2875 × 10^3 | 2.8187 × 10^3 | 8.2188 × 10^2 | 2.3207 × 10^3 | 2.0124 × 10^3
f9 | 30 | Best | 4.0431 × 10^2 | 2.5167 × 10^2 | 2.4458 × 10^2 | 0 | 0
f9 | 30 | Mean | 4.5035 × 10^2 | 3.0055 × 10^2 | 3.2312 × 10^2 | 1.8663 | 0
f9 | 30 | Std | 19.4269 | 28.6191 | 24.4103 | 3.6846 | 0
f9 | 100 | Best | 1.5358 × 10^3 | 1.2318 × 10^3 | 1.2741 × 10^3 | 7.9581 × 10^−13 | 0
f9 | 100 | Mean | 1.6923 × 10^3 | 1.3428 × 10^3 | 1.3394 × 10^3 | 3.2508 | 0
f9 | 100 | Std | 61.4329 | 57.5745 | 40.5915 | 4.8984 | 0
f10 | 30 | Best | 16.2719 | 10.3920 | 18.0503 | 2.9309 × 10^−14 | 8.8818 × 10^−16
f10 | 30 | Mean | 17.5696 | 15.5009 | 19.0642 | 4.0086 × 10^−14 | 8.8818 × 10^−16
f10 | 30 | Std | 0.7005 | 2.0238 | 0.4556 | 4.1182 × 10^−15 | 0
f10 | 100 | Best | 19.4492 | 19.2102 | 19.8643 | 4.5148 × 10^−11 | 8.8818 × 10^−16
f10 | 100 | Mean | 20.0768 | 20.0651 | 20.1739 | 1.1835 × 10^−10 | 8.8818 × 10^−16
f10 | 100 | Std | 0.1701 | 0.4020 | 0.1366 | 7.6198 × 10^−11 | 0
f11 | 30 | Best | 1.3112 | 1.4391 | 1.8285 × 10^2 | 0 | 0
f11 | 30 | Mean | 1.4247 | 6.8053 | 2.8923 × 10^2 | 7.0916 × 10^−3 | 0
f11 | 30 | Std | 6.2396 × 10^−2 | 8.7207 | 50.5289 | 1.4254 × 10^−2 | 0
f11 | 100 | Best | 4.4673 | 20.8527 | 1.2152 × 10^3 | 0 | 0
f11 | 100 | Mean | 5.3452 | 1.8094 × 10^2 | 1.4094 × 10^3 | 3.5592 × 10^−3 | 0
f11 | 100 | Std | 0.3802 | 1.3518 × 10^2 | 1.1289 × 10^2 | 9.3449 × 10^−3 | 0
f12 | 30 | Best | 2.0331 × 10^4 | 9.7452 × 10^4 | 1.8072 × 10^7 | 3.9722 × 10^−2 | 2.0805 × 10^−6
f12 | 30 | Mean | 2.8855 × 10^5 | 1.2464 × 10^7 | 4.6900 × 10^7 | 0.1109 | 8.7659 × 10^−4
f12 | 30 | Std | 1.9070 × 10^5 | 1.4527 × 10^7 | 1.2284 × 10^7 | 9.3102 × 10^−2 | 3.8057 × 10^−3
f12 | 100 | Best | 7.7862 × 10^7 | 6.1173 × 10^7 | 4.6337 × 10^8 | 0.3356 | 6.1477 × 10^−6
f12 | 100 | Mean | 1.6346 × 10^8 | 5.7079 × 10^8 | 8.2877 × 10^8 | 0.4794 | 3.8124 × 10^−3
f12 | 100 | Std | 4.2783 × 10^7 | 2.4257 × 10^8 | 1.8180 × 10^8 | 8.4667 × 10^−2 | 5.3842 × 10^−3
f13 | 30 | Best | 1.7412 × 10^6 | 2.8555 × 10^5 | 4.0336 × 10^7 | 0.5243 | 1.8532 × 10^−5
f13 | 30 | Mean | 4.1834 × 10^6 | 3.3424 × 10^7 | 1.3921 × 10^8 | 1.1734 | 7.4011 × 10^−2
f13 | 30 | Std | 1.5508 × 10^6 | 3.4811 × 10^7 | 4.0588 × 10^7 | 0.2859 | 0.1671
f13 | 100 | Best | 2.5833 × 10^8 | 3.4171 × 10^8 | 1.0372 × 10^9 | 7.2639 | 4.2803 × 10^−4
f13 | 100 | Mean | 4.3003 × 10^8 | 1.0449 × 10^9 | 1.8496 × 10^9 | 7.7813 | 1.0011
f13 | 100 | Std | 9.0403 × 10^7 | 4.8776 × 10^8 | 3.9656 × 10^8 | 0.3134 | 0.7526
Those in bold are the cases where the result is the best.
Table 8. Fixed-dimension multimodal benchmark function test results.

Functions | Dim | Results | PSO | FA | FOA | GWO | CGWO
f14 | 2 | Best | 0.9980 | 0.9980 | 0.9980 | 0.9980 | 0.9980
f14 | 2 | Mean | 0.9980 | 5.7277 | 0.9980 | 6.1797 | 1.3287
f14 | 2 | Std | 1.6157 × 10^−4 | 5.5365 | 6.6272 × 10^−6 | 4.6188 | 0.7521
f15 | 4 | Best | 1.7519 × 10^−3 | 9.7061 × 10^−4 | 1.4597 × 10^−3 | 3.0753 × 10^−4 | 3.0749 × 10^−4
f15 | 4 | Mean | 1.5434 × 10^−2 | 1.2700 × 10^−2 | 2.2806 × 10^−3 | 4.4684 × 10^−3 | 1.1376 × 10^−3
f15 | 4 | Std | 9.0997 × 10^−3 | 2.5391 × 10^−2 | 9.0014 × 10^−4 | 8.1303 × 10^−3 | 3.6033 × 10^−3
f16 | 2 | Best | −1.0315 | −1.0316 | −1.0314 | −1.0316 | −1.0316
f16 | 2 | Mean | −1.0152 | −1.0189 | −1.0296 | −1.0316 | −1.0316
f16 | 2 | Std | 1.2889 × 10^−2 | 5.1892 × 10^−2 | 1.6087 × 10^−3 | 3.9711 × 10^−8 | 1.0175 × 10^−8
f17 | 2 | Best | 0.3980 | 0.3980 | 0.3987 | 0.3980 | 0.3980
f17 | 2 | Mean | 0.6167 | 0.4081 | 0.4066 | 0.3980 | 0.3980
f17 | 2 | Std | 1.1804 | 3.4542 × 10^−2 | 1.0731 × 10^−2 | 3.0662 × 10^−7 | 2.6491 × 10^−7
f18 | 2 | Best | 3.0096 | 3.0004 | 3.0001 | 3.0000 | 3.0000
f18 | 2 | Mean | 6.1273 | 9.5212 | 3.0086 | 11.1001 | 3.0000
f18 | 2 | Std | 16.1854 | 16.7527 | 8.4175 × 10^−3 | 24.7154 | 6.1328 × 10^−5
f19 | 3 | Best | −3.8618 | −3.8602 | −3.8626 | −3.8628 | −3.8628
f19 | 3 | Mean | −3.8433 | −3.7187 | −3.8598 | −3.8603 | −3.8621
f19 | 3 | Std | 6.2613 × 10^−2 | 0.1102 | 1.8211 × 10^−3 | 3.3184 × 10^−3 | 1.7124 × 10^−3
f20 | 6 | Best | −3.2513 | −3.0642 | −3.3097 | −3.3200 | −3.3200
f20 | 6 | Mean | −2.6199 | −2.4113 | −3.2380 | −3.2482 | −3.2946
f20 | 6 | Std | 0.6958 | 0.5100 | 5.3671 × 10^−2 | 8.3973 × 10^−2 | 5.1673 × 10^−2
f21 | 4 | Best | −5.0297 | −8.7822 | −9.4883 | −10.1531 | −10.1532
f21 | 4 | Mean | −2.4965 | −3.7942 | −7.4085 | −9.3968 | −9.5654
f21 | 4 | Std | 1.2736 | 2.4604 | 1.1469 | 2.0002 | 1.5623
f22 | 4 | Best | −4.6916 | −9.9974 | −9.9822 | −10.4029 | −10.4029
f22 | 4 | Mean | −2.4702 | −4.0348 | −7.4981 | −10.0495 | −10.3328
f22 | 4 | Std | 1.2474 | 2.3855 | 0.9596 | 1.3433 | 0.3823
f23 | 4 | Best | −6.2217 | −9.9716 | −9.3099 | −10.5363 | −10.5363
f23 | 4 | Mean | −3.0612 | −4.5097 | −7.1399 | −9.4540 | −10.1754
f23 | 4 | Std | 1.3548 | 2.9766 | 1.0553 | 2.8054 | 1.3719
Those in bold are the cases where the result is the best.
Table 9. The comparison results of four inertia weights.

Functions | Dim | Results | GWO1 | GWO2 | GWO3 | CGWO
f1 | 30 | Best | 0 | 6.3747 × 10^−318 | 1.5510 × 10^−73 | 0
f1 | 30 | Mean | 0 | 3.8695 × 10^−305 | 2.1090 × 10^−67 | 0
f1 | 30 | Std | 0 | 0 | 7.1793 × 10^−67 | 0
f2 | 30 | Best | 6.6232 × 10^−267 | 7.2792 × 10^−158 | 1.0396 × 10^−41 | 0
f2 | 30 | Mean | 2.0976 × 10^−268 | 3.3602 × 10^−148 | 7.8699 × 10^−40 | 0
f2 | 30 | Std | 0 | 1.8404 × 10^−147 | 2.1656 × 10^−39 | 0
f3 | 30 | Best | 4.0944 × 10^−313 | 1.2715 × 10^−315 | 4.9458 × 10^−25 | 0
f3 | 30 | Mean | 3.2456 × 10^−304 | 7.9840 × 10^−302 | 1.4509 × 10^−14 | 0
f3 | 30 | Std | 0 | 0 | 4.8297 × 10^−14 | 0
f4 | 30 | Best | 5.9069 × 10^−179 | 5.0573 × 10^−165 | 7.6416 × 10^−22 | 0
f4 | 30 | Mean | 1.6441 × 10^−177 | 3.0469 × 10^−163 | 8.3291 × 10^−19 | 0
f4 | 30 | Std | 0 | 0 | 3.4206 × 10^−18 | 0
f5 | 30 | Best | 26.8436 | 28.0934 | 27.1557 | 26.3554
f5 | 30 | Mean | 28.4824 | 28.5668 | 28.2155 | 27.6375
f5 | 30 | Std | 0.3808 | 0.3204 | 0.5722 | 0.6257
f6 | 30 | Best | 1.9655 | 4.9234 | 2.4369 | 9.4244 × 10^−7
f6 | 30 | Mean | 3.8131 | 5.5318 | 3.3551 | 1.0569 × 10^−4
f6 | 30 | Std | 0.4311 | 0.2562 | 0.4350 | 1.4880 × 10^−4
f7 | 30 | Best | 5.2987 × 10^−6 | 8.6640 × 10^−6 | 5.0086 × 10^−4 | 1.5969 × 10^−6
f7 | 30 | Mean | 1.9207 × 10^−4 | 2.4277 × 10^−4 | 1.6574 × 10^−3 | 1.2706 × 10^−4
f7 | 30 | Std | 1.7044 × 10^−4 | 2.0526 × 10^−4 | 9.2681 × 10^−4 | 2.4114 × 10^−4
f8 | 30 | Best | −3.3749 × 10^3 | −3.7980 × 10^3 | −6.7440 × 10^3 | −1.0661 × 10^4
f8 | 30 | Mean | −2.5126 × 10^3 | −3.1165 × 10^3 | −4.6594 × 10^3 | −7.1879 × 10^3
f8 | 30 | Std | 3.9021 × 10^2 | 2.9672 × 10^2 | 1.1482 × 10^3 | 1.3569 × 10^3
f9 | 30 | Best | 0 | 0 | 0 | 0
f9 | 30 | Mean | 0 | 0 | 0 | 0
f9 | 30 | Std | 0 | 0 | 0 | 0
f10 | 30 | Best | 4.4409 × 10^−15 | 8.8818 × 10^−16 | 8.2305 × 10^−15 | 8.8818 × 10^−16
f10 | 30 | Mean | 4.4409 × 10^−15 | 8.8818 × 10^−16 | 4.4409 × 10^−15 | 8.8818 × 10^−16
f10 | 30 | Std | 0 | 0 | 1.2973 × 10^−15 | 0
f11 | 30 | Best | 0 | 0 | 0 | 0
f11 | 30 | Mean | 1.5575 × 10^−3 | 0 | 1.6987 × 10^−3 | 0
f11 | 30 | Std | 5.8254 × 10^−2 | 0 | 6.9457 × 10^−3 | 0
f12 | 30 | Best | 0.2111 | 0.6053 | 5.1319 × 10^−2 | 2.0805 × 10^−6
f12 | 30 | Mean | 0.3569 | 0.7609 | 0.2421 | 8.7659 × 10^−4
f12 | 30 | Std | 7.871 × 10^−2 | 0.1379 | 0.1846 | 3.8057 × 10^−3
f13 | 30 | Best | 1.8172 | 2.6317 | 1.5616 | 1.8532 × 10^−5
f13 | 30 | Mean | 2.1877 | 2.7380 | 1.9344 | 7.4011 × 10^−2
f13 | 30 | Std | 0.1311 | 4.4192 × 10^−2 | 0.2062 | 0.1671
f14 | 2 | Best | 0.9980 | 0.9980 | 0.9980 | 0.9980
f14 | 2 | Mean | 7.9336 | 3.8435 | 8.0575 | 1.3287
f14 | 2 | Std | 4.3581 | 3.4127 | 4.7815 | 0.7521
f15 | 4 | Best | 3.1047 × 10^−4 | 3.7376 × 10^−4 | 3.0751 × 10^−4 | 3.0749 × 10^−4
f15 | 4 | Mean | 4.4822 × 10^−3 | 9.0531 × 10^−4 | 4.3603 × 10^−3 | 1.1376 × 10^−3
f15 | 4 | Std | 8.2921 × 10^−3 | 3.2378 × 10^−4 | 1.1632 × 10^−2 | 3.6033 × 10^−3
f16 | 2 | Best | −1.0316 | −1.0316 | −1.0316 | −1.0316
f16 | 2 | Mean | −1.0306 | −1.0312 | −1.0316 | −1.0316
f16 | 2 | Std | 5.3124 × 10^−3 | 3.4818 × 10^−4 | 8.3115 × 10^−8 | 1.0175 × 10^−8
f17 | 2 | Best | 0.3980 | 0.3980 | 0.3980 | 0.3980
f17 | 2 | Mean | 0.5524 | 0.4210 | 0.3980 | 0.3980
f17 | 2 | Std | 0.8394 | 1.5797 × 10^−2 | 8.9221 × 10^−7 | 2.6491 × 10^−7
f18 | 2 | Best | 3.0001 | 3.0000 | 3.0000 | 3.0000
f18 | 2 | Mean | 8.4010 | 3.0004 | 5.7001 | 3.0000
f18 | 2 | Std | 20.5539 | 6.3542 × 10^−4 | 14.7886 | 6.1328 × 10^−5
f19 | 3 | Best | −0.3005 | −3.8528 | −3.8628 | −3.8628
f19 | 3 | Mean | −0.3005 | −3.6619 | −3.8605 | −3.8621
f19 | 3 | Std | 2.2584 × 10^−16 | 0.2398 | 2.5324 × 10^−3 | 1.7124 × 10^−3
f20 | 6 | Best | −3.3104 | −2.8984 | −3.3200 | −3.3200
f20 | 6 | Mean | −3.1958 | −1.8660 | −3.2690 | −3.2946
f20 | 6 | Std | 8.3021 × 10^−2 | 0.5208 | 7.9813 × 10^−2 | 5.1673 × 10^−2
f21 | 4 | Best | −9.3877 | −4.1852 | −10.1530 | −10.1532
f21 | 4 | Mean | −5.7510 | −2.2274 | −8.8844 | −9.5654
f21 | 4 | Std | 1.4805 | 1.3708 | 2.3709 | 1.5623
f22 | 4 | Best | −9.6454 | −4.2295 | −10.4026 | −10.4029
f22 | 4 | Mean | −5.6725 | −2.3928 | −10.2234 | −10.3328
f22 | 4 | Std | 1.2730 | 1.3694 | 0.9700 | 0.3823
f23 | 4 | Best | −10.5026 | −4.4852 | −10.5361 | −10.5363
f23 | 4 | Mean | −5.9055 | −2.4005 | −9.7631 | −10.1754
f23 | 4 | Std | 1.8986 | 1.2334 | 2.0165 | 1.3719
Those in bold are the cases where the result is the best.
Table 10. The comparison results of MGWO, HGWO, LGWO, and CGWO.

Functions | Dim | Results | MGWO | HGWO | LGWO | CGWO
f1 | 30 | Best | 4.2025 × 10^−29 | 1.2004 × 10^−45 | 9.2839 × 10^−37 | 0
f1 | 30 | Mean | 4.6237 × 10^−27 | 1.5657 × 10^−42 | 1.9113 × 10^−34 | 0
f1 | 30 | Std | 8.7003 × 10^−27 | 3.2037 × 10^−42 | 3.4335 × 10^−34 | 0
f2 | 30 | Best | 4.0702 × 10^−17 | 1.2409 × 10^−26 | 4.4512 × 10^−22 | 0
f2 | 30 | Mean | 2.1404 × 10^−16 | 2.1612 × 10^−25 | 1.1233 × 10^−20 | 0
f2 | 30 | Std | 2.4492 × 10^−16 | 3.2843 × 10^−25 | 1.5010 × 10^−20 | 0
f3 | 30 | Best | 4.3841 × 10^−8 | 6.9926 × 10^−13 | 4.4024 × 10^−11 | 0
f3 | 30 | Mean | 2.5975 × 10^−5 | 1.9412 × 10^−7 | 3.9837 × 10^−6 | 0
f3 | 30 | Std | 7.3876 × 10^−5 | 5.8421 × 10^−7 | 1.0671 × 10^−5 | 0
f4 | 30 | Best | 6.7139 × 10^−8 | 3.2630 × 10^−12 | 1.7452 × 10^−10 | 0
f4 | 30 | Mean | 2.3427 × 10^−6 | 9.3791 × 10^−11 | 1.0686 × 10^−8 | 0
f4 | 30 | Std | 9.2347 × 10^−7 | 1.3713 × 10^−10 | 1.5464 × 10^−8 | 0
f5 | 30 | Best | 26.1948 | 26.2161 | 26.8501 | 26.3554
f5 | 30 | Mean | 27.9364 | 27.6936 | 27.6670 | 27.6375
f5 | 30 | Std | 0.7722 | 0.7535 | 0.6298 | 0.6257
f6 | 30 | Best | 0.7616 | 0.4981 | 0.8267 | 9.4244 × 10^−7
f6 | 30 | Mean | 2.1212 | 1.6741 | 1.5430 | 1.0569 × 10^−4
f6 | 30 | Std | 0.5405 | 0.4381 | 0.4235 | 1.4880 × 10^−4
f7 | 30 | Best | 4.9014 × 10^−4 | 4.5224 × 10^−4 | 4.1187 × 10^−6 | 1.5969 × 10^−6
f7 | 30 | Mean | 2.4231 × 10^−3 | 1.8675 × 10^−3 | 3.0570 × 10^−4 | 1.2706 × 10^−4
f7 | 30 | Std | 1.2131 × 10^−3 | 1.0909 × 10^−3 | 2.7301 × 10^−4 | 2.4114 × 10^−4
f8 | 30 | Best | −6.6830 × 10^3 | −6.9852 × 10^3 | −7.3442 × 10^3 | −1.0661 × 10^4
f8 | 30 | Mean | −5.3102 × 10^3 | −5.6269 × 10^3 | −5.7067 × 10^3 | −7.1879 × 10^3
f8 | 30 | Std | 7.0398 × 10^2 | 7.9456 × 10^2 | 1.0612 × 10^3 | 1.3569 × 10^3
f9 | 30 | Best | 5.6843 × 10^−14 | 0 | 0 | 0
f9 | 30 | Mean | 5.0670 | 0.8454 | 0.6781 | 0
f9 | 30 | Std | 5.1697 | 3.1769 | 1.8011 | 0
f10 | 30 | Best | 8.6153 × 10^−14 | 1.1546 × 10^−14 | 2.5757 × 10^−14 | 8.8818 × 10^−16
f10 | 30 | Mean | 1.3316 × 10^−13 | 1.9954 × 10^−14 | 3.3573 × 10^−14 | 8.8818 × 10^−16
f10 | 30 | Std | 2.9262 × 10^−14 | 3.7879 × 10^−15 | 5.0588 × 10^−15 | 0
f11 | 30 | Best | 0 | 0 | 0 | 0
f11 | 30 | Mean | 9.2044 × 10^−3 | 7.9090 × 10^−3 | 5.1232 × 10^−3 | 0
f11 | 30 | Std | 1.6215 × 10^−2 | 1.1882 × 10^−2 | 1.1021 × 10^−2 | 0
f12 | 30 | Best | 3.8869 × 10^−2 | 4.6003 × 10^−2 | 3.0001 × 10^−2 | 2.0805 × 10^−6
f12 | 30 | Mean | 0.1716 | 0.1223 | 8.8932 × 10^−2 | 8.7659 × 10^−4
f12 | 30 | Std | 0.1155 | 9.2771 × 10^−2 | 4.4537 × 10^−2 | 3.8057 × 10^−3
f13 | 30 | Best | 1.6705 | 0.9072 | 0.6699 | 1.8532 × 10^−5
f13 | 30 | Mean | 2.3244 | 1.2733 | 1.2143 | 7.4011 × 10^−2
f13 | 30 | Std | 0.2944 | 0.9072 | 0.6815 | 0.1671
f14 | 2 | Best | 0.9980 | 0.9980 | 0.9980 | 0.9980
f14 | 2 | Mean | 8.6374 | 7.2793 | 5.4385 | 1.3287
f14 | 2 | Std | 4.6651 | 4.7953 | 4.3227 | 0.7521
f15 | 4 | Best | 3.0749 × 10^−4 | 3.0752 × 10^−4 | 3.0759 × 10^−4 | 3.0749 × 10^−4
f15 | 4 | Mean | 4.49267 × 10^−3 | 7.7792 × 10^−3 | 3.2275 × 10^−3 | 1.1376 × 10^−3
f15 | 4 | Std | 8.0769 × 10^−3 | 9.7403 × 10^−3 | 6.8910 × 10^−3 | 3.6033 × 10^−3
f16 | 2 | Best | −1.0316 | −1.0316 | −1.0316 | −1.0316
f16 | 2 | Mean | −1.0307 | −1.0316 | −1.0316 | −1.0316
f16 | 2 | Std | 5.2236 × 10^−3 | 6.9846 × 10^−8 | 5.3524 × 10^−9 | 1.0175 × 10^−8
f17 | 2 | Best | 0.3980 | 0.3980 | 0.3980 | 0.3980
f17 | 2 | Mean | 0.3980 | 0.3980 | 0.3980 | 0.3980
f17 | 2 | Std | 3.9409 × 10^−7 | 4.9314 × 10^−7 | 2.8181 × 10^−9 | 2.6491 × 10^−7
f18 | 2 | Best | 3.0000 | 3.0000 | 3.0000 | 3.0000
f18 | 2 | Mean | 3.0001 | 13.8001 | 8.4001 | 3.0000
f18 | 2 | Std | 1.3770 × 10^−3 | 28.0053 | 20.5503 | 6.1328 × 10^−5
f19 | 3 | Best | −3.8628 | −3.8628 | −3.8628 | −3.8628
f19 | 3 | Mean | −3.8616 | −3.8614 | −3.8603 | −3.8621
f19 | 3 | Std | 2.2312 × 10^−3 | 2.4133 × 10^−3 | 2.9021 × 10^−3 | 1.7124 × 10^−3
f20 | 6 | Best | −3.3200 | −3.3200 | −3.3200 | −3.3200
f20 | 6 | Mean | −3.2536 | −3.2494 | −3.2642 | −3.2946
f20 | 6 | Std | 8.1437 × 10^−2 | 7.1021 × 10^−2 | 8.6131 × 10^−2 | 5.1673 × 10^−2
f21 | 4 | Best | −10.1531 | −10.1531 | −10.1532 | −10.1532
f21 | 4 | Mean | −9.3965 | −8.6474 | −9.9674 | −9.5654
f21 | 4 | Std | 2.0001 | 2.8350 | 1.3243 | 1.5623
f22 | 4 | Best | −10.4027 | −10.4026 | −10.4028 | −10.4029
f22 | 4 | Mean | −9.9705 | −9.8720 | −9.6857 | −10.3328
f22 | 4 | Std | 1.6709 | 1.6175 | 2.2323 | 0.3823
f23 | 4 | Best | −10.5363 | −10.5362 | −10.5363 | −10.5363
f23 | 4 | Mean | −9.3951 | −8.9066 | −9.5448 | −10.1754
f23 | 4 | Std | 2.0587 | 1.5418 | 2.6077 | 1.3719
Those in bold are the cases where the result is the best.
Table 11. The rank-sum test p-values.

Functions | CGWO vs. PSO | CGWO vs. FA | CGWO vs. FOA | CGWO vs. GWO
f1 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.02 × 10^−11
f2 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.02 × 10^−11
f3 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.02 × 10^−11
f4 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.02 × 10^−11
f5 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.984 × 10^−2
f6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
f7 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.07 × 10^−9
f8 | 9.46 × 10^−3 | 0.1412 | 1.69 × 10^−9 | 3.83 × 10^−6
f9 | 1.21 × 10^−12 | 1.72 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
f10 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
f11 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
f12 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
f13 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11
f14 | 2.71 × 10^−2 | 6.52 × 10^−9 | 2.71 × 10^−2 | 9.06 × 10^−8
f15 | 8.99 × 10^−11 | 1.10 × 10^−8 | 9.51 × 10^−6 | 1.38 × 10^−2
f16 | 3.02 × 10^−11 | 5.49 × 10^−11 | 3.02 × 10^−11 | 8.68 × 10^−3
f17 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.43 × 10^−3
f18 | 3.02 × 10^−11 | 9.76 × 10^−10 | 3.34 × 10^−11 | 1.27 × 10^−2
f19 | 1.09 × 10^−10 | 3.02 × 10^−11 | 5.19 × 10^−7 | 4.51 × 10^−2
f20 | 3.69 × 10^−11 | 4.08 × 10^−11 | 2.00 × 10^−6 | 2.15 × 10^−2
f21 | 7.39 × 10^−11 | 1.87 × 10^−5 | 1.76 × 10^−2 | 4.36 × 10^−2
f22 | 3.02 × 10^−11 | 5.32 × 10^−3 | 4.99 × 10^−9 | 3.78 × 10^−2
f23 | 3.02 × 10^−11 | 4.84 × 10^−2 | 3.35 × 10^−8 | 0.13345
Those in bold are the cases where the p-value is less than the significance level α.
Table 12. The solution results for the pressure vessel design problem.

Algorithm | Ts | Th | R | L | Optimum Cost
CGWO | 0.7808 | 0.3870 | 40.4466 | 198.3237 | 5885.3327
PSO | 1.0018 | 0.5267 | 50.6469 | 103.1721 | 6321.9003
FA | 1.0409 | 0.5232 | 53.2578 | 82.9511 | 6141.2396
FOA | 0.8063 | 0.9444 | 41.4192 | 191.1033 | 6924.7596
GWO | 0.8233 | 0.4083 | 42.6509 | 179.1444 | 5890.6112
SCA | 0.9229 | 0.4988 | 45.7581 | 152.7993 | 6138.6313
WDO | 1.2770 | 0.6984 | 58.5036 | 55.0970 | 6909.6709
