
A Hybrid Golden Jackal Optimization and Golden Sine Algorithm with Dynamic Lens-Imaging Learning for Global Optimization Problems

1 School of Mechanical and Electrical Engineering, Guizhou Normal University, Guiyang 550025, China
2 Technical Engineering Center of Manufacturing Service and Knowledge Engineering, Guizhou Normal University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9709; https://doi.org/10.3390/app12199709
Submission received: 1 September 2022 / Revised: 23 September 2022 / Accepted: 23 September 2022 / Published: 27 September 2022
(This article belongs to the Special Issue Metaheuristics for Real-World Optimization Problems)

Abstract: Golden jackal optimization (GJO) is an effective metaheuristic algorithm that imitates the cooperative hunting behavior of the golden jackal. However, since the update of the prey's position often depends on the male golden jackal and the diversity of golden jackals is insufficient in some cases, GJO is prone to falling into a local optimum. To address these drawbacks, this paper proposes an improved algorithm, called a hybrid GJO and golden sine (S) algorithm (Gold-SA) with dynamic lens-imaging (L) learning (LSGJO). First, this paper proposes novel dual golden spiral update rules inspired by Gold-SA. These rules give GJO the ability to think like a human (Gold-SA), making the golden jackal more intelligent in the process of preying and improving the ability and efficiency of optimization. Second, a novel nonlinear dynamic decreasing scaling factor is introduced into the lens-imaging learning operator to maintain the population diversity. The performance of LSGJO is verified on 23 classical benchmark functions and 3 complex design problems in real scenarios. The experimental results show that LSGJO converges faster and more accurately than 11 state-of-the-art optimization algorithms, that its global and local search abilities improve significantly, and that the proposed algorithm shows superior performance in solving constrained problems.

1. Introduction

Optimization problems in natural science and the social economy are a research hotspot in computer science, management and decision-making, artificial intelligence, and other fields. The search for high-precision solutions to such optimization problems has attracted many researchers. However, traditional optimization methods based on mathematical theory, such as Newton's downhill method and the gradient descent method, cannot solve these problems effectively [1,2], so many scholars favor metaheuristic algorithms.
Metaheuristic algorithms are used to find the optimal or a satisfactory solution to complex optimization problems [3,4,5], and they are inspired by biological populations, physical phenomena, evolutionary laws, etc. For example, the whale optimization algorithm (WOA) is inspired by the foraging behavior of humpback whales in nature [6]. The salp swarm algorithm (SSA) is inspired by the swarming behavior of salps when navigating and foraging in oceans [7]. Harris's hawk optimization (HHO) is inspired by the different mechanisms of the Harris's hawk's strategy for capturing prey [8]. Biological populations inspire these algorithms. For example, the equilibrium optimizer (EO) is inspired by control volume mass balance models that estimate both dynamic and equilibrium states [9]. Lightning attachment procedure optimization (LAPO) is inspired by the natural process of connecting the upward-facing and downward-facing leads of lightning [10]. Turbulent flow of water-based optimization (TFWO) is inspired by whirlpools created in the turbulent flow of water [11]. Physical phenomena inspire these algorithms. For example, the genetic algorithm (GA) is inspired by the survival of the fittest, in which organisms evolve and develop through genetics, selection, and variation [12]. The immune algorithm (IA) is inspired by the immune mechanism of biology combined with the evolutionary mechanism of genes [13]. The laws of evolution inspire these algorithms. Metaheuristic algorithms are widely used in signal processing [14], image processing [15,16], fault detection [17], production scheduling [18], feature selection [19], path planning [20], numerical optimization [21], engineering design [22,23,24], etc.
Moreover, the no-free-lunch (NFL) theorem shows that no single algorithm can solve all optimization problems [25]. This theorem has prompted many researchers to improve existing algorithms. Alkayem et al. proposed the self-adaptive quasi-oppositional stochastic fractal search (SA-QSFS) by employing a triple-modal objective function combination and quasi-oppositional learning [26]. Tian and Shi proposed MPSO using chaos initialization and a sigmoid-like inertia weight based on the PSO algorithm [27]. Dhargupta et al. proposed SOGWO by combining opposition-based learning (OBL) with the grey wolf optimizer [28]. Alkayem et al. proposed the social engineering particle swarm optimization algorithm (SEPSO) by combining the PSO population-based elitist-solution mechanism and the SEO two-solution attacker–defender paradigm [29].
Our team has also improved some algorithms and achieved good results in numerical optimization and engineering applications: Wei et al. proposed a new unbalanced fault diagnosis framework using MFO to optimize γ parameters in LS-SVM [30]. Fan et al. proposed BGWO by combining the beetle antenna strategy with the gray wolf algorithm and adopting the nonlinear control step size strategy [31]. Fan et al. proposed m-EO based on EO with reverse learning, new update rules, and chaos strategy [32]. Fan et al. proposed MMPA based on WOA with a new position update strategy, a logistic opposition-based learning mechanism, inertia weight coefficient, and a nonlinear control step size strategy [33]. Wei et al. proposed NI-MWMOTE based on MWMOTE with an adaptive noise processing scheme and an aggregative hierarchical clustering method [34].
The golden jackal optimization algorithm (GJO) is a recently proposed biological swarm intelligence algorithm [35], inspired by the collaborative hunting behavior of golden jackals. Although GJO has the advantages of easy implementation, high stability, and few adjustment parameters, its exploration and exploitation capabilities are unbalanced, which can easily lead to excessive exploitation and fall into local optima. In the process of iteration, the position of prey is always located in the middle of two golden jackals. However, the position of male golden jackals is not necessarily the optimal solution, which easily leads to slow convergence in the late iteration, poor convergence accuracy, and easily falling into a locally optimal solution. Aiming at the shortcomings of GJO, this paper improves the original algorithm and proposes LSGJO. The new algorithm adds the dynamic lens-imaging learning strategy and the novel position updating strategy to make it have better global search ability and local search ability.
To demonstrate its superiority, LSGJO is compared with several well-known and recent algorithms on 23 benchmark functions. Among them, the functions F1–F13 are tested in three different dimensions (30, 100, 500), and F14–F23 are fixed-dimension functions. The experimental results show that the proposed algorithm has a fast convergence speed and high accuracy. In addition, the experimental results of LSGJO on constrained optimization problems in three mechanical fields also show that the proposed algorithm can solve practical problems.
The highlights and contributions of this paper are summarized as follows:
(1) LSGJO is proposed.
(2) The Wilcoxon rank-sum test and the Friedman test are used to analyze the statistical data; together with the convergence curves and the comparison with other algorithms, they show that LSGJO has tremendous advantages.
(3) LSGJO is applied to solve three constrained optimization problems in mechanical fields and is compared with many advanced algorithms.
The remaining sections of this paper are as follows: Section 2 briefly summarizes the conventional golden jackal algorithm. Section 3 proposes LSGJO and analyzes its time complexity. The benchmark functions are tested, and the results are analyzed in Section 4. LSGJO is used to solve three constrained optimization problems in mechanical fields in Section 5. Section 6 discusses the challenges, recommendations, and limitations related to the proposed algorithm. Finally, Section 7 concludes the paper and proposes future studies.

2. Golden Jackal Algorithm

The golden jackal algorithm is a swarm intelligence algorithm proposed by Nitish Chopra and Muhammad Mohsin Ansari; it mimics the hunting behavior of golden jackals in nature, which usually hunt in male–female pairs. The hunting behavior of the golden jackal is divided into three steps: (1) searching for and moving towards the prey; (2) enclosing and irritating the prey until it stops moving; and (3) pouncing towards the prey.
During the initialization phase, a randomly distributed set of prey position matrices is generated by Equation (1):
Y = [Y_{1,1} ⋯ Y_{1,j} ⋯ Y_{1,n}; Y_{2,1} ⋯ Y_{2,j} ⋯ Y_{2,n}; ⋮ ; Y_{N−1,1} ⋯ Y_{N−1,j} ⋯ Y_{N−1,n}; Y_{N,1} ⋯ Y_{N,j} ⋯ Y_{N,n}]   (1)
where N denotes the number of prey populations and n denotes dimensions.
The mathematical model of the golden jackal’s hunt is as follows (|E| > 1):
Y_1(t) = Y_M(t) − E · |Y_M(t) − rl · Prey(t)|   (2)
Y_2(t) = Y_{FM}(t) − E · |Y_{FM}(t) − rl · Prey(t)|   (3)
where t is the current iteration, Y M t indicates the position of the male golden jackal, Y F M t indicates the position of the female, and P r e y t is the position vector of the prey. Y 1 t and Y 2 t are the updated positions of the male and female golden jackals.
E is the evading energy of prey and is calculated as follows:
E = E_1 × E_0   (4)
E_1 = c_1 × (1 − t/T)   (5)
where E 0 is a random number in the range [–1, 1], indicating the prey’s initial energy; T represents the maximum number of iterations; c 1 is the default constant set to 1.5; and E 1 denotes the prey’s decreasing energy.
In Equations (2) and (3), |Y_M(t) − rl · Prey(t)| denotes the distance between the golden jackal and the prey, and rl is a vector of random numbers drawn from the Lévy flight function.
rl = 0.05 × LF(y)   (6)
LF(y) = 0.01 × (μ × σ) / |v|^{1/β},  σ = [Γ(1 + β) × sin(πβ/2) / (Γ((1 + β)/2) × β × 2^{(β−1)/2})]^{1/β}   (7)
where μ and v are random values in (0, 1) and β is a default constant set to 1.5.
Y(t + 1) = (Y_1(t) + Y_2(t)) / 2   (8)
where Y t + 1 is the updated position of the prey based on the male and the female golden jackals.
When the prey is harassed by the golden jackals, the evading energy is decreased. The mathematical model of the golden jackals surrounding prey and devouring it is as follows (|E| ≤ 1):
Y_1(t) = Y_M(t) − E · |rl · Y_M(t) − Prey(t)|   (9)
Y_2(t) = Y_{FM}(t) − E · |rl · Y_{FM}(t) − Prey(t)|   (10)
The pseudo-code of the above GJO is shown in Algorithm 1.
Algorithm 1: Golden Jackal Optimization
Inputs: The population size N and maximum number of iterations T
Outputs: The location of prey and its fitness value
Initialize the random prey population Y i (i = 1, 2, …, N)
While (t < T)
    Calculate the fitness values of prey
     Y 1 = best prey individual (Male Jackal Position)
     Y 2 = second best prey individual (Female Jackal Position)
    for (each prey individual)
      Update the evading energy “E” using Equations (4) and (5)
      Update “rl” using Equations (6) and (7)
      If (|E| > 1)      (Exploration phase)
      Update the prey position using Equations (2), (3), and (8)
      If (|E| ≤ 1)      (Exploitation phase)
      Update the prey position using Equations (8), (9), and (10)
    end for
    t = t + 1
end while
return Y 1
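As a concrete illustration of Algorithm 1, the following minimal Python sketch implements the update rules above. It is a sketch under stated assumptions, not the authors' implementation: the names levy_step and gjo and the sphere objective in the last line are illustrative, a minimization problem is assumed, and the best (male) position of the final iteration is returned.

import numpy as np
from scipy.special import gamma

def levy_step(dim, beta=1.5):
    # Equation (7): Levy-flight step LF(y) with the default beta = 1.5
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(dim) * sigma
    v = np.random.randn(dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def gjo(obj, lb, ub, dim, N=30, T=500):
    Y = lb + (ub - lb) * np.random.rand(N, dim)      # Equation (1): random prey matrix
    for t in range(T):
        fit = np.array([obj(y) for y in Y])
        order = np.argsort(fit)                      # ascending: minimization assumed
        male, female = Y[order[0]].copy(), Y[order[1]].copy()
        E1 = 1.5 * (1 - t / T)                       # Equation (5): decreasing energy
        for i in range(N):
            E = E1 * (2 * np.random.rand() - 1)      # Equation (4): E0 in [-1, 1]
            rl = 0.05 * levy_step(dim)               # Equation (6)
            if abs(E) > 1:                           # exploration, Equations (2) and (3)
                Y1 = male - E * np.abs(male - rl * Y[i])
                Y2 = female - E * np.abs(female - rl * Y[i])
            else:                                    # exploitation, Equations (9) and (10)
                Y1 = male - E * np.abs(rl * male - Y[i])
                Y2 = female - E * np.abs(rl * female - Y[i])
            Y[i] = np.clip((Y1 + Y2) / 2, lb, ub)    # Equation (8)
    return male, obj(male)

best, best_fit = gjo(lambda x: np.sum(x ** 2), -100, 100, dim=30)  # sphere test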

3. Proposed LSGJO

When solving some optimization problems, GJO is prone to iterative stagnation, slow convergence in later stages, and insufficient exploration and exploitation capacity, and these shortcomings become more apparent on complex problems. In this section, we propose two improvement strategies, described in detail below.

3.1. Dynamic Lens-Imaging Learning Strategy

The lens-imaging learning strategy is a recently proposed opposition-based learning method [36], derived from the convex lens-imaging law in optics. The principle of the strategy is to refract an entity on one side of a convex lens to the other side, forming an inverted image. Figure 1 outlines this principle: on the left of the coordinate axis y, there is an individual G (the male golden jackal); its projection on the coordinate axis x is X, and its distance from the coordinate axis x is h. The coordinate axis y denotes a convex lens of focal length f, and the point O is the center of the convex lens. G passes through the convex lens to produce an opposite individual G′, whose projection on the coordinate axis x is X′ and whose distance from the coordinate axis x is h′. In this way, the individual X and its opposite individual X′ are obtained.
According to Figure 1, the relation between X and X′ can be derived from the convex lens-imaging principle:
((ub + lb)/2 − X) / (X′ − (ub + lb)/2) = h / h′   (11)
where ub and lb are the upper and lower bounds. Let h/h′ = α, where α is called the scaling factor; then, Equation (11) can be rearranged to obtain the formula for the opposite point X′:
X′ = (ub + lb)/2 + (ub + lb)/(2α) − X/α   (12)
The scaling factor α can increase the local development ability of the LSGJO. In the original lens-imaging learning strategy, the scaling factors are generally considered constant, which reduces the convergence performance of the algorithm. Therefore, this paper proposes a new scaling factor based on nonlinear dynamic decreasing, which can obtain larger values in the early iteration of the algorithm so that the algorithm can search in the broader range of different dimensional regions and improve the diversity of the population. At the end of the algorithm iteration, a smaller value is obtained, so the fine search near the optimal individual can be carried out to improve the local optimization ability. The nonlinear dynamic scaling factor α is calculated by Equation (13):
α = ζ_min + (ζ_max − ζ_min) × (1 − t/T)^2   (13)
where ζ m a x is the maximum scaling factor, ζ m i n is the minimum scaling factor, and T is the maximum number of iterations; the value ζ m a x is 100, and the value ζ m i n is 10.
Equation (12) can be popularized to the n-dimensional search space:
X′_j = (ub_j + lb_j)/2 + (ub_j + lb_j)/(2α) − X_j/α   (14)
where X_j and X′_j are the components of X and X′ in dimension j, respectively, and lb_j and ub_j are the lower and upper bounds of dimension j, respectively. The dynamic lens-imaging strategy considers both the candidate and opposite solutions and selects the better one according to the calculated fitness. In this paper, the dynamic lens-imaging learning strategy is applied to the current global optimum of the swarm in GJO, which helps the population avoid stagnation in local optima.
Learning strategies mainly include the opposition-based learning (OBL) strategy, the quasi-opposition-based learning strategy, and the dynamic lens-imaging-based learning strategy. The original OBL, proposed by Tizhoosh [37], is the special case α = 1 of Equation (12). Quasi-oppositional learning, which builds on OBL, is utilized to improve the overall exploration of the initial and execution stages and is an effective refinement of the original opposition-based learning method. It can increase the diversity of the population, but it ignores the fact that, as the number of iterations increases, the algorithm shifts from global search to local search. The dynamic lens-imaging learning proposed in this paper takes this into account.
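As a short illustration, the dynamic lens-imaging operator of Equations (13) and (14) can be written in a few lines of Python. This is a sketch, not the authors' code: lens_opposite is an illustrative name, the alpha schedule assumes the nonlinear decreasing form of Equation (13) given above with ζ_max = 100 and ζ_min = 10, and lb and ub may be scalars or per-dimension arrays.

import numpy as np

def lens_opposite(x, lb, ub, t, T, zeta_min=10.0, zeta_max=100.0):
    # Equation (13): scaling factor decreasing nonlinearly from zeta_max to zeta_min
    alpha = zeta_min + (zeta_max - zeta_min) * (1 - t / T) ** 2
    mid = (np.asarray(lb) + np.asarray(ub)) / 2.0
    # Equation (14): opposite point, applied independently in every dimension
    return mid + mid / alpha - np.asarray(x) / alpha

# Greedy selection: keep whichever of the current optimum and its opposite is fitter,
# e.g., x_best = min([x_best, lens_opposite(x_best, lb, ub, t, T)], key=obj)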

3.2. Novel Update Rules

The golden sine algorithm (Gold-SA) was proposed by Tanyildizi et al. [2]; it is inspired by the relationship between the sine function and the unit circle in mathematics. Gold-SA introduces the golden section coefficients into the position update, combining the special relationship between the sine function and the unit circle with the golden section. It finds the global optimal solution by continuously reducing the search scope: the global search first locates the promising solution space, a local search is then carried out within it, and finally the global optimal solution is obtained. The golden sine algorithm therefore has strong local search ability, and its mathematical model is as follows:
X_i(t + 1) = X_i(t) × |sin R_1| + R_2 × sin R_1 × |x_1 P_i(t) − x_2 X_i(t)|   (15)
where t denotes the current iteration number; R_1 is a random value in [0, 2π]; R_2 is a random value in [0, π]; R_1 and R_2 determine the direction and distance of the next move, respectively; P_i(t) is the position of the best individual found so far; and x_1 and x_2 are the golden section coefficients, which narrow the search space and guide the individual to converge to the optimal solution.
x_1 = a × (1 − τ) + b × τ   (16)
x_2 = a × τ + b × (1 − τ)   (17)
τ = (√5 − 1)/2   (18)
where a and b are the initial values, set to −π and π, and τ represents the golden number.
When the golden sine algorithm and the golden jackal algorithm are combined, the position update rules of male and female jackals in the exploitation stage are as follows:
Y_1(t) = Prey(t) × |sin R_1| + R_2 × sin R_1 × |x_1 Y_M(t) − x_2 Prey(t)|   (19)
Y_2(t) = Prey(t) × |sin R_1| + R_2 × sin R_1 × |x_1 Y_{FM}(t) − x_2 Prey(t)|   (20)
The position updating rule adopts the dual golden spiral update rules: it mimics the golden jackal encircling the prey along a curved path, wearing down the prey's strength, gradually tightening the encircling circle, and then capturing the prey. This updating rule is closer to how golden jackals surround and capture prey in nature; the principle of the dual golden spiral update rules is shown in Figure 2.
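A small Python fragment sketches Equations (16)-(20); golden_update is an illustrative helper, not from the paper, and the initial interval a = −π, b = π follows the standard Gold-SA setting.

import numpy as np

tau = (np.sqrt(5) - 1) / 2                 # Equation (18): the golden number
a, b = -np.pi, np.pi                       # standard Gold-SA initial interval
x1 = a * (1 - tau) + b * tau               # Equation (16)
x2 = a * tau + b * (1 - tau)               # Equation (17)

def golden_update(prey, leader):
    # Equations (19) and (20): spiral the prey around one leader (male or female jackal)
    R1 = 2 * np.pi * np.random.rand()      # direction of the next move
    R2 = np.pi * np.random.rand()          # distance of the next move
    return prey * np.abs(np.sin(R1)) + R2 * np.sin(R1) * np.abs(x1 * leader - x2 * prey)

# Y1 = golden_update(prey, male) and Y2 = golden_update(prey, female);
# the prey then moves to (Y1 + Y2) / 2 as in Equation (8).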
In summary, combining the lens-imaging strategy with a nonlinear dynamically decreasing factor and the new update rules enables GJO to jump out of local optima, accelerate convergence, and improve convergence accuracy. In the exploitation phase of the golden jackal algorithm, the Lévy flight function can avoid falling into local optima to a certain extent. However, since the Lévy flight is characterized by short-distance jumps with occasional long-distance jumps, GJO still falls into local optima in some numerical optimizations, and its effect is significantly reduced on high-dimensional functions in particular. In this regard, the dynamic lens-imaging learning strategy is used to find the opposite of the current global optimal solution, increase the population's diversity, and retain the better one by comparing the fitness function values. In the exploitation phase, the positions of male and female jackals are updated by the new update rules. The pseudo-code of LSGJO is shown in Algorithm 2 and the flowchart in Figure 3.
Algorithm 2: The pseudo-code of LSGJO
Inputs: The population size N and maximum number of iterations T
Outputs: The location of prey and its fitness value
Initialize the random prey population Y i (i = 1, 2, …, N)
While (t < T)
Calculate the fitness values of prey
Y 1 = best prey individual (Male Jackal Position)
Y 2 = second best prey individual (Female Jackal Position)
    Obtain Y 1 * by Equation (14)
    Calculate the fitness function values of Y 1 and Y 1 * , set the better one as Y 1
for (each prey individual)
Update the evading energy “E” using Equations (4) and (5)
Update “rl” using Equations (6) and (7)
If (|E| ≥ 1)        (Exploration phase)
Update the prey position using Equations (8), (19), and (20)
If (|E| < 1)        (Exploitation phase)
Update the prey position using Equations (8), (9), and (10)
end for
t = t + 1
end while
return Y 1

3.3. The Computational Complexity of LSGJO

The time complexity indirectly reflects the convergence rate of the algorithm. Suppose the time required to initialize the parameters (population size N, dimension d, coefficient E, rl, etc.) is γ 1 . According to Equation (7), the time needed for each dimension to update the position of the prey and the position of the golden jackal is γ 2 , and the time for solving the fitness value of the objective function is f (n); then, the time complexity of GJO is:
T_1(n) = O(γ_1 + N(d × γ_2 + f(n))) = O(d + f(n))   (21)
In the LSGJO algorithm, it is assumed that the initialization parameters (population size N, dimension d, τ , x 1 , x 2 coefficient E, rl, etc.) are γ 3 , and the time required to perform the lens-imaging learning strategy is γ 4 . The time required to execute the greedy mechanism is γ 5 . According to Equation (7), the time needed for each dimension to update the position of the prey and the position of the golden jackal is γ 6 ; then, the time complexity of the LSGJO is:
T_2(n) = O(γ_3 + γ_5 + N(d × γ_6 + γ_4 + f(n))) = O(d + f(n))   (22)
The LSGJO proposed in this paper has the same time complexity as GJO:
T_1(n) = T_2(n)   (23)
In summary, the LSGJO does not increase the time complexity.

4. Simulation and Result Analysis

To verify the performance of LSGJO, this study uses 23 benchmark functions commonly used in the literature [2,35], which are listed in Table 1. The functions F1~F7 are high-dimensional unimodal functions with a single global optimal solution; they are used to test the convergence rate of search algorithms. The functions F8~F13 are high-dimensional multimodal functions with a single global optimum and multiple locally optimal solutions; they are designed to test the search capacities of optimization algorithms. The functions F14~F23 are low-dimensional multimodal functions with a small number of local minima. The range indicates the solution space, and F_min denotes the optimal value. To verify the robustness of LSGJO, the 13 functions F1~F13 were also tested with 100 and 500 dimensions.
All experiments were conducted with the same environment configuration, and all algorithms were implemented in MATLAB R2016b on Windows 10 (64-bit) with an Intel(R) Core i5-9400F CPU at 2.9 GHz and 16 GB of RAM.
In this paper, some novel swarm intelligence algorithms, including the grey wolf optimizer (GWO) [38], Harris's hawk optimization (HHO), the chimp optimization algorithm (ChoA) [39], golden jackal optimization (GJO), the equilibrium optimizer (EO), the whale optimization algorithm (WOA), the salp swarm algorithm (SSA), the snake optimizer (SO) [40], particle swarm optimization (PSO) [41], modified particle swarm optimization (MPSO), and selective opposition-based grey wolf optimization (SOGWO), are compared with the improved algorithm.
The parameters of all the comparison algorithms are shown in Table 2. In order to ensure the fairness of the experimental results, the population size of each algorithm was set to 30, and the maximum number of iterations was set to 500. Each algorithm ran 30 times independently, and its average and standard deviation were recorded.

4.1. Comparison and Analysis with Metaheuristic Algorithms

The experimental results of LSGJO and the 11 comparison algorithms on the 23 benchmark functions are shown in Table 3. As can be seen from the mean and standard deviation, LSGJO performs better than GJO on almost every function. Compared with the other algorithms, LSGJO ranks first on all test functions except F6, F12, F14, and F20 in terms of average ranking. In all benchmark function tests, the mean and standard deviation of the LSGJO results are small, indicating that its performance is the best. The convergence curves in Figure 4 show that LSGJO converges to the optimal solution much faster than the other algorithms.

4.2. Experimental Analysis of the Algorithm in Different Dimensions of Function

As the dimension of the function increases, the computational cost increases exponentially. With the dimensions set to 100 and 500, the other settings are as described above. As can be seen from Table 4 and Table 5, LSGJO can obtain the optimal solutions in both 100 and 500 dimensions. To further observe the performance of LSGJO, the 100-dimensional and 500-dimensional convergence curves are shown in Figure 5 and Figure 6, respectively. In both figures, LSGJO converges faster than the other algorithms on functions F1–F13, and its convergence accuracy is higher. The results show that LSGJO has better robustness than the other comparison algorithms.
Multidimensional testing not only reflects the robustness of the algorithm but also has practical significance. The traveling salesman problem (TSP) is a classic NP-complete problem that aims to minimize the path traversing all cities; when there are many cities, the algorithm needs to be able to solve multidimensional problems. When swarm intelligence algorithms are used to optimize the weights and thresholds of multilayer neural networks and the number of layers is large, the number of variables can exceed 500, so the algorithm needs to be able to solve 500-dimensional problems. When solving large-scale job-shop scheduling problems with many jobs and machines, the algorithm likewise needs to handle multidimensional problems, as it does when used for wireless sensor coverage optimization over a large coverage area. In addition, swarm intelligence algorithms are also used in assembly sequence and process planning, where, under certain conditions, the ability to deal with multidimensional variables is also required.

4.3. Convergence Behavior Analysis

This experiment is used to observe the convergence behavior of LSGJO with 30 dimensions and 500 iterations. The convergence process of the LSGJO is shown in Figure 7. The diagram in the first column is a three-dimensional plot of the benchmark function. The diagram in the second column is the convergence curve of the LSGJO, which is the optimal value of the current iteration. It can be seen that LSGJO converges quickly on the unimodal function, and the ladder shape appears on the multimodal function, which shows that the improved algorithm has better exploration ability and exploitation ability. The diagram in the third column is the trajectory of the first golden jackal in the first dimension. The significant fluctuation at the beginning is due to the global optimization in the early iteration stage. The trajectory fluctuates significantly in the later stage of the iteration because of the dynamic lens-imaging learning strategy added, which can avoid falling into iterative stagnation. The fourth column diagram is the average fitness of the overall solution, which is used to evaluate the overall performance of the population. The curve will be relatively high in the initial iteration, and the average fitness will likely be stable as the number of iterations increases. The fifth column diagram shows the historical position of the search agent in the iterative process. In the search history of functions F1–F4 and functions F9–F11, the point positions are more clustered, indicating that the fitness of the search agent is small, and the next iteration will be a local search in this area. In the search history of functions F5, F6, and F16, many points are scattered, indicating that if the optimal value is not found quickly, other search agents will continue to search for the optimal solution.

4.4. Statistical Analysis

In the statistical processing of the experimental data, the average value and standard deviation of each experiment have been calculated and can be used to judge the algorithms' quality. To further verify the significant differences between the proposed algorithm and the other algorithms, the Wilcoxon rank-sum test is performed at a significance level of 0.05 [42]. If the p-value is greater than 0.05, the performance of the two algorithms is considered similar, and the values are underlined. The performance of all algorithms is ranked by the Friedman test [43]. The results of the Wilcoxon rank-sum test are shown in Table 6, where NaN indicates that significance cannot be determined, and the total number of significant differences is given in the last column. On F9 and F11, some algorithms show no significant difference in 30 and 100 dimensions but do show significant differences in 500 dimensions, indicating that the other algorithms perform poorly in high dimensions while LSGJO performs well. The Friedman ranking shows that LSGJO ranks first on both low-dimensional and high-dimensional functions.
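Both tests are available off the shelf; the snippet below shows one way to reproduce them in Python with SciPy. The arrays are placeholders standing in for the 30 recorded best values per algorithm, not the paper's data.

import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(0)
lsgjo_runs = rng.random(30) * 1e-12    # placeholder for 30 independent runs of LSGJO
rival_runs = rng.random(30) * 1e-5     # placeholder for a comparison algorithm

stat, p = ranksums(lsgjo_runs, rival_runs)   # Wilcoxon rank-sum test
print("significant" if p < 0.05 else "similar")

# Friedman test across three or more algorithms evaluated on the same runs:
chi2, p_f = friedmanchisquare(lsgjo_runs, rival_runs, rng.random(30))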

5. Real-World Engineering Design Problems

In order to verify the optimization performance of LSGJO in real-world engineering design problems, this paper introduces three constraint problems, namely the speed reducer design problem [44], the gear train design problem [45], and the multiple-disk clutch design problem [46]. The number of iterations for all algorithms is set to 500, and the population size is set to 30.
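The paper does not spell out how the constraints are handled inside the optimizer, so the sketch below uses a common static-penalty wrapper as one plausible scheme for feeding the constrained models of Appendix A to LSGJO; the name penalized and the weight value are illustrative assumptions.

def penalized(obj, constraints, weight=1e6):
    # Static penalty: every violated inequality g(y) <= 0 adds weight * g(y)^2.
    def wrapped(y):
        violation = sum(max(0.0, g(y)) ** 2 for g in constraints)
        return obj(y) + weight * violation
    return wrapped

# Example: fitness = penalized(f, constraint_list) for the speed reducer of Equation (A1).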

5.1. Speed Reducer Design Problem

The speed reducer is widely used in mechanical products. Depending on the application, its main functions are to reduce the speed, increase the torque, and reduce the inertia of the movement mechanism. The main goal of this design problem is to minimize the weight of the speed reducer. The variables include the face width y_1, the module of teeth y_2, the number of teeth in the pinion y_3, the length of the first shaft between bearings y_4, the length of the second shaft between bearings y_5, the diameter of the first shaft y_6, and the diameter of the second shaft y_7. The speed reducer is shown in Figure 8. The mathematical model of the speed reducer design is stated in Appendix A, Equation (A1).
Table 7 shows the optimal solution to the speed reducer design problem obtained by 10 popular intelligent algorithms, namely the proposed algorithm, GWO, HHO, ChoA, GJO, EO, WOA, SO, MPSO, and SOGWO. The experimental results show that LSGJO is better than the other algorithms. The minimum weight of the speed reducer is f(y) = 2994.4711, with the optimal solution y = {3.5000, 0.7000, 17.0000, 7.3000, 7.7153, 3.3502, 5.2867}.

5.2. Gear Train Design Problem

The gear train plays an essential role in watches, clutches, differentials, machine tools, fans, mixers, and many other products. It is one of the most common mechanisms in the mechanical field. The main objective of the gear train design problem is to minimize the deviation of the gear ratio from the required value. The variables are the numbers of teeth of the four gears. The gear train is shown in Figure 9. The mathematical model of the gear train design is stated in Appendix A, Equation (A2).
Table 8 shows the optimal solution to the gear train design problem obtained by 14 popular intelligent algorithms, namely LSGJO, GWO, GJO, PSO, the bat algorithm (BA) [47], ant colony optimization (ACO) [48], simulated annealing (SA) [49], the flower pollination algorithm (FPA) [50], the dragonfly algorithm (DA) [51], the moth–flame optimization algorithm (MFO) [52], the polar bear optimization algorithm (PBO) [53], the firefly algorithm (FA) [54], SOGWO, and EO. Experimental results show that LSGJO is significantly superior to the other algorithms. The minimum gear-ratio error is f(y) = 2.63 × 10^−19, with the optimal solution y = {3.17 × 10^1, 1.20 × 10^1, 1.20 × 10^1, 3.15 × 10^1}.
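Equation (A2) is simple enough to check directly. The snippet below evaluates the gear-ratio error at the rounded solution from Table 8; because the table reports only three significant digits, the rounded point gives roughly 4.9 × 10^−9, and the reported 2.63 × 10^−19 corresponds to the unrounded optimum.

def gear_ratio_error(y):
    # Equation (A2): squared deviation of the gear ratio from 1/6.931
    return (1 / 6.931 - (y[2] * y[1]) / (y[0] * y[3])) ** 2

print(gear_ratio_error([31.7, 12.0, 12.0, 31.5]))  # about 4.9e-9 at three-digit precision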

5.3. Multiple-Disk Clutch Design Problem

The multiple-disk clutch is widely used in mechanical transmission systems in machine tools, steel rolling, metallurgical mining, handling, ship fishery equipment, etc. The main objective of the multiple-disk clutch design problem is to minimize the weight of the clutch. The variables include the internal surface area radius y 1 , the external surface radius y 2 , the disc thickness y 3 , the driving force y 4 , and the number of friction surfaces y 5 . The multiple-disk clutch is shown in Figure 10. The mathematical model of multiple-disk clutch design is stated in Appendix A, Equation (A3).
Table 9 shows the optimal solution for the multiple-disk clutch design obtained by 11 advanced intelligent algorithms, namely LSGJO, GWO, GJO, ChoA, the ant lion optimizer (ALO) [55], the multi-verse optimizer (MVO) [56], ACO, the sine cosine algorithm (SCA) [57], EO, SOGWO, and MPSO. The experimental results show that LSGJO is better than the other 10 algorithms; the minimum weight of the multiple-disk clutch is f(y) = 0.2352425, with the optimal solution y = {69.9999928, 90.0000000, 1.0000000, 945.1761801, 2.0000000}.

6. Discussion

Every metaheuristic should be critically evaluated [58]. Metaheuristic algorithms are created to solve practical problems. The algorithm proposed in this paper is only shown to be effective on numerical optimization problems; this does not prove that it is universal for other problems. Moreover, the proposed algorithm obtains only an approximate solution to the optimization problem, not the exact solution, which is worthy of further improvement and is also our future work. The improved algorithm is closer to the hunting behavior of the real golden jackal than the original algorithm, but there is still a gap between it and the real hunting state; how to establish a mathematical model consistent with the actual hunting state is worth studying. Determining which components of the algorithm affect which optimization problems is also an important issue that will help us improve the algorithm further.

7. Conclusions

In order to improve the efficiency of GJO in global numerical optimization and practical design problems, a hybrid GJO and golden sine algorithm with dynamic lens-imaging learning is proposed in this paper. LSGJO makes two effective improvements to GJO. First, a candidate for the optimal solution is generated by the dynamic lens-imaging learning strategy, which increases the possibility of finding the optimal value quickly. Second, novel dual golden spiral update rules are introduced in the exploitation stage to accelerate convergence and avoid falling into local optima. By combining the two proposed improvements, the algorithm's global and local search abilities are enhanced and balanced against each other. Twenty-three benchmark functions, with F1–F13 tested in three dimensions (30, 100, 500), were used to evaluate the performance of LSGJO. Experimental results and statistical data show that the proposed algorithm has a fast convergence speed, high convergence precision, strong robustness, and stable search performance. Compared with 11 state-of-the-art optimization algorithms, LSGJO is highly competitive. In addition, LSGJO was successfully applied to three real-world engineering problems in the mechanical field (speed reducer design, gear train design, and multiple-disk clutch design), and its optimization effect was better than that of the other algorithms.
In the future, the potential of LSGJO will be explored and focused on applications, and research in other directions, such as (1) path planning for unmanned aerial vehicles (UAVs), (2) the use of the oppositional learning method in the initialization stage, and (3) a multiobjective optimization algorithm based on LSGJO, will be studied and applied to feature selection and process parameter optimization.

Author Contributions

Conceptualization, P.Y., T.Z. and L.Y.; methodology, P.Y. and L.Y.; software, P.Y.; writing—original draft, P.Y.; writing—review and editing, P.Y., T.Z. and L.Y.; data curation, W.Z.; visualization W.Z.; supervision, T.Z. and Y.L.; funding acquisition, T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation (Grant No. 72061006, 71761007); Academic New Seedling Foundation Project of Guizhou Normal University (Grant No. Qianshixinmiao [2021] A30); Growth Project for Young Scientific and Technological Talents in General Colleges and Universities of Guizhou Province (Grant No. Qianjiaohe KY [2022] 167); Guizhou Provincial Science and Technology Projects (Grant No. Qiankehejichu-ZK [2022] General 320).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

See Equations (A1)–(A3)
y = (y_1, y_2, y_3, y_4, y_5, y_6, y_7)
Minimize: f(y) = 0.7854·y_1·y_2^2·(3.3333·y_3^2 + 14.9334·y_3 − 43.0934) − 1.508·y_1·(y_6^2 + y_7^2) + 7.4777·(y_6^3 + y_7^3) + 0.7854·(y_4·y_6^2 + y_5·y_7^2)
Subject to:
h_1(y) = 27/(y_1·y_2^2·y_3) − 1 ≤ 0
h_2(y) = 397.5/(y_1·y_2^2·y_3^2) − 1 ≤ 0
h_3(y) = 1.93·y_4^3/(y_2·y_6^4·y_3) − 1 ≤ 0
h_4(y) = 1.93·y_5^3/(y_2·y_7^4·y_3) − 1 ≤ 0
h_5(y) = [(745·y_4/(y_2·y_3))^2 + 16.9 × 10^6]^{1/2}/(110·y_6^3) − 1 ≤ 0
h_6(y) = [(745·y_5/(y_2·y_3))^2 + 157.5 × 10^6]^{1/2}/(85·y_7^3) − 1 ≤ 0
h_7(y) = y_2·y_3/40 − 1 ≤ 0
h_8(y) = 5·y_2/y_1 − 1 ≤ 0
h_9(y) = y_1/(12·y_2) − 1 ≤ 0
h_10(y) = (1.5·y_6 + 1.9)/y_4 − 1 ≤ 0
h_11(y) = (1.1·y_7 + 1.9)/y_5 − 1 ≤ 0
Variable range: 2.6 ≤ y_1 ≤ 3.6, 0.7 ≤ y_2 ≤ 0.8, 17 ≤ y_3 ≤ 28, 7.3 ≤ y_4 ≤ 8.3, 7.3 ≤ y_5 ≤ 8.3, 2.9 ≤ y_6 ≤ 3.9, 5.0 ≤ y_7 ≤ 5.5   (A1)
y = (y_1, y_2, y_3, y_4)
Minimize: f(y) = (1/6.931 − y_3·y_2/(y_1·y_4))^2
Variable range: 12 ≤ y_1 ≤ 60, 12 ≤ y_2 ≤ 60, 12 ≤ y_3 ≤ 60, 12 ≤ y_4 ≤ 60   (A2)
y = (y_1, y_2, y_3, y_4, y_5)
Minimize: f(y) = π·(y_2^2 − y_1^2)·y_3·(y_5 + 1)·ρ
Subject to:
g_1(y) = −p_max + p_rz ≤ 0
g_2(y) = p_rz·V_sr − V_sr,max·p_max ≤ 0
g_3(y) = ΔR + y_1 − y_2 ≤ 0
g_4(y) = −L_max + (y_5 + 1)·(y_3 + δ) ≤ 0
g_5(y) = s·M_s − M_h ≤ 0
g_6(y) = −T ≤ 0
g_7(y) = −V_sr,max + V_sr ≤ 0
g_8(y) = T − T_max ≤ 0
where: 60 ≤ y_1 ≤ 80, 90 ≤ y_2 ≤ 110, 1 ≤ y_3 ≤ 3, 0 ≤ y_4 ≤ 1000, 2 ≤ y_5 ≤ 9,
M_h = (2/3)·μ·y_4·y_5·(y_2^3 − y_1^3)/(y_2^2 − y_1^2) N·mm, ω = π·n/30 rad/s, A = π·(y_2^2 − y_1^2) mm^2,
p_rz = y_4/A N/mm^2, V_sr = π·R_sr·n/30 mm/s, R_sr = (2/3)·(y_2^3 − y_1^3)/(y_2^2 − y_1^2) mm, T = I_z·ω/(M_h + M_f),
ΔR = 20 mm, L_max = 30 mm, μ = 0.6, V_sr,max = 10 m/s, δ = 0.5 mm, s = 1.5, T_max = 15 s, n = 250 rpm, I_z = 55 kg·m^2, M_s = 40 Nm, M_f = 3 Nm, ρ = 0.0000078, p_max = 1   (A3)

References

  1. Wu, G. Across neighborhood search for numerical optimization. Inf. Sci. 2016, 329, 597–618. [Google Scholar] [CrossRef]
  2. Tanyildizi, E.; Demir, G. Golden Sine Algorithm: A Novel Math-Inspired Algorithm. Adv. Electr. Comput. Eng. 2017, 17, 71–78. [Google Scholar] [CrossRef]
  3. Sun, X.X.; Pan, J.S.; Chu, S.C.; Hu, P.; Tian, A.Q. A novel pigeon-inspired optimization with QUasi-Affine TRansformation Evolutionary algorithm for DV-Hop in wireless sensor networks. Int. J. Distrib. Sens. Netw. 2020, 16, 1–15. [Google Scholar] [CrossRef]
  4. Song, P.C.; Chu, S.C.; Pan, J.S.; Yang, H. Simplified Phasmatodea population evolution algorithm for optimization. Complex Intell. Syst. 2022, 8, 2749–2767. [Google Scholar] [CrossRef]
  5. Sun, L.; Koopialipoor, M.; Armaghani, D.J.; Tarinejad, R.; Tahir, M. Applying a meta-heuristic algorithm to predict and op-timize compressive strength of concrete samples. Eng. Comput. 2019, 37, 1133–1145. [Google Scholar] [CrossRef]
  6. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  8. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  9. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2019, 191, 105190. [Google Scholar] [CrossRef]
  10. Nematollahi, A.F.; Rahiminejad, A.; Vahidi, B. A novel physical based meta-heuristic optimization method known as lightning attachment procedure optimization. Appl. Soft Comput. 2017, 59, 596–621. [Google Scholar] [CrossRef]
  11. Ghasemi, M.; Davoudkhani, I.F.; Akbari, E.; Rahimnejad, A.; Ghavidel, S.; Li, L. A novel and effective optimization algorithm for global optimization and its engineering applications: Turbulent Flow of Water-based Optimization (TFWO). Eng. Appl. Artif. Intell. 2020, 92, 103666. [Google Scholar] [CrossRef]
  12. Malakar, S.; Ghosh, M.; Bhowmik, S.; Sarkar, R.; Nasipuri, M. A GA based hierarchical feature selection approach for handwritten word recognition. Neural Comput. Appl. 2019, 32, 2533–2552. [Google Scholar] [CrossRef]
  13. Wang, L.; Pan, J.; Jiao, L. The immune algorithm. Acta Electonica Sin. 2000, 28, 96. [Google Scholar]
  14. Valayapalayam Kittusamy, S.R.; Elhoseny, M.; Kathiresan, S. An enhanced whale optimization algorithm for vehicular communication networks. Int. J. Commun. Syst. 2019, 35, e3953. [Google Scholar] [CrossRef]
  15. Rather, S.A.; Bala, P.S. Constriction coefficient based particle swarm optimization and gravitational search algorithm for multilevel image thresholding. Expert Syst. 2021, 38, e12717. [Google Scholar] [CrossRef]
  16. Yan, Z.; Zhang, J.; Yang, Z.; Tang, J. Kapur’s Entropy for Underwater Multilevel Thresholding Image Segmentation Based on Whale Optimization Algorithm. IEEE Access 2020, 9, 41294–41319. [Google Scholar] [CrossRef]
  17. Navarro-Acosta, J.A.; García-Calvillo, I.D.; Reséndiz-Flores, E.O. Fault detection based on squirrel search algorithm and support vector data description for industrial processes. Soft Comput. 2022, 1–12. [Google Scholar] [CrossRef]
  18. Cao, X.; Yang, Z.Y.; Hong, W.C.; Xu, R.Z.; Wang, Y.T. Optimizing Berth-quay Crane Allocation considering Economic Factors Using Chaotic Quantum SSA. Appl. Artif. Intell. 2022, 36, 2073719. [Google Scholar] [CrossRef]
  19. Samieiyan, B.; MohammadiNasab, P.; Mollaei, M.A.; Hajizadeh, F.; Kangavari, M. Novel optimized crow search algorithm for feature selection. Expert Syst. Appl. 2022, 204, 117486. [Google Scholar] [CrossRef]
  20. Phung, M.D.; Ha, Q.P. Safety-enhanced UAV path planning with spherical vector-based particle swarm optimization. Appl. Soft Comput. 2021, 107, 107376. [Google Scholar] [CrossRef]
  21. Trojovský, P.; Dehghani, M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef] [PubMed]
  22. Yuan, Y.; Mu, X.; Shao, X.; Ren, J.; Zhao, Y.; Wang, Z. Optimization of an auto drum fashioned brake using the elite opposition-based learning and chaotic k-best gravitational search strategy based grey wolf optimizer algorithm. Appl. Soft Comput. 2022, 123, 108947. [Google Scholar] [CrossRef]
  23. Kamboj, V.K.; Nandi, A.; Bhadoria, A.; Sehgal, S. An intensify Harris Hawks optimizer for numerical and engineering optimization problems. Appl. Soft Comput. 2020, 89, 106018. [Google Scholar] [CrossRef]
  24. Rahati, A.; Rigi, E.M.; Idoumghar, L.; Brévilliers, M. Ensembles strategies for backtracking search algorithm with application to engineering design optimization problems. Appl. Soft Comput. 2022, 121, 108717. [Google Scholar] [CrossRef]
  25. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  26. Alkayem, N.F.; Shen, L.; Asteris, P.G.; Sokol, M.; Xin, Z.; Cao, M. A new self-adaptive quasi-oppositional stochastic fractal search for the inverse problem of structural damage assessment. Alex. Eng. J. 2022, 61, 1922–1936. [Google Scholar] [CrossRef]
  27. Tian, D.; Shi, Z. MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 2018, 41, 49–68. [Google Scholar] [CrossRef]
  28. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective Opposition based Grey Wolf Optimization. Expert Syst. Appl. 2020, 151, 113389. [Google Scholar] [CrossRef]
  29. Alkayem, N.F.; Cao, M.; Shen, L.; Fu, R.; Šumarac, D. The combined social engineering particle swarm optimization for real-world engineering problems: A case study of model-based structural health monitoring. Appl. Soft Comput. 2022, 123, 108919. [Google Scholar] [CrossRef]
  30. Wei, J.; Huang, H.; Yao, L.; Hu, Y.; Fan, Q.; Huang, D. New imbalanced fault diagnosis framework based on Cluster-MWMOTE and MFO-optimized LS-SVM using limited and complex bearing data. Eng. Appl. Artif. Intell. 2020, 96, 103966. [Google Scholar] [CrossRef]
  31. Fan, Q.; Huang, H.; Li, Y.; Han, Z.; Hu, Y.; Huang, D. Beetle antenna strategy based grey wolf optimization. Expert Syst. Appl. 2020, 165, 113882. [Google Scholar] [CrossRef]
  32. Fan, Q.; Huang, H.; Yang, K.; Zhang, S.; Yao, L.; Xiong, Q. A modified equilibrium optimizer using opposition-based learning and novel update rules. Expert Syst. Appl. 2021, 170, 114575. [Google Scholar] [CrossRef]
  33. Fan, Q.; Huang, H.; Chen, Q.; Yao, L.; Yang, K.; Huang, D. A modified self-adaptive marine predators algorithm: Framework and engineering ap-plications. Eng. Comput. 2021, 38, 3269–3294. [Google Scholar] [CrossRef]
  34. Wei, J.; Huang, H.; Yao, L.; Hu, Y.; Fan, Q.; Huang, D. NI-MWMOTE: An improving noise-immunity majority weighted minority oversampling technique for imbalanced classification problems. Expert Syst. Appl. 2020, 158, 113504. [Google Scholar] [CrossRef]
  35. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  36. Long, W.; Jiao, J.; Xu, M.; Tang, M.; Wu, T.; Cai, S. Lens-imaging learning Harris hawks optimizer for global optimization and its application to feature selection. Expert Syst. Appl. 2022, 202, 117255. [Google Scholar] [CrossRef]
  37. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
  38. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  39. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  40. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  41. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–35. [Google Scholar] [CrossRef]
  42. Maesono, Y. Competitors of the Wilcoxon signed rank test. Ann. Inst. Stat. Math. 1987, 39, 363–375. [Google Scholar] [CrossRef]
  43. Theodorsson-Norheim, E. Friedman and Quade tests: BASIC computer program to perform nonparametric two-way analysis of variance and multiple comparisons on ranks of several related samples. Comput. Biol. Med. 1987, 17, 85–99. [Google Scholar] [CrossRef]
  44. Gandomi, A.; Alavi, A.H. An introduction of krill herd algorithm for engineering optimization. J. Civ. Eng. Manag. 2015, 22, 302–310. [Google Scholar] [CrossRef]
  45. Sandgren, E. Nonlinear Integer and Discrete Programming in Mechanical Design Optimization. J. Mech. Des. 1990, 112, 223–229. [Google Scholar] [CrossRef]
  46. Osyczka, A.; Osyczka, A. Evolutionary Algorithms for Single and Multicriteria Design Optimization; Physica-Verlag: Heidelberg, Germany, 2002. [Google Scholar]
  47. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Frome, UK, 2010. [Google Scholar]
  48. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  49. Bertsimas, D.; Tsitsiklis, J. Simulated annealing. Stat. Sci. 1993, 8, 10–15. [Google Scholar] [CrossRef]
  50. Yang, X.S. Flower pollination algorithm for global optimization. In International Conference on Unconventional Computing and Natural Computation; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249. [Google Scholar]
  51. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  52. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  53. Połap, D.; Woźniak, M. Polar Bear Optimization Algorithm: Meta-Heuristic with Fast Population Movement and Dynamic Birth and Death Mechanism. Symmetry 2017, 9, 203. [Google Scholar] [CrossRef]
  54. Yang, X.S. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic Algorithms; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  55. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  56. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  57. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  58. Sörensen, K. Metaheuristics-the metaphor exposed. Int. Trans. Oper. Res. 2013, 22, 3–18. [Google Scholar] [CrossRef]
Figure 1. The principle of the lens-imaging learning strategy.
Figure 2. The principle of the dual golden spiral update rules.
Figure 3. Flowchart of LSGJO.
Figure 4. The convergence curves of the LSGJO and other comparison algorithms with Dim = 30.
Figure 5. The convergence curves of the LSGJO and other comparison algorithms with Dim = 100.
Figure 6. The convergence curves of the LSGJO and other comparison algorithms with Dim = 500.
Figure 7. The convergence curve, the trajectories, the average fitness history, and the search history of certain functions.
Figure 8. Speed reducer design problem.
Figure 9. Gear train design problem.
Figure 10. Multiple-disk clutch design problem.
Table 1. The benchmark functions.
Function | Dim | Range | F_min | Type
f_1(x) = \sum_{i=1}^{n} x_i^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30, 100, 500 | [−1.28, 1.28] | 0 | Unimodal
f_3(x) = \sum_{i=1}^{n} (\sum_{j=1}^{i} x_j)^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
f_4(x) = max_i {|x_i|, 1 ≤ i ≤ n} | 30, 100, 500 | [−100, 100] | 0 | Unimodal
f_5(x) = \sum_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | 30, 100, 500 | [−30, 30] | 0 | Unimodal
f_6(x) = \sum_{i=1}^{n} (⌊x_i + 0.5⌋)^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
f_7(x) = \sum_{i=1}^{n} i·x_i^4 + random[0, 1] | 30, 100, 500 | [−1.28, 1.28] | 0 | Unimodal
f_8(x) = \sum_{i=1}^{n} −x_i·sin(\sqrt{|x_i|}) | 30, 100, 500 | [−500, 500] | −418.9829 × n | Multimodal
f_9(x) = \sum_{i=1}^{n} [x_i^2 − 10·cos(2πx_i) + 10] | 30, 100, 500 | [−5.12, 5.12] | 0 | Multimodal
f_10(x) = −20·exp(−0.2·\sqrt{(1/n)\sum_{i=1}^{n} x_i^2}) − exp((1/n)\sum_{i=1}^{n} cos(2πx_i)) + 20 + e | 30, 100, 500 | [−32, 32] | 0 | Multimodal
f_11(x) = (1/4000)\sum_{i=1}^{n} x_i^2 − \prod_{i=1}^{n} cos(x_i/\sqrt{i}) + 1 | 30, 100, 500 | [−600, 600] | 0 | Multimodal
f_12(x) = (π/n){10·sin^2(πy_1) + \sum_{i=1}^{n−1} (y_i − 1)^2 [1 + 10·sin^2(πy_{i+1})] + (y_n − 1)^2} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k(−x_i − a)^m if x_i < −a | 30, 100, 500 | [−50, 50] | 0 | Multimodal
f_13(x) = 0.1{sin^2(3πx_1) + \sum_{i=1}^{n−1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30, 100, 500 | [−50, 50] | 0 | Multimodal
f_14(x) = (1/500 + \sum_{j=1}^{25} 1/(j + \sum_{i=1}^{2} (x_i − a_{ij})^6))^{−1} | 2 | [−65.536, 65.536] | 1 | Multimodal
f_15(x) = \sum_{i=1}^{11} [a_i − x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2 | 4 | [−5, 5] | 0.0003 | Multimodal
f_16(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1 x_2 − 4x_2^2 + 4x_2^4 | 2 | [−5, 5] | −1.0316 | Multimodal
f_17(x) = (x_2 − (5.1/(4π^2))x_1^2 + (5/π)x_1 − 6)^2 + 10(1 − 1/(8π))cos(x_1) + 10 | 2 | [−5, 5] | 0.398 | Multimodal
f_18(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1 x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1 x_2 + 27x_2^2)] | 2 | [−2, 2] | 3 | Multimodal
f_19(x) = −\sum_{i=1}^{4} c_i exp(−\sum_{j=1}^{3} a_{ij}(x_j − p_{ij})^2) | 3 | [0, 1] | −3.86 | Multimodal
f_20(x) = −\sum_{i=1}^{4} c_i exp(−\sum_{j=1}^{6} a_{ij}(x_j − p_{ij})^2) | 6 | [0, 1] | −3.32 | Multimodal
f_21(x) = −\sum_{i=1}^{5} [(X − a_i)(X − a_i)^T + c_i]^{−1} | 4 | [0, 10] | −10.1532 | Multimodal
f_22(x) = −\sum_{i=1}^{7} [(X − a_i)(X − a_i)^T + c_i]^{−1} | 4 | [0, 10] | −10.4029 | Multimodal
f_23(x) = −\sum_{i=1}^{10} [(X − a_i)(X − a_i)^T + c_i]^{−1} | 4 | [0, 10] | −10.5364 | Multimodal
Table 2. Parameter settings of various algorithms.
Algorithm | Parameter Settings
GWO | a = 2 (linearly decreased over iterations)
HHO | J ∈ [0, 2]
ChoA | a = 2 (linearly decreased over iterations), m = chaos(3, 1, 1)
GJO | a = 1.5 (linearly decreased over iterations)
EO | a1 = 2, a2 = 1, GP = 0.5, t = 1 (nonlinearly decreased over iterations)
WOA | b = 1
SSA | c1 = 2 (nonlinearly decreased over iterations)
SO | a = 2 (linearly decreased over iterations)
PSO | W = 0.9, c1 = 2, c2 = 2
MPSO | Wmax = 0.9, Wmin = 0.4, c1 = 2, c2 = 2
SOGWO | a = 2 (linearly decreased over iterations)
LSGJO | a = 1.5 (linearly decreased over iterations)
Table 3. Results and comparison of different algorithms on 23 benchmark functions with Dim = 30. The best results of the experiments are shown in bold.
Table 3. Results and comparison of different algorithms on 23 benchmark functions with Dim = 30. The best results of the experiments are shown in bold.
F(x) | Item | GWO | HHO | ChoA | GJO | EO | WOA | SSA | SO | PSO | MPSO | SOGWO | LSGJO
F1 | Ave | 1.34 × 10−27 | 3.65 × 10−94 | 2.98 × 10−7 | 2.66 × 10−54 | 3.13 × 10−41 | 7.02 × 10−73 | 1.93 × 10−7 | 4.32 × 10−94 | 2.96 × 102 | 1.00 | 3.87 × 10−27 | 0
F1 | Std | 1.53 × 10−27 | 2.00 × 10−93 | 5.47 × 10−7 | 8.99 × 10−54 | 5.33 × 10−41 | 2.85 × 10−72 | 2.61 × 10−7 | 2.07 × 10−93 | 1.72 × 102 | 2.86 | 1.30 × 10−26 | 0
F1 | rank | 7.0 | 2.0 | 10.0 | 5.0 | 6.0 | 4.0 | 9.0 | 3.0 | 12.0 | 11.0 | 8.0 | 1.0
F2 | Ave | 1.13 × 10−16 | 1.75 × 10−50 | 2.64 × 10−6 | 2.97 × 10−32 | 6.19 × 10−24 | 9.23 × 10−51 | 2.08 | 7.39 × 10−43 | 3.34 × 101 | 2.54 × 101 | 9.35 × 10−17 | 0
F2 | Std | 1.02 × 10−16 | 4.21 × 10−50 | 2.65 × 10−6 | 6.64 × 10−32 | 6.39 × 10−24 | 2.67 × 10−50 | 1.57 | 1.61 × 10−42 | 1.43 × 101 | 1.65 × 101 | 6.13 × 10−17 | 0
F2 | rank | 8.0 | 3.0 | 9.0 | 5.0 | 6.0 | 2.0 | 10.0 | 4.0 | 11.5 | 11.5 | 7.0 | 1.0
F3 | Ave | 6.17 × 10−5 | 8.14 × 10−77 | 2.24 × 101 | 3.81 × 10−17 | 2.96 × 10−9 | 4.43 × 104 | 1.66 × 103 | 1.41 × 10−57 | 1.12 × 104 | 1.49 × 104 | 1.10 × 10−4 | 0
F3 | Std | 2.66 × 10−4 | 3.96 × 10−76 | 5.57 × 101 | 1.26 × 10−16 | 8.69 × 10−9 | 1.87 × 104 | 8.31 × 102 | 5.97 × 10−57 | 1.03 × 104 | 7.41 × 103 | 2.64 × 10−4 | 0
F3 | rank | 6.5 | 2.0 | 8.0 | 4.0 | 5.0 | 12.0 | 3.0 | 4.0 | 10.5 | 10.5 | 6.5 | 1.0
F4 | Ave | 7.43 × 10−7 | 4.58 × 10−47 | 1.27 × 10−1 | 1.36 × 10−14 | 3.56 × 10−10 | 4.64 × 101 | 1.06 × 101 | 3.25 × 10−40 | 9.72 | 1.95 × 101 | 1.16 × 10−6 | 0
F4 | Std | 6.07 × 10−7 | 2.50 × 10−46 | 1.43 × 10−1 | 5.72 × 10−14 | 8.51 × 10−10 | 2.63 × 101 | 3.37 | 9.32 × 10−40 | 2.68 | 6.00 | 1.18 × 10−6 | 0
F4 | rank | 6.0 | 2.0 | 8.0 | 4.0 | 5.0 | 12.0 | 10.0 | 3.0 | 9.0 | 11.0 | 7.0 | 1.0
F5 | Ave | 2.71 × 101 | 1.86 × 10−2 | 2.88 × 101 | 2.79 × 101 | 2.53 × 101 | 2.79 × 101 | 3.98 × 102 | 1.80 × 101 | 1.85 × 104 | 2.78 × 104 | 2.72 × 101 | 1.83 × 10−2
F5 | Std | 8.68 × 10−1 | 2.26 × 10−2 | 1.98 × 10−1 | 7.20 × 10−1 | 1.64 × 10−1 | 5.37 × 10−1 | 1.27 × 103 | 1.24 × 101 | 1.43 × 104 | 4.15 × 104 | 7.65 × 10−1 | 3.27 × 10−2
F5 | rank | 6.5 | 1.5 | 6.5 | 6.75 | 3.5 | 6.25 | 10.0 | 6.0 | 11.0 | 12.0 | 6.5 | 1.5
F6 | Ave | 7.79 × 10−1 | 1.16 × 10−4 | 3.90 | 2.77 | 8.70 × 10−6 | 3.97 × 10−1 | 2.77 × 10−7 | 7.37 × 10−1 | 3.54 × 102 | 1.21 | 7.77 × 10−1 | 5.84 × 10−4
F6 | Std | 3.60 × 10−1 | 1.51 × 10−4 | 3.82 × 10−1 | 4.87 × 10−1 | 5.34 × 10−6 | 2.47 × 10−1 | 8.79 × 10−7 | 5.84 × 10−1 | 1.56 × 102 | 3.99 | 3.63 × 10−1 | 1.10 × 10−3
F6 | rank | 7.0 | 3.0 | 9.5 | 9.5 | 2.0 | 5.0 | 1.0 | 8.0 | 12.0 | 10.0 | 7.0 | 4.0
F7 | Ave | 2.23 × 10−3 | 1.64 × 10−4 | 1.78 × 10−3 | 5.14 × 10−4 | 1.39 × 10−3 | 3.46 × 10−3 | 1.65 × 10−1 | 2.99 × 10−4 | 1.53 | 3.72 × 10−1 | 1.77 × 10−3 | 1.47 × 10−4
F7 | Std | 1.05 × 10−3 | 2.09 × 10−4 | 2.04 × 10−3 | 4.42 × 10−4 | 6.33 × 10−4 | 6.16 × 10−3 | 6.79 × 10−2 | 2.89 × 10−4 | 3.90 | 1.07 | 9.29 × 10−4 | 1.45 × 10−4
F7 | rank | 7.5 | 2.0 | 7.5 | 4.0 | 5.0 | 9.0 | 10.0 | 3.0 | 12.0 | 11.0 | 6.0 | 1.0
F8 | Ave | −5.83 × 103 | −1.26 × 104 | −5.73 × 103 | −3.85 × 103 | −9.23 × 103 | −1.00 × 104 | −7.43 × 103 | −1.25 × 104 | −7.37 × 103 | −8.82 × 103 | −6.02 × 103 | −1.26 × 104
F8 | Std | 8.82 × 102 | 6.06 × 101 | 6.23 × 101 | 1.14 × 103 | 8.11 × 102 | 1.89 × 103 | 8.41 × 102 | 1.81 × 102 | 9.01 × 102 | 6.26 × 102 | 8.98 × 102 | 1.30 × 10−1
F8 | rank | 8.5 | 1.75 | 6.5 | 11.0 | 5.0 | 7.0 | 6.5 | 8.0 | 8.5 | 5.0 | 8.0 | 1.25
F9 | Ave | 2.53 | 0 | 2.86 | 0 | 0 | 3.32 × 10−2 | 5.23 × 101 | 2.20 | 2.20 × 102 | 1.32 × 102 | 3.06 | 0
F9 | Std | 4.66 | 0 | 2.68 | 0 | 0 | 1.82 × 10−1 | 1.64 × 101 | 6.11 | 3.06 × 101 | 3.16 × 101 | 4.72 | 0
F9 | rank | 7.0 | 2.5 | 7.0 | 2.5 | 2.5 | 5.0 | 10.0 | 7.5 | 11.5 | 11.5 | 8.5 | 2.5
F10 | Ave | 1.03 × 10−13 | 8.88 × 10−16 | 2.00 × 101 | 7.40 × 10−15 | 8.70 × 10−15 | 4.09 × 10−15 | 2.78 | 2.83 × 10−1 | 6.05 | 3.52 | 1.03 × 10−13 | 8.88 × 10−16
F10 | Std | 1.58 × 10−14 | 0 | 1.22 × 10−3 | 1.35 × 10−15 | 2.17 × 10−15 | 3.14 × 10−15 | 9.44 × 10−1 | 7.38 × 10−1 | 1.91 | 3.16 | 1.68 × 10−14 | 0
F10 | rank | 6.25 | 1.5 | 10.0 | 3.5 | 4.5 | 4.0 | 9.5 | 8.5 | 11.0 | 11.0 | 7.0 | 1.5
F11 | Ave | 2.58 × 10−3 | 0 | 1.09 × 10−2 | 0 | 0 | 1.47 × 10−2 | 1.80 × 10−2 | 7.95 × 10−2 | 3.79 | 2.49 × 10−1 | 2.31 × 10−3 | 0
F11 | Std | 5.54 × 10−3 | 0 | 2.44 × 10−2 | 0 | 0 | 4.66 × 10−2 | 1.13 × 10−2 | 2.05 × 10−1 | 1.44 | 2.31 × 10−1 | 5.37 × 10−3 | 0
F11 | rank | 6.0 | 2.5 | 7.5 | 2.5 | 2.5 | 8.5 | 8.0 | 10.0 | 12.0 | 11.0 | 5.0 | 2.5
F12 | Ave | 4.65 × 10−2 | 7.18 × 10−6 | 5.63 × 10−1 | 2.59 × 10−1 | 3.46 × 10−3 | 2.87 × 10−2 | 8.57 | 8.59 × 10−2 | 5.89 | 3.67 | 5.01 × 10−2 | 1.50 × 10−5
F12 | Std | 2.74 × 10−2 | 1.06 × 10−5 | 2.40 × 10−1 | 1.48 × 10−1 | 1.89 × 10−2 | 2.59 × 10−2 | 4.28 | 1.33 × 10−1 | 2.80 | 1.86 | 2.90 × 10−2 | 2.13 × 10−5
F12 | rank | 5.0 | 1.0 | 9.0 | 8.0 | 3.0 | 4.0 | 12.0 | 7.0 | 11.0 | 10.0 | 6.0 | 2.0
F13 | Ave | 6.08 × 10−1 | 1.14 × 10−4 | 2.78 | 1.64 | 1.64 × 10−2 | 5.03 × 10−1 | 1.79 × 101 | 2.66 × 10−1 | 2.33 × 101 | 9.20 | 6.34 × 10−1 | 8.71 × 10−5
F13 | Std | 2.28 × 10−1 | 1.45 × 10−4 | 1.38 × 10−1 | 2.19 × 10−1 | 4.36 × 10−2 | 2.37 × 10−1 | 1.77 × 101 | 5.68 × 10−1 | 2.59 × 101 | 6.25 | 2.65 × 10−1 | 1.23 × 10−4
F13 | rank | 6.0 | 2.0 | 6.5 | 6.5 | 3.0 | 6.0 | 11.0 | 6.5 | 12.0 | 10.0 | 7.5 | 1.0
F14 | Ave | 4.56 | 1.13 | 9.98 × 10−1 | 5.82 | 9.98 × 10−1 | 3.09 | 1.40 | 9.99 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1 | 3.36 | 1.36
F14 | Std | 4.20 | 3.44 × 10−1 | 3.20 × 10−4 | 4.45 | 1.75 × 10−16 | 3.28 | 7.64 × 10−1 | 4.13 × 10−3 | 2.88 × 10−10 | 9.22 × 10−17 | 3.31 | 1.02
F14 | rank | 11.0 | 6.0 | 3.25 | 12.0 | 2.25 | 9.0 | 7.5 | 5.0 | 2.75 | 1.75 | 10.0 | 7.5
F15 | Ave | 7.76 × 10−3 | 4.23 × 10−4 | 1.32 × 10−3 | 2.46 × 10−3 | 6.36 × 10−3 | 7.07 × 10−4 | 1.54 × 10−3 | 6.18 × 10−4 | 1.23 × 10−2 | 4.07 × 10−3 | 7.06 × 10−3 | 3.86 × 10−4
F15 | Std | 9.75 × 10−3 | 2.68 × 10−4 | 5.84 × 10−5 | 6.07 × 10−3 | 9.32 × 10−3 | 4.22 × 10−4 | 3.57 × 10−3 | 3.63 × 10−4 | 9.54 × 10−3 | 7.42 × 10−3 | 9.57 × 10−3 | 5.82 × 10−5
F15 | rank | 11.5 | 2.5 | 3.5 | 7.0 | 9.0 | 4.5 | 6.0 | 3.5 | 11.0 | 8.0 | 10.5 | 1.0
F16 | Ave | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
F16 | Std | 2.01 × 10−8 | 1.13 × 10−9 | 2.23 × 10−5 | 1.86 × 10−7 | 6.32 × 10−16 | 6.08 × 10−10 | 4.34 × 10−14 | 5.45 × 10−16 | 8.05 × 10−5 | 5.98 × 10−16 | 1.81 × 10−8 | 1.82 × 10−4
F16 | rank | 7.25 | 6.25 | 8.25 | 7.75 | 4.75 | 5.75 | 5.25 | 3.75 | 8.75 | 4.25 | 6.75 | 9.25
F17 | Ave | 3.98 × 10−1 | 3.99 × 10−1 | 3.99 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1 | 3.98 × 10−1
F17 | Std | 8.74 × 10−7 | 7.79 × 10−4 | 7.19 × 10−4 | 7.26 × 10−6 | 0 | 5.08 × 10−6 | 1.90 × 10−14 | 0 | 3.62 × 10−5 | 0 | 3.09 × 10−6 | 3.31 × 10−4
F17 | rank | 5.25 | 11.75 | 11.25 | 6.75 | 3.75 | 6.25 | 4.75 | 3.75 | 7.25 | 3.75 | 5.75 | 7.75
F18 | Ave | 5.70 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 5.70 | 3.00 | 3.00 | 3.00 | 3.00
F18 | Std | 1.48 × 101 | 1.05 × 10−6 | 2.20 × 10−4 | 4.38 × 10−6 | 1.28 × 10−15 | 1.68 × 10−4 | 3.32 × 10−13 | 8.24 | 6.65 × 10−4 | 1.07 × 10−15 | 5.51 × 10−5 | 3.38 × 10−3
F18 | rank | 11.75 | 4.75 | 6.75 | 5.25 | 3.75 | 6.25 | 4.25 | 11.25 | 7.25 | 3.25 | 5.75 | 7.75
F19 | Ave | −3.86 | −3.86 | −3.85 | −3.86 | −3.86 | −3.85 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86
F19 | Std | 1.99 × 10−3 | 5.27 × 10−3 | 1.84 × 10−3 | 3.86 × 10−3 | 2.58 × 10−15 | 2.46 × 10−2 | 7.77 × 10−12 | 2.43 × 10−15 | 3.72 × 10−3 | 2.68 × 10−15 | 2.74 × 10−3 | 3.60 × 10−3
F19 | rank | 5.75 | 8.25 | 8.25 | 7.75 | 3.75 | 11.75 | 4.75 | 3.25 | 7.25 | 4.75 | 6.25 | 6.75
F20 | Ave | −3.26 | −3.10 | −2.66 | −3.09 | −3.27 | −3.20 | −3.22 | −3.31 | −2.96 | −3.27 | −3.23 | −3.19
F20 | Std | 9.27 × 10−2 | 9.27 × 10−2 | 4.55 × 10−1 | 2.06 × 10−1 | 5.92 × 10−2 | 2.25 × 10−1 | 5.83 × 10−2 | 3.63 × 10−2 | 5.18 × 10−1 | 6.03 × 10−2 | 7.96 × 10−2 | 7.58 × 10−2
F20 | rank | 5.75 | 8.25 | 11.5 | 9.5 | 2.75 | 8.5 | 4.0 | 1.0 | 11.5 | 3.25 | 5.5 | 6.5
F21 | Ave | −9.64 | −5.22 | −3.18 | −8.52 | −8.29 | −7.77 | −8.48 | −1.01 × 101 | −9.74 | −6.82 | −9.81 | −1.02 × 101
F21 | Std | 1.55 | 9.18 × 10−1 | 2.05 | 2.85 | 2.74 | 2.81 | 2.90 | 3.07 × 10−1 | 1.77 | 3.49 | 1.28 | 3.30 × 10−3
F21 | rank | 5.5 | 7.0 | 9.5 | 8.0 | 8.0 | 9.0 | 9.0 | 2.0 | 5.0 | 11.0 | 3.5 | 1.0
F22 | Ave | −9.87 | −5.58 | −4.05 | −9.68 | −8.77 | −7.52 | −9.29 | −1.03 × 101 | −9.72 | −8.07 | −9.87 | −1.04 × 101
F22 | Std | 1.62 | 1.51 | 1.78 | 1.83 | 2.79 | 3.20 | 2.59 | 3.07 × 10−1 | 2.12 | 3.20 | 1.62 | 2.36 × 10−3
F22 | rank | 4.0 | 7.0 | 9.0 | 6.5 | 9.0 | 10.75 | 8.0 | 2.0 | 6.5 | 10.25 | 4.0 | 1.0
F23 | Ave | −1.03 × 101 | −5.30 | −4.46 | −1.03 × 101 | −9.43 | −6.77 | −8.42 | −1.04 × 101 | −1.05 × 101 | −8.45 | −1.01 × 101 | −1.05 × 101
F23 | Std | 1.48 | 9.40 × 10−1 | 1.40 | 9.79 × 10−1 | 2.57 | 3.03 | 3.36 | 3.09 × 10−1 | 2.22 × 10−5 | 3.30 | 1.75 | 3.99 × 10−3
F23 | rank | 5.75 | 7.5 | 9.0 | 4.75 | 8.0 | 10.0 | 10.5 | 3.0 | 1.25 | 9.5 | 7.0 | 1.75
Total Rank | | 160.75 | 96.0 | 185.25 | 147.5 | 108 | 166.5 | 174.0 | 117.0 | 200.5 | 195.25 | 155.0 | 71.5
Final Rank | | 8 | 2 | 10 | 5 | 3 | 6 | 9 | 4 | 12 | 11 | 7 | 1
Table 4. Results and comparison of different algorithms on 13 benchmark functions with Dim = 100. The best results of the experiments are shown in bold.
F(x) | Item | GWO | HHO | ChoA | GJO | EO | WOA | SSA | SO | PSO | MPSO | SOGWO | LSGJO
F1 | Ave | 2.64 × 10−12 | 2.77 × 10−94 | 2.15 × 10−1 | 9.33 × 10−28 | 4.13 × 10−29 | 3.87 × 10−70 | 1.45 × 103 | 5.20 × 10−82 | 3.96 × 103 | 3.29 × 104 | 2.43 × 10−12 | 0
F1 | Std | 2.73 × 10−12 | 1.45 × 10−93 | 2.12 × 10−1 | 1.85 × 10−27 | 5.48 × 10−29 | 1.85 × 10−69 | 4.54 × 102 | 1.09 × 10−81 | 1.37 × 103 | 1.11 × 104 | 1.78 × 10−12 | 0
F1 | rank | 8.0 | 2.0 | 9.0 | 6.0 | 5.0 | 4.0 | 10.0 | 3.0 | 11.0 | 12.0 | 7.0 | 1.0
F2 | Ave | 4.25 × 10−8 | 2.34 × 10−49 | 3.34 × 10−2 | 1.06 × 10−17 | 2.14 × 10−17 | 6.02 × 10−51 | 4.81 × 101 | 1.32 × 10−35 | 1.33 × 102 | 2.86 × 102 | 4.12 × 10−8 | 0
F2 | Std | 1.37 × 10−8 | 8.16 × 10−49 | 1.87 × 10−2 | 8.02 × 10−18 | 1.44 × 10−17 | 2.02 × 10−50 | 8.35 | 1.59 × 10−35 | 4.04 × 101 | 3.80 × 101 | 1.40 × 10−8 | 0
F2 | rank | 7.5 | 3.0 | 9.0 | 5.0 | 6.0 | 2.0 | 10.0 | 4.0 | 11.5 | 11.5 | 7.5 | 1.0
F3 | Ave | 8.96 × 102 | 8.65 × 10−52 | 6.10 × 104 | 1.51 | 8.96 × 101 | 1.15 × 106 | 5.63 × 104 | 3.16 × 10−38 | 1.24 × 105 | 2.35 × 105 | 1.41 × 103 | 0
F3 | Std | 1.45 × 103 | 4.74 × 10−51 | 2.56 × 104 | 5.10 | 4.05 × 102 | 3.22 × 105 | 2.59 × 104 | 1.70 × 10−37 | 6.73 × 104 | 3.96 × 104 | 1.20 × 103 | 0
F3 | rank | 6.5 | 2.0 | 8.5 | 4.0 | 5.0 | 12.0 | 8.5 | 3.0 | 10.5 | 10.5 | 6.5 | 1.0
F4 | Ave | 1.09 | 5.52 × 10−49 | 7.56 × 101 | 6.67 | 6.75 × 10−2 | 7.91 × 101 | 2.69 × 101 | 1.04 × 10−36 | 2.33 × 101 | 6.70 × 101 | 8.12 × 10−1 | 0
F4 | Std | 1.95 | 1.74 × 10−48 | 1.49 × 101 | 9.02 | 3.55 × 10−1 | 2.26 × 101 | 3.76 | 1.31 × 10−36 | 4.53 | 5.36 | 6.49 × 10−1 | 0
F4 | rank | 6.0 | 2.0 | 11.0 | 8.5 | 4.0 | 12.0 | 8.0 | 3.0 | 8.0 | 9.5 | 5.0 | 1.0
F5 | Ave | 9.76 × 101 | 4.00 × 10−2 | 1.54 × 102 | 9.82 × 101 | 9.65 × 101 | 9.82 × 101 | 1.53 × 105 | 7.38 × 101 | 5.64 × 105 | 2.70 × 107 | 9.80 × 101 | 3.30 × 10−2
F5 | Std | 7.59 × 10−1 | 8.62 × 10−2 | 1.25 × 102 | 5.60 × 10−1 | 9.08 × 10−1 | 2.20 × 10−1 | 6.64 × 104 | 4.05 × 101 | 4.88 × 105 | 3.12 × 107 | 6.15 × 10−1 | 4.06 × 10−2
F5 | rank | 5.5 | 2.0 | 9.0 | 5.75 | 5.5 | 5.25 | 10.0 | 5.5 | 11.0 | 12.0 | 5.5 | 1.0
F6 | Ave | 9.77 | 4.26 × 10−4 | 2.22 × 101 | 1.62 × 101 | 4.03 | 4.34 | 1.52 × 103 | 1.32 × 101 | 4.75 × 103 | 2.79 × 104 | 1.07 × 101 | 2.83 × 10−3
F6 | Std | 1.01 | 6.28 × 10−4 | 1.74 | 9.56 × 10−1 | 8.06 × 10−1 | 1.42 | 4.77 × 102 | 1.05 × 101 | 2.30 × 103 | 1.03 × 104 | 1.01 | 4.29 × 10−3
F6 | rank | 5.25 | 1.0 | 8.5 | 6.0 | 3.0 | 5.5 | 10.0 | 8.0 | 11.0 | 12.0 | 5.75 | 2.0
F7 | Ave | 6.43 × 10−3 | 2.01 × 10−4 | 1.36 × 10−2 | 1.37 × 10−3 | 2.30 × 10−3 | 4.23 × 10−3 | 2.88 | 2.25 × 10−4 | 2.75 × 101 | 8.05 × 101 | 7.61 × 10−3 | 1.29 × 10−4
F7 | Std | 2.31 × 10−3 | 3.48 × 10−4 | 9.10 × 10−3 | 1.18 × 10−3 | 8.01 × 10−4 | 5.41 × 10−3 | 5.82 × 10−1 | 1.42 × 10−4 | 4.66 × 101 | 4.50 × 101 | 2.68 × 10−3 | 1.27 × 10−4
F7 | rank | 6.5 | 2.5 | 9.0 | 4.5 | 4.5 | 7.0 | 10.0 | 2.5 | 11.5 | 11.5 | 7.5 | 1.0
F8 | Ave | −1.61 × 104 | −4.19 × 104 | −1.81 × 104 | −8.23 × 103 | −2.59 × 104 | −3.63 × 104 | −2.16 × 104 | −4.18 × 104 | −1.53 × 104 | −2.22 × 104 | −1.70 × 104 | −4.19 × 104
F8 | Std | 2.37 × 103 | 3.94 × 101 | 1.34 × 102 | 3.31 × 103 | 1.29 × 103 | 5.42 × 103 | 1.86 × 103 | 1.51 × 102 | 2.27 × 103 | 1.53 × 103 | 1.49 × 103 | 3.66 × 10−1
F8 | rank | 10.0 | 1.75 | 5.5 | 11.5 | 5.0 | 8.0 | 7.5 | 3.0 | 10.0 | 6.5 | 7.5 | 1.25
F9 | Ave | 9.74 | 0 | 1.16 × 101 | 0 | 0 | 7.58 × 10−15 | 2.41 × 102 | 8.79 | 9.60 × 102 | 7.60 × 102 | 1.04 × 101 | 0
F9 | Std | 7.14 | 0 | 1.09 × 101 | 0 | 0 | 2.88 × 10−14 | 4.13 × 101 | 2.28 × 101 | 2.56 | 6.98 × 101 | 7.84 | 0
F9 | rank | 7.0 | 2.5 | 9.0 | 2.5 | 2.5 | 5.0 | 10.5 | 8.0 | 9.0 | 11.5 | 8.0 | 2.5
F10 | Ave | 1.14 × 10−7 | 8.88 × 10−16 | 2.00 × 101 | 4.78 × 10−14 | 3.59 × 10−14 | 4.56 × 10−15 | 1.04 × 101 | 4.44 × 10−15 | 9.65 | 1.83 × 101 | 1.37 × 10−7 | 8.88 × 10−16
F10 | Std | 4.83 × 10−8 | 0 | 1.16 × 10−2 | 7.66 × 10−15 | 4.82 × 10−15 | 2.55 × 10−15 | 1.03 | 0 | 2.56 | 1.08 | 5.72 × 10−8 | 0
F10 | rank | 7.0 | 1.75 | 10.5 | 6.0 | 5.0 | 4.0 | 10.5 | 2.5 | 10.5 | 11.0 | 8.0 | 1.75
F11 | Ave | 2.96 × 10−3 | 0 | 1.98 × 10−1 | 0 | 2.55 × 10−4 | 0 | 1.54 × 101 | 0 | 3.66 × 101 | 2.75 × 102 | 4.52 × 10−3 | 0
F11 | Std | 8.15 × 10−3 | 0 | 1.94 × 10−1 | 0 | 1.40 × 10−3 | 0 | 3.84 | 0 | 1.97 × 101 | 6.90 × 101 | 9.46 × 10−3 | 0
F11 | rank | 7.0 | 3.0 | 9.0 | 3.0 | 6.0 | 3.0 | 10.0 | 3.0 | 11.0 | 12.0 | 8.0 | 3.0
F12 | Ave | 2.88 × 10−1 | 4.40 × 10−6 | 1.18 | 5.90 × 10−1 | 4.30 × 10−2 | 4.76 × 10−2 | 3.60 × 101 | 9.04 × 10−2 | 4.38 × 102 | 2.35 × 107 | 3.03 × 10−1 | 1.91 × 10−5
F12 | Std | 5.93 × 10−2 | 3.93 × 10−6 | 2.75 × 10−1 | 8.80 × 10−2 | 1.18 × 10−2 | 1.95 × 10−2 | 1.08 × 101 | 2.62 × 10−1 | 2.15 × 103 | 6.40 × 107 | 6.40 × 10−2 | 3.53 × 10−5
F12 | rank | 5.5 | 1.0 | 9.0 | 7.5 | 3.0 | 4.0 | 10.0 | 6.5 | 11.0 | 12.0 | 6.5 | 2.0
F13 | Ave | 6.75 | 1.77 × 10−4 | 9.77 | 8.31 | 6.08 | 3.02 | 5.49 × 103 | 1.04 | 6.72 × 104 | 7.71 × 107 | 6.77 | 1.49 × 10−4
F13 | Std | 3.43 × 10−1 | 2.36 × 10−4 | 1.05 | 2.79 × 10−1 | 1.02 | 9.83 × 10−1 | 9.48 × 103 | 1.96 | 1.35 × 105 | 1.48 × 108 | 5.04 × 10−1 | 2.99 × 10−4
F13 | rank | 5.0 | 1.5 | 8.5 | 5.5 | 6.0 | 5.0 | 10.0 | 6.0 | 11.0 | 12.0 | 6.0 | 1.5
Total Rank | | 86.75 | 26 | 115.5 | 75.75 | 60.5 | 76.5 | 125 | 54 | 137 | 132.5 | 88.75 | 20.0
Final Rank | | 7 | 2 | 9 | 5 | 4 | 6 | 10 | 3 | 12 | 11 | 8 | 1
Table 5. Results and comparison of different algorithms on 13 benchmark functions with Dim = 500. The best results of the experiments are shown in bold.
F(x) | Item | GWO | HHO | ChoA | GJO | EO | WOA | SSA | SO | PSO | MPSO | SOGWO | LSGJO
F1 | Ave | 1.66 × 10−3 | 3.29 × 10−94 | 5.42 × 102 | 6.77 × 10−13 | 2.24 × 10−22 | 1.59 × 10−65 | 9.59 × 104 | 2.66 × 10−71 | 3.87 × 104 | 7.59 × 105 | 2.33 × 10−3 | 0
F1 | Std | 4.60 × 10−4 | 1.60 × 10−93 | 2.78 × 102 | 4.56 × 10−13 | 3.19 × 10−22 | 8.71 × 10−65 | 6.68 × 103 | 4.38 × 10−71 | 1.61 × 104 | 3.44 × 104 | 7.75 × 10−4 | 0
F1 | rank | 7.0 | 2.0 | 9.0 | 6.0 | 5.0 | 4.0 | 10.5 | 3.0 | 10.5 | 12.0 | 8.0 | 1.0
F2 | Ave | 1.12 × 10−2 | 3.66 × 10−48 | 7.76 | 6.47 × 10−9 | 7.65 × 10−14 | 2.15 × 10−48 | 5.40 × 102 | 1.02 × 10−31 | 7.69 × 102 | 2.70 × 10126 | 1.14 × 10−2 | 0
F2 | Std | 1.83 × 10−3 | 1.70 × 10−47 | 2.06 | 2.29 × 10−9 | 3.52 × 10−14 | 9.16 × 10−48 | 1.65 × 101 | 2.27 × 10−31 | 1.37 × 102 | 1.48 × 10127 | 1.54 × 10−3 | 0
F2 | rank | 7.5 | 3.0 | 9.0 | 6.0 | 5.0 | 2.0 | 10.0 | 4.0 | 11.0 | 12.0 | 7.5 | 1.0
F3 | Ave | 3.21 × 105 | 2.56 × 10−36 | 4.18 × 106 | 3.27 × 104 | 3.94 × 104 | 2.75 × 107 | 1.43 × 106 | 1.84 × 10−15 | 2.78 × 106 | 4.54 × 106 | 3.32 × 105 | 0
F3 | Std | 7.49 × 104 | 1.40 × 10−35 | 1.80 × 106 | 2.62 × 104 | 5.81 × 104 | 7.48 × 106 | 6.54 × 105 | 1.01 × 10−14 | 1.50 × 106 | 8.31 × 105 | 7.66 × 104 | 0
F3 | rank | 6.0 | 2.0 | 10.5 | 4.0 | 5.0 | 12.0 | 8.0 | 3.0 | 9.5 | 10.0 | 7.0 | 1.0
F4 | Ave | 6.56 × 101 | 1.66 × 10−47 | 9.69 × 101 | 8.21 × 101 | 7.16 × 101 | 8.36 × 101 | 4.02 × 101 | 7.32 × 10−34 | 3.56 × 101 | 9.92 × 101 | 6.57 × 101 | 0
F4 | Std | 7.16 | 8.38 × 10−47 | 1.65 | 3.99 | 1.61 × 101 | 1.77 × 101 | 2.65 | 1.04 × 10−33 | 4.80 | 2.26 × 10−1 | 5.17 | 0
F4 | rank | 8.0 | 2.0 | 8.0 | 8.0 | 9.5 | 11.0 | 5.5 | 3.0 | 6.0 | 8.0 | 8.0 | 1.0
F5 | Ave | 4.98 × 102 | 2.82 × 10−1 | 2.47 × 105 | 4.98 × 102 | 4.98 × 102 | 4.96 × 102 | 3.77 × 107 | 4.08 × 102 | 4.07 × 107 | 2.53 × 109 | 4.98 × 102 | 1.81 × 10−1
F5 | Std | 2.50 × 10−1 | 3.48 × 10−1 | 3.49 × 105 | 1.64 × 10−1 | 1.29 × 10−1 | 3.38 × 10−1 | 4.22 × 106 | 1.80 × 102 | 4.38 × 107 | 1.91 × 108 | 4.20 × 10−1 | 3.39 × 10−1
F5 | rank | 4.75 | 4.0 | 9.0 | 4.25 | 3.75 | 4.0 | 10.0 | 5.5 | 11.0 | 12.0 | 6.75 | 3.0
F6 | Ave | 9.15 × 101 | 1.96 × 10−3 | 5.82 × 102 | 1.10 × 102 | 8.71 × 101 | 3.45 × 101 | 9.29 × 104 | 6.24 × 101 | 3.85 × 104 | 7.64 × 105 | 9.17 × 101 | 9.81 × 10−3
F6 | Std | 2.13 | 3.46 × 10−3 | 1.62 × 102 | 1.22 | 1.73 | 6.35 | 5.96 × 103 | 5.35 × 101 | 1.52 × 104 | 3.00 × 104 | 2.28 | 1.30 × 10−2
F6 | rank | 5.5 | 1.0 | 9.0 | 5.5 | 4.5 | 5.0 | 10.5 | 6.0 | 10.5 | 12.0 | 6.5 | 2.0
F7 | Ave | 4.73 × 10−2 | 2.10 × 10−4 | 2.42 | 6.23 × 10−3 | 3.97 × 10−3 | 5.48 × 10−3 | 2.69 × 102 | 1.69 × 10−4 | 2.80 × 103 | 1.86 × 104 | 4.51 × 10−2 | 1.59 × 10−4
F7 | Std | 1.13 × 10−2 | 2.76 × 10−4 | 1.65 | 3.74 × 10−3 | 1.43 × 10−3 | 6.26 × 10−3 | 3.71 × 101 | 1.55 × 10−4 | 2.11 × 103 | 1.66 × 103 | 1.40 × 10−2 | 1.29 × 10−4
F7 | rank | 7.5 | 3.0 | 9.0 | 5.5 | 4.0 | 5.5 | 10.0 | 2.0 | 11.5 | 11.5 | 7.5 | 1.5
F8 | Ave | −5.68 × 104 | −2.09 × 105 | −8.47 × 104 | −2.35 × 104 | −7.55 × 104 | −1.81 × 105 | −5.97 × 104 | −2.08 × 105 | −3.67 × 104 | −6.06 × 104 | −5.70 × 104 | −2.09 × 105
F8 | Std | 3.68 × 103 | 2.72 × 103 | 6.33 × 102 | 1.34 × 104 | 4.18 × 103 | 2.86 × 104 | 3.85 × 103 | 1.40 × 103 | 5.35 × 103 | 3.70 × 103 | 8.69 × 103 | 5.20
F8 | rank | 7.5 | 2.75 | 3.5 | 11.5 | 7.0 | 8.0 | 7.5 | 3.0 | 10.0 | 6.5 | 9.5 | 1.25
F9 | Ave | 7.82 × 101 | 0 | 2.32 × 102 | 5.94 × 10−12 | 9.09 × 10−14 | 3.03 × 10−14 | 3.20 × 103 | 5.52 | 4.78 × 103 | 5.97 × 103 | 7.40 × 101 | 0
F9 | Std | 2.23 × 101 | 0 | 5.80 × 101 | 1.49 × 10−11 | 2.78 × 10−13 | 1.66 × 10−13 | 9.92 × 101 | 1.95 × 101 | 5.25 × 102 | 1.69 × 102 | 1.88 × 101 | 0
F9 | rank | 8.0 | 1.5 | 9.0 | 5.0 | 4.0 | 3.0 | 10.0 | 6.5 | 11.5 | 11.5 | 6.5 | 1.5
F10 | Ave | 1.88 × 10−3 | 8.88 × 10−16 | 2.01 × 101 | 3.31 × 10−8 | 5.85 × 10−13 | 4.20 × 10−15 | 1.42 × 101 | 4.68 × 10−15 | 1.36 × 101 | 2.02 × 101 | 2.14 × 10−3 | 8.88 × 10−16
F10 | Std | 3.24 × 10−4 | 0 | 1.25 × 10−2 | 1.06 × 10−8 | 3.23 × 10−13 | 2.79 × 10−15 | 3.04 × 10−1 | 9.01 × 10−16 | 4.02 | 5.71 × 10−2 | 3.37 × 10−4 | 0
F10 | rank | 7.0 | 1.5 | 10.0 | 6.0 | 5.0 | 3.5 | 10.5 | 3.5 | 10.5 | 11.0 | 8.0 | 1.5
F11 | Ave | 6.51 × 10−3 | 0 | 4.67 | 2.47 × 10−13 | 9.62 × 10−17 | 1.05 × 10−2 | 8.45 × 102 | 0 | 3.41 × 102 | 6.91 × 103 | 5.30 × 10−2 | 0
F11 | Std | 2.42 × 10−2 | 0 | 1.56 | 4.74 × 10−13 | 3.84 × 10−17 | 5.73 × 10−2 | 6.80 × 101 | 0 | 1.27 × 102 | 3.02 × 102 | 6.58 × 10−2 | 0
F11 | rank | 6.0 | 2.0 | 9.0 | 5.0 | 4.0 | 7.0 | 10.5 | 2.0 | 10.5 | 12.0 | 8.0 | 2.0
F12 | Ave | 7.52 × 10−1 | 2.00 × 10−6 | 6.84 × 104 | 9.32 × 10−1 | 5.84 × 10−1 | 1.05 × 10−1 | 1.41 × 106 | 1.47 × 10−1 | 2.42 × 105 | 4.98 × 109 | 7.60 × 10−1 | 2.02 × 10−5
F12 | Std | 4.46 × 10−2 | 3.43 × 10−6 | 2.20 × 105 | 2.31 × 10−2 | 2.42 × 10−2 | 4.44 × 10−2 | 7.36 × 105 | 3.56 × 10−1 | 4.28 × 105 | 4.32 × 108 | 4.53 × 10−2 | 3.59 × 10−5
F12 | rank | 6.0 | 1.0 | 9.0 | 5.5 | 4.5 | 4.0 | 11.0 | 6.0 | 10.0 | 12.0 | 7.0 | 2.0
F13 | Ave | 5.03 × 101 | 9.07 × 10−4 | 2.78 × 105 | 4.80 × 101 | 4.92 × 101 | 1.79 × 101 | 3.80 × 107 | 6.19 | 5.78 × 106 | 1.04 × 1010 | 5.10 × 101 | 4.19 × 10−4
F13 | Std | 1.58 | 1.57 × 10−3 | 7.97 × 105 | 4.95 × 10−1 | 2.54 × 10−1 | 7.17 | 1.03 × 107 | 1.25 × 101 | 7.57 × 106 | 8.90 × 108 | 1.60 | 7.61 × 10−4
F13 | rank | 6.0 | 2.0 | 9.0 | 4.5 | 4.5 | 5.5 | 11.0 | 5.5 | 10.0 | 12.0 | 7.0 | 1.0
Total Rank | | 86.75 | 27.75 | 104 | 76.75 | 65.75 | 74 | 125 | 51 | 132.5 | 142.5 | 97.25 | 19.75
Final Rank | | 6 | 2 | 8 | 9 | 4 | 5 | 10 | 3 | 12 | 11 | 7 | 1
Table 6. Statistical analysis results.
F(x) | Dim | GWO | HHO | ChoA | GJO | EO | WOA | SSA | SO | PSO | MPSO | SOGWO | Total
F1 | 30 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F1 | 100 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F1 | 500 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F2 | 30 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F2 | 100 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F2 | 500 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F3 | 30 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F3 | 100 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F3 | 500 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F4 | 30 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F4 | 100 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F4 | 500 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 10
F5 | 30 | 3.02 × 10−11 | 3.78 × 10−2 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 8.15 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F5 | 100 | 3.02 × 10−11 | 7.84 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.69 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 9
F5 | 500 | 3.02 × 10−11 | 1.33 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.69 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 9
F6 | 30 | 1.61 × 10−10 | 2.25 × 10−4 | 3.02 × 10−11 | 3.02 × 10−11 | 2.23 × 10−9 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.50 × 10−11 | 1.78 × 10−10 | 10
F6 | 100 | 3.02 × 10−11 | 3.85 × 10−3 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.08 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F6 | 500 | 3.02 × 10−11 | 4.71 × 10−4 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.78 × 10−10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F7 | 30 | 3.02 × 10−11 | 3.04 × 10−1 | 6.12 × 10−10 | 8.66 × 10−5 | 3.69 × 10−11 | 2.38 × 10−7 | 3.02 × 10−11 | 3.78 × 10−2 | 4.11 × 10−7 | 3.02 × 10−11 | 4.98 × 10−11 | 9
F7 | 100 | 3.02 × 10−11 | 2.90 × 10−1 | 4.08 × 10−11 | 4.62 × 10−10 | 3.02 × 10−11 | 1.43 × 10−8 | 3.02 × 10−11 | 1.38 × 10−2 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 9
F7 | 500 | 3.02 × 10−11 | 7.96 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.17 × 10−9 | 3.02 × 10−11 | 9.82 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 8
F8 | 30 | 3.02 × 10−11 | 2.16 × 10−3 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11 | 3.02 × 10−11 | 1.09 × 10−10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F8 | 100 | 3.02 × 10−11 | 3.34 × 10−3 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F8 | 500 | 3.02 × 10−11 | 3.03 × 10−2 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.41 × 10−9 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F9 | 30 | 4.43 × 10−12 | NaN | 1.21 × 10−12 | NaN | NaN | NaN | 1.21 × 10−12 | 8.87 × 10−7 | 1.21 × 10−12 | 1.21 × 10−12 | 4.47 × 10−12 | 6
F9 | 100 | 1.21 × 10−12 | NaN | 1.21 × 10−12 | NaN | NaN | 1.61 × 10−1 | 1.21 × 10−12 | 3.45 × 10−7 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 6
F9 | 500 | 1.21 × 10−12 | NaN | 1.21 × 10−12 | 9.51 × 10−13 | 8.14 × 10−2 | 3.34 × 10−1 | 1.21 × 10−12 | 2.16 × 10−2 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 7
F10 | 30 | 1.13 × 10−12 | NaN | 1.21 × 10−12 | 2.43 × 10−13 | 4.16 × 10−14 | 1.16 × 10−8 | 1.21 × 10−12 | 1.20 × 10−13 | 1.21 × 10−12 | 1.21 × 10−12 | 1.10 × 10−12 | 9
F10 | 100 | 1.21 × 10−12 | NaN | 1.19 × 10−12 | 9.98 × 10−13 | 5.94 × 10−13 | 3.86 × 10−9 | 1.21 × 10−12 | 1.69 × 10−14 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 9
F10 | 500 | 1.21 × 10−12 | NaN | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.05 × 10−7 | 1.21 × 10−12 | 4.16 × 10−14 | 1.18 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 9
F11 | 30 | 6.62 × 10−4 | NaN | 1.21 × 10−12 | NaN | NaN | 3.34 × 10−1 | 1.21 × 10−12 | 1.10 × 10−2 | 1.21 × 10−12 | 1.21 × 10−12 | 2.16 × 10−2 | 6
F11 | 100 | 1.21 × 10−12 | NaN | 1.21 × 10−12 | NaN | 3.34 × 10−1 | NaN | 1.21 × 10−12 | NaN | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 5
F11 | 500 | 1.21 × 10−12 | NaN | 1.21 × 10−12 | 1.21 × 10−12 | 1.97 × 10−11 | 3.34 × 10−1 | 1.21 × 10−12 | NaN | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 8
F12 | 30 | 3.02 × 10−11 | 3.50 × 10−3 | 3.02 × 10−11 | 3.02 × 10−11 | 8.35 × 10−8 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F12 | 100 | 3.02 × 10−11 | 5.01 × 10−2 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 9
F12 | 500 | 3.02 × 10−11 | 1.34 × 10−5 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 5.57 × 10−10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F13 | 30 | 3.02 × 10−11 | 7.51 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 1.76 × 10−2 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F13 | 100 | 3.02 × 10−11 | 1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F13 | 500 | 3.02 × 10−11 | 4.21 × 10−2 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 10
F14 | 2 | 6.20 × 10−1 | 2.38 × 10−3 | 1.84 × 10−2 | 4.84 × 10−2 | 1.52 × 10−11 | 5.11 × 10−1 | 4.65 × 10−5 | 2.53 × 10−7 | 4.42 × 10−6 | 5.14 × 10−12 | 1.49 × 10−1 | 8
F15 | 4 | 9.93 × 10−2 | 1.70 × 10−2 | 3.02 × 10−11 | 3.56 × 10−4 | 1.68 × 10−4 | 2.75 × 10−3 | 2.87 × 10−10 | 1.37 × 10−1 | 6.70 × 10−11 | 1.38 × 10−6 | 1.22 × 10−1 | 7
F16 | 2 | 3.02 × 10−11 | 3.02 × 10−11 | 8.35 × 10−8 | 3.02 × 10−11 | 1.25 × 10−11 | 3.02 × 10−11 | 3.01 × 10−11 | 1.25 × 10−11 | 4.03 × 10−3 | 1.34 × 10−11 | 3.02 × 10−11 | 10
F17 | 2 | 3.02 × 10−11 | 8.15 × 10−11 | 9.52 × 10−4 | 1.56 × 10−8 | 1.21 × 10−12 | 3.08 × 10−8 | 2.75 × 10−11 | 1.21 × 10−12 | 1.69 × 10−9 | 1.21 × 10−12 | 2.15 × 10−10 | 10
F18 | 2 | 7.77 × 10−9 | 3.02 × 10−11 | 1.11 × 10−6 | 3.34 × 10−11 | 2.49 × 10−11 | 3.82 × 10−9 | 3.02 × 10−11 | 1.04 × 10−7 | 3.83 × 10−6 | 1.77 × 10−11 | 1.20 × 10−8 | 10
F19 | 3 | 2.60 × 10−5 | 1.27 × 10−2 | 6.36 × 10−5 | 1.58 × 10−1 | 1.34 × 10−11 | 4.64 × 10−1 | 3.02 × 10−11 | 1.25 × 10−11 | 9.03 × 10−4 | 2.36 × 10−12 | 5.86 × 10−6 | 9
F20 | 6 | 3.99 × 10−4 | 1.43 × 10−5 | 3.82 × 10−10 | 3.03 × 10−2 | 4.51 × 10−6 | 5.08 × 10−3 | 1.33 × 10−2 | 6.86 × 10−10 | 3.87 × 10−1 | 4.78 × 10−6 | 6.67 × 10−3 | 9
F21 | 4 | 2.84 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 1.55 × 10−9 | 5.64 × 10−4 | 5.19 × 10−7 | 6.63 × 10−1 | 8.74 × 10−2 | 4.86 × 10−9 | 2.68 × 10−2 | 5.75 × 10−2 | 7
F22 | 4 | 2.84 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 4.44 × 10−7 | 1.83 × 10−3 | 5.97 × 10−9 | 1.95 × 10−3 | 4.13 × 10−3 | 6.23 × 10−5 | 6.60 × 10−1 | 4.29 × 10−1 | 7
F23 | 4 | 8.88 × 10−1 | 3.02 × 10−11 | 3.02 × 10−11 | 1.25 × 10−7 | 8.14 × 10−6 | 5.97 × 10−9 | 9.51 × 10−6 | 7.61 × 10−3 | 2.00 × 10−9 | 1.99 × 10−2 | 1.33 × 10−1 | 8
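The entries of Table 6 are pairwise p-values comparing LSGJO against each competitor over the 30 independent runs, with values below 0.05 indicating a statistically significant difference; NaN marks comparisons where both algorithms presumably return identical results in every run, so no test statistic can be formed. The recurring 3.02 × 10−11 is the saturated value a rank-based test yields for two completely separated samples of 30 runs. A sketch assuming the Wilcoxon rank-sum test (the paper's choice of test is not restated in this excerpt, and the samples below are invented placeholders):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical final-fitness samples from 30 independent runs of two algorithms.
lsgjo_runs = rng.normal(loc=1e-12, scale=1e-13, size=30)
rival_runs = rng.normal(loc=1e-6, scale=1e-7, size=30)

stat, p = ranksums(lsgjo_runs, rival_runs)
print(p < 0.05, p)  # True; completely separated 30-vs-30 samples drive p
                    # down to ~3e-11, the saturated value seen throughout Table 6
```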
Table 7. Comparison results of speed reducer design problem. The best results of the experiments are shown in bold.
Algorithm | y1 | y2 | y3 | y4 | y5 | y6 | y7 | Optimum Value
GWO | 3.5023 | 0.7000 | 17.0000 | 7.4808 | 7.7251 | 3.3631 | 5.2872 | 3000.8341
HHO | 3.5026 | 0.7000 | 17.0000 | 8.0413 | 8.0301 | 3.4989 | 5.2868 | 3049.1657
ChoA | 3.6000 | 0.7000 | 17.0000 | 7.3000 | 8.3000 | 3.4427 | 5.3656 | 3121.8909
GJO | 3.5584 | 0.7002 | 17.0000 | 7.4252 | 8.0148 | 3.3849 | 5.2873 | 3035.4171
EO | 3.5000 | 0.7000 | 17.0000 | 7.3000 | 8.3000 | 3.3502 | 5.2869 | 3007.4366
WOA | 3.5000 | 0.7000 | 17.0000 | 7.9128 | 7.9308 | 3.5822 | 5.3606 | 3116.4355
SO | 3.5000 | 0.7000 | 17.0000 | 7.8849 | 7.7153 | 3.3519 | 5.2867 | 3000.2703
MPSO | 3.5000 | 0.7000 | 17.0000 | 7.3000 | 8.3000 | 3.3502 | 5.2869 | 3046.7137
SOGWO | 3.5067 | 0.7000 | 17.0000 | 7.3000 | 7.9316 | 3.3534 | 5.2930 | 3006.7350
LSGJO | 3.5000 | 0.7000 | 17.0000 | 7.3000 | 7.7153 | 3.3502 | 5.2867 | 2994.4711
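The objective behind Table 7 is the weight of the speed reducer, a standard constrained benchmark. As a check, here is a sketch of the weight formula commonly used in the literature (assumed here, since the paper states the formulation earlier rather than in this excerpt) evaluated at LSGJO's reported design; the small residual gap to 2994.4711 is explained by the four-decimal rounding of the printed variables:

```python
def speed_reducer_weight(y):
    # Common speed reducer weight objective; y = (y1, ..., y7) as in Table 7.
    y1, y2, y3, y4, y5, y6, y7 = y
    return (0.7854 * y1 * y2**2 * (3.3333 * y3**2 + 14.9334 * y3 - 43.0934)
            - 1.508 * y1 * (y6**2 + y7**2)
            + 7.4777 * (y6**3 + y7**3)
            + 0.7854 * (y4 * y6**2 + y5 * y7**2))

# LSGJO's reported design from Table 7 -> ~2994.5, vs. the reported 2994.4711.
print(speed_reducer_weight((3.5000, 0.7000, 17.0, 7.3000, 7.7153, 3.3502, 5.2867)))
```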
Table 8. Comparison results of gear train design problem. The best results of the experiments are shown in bold.
Algorithm | y1 | y2 | y3 | y4 | Optimum Value
GWO | 4.03 × 101 | 2.46 × 101 | 1.20 × 101 | 5.08 × 101 | 1.18 × 10−13
GJO | 5.00 × 101 | 1.71 × 101 | 1.26 × 101 | 2.98 × 101 | 1.52 × 10−13
PSO | 5.13 × 101 | 2.10 × 101 | 1.48 × 101 | 4.78 × 101 | 3.08 × 10−4
BA | 5.75 × 101 | 1.95 × 101 | 1.86 × 101 | 4.37 × 101 | 1.53 × 10−11
ACO | 5.15 × 101 | 2.14 × 101 | 1.58 × 101 | 4.73 × 101 | 2.87 × 10−5
SA | 5.13 × 101 | 2.13 × 101 | 1.50 × 101 | 4.74 × 101 | 1.71 × 10−4
FPA | 5.12 × 101 | 2.25 × 101 | 1.80 × 101 | 5.59 × 101 | 4.83 × 10−11
DA | 5.24 × 101 | 1.70 × 101 | 2.30 × 101 | 5.17 × 101 | 3.02 × 10−11
MFO | 4.42 × 101 | 1.88 × 101 | 2.11 × 101 | 5.70 × 101 | 1.44 × 10−14
PBO | 5.01 × 101 | 2.33 × 101 | 1.48 × 101 | 4.79 × 101 | 1.37 × 10−15
FA | 5.01 × 101 | 2.44 × 101 | 1.40 × 101 | 4.64 × 101 | 6.52 × 10−13
SOGWO | 4.81 × 101 | 2.99 × 101 | 1.38 × 101 | 5.94 × 101 | 2.35 × 10−11
EO | 4.49 × 101 | 1.28 × 101 | 2.93 × 101 | 5.79 × 101 | 5.76 × 10−14
LSGJO | 3.17 × 101 | 1.20 × 101 | 1.20 × 101 | 3.15 × 101 | 2.63 × 10−19
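Table 8's gear train problem minimizes the squared deviation of the realized gear ratio from the target 1/6.931. Below is a sketch of the usual objective (the variable ordering y2·y3/(y1·y4) is assumed to match the paper's). Because the table prints the teeth counts to only three significant figures, re-evaluation recovers the order of magnitude rather than the exact entry; PSO's row is the easiest to confirm:

```python
def gear_train_error(y):
    # Gear train design objective: squared deviation of the realized gear
    # ratio from the target 1/6.931; y1..y4 are the (integer) teeth counts.
    y1, y2, y3, y4 = y
    return (1.0 / 6.931 - (y2 * y3) / (y1 * y4))**2

# PSO's row from Table 8 -> ~3.1e-4, consistent with the reported 3.08e-4.
print(gear_train_error((51.3, 21.0, 14.8, 47.8)))
```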
Table 9. Comparison results of multiple-disk clutch design problem. The best results of the experiments are shown in bold.
Algorithm | y1 | y2 | y3 | y4 | y5 | Optimum Value
GWO | 69.9898148 | 90.0000000 | 1.0000000 | 565.6572929 | 2.0000000 | 0.2353473
GJO | 69.9906674 | 90.0000000 | 1.0000000 | 524.8143417 | 2.0000000 | 0.2353385
ChoA | 69.9657899 | 90.0000000 | 1.0000000 | 61.9191980 | 2.0000000 | 0.2355945
ALO | 69.9999996 | 90.0000000 | 1.0000000 | 246.9492771 | 2.0000000 | 0.2352425
MVO | 69.9880862 | 21.4000000 | 15.8000000 | 912.4722915 | 2.0000000 | 0.2353651
SCA | 69.2616541 | 90.0000000 | 1.0000000 | 57.9068873 | 2.0000000 | 0.2428013
EO | 70.0000000 | 90.0000000 | 1.0000000 | 45.1874349 | 2.0000000 | 0.2352425
SOGWO | 69.9989554 | 90.0000000 | 1.0000000 | 525.2165780 | 2.0000000 | 0.2352532
MPSO | 70.0000000 | 90.0000000 | 1.0000000 | 996.1753765 | 2.0000000 | 0.2352425
LSGJO | 69.9999928 | 90.0000000 | 1.0000000 | 945.1761801 | 2.0000000 | 0.2352425
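For Table 9's multiple-disk clutch brake, the objective is the friction-disk mass. Under the common formulation, mass = π(y2² − y1²)·y3·(y5 + 1)·ρ with material density ρ = 7.8 × 10−6 kg/mm³, and with y1..y5 read as inner radius, outer radius, disk thickness, actuating force, and number of friction surfaces (assumptions about notation, not restated in this excerpt), EO's row reproduces the reported optimum, since the force y4 enters only the constraints:

```python
import math

RHO = 0.0000078  # material density in kg/mm^3 (assumed, standard for this benchmark)

def clutch_brake_mass(y):
    # Multiple-disk clutch brake objective: friction-disk mass.
    # y = (inner radius ri, outer radius ro, thickness t, force F, surfaces Z);
    # F does not enter the mass, only the constraints (not modeled here).
    ri, ro, t, F, Z = y
    return math.pi * (ro**2 - ri**2) * t * (Z + 1) * RHO

# EO's row from Table 9 -> ~0.2352424, matching the reported 0.2352425.
print(clutch_brake_mass((70.0, 90.0, 1.0, 45.1874349, 2.0)))
```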