Article

A Multi-Strategy Crazy Sparrow Search Algorithm for the Global Optimization Problem

College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(18), 3967; https://doi.org/10.3390/electronics12183967
Submission received: 25 August 2023 / Revised: 13 September 2023 / Accepted: 18 September 2023 / Published: 20 September 2023

Abstract
A multi-strategy crazy sparrow search algorithm (LTMSSA) based on logistic-tent hybrid chaotic maps is proposed in this research to address the poor population diversity, slow convergence, and tendency to fall into local optima of the sparrow search algorithm (SSA). Firstly, the LTMSSA employs an elite chaotic backward learning strategy and an improved discoverer-follower ratio factor to improve the population’s quality and diversity. Secondly, the LTMSSA updates the positions of discoverers and followers by the crazy operator and the Lévy flight strategy to expand the selection range of target following individuals. Finally, during the algorithm’s optimization search, the LTMSSA introduces tent chaotic perturbation and Cauchy mutation strategies to improve the population’s ability to jump out of local optima. Test functions of different types and dimensions are used as performance benchmarks to compare the LTMSSA with SSA variants and other algorithms. The simulation results show that the LTMSSA can escape local optima, converge faster, and achieve higher accuracy; its overall performance is better than that of the other seven algorithms, and it finds smaller optimal values than the other algorithms in the welded beam and reducer designs. The results confirm that the LTMSSA is an effective aid for computationally complex practical tasks, provides high-quality solutions, and outperforms other algorithms.

1. Introduction

Optimization refers to finding the best solution that achieves an objective while satisfying all constraints. Traditional optimization methods, such as the simplex method and gradient descent [1], are fast and have a mature mathematical foundation, but their applicability depends strongly on feasibility requirements, and their solution capability is limited for optimization problems with unknown mathematical characteristics. The metaheuristic algorithm is an intelligent iterative search-based optimization method combining local search strategies and stochastic algorithms [1]; it places no constraints on the nature of the problem, has strong search capability and wide applicability, and can solve practical optimization problems efficiently. Metaheuristic algorithms include the gravitational search algorithm (GSA) [2], the fireworks algorithm (FA) [3], the genetic algorithm (GA) [4], the harmony search algorithm (HS) [5], the seagull optimization algorithm (SOA) [6], Harris hawks optimization (HHO) [7], and so on. Based on simulated evolutionary theory, these evolutionary computational methods are self-organizing and efficient at finding the optimum, making them effective at solving intricate, large-scale optimization problems. Swarm intelligence algorithms are widely used today and yield significant benefits in many fields. Ever since the pioneering work of Beni and Wang, who introduced the notion of swarm intelligence (SI) [8], in-depth research in this area has grown steadily, and swarm intelligence algorithms have achieved rich results in both theoretical systems and practical applications.
Particle swarm optimization (PSO) [9,10] and ant colony optimization (ACO) [11] are two representative optimization techniques. These search algorithms are founded on the mechanism of information exchange between individuals in a population as it evolves, mimicking a biological system. PSO has the advantages of good global convergence and ease of implementation, but it depends strongly on its parameters; for this reason, some scholars have introduced tabu search mechanisms [12] and local search strategies [13,14] to enhance the PSO algorithm’s efficiency. Inspired by ant foraging, ACO provides rapid solution times and computational simplicity. The competitiveness and global search capability demonstrated by algorithms like PSO and ACO have increasingly drawn researchers to swarm intelligence optimization. Yang et al. proposed the bat algorithm (BA) [15], which simulates bat echolocation. The grey wolf optimizer (GWO) is widely recognized for mimicking the hierarchical organization and predatory behavior of grey wolves [16]. In addition, several swarm intelligence optimization algorithms have emerged in the past few years: the bacterial foraging algorithm (BFA) [17], the artificial bee colony (ABC) algorithm [18,19], pigeon-inspired optimization (PIO) [20], the firefly algorithm (FA) [21,22], the cuckoo search (CS) algorithm [23,24], the whale optimization algorithm (WOA) [25], the fruit fly optimization algorithm (FOA) [26], the krill herd (KH) [27], the dragonfly algorithm (DA) [28], the monkey algorithm (MA) [29], the beetle antennae search algorithm (BAS) [30], and so on.
The sparrow search algorithm (SSA) was put forward by Xue et al. as a recent bionic stochastic search technique [31], originating from an in-depth study of the social and behavioral characteristics of sparrow groups in predation and anti-predation. The algorithm has been widely applied and has obtained good results in various practical optimization problems, such as image segmentation [32], trajectory planning [33], workshop scheduling [34], power generation prediction [35], and other fields. However, when solving complex multidimensional optimization problems, the sparrow population loses diversity quickly and is prone to premature convergence. Determining how to effectively overcome these shortcomings of the SSA, enhance its performance on practical optimization problems, and thus widen its applicability is of great theoretical research value.

1.1. Literature Review

Researchers have made many improvements to strengthen the effectiveness of the SSA; this paper discusses two categories of such work.

1.1.1. Improving the Search Mechanism of the Algorithm

Tang et al. presented a sine cosine strategy to adjust the positions of newly added individuals and used a linearly decreasing method to control the alert behavior of scouting sparrows [36], balancing the algorithm’s local exploitation and global exploration. Tang et al. also developed a hierarchy and Brownian motion strategy to improve information exchange between individuals and to maintain sparrow diversity when updating positions [37]. Zhang et al. initialized the population with a tent map and introduced a Cauchy mutation strategy to solve the local optimum problem [38], effectively enhancing the algorithm’s search capability. Chen et al. used a chaotic mapping method to determine the initial positions of sparrow populations and introduced Lévy flight and random wandering strategies [39], effectively improving the algorithm’s efficiency and exploration capability. Ou-Yang et al. used k-means cluster analysis to differentiate the sparrow population positions in the SSA [40], reducing perturbation, and applied it to the optimization of an active suspension LQR controller. Ou-Yang et al. also introduced a mirror-opposite learning strategy into the discoverer stage to increase population variety and search flexibility [41]. Mao et al. applied Cauchy mutation and a backward learning strategy to the optimal sparrow individuals to escape local optima [42]. Fu et al. introduced an elite chaotic backward learning mechanism in the population initialization stage and combined it with the randomized following strategy of the chicken swarm algorithm, enhancing the quality and diversity of individuals and thus the search performance of the joiner position update [43]. Duan et al. employed Sobol sequence mapping to construct the initial population and used horizontal and vertical crossover to prevent local optima and sustain sparrow variety [44]. Chen et al. incorporated a spiral exploration strategy into the discoverer search mechanism [45], greatly improving the capacity for global exploration; elite dissimilarity and random inverse hybrid mutation were further employed so that individuals assimilate quickly in the later iterations without premature convergence. Yan et al. made the location distribution of the initial population more uniform by adding good point sets [46]; combining the characteristics of the SSA, a more flexible iterative local search method was introduced, along with a dimension-by-dimension lens-imaging backward learning mechanism that reduces perturbations between dimensions and avoids premature convergence. He et al. proposed an SSA that applies hybrid quantum-behaved mutation to the inferior part of the population, which enabled accurate parameter identification of a superheated steam temperature control system [47]. Wu et al. integrated a competition mechanism into their approach, helping each sparrow locate the optimal position nearest to it [48]; a sine cosine strategy was employed to balance global exploration against targeted local search, and a polynomial mutation strategy was introduced to eliminate local optima.
Liu et al. [49] enhanced the algorithm’s global search capability using t-distribution perturbation, adopted a random regression strategy for the problem of sparrow individuals crossing the boundary, and applied the method to an engineering design problem. Referring to the Sine chaotic search mechanism, Ma et al. improved the initialization of the population and introduced a Lévy flight perturbation mechanism into the SSA to improve spatial search diversity [50].

1.1.2. Integration of Other Algorithms

To enhance the diversity of sparrow populations, Tian et al. devised an algorithm fusing the SSA with arithmetic optimization, representing the population’s individuals as an undirected weighted graph [51]. The length of the Hamiltonian cycle formed by the population’s individuals is computed with an improved circle algorithm, and the ratio of the Hamiltonian cycle lengths of two adjacent generations indicates the population’s convergence trend; a greedy strategy that randomly selects individuals generated within a specific range enables the algorithm to break free of local optima, effectively solving the pressure vessel and butterfly spring problems. Yang et al. incorporated the PSO algorithm into the SSA to enhance convergence speed [52]. Li et al. hybridized a simulated annealing algorithm with the SSA to pull the algorithm out of local optima [53], boosting its global exploration. Using the simplex method, Liu et al. moved weakly adapted individuals after each cycle to enhance algorithm diversity and search capacity [54].

1.2. Research Gaps and Motivations

The preceding analysis shows that the SSA has been applied effectively across many domains, and many scholars have markedly improved the original algorithm’s search optimization capability through iterative updates, hybridization with other algorithms, and parameter modification. Nevertheless, much of this work enhances the algorithm’s capability in only one respect. The algorithm still struggles to balance exploitation of existing information against exploration, and its ability to maintain diversity remains insufficient. For non-convex problems, the challenges of achieving accurate optimization, conducting effective global searches, and avoiding local optima persist and require attention. There remains ample room to improve and advance the SSA.

1.3. Contribution

To further boost optimization performance, this research puts forth a multi-strategy hybrid SSA. Dynamic parameters are integrated into every phase of the algorithm so that their relationships are evaluated comprehensively, significant improvements are made to those dynamic parameters, and an active adaptive adjustment strategy is included; through the mutual influence of these interacting mechanisms, the population as a whole is driven toward better regions and dynamic global optimization is achieved. The primary advancements are as follows:
  • The introduction of the LTMSSA aims to address the issues inherent to the initial algorithm.
  • The LTMSSA increases the population diversity by utilizing logistic-tent hybrid chaotic maps and improves the discoverer-follower scaling factor to make the algorithm more accurate at convergence.
  • The discoverer and follower positions are improved with the crazy operator and the Lévy flight strategy, respectively, and the tent hybrid and Cauchy mutation perturbation strategies are introduced to reconcile the SSA’s local and global search capabilities.
  • The LTMSSA is evaluated for efficacy on 23 standard test functions and compared to different algorithms.
  • The scalability of the LTMSSA in various dimensions is examined and implemented in practical engineering issues involving the design of welded beams and reducers. Results indicate that the enhanced LTMSSA strategy proposed applies to solving optimization-related issues.
This study is structured as follows: Section 2 introduces the concept of the SSA. Section 3 comprehensively outlines and discusses the LTMSSA mechanism and its mathematical model. Section 4 presents the test results for the LTMSSA and several other algorithms on 23 benchmark functions. Section 5 applies the LTMSSA to the practical engineering problems of welded beam and reducer design. Section 6 concludes the paper.

2. Sparrow Search Algorithm

The sparrow search algorithm divides individuals into three categories, namely discoverers, followers, and vigilantes, whose positions represent potential solutions. Throughout the foraging process, the positions of all three are updated to find the ideal food source, which is the best solution.

2.1. Sparrow Search Algorithm Model

The sparrow population can be represented by the following matrix.
$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix} \tag{1}$$
where $d$ denotes the problem dimension and $n$ is the total number of sparrows in the population. The fitness of the sparrow population is calculated as follows.
$$F_X = \begin{bmatrix} f([x_{1,1}, x_{1,2}, \ldots, x_{1,d}]) \\ f([x_{2,1}, x_{2,2}, \ldots, x_{2,d}]) \\ \vdots \\ f([x_{n,1}, x_{n,2}, \ldots, x_{n,d}]) \end{bmatrix} \tag{2}$$
where $F_X$ denotes the fitness of the sparrow population and $f$ denotes the fitness function value corresponding to each sparrow.
Since the discoverers in the sparrow population search for food on behalf of all sparrows, they have higher fitness values. Throughout the algorithm’s iterations, the discoverer’s position is continuously adjusted according to Equation (3).
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot T}\right), & R_2 < ST \\[2mm] X_{i,j}^{t} + Q \cdot L, & R_2 \ge ST \end{cases} \tag{3}$$
where $t$ denotes the current iteration number and $j = 1, 2, \ldots, d$; $T$ denotes the maximum number of iterations; $X_{i,j}$ denotes the position of the $i$th sparrow in the $j$th dimension; $\alpha \in (0, 1]$ is a randomly generated number; $R_2 \in [0, 1]$ is the warning value; $ST \in [0.5, 1]$ denotes the safety value; $Q$ is a random number that follows a normal distribution; and $L$ denotes a $1 \times d$ matrix of ones. When $R_2 < ST$, there are no predators near the foraging area and the discoverer can perform extensive foraging operations. Conversely, when $R_2 \ge ST$, some sparrows deliver a danger alert to the remaining sparrows, and all sparrows quickly move to safe foraging areas.
While foraging, the followers compete with the discoverers for food. If they succeed, they obtain the food found by the discoverer; otherwise, they continue to follow the discoverer. Equation (4) shows the formula for updating follower positions.
$$X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\[2mm] X_{p}^{t+1} + \left|X_{i,j}^{t} - X_{p}^{t+1}\right| \cdot A^{+} \cdot L, & i \le n/2 \end{cases} \tag{4}$$
where $X_p^{t+1}$ denotes the global best position at the $(t+1)$th iteration, $X_{worst}^{t}$ denotes the global worst position at the $t$th iteration, $n$ is the population size, $A$ is a $1 \times d$ matrix whose elements are randomly assigned $1$ or $-1$, and $A^{+} = A^{T}(AA^{T})^{-1}$.
A random position is chosen for the vigilantes, and the formula for updating their status is presented in Equation (5).
$$X_{i,j}^{t+1} = \begin{cases} X_{best}^{t} + \beta \cdot \left|X_{i,j}^{t} - X_{best}^{t}\right|, & f_i > f_g \\[2mm] X_{i,j}^{t} + K \cdot \left(\dfrac{\left|X_{i,j}^{t} - X_{worst}^{t}\right|}{(f_i - f_{worst}) + \varepsilon}\right), & f_i = f_g \end{cases} \tag{5}$$
where $X_{best}$ is the current global best position; $\beta$ is a randomly generated parameter that controls the sparrow’s step length and follows a normal distribution with mean $0$ and variance $1$; $K \in [-1, 1]$ is a random number that controls the search direction of the sparrow; $f_i$ is the fitness value of the $i$th sparrow; $f_g$ is the current global best fitness value; $f_{worst}$ is the fitness value of the sparrow with the worst position in the entire population; and $\varepsilon$ is a small constant that keeps the denominator in Equation (5) from being zero. When $f_i > f_g$, the sparrow is at the edge of the population and has a greater chance of being caught by predators, and $X_{best}$ indicates the safest position at this moment. When $f_i = f_g$, sparrows in the middle of the population perceive the danger and move closer to sparrows in safe positions to avoid being caught.

2.2. Sparrow Search Algorithm Pseudo Code

According to the above algorithm design steps, the following Algorithm 1 is a pseudocode representation of the specific algorithm flow.
Algorithm 1 The framework of the SSA.
Input:
T: the maximum iterations
PD: the number of producers
SD: the number of sparrows who perceive the danger
R2: the alarm value
ST: safety value
n: the number of sparrows
Initialize a population of n sparrows and define its relevant parameters.
Output: Xbest, fbest.
1: While (t < T)
2: Rank the fitness values and find the current best individual and the current worst individual.
3: R2 = rand (1)
4:    for i = 1: PD
5:      Using Equation (3) update the sparrow’s location;
6:    end for
7:    for i = (PD + 1): n
8:      Using Equation (4) update the sparrow’s location;
9:     end for
10:   for i = 1: SD
11:        Using Equation (5) update the sparrow’s location;
12:   end for
13: Obtain the current new location;
14: If the new location is better than before, update it;
15: t = t + 1
16: End While
17: return Xbest, fbest.
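For readers who prefer executable code, the following is a minimal Python (NumPy) sketch of the loop in Algorithm 1 and Equations (3)-(5). The bounds, parameter values, and the `fitness` callback are illustrative assumptions rather than settings from this paper, and the keep-if-better step of line 14 is simplified to a plain re-evaluation.
```python
import numpy as np

def ssa(fitness, dim, n=30, T=500, PD=0.2, SD=0.1, ST=0.8, lb=-100.0, ub=100.0):
    """Minimal SSA sketch following Algorithm 1 and Equations (3)-(5)."""
    X = np.random.uniform(lb, ub, (n, dim))        # random initial population
    f = np.apply_along_axis(fitness, 1, X)
    pd_num, sd_num = int(n * PD), int(n * SD)      # discoverers / vigilantes
    for _ in range(T):
        order = np.argsort(f)                      # best fitness first (minimization)
        X, f = X[order], f[order]
        worst = X[-1].copy()
        R2 = np.random.rand()                      # alarm value
        for i in range(pd_num):                    # discoverer update, Equation (3)
            if R2 < ST:
                alpha = np.random.rand() + 1e-12   # alpha in (0, 1]
                X[i] = X[i] * np.exp(-(i + 1) / (alpha * T))
            else:
                X[i] = X[i] + np.random.randn() * np.ones(dim)
        for i in range(pd_num, n):                 # follower update, Equation (4)
            if i + 1 > n / 2:
                X[i] = np.random.randn() * np.exp((worst - X[i]) / (i + 1) ** 2)
            else:
                A = np.random.choice([-1.0, 1.0], dim)
                A_plus = A / dim                   # A^T(AA^T)^-1 reduces to A/d for +-1 entries
                X[i] = X[0] + np.dot(np.abs(X[i] - X[0]), A_plus) * np.ones(dim)
        for i in np.random.choice(n, sd_num, replace=False):  # vigilantes, Equation (5)
            if f[i] > f[0]:
                X[i] = X[0] + np.random.randn() * np.abs(X[i] - X[0])
            else:
                K = np.random.uniform(-1, 1)
                X[i] = X[i] + K * np.abs(X[i] - worst) / (f[i] - f[-1] + 1e-50)
        X = np.clip(X, lb, ub)
        f = np.apply_along_axis(fitness, 1, X)     # re-evaluate the new locations
    i_best = int(np.argmin(f))
    return X[i_best], f[i_best]

# Example: minimize the 10-dimensional Sphere function
x_best, f_best = ssa(lambda x: float(np.sum(x ** 2)), dim=10)
```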

3. Multi-Strategy Hybrid Crazy Sparrow Search Algorithm

3.1. Population Initialization

3.1.1. Logistic-Tent Hybrid Chaos Map

Chaotic systems describe complex chaotic phenomena arising from deterministic nonlinear systems that are sensitive to initial conditions, nonperiodic, and internally stochastic [55,56]. In general, chaotic systems are classified into low- and high-dimensional chaos [57]. To develop chaotic systems with better chaotic performance, researchers have combined multiple low-dimensional chaotic maps into new composite chaotic systems, which can effectively overcome the shortcomings of low-dimensional chaos while being less complex and easier to implement than high-dimensional chaos. Tang et al. proposed a logistic-sine composite chaotic map and used it for image encryption to address the limitations of traditional one-dimensional chaotic systems and the security of parameter values [58], achieving good encryption results. To safeguard algorithmic security, Zhang et al. employed the chaotic sequence derived from a logistic-sine-cosine composite chaotic system, applying rank dislocation and cyclic pixel diffusion [59]. Combining the logistic chaotic map with the tent chaotic map likewise improves the randomness of the generated sequences.
The mathematical formula for it is:
$$X_{n+1} = \begin{cases} \left(r X_n (1 - X_n) + \dfrac{(4 - r) X_n}{2}\right) \bmod 1, & X_n < 0.5 \\[2mm] \left(r X_n (1 - X_n) + \dfrac{(4 - r)(1 - X_n)}{2}\right) \bmod 1, & X_n \ge 0.5 \end{cases} \tag{6}$$
where $X$ is the system variable, $r$ is the control parameter, $X \in (0, 1)$, and $r \in (0, 4)$.
This amalgamation of the logistic and tent chaotic systems yields complex dynamics while offering fast iteration, strong autocorrelation properties, and broad applicability to many sequences.
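As an illustration, the hybrid map of Equation (6) can be iterated in a few lines of Python; the seed x0, parameter r, and sequence length below are arbitrary illustrative choices.
```python
import numpy as np

def logistic_tent(x0, r, length):
    """Generate a chaotic sequence with the logistic-tent hybrid map, Equation (6)."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        if x < 0.5:
            x = (r * x * (1 - x) + (4 - r) * x / 2) % 1
        else:
            x = (r * x * (1 - x) + (4 - r) * (1 - x) / 2) % 1
        seq[i] = x
    return seq

chaos = logistic_tent(x0=0.37, r=3.99, length=1000)  # all values lie in [0, 1)
```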

3.1.2. Initial Population Elitism

The original population is enhanced using lens-imaging inverse learning [60]. Let $x_j$ and $x_j^{*}$ denote the current sparrow individual and its image after lens-imaging reversal, respectively.
$$x_j^{*} = \frac{a_j + b_j}{2} + \frac{a_j + b_j}{2k} - \frac{x_j}{k} \tag{8}$$
where $a_j$ and $b_j$ denote the minimum and maximum values in the $j$th dimension of the current population, respectively, and $k$ is the scaling factor of the lens; here $k = 10{,}000$.
The elite chaotic reverse learning initialization of the sparrow population proceeds as follows. Let the initial sparrow population be $X = (x_{i1}, \ldots, x_{id})$, $i = 1, \ldots, n$, where $x_{id}$ denotes the position of the $i$th sparrow in the $d$th dimension. The population $X$ is substituted into Equation (7) to generate the chaotic population $Y$, and into Equation (8) to generate the lens-imaging reversal population $Z$. The sparrow individuals in populations $Y$ and $Z$ are ranked by fitness value, and the top $n$ individuals form the elite chaotic reverse population $P$. Finally, the top $n$ individuals of population $P$ and the original sparrow population $X$, ranked by fitness value, form the new initial sparrow population. A sketch of this procedure is given below.
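The following Python sketch implements the procedure above under stated assumptions: the chaotic population is generated by iterating the logistic-tent map of Equation (6) from random seeds (standing in for Equation (7)), and `fitness`, the bounds, and the number of map iterations are illustrative.
```python
import numpy as np

def elite_init(fitness, n, dim, lb, ub, k=10_000, r=3.99):
    """Elite chaotic reverse-learning initialization (sketch of Section 3.1.2)."""
    X = np.random.uniform(lb, ub, (n, dim))        # original population X
    # Chaotic population Y via the logistic-tent map of Equation (6)
    C = np.random.rand(n, dim)                     # chaotic seeds in (0, 1)
    for _ in range(dim):                           # iterate the map to decorrelate seeds
        low = C < 0.5
        C = np.where(low,
                     (r * C * (1 - C) + (4 - r) * C / 2) % 1,
                     (r * C * (1 - C) + (4 - r) * (1 - C) / 2) % 1)
    Y = lb + (ub - lb) * C
    # Lens-imaging reverse population Z, Equation (8)
    a, b = X.min(axis=0), X.max(axis=0)            # per-dimension extremes
    Z = (a + b) / 2 + (a + b) / (2 * k) - X / k
    # Elite population P: the n fittest individuals of Y and Z combined
    YZ = np.vstack([Y, Z])
    P = YZ[np.argsort(np.apply_along_axis(fitness, 1, YZ))[:n]]
    # New initial population: the n fittest individuals of P and X combined
    PX = np.vstack([P, X])
    return PX[np.argsort(np.apply_along_axis(fitness, 1, PX))[:n]]
```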

3.2. Location Formula Update

3.2.1. Proportionality Improvement

The SSA maintains a constant ratio between discoverers and followers, which may lead to an inefficient global search. This study therefore proposes an improved discoverer-follower ratio coefficient that adaptively reduces the number of discoverers while increasing the number of followers.
The numbers of discoverers and followers are adjusted as follows:
$$r = b \cdot \left( \tan\left( \frac{\pi}{4} - \frac{\pi t}{4 \cdot iter_{max}} \right) - k \cdot rand(0, 1) \right) \tag{9}$$
$$pNum = r \cdot N \tag{10}$$
$$sNum = (1 - r) \cdot N \tag{11}$$
where $k$ is the perturbation variation factor that disturbs the nonlinearly decreasing value of $r$, $pNum$ is the number of discoverers, $sNum$ is the number of followers, and $b$ regulates the equilibrium between discoverers and followers, controlling the proportion of individuals in each role.
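In code, the adaptive split of Equations (9)-(11) might look as follows; the values of b and k and the clipping of r are illustrative assumptions.
```python
import numpy as np

def split_population(t, iter_max, N, b=0.7, k=0.1):
    """Adaptive discoverer/follower counts, sketch of Equations (9)-(11)."""
    r = b * (np.tan(np.pi / 4 - np.pi * t / (4 * iter_max))
             - k * np.random.rand())               # nonlinearly falling ratio
    r = float(np.clip(r, 0.05, 0.95))              # keep both groups non-empty
    p_num = max(1, int(r * N))                     # number of discoverers, Equation (10)
    s_num = N - p_num                              # number of followers, Equation (11)
    return p_num, s_num
```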

3.2.2. Crazy Operator to Improve the Discoverer

The element of “craziness” plays a crucial role in amplifying the unexpected behavior exhibited by the group [61]. A crazy operator is introduced into the discoverer’s position update equation to maintain diversity: it perturbs the discoverer’s position with a predetermined probability of craziness, adding an unpredictable component to the update. The updated discoverer formula is:
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot T}\right) + P(c_1) \cdot \mathrm{sign}(c_1) \cdot x_{craziness}, & R_2 < ST \\[2mm] X_{i,j}^{t} + Q \cdot L + P(c_1) \cdot \mathrm{sign}(c_1) \cdot x_{craziness}, & R_2 \ge ST \end{cases} \tag{12}$$
where P ( c 1 ) and s i g n ( c 1 ) are defined, respectively, as:
$$P(c_1) = \begin{cases} 1, & c_1 \le P_{cr} \\ 0, & \text{otherwise} \end{cases} \tag{13}$$
$$\mathrm{sign}(c_1) = \begin{cases} -1, & c_1 \ge 0.5 \\ 1, & \text{otherwise} \end{cases} \tag{14}$$
where $c_1$ denotes a random number uniformly distributed in $(0, 1)$, $x_{craziness}$ is usually taken as a small constant (here $0.0001$), and $P_{cr}$ is the preset probability of craziness. If $P_{cr}$ takes a small value (here $0.3$), $c_1$ will likely exceed $P_{cr}$, and the craziness factor $P(c_1)$ will be 0.
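A sketch of the craziness perturbation of Equations (12)-(14); the default values follow the constants quoted above, and the returned term is added to the discoverer position of Equation (12).
```python
import numpy as np

def craziness_term(p_cr=0.3, x_craziness=1e-4):
    """Craziness perturbation term of Equations (12)-(14)."""
    c1 = np.random.rand()
    P = 1.0 if c1 <= p_cr else 0.0           # Equation (13)
    sign = -1.0 if c1 >= 0.5 else 1.0        # Equation (14)
    return P * sign * x_craziness            # term added in Equation (12)
```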

3.2.3. Lévy Flight Strategy to Improve Followers

Lévy flight strategies have been successfully applied to improve many swarm intelligence algorithms, inspiring researchers to introduce Lévy mechanisms into update strategies to improve algorithm performance. The step length of the walk satisfies the heavy-tailed Lévy distribution shown in Equation (15):
$$L(s) \sim |s|^{-1-\beta}, \quad 0 < \beta \le 2 \tag{15}$$
The random, sporadic steps that produce the Lévy flight’s erratic trajectory are generated according to Equation (16):
$$s = \frac{\mu}{|\nu|^{1/\beta_1}} \tag{16}$$
where $\mu \sim N(0, \sigma_\mu^2)$, $\nu \sim N(0, \sigma_\nu^2)$, $0 < \beta_1 < 2$, and here $\beta_1 = 1.5$.
$$\sigma_\mu = \left[ \frac{\Gamma(1+\beta_1)\,\sin(\pi \beta_1 / 2)}{\Gamma\left(\frac{1+\beta_1}{2}\right) \beta_1 \, 2^{(\beta_1 - 1)/2}} \right]^{1/\beta_1}, \quad \sigma_\nu = 1 \tag{17}$$
The Lévy flight strategy improves the follower formula of the SSA as follows:
$$X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\[2mm] X_{p}^{t+1} + \left|X_{i,j}^{t} - X_{p}^{t+1}\right| \cdot s, & i \le n/2 \end{cases} \tag{18}$$
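The Lévy step $s$ of Equations (15)-(17) is commonly drawn with Mantegna's algorithm; a minimal sketch:
```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm, Equations (16)-(17)."""
    sigma_mu = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = np.random.normal(0.0, sigma_mu, dim)   # numerator, sigma_mu from Equation (17)
    nu = np.random.normal(0.0, 1.0, dim)        # denominator, sigma_nu = 1
    return mu / np.abs(nu) ** (1 / beta)        # heavy-tailed step s, Equation (16)
```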

3.3. Tent Chaotic Perturbation and Cauchy Mutation Strategy

3.3.1. Tent Chaos Perturbation

Zhang et al. modified the tent chaotic mapping equation by introducing the random variable $rand(0, 1) \times \frac{1}{N}$ [62]. The resulting tent chaotic mapping equation is presented below.
$$z_{i+1} = \begin{cases} 2 z_i + rand(0, 1) \times \dfrac{1}{N}, & 0 \le z_i \le \dfrac{1}{2} \\[2mm] 2(1 - z_i) + rand(0, 1) \times \dfrac{1}{N}, & \dfrac{1}{2} < z_i \le 1 \end{cases} \tag{19}$$
The following is the expression using the Bernoulli shift improvement:
$$z_{i+1} = (2 z_i) \bmod 1 + rand(0, 1) \times \frac{1}{N} \tag{20}$$
where N is the number of particles in the sequence.

3.3.2. Cauchy Mutation

The Cauchy mutation derives from the Cauchy distribution, which is characterized by a smaller peak at the origin and a slow decline from the peak toward zero [63], making the mutation range more uniform. The mutation can be characterized as follows.
$$mutation(x) = x \times \left(1 + \tan\left(\pi (u - 0.5)\right)\right) \tag{21}$$
where $x$ is the original individual position, $mutation(x)$ is the individual position after the Cauchy mutation, and $u$ is a random number in the interval $(0, 1)$.
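Both perturbation operators can be written compactly; a minimal sketch of Equations (20) and (21):
```python
import numpy as np

def tent_perturb(z, N):
    """Improved tent map with random perturbation, Equation (20)."""
    return (2 * z) % 1 + np.random.rand() * (1 / N)

def cauchy_mutate(x):
    """Cauchy mutation of an individual position, Equation (21)."""
    u = np.random.rand()
    return x * (1 + np.tan(np.pi * (u - 0.5)))
```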

3.4. LTMSSA Flow Chart

The LTMSSA process diagram is shown in Figure 1. First, the initial parameters of the SSA are set, and the population is initialized using the logistic-tent chaotic map and elite reverse learning. Next, the fitness value and position of each sparrow are calculated, the numbers of discoverers and followers are counted, the positions of the three subpopulations are updated, and the fitness values and average fitness value are recalculated. Finally, the tent chaotic perturbation and the Cauchy mutation are performed; when a perturbed or mutated individual is better than the original, the population fitness values, optimal position, and worst position are updated.

3.5. Computational Complexity

Assume that the algorithm has a population of $N$ individuals, each of dimension $D$, with a maximum iteration limit $iter_{max}$. Let $s_1$ be the time to randomly initialize the population parameters, $j(D)$ the time to evaluate one individual’s fitness, $s_2$ the per-dimension update time of the $pNum$ discoverers, $s_3$ that of the $sNum$ followers, and $s_4$ that of the alerters. The initialization time complexity is then $T_1 = O(s_1 + N(j(D) + D s_1))$. The discoverer update has complexity $T_2 = O(pNum \cdot s_2 \cdot D)$, the follower update $T_3 = O(sNum \cdot s_3 \cdot D)$, and the alerter update $T_4 = O((N - pNum - sNum) \cdot s_4 \cdot D)$. In total, the time complexity of the SSA is $T = T_1 + (T_2 + T_3 + T_4) \cdot iter_{max} = O(D + j(D))$.
In the LTMSSA, the initial stage adds elite chaotic backward learning (time $u_1$) and sorting selection (time $u_2$), so its complexity is $T_{11} = O(s_1 + N(u_1 + j(D) + D s_1) + u_2)$. Evaluating the updated counts of discoverers and followers takes $u_3$, so the discoverer update has complexity $T_{22} = O(pNum \cdot s_2 \cdot D + iter_{max} \cdot u_3)$ and the follower update $T_{33} = O(sNum \cdot s_3 \cdot D + iter_{max} \cdot u_3)$, while the alerter update has $T_{44} = O((N - pNum - sNum) \cdot s_4 \cdot D)$. For the Cauchy mutation and tent chaos perturbation, let $u_4$ be the time to compute $f_{avg}$, $u_5$ and $u_6$ the times to compute the perturbation and mutation formulas, $u_7$ the time to compare a sparrow’s fitness with the average fitness, and $u_8$ the time to update the target position by merit; this stage’s complexity is $T_{55} = O(u_4 + u_5 + u_6 + N(j(D) + u_7) + u_8)$. In conclusion, the overall time complexity of the LTMSSA is $TT = T_{11} + (T_{22} + T_{33} + T_{44} + T_{55}) \cdot iter_{max} = O(D + j(D))$. Since $TT = T$, the time complexity of the LTMSSA remains unchanged.

4. Experimental Results and Discussion

4.1. Test Function and Algorithm Parameters

To gauge the performance of the LTMSSA, simulations were conducted on 23 benchmark test functions. Each algorithm uses a population size of 30 (N) and a maximum of 500 iterations (M). The SSA [31], SSSA [64], FSSA [65], CSFSSA [45], GWO [16], PSO [10], and BOA [66] are used for comparison, and the experimental parameters of each algorithm are listed in Table 1. Table 2 lists the high-dimensional single-peak functions F1–F7, the high-dimensional multi-peak functions F8–F13, and the low-dimensional multi-peak functions F14–F23. High-dimensional single-peak functions have a single global optimum and no local extrema, making them suitable for assessing convergence speed; multi-peak functions contain many local extreme points across dimensions and therefore test the ability to escape local optima.

4.2. Scalability Testing

To test the scalability of the LTMSSA, it was compared with the SSA in different dimensions. According to Table 3, the test outcomes of the LTMSSA in dimensions 20, 50, and 80 are nearly identical. The comparative data show that the solution quality of the SSA before the improvement is clearly lower, and the LTMSSA is more effective in terms of experimental performance.

4.3. Population Diversity Analysis of the LTMSSA

The population initialization is randomized; to evaluate the impact of the improvement strategy, the Sphere function was selected for the optimum-seeking experiment:
$$F(x) = \sum_{k=1}^{D} x_k^2 \tag{22}$$
The initial population distribution and the distribution of individual sparrow positions after 10 and 50 iterations of the LTMSSA are shown in Figure 2a–c, and the distribution of individual sparrow positions after 10 and 50 iterations of the SSA are shown in Figure 2d.
Figure 2 shows that the population initialized by the elite chaotic inverse learning strategy has good diversity, with individual sparrows uniformly distributed around the optimal value, giving the algorithm a good starting point for iteratively finding the optimal solution. As iterations increased, elite sparrows led the population to the optimal solution more quickly. After 50 iterations, the sparrow population in the LTMSSA was more uniform and more concentrated near the optimal solution than the distribution of sparrow individuals in the SSA, verifying that the improved strategy effectively increases population diversity and quality.

4.4. Algorithm Comparison

4.4.1. Comparison of Single-Peak Test Functions

A widely adopted evaluation method is to run the algorithm on test functions with known global optima. Building on the SSA described above, we further evaluated the algorithm on the single-peak test functions, which are well suited to testing local exploitation capability. Table 4 gives the optimization results of the LTMSSA, SSA, SSSA, FSSA, CSFSSA, GWO, PSO, and BOA over 30 independent runs.
Figure 3 provides a visual representation of the convergence curves for each algorithm, illustrating their performance on the single-peak test functions.
Table 4 summarizes the results on F1–F7. The LTMSSA performs better on the single-peak test functions than the other algorithms: on F1, F2, F3, F4, and F7 it attains the smallest mean and standard deviation, while on F5 and F6 the SSA and FSSA achieve lower means and standard deviations than the LTMSSA. Figure 3 shows that the LTMSSA reaches optimal solutions on all seven single-peak test functions, with convergence speed and accuracy exceeding those of the SSA, FSSA, and the other competitors.

4.4.2. Comparison of Multi-Peak Test Functions

The multi-peak test functions include several local optima, making the global optimum difficult to identify; they therefore give a more comprehensive assessment of an algorithm’s capacity to explore and escape local solutions. Table 5 gives the optimization results of the LTMSSA, SSA, SSSA, FSSA, CSFSSA, GWO, PSO, and BOA on the multi-peak test functions over 30 independent runs.
Figure 4 illustrates the convergence patterns of the algorithms mentioned earlier when applied to the multi-peak test functions.
Table 5 shows that the LTMSSA achieves the lowest Avg and Std on F8–F11 and ranks first overall on the multi-peak test functions. As seen in Figure 4, the LTMSSA converges significantly faster than the other methods. The LTMSSA solution has high accuracy and does not ultimately fall into local optima; in particular, on F8, F9, F10, and F11 the LTMSSA shows a better exploration mechanism than the other methods. On F12 and F13, the convergence accuracy of the LTMSSA is not optimal, but its convergence rate gradually increases and it eventually finds the optimal outcome. Overall, the optimization effect of the LTMSSA is therefore stronger.

4.4.3. Comparison of Fixed-Dimensional Test Functions

Finding the global optimum of these test functions requires an algorithm that balances exploration and exploitation well. To validate the sparrow search algorithm’s ability to explore globally and exploit locally, fixed-dimension test functions were chosen for testing. Table 6 lists the optimization outcomes of each algorithm on the fixed-dimension test functions over 30 independent runs.
The convergence curves for each method on the fixed-dimensional test functions are displayed in Figure 5.
Table 6 shows that the LTMSSA also performs excellently on the fixed-dimension test functions. On F14, the Avg and Std of the BOA outperform the LTMSSA, but the convergence plot of F14 shows that the LTMSSA converges faster. On F16 and F17, although the Std of the LTMSSA is not optimal, its Avg is the lowest among all algorithms. On F15, F18, F19, F20, F21, F22, and F23, both the Avg and Std of the LTMSSA are optimal. As Figure 5 shows, the LTMSSA quickly finds the optimal values on all fixed-dimension test functions and has a high convergence rate.

4.4.4. Optimal Value of Each Algorithm

For each algorithm, 30 runs were carried out on every benchmark function to ascertain the best possible outcome, and a box plot analysis was carried out to confirm the stability and convergence of the LTMSSA. The box plots show the maximum, minimum, upper and lower quartiles, median, and outliers: in Figure 6, “◆” marks outliers, “─” marks medians, the ends of each rectangular box mark the upper and lower quartiles, and “-” marks maximum or minimum values.
The box plots show that the LTMSSA has the most stable optimal value across iterations and that its median is closest to the optimal value, indicating that the LTMSSA has stronger robustness. In addition, on each benchmark function, the LTMSSA has fewer outliers than the other algorithms.
To evaluate the comprehensive optimization capability of the LTMSSA, radar plots were drawn after ranking the algorithms by their optimal values.
A lower total score, and hence a smaller enclosed area in the plot, indicates better performance. As Figure 7 shows, the LTMSSA is the top scorer among the eight algorithms, enclosing the smallest area across the single-peak, multi-peak, and fixed-dimension test functions.

4.5. Discussion

The LTMSSA introduced in this research demonstrates superior performance compared with the SSA, SSSA, FSSA, CSFSSA, GWO, PSO, and BOA across the conducted experiments. The findings were analyzed by jointly considering convergence performance and optimization performance. Although the LTMSSA’s mean and standard deviation were slightly worse than the SSA’s on F5, F6, F12, and F13, it exhibited significantly faster convergence. On F14, the Avg and Std of the BOA outperformed the LTMSSA, but the convergence plot of F14 shows that the LTMSSA converges faster; on F16 and F17, although the Std of the LTMSSA is not optimal, its Avg is the lowest among all algorithms. The LTMSSA ranks first in the number of optimal values obtained among the 23 tested functions and has the most stable optimal values in the iterative process. Based on this analysis, the LTMSSA can swiftly achieve an accurate global solution while ensuring rapid convergence, showing a robust ability to explore both wide-ranging and localized areas and consistently delivering effective optimization outcomes. These outcomes stem from the thoroughly explored initial population and the ongoing adjustments made throughout the iterative process. Some aspects of the algorithm still require refinement: its optimization effectiveness is influenced by random numbers, which slightly compromises its overall accuracy; the current dynamic scheme for adjusting parameters and weights may not be the most effective; and the algorithm’s effectiveness should be verified across various iteration counts and diverse population sizes.

5. Engineering Design Issues

5.1. Welded Beam Design

Welded beam design (WBD) is a cost-minimization problem aimed at reducing production expenses [67]. It is a typical nonlinear programming problem with four design variables: height (t), thickness (b), weld width (h), and length (l), as shown in Figure 8.
The mathematical model can be expressed as follows.
Variables:
$$z = [z_1, z_2, z_3, z_4] = [h, l, t, b] \tag{23}$$
Objective function:
$$f(z) = 1.10471 z_1^2 z_2 + 0.04811 z_3 z_4 (14.0 + z_2) \tag{24}$$
where $f(z)$ denotes the total cost, which is to be minimized.
The decision variables take a range of values:
$$0.1 \le z_1 \le 2 \tag{25}$$
$$0.1 \le z_2 \le 10 \tag{26}$$
$$0.1 \le z_3 \le 10 \tag{27}$$
$$0.1 \le z_4 \le 2 \tag{28}$$
Constraints:
$$s_1(z) = \tau(z) - \tau_{max} \le 0 \tag{29}$$
$$s_2(z) = \sigma(z) - \sigma_{max} \le 0 \tag{30}$$
$$s_3(z) = \delta(z) - \delta_{max} \le 0 \tag{31}$$
$$s_4(z) = z_1 - z_4 \le 0 \tag{32}$$
$$s_5(z) = P - P_c(z) \le 0 \tag{33}$$
$$s_6(z) = 0.125 - z_1 \le 0 \tag{34}$$
$$s_7(z) = 1.10471 z_1^2 + 0.04811 z_3 z_4 (14.0 + z_2) - 5.0 \le 0 \tag{35}$$
The expressions of the functions appearing in the constraints are given in Equations (36)–(42).
$$\tau(z) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{z_2}{2R} + (\tau'')^2} \tag{36}$$
$$\tau' = \frac{P}{\sqrt{2}\, z_1 z_2} \tag{37}$$
$$\tau'' = \frac{M R}{J} \tag{38}$$
$$M = P\left(L + \frac{z_2}{2}\right) \tag{39}$$
$$R = \sqrt{\frac{z_2^2}{4} + \left(\frac{z_1 + z_3}{2}\right)^2} \tag{40}$$
$$J = 2\left\{\sqrt{2}\, z_1 z_2 \left[\frac{z_2^2}{12} + \left(\frac{z_1 + z_3}{2}\right)^2\right]\right\} \tag{41}$$
$$P_c(z) = \frac{4.013 E z_3 z_4^3}{6 L^2} \left(1 - \frac{z_3}{2L}\sqrt{\frac{E}{4G}}\right) \tag{42}$$
where $\sigma_{max} = 30{,}000$ psi, $P = 6000$ lb, $L = 14$ in., $\delta_{max} = 0.25$ in., $E = 30 \times 10^6$ psi, $\tau_{max} = 13{,}600$ psi, and $G = 12 \times 10^6$ psi.
Table 7 shows that the LTMSSA achieves the lowest cost, indicating that it performs well on the challenging welded beam design task.
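For reference, the welded beam model can be evaluated with a penalty method so that any of the compared optimizers can minimize it directly. In the sketch below, the penalty coefficient is an assumption, and the bending-stress and deflection expressions σ(z) and δ(z), which the paper does not reproduce, use their standard welded-beam forms.
```python
import numpy as np

def wbd_cost(z, penalty=1e6):
    """Welded beam cost, Equation (24), with penalized constraints (29)-(35)."""
    h, l, t, b = z
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max, delta_max = 13_600.0, 30_000.0, 0.25
    tau_p = P / (np.sqrt(2) * h * l)                        # tau', Equation (37)
    M = P * (L + l / 2)                                     # Equation (39)
    R = np.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)              # Equation (40)
    J = 2 * (np.sqrt(2) * h * l * (l**2 / 12 + ((h + t) / 2) ** 2))  # Equation (41)
    tau_pp = M * R / J                                      # tau'', Equation (38)
    tau = np.sqrt(tau_p**2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (b * t**2)                          # standard bending stress
    delta = 4 * P * L**3 / (E * t**3 * b)                   # standard deflection
    Pc = 4.013 * E * t * b**3 / (6 * L**2) * (1 - t / (2 * L) * np.sqrt(E / (4 * G)))
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
    g = [tau - tau_max, sigma - sigma_max, delta - delta_max, h - b,
         P - Pc, 0.125 - h,
         1.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0]
    return cost + penalty * sum(max(0.0, gi) ** 2 for gi in g)

# A well-known near-optimal design evaluates to roughly 1.7249:
print(wbd_cost(np.array([0.20573, 3.47049, 9.03662, 0.20573])))
```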

5.2. Reducer Design

Reducers are often used in mechanical systems and are an important component of gearboxes [68]. The reducer design task is to find the optimal solution that minimizes the reducer’s weight subject to 11 constraints. Seven variables are involved: the tooth width $b$, the gear module $m$, the number of teeth $z$ in the pinion, the length $l_1$ of the first shaft between bearings, the length $l_2$ of the second shaft between bearings, and the shaft diameters $d_1$ and $d_2$, as shown in Figure 9.
Variables:
$$x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7] = [b, m, z, l_1, l_2, d_1, d_2] \tag{43}$$
Objective function:
$$f(X) = 0.7854 x_1 x_2^2 \left(3.3333 x_3^2 + 14.9334 x_3 - 43.0934\right) - 1.508 x_1 \left(x_6^2 + x_7^2\right) + 7.4777\left(x_6^3 + x_7^3\right) + 0.7854\left(x_4 x_6^2 + x_5 x_7^2\right) \tag{44}$$
Constraints:
$$d_1(X) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0 \tag{45}$$
$$d_2(X) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0 \tag{46}$$
$$d_3(X) = \frac{1.93 x_4^3}{x_2 x_6^4 x_3} - 1 \le 0 \tag{47}$$
$$d_4(X) = \frac{1.93 x_5^3}{x_2 x_7^4 x_3} - 1 \le 0 \tag{48}$$
$$d_5(X) = \frac{\sqrt{\left(745 x_4 / (x_2 x_3)\right)^2 + 16.9 \times 10^6}}{110 x_6^3} - 1 \le 0 \tag{49}$$
$$d_6(X) = \frac{\sqrt{\left(745 x_5 / (x_2 x_3)\right)^2 + 157.5 \times 10^6}}{85 x_7^3} - 1 \le 0 \tag{50}$$
$$d_7(X) = \frac{x_2 x_3}{40} - 1 \le 0 \tag{51}$$
$$d_8(X) = \frac{5 x_2}{x_1} - 1 \le 0 \tag{52}$$
$$d_9(X) = \frac{x_1}{12 x_2} - 1 \le 0 \tag{53}$$
$$d_{10}(X) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0 \tag{54}$$
$$d_{11}(X) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0 \tag{55}$$
Boundary constraints:
$$2.6 \le x_1 \le 3.6 \tag{56}$$
$$0.7 \le x_2 \le 0.8 \tag{57}$$
$$17 \le x_3 \le 28 \tag{58}$$
$$7.3 \le x_4 \le 8.3 \tag{59}$$
$$7.8 \le x_5 \le 8.3 \tag{60}$$
$$2.9 \le x_6 \le 3.9 \tag{61}$$
$$5.0 \le x_7 \le 5.5 \tag{62}$$
Table 8 lists the optimization outcomes. The LTMSSA improves on the other algorithms’ results for the reducer design.
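A corresponding penalty-based sketch of the reducer model, Equation (44) with constraints (45)-(55); the penalty coefficient is again an assumption.
```python
import numpy as np

def reducer_cost(x, penalty=1e6):
    """Speed reducer weight, Equation (44), with penalized constraints (45)-(55)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    d = [27 / (x1 * x2**2 * x3) - 1,
         397.5 / (x1 * x2**2 * x3**2) - 1,
         1.93 * x4**3 / (x2 * x6**4 * x3) - 1,
         1.93 * x5**3 / (x2 * x7**4 * x3) - 1,
         np.sqrt((745 * x4 / (x2 * x3)) ** 2 + 16.9e6) / (110 * x6**3) - 1,
         np.sqrt((745 * x5 / (x2 * x3)) ** 2 + 157.5e6) / (85 * x7**3) - 1,
         x2 * x3 / 40 - 1,
         5 * x2 / x1 - 1,
         x1 / (12 * x2) - 1,
         (1.5 * x6 + 1.9) / x4 - 1,
         (1.1 * x7 + 1.9) / x5 - 1]
    return f + penalty * sum(max(0.0, di) ** 2 for di in d)

# A near-feasible trial design evaluates to roughly 3000;
# near-optimal designs reported in the literature are around 2994-2996.
print(reducer_cost(np.array([3.5, 0.7, 17, 7.3, 7.8, 3.35, 5.29])))
```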

6. Conclusions

A notable limitation of the SSA lies in its inclination to become trapped in local optima, coupled with the sluggish pace at which it converges during iterations. These factors significantly curtail its effectiveness in practical scenarios. To overcome the drawbacks of the original algorithm, this paper puts forth a new solution called the logistic-tent hybrid chaotic maps-based multi-strategy crazy sparrow search algorithm (LTMSSA). By utilizing a logistic-tent hybrid chaotic map, the population is effectively initialized with a balanced and unpredictable distribution. First, the LTMSSA employs an elite chaotic backward learning strategy and an improved discoverer-follower scaling factor, improving population quality and diversity. Secondly, the LTMSSA updates the positions of discoverers and followers by the crazy operator and the Lévy flight strategy to expand the selection of target followers. Finally, during the optimization search, the LTMSSA introduces tent chaotic perturbation and Cauchy mutation strategies to improve the population’s ability to jump out of local optima. The proposed LTMSSA was compared with classical metaheuristic algorithms and SSA variants. The optimization experiments on 23 benchmark functions show that the LTMSSA provides a noteworthy remedy for the drawbacks of the SSA, surpassing other SSA variants and advanced algorithms in both iterative convergence and optimization performance. The welded beam and reducer optimization results further demonstrate that the LTMSSA outperforms several classical metaheuristic algorithms. The experimental results show that the proposed algorithm strikes an effective balance between exploitation and exploration, adapts its global and local search, iterates swiftly, and attains global optimization.

Author Contributions

Conceptualization, X.J. and Y.G.; methodology, X.J.; software, X.J.; validation, X.J., W.W., Y.G. and S.L.; formal analysis, X.J.; investigation, Y.G.; resources, X.J.; data curation, X.J.; writing—original draft preparation, X.J.; writing—review and editing, X.J.; visualization, X.J.; supervision, Y.G. and S.L.; project administration, W.W.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities, grant number 2572019BL04, and the Scientific Research Foundation for the Returned Overseas Chinese Scholars of Heilongjiang Province, grant number LC201407.

Data Availability Statement

Data are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yapici, H.; Cetinkaya, N. A new meta-heuristic optimizer: Pathfinder algorithm. Appl. Soft Comput. 2019, 78, 545–568. [Google Scholar] [CrossRef]
  2. Yazdani, S.; Nezamabadi-Pour, H.; Kamyab, S. A gravitational search algorithm for multimodal optimization. Swarm Evol. Comput. 2014, 14, 69–85. [Google Scholar] [CrossRef]
  3. Tan, Y.; Zhu, Y. Fireworks algorithm for optimization. In Advances in Swarm Intelligence, Proceedings of the International Conference in Swarm Intelligence, Beijing, China, 12–15 June 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 355–364. [Google Scholar] [CrossRef]
  4. Karimi, H.; Kani, I.M. Finding the worst imperfection pattern in shallow lattice domes using genetic algorithms. J. Build. Eng. 2019, 23, 107–113. [Google Scholar] [CrossRef]
  5. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  6. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  7. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  8. Beni, G.; Wang, J. Swarm intelligence in cellular robotic systems. In Robots and Biological Systems: Towards a New Bionics? NATO ASI Series; Springer: Berlin/Heidelberg, Germany, 1993; pp. 703–712. [Google Scholar] [CrossRef]
  9. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar] [CrossRef]
  10. Eberhart, R.C.; Shi, Y. Particle swarm optimization: Developments, applications and resources. In Proceedings of the IEEE Congress on Evolutionary Computation, Seoul, Republic of Korea, 27–30 May 2001; pp. 81–86. [Google Scholar] [CrossRef]
  11. Dorigo, M.; Maniezzo, V.; Colorni, A. The ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. B 1996, 26, 29–41. [Google Scholar] [CrossRef]
  12. Xia, X.; Liu, J.; Hu, Z. An improved particle swarm optimizer based on tabu detecting and local learning strategy in a shrunk search space. Appl. Soft Comput. 2014, 23, 76–90. [Google Scholar] [CrossRef]
  13. Li, S.; Tan, M. A hybrid PSO-BFGS strategy for global optimization of multimodal functions. IEEE Trans. Syst. Man Cybern. B 2011, 41, 1003–1014. [Google Scholar]
  14. Zhao, S.; Liang, J.J.; Suganthan, P.N. Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization. In Proceedings of the Congress on Evolutionary Computation, Singapore, 1–6 June 2008; pp. 3845–3852. [Google Scholar] [CrossRef]
  15. Yang, X.S.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  17. Passino, K.M. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 2002, 22, 52–67. [Google Scholar] [CrossRef]
  18. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-TR06; Erciyes University, Engineering Faculty, Computer Engineering Department: Talas, Turkey, 2005; Volume 129, pp. 2865–2874. [Google Scholar] [CrossRef]
  19. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  20. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
  21. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: London, UK, 2008; pp. 1–147. [Google Scholar] [CrossRef]
  22. Yang, X.S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  23. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC ’09), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  24. Yang, X.S.; Deb, S. Engineering optimisation by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343. [Google Scholar] [CrossRef]
  25. Mirjalili, S.; Lewi, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  26. Pan, W.T. A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowl.-Based Syst. 2012, 26, 69–74. [Google Scholar] [CrossRef]
  27. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  28. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  29. Zhao, R.Q.; Tang, W.S. Monkey Algorithm for global numerical optimization. J. Uncertain Syst. 2008, 2, 164–175. [Google Scholar] [CrossRef]
  30. Jiang, X.; Li, S. BAS: Beetle antennae search algorithm for optimization problems. Int. J. Robot. Control 2018, 1, 1–5. [Google Scholar] [CrossRef]
  31. Xue, J.K.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  32. Liu, T.T.; Yuan, Z.; Wu, L.; Badami, B. An optimal brain tumor detection by convolutional neural network and enhanced sparrow search algorithm. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2021, 235, 459–469. [Google Scholar] [CrossRef]
Figure 1. The LTMSSA process diagram.
Figure 2. (a) Initial population distribution of the LTMSSA; (b) sparrow population distribution after 10 iterations of the LTMSSA; (c) sparrow population distribution after 50 iterations of the LTMSSA; (d) sparrow population distribution after 50 iterations of the SSA.
Figure 3. Convergence curves of each algorithm on the unimodal test functions.
Figure 4. Convergence curves of each algorithm on the multimodal test functions.
Figure 5. Convergence curves of each algorithm on the fixed-dimensional test functions.
Figure 6. Box plot of 23 benchmark functions.
Figure 7. Radar map of 23 benchmark functions.
Figure 8. Welded beam design problem.
Figure 9. Reducer design problem.
Table 1. Experimental parameters.

Algorithm   Parameters
SSA         ST = 0.8, PD = 0.2, SD = 0.2
SSSA        ST = 0.8, PD = 0.2, SD = 0.2
FSSA        ST = 0.8, PD = 0.2, SD = 0.2
CSFSSA      ST = 0.8, PD = 0.2, SD = 0.2
LTMSSA      ST = 0.8, PD = 0.2, SD = 0.2
GWO         a = 2→0, r1, r2 ∈ [0, 1]
PSO         W = 0.9, C1 = 1.49445, C2 = 1.49445
BOA         a = 0.1→0.3
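For anyone reimplementing the comparison, the shared settings in Table 1 map naturally onto a small configuration object. The following is a minimal sketch, not the authors' code; the class and field names are our own, and the parameter glosses follow the conventional SSA usage (ST: safety threshold, PD: proportion of discoverers/producers, SD: proportion of danger-aware scouters).

```python
from dataclasses import dataclass

@dataclass
class SSAConfig:
    """Shared settings of the five SSA variants in Table 1 (illustrative sketch)."""
    st: float = 0.8  # safety threshold: above it, discoverers retreat toward safe areas
    pd: float = 0.2  # fraction of the population acting as discoverers (producers)
    sd: float = 0.2  # fraction of sparrows that scout for danger

# SSA, SSSA, FSSA, CSFSSA, and LTMSSA all use the same values here.
default_config = SSAConfig()
```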
Table 2. Test functions.

Unimodal functions (Dimension D = 30):
F1: $F_1(x)=\sum_{k=1}^{D}x_k^2$; scope [−100, 100]; optimal value 0
F2: $F_2(x)=\sum_{k=1}^{D}|x_k|+\prod_{k=1}^{D}|x_k|$; scope [−10, 10]; optimal value 0
F3: $F_3(x)=\sum_{k=1}^{D}\left(\sum_{l=1}^{k}x_l\right)^2$; scope [−100, 100]; optimal value 0
F4: $F_4(x)=\max_k\{|x_k|,\ 1\le k\le D\}$; scope [−100, 100]; optimal value 0
F5: $F_5(x)=\sum_{k=1}^{D-1}\left[100(x_{k+1}-x_k^2)^2+(x_k-1)^2\right]$; scope [−30, 30]; optimal value 0
F6: $F_6(x)=\sum_{k=1}^{D}\left(\lfloor x_k+0.5\rfloor\right)^2$; scope [−100, 100]; optimal value 0
F7: $F_7(x)=\sum_{k=1}^{D}k\,x_k^4+\mathrm{random}(0,1)$; scope [−1.28, 1.28]; optimal value 0

Multimodal functions (Dimension D = 30):
F8: $F_8(x)=\sum_{k=1}^{D}-x_k\sin\left(\sqrt{|x_k|}\right)$; scope [−500, 500]; optimal value −418.9826 × D
F9: $F_9(x)=\sum_{k=1}^{D}\left[x_k^2-10\cos(2\pi x_k)+10\right]$; scope [−5.12, 5.12]; optimal value 0
F10: $F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{k=1}^{D}x_k^2}\right)-\exp\left(\frac{1}{D}\sum_{k=1}^{D}\cos(2\pi x_k)\right)+20+e$; scope [−32, 32]; optimal value 0
F11: $F_{11}(x)=\sum_{k=1}^{D}|x_k\sin(x_k)+0.1x_k|$; scope [−10, 10]; optimal value 0
F12: $F_{12}(x)=\frac{\pi}{D}\left\{10\sin^2(\pi y_1)+\sum_{k=1}^{D-1}(y_k-1)^2\left[1+10\sin^2(\pi y_{k+1})\right]+(y_D-1)^2\right\}+\sum_{k=1}^{D}\mu(x_k,10,100,4)$, where $y_k=1+\frac{x_k+1}{4}$ and $\mu(x_k,a,p,m)=p(x_k-a)^m$ for $x_k>a$, $0$ for $-a\le x_k\le a$, $p(-x_k-a)^m$ for $x_k<-a$; scope [−50, 50]; optimal value 0
F13: $F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{k=1}^{D-1}(x_k-1)^2\left[1+\sin^2(3\pi x_{k+1})\right]+(x_D-1)^2\left[1+\sin^2(2\pi x_D)\right]\right\}+\sum_{k=1}^{D}\mu(x_k,5,100,4)$; scope [−50, 50]; optimal value 0

Fixed-dimensional functions:
F14: $F_{14}(x)=\left(\frac{1}{500}+\sum_{l=1}^{25}\frac{1}{l+\sum_{k=1}^{2}(x_k-a_{kl})^6}\right)^{-1}$; dimension 2; scope [−65.536, 65.536]; optimal value 0.998
F15: $F_{15}(x)=\sum_{k=1}^{11}\left[a_k-\frac{x_1(b_k^2+b_kx_2)}{b_k^2+b_kx_3+x_4}\right]^2$; dimension 4; scope [−5, 5]; optimal value 0.0003075
F16: $F_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$; dimension 2; scope [−5, 5]; optimal value −1.0316
F17: $F_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$; dimension 2; scope [−5, 5]; optimal value 0.398
F18: $F_{18}(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$; dimension 2; scope [−2, 2]; optimal value 3
F19: $F_{19}(x)=-\sum_{k=1}^{4}c_k\exp\left(-\sum_{l=1}^{3}a_{kl}(x_l-p_{kl})^2\right)$; dimension 3; scope [0, 1]; optimal value −3.86
F20: $F_{20}(x)=-\sum_{k=1}^{4}c_k\exp\left(-\sum_{l=1}^{6}a_{kl}(x_l-p_{kl})^2\right)$; dimension 6; scope [0, 1]; optimal value −3.32
F21: $F_{21}(x)=-\sum_{k=1}^{5}\left[(X-a_k)(X-a_k)^T+c_k\right]^{-1}$; dimension 4; scope [0, 10]; optimal value −10.1532
F22: $F_{22}(x)=-\sum_{k=1}^{7}\left[(X-a_k)(X-a_k)^T+c_k\right]^{-1}$; dimension 4; scope [0, 10]; optimal value −10.4028
F23: $F_{23}(x)=-\sum_{k=1}^{10}\left[(X-a_k)(X-a_k)^T+c_k\right]^{-1}$; dimension 4; scope [0, 10]; optimal value −10.5363
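For readers reproducing the benchmark suite, the closed forms in Table 2 translate directly into code. Below is a minimal Python sketch of three representative functions (the NumPy vectorization is our choice, not prescribed by the paper); each returns its known optimum of 0 at x = 0.

```python
import numpy as np

def f1_sphere(x):
    """F1: unimodal sphere function, optimum 0 at x = 0."""
    return np.sum(x ** 2)

def f9_rastrigin(x):
    """F9: multimodal Rastrigin function, optimum 0 at x = 0."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10_ackley(x):
    """F10: multimodal Ackley function, optimum 0 at x = 0."""
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

# Quick check at the known optimum for D = 30:
x0 = np.zeros(30)
print(f1_sphere(x0), f9_rastrigin(x0), f10_ackley(x0))  # 0.0 0.0 0.0
```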
Table 3. Algorithm scalability testing.

F     Algorithm   Dim = 20              Dim = 50              Dim = 80
                  Avg        Std        Avg        Std        Avg        Std
F1    LTMSSA      0          0          0          0          0          0
      SSA         4.58e-29   2.51e-28   1.47e-36   6.76e-36   9.75e-33   5.34e-32
F2    LTMSSA      0          0          0          0          0          0
      SSA         1.00e-30   4.03e-30   3.11e-32   1.62e-31   2.49e-33   1.36e-32
F3    LTMSSA      0          0          0          0          0          0
      SSA         3.21e-14   1.19e-13   1.85e-13   9.13e-13   2.19e-15   6.46e-15
F4    LTMSSA      0          0          0          0          0          0
      SSA         2.62e-9    1.27e-8    3.48e-9    1.31e-8    1.37e-9    3.70e-9
F5    LTMSSA      9.15e-3    1.19e-2    1.66e-1    2.19e-1    3.42e-1    4.04e-1
      SSA         8.51e-4    2.50e-3    3.11e-3    7.30e-3    5.83e-3    1.04e-2
F6    LTMSSA      1.30e-3    1.28e-3    1.20e-2    9.59e-3    1.96e-2    2.39e-2
      SSA         6.29e-6    1.23e-5    2.17e-5    4.13e-5    5.46e-5    1.08e-4
F7    LTMSSA      2.17e-4    1.30e-4    2.12e-4    1.64e-4    2.24e-4    1.68e-4
      SSA         2.61e-4    1.85e-4    4.06e-4    3.36e-4    2.66e-4    2.58e-4
F8    LTMSSA      -7.28e3    8.00e2     -1.86e4    1.80e3     -2.38e4    4.46e3
      SSA         -6.60e3    1.56e3     -1.65e4    4.60e3     -2.98e4    4.84e3
F9    LTMSSA      0          0          0          0          0          0
      SSA         0          0          0          0          0          0
F10   LTMSSA      8.88e-16   0          8.88e-16   0          8.88e-16   0
      SSA         3.73e-15   6.42e-15   1.72e-15   2.02e-15   1.95e-15   1.90e-15
F11   LTMSSA      0          0          0          0          0          0
      SSA         0          0          0          0          0          0
F12   LTMSSA      4.13e-4    5.23e-4    4.24e-4    3.64e-4    3.40e-4    5.04e-4
      SSA         5.82e-7    1.69e-6    6.73e-7    1.04e-6    3.54e-7    5.52e-7
F13   LTMSSA      9.01e-3    1.44e-2    1.89e-2    2.17e-2    2.37e-2    2.40e-2
      SSA         1.38e-5    3.64e-5    1.15e-5    2.04e-5    1.76e-5    3.29e-5
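The Avg/Std entries in Tables 3–6 are the usual summary statistics over repeated independent runs. A minimal sketch of that protocol is given below; the run count of 30 and the `optimize(func, dim, bounds)` interface are assumptions for illustration, not details taken from this paper.

```python
import numpy as np

def summarize(optimize, func, dim, bounds, runs=30):
    """Run an optimizer several times and report the mean and standard
    deviation of the best objective values it returns."""
    best = np.array([optimize(func, dim, bounds) for _ in range(runs)])
    return best.mean(), best.std()
```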
Table 4. Unimodal function optimization results.

F     Metric   LTMSSA     SSA        SSSA       FSSA       CSFSSA     GWO        PSO        BOA
F1    Avg      0          2.53e-32   1.08e-30   1.37e-32   2.43e-30   1.32e-27   1.18e-5    7.77e-11
      Std      0          1.38e-31   5.91e-30   5.31e-32   1.33e-29   1.95e-27   2.98e-5    8.62e-12
F2    Avg      0          5.19e-31   2.67e-37   1.18e-34   3.39e-37   8.15e-17   1.68e-1    2.25e-8
      Std      0          2.84e-30   1.46e-36   4.37e-34   1.86e-36   6.37e-17   5.36e-1    8.11e-9
F3    Avg      0          4.74e-13   2.98e-8    1.18e-14   1.48e-7    6.66e-6    7.36e1     6.33e-11
      Std      0          2.56e-12   8.45e-8    5.97e-14   6.79e-7    1.77e-5    4.88e1     7.40e-12
F4    Avg      0          6.07e-9    1.78e-7    6.74e-9    3.38e-6    8.38e-7    1.52       3.58e-8
      Std      0          2.40e-8    9.37e-7    1.51e-8    1.04e-5    1.02e-6    7.02e-1    4.18e-9
F5    Avg      6.43e-2    1.15e-3    4.70e-1    1.62e-3    1.51       2.70e1     5.89e1     2.89e1
      Std      9.91e-2    2.72e-3    1.04       2.77e-3    3.63       8.32e-1    3.47e1     2.07e-1
F6    Avg      4.11e-3    2.32e-5    1.18e-1    1.45e-5    3.50e-2    8.64e-1    2.59e-2    5.34
      Std      4.28e-3    4.22e-5    8.42e-2    3.03e-5    1.46e-2    3.79e-1    9.38e-2    6.90e-1
F7    Avg      1.75e-4    3.84e-4    7.52e-4    1.95e-3    4.28e-3    1.82e-3    6.69e-2    2.34e-3
      Std      1.60e-4    3.96e-4    1.89e-3    1.02e-3    4.21e-3    1.10e-3    3.76e-2    8.92e-4
Table 5. Multimodal function optimization results.

F     Metric   LTMSSA     SSA        SSSA       FSSA       CSFSSA     GWO        PSO        BOA
F8    Avg      -1.05e4    -9.03e3    -8.60e3    -3.20e3    -3.05e3    -6.17e3    -5.39e3    -4.09e3
      Std      1.25e3     2.43e3     2.12e3     2.23e3     3.56e3     8.61e3     1.49e3     4.13e3
F9    Avg      0          0          7.51       0          3.60       2.32       6.42e1     3.91e1
      Std      0          0          3.84e1     0          1.77e1     2.97       1.46e1     7.99e1
F10   Avg      8.88e-16   1.72e-15   3.26e-15   1.84e-15   4.57e-14   1.01e-13   1.85       2.81e-8
      Std      0          1.53e-15   1.23e-14   1.60e-15   2.45e-13   1.51e-14   8.85e-1    5.16e-9
F11   Avg      0          0          3.56e-3    0          0          3.13e-3    5.64e-2    1.21e-11
      Std      0          0          1.95e-2    0          0          7.76e-3    8.70e-2    1.33e-11
F12   Avg      7.07e-4    9.13e-7    7.79e-3    4.10e-7    7.29e-4    4.32e-2    4.17e-1    5.25e-1
      Std      6.17e-4    1.62e-6    7.81e-3    4.59e-7    7.18e-4    3.90e-2    7.49e-1    1.54e-1
F13   Avg      1.33e-2    2.07e-5    1.06e-1    1.20e-5    3.58e-2    5.69e-1    2.05e-1    2.81
      Std      1.14e-2    6.86e-5    1.28e-1    3.27e-5    4.32e-2    2.25e-1    7.13e-1    3.08e-1
Table 6. Fixed-dimensional function optimization results.

F     Metric   LTMSSA     SSA        SSSA       FSSA       CSFSSA     GWO        PSO        BOA
F14   Avg      2.64       4.34       4.84       2.80       5.30       5.46       2.77       1.10
      Std      3.07       4.46       3.61       3.23       4.51       4.61       2.23       2.59e-1
F15   Avg      3.40e-4    3.98e-4    8.24e-3    3.54e-4    4.20e-4    4.46e-3    5.80e-4    3.90e-4
      Std      4.75e-5    2.44e-4    2.00e-2    1.05e-4    1.41e-4    8.09e-3    2.52e-4    6.34e-5
F16   Avg      -1.03      -1.03      -8.68e-1   -1.03      -4.89e-1   -1.03      -1.03      -1.31e4
      Std      1.053e-9   7.65e-16   3.32e-1    5.44e-16   3.91e-1    2.56e-8    6.39e-16   1.22e4
F17   Avg      3.98e-1    3.98e-1    4.04e-1    3.98e-1    3.98e-1    3.98e-1    3.98e-1    3.99e-1
      Std      1.48e-6    5.78e-7    3.01e-2    2.84e-6    1.40e-6    1.74e-6    0          6.83e-4
F18   Avg      3.00       1.29e1     1.34e1     3.00       2.91e1     3.90       3.00       3.08
      Std      9.50e-13   1.32e1     1.85e1     1.71e-3    1.32e1     4.93       1.08e-12   2.47e-1
F19   Avg      -3.86      -3.81      -3.82      -3.86      -3.86      -3.86      -3.84      -1.87e18
      Std      1.93e-3    1.96e-1    6.66e-2    2.07e-3    2.54e-3    2.71e-3    1.41e-1    1.38e19
F20   Avg      -3.28      -3.27      -3.11      -3.28      -3.28      -3.23      -3.27      -3.01
      Std      2.41e-2    7.17e-2    1.93e-1    5.96e-2    5.39e-2    9.67e-2    5.99e-2    1.14e-1
F21   Avg      -1.01e1    -7.43      -6.34      -1.01e1    -9.81      -9.28      -6.36      -4.83
      Std      7.13e-2    2.59       3.13       7.04e-1    1.44       1.99       3.60       4.80e-1
F22   Avg      -1.04e1    -7.75      -6.06      -1.04e1    -1.00e1    -1.04e1    -6.70      -4.47
      Std      2.80e-2    2.70       3.21       1.67e-1    1.55       9.70e-1    3.60       3.81e-1
F23   Avg      -1.05e1    -7.11      -5.52      -1.05      -9.76      -9.99      -5.98      -4.57
      Std      2.62e-2    2.65       3.12       8.21e-2    2.05       2.06       3.85       8.70e-1
Table 7. Parameter optimization comparison of the welded beam design problem.

Algorithm   h        l        t        b        Optimum value
LTMSSA      0.3345   2.0109   8.2792   0.2451   1.8117
SSA         0.2890   2.5890   7.4037   0.3065   2.0499
SSSA        0.2798   4.5590   9.3352   0.2043   2.0970
FSSA        0.3888   1.7238   8.1146   0.2551   1.8541
CSFSSA      0.4953   2.1360   5.5490   0.5495   2.9458
GWO         0.3007   2.1222   9.0381   0.2058   1.9549
PSO         0.1000   7.0875   9.0366   0.2057   1.9769
BOA         0.3322   3.2012   6.6256   0.3865   2.5096
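The tabled optima can be sanity-checked against the standard welded beam cost objective, f(h, l, t, b) = 1.10471·h²·l + 0.04811·t·b·(14.0 + l). The sketch below evaluates only the cost; the benchmark's shear-stress, bending-stress, buckling, and deflection constraints are omitted.

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the welded beam benchmark (constraints omitted)."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# Evaluating the LTMSSA row of Table 7:
print(welded_beam_cost(0.3345, 2.0109, 8.2792, 0.2451))
# ≈ 1.8116, matching the tabled 1.8117 up to rounding of the 4-decimal variables
```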
Table 8. Parameter optimization comparison of the reducer design problem.

Algorithm   b        m        z         l1       l2       d1       d2       Optimum value
LTMSSA      3.6538   0.7000   15.0919   6.8289   8.1556   3.3513   5.2872   2733.9
SSA         3.2104   0.7375   15.4116   7.2710   7.4555   3.3662   5.2867   2981.9
SSSA        3.5327   0.7000   15.1698   8.2012   7.9430   3.0855   5.4437   3106.5
FSSA        3.3172   0.7079   15.7402   7.5154   7.5370   3.3576   5.2868   2897.4
CSFSSA      3.6443   0.7000   16.7168   8.5967   7.9260   3.3535   5.2868   3017.5
GWO         3.4837   0.7000   17.000    7.5806   7.6425   3.3577   5.2877   3002.1
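Similarly, Table 8's objective is the standard speed reducer weight function; a minimal evaluation sketch (the benchmark's 11 design constraints are omitted) reproduces the LTMSSA optimum from its tabled variables.

```python
def reducer_weight(b, m, z, l1, l2, d1, d2):
    """Weight objective of the speed reducer benchmark (constraints omitted)."""
    return (0.7854 * b * m ** 2 * (3.3333 * z ** 2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

# Evaluating the LTMSSA row of Table 8:
print(reducer_weight(3.6538, 0.7000, 15.0919, 6.8289, 8.1556, 3.3513, 5.2872))
# ≈ 2733.9, the LTMSSA optimum value in Table 8
```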