Article

A Hybrid Intelligent Optimization Algorithm Based on a Learning Strategy

1 School of Maritime Economics and Management, Dalian Maritime University, Dalian 116026, China
2 Public Administration and Humanities College, Dalian Maritime University, Dalian 116026, China
3 Marine Engineering College, Dalian Maritime University, Dalian 116026, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2570; https://doi.org/10.3390/math12162570
Submission received: 1 July 2024 / Revised: 31 July 2024 / Accepted: 18 August 2024 / Published: 20 August 2024
(This article belongs to the Special Issue Mathematics and Engineering II)

Abstract: To overcome the limitations of single-type intelligent optimization algorithms, which are prone to becoming stuck in local optima on complex problems, a hybrid intelligent optimization algorithm named SDIQ is proposed. This algorithm combines simulated annealing (SA), differential evolution (DE), quantum-behaved particle swarm optimization (QPSO), and improved particle swarm optimization (IPSO) into a unified framework. Initially, SA is used to explore the solution space and guide individuals toward preliminary optimization. The individuals are then ranked by fitness and divided into three subpopulations, each optimized by DE, QPSO, and IPSO, respectively. After each iteration, probabilistic learning based on fitness logarithms facilitates mutual learning among subpopulations, enabling global information sharing and improvement. The experimental results demonstrate that SDIQ exhibits strong global search capability and stability in solving both standard test functions and real-world engineering problems. Compared to traditional algorithms, SDIQ enhances global convergence and solution efficiency by integrating multiple optimization strategies and leveraging inter-individual learning, providing an effective solution for complex optimization problems.

1. Introduction

1.1. Motivation

In contemporary scientific and engineering fields, optimization problems are widespread and critically important. Some of these problems are classified as non-convex, presenting significant challenges. Intelligent optimization algorithms are considered essential tools for solving non-convex problems. In recent years, significant attention has been drawn to intelligent optimization algorithms for addressing complex optimization issues across diverse fields, such as industrial design [1], finance [2], and aerospace engineering [3,4].
One notable branch of intelligent optimization comprises metaheuristic algorithms, which draw inspiration from natural processes such as physical phenomena, biological behaviors, and evolutionary concepts to identify optimal solutions [5], offering notable advantages over traditional methods. These algorithms are characterized by their simplicity, flexibility, independence from specific problem characteristics, and their lack of reliance on gradient information [6]. Unlike deterministic mathematical optimization methods, which require detailed problem-specific knowledge and are prone to local optima entrapment, metaheuristic algorithms employ randomization to explore the search space globally [7]. This reduces the likelihood of becoming trapped in local optima and makes them independent of initial solutions.
Numerous intelligent optimization algorithms are developed annually by researchers, each representing a unique optimization strategy. However, individual algorithms often excel at solving specific types of optimization problems but may exhibit limited performance when faced with a broad range of diverse real-world problems. Therefore, combining the strengths of various optimization algorithms into a hybrid intelligent optimization algorithm is a valuable way to enhance overall optimization capability. By integrating the advantages of different algorithms and facilitating mutual learning, a composite platform can be created that continuously incorporates new algorithms and improves the probability of achieving global convergence. This approach is both highly significant and challenging in practical applications.

1.2. Literature Review

Over the past decade, the development of nature-inspired intelligent optimization algorithms has accelerated significantly [8]. Various natural phenomena have inspired numerous advanced algorithms, including those based on biological, physical, and chemical processes. Notable examples include simulated annealing (SA), genetic algorithms (GA), ant colony optimization (ACO), particle swarm optimization (PSO), and differential evolution (DE) [9,10,11,12,13]; they are characterized by their simplicity, flexibility, and independence from the derivative information of the objective function. When compared to traditional gradient-based methods like Newton’s method and gradient descent, intelligent optimization algorithms exhibit enhanced adaptability, making them suitable for a wide range of applications.
Despite their successes, each of these algorithms possesses intrinsic limitations. For instance, although the SA algorithm is known for its ability to avoid local optima partially, it tends to over-explore, leading to reduced efficiency, particularly in large-scale problems where its convergence is notably slow [14]. GA can experience premature convergence and require careful tuning of parameters [15]. ACO is computationally intensive and may not scale well with problem size [16]. Similarly, PSO and DE are susceptible to becoming trapped in local minima and are sensitive to parameter settings [17]. These challenges highlight the necessity for ongoing innovation and enhancement in the field of intelligent optimization.
In general, no single intelligent algorithm can universally solve all optimization problems effectively. While a certain algorithm might excel at addressing specific challenges, it may perform suboptimally when applied to different scenarios, underscoring the limitations posed by the “No Free Lunch” theorem in optimization [18]. Consequently, there is a burgeoning interest within the research community in developing robust, adaptive, and stable intelligent algorithms capable of performing well across diverse problem landscapes. A promising approach to address these challenges is the development of hybrid intelligent optimization algorithms, which combine the strengths of multiple algorithms to mitigate their weaknesses and enhance overall performance [19,20,21].
In the field of intelligent optimization algorithms, a significant volume of research is conducted annually on developing new hybrid algorithms or enhancing the performance of existing ones. Recent advancements in hybrid algorithms have demonstrated their potential to improve optimization outcomes significantly. For example, a hybrid algorithm based on GA and PSO has been proposed for system performance optimization in non-orthogonal multiple access mobile edge computing surveillance networks [22]. Similarly, combining DE with ACO has led to enhanced exploration and exploitation capabilities in optimal resource allocation for cloud computing, thereby yielding improved optimization results [23]. Moreover, the integration of GA, DE, and PSO enhances landslide spatial modeling using an adaptive neuro-fuzzy inference system (ANFIS), offering advanced capabilities for optimizing landslide susceptibility mapping with robust performance metrics [24]. These hybrid approaches leverage the complementary strengths of different algorithms, facilitating more robust and effective optimization strategies.

1.3. Contribution and Paper Organization

In this context, the present study proposes a hybrid intelligent optimization algorithm based on learning strategies, aimed at integrating the advantages of different optimization algorithms to enhance overall optimization capability. The algorithm, named SDIQ, synergizes the strengths of simulated annealing (SA) [9], differential evolution (DE) [13], improved particle swarm optimization (IPSO) [25], and quantum-behaved particle swarm optimization (QPSO) [26]. For clarity, a list of abbreviations used in this paper is provided in Table 1. The main contributions of this paper are as follows: (i) A new hybrid intelligent optimization algorithm is proposed, fully leveraging the strengths of the individual sub-algorithms. (ii) Experimental validation demonstrates the effectiveness and superiority of the proposed algorithm in solving complex optimization problems.
The remainder of this paper is structured as follows. Section 2 describes the basic principles and procedures of the proposed hybrid intelligent algorithm, accompanied by detailed descriptions of its specific technical components, including initial particle optimization and sorting, subpopulation optimization, and inter-population learning strategies. Section 3 presents the application and discussion of the algorithm in solving standard test functions. Section 4 illustrates the application of the SDIQ algorithm to classic engineering optimization problems, including the minimum-cost welded beam design, the minimum-mass design of telescopic springs, and the design of pressure vessels. Finally, Section 5 concludes the paper.

2. Hybrid Intelligent Optimization Algorithm Based on Multiple Populations (SDIQ)

2.1. The Basic Principles and Procedures of Algorithms

The SDIQ algorithm is based on the following principles:
(i) Rapid initial optimization
Leveraging the fast and parallel nature of SA, the algorithm quickly optimizes all individual particles in the population for initial fine-tuning. This ensures that they reach a local optimum rapidly, saving valuable computational time for subsequent iterations and providing a strong starting point for further optimization.
(ii) Population splitting
After the initial optimization, particles are sorted based on their fitness values from high to low and evenly divided into three subpopulations. This sorting ensures that particles within each subpopulation exhibit similar fitness levels, while there are significant fitness differences between subpopulations. Such stratification allows for targeted optimization within subpopulations, enhancing the algorithm’s ability to explore diverse regions of the search space.
(iii) Rotation optimization and interactive learning
Each of the three subpopulations undergoes iterative optimization using DE, IPSO, and QPSO, respectively. Employing different optimization strategies within each subpopulation helps the algorithm escape local optima and seek the global optimum more effectively. After each algorithm completes an iteration cycle, the particles in each subpopulation learn from the optimal particles in other subpopulations with probabilities proportional to the logarithm of their fitness values. This cross-pollination of information, functioning as a mutation operation, not only aids in acquiring better solutions but also accelerates algorithm convergence by fostering diversity and reducing the likelihood of premature convergence. Once the learning phase concludes, the optimization algorithms corresponding to each subpopulation are rotated, ensuring that each subpopulation benefits from the unique strengths of different algorithms. Iterations continue until the termination conditions, such as a predefined number of iterations or a convergence criterion, are met.
The SDIQ algorithm process is illustrated in Figure 1. Throughout this document, it is assumed that the problems to be solved aim to minimize the objective function. Problems aiming to maximize the objective function can be converted into minimization problems by negating the objective function.
Step 1: Particle initialization. Initialize the particles by setting the population size N (where N is a multiple of 3). Each particle is assigned a random initial position within the specified variable range. The positions of particles in the population are represented by a matrix:
$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ \vdots & \vdots & & \vdots \\ x_{N,1} & x_{N,2} & \cdots & x_{N,d} \end{bmatrix}$$
where $d$ denotes the dimensionality of the problem and $x_{i,j}$ represents the position of the $i$-th particle in the $j$-th dimension. Each row of the matrix corresponds to a particle's coordinates in the $d$-dimensional space, with different rows representing different particles. The vector $x_i$ denotes the position coordinates of particle $i$, expressed as a row vector.
Step 2: The SA algorithm is employed to conduct the initial optimization of particles. The termination conditions for this step are defined as follows: the iterative process terminates if the number of iterations exceeds $iter_0$ or if the optimal value remains unchanged for $iter_{01}$ consecutive generations. This step ensures that the particles achieve an initial refinement, setting a solid foundation for subsequent optimization phases.
Step 3: Sorting and subpopulation division. The population of particles resulting from Step 2 is sorted based on their fitness values in descending order. Following the sorting process, the particles are evenly divided into three subpopulations: $X_1$, $X_2$, and $X_3$. Each subpopulation contains particles with similar fitness levels, facilitating targeted optimization strategies within each subgroup. Additionally, the overall iteration counter is initialized as $iter_1 = 0$, setting the stage for the iterative optimization process that follows.
Step 4: The three subpopulations obtained from Step 3 are subsequently assigned to different evolutionary algorithms for iterative optimization. Specifically, subpopulation $X_1$ is assigned to the DE algorithm, subpopulation $X_2$ to the IPSO algorithm, and subpopulation $X_3$ to the QPSO algorithm. Each of these sub-algorithms operates independently and follows a pre-defined termination criterion. The internal termination conditions for each sub-algorithm are set as follows: the iterative process terminates if the number of iterations exceeds $iter_2$ or if the optimal value remains unchanged for $iter_{21}$ consecutive generations. This ensures that computational resources are not wasted on futile iterations. Upon completion of the iterative process, the updated subpopulations are denoted as $X_1'$, $X_2'$, and $X_3'$, respectively, reflecting the evolved states post-optimization.
Step 5: The three new subpopulations derived from Step 4 undergo a phase of interactive learning. This phase is crucial, as it facilitates the exchange of information and strategies between subpopulations, enhancing the overall optimization process. Each particle in a subpopulation learns from the optimal particles of the other subpopulations, which helps avoid local optima and promotes the discovery of the global optimum. Upon completion of this learning phase, permutation operations $X_1 = X_3'$, $X_2 = X_1'$, and $X_3 = X_2'$ are applied to the subpopulations to further diversify the search process, and the iteration count is incremented: $iter_1 = iter_1 + 1$. The termination conditions are defined rigorously to ensure efficient convergence: the computation concludes if the iteration count $iter_1$ exceeds the maximum allowable iterations $iter_{11}$, or if the sum of distances between the optimal particles of the three subpopulations falls below the specified tolerance $r_{123}$, indicating convergence. If neither condition is met, the subpopulations $X_1$, $X_2$, and $X_3$ are returned to Step 4 for further iteration. This process continues until one of the termination criteria is satisfied, ensuring a robust and comprehensive search for the optimal solution.
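As a minimal sketch (not the authors' implementation), the outer loop of Steps 1-5 can be expressed in Python. The sub-algorithm used here is a placeholder perturbation standing in for DE/IPSO/QPSO, and all names and parameter values are illustrative:

```python
import random

def sdiq_sketch(objective, dim, bounds, pop_size=6, outer_iters=3, seed=0):
    """Illustrative SDIQ outer loop: initialize, sort, split into three
    subpopulations, optimize each, then rotate the algorithm assignment."""
    rng = random.Random(seed)
    lo, hi = bounds

    # Step 1: random initialization (SA pre-optimization omitted in this sketch).
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def local_search(sub):
        # Placeholder sub-algorithm: perturb each particle, keep improvements.
        out = []
        for x in sub:
            cand = [min(max(xi + rng.uniform(-0.1, 0.1), lo), hi) for xi in x]
            out.append(cand if objective(cand) < objective(x) else x)
        return out

    opts = [local_search] * 3   # stands in for [DE, IPSO, QPSO]

    for _ in range(outer_iters):
        # Step 3: sort by fitness and split evenly into three subpopulations.
        pop.sort(key=objective)
        third = pop_size // 3
        subs = [pop[:third], pop[third:2 * third], pop[2 * third:]]
        # Step 4: optimize each subpopulation with its assigned algorithm.
        subs = [opt(sub) for opt, sub in zip(opts, subs)]
        # Step 5: rotate the algorithm assignment, then merge back.
        opts = opts[2:] + opts[:2]
        pop = subs[0] + subs[1] + subs[2]

    return min(pop, key=objective)

best = sdiq_sketch(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

The inter-population learning operation of Step 5 is detailed in Section 2.4 and is omitted here for brevity.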

2.2. Initial Optimization and Sorting of Population Particles

The SA algorithm is utilized for the rapid preliminary optimization of population particles. Inspired by the annealing process in solid-state physics, SA is an optimization technique that mimics how a system settles into its lowest energy configuration as it cools. The algorithm uses the Metropolis criterion to decide whether a new solution should be accepted, incorporating a probabilistic acceptance of worse solutions to enable escape from local optima. Under appropriate cooling schedules, the SA algorithm has been theoretically proven to converge to the global optimum with probability 1. The core iterative formulas of the algorithm are as follows:
$$t_k = \alpha t_{k-1}$$
$$\Delta f = f\!\left(x_i^{(k+1)}\right) - f\!\left(x_i^{(k)}\right)$$
$$x_i^{(k+1)} = \begin{cases} x_i^{(k+1)}, & \text{if } \Delta f < 0 \ \text{or} \ \exp\!\left(-\Delta f / t_k\right) > \text{rand} \\ x_i^{(k)}, & \text{otherwise} \end{cases}$$
where $t_k$ is the temperature at iteration $k$, $\alpha \in (0,1)$ is the cooling coefficient, and rand is a uniform random number on $[0,1]$.
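The geometric cooling schedule and Metropolis acceptance rule can be illustrated with a minimal Python sketch; the neighborhood function, parameter values, and function names are our own illustrative choices:

```python
import math
import random

def sa_step(x, f, t, neighbor, rng):
    """One SA move: propose a neighbor and apply the Metropolis criterion."""
    x_new = neighbor(x, rng)
    delta = f(x_new) - f(x)
    # Accept improvements outright; accept worse moves with prob exp(-delta/t).
    if delta < 0 or math.exp(-delta / t) > rng.random():
        return x_new
    return x

def anneal(f, x0, t0=1.0, alpha=0.95, iters=200, seed=1):
    """Minimize a 1-D function with SA under geometric cooling t_k = alpha*t_{k-1}."""
    rng = random.Random(seed)
    neighbor = lambda x, r: x + r.uniform(-0.5, 0.5)
    x, t = x0, t0
    for _ in range(iters):
        x = sa_step(x, f, t, neighbor, rng)
        t *= alpha   # geometric cooling
    return x

x_best = anneal(lambda x: (x - 2.0) ** 2, x0=10.0)
```

Starting far from the minimum of $(x-2)^2$, the sketch steadily improves the solution as the temperature decays and worse moves become rare.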
Following the preliminary optimization using the SA algorithm, the population particles are sorted based on their fitness values. This sorting process arranges the particles in descending order of their fitness, facilitating the subsequent division of the target population into three subpopulations. Each subpopulation is created to contain particles with similar fitness levels, enhancing the effectiveness of targeted optimization strategies applied in later stages. By segregating the population in this manner, the algorithm ensures that the distinct subpopulations can be optimized using different techniques, thus leveraging the strengths of diverse optimization approaches to achieve a more robust and comprehensive solution.

2.3. Optimization of the Subpopulations

The optimization of the three subpopulations obtained in Section 2.2 is conducted using DE, IPSO, and QPSO. Since the subpopulations engage in interactive learning during subsequent iterations, the initial assignment of these optimization algorithms to the subpopulations can be arbitrary.
(1)
Subpopulation Optimization Algorithm: DE algorithm
The DE algorithm, proposed by Storn and Price in 1995 [13], is an intelligent optimization method inspired by the evolutionary processes observed in nature. The DE algorithm’s primary operations include mutation, crossover, and selection. These operations collectively drive the evolution of the population towards optimal solutions.
Mutation operation: The mutation process creates a mutant vector by adding the weighted difference between two population vectors to a third vector. This operation is crucial for introducing diversity and exploring the search space effectively. The formula for the mutation operation is as follows:
$$v_i^{k+1} = x_{i_3}^k + F\left(x_{i_1}^k - x_{i_2}^k\right)$$
where $v_i^{k+1}$ represents the mutant vector for the $i$-th individual in generation $k+1$, while $x_{i_1}^k$, $x_{i_2}^k$, and $x_{i_3}^k$ are randomly selected, mutually distinct vectors from the current population. The indices $i_1$, $i_2$, and $i_3$ are distinct integers such that $i_1 \neq i_2 \neq i_3 \neq i$. The scaling factor $F$ is a positive constant, typically in the range $F \in [0, 2]$, which controls the amplification of the differential variation. During the $k$-th iteration, the mutation operation generates the next generation's trial solution $v_i$ from the parent solution $x_i$.
Crossover operation: The crossover operation aims to enhance the diversity of the population by combining the mutant vector with the target vector, producing a trial vector. Each component of the trial vector is determined by comparing the corresponding components of the mutant and target vectors. The formula for the crossover operation is as follows:
$$u_{i,j}^{k+1} = \begin{cases} v_{i,j}^{k+1}, & \text{if } \text{rand} < C_r \ \text{or} \ j = r_{1N} \\ x_{i,j}^k, & \text{otherwise} \end{cases}$$
where $u_{i,j}^{k+1}$ denotes the $j$-th component of the offspring solution obtained during the $(k+1)$-th iteration and $v_{i,j}^{k+1}$ represents the $j$-th component of the trial solution obtained through the mutation operation. rand is a uniformly distributed random number drawn from the interval $[0,1]$, $C_r$ is the crossover probability, which determines the proportion of components inherited from the mutant vector, and $r_{1N}$ is a randomly chosen index that ensures at least one component of the trial vector differs from the target vector.
Selection operation: The selection operation is the final step in the DE algorithm, where the fitness values of the trial vector and the target vector are compared to determine which vector will survive into the next generation. The selection operation ensures that the population evolves towards better solutions over successive generations. The formula for the selection operation is as follows:
$$x_i^{k+1} = \begin{cases} u_i^{k+1}, & \text{if } f\left(u_i^{k+1}\right) < f\left(x_i^k\right) \\ x_i^k, & \text{otherwise} \end{cases}$$
According to Equation (7), the acceptance of an offspring solution as a new solution is determined by comparing its fitness with that of the parent solution.
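Putting the three operations together, one DE generation can be sketched as follows; this is a generic textbook-style rendering of Equations (4)-(7), with illustrative parameter values:

```python
import random

def de_generation(pop, f, F=0.5, Cr=0.9, rng=None):
    """One DE generation: mutation, crossover, and greedy selection."""
    rng = rng or random.Random(0)
    n, d = len(pop), len(pop[0])
    new_pop = []
    for i in range(n):
        # Mutation: v = x_{i3} + F * (x_{i1} - x_{i2}), indices distinct from i.
        i1, i2, i3 = rng.sample([j for j in range(n) if j != i], 3)
        v = [pop[i3][j] + F * (pop[i1][j] - pop[i2][j]) for j in range(d)]
        # Crossover: inherit from v with prob Cr; index j_rand always from v.
        j_rand = rng.randrange(d)
        u = [v[j] if (rng.random() < Cr or j == j_rand) else pop[i][j]
             for j in range(d)]
        # Selection: keep the trial vector only if it improves fitness.
        new_pop.append(u if f(u) < f(pop[i]) else pop[i])
    return new_pop

sphere = lambda x: sum(v * v for v in x)
rng = random.Random(42)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(50):
    pop = de_generation(pop, sphere, rng=rng)
best = min(pop, key=sphere)
```

On the 3-dimensional sphere function, a few dozen generations are typically enough for the population to contract around the origin.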
(2)
Subpopulation Optimization Algorithm: IPSO Algorithm
The IPSO algorithm [25] offers an advantage by randomly selecting one of three strategies—stable update, conservative update, and aggressive update—with equal probabilities during velocity updates. These strategies are defined by the following formulas, respectively:
$$v_{id}^{k+1} = \omega(k)\, v_{id}^k + c_1 u_1 \left(p_{id} - x_{id}^k\right) + c_2 u_2 \left(p_{gd} - x_{id}^k\right)$$
$$v_{id}^{k+1} = \omega(k)\, v_{id}^k + c_1 u_1 \left(p_{id} - x_{id}^k\right)$$
$$v_{id}^{k+1} = \omega(k)\, v_{id}^k + c_2 u_2 \left(p_{gd} - x_{id}^k\right)$$
where $v_{id}^k$ represents the velocity of the $i$-th particle in the $d$-th dimension at iteration $k$. The term $\omega(k)$ is the inertia weight that controls the influence of the previous velocity, $c_1$ and $c_2$ are the cognitive and social coefficients, respectively, and $u_1$ and $u_2$ are uniformly distributed random numbers in the range $[0, 1]$. The personal best position of the $i$-th particle is denoted by $p_{id}$, while $p_{gd}$ represents the global best position found by the swarm in the $d$-th dimension.
During the iterative process, the update method for particle position coordinates is as follows:
$$x_{id}^{k+1} = x_{id}^k + v_{id}^{k+1}$$
where $x_{id}^k$ denotes the position of the $i$-th particle in the $d$-th dimension at iteration $k$, and $v_{id}^{k+1}$ is the updated velocity computed from one of the three strategies. The IPSO algorithm incorporates these three distinct velocity update strategies to balance exploration and exploitation during the optimization process.
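The three-strategy velocity update can be sketched as follows; this is an illustrative rendering of Equations (8)-(11), with names and coefficient values of our own choosing:

```python
import random

def ipso_velocity(v, x, p_i, p_g, w, c1=2.0, c2=2.0, rng=None):
    """IPSO velocity update: pick one of three strategies with equal
    probability: stable (both terms), conservative (cognitive term only),
    or aggressive (social term only)."""
    rng = rng or random.Random()
    u1, u2 = rng.random(), rng.random()
    strategy = rng.randrange(3)   # equal probability 1/3 each
    new_v = []
    for d in range(len(v)):
        cog = c1 * u1 * (p_i[d] - x[d])   # pull toward personal best
        soc = c2 * u2 * (p_g[d] - x[d])   # pull toward global best
        if strategy == 0:      # stable update
            new_v.append(w * v[d] + cog + soc)
        elif strategy == 1:    # conservative update
            new_v.append(w * v[d] + cog)
        else:                  # aggressive update
            new_v.append(w * v[d] + soc)
    return new_v

rng = random.Random(7)
v = ipso_velocity([0.1, -0.2], [1.0, 1.0], [0.5, 0.8], [0.0, 0.0],
                  w=0.7, rng=rng)
x_next = [xi + vi for xi, vi in zip([1.0, 1.0], v)]   # position update
```

Randomizing the strategy per update is what differentiates this sketch from classical PSO, which always applies the stable form.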
(3)
Subpopulation Optimization Algorithm: QPSO algorithm
In classical PSO, the velocity of particles is inherently limited, preventing a comprehensive exploration of the feasible space; consequently, classical PSO cannot guarantee finding the global optimum. In contrast, in a quantum space, particles exhibit a property known as quantum tunneling, which allows them to search the entire feasible domain more effectively. Leveraging this property, Sun et al. proposed the QPSO algorithm [26], which extends the classical PSO framework by introducing quantum mechanics principles. By allowing particles to update their positions based on quantum probability distributions, QPSO ensures a more thorough exploration of the search space and improves global search capability. In the QPSO algorithm, the iterative formula for the position coordinates of particles is as follows:
$$x_i^{k+1} = \frac{r_1 p_{best} + r_2 g_{best}}{r_1 + r_2} \pm \alpha \left| m_{best} - x_i^k \right| \ln\frac{1}{u}$$
where $p_{best}$ and $g_{best}$ represent the best position experienced by an individual particle and the best position experienced by the particle swarm, respectively. $r_1$ and $r_2$ are random numbers generated on the interval $[0, 1]$; $\alpha$ denotes the contraction-expansion factor, which linearly decreases from 1 to 0.5 during the iteration process; $u$ is a random number on the interval $[0, 1]$; and $m_{best}$ represents the average of the best positions of the particle swarm:
$$m_{best} = \frac{1}{M} \sum_{i=1}^{M} p_{best,i}$$
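A minimal sketch of the QPSO position update (Equations (12) and (13)) follows; to keep $\ln(1/u)$ well defined we draw $u$ from $(0, 1]$, and all function names are our own:

```python
import math
import random

def qpso_position(x, pbest_i, gbest, mbest, alpha, rng):
    """QPSO position update: move toward a random attractor between pbest
    and gbest, with a quantum-style jump scaled by |mbest - x|."""
    new_x = []
    for d in range(len(x)):
        r1, r2 = rng.random(), rng.random()
        attractor = (r1 * pbest_i[d] + r2 * gbest[d]) / (r1 + r2)
        u = 1.0 - rng.random()            # u in (0, 1], so ln(1/u) >= 0
        jump = alpha * abs(mbest[d] - x[d]) * math.log(1.0 / u)
        # The +/- sign in Eq. (12) is chosen at random.
        new_x.append(attractor + jump if rng.random() < 0.5
                     else attractor - jump)
    return new_x

def mean_best(pbests):
    """Eq. (13): average of the personal best positions of the swarm."""
    m = len(pbests)
    return [sum(p[d] for p in pbests) / m for d in range(len(pbests[0]))]

rng = random.Random(3)
pbests = [[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]]
mb = mean_best(pbests)
x_new = qpso_position([0.5, 0.5], pbests[0], [3.0, 0.0], mb,
                      alpha=0.75, rng=rng)
```

The heavy-tailed $\ln(1/u)$ factor is what lets particles occasionally make long jumps, mimicking the tunneling behavior described above.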

2.4. Inter-Population Learning Strategies

Drawing inspiration from the proactive learning observed in human societies, this paper incorporates an inter-population learning strategy into the algorithm to enhance the probability of escaping local optima and to improve overall stability. After completing a cycle of iterations individually, the three subpopulations engage in interactive learning operations. When we refer to "Population $X_i$ learning from Population $X_j$", it means that particles in Population $X_i$ learn from the best particle in Population $X_j$. Denoting the best particle in Population $X_j$ as $X_j^{best}$, the position coordinates of particles in Population $X_i$ after this learning process can be represented as follows:
$$X_i^n = c_1\, \text{rand} \left(1 - \frac{10\log(f_2)}{10\log(f_1) + 10\log(f_2)}\right) X_i^n + c_2\, \text{rand}\, \frac{10\log(f_2)}{10\log(f_1) + 10\log(f_2)} \left(X_j^{best} - X_i^n\right)$$
$$f_1 = f\left(X_j^{best}\right) - \Delta f + \varepsilon$$
$$f_2 = f\left(X_i^n\right) - \Delta f + \varepsilon$$
$$\Delta f = \min\left\{ f\left(X_j^{best}\right), f\left(X_i^1\right), f\left(X_i^2\right), \ldots, f\left(X_i^{N_i}\right) \right\}$$
Equations (14) to (17) transform fitness values into probabilities within the range [0, 1], ensuring that particles learn from others probabilistically, where $\varepsilon$ is a real number slightly greater than 1, chosen to keep the logarithms meaningful; for computational purposes, $\varepsilon$ is set to 1.1. $c_1$ and $c_2$ represent the inertia weight and the learning weight, respectively. When $c_1 > c_2$, the algorithm prioritizes retaining its own information; conversely, when $c_1 < c_2$, it prioritizes learning. In this paper, we choose $c_1 = c_2 = 0.5$ to balance these two tendencies. In practical applications, these parameters can be adjusted according to the specific characteristics of the problem.
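Assuming base-10 logarithms and the fitness shift described above, a single learning step can be sketched as follows; the numerical values are invented for illustration and this is not the authors' code:

```python
import math
import random

def learn_from_best(xi, fxi, xj_best, fxj_best, f_min,
                    c1=0.5, c2=0.5, eps=1.1, rng=None):
    """Inter-population learning step: the learning weight is derived from
    log-scaled, shifted fitness values, so worse particles (larger fitness
    in a minimization setting) learn more strongly from the other
    population's best particle."""
    rng = rng or random.Random()
    # Shift fitnesses so both logarithms are positive (eps slightly above 1).
    f1 = fxj_best - f_min + eps
    f2 = fxi - f_min + eps
    w = 10 * math.log10(f2) / (10 * math.log10(f1) + 10 * math.log10(f2))
    # Blend the particle's own position with a pull toward the best particle.
    return [c1 * rng.random() * (1 - w) * xd
            + c2 * rng.random() * w * (bd - xd)
            for xd, bd in zip(xi, xj_best)]

rng = random.Random(11)
x_new = learn_from_best(xi=[2.0, 2.0], fxi=8.0,
                        xj_best=[0.1, 0.1], fxj_best=0.02,
                        f_min=0.02, rng=rng)
```

In this example the learner's fitness (8.0) is far worse than the best particle's (0.02), so the learning weight is close to 1 and the pull toward $X_j^{best}$ dominates.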
As shown in Figure 1, the SDIQ algorithm consists of two main phases: initialization and learning. During initialization, all $P$ particles are positioned in the $D$-dimensional search space using the SA algorithm, with a complexity of $O(kD^2)$ [27], where $k$ is the number of annealing iterations of the simulated annealing algorithm. The algorithm then executes $N_0$ outer-loop learning iterations. In each iteration, the $P$ particles are evenly divided into three parts, followed by $N_1$ inner-loop iterations. The time complexities of the QPSO and IPSO components are consistent with that of the basic PSO.
The complexity of PSO and DE can be calculated as follows [7,28]:
$$O_{PSO} = O\left(\tfrac{1}{3} \times P \times D \times N\right)$$
$$O_{DE} = O\left(\tfrac{1}{3} \times P \times D \times N\right)$$
Consequently, the computational complexity of the proposed SDIQ is as follows.
$$O_{SDIQ} = O\left(k D^2\right) + O\left(2 N_1 \times \tfrac{1}{3} P D N_0\right) + O\left(N_1 \times \tfrac{1}{3} P D N_0\right) = O\left(k D^2 + P D N_0 N_1\right)$$
It can be seen that the time complexity of SDIQ is at least $N_1$ times that of a single algorithm, where $N_1$ is the number of rounds of inter-population learning; SDIQ therefore requires more computing time.

3. Solving and Discussing Standard Test Functions

To assess the performance of the SDIQ algorithm, ten commonly used standard test functions are selected for minimization. Each test function is independently optimized using the SDIQ algorithm, and the process is repeated 200 times to evaluate the results statistically. These test functions are all unconstrained, continuous optimization problems in which the goal is to find values of the variables within specified ranges that minimize the objective function. The program is executed on a system running Windows 10 with a 1.61 GHz CPU, using the MATLAB 2022a programming environment. The parameters are set as follows: total number of particles $N = 60$; maximum iteration counts $iter_0 = 2000$, $iter_{01} = 1000$, $iter_{11} = 100$, $iter_2 = 2000$, $iter_{21} = 1000$; absolute error tolerance $r_{123} = 1 \times 10^{-10}$. The specific forms of the ten standard test functions, their variable ranges, and theoretical minimum values are presented in Table 2.
In Table 3, the results obtained by the SDIQ algorithm and by algorithms from other studies on the ten standard test functions are compared. Ave and Std denote the mean and standard deviation of the results, respectively. Except for function $f_7$, where the performance of the SDIQ algorithm is unsatisfactory, the algorithm demonstrates good convergence and stability on the remaining nine standard test functions. The objective function of $f_7$ contains a random component, which makes the SDIQ algorithm prone to becoming trapped in local optima; this indicates that the algorithm is not well suited to optimization problems whose objective functions contain random terms.
The Friedman test ($F_f$) [30,31] is a non-parametric test used to compare the different algorithms across all functions. It ranks the SDIQ and competitor algorithms based on the achieved fitness using Equation (21) [32].
$$F_f = \frac{12 n}{k(k+1)} \left[ \sum_j R_j^2 - \frac{k(k+1)^2}{4} \right]$$
where $k$, $n$, and $R_j$ are the number of algorithms, the number of test cases, and the mean rank of the $j$-th algorithm, respectively. For each problem, the algorithms are ranked from 1 (best result) to $k$ (worst result), and the ranks are then averaged over all problems to obtain the final rank of each algorithm. Based on the Friedman test results tabulated in Table 4, the overall ranking demonstrates the SDIQ algorithm's superiority over other state-of-the-art algorithms in 30 dimensions.
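The Friedman statistic of Equation (21) can be computed as follows; this is a minimal sketch without tie handling, and the sample scores are invented:

```python
def friedman_statistic(scores):
    """Friedman test statistic. scores[p][a] is the fitness achieved by
    algorithm a on problem p (lower is better)."""
    n = len(scores)           # number of test cases
    k = len(scores[0])        # number of algorithms
    # Rank algorithms 1..k on each problem (1 = best), average per algorithm.
    mean_ranks = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda a: row[a])
        for rank, a in enumerate(order, start=1):
            mean_ranks[a] += rank / n
    ff = (12 * n / (k * (k + 1))) * (sum(r * r for r in mean_ranks)
                                     - k * (k + 1) ** 2 / 4)
    return ff, mean_ranks

# Three algorithms on four problems; algorithm 0 always achieves the best fitness.
scores = [[0.1, 0.5, 0.9],
          [0.2, 0.8, 0.4],
          [0.0, 0.3, 0.6],
          [0.1, 0.2, 0.7]]
ff, ranks = friedman_statistic(scores)   # ranks[0] == 1.0, ff == 6.5
```

An algorithm that wins on every problem receives the minimum possible mean rank of 1, as SDIQ does in Table 4.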
Based on a comprehensive analysis of the calculation results in Table 3 and Table 4, it can be observed that, compared to classical algorithms such as GWO, DE, and PSO, SDIQ obtains the optimal solution in most cases. It achieves the smallest standard deviation and ranks first in the mean ranks of the Friedman test, demonstrating the best solving capability. It should be noted that in Table 3 the optimization result of SDIQ for test function $f_7$ is relatively poor. This is due to the inherent random disturbances in $f_7$, which SDIQ does not yet account for; this limitation of SDIQ should be acknowledged.
To further observe the iterative solving process of the SDIQ algorithm, $f_8$ is used as a case study. SDIQ, GWO, DE, and PSO are each run 300 times with a maximum iteration limit of 500. Figure 2 illustrates the convergence curves of the algorithms; it is evident that SDIQ exhibits the fastest convergence speed and the best global convergence performance. Table 5 summarizes the running times of each algorithm, showing that SDIQ, owing to its multiple populations and mutual learning strategies, has the longest running time. This is a limitation of SDIQ; however, in scenarios where obtaining the optimal solution matters more than running time, SDIQ remains suitable.

4. Solution of Classic Engineering Optimization Problems

To further validate the performance of the SDIQ algorithm, three classical engineering optimization problems are selected from industrial design for analysis. These problems are compared with the latest literature results. These cases have been frequently used to assess optimization algorithms, including recent evaluations with the arithmetic optimization algorithm (AOA) [33], equilibrium optimizer (EO) [7], dwarf mongoose optimization algorithm (DMO) [8], starling murmuration optimizer (SMO) [34], reptile search algorithm (RSA) [6], and gaze cues learning-based grey wolf optimizer (GGWO) [35]. All three problems involve minimizing objective functions subject to both equality and inequality constraints. The constraints are handled using the penalty function method [36], followed by the direct application of the SDIQ algorithm for a solution.

4.1. Minimum Cost Welded Beam Design

The objective of this problem is to minimize the manufacturing cost of a welded beam. The constraints include shear stress, bending stress in the beam, buckling load, end deflection of the beam, and boundary constraints. Four optimization variables are considered: weld thickness h, length of the attached bar l, bar height t, and bar thickness b. For a detailed description of the problem, refer to the literature [37]. The specific mathematical model is as follows:
$$
\begin{aligned}
&\mathbf{x} = [x_1, x_2, x_3, x_4] = [h, l, t, b]\\
&\min f(\mathbf{x}) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)\\
&g_1(\mathbf{x}) = \tau(\mathbf{x}) - \tau_{\max} \le 0\\
&g_2(\mathbf{x}) = \sigma(\mathbf{x}) - \sigma_{\max} \le 0\\
&g_3(\mathbf{x}) = x_1 - x_4 \le 0\\
&g_4(\mathbf{x}) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0\\
&g_5(\mathbf{x}) = 0.125 - x_1 \le 0\\
&g_6(\mathbf{x}) = \delta(\mathbf{x}) - \delta_{\max} \le 0\\
&g_7(\mathbf{x}) = P - P_c(\mathbf{x}) \le 0\\
&0.1 \le x_1 \le 2,\quad 0.1 \le x_2 \le 10,\quad 0.1 \le x_3 \le 10,\quad 0.1 \le x_4 \le 2
\end{aligned}
$$
where:
$$
\begin{aligned}
&\delta_{\max} = 0.25,\quad \sigma_{\max} = 3\times 10^4,\quad \tau_{\max} = 13600,\\
&G = 1.2\times 10^7,\quad P = 6\times 10^3,\quad L = 14,\quad E = 3\times 10^7,\\
&\tau' = \frac{P}{\sqrt{2}\, x_1 x_2},\quad M = P\!\left(L + \frac{x_2}{2}\right),\quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^{\!2}},\\
&J = 2\left\{\sqrt{2}\, x_1 x_2 \left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^{\!2}\right]\right\},\quad \tau'' = \frac{MR}{J},\\
&\sigma(\mathbf{x}) = \frac{6PL}{x_4 x_3^2},\quad \delta(\mathbf{x}) = \frac{4PL^3}{E x_3^3 x_4},\\
&P_c(\mathbf{x}) = \frac{4.013 E \sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),\\
&\tau(\mathbf{x}) = \sqrt{\tau'^2 + 2\tau'\tau''\frac{x_2}{2R} + \tau''^2}
\end{aligned}
$$
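For concreteness, the model above can be transcribed directly into code and evaluated at one of the Table 6 designs. This is our own sketch (function and variable names are illustrative, not the paper's implementation); evaluating the GGWO design reproduces its reported cost of roughly 1.72:

```python
from math import sqrt

# Welded-beam model constants from the parameter list above.
P, L, E, G = 6e3, 14.0, 3e7, 1.2e7
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 3e4, 0.25

def welded_beam(x):
    """Return (cost, [g1..g7]) for design x = [h, l, t, b]."""
    h, l, t, b = x
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
    tau_p = P / (sqrt(2.0) * h * l)                      # primary shear
    M = P * (L + l / 2.0)
    R = sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J                                   # torsional shear
    tau = sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (b * t**2)
    delta = 4.0 * P * L**3 / (E * t**3 * b)
    p_c = (4.013 * E * sqrt(t**2 * b**6 / 36.0) / L**2
           * (1.0 - t / (2.0 * L) * sqrt(E / (4.0 * G))))
    g = [tau - TAU_MAX, sigma - SIGMA_MAX, h - b,
         0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,
         0.125 - h, delta - DELTA_MAX, P - p_c]
    return cost, g

# GGWO design from Table 6; cost comes out near its reported 1.7249.
cost, g = welded_beam([0.20570, 3.47050, 9.03660, 0.20570])
print(round(cost, 4))
```

Because the tabulated variables are rounded to five decimals, the recomputed constraint values agree with Table 6 only approximately.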
This problem has also been approached using different methods across multiple studies [37,38,39,40,41]. Table 6 presents a comparative analysis of the results obtained by the SDIQ algorithm and various other algorithms. As evidenced by Table 6, the welded beam design problem is characterized by the presence of multiple closely spaced local minima. This inherent complexity poses a significant challenge for many optimization techniques, making it difficult to avoid entrapment in suboptimal solutions. In contrast, the SDIQ algorithm demonstrates superior performance by successfully identifying the known minimum cost for the welded beam design problem. This highlights the robustness and efficacy of the SDIQ algorithm, especially when compared to several state-of-the-art algorithms that have been recently developed. The ability of the SDIQ algorithm to consistently locate the global optimum in the presence of numerous local minima underscores its potential as a powerful tool for solving complex engineering optimization problems.

4.2. Minimum Mass Design of a Tension/Compression Spring

The objective of this problem is to minimize the mass of the spring, subject to constraints on shear stress, surge frequency, and minimum deflection. Three optimization variables are considered: wire diameter d, mean coil diameter D, and the number of active coils N. For a detailed description of the problem, refer to the literature [37]. The optimization model is outlined as follows.
$$
\begin{aligned}
&\mathbf{x} = [x_1, x_2, x_3] = [d, D, N]\\
&\min f(\mathbf{x}) = (x_3 + 2)\, x_2 x_1^2\\
&g_1(\mathbf{x}) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0\\
&g_2(\mathbf{x}) = \frac{4x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0\\
&g_3(\mathbf{x}) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0\\
&g_4(\mathbf{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0\\
&0.05 \le x_1 \le 2.00,\quad 0.25 \le x_2 \le 1.30,\quad 2.00 \le x_3 \le 15.0
\end{aligned}
$$
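As a quick check, a direct transcription of this model (again our own illustrative sketch) can evaluate the SDIQ design reported in Table 7. Because the tabulated variables are rounded to five decimals, the active constraints g1 and g2 only vanish to about 1e-3 here:

```python
# Tension/compression spring model, transcribed from the equations above.

def spring(x):
    """Return (mass, [g1..g4]) for design x = [d, D, N]."""
    d, D, N = x
    f = (N + 2.0) * D * d**2
    g = [1.0 - D**3 * N / (71785.0 * d**4),
         (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
         + 1.0 / (5108.0 * d**2) - 1.0,
         1.0 - 140.45 * d / (D**2 * N),
         (d + D) / 1.5 - 1.0]
    return f, g

# SDIQ design from Table 7; mass comes out near the reported 1.26653e-2.
f, g = spring([0.05167, 0.35615, 11.32242])
print(round(f, 5), [round(v, 5) for v in g])
```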
This problem has been tackled using various methods in numerous studies [42,43,44]. Table 7 compares the results obtained by the SDIQ algorithm with those from recent selected studies. It can be observed that, apart from the last two algorithms (RSA and AOA), whose results violate constraint g2(x), the SDIQ algorithm, along with SMO, GGWO, and EO, achieves the known minimum mass of the spring while satisfying the constraints. Furthermore, this minimum mass corresponds to multiple feasible solutions.

4.3. Design of Pressure Vessel

The objective of this problem is to minimize the overall cost of manufacturing the vessel, including material, forming, and welding costs. Four optimization variables are considered: shell thickness Ts, head thickness Th, inner radius R, and cylinder length L. For a detailed description of the problem, refer to the literature [37]. The optimization model is as follows:
$$
\begin{aligned}
&\mathbf{x} = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]\\
&\min f(\mathbf{x}) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3\\
&g_1(\mathbf{x}) = -x_1 + 0.0193 x_3 \le 0\\
&g_2(\mathbf{x}) = -x_2 + 0.00954 x_3 \le 0\\
&g_3(\mathbf{x}) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0\\
&g_4(\mathbf{x}) = x_4 - 240 \le 0\\
&0 \le x_1 \le 99,\quad 0 \le x_2 \le 99,\quad 10 \le x_3 \le 200,\quad 10 \le x_4 \le 200
\end{aligned}
$$
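The model can again be transcribed directly (our own sketch, with illustrative names) and evaluated at the SDIQ design from Table 8, reproducing the reported cost of about 5885.33 up to rounding of the tabulated variables:

```python
from math import pi

# Pressure-vessel model, transcribed from the equations above.

def pressure_vessel(x):
    """Return (cost, [g1..g4]) for design x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    f = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
         + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)
    g = [-Ts + 0.0193 * R,
         -Th + 0.00954 * R,
         -pi * R**2 * L - 4.0 / 3.0 * pi * R**3 + 1296000.0,
         L - 240.0]
    return f, g

# SDIQ design from Table 8; cost comes out near the reported 5885.33279.
f, g = pressure_vessel([0.77817, 0.38465, 40.31962, 199.99999])
print(round(f, 2))
```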
This problem has also been extensively studied using various methods in numerous research papers [44,45,46,47]. Table 8 compares the results obtained by the SDIQ algorithm with those from recent selected research. The comparison reveals that, except for the last algorithm (DMO), whose result violates constraints g1(x)–g3(x), the SDIQ algorithm, along with SMO, GGWO, and EO, achieves the known minimum cost for the pressure vessel design while satisfying all constraints.
The present study closely integrates four intelligent algorithms (SA, DE, IPSO, and QPSO) based on their respective characteristics, leveraging the strengths of each to enhance the global optimization capability of the new algorithm. According to the comparative examples in the paper, it can be seen that the proposed algorithm can achieve the known optimal solutions for some complex optimization problems. The SDIQ algorithm has the following advantages: Firstly, it demonstrates good global convergence on multiple standard test functions and real-world engineering optimization problems. This is attributed to the initial use of the SA algorithm for optimization, effectively avoiding local optima. Secondly, by dividing the population into multiple sub-populations and using different optimization algorithms in rotation, the algorithm achieves interactive learning between multiple populations, significantly improving stability and convergence. Additionally, the SDIQ algorithm is structurally flexible, allowing for adjustments in population allocation and optimization strategies based on specific problems, enhancing its applicability across diverse scenarios. This research contributes to the development of hybrid optimization algorithms, providing new approaches for solving complex real-world optimization problems. For instance, the SDIQ algorithm can be applied in complex industrial design (such as structural optimization and material selection), financial optimization (such as asset allocation and risk management), and aerospace engineering (such as multi-dimensional and multi-constraint optimization) by utilizing cooperative work among multiple populations to improve the accuracy, reliability, and stability of optimization results.

5. Conclusions

The SDIQ algorithm is developed by integrating the SA, DE, QPSO, and IPSO algorithms within the framework of a population learning strategy. The proposed algorithm is applied to 10 standard benchmark functions and to three classical optimization problems from practical engineering. In the numerical experiments on benchmark functions, comparison with classical algorithms such as GWO, DE, and PSO shows that SDIQ has advantages in stability and global search capability: the mutual learning strategy and multiple-population iterations significantly enhance its ability to escape local optima and achieve global optimization. In the engineering cases, compared with some of the latest algorithms in the literature, SDIQ demonstrates good global optimization capability. For the spring design and pressure vessel design problems, it obtains the same optimal solutions as other advanced algorithms; notably, for the welded beam design problem, it achieves the known optimal solution. This demonstrates the effectiveness of integrating multiple intelligent optimization algorithms through mutual learning in practical optimization scenarios. However, despite the promising results, certain limitations of the algorithm must be acknowledged, which can be summarized as follows:
  • Computation time issue: Although the SDIQ algorithm enhances global optimization capabilities through multiple population iterations and mutual learning strategies, this approach also incurs longer computation times due to increased complexity. This can be a significant drawback when dealing with large-scale optimization problems or real-time applications.
  • Parameter setting issue: The SDIQ algorithm includes many parameters that control the algorithmic process. Achieving optimal performance requires fine-tuning these parameters, which can be time-consuming and may require domain-specific knowledge.
  • Scalability issue: While the algorithm performs well on the tested benchmark functions and specific engineering problems, extending the basic framework to integrate additional high-performing algorithms requires further validation.
In future research, by addressing these limitations, the SDIQ algorithm can be further adapted to a wider range of applications to ensure its effectiveness and efficiency in solving complex optimization problems.

Author Contributions

Conceptualization, X.M.; Methodology, W.D. and W.Q.; Formal analysis, W.D., X.M. and W.Q.; Writing—original draft, W.D.; Writing—review & editing, W.Q. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the financial support provided by the Fundamental Research Funds for the Central Universities (grant number: 3132024627).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Albashish, D.; Mustafa, H.M.; Khurma, R.A.; Hasan, B.; Bani-Ahmad, S.; Abdullah, A.; Arram, A. Enhanced meta-heuristic methods for industrial winding process modeling. Expert Syst. 2024, 41, e13438. [Google Scholar] [CrossRef]
  2. Bazrkar, M.J.; Hosseini, S. Predict stock prices using supervised learning algorithms and particle swarm optimization algorithm. Comput. Econ. 2023, 62, 165–186. [Google Scholar] [CrossRef]
  3. Jia, B.H.; Deng, W.Y.; Wang, Y.Q. Improvement of economic analysis model with three-indenture and three-echelon for civil aircraft repair level. Acta Aeronaut. Astronaut. Sin. 2020, 41, 323–468. (In Chinese) [Google Scholar]
  4. Deng, W.; Qiao, W.; Ma, X.; Han, B. A novel unconstrained methodology for economic analysis of the level of repair with a case study of a multi-indenture and multi-echelon repair network. Comput. Ind. Eng. 2024, 192, 110215. [Google Scholar] [CrossRef]
  5. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  6. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  7. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  8. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  9. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  10. Coello, C.A.C.; Montes, E.M. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 2002, 16, 193–203. [Google Scholar] [CrossRef]
  11. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  13. Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  14. Angelini, M.C.; Ricci-Tersenghi, F. Limits and performances of algorithms based on simulated annealing in solving sparse hard inference problems. Phys. Rev. X 2023, 13, 021011. [Google Scholar] [CrossRef]
  15. Dasha, P. A comparative review of approaches for the evolutionary search to prevent premature convergence in GA. Appl. Soft Comput. 2023, 25, 1047–1077. [Google Scholar]
  16. Ismkhan, H. Effective heuristics for ant colony optimization to handle large-scale problems. Swarm Evol. Comput. 2017, 32, 140–149. [Google Scholar] [CrossRef]
  17. Ali, A.F.; Tawhid, M.A. A hybrid PSO and DE algorithm for solving engineering optimization problems. Appl. Math. Inf. Sci. 2016, 10, 431–449. [Google Scholar] [CrossRef]
  18. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  19. Yuan, G.; Yang, W. Study on optimization of economic dispatching of electric power system based on Hybrid Intelligent Algorithms (PSO and AFSA). Energy 2019, 183, 926–935. [Google Scholar] [CrossRef]
  20. Lu, C.; Teng, D.; Keshtegar, B.; Alkabaa, A.S.; Taylan, O.; Fei, C.W. Extremum hybrid intelligent-inspired models for accurate predicting mechanical performances of turbine blisk. Mech. Syst. Signal Process. 2023, 190, 110136. [Google Scholar] [CrossRef]
  21. Zhang, H.; Ke, J. An Intelligent scheduling system and hybrid optimization algorithm for ship locks of the Three Gorges Hub on the Yangtze River. Mech. Syst. Signal Process. 2024, 208, 110974. [Google Scholar] [CrossRef]
  22. Van Truong, T.; Nayyar, A. System performance and optimization in NOMA mobile edge computing surveillance network using GA and PSO. Comput. Netw. 2023, 223, 109575. [Google Scholar] [CrossRef]
  23. Sahoo, H.B.; Chandrasekhar Rao, D. Optimal Resource Allocation in Cloud Computing Using Novel ACO-DE Algorithm. In Proceedings of the International Conference on Artificial Intelligence on Textile and Apparel, Bangalore, India, 11–12 August 2023; Springer Nature: Singapore, 2024; pp. 443–455. [Google Scholar]
  24. Chen, W.; Panahi, M.; Pourghasemi, H.R. Performance evaluation of GIS-based new ensemble data mining techniques of adaptive neuro-fuzzy inference system (ANFIS) with genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO) for landslide spatial modelling. Catena 2017, 157, 310–324. [Google Scholar] [CrossRef]
  25. Zhou, D.X.; Wang, G.N.; Gao, L.Q.; Wu, J.H. Application of Improved PSO Algorithm to Reliability Problems. J. Northeast. Univ. (Nat. Sci.) 2010, 9, 20–23. (In Chinese) [Google Scholar]
  26. Sun, J.; Feng, B.; Xu, W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; IEEE Cat. No. 04TH8753. Volume 1, pp. 325–331. [Google Scholar]
  27. Blum, A.; Dan, C.; Seddighin, S. Learning complexity of simulated annealing. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, Virtual, 13–15 April 2021; pp. 1540–1548. [Google Scholar]
  28. Zielinski, K.; Peters, D.; Laur, R. Run time analysis regarding stopping criteria for differential evolution and particle swarm optimization. In Proceedings of the 1st International Conference on Experiments/Process/System Modelling/Simulation/Optimization, Athens, Greece, 6–9 July 2005. [Google Scholar]
  29. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  30. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  31. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  32. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  33. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  34. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  35. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 61, 101636. [Google Scholar] [CrossRef]
  36. Sun, S.; Wang, T.; Yang, H.; Chu, F. Damage identification of wind turbine blades using an adaptive method for compressive beamforming based on the generalized minimax-concave penalty function. Renew. Energy 2022, 181, 59–70. [Google Scholar] [CrossRef]
  37. Coello, C.A.C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
  38. Deb, K. Optimal design of a welded beam via genetic algorithms. AIAA J. 1991, 29, 2013–2015. [Google Scholar] [CrossRef]
  39. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  40. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  41. Savsani, V. Implementation of modified artificial bee colony (ABC) optimization technique for minimum cost design of welded structures. Int. J. Simul. Multidiscip. Des. Optim. 2014, 5, A11. [Google Scholar] [CrossRef]
  42. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99. [Google Scholar] [CrossRef]
  43. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  44. Huang, F.Z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  45. Deb, K. GeneAS: A robust optimal design technique for mechanical component design. In Evolutionary Algorithms in Engineering Applications; Springer: Berlin/Heidelberg, Germany, 1997; pp. 497–514. [Google Scholar]
  46. Kaveh, A.; Talatahari, S. An improved ant colony optimization for constrained engineering design problems. Eng. Comput. 2010, 27, 155–182. [Google Scholar] [CrossRef]
  47. Kannan, B.K.; Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
Figure 1. SDIQ algorithm flowchart.
Figure 2. The fitness of various swarm optimal particles varies with iteration.
Table 1. List of abbreviations.

| Algorithm Name | Abbreviation |
|---|---|
| simulated annealing | SA |
| differential evolution | DE |
| quantum-behaved particle swarm optimization | QPSO |
| improved particle swarm optimization | IPSO |
| genetic algorithm | GA |
| ant colony optimization | ACO |
| particle swarm optimization | PSO |
| arithmetic optimization algorithm | AOA |
| equilibrium optimizer | EO |
| dwarf mongoose optimization algorithm | DMO |
| starling murmuration optimizer | SMO |
| reptile search algorithm | RSA |
| gaze cues learning-based grey wolf optimizer | GGWO |
Table 2. Standard test functions and their settings.

| Standard Test Function | Problem Dimension | Variable Value Range | Theoretical Minimum Value |
|---|---|---|---|
| $f_1(\mathbf{x})=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0 |
| $f_2(\mathbf{x})=\sum_{i=1}^{n} \lvert x_i\rvert + \prod_{i=1}^{n} \lvert x_i\rvert$ | 30 | [−10, 10] | 0 |
| $f_3(\mathbf{x})=\sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | 30 | [−100, 100] | 0 |
| $f_4(\mathbf{x})=\max_i \{\lvert x_i\rvert,\ 1 \le i \le n\}$ | 30 | [−100, 100] | 0 |
| $f_5(\mathbf{x})=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0 |
| $f_6(\mathbf{x})=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 30 | [−100, 100] | 0 |
| $f_7(\mathbf{x})=\sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0 |
| $f_8(\mathbf{x})=\sum_{i=1}^{n} -x_i \sin\!\left(\sqrt{\lvert x_i\rvert}\right)$ | 30 | [−500, 500] | −12,569.5 |
| $f_9(\mathbf{x})=\sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | 30 | [−5.12, 5.12] | 0 |
| $f_{10}(\mathbf{x})=\frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0 |
Table 3. Comparison of optimization results for different algorithms.

SDIQ:

| Function | Best | Worst | Ave | STD |
|---|---|---|---|---|
| f1 | 0.00000 | 3.84802 × 10^−40 | 6.41584 × 10^−43 | 1.57095 × 10^−41 |
| f2 | 0.00000 | 7.43841 × 10^−24 | 1.53805 × 10^−26 | 3.04406 × 10^−25 |
| f3 | 0.00000 | 5.56699 × 10^−43 | 4.85527 × 10^−45 | 3.57391 × 10^−44 |
| f4 | 0.00000 | 2.67335 × 10^−20 | 3.51276 × 10^−21 | 2.85925 × 10^−21 |
| f5 | 1.34298 × 10^1 | 1.84234 × 10^1 | 1.03031 × 10^1 | 3.11039 × 10^−1 |
| f6 | 0.00000 | 6.54408 × 10^−21 | 1.09097 × 10^−23 | 2.67161 × 10^−22 |
| f7 | 4.95241 × 10^−1 | 8.91035 × 10^−1 | 5.63591 × 10^−1 | 2.68807 × 10^−1 |
| f8 | −1.12073 × 10^4 | −8.08828 × 10^3 | −9.72596 × 10^3 | 5.41181 × 10^2 |
| f9 | 0.00000 | 1.19421 × 10^2 | 4.55210 × 10^−1 | 1.30195 × 10^−1 |
| f10 | 0.00000 | 1.32210 × 10^−3 | 2.16567 × 10^−5 | 8.55356 × 10^−5 |

GWO [29]:

| Function | Best | Worst | Ave | STD |
|---|---|---|---|---|
| f1 | 1.49549 × 10^−40 | 6.55408 × 10^−37 | 1.66953 × 10^−38 | 3.63264 × 10^−38 |
| f2 | 7.97178 × 10^−24 | 5.89467 × 10^−22 | 9.84876 × 10^−23 | 7.90054 × 10^−23 |
| f3 | 2.67978 × 10^−40 | 2.39070 × 10^−35 | 2.96431 × 10^−37 | 1.08494 × 10^−36 |
| f4 | 2.09586 × 10^−11 | 1.32510 × 10^−8 | 9.14614 × 10^−10 | 1.15566 × 10^−9 |
| f5 | 2.44374 × 10^1 | 2.87278 × 10^1 | 2.64815 × 10^1 | 6.91464 × 10^−1 |
| f6 | 1.46335 × 10^−5 | 1.01275 × 10^0 | 2.73532 × 10^−1 | 2.36999 × 10^−1 |
| f7 | 7.93151 × 10^−5 | 2.76708 × 10^−3 | 7.26837 × 10^−4 | 3.97228 × 10^−4 |
| f8 | −8.21756 × 10^3 | −3.27530 × 10^3 | −6.35643 × 10^3 | 7.41731 × 10^2 |
| f9 | 0.00000 | 1.46701 × 10^1 | 1.09811 × 10^0 | 2.47260 × 10^0 |
| f10 | 7.69716 × 10^−6 | 4.20950 × 10^−1 | 1.09883 × 10^−1 | 6.83938 × 10^−2 |

DE [13]:

| Function | Best | Worst | Ave | STD |
|---|---|---|---|---|
| f1 | 2.11199 × 10^−22 | 3.55143 × 10^−15 | 5.99460 × 10^−18 | 1.44984 × 10^−16 |
| f2 | 3.21178 × 10^−12 | 1.00000 × 10^1 | 7.94848 × 10^−2 | 8.71385 × 10^−1 |
| f3 | 1.58109 × 10^−21 | 1.00000 × 10^4 | 3.33333 × 10^1 | 5.76868 × 10^2 |
| f4 | 2.31328 × 10^−1 | 1.27650 × 10^1 | 3.02171 × 10^0 | 1.89506 × 10^0 |
| f5 | 9.94531 × 10^−2 | 9.00236 × 10^4 | 2.09016 × 10^2 | 3.67529 × 10^3 |
| f6 | 8.52562 × 10^−23 | 1.06447 × 10^−16 | 3.89201 × 10^−19 | 5.70137 × 10^−18 |
| f7 | 5.80018 × 10^−3 | 5.15659 × 10^−2 | 1.85132 × 10^−2 | 7.73621 × 10^−3 |
| f8 | −1.12819 × 10^4 | −7.81519 × 10^3 | −9.81169 × 10^3 | 5.56089 × 10^2 |
| f9 | 1.39294 × 10^1 | 2.29943 × 10^2 | 5.07434 × 10^1 | 1.84017 × 10^1 |
| f10 | 0.00000 | 2.12503 × 10^−1 | 1.16214 × 10^−2 | 1.66676 × 10^−2 |

PSO [12]:

| Function | Best | Worst | Ave | STD |
|---|---|---|---|---|
| f1 | 8.99071 × 10^−9 | 1.92746 × 10^−5 | 6.05070 × 10^−7 | 1.18895 × 10^−6 |
| f2 | 6.14204 × 10^−5 | 2.32962 × 10^−2 | 1.49228 × 10^−3 | 2.41518 × 10^−3 |
| f3 | 1.23907 × 10^−7 | 1.56786 × 10^−4 | 7.94466 × 10^−6 | 1.20159 × 10^−5 |
| f4 | 2.60656 × 10^−1 | 1.32110 × 10^0 | 6.36342 × 10^−1 | 1.59502 × 10^−1 |
| f5 | 4.93832 × 10^0 | 6.36935 × 10^2 | 6.86167 × 10^1 | 6.73738 × 10^1 |
| f6 | 4.26701 × 10^−9 | 9.18553 × 10^−6 | 6.41544 × 10^−7 | 9.04545 × 10^−7 |
| f7 | 1.95837 × 10^−2 | 2.35668 × 10^−1 | 8.21903 × 10^−2 | 2.97511 × 10^−2 |
| f8 | −9.32654 × 10^3 | −2.75943 × 10^3 | −6.47824 × 10^3 | 9.85509 × 10^2 |
| f9 | 1.99675 × 10^1 | 9.49988 × 10^1 | 4.37058 × 10^1 | 1.08193 × 10^1 |
| f10 | 8.23491 × 10^−10 | 4.91932 × 10^−2 | 9.27812 × 10^−3 | 9.65047 × 10^−3 |
Table 4. Friedman test results (mean ranks).

| Function | SDIQ | GWO | DE | PSO |
|---|---|---|---|---|
| f1 | 1.00167 | 1.99833 | 3.00000 | 4.00000 |
| f2 | 1.00000 | 2.00000 | 3.01167 | 3.98833 |
| f3 | 1.00167 | 1.99833 | 3.01333 | 3.98667 |
| f4 | 1.00000 | 2.00000 | 3.96500 | 3.03500 |
| f5 | 1.00000 | 2.64333 | 2.95333 | 3.40333 |
| f6 | 1.00000 | 4.00000 | 2.00000 | 3.00000 |
| f7 | 3.87833 | 1.00167 | 2.03167 | 3.08833 |
| f8 | 1.00167 | 2.57000 | 2.42833 | 4.00000 |
| f9 | 1.20500 | 1.79500 | 3.62167 | 3.37833 |
| f10 | 1.35833 | 3.92500 | 2.24500 | 2.47167 |
Table 5. The running times of each algorithm.

| Algorithm | SDIQ | GWO | DE | PSO |
|---|---|---|---|---|
| Ave (s) | 18.33173 | 0.41582 | 0.58463 | 0.14015 |
| STD | 0.37785 | 0.01899 | 0.02346 | 0.00629 |
Table 6. Optimization results of different algorithms for welded beam design.

| Algorithm | h | l | t | b | Cost |
|---|---|---|---|---|---|
| SDIQ | 0.16800 | 4.06717 | 9.99978 | 0.16801 | 1.58713 |
| RSA | 0.14468 | 3.51400 | 8.92510 | 0.21162 | 1.67260 |
| DMO | 0.20557 | 3.25677 | 9.03618 | 0.20577 | 1.69530 |
| AOA | 0.19448 | 2.57092 | 10.00000 | 0.20182 | 1.71640 |
| GGWO | 0.20570 | 3.47050 | 9.03660 | 0.20570 | 1.72490 |
| EO | 0.20570 | 3.47050 | 9.03664 | 0.20570 | 1.72490 |
| SMO | 0.20573 | 3.47049 | 9.03662 | 0.20573 | 1.72485 |
Table 7. Optimization results of different algorithms for spring design.

| Algorithm | d | D | N | g1(x) | g2(x) | g3(x) | g4(x) | f(x) |
|---|---|---|---|---|---|---|---|---|
| SDIQ | 0.05167 | 0.35615 | 11.32242 | −1.50132 × 10^−6 | −4.37484 × 10^−6 | −4.05263 | −0.72812 | 1.26653 × 10^−2 |
| SMO | 0.05168 | 0.35640 | 11.30756 | 6.71223 × 10^−7 | −5.22062 × 10^−7 | −4.05316 | −0.72795 | 1.26653 × 10^−2 |
| GGWO | 0.05170 | 0.35670 | 11.28900 | 9.92097 × 10^−4 | −6.33258 × 10^−4 | −4.05534 | −0.72773 | 1.26701 × 10^−2 |
| EO | 0.05162 | 0.35505 | 11.38797 | −6.48728 × 10^−5 | −4.62668 × 10^−6 | −4.05014 | −0.72888 | 1.26663 × 10^−2 |
| RSA | 0.05781 | 0.58478 | 4.01670 | −1.56964 × 10^−3 | 0.10112 | −4.91153 | −0.57160 | 1.17600 × 10^−2 |
| AOA | 0.05000 | 0.34980 | 11.86370 | −0.13190 | 0.08054 | −3.83737 | −0.73346 | 1.21244 × 10^−2 |
Table 8. Optimization results of different algorithms for pressure vessel.

| Algorithm | Ts | Th | R | L | g1(x) | g2(x) | g3(x) | g4(x) | f(x) |
|---|---|---|---|---|---|---|---|---|---|
| SDIQ | 0.77817 | 0.38465 | 40.31962 | 199.99999 | −2.47000 × 10^−10 | −5.70435 × 10^−10 | −3.27370 × 10^−4 | −40.00000 | 5885.33279 |
| EO | 0.77817 | 0.38465 | 40.31962 | 200.00000 | −2.47000 × 10^−10 | −5.70435 × 10^−10 | −3.27370 × 10^−4 | −40.00000 | 5885.33279 |
| GGWO | 0.77817 | 0.38465 | 40.31962 | 200.00000 | −2.47000 × 10^−10 | −5.70435 × 10^−10 | −3.27370 × 10^−4 | −40.00000 | 5885.33279 |
| SMO | 0.77817 | 0.38465 | 40.31962 | 199.99993 | −7.60000 × 10^−8 | −8.25200 × 10^−7 | −2.66790 × 10^−1 | −40.00007 | 5885.33279 |
| RSA | 0.84007 | 0.41896 | 43.38117 | 161.55560 | −2.80000 × 10^−3 | −5.10000 × 10^−3 | −1.10000 × 10^3 | −78.44440 | 6034.75910 |
| AOA | 0.83037 | 0.41620 | 42.75127 | 169.34540 | −5.30000 × 10^−3 | −8.40000 × 10^−3 | −3.60000 × 10^3 | −70.65460 | 6048.78440 |
| DMO | 1.09367 | 1.04000 × 10^−17 | 65.22523 | 10.00000 | 1.65300 × 10^−1 | 6.22200 × 10^−1 | 1.50400 × 10^−1 | −230.00000 | 2029.38470 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
