Article

Application of the Improved Cuckoo Algorithm in Differential Equations

School of Mathematics, Harbin Institute of Technology, Harbin 150001, China
Mathematics 2024, 12(2), 345; https://doi.org/10.3390/math12020345
Submission received: 18 December 2023 / Revised: 9 January 2024 / Accepted: 19 January 2024 / Published: 21 January 2024
(This article belongs to the Special Issue Smart Computing, Optimization and Operations Research)

Abstract

To address the drawbacks of slow convergence and the lack of information exchange among individuals in the cuckoo search (CS) algorithm, this study proposes an improved cuckoo search algorithm based on a sharing mechanism (ICSABOSM). The enhanced algorithm reinforces information sharing among individuals through the sharing mechanism, and new search strategies are introduced in both the global and local searches of the CS. Results from numerical experiments on four standard test functions indicate that the improved algorithm outperforms the original CS in terms of search capability and performance. Building upon the improved algorithm, this paper introduces a numerical solution approach for differential equations that couples function approximation with intelligent algorithms: an approximate solution is constructed as a Fourier series whose coefficients are chosen so that the given differential equation and its boundary conditions are satisfied with minimal error. The problem of solving the differential equation is thereby transformed into an optimization problem with the coefficients of the approximate function as variables, and the improved cuckoo search algorithm is employed to solve this optimization problem. The specific steps of applying the improved algorithm to solve differential equations are illustrated through examples. The research outcomes broaden the application scope of the cuckoo optimization algorithm and provide a new perspective for solving differential equations.

1. Introduction

Since the seventeenth century, the widespread application of calculus has resolved numerous practical problems in fields such as physical chemistry, engineering, and population statistics, fostering the emergence of various new disciplines, including differential equations. Many problems in these fields can be described and understood through the use of differential equations, making the study of their solutions essential. However, most differential equations encountered in real-life scenarios and scientific research are complex, and determining their analytical solutions can be challenging. Although there exists a series of mature numerical methods for solving these equations, the effectiveness of the solutions often depends on the careful selection of the initial values. Achieving optimal performance and convergence characteristics requires precise choices for the initial solutions. Selecting an excellent initial point for solving differential equations is not a straightforward task. To address these challenges, researchers have developed various optimization algorithms to seek the best solutions.
In the past few decades, nature-inspired metaheuristic algorithms and their improved versions have gained popularity due to their simplicity and flexibility. Examples include the genetic algorithm (GA) [1,2], differential evolution (DE) [3], particle swarm optimization (PSO) [4,5], grey wolf optimization (GWO) [6], ant colony optimization (ACO) [7], wind-driven optimization (WDO) [8], CS [9,10,11], the memetic algorithm (MA) [12], the artificial bee colony algorithm (ABC), and others. With the evolution of intelligent algorithms, those with global optimization performance have been widely applied to solve differential equations. Intelligent algorithms overcome the challenge of choosing rational initial guesses, are suitable for nonsmooth, non-differentiable, or noisy objective functions, and possess certain global search capabilities. Grosan and Biazar [13] utilized evolutionary computation techniques, converting systems of equations into multi-objective optimization problems. Jaberipour et al. [14] employed an improved particle swarm optimization (PPSO) to solve nonlinear differential equations. Oliveira and Petraglia [15] proposed the fuzzy adaptive simulated annealing algorithm (ASA) for finding solutions to arbitrary nonlinear systems of equations. Abdollahi et al. [16] used the imperialist competitive algorithm to solve systems of differential equations and demonstrated its effectiveness on some well-known test problems. Raja et al. [17] introduced another memetic approach (GA-SQP), using an improved GA as a global search tool and employing sequential quadratic programming (SQP) for efficient local search, yielding satisfactory results. Zhang proposed a niching cuckoo search algorithm (NCSA) based on the principle of fitness sharing to solve systems of nonlinear equations; by incorporating a niching strategy, the algorithm can handle highly nonlinear problems and outperforms many algorithms reported in the literature. Verma [18] presented a novel hybrid algorithm of PSO and DE for finding solutions to nonlinear differential equations.
As the optimization problems we encounter in practical applications become increasingly complex, and the requirements for achieving the goals of problem-solving become higher, it is crucial for us to drive algorithms in more efficient and effective directions. The no free lunch theorem [19] proves that there is no one-size-fits-all optimization algorithm capable of solving all mathematical optimization problems. This is due to the diverse characteristics and constraints associated with different problems. Given the distinct features and constraints of various problems, it is essential to choose an appropriate optimization algorithm to obtain the optimal solution or the best possible approximate solution. Hence, to achieve better results when solving different types of optimization problems, experts and scholars optimize existing algorithms or develop new ones.
In recent years, the cuckoo search (CS) algorithm has been extensively applied in numerical optimization [20,21] and multi-objective optimization [22,23], among other domains. The CS algorithm is widely employed in diverse scenarios to search for robust solutions with fast convergence. While the efficiency of the CS algorithm generally surpasses that of other algorithms, its performance in precisely searching for optimal solutions is suboptimal, exhibiting immature convergence characteristics near local minima. Striking a balance between exploration and exploitation has become a crucial aspect when utilizing the CS algorithm. In order to fully unleash the potential of the CS, researchers have conducted numerous attempts, proposing new techniques to further enhance its performance. Improvements to the CS algorithm primarily focus on three aspects, namely:
(1)
Parameter Adjustment
The CS relies primarily on two crucial parameters: the Levy flight step-size control factor and the elimination probability. In the original CS proposed by Yang et al. [24], the step-size control factor was set as a fixed value, which, in practice, diminishes the algorithm’s performance. Ong, based on the assumption that cuckoos lay eggs in regions where the host bird’s egg survival rate is higher, introduced an adaptive step-size adjustment strategy [25], dynamically adjusting the step size. Wang et al. proposed a variable parameter strategy [26], where the step-size proportion factor is randomly generated, and in each iteration, this value follows a uniform distribution between 0 and 1. Cheng et al. introduced an algorithm with dynamic feedback information [27], utilizing the randomness and stability trends of cloud models, dynamically adjusting the step size and discovery probability based on individual fitness values.
(2)
Strategy Improvement
In the realm of strategy improvement, Ma et al. introduced an enhanced CS employing a search trend strategy [28], which enhances the overall solving performance of the algorithm. Test results demonstrate that this algorithm exhibits strong competitiveness in addressing multi-peaked and high-dimensional function optimization problems. Meng et al. developed a locally enhanced CS algorithm targeting multimodal numerical problems [29]. In this algorithm, the global optimal individual’s position guides the movement of all cuckoos, improving local search capabilities. The introduction of an inertia weight achieves a balance between exploration and exploitation. Meng utilized individual constraint and collective constraint techniques (CGC) for population initialization, enhancing the diversity of initial population decision variable values. Additionally, a cosine decay strategy [30] was employed for dynamic adaptive change in probability, speeding up convergence. In the same year, Gao et al. proposed the MSACS algorithm [31], incorporating five search strategies with different characteristics to replace a single Levy flight strategy. Each cuckoo adaptively selects a search strategy with an adaptive probability in each iteration, generating new solutions in every iteration.
(3)
Hybrid Algorithms
Combining the cuckoo search (CS) algorithm with other diverse algorithms can maximize the utilization of their respective strengths. Some scholars have attempted to enhance the performance of the CS algorithm through various hybrid strategies. In 2018, Saha [32] introduced the concept of population ranking in the grey wolf optimizer (GWO). After each iteration, the three best positions of the search individuals are recorded, guiding the cuckoo search in exploring new solutions. Simultaneously, the Cauchy distribution replaced the Levy flight. Testing the proposed algorithm on standard benchmark functions revealed its high competitiveness compared to existing techniques.
The improved CS algorithms described above have enhanced the performance of the CS algorithm to varying degrees in relevant studies. However, for the CS and most of its improved versions, there is a lack of effective information sharing among cuckoo individuals during the evolutionary process. This deficiency may result in the underutilization of useful information within the population, limiting the performance potential of these CS-based algorithms. We define our objective function F ( x ) with the aim of minimizing its value. Formally, the optimization problem is articulated as follows:
$$\min \; F(x)$$
Here, $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$. The improved cuckoo search algorithm proposed in this paper is designed to effectively solve this optimization problem, especially in the context of complex differential equations.
The main contributions of this paper are as follows:
(1)
The proposal of an improved CS based on a sharing mechanism.
(2)
The introduction of a numerical solution method for differential equations using the coupling of function approximation and intelligent algorithms.
(3)
The application of the improved algorithm to solve differential equations.
The structure of this paper is as follows. Section 2 provides an overview of the CS and summarizes the algorithm’s optimization process. Section 3 introduces the improved CS based on a sharing mechanism. Section 4 briefly outlines the general form of the optimization problems and elucidates the fundamental approach of applying the improved CS to the solution of differential equations. And Section 5 concludes this paper.

2. Cuckoo Search Algorithm

2.1. Introduction to the Cuckoo Search Algorithm

The CS is designed to mimic the behavior of cuckoos in nature, specifically their parasitic egg-laying and hatching behavior. Cuckoos exhibit parasitic egg-laying behavior, characterized by not building their nests and not caring for their offspring. To maximize the survival rate of their eggs, cuckoos preferentially choose nests that resemble their own eggs, referring to the owner of such a nest as the host. When the host is absent from the nest, the cuckoo lays its eggs, allowing the host to feed and nurture the chicks. If the parasitic eggs go unnoticed by the host, they may be incubated and cared for by the host, but there is also a possibility of the host identifying the parasitic eggs. Once discovered as eggs not belonging to the host, the host may destroy the cuckoo eggs or abandon the current nest to build a new one, resulting in the failure of the cuckoo’s parasitic plan. In order to gain more space and food, the chicks may push the eggs of the host birds out of the nest.
The design of the CS corresponds to the biological behavior of cuckoos. The algorithm begins by randomly selecting initial solutions, evaluating and ranking them, and then choosing solutions with lower fitness as seed solutions for the next iteration. In each iteration, the seed solutions undergo random perturbation and recombination, generating a new set of solutions that is subsequently ranked and selected based on fitness. Ultimately, the algorithm converges to a global or local optimal solution.

2.2. Optimization Process of the Cuckoo Search Algorithm

The foundation of the CS lies in the survival efforts of cuckoos. Due to the large size of cuckoo eggs, it is challenging for cuckoos to carry multiple eggs at once. Moreover, carrying eggs makes flight more difficult, requiring more energy to maintain stamina and evade predators. To evade predators, cuckoos must find a secure place to hatch their eggs. Cuckoos continually strive to improve their mimicry of host bird eggs, while hosts continuously refine methods for detecting parasitic eggs. Essentially, the CS algorithm is an idealized algorithm based on the following assumptions:
Assumption 1.
Cuckoos randomly select nests, and each time, they lay only one egg in the host nest.
Assumption 2.
Based on the survival of the fittest rule, only a portion of the randomly chosen host nests, the best ones, are retained for the next generation.
Assumption 3.
The number of host nests available for cuckoo egg laying is fixed, and there is a probability $p_a \in (0, 1)$ that hosts discover eggs not their own. This may lead to the abandonment of the cuckoo eggs or of the host nest; in such cases, the host birds migrate to new habitats and establish new nests to start afresh.
Similar to other nature-inspired algorithms, the CS algorithm initiates its optimization process from the initialization stage and then proceeds to the iterative phase.
(1)
Population Initialization Based on a Random Distribution Strategy
To begin, the search space and population size are initialized, assuming the dimensions of the search space and the population size are denoted as D and d , respectively. Before the algorithm starts running, it is necessary to distribute the search individuals across the search space.
Let the position of the i-th cuckoo individual in the population at the t-th iteration be denoted as x i t = x i , 1 t , x i , 2 t , , x i , D t . The population information is initialized using a random distribution strategy, specifically through the following expression:
$$x_i^k = x_k^{low} + rand_1 \cdot \left( x_k^{up} - x_k^{low} \right), \quad i = 1, 2, \ldots, d, \;\; k = 1, 2, \ldots, D$$
where $rand_1$ is a random number following a uniform distribution $U(0,1)$, and $x_k^{up}$ and $x_k^{low}$ denote the upper and lower bounds, respectively, of the k-th dimension. $U(0,1)$ denotes the continuous uniform distribution on the interval [0, 1].
(2)
Global Search Based on Lévy Flight
In most cases, animals can freely choose their direction of movement and alter it during their motion. Since the direction and shift to the next position depend on the current position, the present state, and the transition probability to the next position, animal movement generally adheres to the principle of random walking. After numerous experiments and deepening research by experts and scholars, attempts have been made to use probability-related mathematical formulas to describe this behavior. When the step size in random walking follows a Lévy distribution, it is termed a Lévy flight. A notable characteristic of Lévy flight is that its values can be positive or negative, allowing for the generation of suitable distributions.
The step length S in Lévy flight can be expressed as Equation (2):
$$S = \frac{U}{|V|^{1/\delta}}$$
where $U$ and $V$ follow normal distributions $N(0, \sigma_U^2)$ and $N(0, \sigma_V^2)$, respectively. Additionally, for $1 \le \delta \le 2$, Equation (3) holds:
$$\sigma_U = \left[ \frac{\Gamma(1+\delta)\sin\left(\frac{\pi\delta}{2}\right)}{\Gamma\left(\frac{1+\delta}{2}\right)\, \delta\, 2^{\frac{\delta-1}{2}}} \right]^{1/\delta}, \qquad \sigma_V = 1$$
where $\Gamma$ is the Gamma function, defined as $\Gamma(k) = \int_0^\infty t^{k-1} e^{-t}\, dt$.
In a two-dimensional space, the trajectory of a Lévy flight walking 1000 steps under different parameters is shown in Figure 1. Observing the image, it can be noticed that after multiple instances of clustering, Lévy flight exhibits significant leaps.
In each iteration, the CS algorithm first employs a global search based on Lévy flight to generate new nest positions $x_i^{t+1}$, $i = 1, 2, \ldots, d$. The update formula for $x_i^{t+1}$ can be formalized as follows:
$$x_i^{t+1} = x_i^t + \alpha \otimes S = x_i^t + \alpha_0 S \otimes \left( x_i^t - x_{best}^t \right)$$
where:
  • $\alpha$ — the step-size factor;
  • $\alpha_0$ — the proportionality factor;
  • $x_{best}^t$ — the best nest position at the t-th iteration;
  • $\otimes$ — element-wise multiplication.
The movement strategy of the CS algorithm can be intuitively observed from Figure 2.
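As an illustration, the following Python sketch implements the Mantegna procedure of Equations (2) and (3) and the Lévy-flight update of Equation (4). The step-size value alpha0 = 0.01 and the exponent delta = 1.5 are assumed defaults for illustration, not values prescribed by the paper.

```python
import math
import numpy as np

def levy_step(dim, delta=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm (Equations (2) and (3))."""
    sigma_u = (math.gamma(1 + delta) * math.sin(math.pi * delta / 2)
               / (math.gamma((1 + delta) / 2) * delta * 2 ** ((delta - 1) / 2))) ** (1 / delta)
    u = np.random.normal(0.0, sigma_u, dim)   # U ~ N(0, sigma_U^2)
    v = np.random.normal(0.0, 1.0, dim)       # V ~ N(0, sigma_V^2) with sigma_V = 1
    return u / np.abs(v) ** (1 / delta)       # S = U / |V|^(1/delta)

def levy_global_search(nests, best, alpha0=0.01):
    """One Levy-flight global-search pass over all nests, following Equation (4)."""
    return np.array([x + alpha0 * levy_step(x.size) * (x - best) for x in nests])
```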
(3)
Local Search Based on Preference Mechanism
In the strategy of random preference wandering, there is a certain probability that the cuckoo’s parasitic eggs will be discovered by the host. Assume the probability of the host discovering the cuckoo’s eggs is $p_a$, and let $P_a$ be a D-dimensional vector whose elements are all $p_a$. Once the host discovers an egg that does not belong to it, the host nest with a high fitness value is destroyed. The new host nest position $x_i^{t+1}$ $(i = 1, 2, \ldots, d)$ is reconstructed based on the random mechanism in Equation (5).
$$x_i^{t+1} = x_i^t \otimes \left( 1 - G\left( P_a - rand_2 \right) \right) + v_i^t \otimes G\left( P_a - rand_2 \right)$$
where:
$$v_i^t = x_i^t + rand_3 \cdot \left( x_m^t - x_n^t \right)$$
$rand_2$ is a random vector whose elements all follow the distribution $U(0,1)$, $rand_3$ is a random number following $U(0,1)$, and $x_m^t$, $x_n^t$ are two randomly chosen, distinct host nests from the current population. Here, $G(x)$ is the Heaviside function:
$$G(x) = G\left( x_1, x_2, \ldots, x_k, \ldots, x_d \right) = \left[ G_1, G_2, \ldots, G_k, \ldots, G_d \right]$$
where $G_k = \begin{cases} 1, & x_k > 0 \\ 0, & x_k \le 0 \end{cases}$.
After completing the global or local random walk, the fitness function is determined as f x and the greedy strategy is employed to decide whether the newly generated host nest should replace the corresponding old host nest or be retained. Following the update of host-nest positions according to Equations (4) or (5), the fitness values are calculated. Subsequently, the new and old fitness values are compared to determine which one has a superior fitness, thus establishing the new solution.
For minimization problems, the selection strategy is formalized as follows:
$$x_i^{t+1} = \begin{cases} x_i^{t+1}, & f\left( x_i^{t+1} \right) < f\left( x_i^t \right) \\ x_i^t, & \text{otherwise} \end{cases}$$
The bird nest where the host bird’s egg is located serves as a solution, and the cuckoo searches for a suitable solution through Lévy flights. Once a suitable nest is found, the cuckoo removes the host bird’s egg and lays its own egg in the host bird’s nest. These suitable nests represent new solutions generated by the update formula of the CS. The search for the optimal solution mainly relies on the mechanisms of Lévy flights and random walks to explore new solutions. The process of using the algorithm to find the optimal solution involves continually replacing the previous inferior solution with a new one.
The CS algorithm is derived from idealized rules, and its implementation process is shown in Figure 3. Through the above steps, it can be observed that the CS has a relatively simple and straightforward approach. While the CS is widely used for optimization problems, it still faces challenges such as slow convergence speed, long simulation time, and a certain degree of dependence on randomness.
In intelligent optimization algorithms, the initialization of the population in the search space plays a crucial role in determining the optimization performance. The distribution of initial positions determines the environmental adaptability of the initial population, which should represent individuals from the entire space as much as possible; therefore, the distribution of individuals in the initial population can affect the optimization effect of the algorithm. A random distribution policy for initializing the population is uncontrollable, and its coverage of the space is uncertain. This approach has a certain probability of producing an initial population with good fitness; however, when the randomly acquired population clusters in one region of the exploration space, it is not conducive to global optimization. When dealing with complex systems, a randomly generated initial solution set cannot be guaranteed to traverse the entire space, and if the generated initial solutions deviate significantly from the actual solutions, it may be impossible or difficult to find them. In other words, to better solve practical problems, it is essential to have an initial set of solutions that represents the potential distribution of solutions across the entire search space.
The CS uses a random approach to initialize the cuckoo population in the initialization phase. This results in an uneven distribution of the initial cuckoo population, leading to low algorithm efficiency and hindering global search. Equations (4) and (5) indicate that in the CS algorithm, each cuckoo individual’s search for a nest is guided only by the cuckoo that found the best host nest. This undoubtedly leads to the underutilization of useful information carried by other cuckoo individuals. Due to the lack of effective information from previous iterations, there is a phenomenon of repeated searching in the next iteration. This not only causes resource waste but also reduces the algorithm’s performance potential.

3. Improved Cuckoo Search Algorithm

3.1. Algorithm Design

(1)
Initialization of the Population Based on the Best Point Set Method
A uniformly distributed initial population can effectively characterize the solution space, and scholars have proposed methods to construct such distributions. The best point set method [33] is a way to uniformly sample points in space; the initial solution set obtained with it is uniformly distributed and has good diversity. In a D-dimensional space, a best point set with d points is constructed as
$$P_d(i) = \left( \{ l_1(d) \times i \}, \{ l_2(d) \times i \}, \ldots, \{ l_D(d) \times i \} \right), \quad i = 1, 2, \ldots, d$$
Taking $l_i = 2\cos\frac{2\pi i}{p}$ $(1 \le i \le d)$, with $p$ the smallest prime satisfying $\frac{p-3}{2} \ge D$, $P_d(i)$ is a best point set and $l = \{ l_i, i = 1, \ldots, d \}$ is a best point; this is the distribution used for the improved initial cuckoo population.
In a two-dimensional space, 200 points are randomly selected, and simultaneously a best point set containing 200 points is constructed with values ranging over [0, 1]. The distribution effects are shown in Figure 4. Comparing Figure 4a with Figure 4b, it is evident that random point selection yields a highly scattered distribution, while the best point set method yields a more uniform one. For a fixed number of points, the best point set method ensures a uniform distribution regardless of the dimensionality of the search space, thereby guaranteeing good diversity in the population.
To ensure the uniformity of the population, the best point set can be used instead of a random method for population initialization. Theoretically, applying the best point set mapping to the target solution space can effectively enhance the performance of the CS. The specific steps for initializing the operation using the best point set principle are as follows:
Step 1: Construct a best point set containing d points, where d is the population size;
Step 2: The initial position of the i-th cuckoo is $x_i^0 = \left( x_{i,1}^0, x_{i,2}^0, \ldots, x_{i,D}^0 \right)$, $i = 1, \ldots, d$. Here, D represents the dimensionality of the search space.
Step 3: The specific calculation formula is:
$$x_{i,j}^0 = j_{low} + 2\left( j_{up} - j_{low} \right) \cos\frac{2\pi j}{p}$$
where $j_{up}$ and $j_{low}$ represent the upper and lower bounds of the j-th dimension in the search space, respectively.
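For concreteness, the following Python sketch initializes a population with the best (good) point set. The fractional-part mapping $x_{i,j}^0 = j_{low} + \{2\cos(2\pi j/p) \cdot i\}\,(j_{up} - j_{low})$ used in the code is the standard good-point-set construction and is assumed here.

```python
import numpy as np

def smallest_prime_at_least(n):
    """Return the smallest prime p >= n (simple trial division)."""
    def is_prime(m):
        return m >= 2 and all(m % k for k in range(2, int(m ** 0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

def good_point_set_init(pop_size, dim, lower, upper):
    """Initialize a (pop_size x dim) population with the best point set method."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    p = smallest_prime_at_least(2 * dim + 3)                   # smallest prime with (p - 3) / 2 >= dim
    j = np.arange(1, dim + 1)                                  # dimension index
    i = np.arange(1, pop_size + 1).reshape(-1, 1)              # individual index
    frac = np.mod(2.0 * np.cos(2.0 * np.pi * j / p) * i, 1.0)  # fractional part {l_j * i} in [0, 1)
    return lower + frac * (upper - lower)

# Example: the 200-point set in the unit square compared in Figure 4b
pop = good_point_set_init(200, 2, [0, 0], [1, 1])
```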
(2)
Shared Mechanism-Based Global Search Strategy
The generation of new solutions in the CS algorithm is primarily based on the information difference between individuals and their peers, leading to slow convergence in the later stages and reduced optimization accuracy. Information sharing among cuckoo individuals helps alleviate this problem, and previous research [34,35,36,37] has used information sharing to improve the performance of the CS, resulting in several new and improved CS algorithms. To further strengthen information sharing among cuckoo individuals, an improved CS based on a sharing mechanism (ICSABOSM) is proposed, enhancing collaboration within the population. The improved algorithm introduces a new iteration strategy to replace the CS algorithm’s global search strategy based on Lévy flights, referred to in this paper as the sharing-mechanism-based global search strategy. The improved CS modifies the traditional operation mode, allowing the algorithm to focus more on exploration in the early stages of evolution and more on exploitation in the later stages.
The CS algorithm is sensitive to parameter settings, and the process of updating nest positions, as per Equations (2) and (4), depends on two parameters, α 0 and δ , with a broad range of possible values. Typically, in research studies, a fixed value is assigned to the step-size factor α 0 throughout the algorithm’s stages. Parameter δ is a crucial factor in adjusting the convergence speed of the CS algorithm. When δ is fixed, it remains constant throughout the entire iteration process and cannot be changed. In cases where the iteration count is not sufficiently large, it can result in poor performance of the CS algorithm. To achieve the best global solution with an acceptable error, a large number of iterations is often required. To enhance the reliability and accuracy of the CS algorithm, many studies have introduced dynamic changes in the handling of parameter δ , transitioning from a constant to dynamic values. Numerous research works have demonstrated that dynamic variations in parameters can be controlled through linear or nonlinear functions [38]. Users can choose appropriate parameter values based on the specific problem they aim to solve, making the convergence results dependent on user choices. To overcome this limitation, an analysis is conducted based on the characteristics of parameter δ .
A notable characteristic of optimization algorithms is their reliance on sufficiently long moves in the initial iterations. If these steps progress towards the global optimum, the convergence speed of the algorithm will be enhanced. When the parameter δ linearly increases to 2, the search space gradually decreases. This indicates that if sufficiently long moves can be made in the initial iterations, the convergence speed of the algorithm will improve. At the same time, to avoid reaching a local optimum, it is necessary to reduce the movement space before the end of the iteration process.
Let t and T m a x denote the current iteration number and the total number of iterations, respectively. Set
$$\delta = \frac{t}{T_{max}}$$
Control parameter $\delta$ in Equation (11) through Equation (12):
$$f_4(\delta) = \operatorname{random}\left\{ f_1(\delta), f_2(\delta), f_3(\delta) \right\}$$
where
$$f_1(\delta) = 1 + \sin(\pi \delta)$$
$$f_2(\delta) = \delta + 1$$
$$f_3(\delta) = e^{0.693 \delta}$$
Combining the characteristics of these functions, the advantages of using $f_4(\delta)$ to compute the parameter $\delta$ can be analyzed. As shown in Figure 5, $f_1(\delta)$ is a nonlinear function whose value increases sharply in the early iterations, which strengthens the algorithm’s ability to explore feasible regions of the search space; however, when such long flights fail, it alone cannot reach the global optimum. Adding a linear component helps strike a balance between convergence speed and precision, but if convergence is too fast the search may fall into a local optimum, so the function $f_2(\delta)$ is included to control the convergence speed. Function $f_3(\delta)$ converges more slowly than $f_1(\delta)$ and $f_2(\delta)$; relying on $f_3(\delta)$ alone, the algorithm would therefore require more iterations to reach the desired precision. Function $f_4(\delta)$, which selects randomly among the three, is used to control the step size S: compared with any fixed $\delta$, it allows sufficiently long moves in the initial iterations and short moves in the final iterations. Building on the vector step size S controlled by $f_4(\delta)$, a new step-size parameter is proposed below.
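A minimal sketch of this step-size control, assuming $\delta$ is refreshed once per iteration, is shown below; each of $f_1$, $f_2$, $f_3$ maps $t/T_{max} \in [0, 1]$ into $[1, 2]$, so the returned value can be used directly as the Lévy exponent of Equations (2) and (3).

```python
import math
import random

def delta_schedule(t, t_max):
    """Randomized control of the exponent delta, following Equations (11) and (12)."""
    delta = t / t_max                       # Equation (11): delta grows linearly with the iteration count
    f1 = 1.0 + math.sin(math.pi * delta)    # rises quickly in the early iterations
    f2 = delta + 1.0                        # grows linearly
    f3 = math.exp(0.693 * delta)            # grows most slowly, reaching about 2 at the end
    return random.choice([f1, f2, f3])      # Equation (12): pick one of the three at random
```

In the sketches that follow, delta_schedule(t, t_max) replaces the fixed $\delta$ of the original Lévy step.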
The cuckoo seeks the parasitic bird’s nest that is most suitable for egg-laying to maximize the survival rate of its eggs. In the CS algorithm, finding the host nest relies on completely random movements because of the lack of good information about the optimal parasitism. Therefore, multiple attempts are needed to find the optimal parasitic bird’s nest, meaning several iterations are required to achieve global optimality. Addressing this drawback of the CS algorithm, this paper introduces a new parameter, the feasible sharing area, established based on information sharing among cuckoos in the population.
Similar to the population structure in the grey wolf optimization algorithm, the population structure of the improved CS is defined as k b e s t , calculated by Equation (13):
$$k_{best} = d - (d - 3)\,\frac{t}{T_{max}}$$
As the number of iterations increases, k b e s t shows a decreasing trend. Moreover, when t = T m a x , it holds that k b e s t = 3 .
To improve the success rate, the cuckoos in the population are compared across all nests: the fitness values of all current host-nest positions are calculated and sorted, and the position information of the best parasitic nests is recorded. The top $k_{best}$ nests in terms of fitness are considered feasible nests; by definition, these are the host nests with the best properties in the search space. Based on the feasible nests, a feasible sharing area $Feas_{best}$ is established. $Feas_{best}$ is calculated using Equation (14) and represents the average position of the nests ranked in the top $k_{best}$ by fitness value.
$$Feas_{best} = \frac{\sum_{i=1}^{k_{best}} X_i}{k_{best}}$$
where
  • k b e s t is calculated by Equation (13);
  • X i is the position of the cuckoo ranked i-th in fitness.
Clearly, after each iteration, F e a s b e s t will be updated. Let the host nests in the feasible sharing area take on the role of new leaders. In each iteration, guided by F e a s b e s t as the leader, each cuckoo bird’s movement is directed, and the new direction always aligns with the movement trend of the cuckoo bird.
In the improved algorithm, a new step size is established based on F e a s b e s t :
$$S^* = S + \alpha_0 \left( \alpha_0 \, Feas_{best} - S \right)$$
In the equation, S is defined consistently with the step size in the CS algorithm, δ is controlled by the function f 4 ( δ ) , updating the step size S in the CS algorithm to maintain a certain level of randomness in the defined step size. The dominant individuals included in F e a s b e s t increase the likelihood of the cuckoo birds finding better solutions. Additionally, based on the information contained in F e a s b e s t , the optimal movement direction for each cuckoo bird can be determined. Such movement directions reduce the number of iterations, thereby improving the convergence speed of the algorithm.
The global search strategy based on the sharing mechanism can be explained using Equation (16):
$$x_i^{t+1} = x_i^t + \alpha_0 S^* \otimes \left( x_{best}^t - x_i^t \right)$$
From Equations (15) and (16), it can be observed that in the new global exploration strategy based on the sharing mechanism, both the best information from the population and useful information from other cuckoo individuals guide the search simultaneously. This approach not only relies on the information from the best cuckoo individual but also helps the algorithm maintain population diversity. Additionally, using the new step-size operator instead of the single operator based on the Lévy flight further enhances the exploration capability.
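The following sketch combines Equations (13)–(16). It reuses levy_step and delta_schedule from the earlier sketches, and the form of $S^*$ follows the reconstruction of Equation (15) given above, which should be read as an assumption rather than a definitive statement of the paper's operator.

```python
import numpy as np

def shared_global_search(nests, fitness, t, t_max, alpha0=0.01):
    """One sharing-mechanism global-search pass, following Equations (13)-(16).

    `nests` is a (d x D) array and `fitness` the matching objective values
    (smaller is better). Reuses levy_step() and delta_schedule() defined earlier.
    """
    d = len(nests)
    k_best = max(3, int(round(d - (d - 3) * t / t_max)))   # Equation (13)
    order = np.argsort(fitness)
    feas_best = nests[order[:k_best]].mean(axis=0)         # Equation (14): mean of the top-k_best nests
    best = nests[order[0]]

    delta = delta_schedule(t, t_max)
    new_nests = np.empty_like(nests)
    for i, x in enumerate(nests):
        s = levy_step(x.size, delta)
        s_star = s + alpha0 * (alpha0 * feas_best - s)     # Equation (15), as reconstructed above
        new_nests[i] = x + alpha0 * s_star * (best - x)    # Equation (16)
    return new_nests, feas_best
```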
(3)
Local Search Strategy Based on Sharing Mechanism
After the global search is completed, to enhance information sharing during the local search, one cuckoo is randomly selected from the feasible sharing set $Feas_{best}$ to share information with the i-th cuckoo; denote it the $q_i$-th cuckoo. The current fitness values of the nests found by these two cuckoos are compared, and the nest position with the smaller fitness value is retained as $x_{q_i}^t$ for the update. The updating process is illustrated in Figure 6.
With probability $p_a$, a host discovers alien eggs, and nests with high fitness values are destroyed. To ensure that cuckoos stay away from abandoned nests, more cuckoos need to provide location information. Four cuckoos with distinct positions are randomly selected from the current population, with positions denoted $x_m^t$, $x_n^t$, $x_p^t$, $x_q^t$. The new nest position $x_i^{t+1}$, $i = 1, 2, \ldots, d$, is reconstructed based on the information shared by multiple cuckoos. The double-difference formula (17) is used to update $v_i^t$ in Formula (5), yielding a local search strategy based on the sharing mechanism:
$$v_i^t = rand_5 \, x_{q_i}^t + \left( 1 - rand_5 \right) x_{best} + rand_6 \left( x_m^t - x_n^t \right) + rand_4 \left( x_p^t - x_q^t \right)$$
where $rand_4$, $rand_5$, and $rand_6$ are random numbers generated from $U(0,1)$.
From Equations (6) and (17), double-difference vectors take the place of the single-difference vector used in the original CS, which reinforces information sharing among individuals. Equation (17) also shows that the modified operation mode offers more flexibility to the search process: it enables the population, at both the early and later evolution stages, to keep exploiting promising regions that have already been located while still exploring new, unvisited regions of the search space.
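A corresponding sketch of the sharing-mechanism local search is given below. For brevity it draws the sharing partner directly from the top-$k_{best}$ nests and omits the preliminary fitness comparison with the $q_i$-th cuckoo described above; treat these simplifications as assumptions.

```python
import numpy as np

def shared_local_search(nests, feas_set, best, p_a=0.25):
    """One sharing-mechanism local-search pass, following Equations (5) and (17)."""
    d, dim = nests.shape
    new_nests = nests.copy()
    for i in range(d):
        x_q = feas_set[np.random.randint(len(feas_set))]         # sharing partner drawn from the top nests
        m, n, p, q = np.random.choice(d, size=4, replace=False)  # four distinct nests for the double difference
        r4, r5, r6 = np.random.rand(3)
        v = (r5 * x_q + (1.0 - r5) * best
             + r6 * (nests[m] - nests[n]) + r4 * (nests[p] - nests[q]))  # Equation (17)
        mask = (p_a - np.random.rand(dim)) > 0                    # Heaviside G(P_a - rand_2)
        new_nests[i] = np.where(mask, v, nests[i])                # Equation (5)
    return new_nests
```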

3.2. Algorithm Flow

The main steps of the improved CS are as follows, and the algorithm flowchart is shown in Figure 7.
(1)
Set the population size d, the other variables $T_{max}$ and $P_a$, and the fitness function to be optimized $F(x)$. Let the position of the bird nest found by the i-th cuckoo be $x_i(t) = \left( x_{i1}(t), x_{i2}(t), \ldots, x_{iD}(t) \right)$, $i = 1, 2, \ldots, d$. Initialize the population using the best point set method: $x_i$ $(i = 1, 2, \ldots, d)$;
(2)
Calculate the fitness value F ( x i ) of the initial bird nest position. Through comparison, designate the minimum fitness value in the current individuals as F b e s t , and record the corresponding best position as x b e s t ;
(3)
Search and update F e a s b e s t using Equations (13) and (14);
(4)
Update the position of each cuckoo bird using Equations (15) and (16), denoted as x ^ i ( t ) and calculate the fitness value of the new position as F ( x ^ i ( t ) ) .
(5)
If F ( x ^ i ( t ) ) < F ( x i ( t ) ) , then update x i ( t ) to x ^ i ( t ) , and correspondingly update F ( x i ( t ) ) to F ( x ^ i ( t ) ) . If F ( x ^ i ( t ) ) F ( x i ( t ) ) , then keep them unchanged.
(6)
Generate a random number θ [ 0 , 1 ] . If θ > P a , then eliminate solution x ^ i ( t ) . Update the eliminated solution according to Equations (5) and (17), then update its fitness.
(7)
Sort the fitness of all cuckoo birds, obtaining the current best position x b e s t t + 1 and the best value F ( x b e s t t + 1 ) .
(8)
Check the termination criteria. If satisfied, output the optimal solution; otherwise, iterate back to step (3).
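Putting steps (1) through (8) together, a minimal sketch of the main loop might look as follows. It reuses good_point_set_init, shared_global_search, and shared_local_search from the earlier sketches; default values such as p_a = 0.25 and alpha0 = 0.01 are assumptions for illustration.

```python
import numpy as np

def icsabosm(objective, lower, upper, pop_size=30, t_max=2500, p_a=0.25, alpha0=0.01):
    """Minimal sketch of the ICSABOSM main loop (steps (1)-(8) above)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    nests = good_point_set_init(pop_size, dim, lower, upper)        # step (1)
    fitness = np.array([objective(x) for x in nests])               # step (2)

    for t in range(1, t_max + 1):
        # steps (3)-(5): sharing-mechanism global search with greedy replacement
        cand, _ = shared_global_search(nests, fitness, t, t_max, alpha0)
        cand = np.clip(cand, lower, upper)
        cand_fit = np.array([objective(x) for x in cand])
        better = cand_fit < fitness
        nests[better], fitness[better] = cand[better], cand_fit[better]

        # step (6): abandon discovered nests and rebuild them with the shared local search
        k_best = max(3, int(round(pop_size - (pop_size - 3) * t / t_max)))
        feas_set = nests[np.argsort(fitness)[:k_best]]
        best = nests[np.argmin(fitness)]
        cand = np.clip(shared_local_search(nests, feas_set, best, p_a), lower, upper)
        cand_fit = np.array([objective(x) for x in cand])
        better = cand_fit < fitness
        nests[better], fitness[better] = cand[better], cand_fit[better]

    i_best = np.argmin(fitness)                                      # steps (7)-(8)
    return nests[i_best], fitness[i_best]
```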

3.3. Improved Algorithm Performance Test

3.3.1. Experimental Design

To evaluate the performance of the proposed improved CS, we compared it with the original CS on global optimization problems. We conducted a comprehensive experimental assessment using four well-known standard test functions, including two low-dimensional test functions and two high-dimensional test functions. The following are the four test functions [39] along with the optimal value f ( x ) of the functions in their defined domains.
(1)
$$f_1(x) = 14.203125 + x_1\left( 3x_2 + 4.5x_2^2 + 5.25x_2^3 - 12.75 \right) + x_1^2\left( x_2^6 + x_2^4 - 2x_2^3 - x_2^2 - 2x_2 + 3 \right)$$
where $x_1, x_2 \in [-100, 100]$. The global minimum of $f_1$ is at $x^* = (3, 0.5)$, with $f_1(x^*) = 0$. As shown in Figure 8a, the landscape of this function is relatively flat.
(2)
$$f_2(x) = 0.5 + \frac{\sin^2\left( x_1^2 - x_2^2 \right) - 0.5}{\left[ 1 + 0.002\left( x_1^2 + x_2^2 \right) \right]^2}$$
where $x_1, x_2 \in [-100, 100]$. The global minimum of $f_2$ is at $x^* = (0, 0)$, with $f_2(x^*) = 0$. As shown in Figure 8b, the function oscillates strongly within the given domain.
(3)
$$f_3(x) = \sum_{i=1}^{15} \left| x_i \right| + \prod_{i=1}^{15} \left| x_i \right|$$
where $x_i \in [-10, 10]$. The global minimum of $f_3$ is at $x^* = (0, 0, \ldots, 0)$, with $f_3(x^*) = 0$. Finding the global minimum of this function is typically somewhat challenging.
(4)
$$f_4(x) = \sum_{i=1}^{15} \left[ x_i^2 - 10\cos\left( 2\pi x_i \right) + 10 \right]$$
where $x_i \in [-5.2, 5.2]$. The global minimum of $f_4$ is at $x^* = (0, 0, \ldots, 0)$, with $f_4(x^*) = 0$. The function’s landscape features numerous peaks and valleys.
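For reference, the four test functions can be coded as follows. Here $f_1$ is written in its equivalent factored (Beale) form, and the constant 0.002 in $f_2$ follows the reconstruction above; both should be read as assumptions where the source formula was ambiguous.

```python
import numpy as np

def f1(x):
    """Beale-type function; global minimum 0 at (3, 0.5)."""
    x1, x2 = x
    return ((1.5 - x1 + x1 * x2) ** 2
            + (2.25 - x1 + x1 * x2 ** 2) ** 2
            + (2.625 - x1 + x1 * x2 ** 3) ** 2)

def f2(x):
    """Schaffer-type oscillatory function; global minimum 0 at (0, 0)."""
    x1, x2 = x
    return 0.5 + (np.sin(x1 ** 2 - x2 ** 2) ** 2 - 0.5) / (1 + 0.002 * (x1 ** 2 + x2 ** 2)) ** 2

def f3(x):
    """Schwefel 2.22-type function; global minimum 0 at the origin."""
    x = np.abs(np.asarray(x, float))
    return float(x.sum() + x.prod())

def f4(x):
    """Rastrigin function; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

# Example: minimize f4 in 15 dimensions with the main-loop sketch from Section 3.2
# best_x, best_f = icsabosm(f4, lower=[-5.2] * 15, upper=[5.2] * 15)
```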

3.3.2. Comparative Analysis of Algorithms

The optimal solution often requires multiple executions of the algorithm to achieve it. In the experiment, the algorithm was set to run independently 20 times, with a maximum iteration count of 2500. Under the same parameter settings, both the CS and the improved CS were tested. The population size was set to 30, discovery probabilities were set to 0.75 and 0.25, and the step-size control factor was set to 0.1 for two sets of experiments. The data obtained from running the algorithm independently 20 times were organized, and the best, worst, and average values were recorded as shown in Table 1 and Table 2.
Comparing the contents of Table 1 and Table 2, it can be observed that the improved CS with a discovery probability of 0.25 outperforms the CS in three evaluation metrics. Therefore, in terms of search accuracy and stability, the improved CS does show improvement compared to the original CS. These standard test functions exhibit complex properties and numerous local optima. Typically, optimization in high-dimensional spaces poses certain challenges as position updates cannot guarantee flexible movement. The experimental results for standard test functions indicate that, based on the newly proposed parameters, ICSABOSM’s position updates are more intelligent and flexible than those of the CS. The improved algorithm demonstrates a certain superiority, as it can perform sufficiently distant movements in the initial iterations, thereby expanding the search space.

3.3.3. Comparative Analysis of Algorithm Convergence

To more intuitively demonstrate the optimization performance of the ICSABOSM algorithm, it was compared with other modified CS algorithms and several intelligent optimization algorithms commonly used in engineering, including the classical CS, the firefly algorithm (FA), and particle swarm optimization (PSO). Additionally, a hybrid of the cuckoo search and firefly algorithms (CS-FA) was included in the test. For functions $f_1, f_2$, the iteration count was set to $T_{max} = 1000$ with $d_1 = 2$. The parameter settings for the algorithms involved in the test are shown in Table 3. For functions $f_3, f_4$, the iteration count was set to $T_{max} = 8000$ with $d_1 = 13$.
Figure 9a–d shows the convergence curves of five algorithms regarding the test functions. The horizontal axis represents the number of iterations, and the vertical axis represents the logarithmic value of the best fitness.
Function $f_1$: a multimodal function characterized by a relatively flat fitness landscape. Traditional swarm intelligence algorithms and the CS successfully find the global optimum on this problem, demonstrating good solutions; this indicates that the ICSABOSM likewise exhibits commendable characteristics.
Function f 2 : A multimodal function with innumerable local minima within a given feasible domain, marked by strong oscillations, making the search for the global minimum challenging. Traditional swarm intelligence algorithms show lower precision in optimization, while the classic CS, CS-FA, and other improved CS algorithms all successfully identify the global optimum.
Function f 3 : A difficult-to-optimize unimodal function. For this function, all algorithms did not converge to the global minimum within 8000 iterations. However, the CS, CS-FA, and the improved algorithm ICSABOSM achieved significantly better optimal values compared to traditional swarm intelligence algorithms.
Function f 4 : A typical nonlinear multimodal function, with peaks varying greatly in height, rendering the search for its global optimum exceedingly difficult. Experimental results indicate that the PSO and FA became trapped in local optima, highlighting the strong local search capability of the ICSABOSM.
The ICSABOSM algorithm demonstrates a significant advantage in convergence accuracy and can converge rapidly to the optimal value or its vicinity within a small number of iterations.

4. Application of the Improved Algorithm in Differential Equations

Optimization problems can be formulated as the minimization or maximization of the objective function under variable constraints. The general form of a constrained minimization optimization problem is:
$$\begin{aligned} \min \;\; & F(T) \\ \text{s.t.} \;\; & h_i(T) = 0, \quad i = 1, 2, \ldots, n_1 \\ & g_j(T) \le 0, \quad j = 1, \ldots, n_2 \end{aligned}$$
where $\Omega = \left\{ T \in \mathbb{R}^n \right\}$ is the feasible solution space, $T$ denotes the decision variables, $F(T)$ is the objective function, $h_i(T) = 0$ are the equality constraints, and $g_j(T) \le 0$ are the inequality constraints.
By transforming a system of differential equations into an optimization problem, the solution of the differential equations can be tackled using existing optimization algorithms. The fundamental idea behind this approach is to treat the unknown functions in the system of differential equations as optimization variables, and the residuals of the differential equations serve as the cost function. The solution to the system of differential equations is obtained by minimizing this cost function.

4.1. Construction of Approximate Solutions

Optimization is a mathematical method widely applied in various fields such as science, engineering, and business. Its purpose is to find the optimal solution within given constraints to meet specific requirements and objectives. In this process, the selection and adjustment of input variables play a crucial role in influencing the final results. The core of optimization methods lies in the cost function, objective function, or fitness function. These functions are used to measure the relationship between input variables and output results. By optimizing these functions, the optimal solution can be found.
Assuming the general form of a system of differential equations defined on the interval [ t 0 , t n ] can be described as follows:
$$\begin{cases} F_1\left( t, X_1, \ldots, X_1^{(n)}, X_2, \ldots, X_2^{(n)}, \ldots, X_n, \ldots, X_n^{(n)} \right) = 0 \\ F_2\left( t, X_1, \ldots, X_1^{(n)}, X_2, \ldots, X_2^{(n)}, \ldots, X_n, \ldots, X_n^{(n)} \right) = 0 \\ \quad \vdots \\ F_n\left( t, X_1, \ldots, X_1^{(n)}, X_2, \ldots, X_2^{(n)}, \ldots, X_n, \ldots, X_n^{(n)} \right) = 0 \end{cases}$$
The boundary value conditions are:
$$\begin{cases} X_1(t_0) = X_{1,0} \\ X_2(t_0) = X_{2,0} \\ \quad \vdots \\ X_n(t_0) = X_{n,0} \end{cases}, \quad \begin{cases} X_1'(t_0) = X_{1,0}' \\ X_2'(t_0) = X_{2,0}' \\ \quad \vdots \\ X_n'(t_0) = X_{n,0}' \end{cases}, \quad \begin{cases} X_1(t_n) = X_{1,n} \\ X_2(t_n) = X_{2,n} \\ \quad \vdots \\ X_n(t_n) = X_{n,n} \end{cases}, \quad \begin{cases} X_1'(t_n) = X_{1,n}' \\ X_2'(t_n) = X_{2,n}' \\ \quad \vdots \\ X_n'(t_n) = X_{n,n}' \end{cases}$$
Or the initial value conditions are:
$$\begin{cases} X_1(t_0) = X_{1,0} \\ X_2(t_0) = X_{2,0} \\ \quad \vdots \\ X_n(t_0) = X_{n,0} \end{cases}, \quad \begin{cases} X_1'(t_0) = X_{1,0}' \\ X_2'(t_0) = X_{2,0}' \\ \quad \vdots \\ X_n'(t_0) = X_{n,0}' \end{cases}, \quad \ldots, \quad \begin{cases} X_1^{(n-1)}(t_0) = X_{1,0}^{(n-1)} \\ X_2^{(n-1)}(t_0) = X_{2,0}^{(n-1)} \\ \quad \vdots \\ X_n^{(n-1)}(t_0) = X_{n,0}^{(n-1)} \end{cases}$$
By constructing an approximate solution to the differential equation, the original problem can be transformed into a constrained optimization problem. Consider an n-th order initial-boundary value problem for a differential equation:
$$F\left( t, x, x', \ldots, x^{(n)} \right) = 0$$
with boundary conditions:
$$x(t_0) = x_0, \quad x(t_n) = x_n, \quad \ldots, \quad x'(t_0) = x_0', \quad \ldots, \quad x'(t_n) = x_n'$$
or the initial value conditions of:
$$x(t_0) = x_0, \quad x'(t_0) = x_0', \quad \ldots, \quad x^{(n-1)}(t_0) = x_0^{(n-1)}$$
At this point, the above Equation (19) can be addressed by utilizing the improved CS to find an approximate solution. For a general continuous function satisfying the hypotheses of the Fourier series convergence theorem, the Fourier series converges to the function; in other words, such a function can be expressed in the form of a Fourier series expansion. As is well known, the sine and cosine functions can be differentiated infinitely many times, and it holds that:
$$\sin^{(n)}(x) = \sin\left( x + \frac{n\pi}{2} \right)$$
An approximate solution to a high-order differential equation can therefore be sought as a truncated Fourier series with a finite number of terms. Differentiating such an expansion does not reduce the number of terms, and the improved algorithm is employed to find the optimal values of its coefficients.
Construct a Fourier series expansion centered around t ¯ as follows:
$$x(t) \approx \tilde{x}(t) = a_0 + \sum_{j=1}^{M} \left[ a_j \cos\left( \frac{j\pi (t - \bar{t})}{L} \right) + b_j \sin\left( \frac{j\pi (t - \bar{t})}{L} \right) \right]$$
Compute the derivatives as follows:
$$x'(t) \approx \tilde{x}'(t) = \sum_{j=1}^{M} \frac{j\pi}{L} \left[ a_j \cos\left( \frac{j\pi (t - \bar{t})}{L} + \frac{\pi}{2} \right) + b_j \sin\left( \frac{j\pi (t - \bar{t})}{L} + \frac{\pi}{2} \right) \right]$$
$$x''(t) \approx \tilde{x}''(t) = \sum_{j=1}^{M} \left( \frac{j\pi}{L} \right)^2 \left[ a_j \cos\left( \frac{j\pi (t - \bar{t})}{L} + \pi \right) + b_j \sin\left( \frac{j\pi (t - \bar{t})}{L} + \pi \right) \right]$$
$$x^{(n)}(t) \approx \tilde{x}^{(n)}(t) = \sum_{j=1}^{M} \left( \frac{j\pi}{L} \right)^n \left[ a_j \cos\left( \frac{j\pi (t - \bar{t})}{L} + \frac{n\pi}{2} \right) + b_j \sin\left( \frac{j\pi (t - \bar{t})}{L} + \frac{n\pi}{2} \right) \right]$$
where $L = t_n - t_0$ is the interval length and M is the number of sine and cosine terms. To find the approximate solution of the differential equation, it is only necessary to determine the unknown coefficients $a_j$, $b_j$ in Equation (25).
Based on the above discussion, for a general system of differential equations, the approximate solution can be constructed in the following form:
$$\begin{cases} X_1(t) \approx \tilde{X}_1(t) = a_0^1 + \sum_{j=1}^{M} \left[ a_j^1 \cos\left( \frac{j\pi (t - \bar{t})}{L} \right) + b_j^1 \sin\left( \frac{j\pi (t - \bar{t})}{L} \right) \right] \\ X_2(t) \approx \tilde{X}_2(t) = a_0^2 + \sum_{j=1}^{M} \left[ a_j^2 \cos\left( \frac{j\pi (t - \bar{t})}{L} \right) + b_j^2 \sin\left( \frac{j\pi (t - \bar{t})}{L} \right) \right] \\ \quad \vdots \\ X_n(t) \approx \tilde{X}_n(t) = a_0^n + \sum_{j=1}^{M} \left[ a_j^n \cos\left( \frac{j\pi (t - \bar{t})}{L} \right) + b_j^n \sin\left( \frac{j\pi (t - \bar{t})}{L} \right) \right] \end{cases}$$
For the derivatives $\tilde{X}_1'(t), \tilde{X}_2'(t), \ldots, \tilde{X}_n'(t)$ of the approximate solutions $\tilde{X}_1(t), \tilde{X}_2(t), \ldots, \tilde{X}_n(t)$, it is only necessary to apply Equation (26).
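As a sketch, Equations (25) and (26) can be evaluated as follows; the coefficient layout [a0, a1, ..., aM, b1, ..., bM] is an assumed convention, not one prescribed by the paper.

```python
import numpy as np

def fourier_approx(t, coeffs, t_bar, L, order=0):
    """Evaluate the truncated Fourier approximation of Equation (25), or a derivative via Equation (26).

    coeffs = [a0, a1, ..., aM, b1, ..., bM]; order = 0 gives x~(t), order = n gives x~^(n)(t).
    """
    t = np.atleast_1d(np.asarray(t, float))
    a0, rest = coeffs[0], np.asarray(coeffs[1:], float)
    M = rest.size // 2
    a, b = rest[:M], rest[M:]
    j = np.arange(1, M + 1)
    phase = np.outer(t - t_bar, j) * np.pi / L + order * np.pi / 2   # argument shifted by n*pi/2 per derivative
    scale = (j * np.pi / L) ** order                                 # factor (j*pi/L)^n
    series = (scale * (a * np.cos(phase) + b * np.sin(phase))).sum(axis=1)
    return series + (a0 if order == 0 else 0.0)
```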

4.2. Constraint Conditions

When using differential equations to construct optimization problems, the found solution must satisfy two conditions. First, this solution must comply with the differential Equation (19). Secondly, this solution must also meet the requirements of the optimization problem, i.e., it needs to satisfy its initial or boundary conditions. To achieve this, it is necessary to transform the forms of the homogeneous and non-homogeneous boundary or initial conditions, establishing the forms as (28) or (29), respectively:
$$x(t_0) = 0, \;\; x'(t_0) = 0, \;\; x''(t_0) = 0, \;\; \ldots, \;\; x^{(n)}(t_0) = 0 \quad \Longrightarrow \quad \begin{cases} h_0(t_0) = \left| x(t_0) \right| = \left| \tilde{x}(t_0) \right| \\ h_1(t_0) = \left| x'(t_0) \right| = \left| \tilde{x}'(t_0) \right| \\ h_2(t_0) = \left| x''(t_0) \right| = \left| \tilde{x}''(t_0) \right| \\ \quad \vdots \\ h_n(t_0) = \left| x^{(n)}(t_0) \right| = \left| \tilde{x}^{(n)}(t_0) \right| \end{cases}$$
$$x(t_0) = x_0, \;\; x'(t_0) = x_0', \;\; x''(t_0) = x_0'', \;\; \ldots, \;\; x^{(n)}(t_0) = x_0^{(n)} \quad \Longrightarrow \quad \begin{cases} h_0(t_0) = \left| \dfrac{x(t_0)}{x_0} - 1 \right| = \left| \dfrac{\tilde{x}(t_0)}{x_0} - 1 \right| \\ h_1(t_0) = \left| \dfrac{x'(t_0)}{x_0'} - 1 \right| = \left| \dfrac{\tilde{x}'(t_0)}{x_0'} - 1 \right| \\ h_2(t_0) = \left| \dfrac{x''(t_0)}{x_0''} - 1 \right| = \left| \dfrac{\tilde{x}''(t_0)}{x_0''} - 1 \right| \\ \quad \vdots \\ h_n(t_0) = \left| \dfrac{x^{(n)}(t_0)}{x_0^{(n)}} - 1 \right| = \left| \dfrac{\tilde{x}^{(n)}(t_0)}{x_0^{(n)}} - 1 \right| \end{cases}$$
where
  • $h_0, h_1, \ldots, h_n$ represent the constraints of the optimization problem.

4.3. Objective Function and Fitness Function

Replace $x$, $x'(t)$, $x''(t)$, $\ldots$, $x^{(n)}(t)$ in the differential Equation (19) with the constructed approximate solution $\tilde{x}(t)$ and its derivatives $\tilde{x}'(t)$, $\tilde{x}''(t)$, $\ldots$, $\tilde{x}^{(n)}(t)$, respectively, to obtain the residual:
$$R(t) = F\left( t, \tilde{x}, \tilde{x}', \tilde{x}'', \ldots, \tilde{x}^{(n)} \right)$$
Choosing an appropriate evaluation function to test the accuracy of the approximate solution, the weighted residual function is adopted as the evaluation criterion for the approximate solution, in the form of:
$$W = \int_{t_0}^{t_n} \left| W(t) \right| \left| R(t) \right| \, dt$$
where $W(t)$ is the weight function. A smaller value of $W$ is preferable, indicating higher accuracy of the constructed approximate solution. The integral is evaluated numerically using the trapezoidal or Simpson rule.
For the standard form of the penalty function, with a penalty factor of 1, the penalty function is given by:
$$P = \sum_{j=1}^{m_1 + m_2} h_j$$
where
  • h j is the constraint condition;
  • m 1 is the number of boundary value conditions;
  • m 2 is the number of initial value conditions.
Build an appropriate fitness function to evaluate the quality of individual positions and find the optimal solution through iterative loops.
The penalty function value is added to the residual function (or to a mean-square-error criterion) to obtain the fitness function F:
$$F = W + P$$
Transforming the problem of finding a definite solution of the ordinary differential equation into the minimization of the residual-based fitness over the feasible domain $\Omega$, with the boundary or initial values as constraint conditions, the differential equation problem becomes an optimization problem of the form:
$$\min \; F$$
In the context of mechanical engineering and other fields, this approach offers significant potential. Mechanical engineering often involves complex systems governed by differential equations, such as dynamics, fluid mechanics, and thermal systems. By applying this improved algorithm, engineers and researchers can obtain more accurate and efficient solutions to these equations, leading to better modeling, analysis, and design of mechanical systems.

4.4. Algorithm Procedure for Problem Solving

The specific steps for applying the improved CS to solve differential equations are as follows:
(1)
Express the differential equation in implicit form on the solution interval $[t_0, t_n]$, as in Equation (19): $F\left( t, x, x', \ldots, x^{(n)} \right) = 0$;
(2)
Transform the boundary conditions or initial value conditions into constraint forms (28) or (29);
(3)
Based on Equation (25), select an appropriate number M of terms in the Fourier series expansion;
(4)
Assign values to each undetermined coefficient a 0 , a 1 , a 2 , , a M , b 1 , , b M in the approximate function and introduce them into the ICSABOSM algorithm;
(5)
Call the ICSABOSM algorithm to search for the undetermined coefficients.
(6)
Calculate the values of the approximate solution $\tilde{x}(t)$ at the grid points $t_j = t_0 + j\Delta t$, with $\Delta t$ as the step size, i.e., $x(t_j) \approx \tilde{x}(t_j)$;
(7)
Calculate the approximate values of the derivatives at the points $t_j$ using Equation (26): $\tilde{x}'(t_j), \tilde{x}''(t_j), \ldots, \tilde{x}^{(n)}(t_j)$;
(8)
Construct the residual function R ( t ) ;
(9)
Choose an appropriate fitness function based on the target function equation and calculate the fitness function value for each cuckoo’s current position;
(10)
Repeat steps (6) to (10) until the stopping criteria of the ICSABOSM algorithm are met.
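To make steps (1)–(10) concrete, the sketch below assembles the fitness $F = W + P$ for the first-order equation solved in Section 4.5.1 ($y'(t) + 2y(t) = \sin t$, $y(0) = 1$), reusing fourier_approx from Section 4.1. The grid size, the choice $W(t) \equiv 1$, the number of terms $M = 5$, and the search bounds are assumptions for illustration.

```python
import numpy as np

def make_fitness(M=5, t0=0.0, tn=3.0, n_grid=61):
    """Fitness F = W + P for y'(t) + 2 y(t) = sin(t), y(0) = 1 (cf. Section 4.5.1)."""
    t = np.linspace(t0, tn, n_grid)
    t_bar, L = 0.5 * (t0 + tn), tn - t0

    def fitness(coeffs):
        y = fourier_approx(t, coeffs, t_bar, L, order=0)     # approximate solution x~(t_j)
        dy = fourier_approx(t, coeffs, t_bar, L, order=1)    # its derivative x~'(t_j)
        r = np.abs(dy + 2.0 * y - np.sin(t))                 # |R(t_j)| for this equation
        W = np.sum((r[:-1] + r[1:]) / 2.0) * (t[1] - t[0])   # trapezoidal rule, weight W(t) = 1 assumed
        y0 = fourier_approx(t0, coeffs, t_bar, L)[0]
        P = abs(y0 / 1.0 - 1.0)                              # penalty for the condition y(0) = 1
        return W + P                                         # fitness F = W + P

    return fitness

# 2*M + 1 = 11 Fourier coefficients searched, e.g., in [-3, 3] with the ICSABOSM sketch:
# coeffs, F_val = icsabosm(make_fitness(), lower=[-3] * 11, upper=[3] * 11, pop_size=400, t_max=10000)
```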

4.5. Numerical Examples and Results Analysis

4.5.1. First-Order Linear Differential Equation

Solving the first-order linear differential equation with initial condition: y(0) = 1:
$$\frac{d}{dt} y(t) + 2y(t) = \sin(t), \quad t \in [0, 3]$$
The exact solution of this equation is $y(t) = \frac{2}{5}\sin(t) - \frac{1}{5}\cos(t) + \frac{6}{5}e^{-2t}$.
The search space of the improved CS is influenced by the population size, resulting in noticeable differences in optimization results. When the population size is small, individuals lack diversity and are prone to converge to local optimal values. Conversely, when the population size is large, the search space increases, making it easier to obtain global optimal solutions but at the cost of increased time complexity. Population size, the number of iterations, and constraints all impact the optimization results. Based on the algorithm workflow and multiple experiments, the algorithm’s population size was set to 400, and the number of iterations was set to 10,000. For the above first-order linear differential equation, we used the least squares basis functions and the Fourier series to construct the approximate functions of the equation.
Figure 10 presents a comparative analysis between the numerical and analytical solutions of the first-order linear differential equation, and Figure 11 shows the deviations between them. For this first-order linear equation, the enhanced algorithm markedly improves the accuracy of the approximate solution, as evidenced by the reduced mean squared error and absolute error between the numerical and analytical solutions. These findings also point to the precision of the algorithm in solving initial and boundary value problems for differential equations.
The figures presented above clearly illustrate that the improved algorithm significantly enhances the accuracy of the approximate solutions obtained when solving differential equations.

4.5.2. Second-Order Nonlinear Differential Equation

Selecting a second-order nonlinear differential equation:
$$y\left( y'' + \pi^2 y - y' \right) = \frac{2\pi \sin(\pi x)\cos(\pi x)\left( e^{2x} - e^{-2x} \right)}{\left( e^5 - e^{-5} \right)^2}, \qquad y(0) = 0, \quad y'(0) = 0$$
The analytical solution of this equation is $y(x) = \frac{\left( e^x - e^{-2x} \right)\sin(\pi x)}{e^5 - e^{-5}}$.
For the above second-order nonlinear differential equation, we used the least squares basis functions and the Fourier series to construct the approximate functions of the equation.
In accordance with the algorithmic procedure, the population size of the algorithm is set at 500, and the number of iterations is fixed at 10,000. The relative error between the numerically approximated solution and the exact solution is presented in Table 4.
As indicated by Table 4 and Figure 12, the Fourier function approximation method employed in this study yields numerical solutions of higher precision compared to the least squares basis function approximation, as evidenced by the smaller relative error between the numerical and analytical solutions.
In summary, empirical evidence demonstrates that, whether dealing with first-order linear differential equations or second-order nonlinear differential equations, the approximate solutions obtained through the method proposed in this paper closely match the analytical solutions. Consequently, the algorithm presented in this paper achieves a high degree of accuracy in solving boundary and initial value problems for both first-order and higher-order differential equations.

4.5.3. No Analytic Solution to Differential Equation

Consider a differential equation with no analytic solution:
$$\frac{d}{dt} y(t) = t^2 + y^2(t), \quad t \in [0, 1]$$
The interval [ 0 , 1 ] is divided equally into ten parts for point selection and training. An approximate solution to the differential equation is sought using the improved algorithm in conjunction with the fourth-order Runge–Kutta method, with the results presented in Figure 13. As can be observed from Figure 13, the approximate solution obtained in this study is closely aligned with that derived from the fourth-order Runge–Kutta method, demonstrating the efficacy of the solution method proposed in this paper.
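For reference, a sketch of the fourth-order Runge–Kutta comparison follows; the initial value y(0) = 0 is not stated in the text and is assumed here purely for illustration.

```python
import numpy as np

def rk4_solve(f, y0, t0, t1, n_steps=10):
    """Classical fourth-order Runge-Kutta solution of y' = f(t, y) on [t0, t1]."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Reference values on ten subintervals of [0, 1] for y' = t^2 + y^2, assuming y(0) = 0
ts, ys = rk4_solve(lambda t, y: t ** 2 + y ** 2, y0=0.0, t0=0.0, t1=1.0, n_steps=10)
```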

5. Conclusions

To address the deficiency of the CS algorithm in lacking information sharing among individuals, this paper introduces an enhanced CS algorithm based on a sharing mechanism. The population is initialized using the good points set method, replacing the random initialization employed in traditional algorithms. The improved algorithm introduces a feasible sharing area, incorporating information from other cuckoos with superior qualities, supplanting the original approach that relied solely on the best cuckoo in the population. This establishes a global search strategy based on the sharing mechanism. The adoption of a dual-difference vector, in lieu of the single-difference vector used in the CS, formulates a local search strategy under the same mechanism. By transforming the problem of solving differential equations into an optimization problem, and applying the enhanced algorithm to solve this optimization problem, a novel approach to solving differential equations is provided, thereby expanding the applications of both the ICSABOSM algorithm and the CS algorithm.

Funding

This research received no external funding.

Data Availability Statement

Data are included in the article.

Acknowledgments

The author thanks the reviewers for their comments and suggestions.

Conflicts of Interest

The author declares no conflicts of interest.

Figure 1. Trajectory of Lévy flight: (a) δ = 1.5; (b) δ = 2.0.
Figure 2. Global search strategy of the CS algorithm.
Figure 3. Cuckoo search algorithm process.
Figure 4. Comparison of the distribution in two-dimensional space between 200 best points and 200 random points: (a) two-dimensional best point set; (b) two-dimensional random point set.
Figure 5. Partial control function graphs.
Figure 6. Selecting the cuckoo to share information with the i-th cuckoo.
Figure 7. Improved algorithm flowchart.
Figure 8. Partial schematic diagrams of standard test functions in three dimensions: (a) three-dimensional schematic diagram of function f1; (b) three-dimensional schematic diagram of function f2.
Figure 9. Convergence curve graph: (a) the convergence curves of function f1; (b) the convergence curves of function f2; (c) the convergence curves of function f3; (d) the convergence curves of function f4.
Figure 10. Comparison of numerical solution and analytical solution for first-order linear differential equation.
Figure 11. The relative error between the numerical solution obtained and the analytical solution.
Figure 12. Comparison between numerical and analytical solutions of second-order nonlinear differential equations.
Figure 13. Comparison with the fourth-order Runge–Kutta method.
Table 1. Algorithm test results with a discovery probability of 0.75.

Function   Algorithm   Best Value        Worst Value       Average Value
f1         ICSABOSM    7.5378 × 10^−39   3.4674 × 10^−32   4.5386 × 10^−36
f1         CS          4.5726 × 10^−31   2.6935 × 10^−25   6.5320 × 10^−29
f2         ICSABOSM    0                 7.0054 × 10^−35   5.3022 × 10^−40
f2         CS          0                 6.2378 × 10^−26   4.2057 × 10^−32
f3         ICSABOSM    4.2648 × 10^−31   7.2538 × 10^−23   8.5478 × 10^−28
f3         CS          7.4875 × 10^−22   4.5902 × 10^−15   4.7326 × 10^−20
f4         ICSABOSM    2.0018 × 10^−15   1.4608 × 10^−8    9.4010 × 10^−11
f4         CS          1.2450 × 10^−12   3.1634 × 10^−3    3.7025 × 10^−6
Table 2. Algorithm test results with a discovery probability of 0.25.

Function   Algorithm   Best Value        Worst Value       Average Value
f1         ICSABOSM    1.4473 × 10^−33   5.5632 × 10^−28   1.9824 × 10^−31
f1         CS          6.3557 × 10^−27   8.4367 × 10^−23   5.2564 × 10^−25
f2         ICSABOSM    0                 6.5837 × 10^−17   5.3704 × 10^−30
f2         CS          8.3642 × 10^−17   2.4579 × 10^−11   5.3578 × 10^−15
f3         ICSABOSM    4.8346 × 10^−25   3.6849 × 10^−13   7.2841 × 10^−21
f3         CS          7.3648 × 10^−14   4.3574 × 10^−9    9.3572 × 10^−13
f4         ICSABOSM    4.2602 × 10^−10   9.4738 × 10^−5    5.6173 × 10^−7
f4         CS          7.6469 × 10^−5    4.6328 × 10^−2    3.7468 × 10^−3
Table 3. Parameter settings.

ICSABOSM:  n = 20, α = 0.01, Pa = 0.75, Tmax = 1000/8000, d = 2/13
CS:        n = 20, α = 0.01, Pa = 0.75, Tmax = 1000/8000, d = 2/13
CS-FA:     n = 20, α = 0.01, η0 = 1.0, τ = 1.0, η = 1.5, ε1 = 0.5, Pa = 0.75, Tmax = 1000/8000, d = 2/13
FA:        n = 20, α = 0.5, η0 = 0.2, τ = 1.0, Tmax = 1000/8000, d = 2/13
PSO:       n = 20, ω = 0.9, c1 = 2.0, c2 = 2.0, Tmax = 1000/8000, d = 2/13
Table 4. Comparison of mean squared error and maximum absolute error between numerical solution and analytical solution for first-order linear differential equation.

                          Fourier Series   Least Squares Basis Functions
Mean squared error        8.4 × 10^−9      2.1 × 10^−7
Maximum absolute error    0.0013           0.0122