Article

Studying the Effect of Introducing Chaotic Search on Improving the Performance of the Sine Cosine Algorithm to Solve Optimization Problems and Nonlinear System of Equations

by Mohammed A. El-Shorbagy 1,2,* and Fatma M. Al-Drees 1
1 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
2 Department of Basic Engineering Science, Faculty of Engineering, Menoufia University, Shebin El-Kom 32511, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(5), 1231; https://doi.org/10.3390/math11051231
Submission received: 22 January 2023 / Revised: 22 February 2023 / Accepted: 27 February 2023 / Published: 2 March 2023
(This article belongs to the Special Issue Mathematical Methods and Models in Software Engineering)

Abstract
The development of many engineering and scientific models depends on solving nonlinear systems of equations (NSEs), and progress in these fields depends on their efficient resolution. Because classical methods suffer from several drawbacks, NSEs are often reformulated and solved as optimization problems. The purpose of this work is to propose the chaotic search sine cosine algorithm (CSSCA), a new optimization approach for solving NSEs. CSSCA employs a chaotic search to overcome common limitations of optimization techniques: a lack of diversity in solutions, an imbalance between exploitation and exploration, and slow convergence toward the optimal solution. The chaotic logistic map has been employed by many studies and has demonstrated its effectiveness in raising the quality of solutions and offering the best performance, so it is used here as the local search strategy. Three kinds of test functions (unimodal, multimodal, and composite test functions), as well as numerous NSEs (a combustion problem, a neurophysiology problem, an arithmetic application, and nonlinear algebraic equations), were employed to assess CSSCA. To demonstrate the significance of the changes made in CSSCA, its results are contrasted with those of the original SCA: CSSCA's average improvement rate was roughly 12.71%, demonstrating that it is very effective at solving NSEs. Finally, the outcomes showed that adding a chaotic search to the SCA improves results, and that tuning the chaotic search's parameters enables still better outcomes to be attained.
MSC:
65K10; 68T20; 68U01; 68V20; 68W30; 90B99; 90C59

1. Introduction

Nonlinear systems of equations (NSEs) serve as the foundation for many engineering and scientific models, and finding solutions to them is crucial for the growth of these fields. NSEs arise directly in some applications and indirectly when practical models are converted into them [1]. It is therefore important to study robust and effective methods for solving NSEs.
Some of the methods traditionally used to solve NSEs include the bisection technique, Muller's method, the false-position method, the steepest descent methods, the Broyden method, the Levenberg–Marquardt algorithm, the branch and prune approach, the Newton/damped Newton methods, Halley's method, and the Secant method [2]. The two best-known methods for resolving NSEs are the Secant and Newton methods [3]. Other methods handle NSEs by treating them as an optimization problem and applying the augmented Lagrangian method [4,5]. These methods take a long time, tend to diverge, are ineffective when dealing with a group of nonlinear equations, and are sensitive to the initial conditions. To create the Jacobian matrix, they also require a time-consuming procedure to calculate partial derivatives [6]. Finally, they are slow when dealing with large-scale problems.
Meta-heuristic algorithms have been created as possible alternatives to numerical techniques to address their limitations, such as the need for multiple initial guesses and the dependence on gradient information [7]. Meta-heuristic algorithms are built on a foundation of randomization and local search, enabling strong exploration and good exploitation. They also offer additional features such as simplicity and flexibility. Meta-heuristic optimization algorithms fall into five classifications, illustrated in Figure 1: evolutionary, swarm-based, physics-based, human-based, and biology-based algorithms [8–54].
Evolutionary-based algorithms are built on biological evolution, which includes reproduction, mutation, recombination, and selection. They adhere to the idea of survival of the fittest for a population of candidates (i.e., a collection of solutions) in a particular setting. Genetic programming (GP) [8], genetic algorithms (GA) [9], evolution strategy (ES) [10], biogeography-based optimizers (BBO) [11], and differential evolution (DE) [12] are some of the well-known evolutionary-based techniques. The second classification is swarm-based techniques, which are inspired by collective behaviors such as the flocking of birds; these mimic how swarms interact with one another and with their surroundings. Popular swarm-based techniques include ant colony optimization (ACO) [13], the bat-inspired algorithm (BA) [14], particle swarm optimization (PSO) [15], the firefly algorithm (FA) [16], the fruit fly optimization (FFO) algorithm [17], the artificial bee colony (ABC) algorithm [18], cuckoo search (CS) [19], monkey search [20], the grey wolf optimizer (GWO) [21], the ant lion algorithm (ALA) [22], manta-ray foraging optimization (MRFO) [23], the salp swarm algorithm (SSA) [24], and others [25,26,27]. Physics-based algorithms carry out communication and movement among the agents throughout the search space according to physical laws such as electromagnetic force, gravitational force, and inertia force. Some algorithms in this classification are simulated annealing (SA) [28], colliding bodies optimization (CBO) [29], the gravitational search algorithm (GSA) [30], the sine cosine algorithm (SCA) [31], charged system search (CSS) [32], big-bang big-crunch (BBBC) [33], central force optimization (CFO) [34], and others [35,36,37,38]. Human-based algorithms are the fourth category and imitate human behavior, such as the league championship algorithm (LCA) [39], the harmony search (HS) algorithm [40], the mine blast algorithm (MBA) [41], the taboo search (TS) algorithm [42], the teaching-learning strategy (TLS) [43], the soccer league competition (SLC) algorithm [45], the imperialist competitive algorithm (ICA) [46], the seeker optimization algorithm (SOA) [47], and the exchange market algorithm (EMA) [48]. The final classification is inspired by biological behavior, such as the artificial immune system (AIS) [49] and bacterial foraging optimization (BFO) [54].
The main characteristics of meta-heuristic algorithms are their exploration and exploitation searches: exploration is in charge of finding the various interesting regions within the search space, while exploitation enhances the pursuit of optimal solutions inside a specified area. To achieve the best result during the search process, these features must be tuned; however, the stochastic nature of meta-heuristic algorithms makes balancing them very challenging. Furthermore, the effectiveness of a particular algorithm in solving one set of problems does not imply that it will be superior in handling other problems of a different nature. This fact poses a difficult problem and spurs scientists and researchers to continue creating new meta-heuristic algorithms for resolving a wider range of practical issues [55].
Several meta-heuristic algorithms have been employed to resolve NSEs, including the GA [56], PSO [57], ABC [58], an improved cuckoo search algorithm [59], the FA [60], and the grasshopper optimization algorithm (GOA) [61]. Algelany and El-Shorbagy [56] suggested a chaotic enhanced GA (CEGA) that uses a new concept, chaotic noise, to address the inadequacies of optimization approaches, such as a lack of diversity in solutions, an imbalance between exploitation and exploration, and sluggish convergence to the ideal solution. Mo and Liu [57] proposed adding a conjugate direction method (CDM) to PSO for addressing NSEs; the CDM and PSO are combined in this approach, helping PSO avoid local minima and efficiently optimize high-dimensional optimization problems. Jia and He [58] combined the ABC and PSO algorithms to propose a hybrid ABC technique for resolving NSEs; the hybrid algorithm combines the benefits of both strategies to avoid falling into a local or premature optimum. Additionally, Zhou and Li [59] suggested an improved cuckoo search algorithm (CSA) to deal with NSEs, using a unique encoding technique that guarantees the supplied solution is feasible without changing the cuckoo's evolution. Ariyaratne et al. [60] offered an augmented FA to tackle NSEs as an optimization problem, with several advantages, such as the ability to produce several root estimates at once and the removal of the requirement for initial guesses, function continuity, or even differentiability. Finally, a modified GOA-based GA is proposed in [61] to solve NSEs; to escape local minima and advance the search process, the improvements rely on specific mathematical presumptions and on changing the range of the GOA control parameter.
The SCA is a meta-heuristic algorithm developed by Mirjalili [31] for solving optimization problems. It uses a mathematical model based on the trigonometric sine and cosine functions to find the optimal solution. Mirjalili [31] demonstrated that SCA outperforms other contemporary meta-heuristic algorithms in terms of efficiency. However, SCA, like any meta-heuristic algorithm, depends on adaptive and random factors, so a satisfactory solution is not always produced. Furthermore, SCA has no internal memory to keep track of previously discovered solutions. These shortcomings motivated the introduction of a new optimization method in this paper, named the chaotic search sine cosine algorithm (CSSCA). Numerous optimization methods have used a chaotic mathematical strategy to obtain better performance [62]; this strategy has drawn a lot of interest and has been applied in a variety of fields, including optimization. The proposed CSSCA combines the chaotic search and SCA. This combination aims to improve SCA by addressing its shortcomings: the lack of diversity in solutions, the imbalance between exploitation and exploration, and the sluggish convergence to the optimal solution.
The main contributions of this study include the following:
(1)
Introducing the chaotic search sine cosine algorithm (CSSCA), a novel method that combines chaotic search and SCA to resolve NSEs.
(2)
Utilizing chaotic search to enhance the SCA-obtained solution.
(3)
Numerous well-known functions and many NSEs are used to test CSSCA.
(4)
Demonstrating the outstanding performance of the proposed method with numerical results and proving it statistically.
(5)
Examining the effect of introducing the chaotic search into SCA on enhancing the outcomes by altering the chaotic search's parameters and attaining improved results.
The structure of the paper is as follows. Methods and materials are covered in Section 2. Section 3 describes the proposed methodology in full. Section 4 presents the numerical results and discussion. Section 5 concludes with observations and recommendations.

2. Methods and Materials

This section gives an overview of nonlinear systems of equations, the SCA, and chaos theory.

2.1. Nonlinear System of Equations (NSEs)

An NSE is described mathematically as follows [60]:

$$\text{NSEs} = \begin{cases} f_1(z) = 0 \\ f_2(z) = 0 \\ \vdots \\ f_Q(z) = 0 \end{cases} \tag{1}$$

where $z = (z_1, z_2, \ldots, z_n)$ is a vector with $n$ components belonging to $\mathbb{R}^n$, and $f_q$ ($q = 1, 2, \ldots, Q$) are the nonlinear functions that map $z = (z_1, z_2, \ldots, z_n)$ from the $n$-dimensional space $\mathbb{R}^n$ to the real line. Some of the functions may be linear, while others are nonlinear. To solve an NSE, one must locate a solution at which each of the $Q$ functions above equals zero [63].
Definition 1.
The solution $z^* = (z_1^*, z_2^*, \ldots, z_n^*)$ is referred to as the optimal solution of the NSEs if $f_q(z^*) = 0$ for all $q = 1, 2, \ldots, Q$.
By combining the left-hand sides of all equations and utilizing the absolute value function, many methods [64,65] convert the NSEs into a constrained optimization problem that can be solved:

$$F(z) = \left| f_1(z) + f_2(z) + \cdots + f_Q(z) \right|, \quad \text{subject to} \begin{cases} f_1(z) = 0 \\ f_2(z) = 0 \\ \vdots \\ f_Q(z) = 0 \end{cases} \tag{2}$$

where $F(z)$ denotes the objective function. The objective function in (2) attains its global minimum when all the nonlinear equations equal 0.
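To make this transformation concrete, the following minimal Python sketch (our own illustration, not code from the paper) builds the objective of Equation (2) for Case 1 of Section 4.2; the helper name `nse_objective` is hypothetical, and a solver would normally minimize $F$ over the stated bounds rather than evaluate it at a known root.

```python
import numpy as np

def nse_objective(fs):
    """Build the objective F(z) of Equation (2) from a list of residual functions f_q."""
    def F(z):
        # Main term of Equation (2): absolute value of the summed left-hand sides.
        return abs(sum(f(z) for f in fs))
    return F

# Case 1 of Section 4.2: two nonlinear algebraic equations.
f1 = lambda z: z[0] + z[1] + z[0]**2 + z[1]**2 - 8.0
f2 = lambda z: z[0] + z[1] + z[0] * z[1] - 5.0

F = nse_objective([f1, f2])
print(F(np.array([2.0, 1.0])))  # (2, 1) solves both equations, so F = 0
```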

2.2. Traditional SCA

The "No Free Lunch" (NFL) theorem [66] and the ongoing increase in real-world problems have inspired many academics to develop new algorithms. Depending on the problem type and its characteristics, various algorithms have been proposed in the literature. As a result, no universal method can solve every problem and guarantee a global solution; this fact is known as the NFL theorem. According to the NFL theorem, Algorithm A can outperform Algorithm B on some cost functions, while Algorithm B can outperform Algorithm A on others [67].
SCA is a population-based algorithm inspired by mathematical formulas and published by Mirjalili in 2016 [31]. The fundamental idea behind SCA is to use the behavior of the mathematical sine and cosine functions to drive the optimization search. Like many optimization approaches, SCA begins with an initialization phase, in which a population of agents generates initial solutions at random. These agents are then updated iteratively using stochastic parameters and the properties of the sine and cosine functions. The SCA working phases are described as follows.
  • Updating Phase
Each solution can be updated at this phase as follows:
$$x_i(t+1) = \begin{cases} x_i(t) + A_1 \cdot \sin(A_2) \cdot \left| A_3\, x_{best}(t) - x_i(t) \right|, & r < 0.5 \\ x_i(t) + A_1 \cdot \cos(A_2) \cdot \left| A_3\, x_{best}(t) - x_i(t) \right|, & r \geq 0.5 \end{cases} \tag{3}$$
where $x_{best}(t)$ denotes the best solution vector at the $t$-th iteration, and $x_i(t+1)$, $x_i(t)$ are the $i$-th ($i = 1, 2, \ldots, n$) solution vectors at the $(t+1)$-th and $t$-th iterations, respectively. The switch between the sine and cosine forms is governed by $r$, a uniformly distributed random number in the range $(0, 1)$. The movement of the present solution, either towards or away from $x_{best}(t)$, is controlled by the direction parameter $A_2$, while the weight parameter $A_3$ can emphasize exploration ($A_3 > 1$) or increase exploitation ($A_3 < 1$).
Figure 2 provides a conceptual illustration of how the sine and cosine functions update the position within the interval $[-2, 2]$; the direction parameter $A_2$ governs this adjustment.
  • Balancing Phase
The balancing phase is in charge of maintaining an acceptable balance between exploration and exploitation in order to prevent premature convergence. The parameter $A_1$ scales the search in the vicinity of the existing solution, which may land between $x_{best}(t)$ and $x_i(t)$ or outside them. Hence, $A_1$ contributes to the exploration feature during the first half of the total number of iterations and encourages exploitation during the second half. It can be expressed by the following formula:
$$A_1 = 2 - \frac{2l}{L} \tag{4}$$
where the current and maximum iterations are defined, respectively, by l and L .
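As a concrete sketch of Equations (3) and (4), the following Python function performs one SCA iteration over a population; the name `sca_step`, the bound clipping, and the sampling ranges $A_2 \in [0, 2\pi]$ and $A_3 \in [0, 2]$ (the common choices reported for SCA [31]) are our assumptions rather than a verbatim transcription of the paper's implementation.

```python
import numpy as np

def sca_step(X, x_best, l, L, lb, ub):
    """One SCA iteration over a population X of shape (N, n), per Equations (3)-(4)."""
    A1 = 2.0 - 2.0 * l / L                   # Equation (4): decreases linearly with l
    N, n = X.shape
    A2 = 2.0 * np.pi * np.random.rand(N, n)  # direction parameter
    A3 = 2.0 * np.random.rand(N, n)          # weight on the best solution
    r = np.random.rand(N, n)                 # switch between the sine and cosine forms
    step = A1 * np.abs(A3 * x_best - X)      # shared factor of Equation (3)
    X_new = np.where(r < 0.5, X + step * np.sin(A2), X + step * np.cos(A2))
    return np.clip(X_new, lb, ub)            # keep agents inside the search bounds
```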

2.3. Chaos Theory

Numerous areas of the optimization sciences have recently benefited from the application of chaos theory mathematics. Hénon [68] first discussed chaos theory, and Lorenz [69] simplified it. Chaos is a frequent nonlinear phenomenon in nature that correctly depicts the complexity of a system and can be used for optimization. Chaos fundamentally differs from statistical randomness in its capacity to efficiently search the search space of interest and enhance the effectiveness of optimization techniques.
As a novel method for global optimization, chaos-based optimization algorithms have attracted a lot of attention. A chaotic map is an evolution function that behaves chaotically in some way [70,71,72]; maps can be parameterized by a discrete-time or continuous-time parameter, and iterated functions are the most common form. Appendix A lists several well-known chaotic maps from the literature, including the sinusoidal, singer, Chebyshev, sine, tent, Gauss, circle, piecewise, logistic, intermittency, Liebovitch, and iterative maps [73].
According to the findings in [74], the logistic map improves the quality of the solutions and provides the best performance; this study therefore adopts it.
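As a small illustration (the seed $z_0 = 0.37$ is an arbitrary admissible value, not one prescribed by the paper), the logistic map with control parameter 4 produces a dense, non-repeating sequence in $(0, 1)$:

```python
def logistic_sequence(z0=0.37, steps=10):
    """Iterate z_k = 4 z_{k-1} (1 - z_{k-1}); z0 must lie in (0, 1) and avoid
    the fixed values {0.0, 0.25, 0.5, 0.75, 1.0} so the orbit stays chaotic."""
    z, out = z0, []
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)
        out.append(z)
    return out

print(logistic_sequence())  # e.g., [0.9324, 0.2521, 0.7542, ...]
```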

3. The Proposed Methodology

The proposed approach, which combines the chaotic search (CS) method and the sine cosine algorithm (SCA), is presented in this section. The approach comprises two phases. In the first phase, an approximate solution to the optimization problem is obtained using SCA. The second phase then employs CS to speed up convergence and improve solution quality. The fundamental steps of CSSCA can be summarized as follows:
Phase 1: SCA
Step 1. N agents are initially generated at random, and the SCA parameters are initialized, creating an initial population that satisfies the feasibility of the solved problem.
Step 2. The desired optimization fitness function is assessed for each agent.
Step 3. Set the best position x b e s t ( t ) and its objective value to the best initial agent’s position and value for the first generation.
Step 4. According to Equation (3), search agents’ positions are updated.
Step 5. Choose the population member with the best objective value as the best agent. Update x b e s t ( t ) and its objective value with the location and objective value of the current best agent if the objective value is superior to the objective value of x b e s t ( t ) .
Step 6. The procedure terminates, returning the best result so far as the SCA global optimum, if the maximum number of generations has been reached or the population's agents have converged (convergence takes place when every agent in the population occupies the same position). Otherwise, update the SCA parameters ($A_1$, $A_2$, $A_3$, and $r$) and go to Step 2.
Finally, SCA is used to generate an approximate solution, x * = ( x 1 * ,   x 2 * , , x n * ) .
Phase 2: Chaotic Search (CS)
A chaos-based local search can perturb $x^*$ within its local region, which is then explored. CS is described in more detail as follows:
Step 1. Determine the variance range $[a_i, b_i]$, $i = 1, 2, \ldots, n$, of the CS boundary by $a_i = x_i^* - \varepsilon$ and $b_i = x_i^* + \varepsilon$.
Step 2. Create a chaotic random number $z_k$ by the logistic map:

$$z_k = 4 z_{k-1} (1 - z_{k-1}), \quad z_0 \in (0, 1), \quad z_0 \notin \{0.0, 0.25, 0.50, 0.75, 1.0\} \tag{5}$$
Step 3. The chaos variable $z_k$ is mapped into the variance range $[a_i, b_i]$ of the optimization variable:

$$x_{i,k} = a_i + (b_i - a_i) z_k \tag{6}$$

which leads to:

$$x_{i,k} = x_i^* - \varepsilon + 2 \varepsilon z_k, \quad i = 1, \ldots, n. \tag{7}$$
Step 4. If $f(x_k) < f(x^*)$, set $x^* = x_k$; otherwise, continue with the next chaotic iteration $k$.
Step 5. If $f(x^*)$ has not improved after all $k$ iterations, stop CS and return $x^*$ as the final solution.
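The following Python sketch summarizes Phase 2 under the simplifying assumption that the best point found over the $k$ chaotic iterations is retained; the function name `chaotic_search` and the default parameter values (which mirror $\varepsilon = 10^{-5}$ from Section 4.1) are illustrative only.

```python
import numpy as np

def chaotic_search(f, x_star, eps=1e-5, k_max=1000, z0=0.37):
    """Chaos-based local search around an SCA solution x_star (Phase 2).
    eps is the neighborhood radius; k_max is the number of chaotic iterations."""
    x_best, f_best = x_star.copy(), f(x_star)
    z = z0                                   # z0 in (0, 1), avoiding the fixed points
    for _ in range(k_max):
        z = 4.0 * z * (1.0 - z)              # logistic map, Equation (5)
        x_k = x_star - eps + 2.0 * eps * z   # Equation (7): same z_k on every coordinate
        f_k = f(x_k)
        if f_k < f_best:                     # Step 4: keep the improved point
            x_best, f_best = x_k, f_k
    return x_best, f_best
```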
Figure 3 displays the proposed algorithm’s flow chart.

4. Numerical Results

This section assesses the performance of the suggested algorithm using 19 test functions, whose mathematical formulations are provided in Appendix B [31]. To show the advantages of the recommended changes, the outcomes are contrasted with those achieved by the original SCA and by another hybrid algorithm, HGWOSCA, which combines the GWO and SCA. To ensure fair comparisons of the algorithms, a statistical analysis of the results is carried out using the non-parametric Friedman test and the Wilcoxon signed-rank test. Seven NSEs are also solved as a case study for CSSCA [75,76,77].
The suggested technique is implemented in MATLAB R2012b and run on a computer with Windows 10, an Intel(R) Core(TM) i7-6600U CPU at 2.60 GHz, and 16 GB of RAM.
Additionally, the CSSCA termination criterion is defined as

$$\delta = \left| F_t - F_{t-1} \right| \leq \varepsilon = 1 \times 10^{-20} \tag{8}$$
where $F_t$ is the estimated objective function value at iteration $t$, and $F_{t-1}$ is that at iteration $t-1$.

4.1. Results for 19 Test Functions

These test functions are solved by the original SCA, HGWOSCA, and the proposed CSSCA. For the computational studies, the population size $N$ is 1000, the specified neighborhood radius $\varepsilon$ is 0.00001, and the number of chaotic search iterations $k$ is 100,000.
Table 1 presents a comparison of the results achieved by the original SCA, HGWOSCA, and the proposed CSSCA. Table 1 confirms that CSSCA produces better solutions, on average, than those obtained by the original SCA and HGWOSCA. This indicates that introducing CS into SCA makes SCA perform clearly better and achieve positive results.
In addition, to demonstrate the improvement between the original SCA and the new CSSCA algorithm, the following percentage decrease is used:
$$\text{PD}\% = \frac{\left| \text{SCA Result} - \text{CSSCA Result} \right|}{\left| \text{SCA Result} \right|} \times 100 \tag{9}$$
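As a concrete check of Equation (9), take the $F_1$ row of Table 1 (SCA result 0.083239, CSSCA result 0.0535):

$$\text{PD}\% = \frac{\left| 0.083239 - 0.0535 \right|}{\left| 0.083239 \right|} \times 100 \approx 35.73\%,$$

which matches the value reported in the last column of Table 1.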
As indicated in the last column of Table 1, CSSCA improved the results significantly, with a 12.71% decrease on average. In other words, the aim of hybridizing SCA with CS is to escape local minima and advance the search process, and the average improvement of CSSCA over the original SCA, as measured by the PD% equation, is 12.71%. We can therefore conclude that CS directs SCA away from local minima and improves the search results.

4.1.1. Friedman Test

The Friedman test is used to statistically rank the significance of the algorithms [78]. Table 2 summarizes the test's outcomes. Since the p-value obtained in this statistical study is less than 0.05 (p = 0.022), there are significant differences between CSSCA and the other two algorithms examined. CSSCA wins this statistical investigation: Figure 4 displays the ranking of CSSCA and the competing algorithms, where the shortest bar on the graph represents the best algorithm and the longest depicts the worst. SCA earned the longest bar with a mean rank of 2.47, while CSSCA earned the shortest with a mean rank of 1.63. Hence, the graph shows that CSSCA outperforms the other algorithms by obtaining the top rank.

4.1.2. Wilcoxon Signed-Rank Test

The CSSCA's distinction from the other algorithms is demonstrated using the Wilcoxon signed-rank test [78], whose results are shown in Table 3. R+ is the total of all positive ranks, whereas R− is the total of all negative ranks. By achieving R+ values greater than R− values, CSSCA surpasses the other two algorithms in the comparison, as demonstrated in Table 3. We can therefore conclude that CSSCA is a more effective algorithm than the other methods.

4.2. Case Study: Solving NSEs

The CSSCA algorithm is used in this subsection to solve several kinds of nonlinear systems of equations; full details of these systems can be found in [75,76,77]. The NSEs are first transformed into an optimization problem using Equation (2), which the suggested method CSSCA then solves. The NSEs are described as follows:
  • Case 1: It contains the following two nonlinear algebraic equations:

$$F_1 = \begin{cases} f_1(x_1, x_2) = x_1 + x_2 + x_1^2 + x_2^2 - 8 = 0 \\ f_2(x_1, x_2) = x_1 + x_2 + x_1 x_2 - 5 = 0 \end{cases}$$

with $x_1 \in [-3.5, 2.5]$, $x_2 \in [-3.5, 2.5]$.
  • Case 2: It contains the following three nonlinear equations:

$$F_2 = \begin{cases} f_1(x_1, x_2, x_3) = 3 x_1^2 + \sin(x_1 x_2) - x_3^2 + 2 = 0 \\ f_2(x_1, x_2, x_3) = 2 x_1^3 - x_2^2 - x_3 + 3 = 0 \\ f_3(x_1, x_2, x_3) = \sin(2 x_1) + x_2 + \cos(x_2 x_3) - 1 = 0 \end{cases}$$

with $x_1 \in [-5, 5]$, $x_2 \in [1, 3]$, $x_3 \in [-5, 5]$.
  • Case 3: It contains a non-differentiable system of nonlinear equations:

$$F_3 = \begin{cases} f_1(x_1, x_2) = x_1^2 - x_2 + 1 + \frac{1}{9} \left| x_1 - 1 \right| = 0 \\ f_2(x_1, x_2) = x_2^2 - x_1 - 7 + \frac{1}{9} \left| x_2 \right| = 0 \end{cases}$$

with $x_1 \in [-2, 2]$, $x_2 \in [1, 6]$.
  • Case 4: It contains the following two nonlinear equations:

$$F_4 = \begin{cases} f_1(x_1, x_2) = \cos(2 x_1) - \cos(2 x_2) - 0.4 = 0 \\ f_2(x_1, x_2) = \sin(2 x_2) - \sin(2 x_1) + 2 (x_2 - x_1) - 1.2 = 0 \end{cases}$$

with $x_1 \in [-10, 10]$, $x_2 \in [-10, 10]$.
  • Case 5: It is a combustion problem with a complex set of nonlinear equations, as shown below:

$$F_5 = \begin{cases} f_1 = x_2 + 2 x_6 + x_9 + 2 x_{10} - 10^{-5} = 0 \\ f_2 = x_3 + x_8 - 3 \times 10^{-5} = 0 \\ f_3 = x_1 + x_3 + 2 x_5 + 2 x_8 + x_9 + x_{10} - 5 \times 10^{-5} = 0 \\ f_4 = x_4 + 2 x_7 - 10^{-5} = 0 \\ f_5 = 0.5140473 \times 10^{-7} x_5 - x_1^2 = 0 \\ f_6 = 0.1006932 \times 10^{-6} x_6 - 2 x_2^2 = 0 \\ f_7 = 0.7816278 \times 10^{-15} x_7 - x_4^2 = 0 \\ f_8 = 0.1496236 \times 10^{-6} x_8 - x_1 x_3 = 0 \\ f_9 = 0.6194411 \times 10^{-7} x_9 - x_1 x_2 = 0 \\ f_{10} = 0.2089296 \times 10^{-14} x_{10} - x_1 x_2^2 = 0 \end{cases}$$

with $-10 \leq x_1, x_2, \ldots, x_{10} \leq 10$.
  • Case 6: It is a neurophysiology problem with a complex set of nonlinear equations, as illustrated below:

$$F_6 = \begin{cases} f_1 = x_1^2 + x_3^2 - 1 = 0 \\ f_2 = x_2^2 + x_4^2 - 1 = 0 \\ f_3 = x_5 x_3^3 + x_6 x_4^3 = 0 \\ f_4 = x_5 x_1^3 + x_6 x_2^3 = 0 \\ f_5 = x_5 x_1 x_3^2 + x_6 x_4^2 x_2 = 0 \\ f_6 = x_5 x_1^2 x_3 + x_6 x_2^2 x_4 = 0 \end{cases}$$

with $-10 \leq x_1, x_2, \ldots, x_6 \leq 10$.
  • Case 7: It is an arithmetic application that has a complex set of nonlinear equations, as illustrated below:

$$F_7 = \begin{cases} f_1 = x_1 - 0.25428722 - 0.18324757\, x_4 x_3 x_9 = 0 \\ f_2 = x_2 - 0.37842197 - 0.16275449\, x_1 x_{10} x_6 = 0 \\ f_3 = x_3 - 0.27162577 - 0.16955071\, x_1 x_2 x_{10} = 0 \\ f_4 = x_4 - 0.19807914 - 0.15585316\, x_7 x_1 x_6 = 0 \\ f_5 = x_5 - 0.44166728 - 0.19950920\, x_7 x_6 x_3 = 0 \\ f_6 = x_6 - 0.14654113 - 0.18922793\, x_8 x_5 x_{10} = 0 \\ f_7 = x_7 - 0.42937161 - 0.21180486\, x_2 x_5 x_8 = 0 \\ f_8 = x_8 - 0.07056438 - 0.17081208\, x_1 x_7 x_6 = 0 \\ f_9 = x_9 - 0.34504906 - 0.19612740\, x_{10} x_6 x_8 = 0 \\ f_{10} = x_{10} - 0.42651102 - 0.21466544\, x_4 x_8 x_1 = 0 \end{cases}$$

with $-10 \leq x_1, x_2, \ldots, x_{10} \leq 10$.

Results for Nonlinear Systems of Equations

These NSE cases are solved under various conditions for the chaotic search parameters to evaluate the effect of changing these parameters on the solution and its quality. The conditions are different values of the initial chaotic random number $z_0$, the specified radius of the chaotic search $\varepsilon$, and the number of chaotic search iterations $k$, as follows:
$$(z_0, \varepsilon, k) = \begin{cases} \text{Condition 1:} & (0.01, 0.01, 1000) \\ \text{Condition 2:} & (0.01, 0.001, 1000) \\ \text{Condition 3:} & (0.00001, 0.001, 1000) \\ \text{Condition 4:} & (0.00001, 0.01, 1000) \\ \text{Condition 5:} & (0.01, 0.01, 500) \\ \text{Condition 6:} & (0.01, 0.01, 1500) \\ \text{Condition 7:} & (0.01, 0.001, 1500) \end{cases} \tag{10}$$
The proposed CSSCA, like any meta-heuristic algorithm, is a stochastic approach, so improvement or accuracy is not guaranteed when solving any optimization problem; these parameter settings were therefore chosen to study the effect of varying them on the results. Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 show the results of the NSE cases $F_1$–$F_7$ under the different conditions, while Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 present these tables graphically. From the tables and figures, we can see that changing the chaotic search parameters improves the quality of the solutions. In Cases 1 and 4, the best values of $F_1$ and $F_4$ occur at Condition 2, $(z_0, \varepsilon, k) = (0.01, 0.001, 1000)$, while in Case 2 the best value of $F_2$ occurs at Condition 1, $(0.01, 0.01, 1000)$. In Case 3, the best value of $F_3$ occurs at Condition 7, $(0.01, 0.001, 1500)$, while in Cases 5 and 6 the best values of $F_5$ and $F_6$ occur at Condition 3, $(0.00001, 0.001, 1000)$. In Case 7, the best value of $F_7$ occurs at all conditions except Condition 6, $(0.01, 0.01, 1500)$. Finally, Figure 12 shows the convergence of $F$ for all NSE cases under the chaotic search parameters; it confirms that introducing the chaotic search into the SCA improves the results as its parameters are varied, so better results can be obtained.

4.3. Computational Complexity of the CSSCA

When assessing any metaheuristic optimization technique's processing time, it is essential to consider the computational complexity, which is determined by the design and implementation of the algorithm. The computing cost of the proposed CSSCA is dominated by the initialization procedure, the evaluation of the fitness function, and the updating of the solutions. The initialization process has $O(N \cdot n)$ complexity, where $N$ is the population size and $n$ is the number of problem parameters (the dimension). The complexity of updating solutions via SCA is $O(t_{max} \cdot N \cdot n)$, where $t_{max}$ is the maximum number of SCA iterations. The complexity of updating solutions by CS is $O(k_{max} \cdot n)$, where $k_{max}$ is the maximum number of CS iterations. The computational complexity of updating solutions is thus $O(t_{max} \cdot N \cdot n) + O(k_{max} \cdot n)$. As a result, the suggested CSSCA has an overall computational complexity of $O(N \cdot n) + O(t_{max} \cdot N \cdot n) + O(k_{max} \cdot n)$.
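For orientation, this bound can be simplified (assuming $t_{max} \geq 1$, so the initialization term is absorbed into the SCA term):

$$O(N \cdot n) + O(t_{max} \cdot N \cdot n) + O(k_{max} \cdot n) = O\big(n \,(t_{max} \cdot N + k_{max})\big),$$

so whether SCA or the chaotic search dominates depends on how $t_{max} \cdot N$ compares with $k_{max}$; for instance, with the Section 4.1 settings ($N = 1000$, $k = 100{,}000$), the SCA term dominates once $t_{max}$ exceeds 100.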

5. Discussion and Conclusions

In this paper, a chaotic search sine cosine algorithm (CSSCA) is proposed to study the effect of introducing a chaotic search (CS) into the original sine cosine algorithm (SCA) to improve its performance in terms of reaching a better solution. CSSCA combines the advantages of both SCA and CS, integrating SCA's exploration capabilities with CS's exploitation. This combination also seeks to increase search effectiveness and avoid local minima by achieving a good balance between the exploration and exploitation capabilities.
The proposed research was divided into three parts. In the first part, the proposed algorithm's performance was assessed using 19 test functions to demonstrate how the CS input makes it a better performer than SCA. The second part showed how to convert a nonlinear system of equations (NSEs) into an optimization problem, which was then solved with CSSCA. In the final part, seven NSE benchmark problems were solved to study how modifying the chaotic search parameters affects the solution quality.
The following are the main study findings for the suggested algorithm:
(1)
According to the results obtained in Table 1, the results of CSSCA are better than those obtained by the original SCA and the other SCA-based algorithm (HGWOSCA).
(2)
The results obtained in Table 1 by the percentage decrease equation (Equation (9)) show that adding CS to SCA improves the original SCA results by 12.71%. We can therefore infer that CS directs SCA to escape local minima and optimize the search toward a better solution.
(3)
Friedman and Wilcoxon tests were used for the statistical analysis, with the findings shown in Table 2 and Table 3. Based on the results, CSSCA differs significantly from the other two tested algorithms, with a p-value of less than 0.05 (p = 0.022). In addition, Figure 4 demonstrates that CSSCA surpasses the other algorithms by obtaining the first rank. Furthermore, Table 3 shows that CSSCA performs better than the other two algorithms since its R+ values are larger than its R− values, indicating that CSSCA achieves lower objective function values for most test functions.
(4)
The results of the NSEs in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 and Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show that introducing CS into the SCA affects its performance: changing the CS parameters has an impact on the quality of the solution and on obtaining better results.
A possible flaw of the suggested algorithm, like that of all meta-heuristic algorithms, is that there is no assurance that optimization problems will be solved without a cost in computing time or accuracy, because meta-heuristic algorithms employ random operations and chaotic search. To assess the effectiveness of the algorithm and identify any potential weaknesses, larger and more complex NSEs may be considered in future work. In addition, more experiments are needed to find the optimal CS parameter values through which the optimal solution can be obtained when solving any optimization problem. Finally, it is recommended that optimization problems and NSEs be addressed with various other optimization techniques, including the monarch butterfly optimization (MBO) [79], gradient-based optimizer (GBO) [80], beluga whale optimization (BWO) [81], wild horse optimizer (WHO) [82], etc.

Author Contributions

Conceptualization, M.A.E.-S. and F.M.A.-D.; Methodology, M.A.E.-S. and F.M.A.-D.; Writing and original draft preparation, F.M.A.-D.; Co-review and validation, M.A.E.-S.; Writing—editing; M.A.E.-S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number (IF2/PSAU/2022/01/20208).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Chaotic Maps [73]

Sinusoidal map: The sinusoidal map is generated by

$$x_{t+1} = a x_t^2 \sin(\pi x_t)$$

where $a = 2.3$.
Chebyshev map: The Chebyshev map is given as

$$x_{t+1} = \cos\!\left( t \cos^{-1}(x_t) \right)$$
Singer map: The one-dimensional chaotic Singer map is formulated as

$$x_{t+1} = \mu \left( 7.86\, x_t - 23.31\, x_t^2 + 28.75\, x_t^3 - 13.302875\, x_t^4 \right)$$

where $\mu \in (0.9, 1.08)$.
Tent map: The following iterative equation defines the tent map:

$$x_{t+1} = \begin{cases} \dfrac{x_t}{0.7}, & x_t < 0.7, \\ \dfrac{10}{3} (1 - x_t), & x_t \geq 0.7. \end{cases}$$
Sine map: The sine map is given by

$$x_{t+1} = \frac{a}{4} \sin(\pi x_t)$$

where $0 < a \leq 4$.
Circle map: The circle map follows the typical equation

$$x_{t+1} = x_t + b - \frac{a}{2\pi} \sin(2 \pi x_t)$$

where $a = 0.5$ and $b = 0.2$.
Piecewise map: The piecewise map is formulated as

$$x_{t+1} = \begin{cases} \dfrac{x_t}{p}, & 0 < x_t < p \\ \dfrac{x_t - p}{0.5 - p}, & p \leq x_t < 0.5 \\ \dfrac{1 - p - x_t}{0.5 - p}, & 0.5 \leq x_t < 1 - p \\ \dfrac{1 - x_t}{p}, & 1 - p < x_t < 1 \end{cases}$$

where $p \in (0, 0.5)$ and $x \in (0, 1)$.
Gauss map: A nonlinear iterated map of the reals into a real interval determined by the Gaussian function is called the Gauss map, often referred to as the Gaussian map or mouse map:

$$x_{t+1} = \exp(-\alpha x_t^2) + \beta$$

where $\alpha$ and $\beta$ are real parameters.
Logistic map: Without the need for any random sequence, the logistic map illustrates how complicated behavior can arise from a simple deterministic system. Its foundation is a simple polynomial equation that captures the dynamics of a biological population:

$$x_{t+1} = c x_t (1 - x_t)$$

where $x_0 \in (0, 1)$, $x_0 \notin \{0.0, 0.25, 0.50, 0.75, 1.0\}$; when $c = 4.0$, the logistic map creates a chaotic sequence.
Intermittency map: Two iterative equations are used to create the intermittency map, which is given as

$$x_{t+1} = \begin{cases} \varepsilon + x_t + c x_t^n, & 0 < x_t \leq p \\ \dfrac{x_t - p}{1 - p}, & p < x_t < 1 \end{cases}$$

where $c = \dfrac{1 - \varepsilon - p}{p^2}$, $n = 2.0$, and $\varepsilon$ is very close to zero.
Liebovitch map: The Liebovitch map is proposed as

$$x_{t+1} = \begin{cases} \alpha x_t, & 0 < x_t \leq p_1 \\ \dfrac{p_2 - x_t}{p_2 - p_1}, & p_1 < x_t \leq p_2 \\ 1 - \beta (1 - x_t), & p_2 < x_t \leq 1 \end{cases}$$

where $\alpha = \dfrac{p_2}{p_1} \left( 1 - (p_2 - p_1) \right)$ and $\beta = \dfrac{1}{p_2 - 1} \left( (p_2 - 1) - p_1 (p_2 - p_1) \right)$.
Iterative map: The iterative chaotic map with infinite collapses is defined as

$$x_{t+1} = \sin\!\left( \frac{a \pi}{x_t} \right)$$

where $a \in (0, 1)$.

Appendix B. Test Functions [31]

Table A1. Unimodal test functions.

| Function | Dim | Range | Shift Position | $f_{min}$ |
| --- | --- | --- | --- | --- |
| $f_1(x) = \sum_{i=1}^{n} x_i^2$ | 20 | $[-100, 100]$ | $[-30, -30, \ldots, -30]$ | 0 |
| $f_2(x) = \sum_{i=1}^{n} \|x_i\| + \prod_{i=1}^{n} \|x_i\|$ | 20 | $[-10, 10]$ | $[-3, -3, \ldots, -3]$ | 0 |
| $f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 20 | $[-100, 100]$ | $[-30, -30, \ldots, -30]$ | 0 |
| $f_4(x) = \max_i \{ \|x_i\|, 1 \leq i \leq n \}$ | 20 | $[-100, 100]$ | $[-30, -30, \ldots, -30]$ | 0 |
| $f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 20 | $[-30, 30]$ | $[-15, -15, \ldots, -15]$ | 0 |
| $f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 20 | $[-100, 100]$ | $[-750, \ldots, -750]$ | 0 |
| $f_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 20 | $[-1.28, 1.28]$ | $[-0.25, \ldots, -0.25]$ | 0 |
Table A2. Multimodal test functions.

| Function | Dim | Range | Shift Position | $f_{min}$ |
| --- | --- | --- | --- | --- |
| $f_8(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{\|x_i\|} \right)$ | 20 | $[-500, 500]$ | $[-300, \ldots, -300]$ | $-418.9829 \times 5$ |
| $f_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]$ | 20 | $[-5.12, 5.12]$ | $[-2, -2, \ldots, -2]$ | 0 |
| $f_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e$ | 20 | $[-32, 32]$ | | 0 |
| $f_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 20 | $[-600, 600]$ | $[-400, \ldots, -400]$ | 0 |
| $f_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | 20 | $[-50, 50]$ | $[-30, -30, \ldots, -30]$ | 0 |
| $f_{13}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3 \pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 20 | $[-50, 50]$ | $[-100, \ldots, -100]$ | 0 |
Table A3. Composite test functions.

| Function | Dim | Range | $f_{min}$ |
| --- | --- | --- | --- |
| $F_{14}$ (CF1): $f_1, f_2, f_3, \ldots, f_{10} =$ Sphere Function; $[\sigma_1, \sigma_2, \sigma_3, \ldots, \sigma_{10}] = [1, 1, 1, \ldots, 1]$; $[\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_{10}] = [5/100, 5/100, 5/100, \ldots, 5/100]$ | 10 | $[-5, 5]$ | 0 |
| $F_{15}$ (CF2): $f_1, f_2, f_3, \ldots, f_{10} =$ Griewank's Function; $[\sigma_1, \sigma_2, \sigma_3, \ldots, \sigma_{10}] = [1, 1, 1, \ldots, 1]$; $[\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_{10}] = [5/100, 5/100, 5/100, \ldots, 5/100]$ | 10 | $[-5, 5]$ | 0 |
| $F_{16}$ (CF3): $f_1, f_2, f_3, \ldots, f_{10} =$ Griewank's Function; $[\sigma_1, \sigma_2, \sigma_3, \ldots, \sigma_{10}] = [1, 1, 1, \ldots, 1]$; $[\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_{10}] = [1, 1, 1, \ldots, 1]$ | 10 | $[-5, 5]$ | 0 |
| $F_{17}$ (CF4): $f_1, f_2 =$ Ackley's Function; $f_3, f_4 =$ Rastrigin's Function; $f_5, f_6 =$ Weierstrass Function; $f_7, f_8 =$ Griewank's Function; $f_9, f_{10} =$ Sphere Function; $[\sigma_1, \ldots, \sigma_{10}] = [1, 1, \ldots, 1]$; $[\lambda_1, \ldots, \lambda_{10}] = [5/32, 5/32, 1, 1, 5/0.5, 5/0.5, 5/100, 5/100, 5/100, 5/100]$ | 10 | $[-5, 5]$ | 0 |
| $F_{18}$ (CF5): $f_1, f_2 =$ Rastrigin's Function; $f_3, f_4 =$ Weierstrass Function; $f_5, f_6 =$ Griewank's Function; $f_7, f_8 =$ Ackley's Function; $f_9, f_{10} =$ Sphere Function; $[\sigma_1, \ldots, \sigma_{10}] = [1, 1, \ldots, 1]$; $[\lambda_1, \ldots, \lambda_{10}] = [1/5, 1/5, 5/0.5, 5/0.5, 5/100, 5/100, 5/32, 5/32, 5/100, 5/100]$ | 10 | $[-5, 5]$ | 0 |
| $F_{19}$ (CF6): $f_1, f_2 =$ Rastrigin's Function; $f_3, f_4 =$ Weierstrass Function; $f_5, f_6 =$ Griewank's Function; $f_7, f_8 =$ Ackley's Function; $f_9, f_{10} =$ Sphere Function; $[\sigma_1, \ldots, \sigma_{10}] = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]$; $[\lambda_1, \ldots, \lambda_{10}] = [0.1 \cdot 1/5, 0.2 \cdot 1/5, 0.3 \cdot 5/0.5, 0.4 \cdot 5/0.5, 0.5 \cdot 5/100, 0.6 \cdot 5/100, 0.7 \cdot 5/32, 0.8 \cdot 5/32, 0.9 \cdot 5/100, 1 \cdot 5/100]$ | 10 | $[-5, 5]$ | 0 |

References

1. Jeeves, T.A. Secant modification of Newton's method. Commun. ACM 1958, 1, 9–10.
2. Moré, J.J.; Cosnard, M.Y. Numerical solution of nonlinear equations. ACM Trans. Math. Softw. (TOMS) 1979, 5, 64–85.
3. Azure, I.; Aloliga, G.; Doabil, L. Comparative study of numerical methods for solving non-linear equations using manual computation. Math. Lett. 2019, 5, 41.
4. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Society for Industrial and Applied Mathematics: University City, PA, USA, 1996.
5. Conn, A.R.; Gould, N.I.M.; Toint, P. A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 1991, 28, 545–572.
6. Hoffman, J.D.; Frankel, S. Numerical Methods for Engineers and Scientists; CRC Press: Boca Raton, FL, USA, 2018.
7. Beheshti, Z.; Shamsuddin, S.M. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl. 2013, 5, 1–35.
8. Koza, J.R. Genetic programming as a means for programming computers by natural selection. Stat. Comput. 1994, 4, 87–112.
9. Ayoub, A.Y.; El-Shorbagy, M.A.; El-Desoky, I.M.; Mousa, A.A. Cell blood image segmentation based on genetic algorithm. In The International Conference on Artificial Intelligence and Computer Vision; Springer: Cham, Switzerland, 2020; pp. 564–573.
10. Hansen, N. The CMA evolution strategy: A comparing review. In Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms; Springer: Berlin/Heidelberg, Germany, 2006; pp. 75–102.
11. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
12. Price, K.V. Differential evolution. In Handbook of Optimization: From Classical to Modern Approach; Springer: Berlin/Heidelberg, Germany, 2013; pp. 187–214.
13. Zhou, X.; Ma, H.; Gu, J.; Chen, H.; Deng, W. Parameter adaptation-based ant colony optimization with dynamic hybrid mechanism. Eng. Appl. Artif. Intell. 2022, 114, 105139.
14. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74.
15. Hendawy, Z.M.; El-Shorbagy, M.A. Combined Trust Region with Particle swarm for Multi-objective Optimisation. In Proceedings of International Academic Conferences, no. 2703860; International Institute of Social and Economic Sciences: London, UK, 2015.
16. El-Shorbagy, M.A.; El-Refaey, A.M. A hybrid genetic–firefly algorithm for engineering design problems. J. Comput. Des. Eng. 2022, 9, 706–730.
17. El-Shorbagy, M.A. Chaotic Fruit Fly Algorithm for Solving Engineering Design Problems. Complexity 2022, 2022, 6627409.
18. Kaya, E.; Gorkemli, B.; Akay, B.; Karaboga, D. A review on the studies employing artificial bee colony algorithm to solve combinatorial optimization problems. Eng. Appl. Artif. Intell. 2022, 115, 105311.
19. Yang, X.-S.; Deb, S. Cuckoo search: Recent advances and applications. Neural Comput. Appl. 2014, 24, 169–174.
20. Zhou, Y.; Chen, X.; Zhou, G. An improved monkey algorithm for a 0-1 knapsack problem. Appl. Soft Comput. 2016, 38, 817–830.
21. Ma, C.; Huang, H.; Fan, Q.; Wei, J.; Du, Y.; Gao, W. Gray wolf optimizer based on aquila exploration method. Expert Syst. Appl. 2022, 205, 117629.
22. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
23. El-Shorbagy, M.A.; Omar, H.A.; Fetouh, T. Hybridization of Manta-Ray Foraging Optimization Algorithm with Pseudo Parameter-Based Genetic Algorithm for Dealing Optimization Problems and Unit Commitment Problem. Mathematics 2022, 10, 2179.
24. El-Shorbagy, M.A.; Eldesoky, I.M.; Basyouni, M.M.; Nassar, I.; El-Refaey, A.M. Chaotic Search-Based Salp Swarm Algorithm for Dealing with System of Nonlinear Equations and Power System Applications. Mathematics 2022, 10, 1368.
25. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70.
26. Feng, Y.; Deb, S.; Wang, G.-G.; Alavi, A.H. Monarch butterfly optimization: A comprehensive review. Expert Syst. Appl. 2021, 168, 114418.
27. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
28. Liu, Y.; Heidari, A.A.; Cai, Z.; Liang, G.; Chen, H.; Pan, Z.; Alsufyani, A.; Bourouis, S. Simulated annealing-based dynamic step shuffled frog leaping algorithm: Optimal performance design and feature selection. Neurocomputing 2022, 503, 325–362.
29. Kaveh, A.; Mahdavi, V. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27.
30. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
31. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
32. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289.
33. Erol, O.K.; Eksin, I. A new optimization method: Big bang–Big crunch. Adv. Eng. Softw. 2006, 37, 106–111.
34. Formato, R.A. Central force optimization. Prog. Electromagn. Res. 2007, 77, 425–491.
35. Hosseini, H.S. Principal components analysis by the galaxy-based search algorithm: A novel metaheuristic for continuous optimisation. Int. J. Comput. Sci. Eng. 2011, 6, 132.
36. Moghaddam, F.F.; Moghaddam, R.F.; Cheriet, M. Curved space optimization: A random search based on general relativity theory. arXiv 2012, arXiv:1208.2214.
37. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667.
38. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2018, 163, 283–304.
39. Kashan, A.H. League championship algorithm: A new algorithm for numerical function optimization. In Proceedings of the 2009 International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, 4–7 December 2009.
40. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68.
41. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612.
42. Rahoual, M.; Saad, R. Solving Timetabling problems by hybridizing genetic algorithms and taboo search. In Proceedings of the 6th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2006), Brno, Czech Republic, 30 August–1 September 2006.
43. Yong, L. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Binary Quadratic Programming. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021.
44. He, S.; Wu, Q.H.; Saunders, J.R. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Trans. Evol. Comput. 2009, 13, 973–990.
45. Moosavian, N.; Roodsari, B.K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 2014, 17, 14–24.
46. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007.
47. Dai, C.; Chen, W.; Zhu, Y.; Zhang, X. Seeker optimization algorithm for optimal reactive power dispatch. IEEE Trans. Power Syst. 2009, 24, 1218–1231.
48. Ghorbani, N.; Babaei, E. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187.
49. Hofmeyr, S.A.; Forrest, S. Architecture for an artificial immune system. Evol. Comput. 2000, 8, 443–473.
50. Abdullahi, M.; Ngadi, A.; Abdulhamid, S.M. Symbiotic organism search optimization based task scheduling in cloud computing environment. Future Gener. Comput. Syst. 2016, 56, 640–650.
51. Jahangiri, M.; Hadianfard, M.A.; Najafgholipour, M.A.; Jahangiri, M.; Gerami, M.R. Interactive autodidactic school: A new metaheuristic optimization algorithm for solving mathematical and structural design optimization problems. Comput. Struct. 2020, 235, 106268.
52. Randolph, H.E.; Barreiro, L.B. Herd immunity: Understanding COVID-19. Immunity 2020, 52, 737–741.
53. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
54. Das, S.; Biswas, A.; Dasgupta, S.; Abraham, A. Bacterial foraging optimization algorithm: Theoretical foundations, analysis, and applications. In Foundations of Computational Intelligence Volume 3: Global Optimization; Springer: Berlin/Heidelberg, Germany, 2009; pp. 23–55.
55. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2020, 54, 2323–2409.
56. Algelany, A.M.; El-Shorbagy, M.A. Chaotic Enhanced Genetic Algorithm for Solving the Nonlinear System of Equations. Comput. Intell. Neurosci. 2022, 2022, 1376479.
57. Mo, Y.; Liu, H.; Wang, Q. Conjugate direction particle swarm optimization solving systems of nonlinear equations. Comput. Math. Appl. 2009, 57, 1877–1882.
58. Jia, R.; He, D. Hybrid artificial bee colony algorithm for solving nonlinear system of equations. In Proceedings of the 2012 Eighth International Conference on Computational Intelligence and Security, Guangzhou, China, 17–18 November 2012; pp. 56–60.
59. Zhou, R.H.; Li, Y.G. An improve cuckoo search algorithm for solving nonlinear equation group. Appl. Mech. Mater. 2014, 651–653, 2121–2124.
60. Ariyaratne, M.; Fernando, T.; Weerakoon, S. Solving systems of nonlinear equations using a modified firefly algorithm (MODFA). Swarm Evol. Comput. 2019, 48, 72–92.
61. Omar, H.A.; El-Shorbagy, M.A. Modified grasshopper optimization algorithm-based genetic algorithm for global optimization problems: The system of nonlinear equations case study. Soft Comput. 2022, 26, 9229–9245.
62. Mousa, A.A.; El-Shorbagy, M.A.; Mustafa, I.; Alotaibi, H. Chaotic search based equilibrium optimizer for dealing with nonlinear programming and petrochemical application. Processes 2021, 9, 200.
63. Cuyt, A.A.M.; Rall, L.B. Computational implementation of the multivariate Halley method for solving nonlinear systems of equations. ACM Trans. Math. Softw. (TOMS) 1985, 11, 20–36.
64. Nie, P.-Y. A null space method for solving system of equations. Appl. Math. Comput. 2004, 149, 215–226.
65. Nie, P.-Y. An SQP approach with line search for a system of nonlinear equations. Math. Comput. Model. 2006, 43, 368–373.
66. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
67. Sterkenburg, T.F.; Grünwald, P.D. The no-free-lunch theorems of supervised learning. Synthese 2021, 199, 9979–10015.
68. Hénon, M. A two-dimensional mapping with a strange attractor. Commun. Math. Phys. 1976, 50, 69–77.
69. Lorenz, E. The Essence of Chaos; University of Washington Press: Seattle, WA, USA, 1996.
70. Strogatz, S.H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering; CRC Press: Boca Raton, FL, USA, 2018.
71. Aguirre, L.A.; Letellier, C. Modeling nonlinear dynamics and chaos: A review. Math. Probl. Eng. 2009, 2009, 238960.
72. Yang, D.; Li, G.; Cheng, G. On the efficiency of chaos optimization algorithms for global optimization. Chaos Solitons Fractals 2007, 34, 1366–1375.
73. El-Shorbagy, M.A.; Nasr, S.M.; Mousa, A.A. A Chaos Based Approach for Nonlinear Optimization Problems; LAP (Lambert Academic Publishing): Saarbrücken, Germany, 2016.
74. El-Shorbagy, M.; Mousa, A.; Nasr, S. A chaos-based evolutionary algorithm for general nonlinear programming problems. Chaos Solitons Fractals 2016, 85, 8–21.
75. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. 2008, 38, 698–714.
76. Ren, H.; Wu, L.; Bi, W.; Argyros, I.K. Solving nonlinear equations system via an efficient genetic algorithm with symmetric and harmonious individuals. Appl. Math. Comput. 2013, 219, 10967–10973.
77. Pourrajabian, A.; Ebrahimi, R.; Mirzaei, M.; Shams, M. Applying genetic algorithms for solving nonlinear algebraic equations. Appl. Math. Comput. 2013, 219, 11483–11494.
78. Wasserman, L. All of Nonparametric Statistics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
79. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014.
80. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159.
81. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215.
82. Naruei, I.; Keynia, F. Wild horse optimizer: A new meta-heuristic algorithm for solving engineering optimization problems. Eng. Comput. 2021, 38, 3025–3056.
Figure 1. Meta-heuristic algorithms' classification.
Figure 2. The procedure for updating in terms of sine and cosine functions with a [−2, 2] range [31].
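For readers who want to reproduce the behaviour Figure 2 illustrates, the rule in question is the standard SCA position update from the original SCA paper [31]. The sketch below is a minimal Python rendering of that rule; the function and variable names and the default amplitude constant a = 2 are our own illustrative choices, not the authors' code.

```python
import numpy as np

def sca_update(X, best, t, T, a=2.0, rng=None):
    """One iteration of the standard SCA position update (illustrative sketch).

    X    -- (n_agents, dim) array of current candidate solutions
    best -- (dim,) destination point, i.e., the best solution found so far
    t, T -- current iteration and total iteration budget
    a    -- constant from which r1 decays linearly to 0, shrinking the
            sine/cosine range shown in Figure 2 from [-a, a] toward 0
    """
    rng = rng or np.random.default_rng()
    r1 = a - t * (a / T)                         # balances exploration vs. exploitation
    r2 = rng.uniform(0.0, 2.0 * np.pi, X.shape)  # movement angle
    r3 = rng.uniform(0.0, 2.0, X.shape)          # random weight on the destination
    r4 = rng.uniform(0.0, 1.0, X.shape)          # per-component sine/cosine switch
    step = np.abs(r3 * best - X)
    return np.where(r4 < 0.5,
                    X + r1 * np.sin(r2) * step,
                    X + r1 * np.cos(r2) * step)
```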
Figure 3. CSSCA's flowchart.
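Figure 3's flowchart augments SCA with a chaotic local search driven by the logistic map. As a rough sketch of how such a step can be wired in, the code below iterates the logistic map z_{k+1} = 4z_k(1 − z_k) from an initial value z0 and probes an ε-neighbourhood of the incumbent best solution for k iterations. The parameter names match Tables 4–10, but the exact neighbourhood mapping and greedy acceptance are assumptions on our part, not the authors' published pseudocode.

```python
import numpy as np

def chaotic_local_search(f, x_best, lb, ub, z0=0.01, eps=0.01, k=1000):
    """Logistic-map local search around x_best (illustrative sketch).

    z0  -- initial chaotic variable in (0, 1), avoiding 0, 0.25, 0.5, 0.75, 1
    eps -- neighbourhood radius as a fraction of the variable range
    k   -- number of chaotic search iterations
    """
    d = x_best.size
    z = z0
    x, fx = x_best.copy(), f(x_best)
    for _ in range(k):
        offsets = np.empty(d)
        for j in range(d):
            z = 4.0 * z * (1.0 - z)        # logistic map; fully chaotic at mu = 4
            offsets[j] = 2.0 * z - 1.0     # map z from (0, 1) into (-1, 1)
        cand = np.clip(x_best + eps * offsets * (ub - lb), lb, ub)
        fc = f(cand)
        if fc < fx:                        # keep only improving points
            x, fx = cand, fc
    return x, fx
```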
Figure 4. SCA, HGWOSCA, and CSSCA's mean rankings in the Friedman test.
Figure 5. Graphical presentation of Table 4.
Figure 6. Graphical presentation of Table 5.
Figure 7. Graphical presentation of Table 6.
Figure 8. Graphical presentation of Table 7.
Figure 9. Graphical presentation of Table 8.
Figure 10. Graphical presentation of Table 9.
Figure 11. Graphical presentation of Table 10.
Figure 12. The convergence of F for all NSE cases under the different chaotic search parameter settings.
Table 1. Comparing the best function value obtained by the original SCA, HGWOSCA, and the proposed CSSCA.

| Function | SCA Result | HGWOSCA | CSSCA Result | PD% between the Original SCA and CSSCA |
|---|---|---|---|---|
| F1 | 0.083239 | 0.0053 | 0.0535 | 35.72724324 |
| F2 | 2.7645 × 10−19 | 0.0319 | 2.7645 × 10−19 | 0 |
| F3 | 4.8556 × 10−8 | 1.06612 × 10−4 | 4.4013 × 10−8 | 9.356207266 |
| F4 | 1.3407 × 10−10 | 0.7785 | 1.3407 × 10−10 | 0 |
| F5 | 7.3038 | 26.5837 | 7.0772 | 3.102494592 |
| F6 | 0.80834 | 0.0031 | 0.6747 | 16.53264715 |
| F7 | 4.6115 × 10−4 | 0.0024 | 3.5722 × 10−4 | 22.53713542 |
| F8 | −2049.352 | −5553.8 | −4272.3 | 108.4707752 |
| F9 | 6.1611 | 0 | 5.8554 | 4.961776306 |
| F10 | 8.8818 × 10−16 | 0.0026 | 8.8818 × 10−16 | 0 |
| F11 | 0.23479 | 0 | 0.2061 | 12.21943013 |
| F12 | 0.084886 | 0.0025 | 0.0729 | 14.12011404 |
| F13 | 0.21134 | 0.0011 | 0.2009 | 4.939907258 |
| F14 | 1.0128 | 0.9980 | 0.9980 | 1.461295419 |
| F15 | 0.00070061 | 0.0012 | 6.4760 × 10−4 | 7.566263685 |
| F16 | −1.0316 | −1.0315 | −1.0316 | 0 |
| F17 | 0.40039 | 0.3979 | 0.3984 | 0.49701541 |
| F18 | 3.0001 | 3 | 3.0000 | 0.003333222 |
| F19 | −3.8598 | −3.8625 | −3.8600 | 0.005181616 |
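The PD% column in Table 1 is consistent with reading it as the percentage decrease of the best value found, relative to the original SCA: PD% = |SCA − CSSCA| / |SCA| × 100. This reading is our inference from the tabulated numbers rather than a formula stated here; two rows check out as follows.

```python
def pd_percent(sca, cssca):
    """Percentage improvement of CSSCA over SCA, as inferred from Table 1."""
    return abs(sca - cssca) / abs(sca) * 100.0

print(pd_percent(0.083239, 0.0535))    # F1 -> 35.727...  (matches 35.72724324)
print(pd_percent(-2049.352, -4272.3))  # F8 -> 108.470... (matches 108.4707752)
```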
Table 2. Friedman test results for the 19 test functions.

Test statistics:
| N | Chi-Square | df | Asymp. Sig. |
|---|---|---|---|
| 19 | 7.657 | 2 | 0.022 |

Ranks:
| Algorithm | Mean Rank |
|---|---|
| SCA | 2.47 |
| HGWOSCA | 1.89 |
| CSSCA | 1.63 |
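Table 2 follows the usual Friedman layout: N = 19 blocks (the test functions), df = number of algorithms − 1 = 2, and mean ranks where lower is better. Given the per-function best values, a sketch like the following would reproduce such statistics; the arrays here are random placeholders, not the paper's data.

```python
import numpy as np
from scipy import stats

# Placeholder results: 19 test functions, one row of values per algorithm.
rng = np.random.default_rng(0)
sca, hgwosca, cssca = rng.random((3, 19))

chi2, p = stats.friedmanchisquare(sca, hgwosca, cssca)
print(f"Chi-Square = {chi2:.3f}, df = 2, Asymp. Sig. = {p:.3f}")

# Mean ranks as in Table 2 (rank 1 = smallest, i.e., best, value per function).
ranks = stats.rankdata(np.column_stack([sca, hgwosca, cssca]), axis=1)
print("Mean ranks (SCA, HGWOSCA, CSSCA):", ranks.mean(axis=0))
```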
Table 3. The 19 test functions' Wilcoxon signed-ranks test outcomes (Z based on negative ranks).

Ranks:
| Comparison | Ranks | N | Mean Rank | Sum of Ranks |
|---|---|---|---|---|
| SCA—CSSCA | R− (SCA < CSSCA) | 0 | 0.00 | 0.00 |
| | R+ (SCA > CSSCA) | 15 | 8.00 | 120.00 |
| | Ties (SCA = CSSCA) | 4 | — | — |
| | Total | 19 | — | — |
| HGWOSCA—CSSCA | R− (HGWOSCA < CSSCA) | 9 | 10.67 | 96.00 |
| | R+ (HGWOSCA > CSSCA) | 8 | 7.13 | 57.00 |
| | Ties (HGWOSCA = CSSCA) | 2 | — | — |
| | Total | 19 | — | — |

Test statistics:
| Comparison | Z | Asymp. Sig. (2-tailed) |
|---|---|---|
| SCA—CSSCA | −3.408 | 0.001 |
| HGWOSCA—CSSCA | −0.923 | 0.356 |
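Table 3 reports a pairwise Wilcoxon signed-ranks test in SPSS style: zero differences are set aside as ties, and Z is computed from the remaining signed ranks. With recent SciPy, the corresponding asymptotic two-sided p-values could be obtained roughly as follows; again the data are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sca, hgwosca, cssca = rng.random((3, 19))  # placeholder paired results

# zero_method="wilcox" discards zero differences (the "Ties" rows of Table 3);
# method="approx" uses the normal approximation behind "Asymp. Sig. (2-tailed)".
for name, other in [("SCA", sca), ("HGWOSCA", hgwosca)]:
    res = stats.wilcoxon(other, cssca, zero_method="wilcox", method="approx")
    print(f"{name} vs. CSSCA: p = {res.pvalue:.3f}")
```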
Table 4. Results of Case 1: F1 at different conditions.

| Conditions | z0 | ε | k | Best Position (x1, x2) | Best F1 |
|---|---|---|---|---|---|
| Conditions 1 | 0.01 | 0.01 | 1000 | (2.0002, 0.9999) | 2.7336 × 10−4 |
| Conditions 2 | 0.01 | 0.001 | 1000 | (1.0000, 2.0000) | 2.8633 × 10−5 |
| Conditions 3 | 0.00001 | 0.001 | 1000 | (1.9996, 1.0007) | 6.6914 × 10−4 |
| Conditions 4 | 0.00001 | 0.01 | 1000 | (2.0004, 0.9995) | 5.4087 × 10−4 |
| Conditions 5 | 0.01 | 0.01 | 500 | (1.0006, 1.9996) | 6.0820 × 10−4 |
| Conditions 6 | 0.01 | 0.01 | 1500 | (1.0001, 2.0000) | 4.0604 × 10−4 |
| Conditions 7 | 0.01 | 0.001 | 1500 | (2.0006, 0.9991) | 8.6317 × 10−4 |
Table 5. Results of Case 2: F2 at different conditions.

| Conditions | z0 | ε | k | Best Position (x1, x2, x3) | Best F2 |
|---|---|---|---|---|---|
| Conditions 1 | 0.01 | 0.01 | 1000 | (−0.0329, 1.2648, 1.4006) | 2.6693 × 10−4 |
| Conditions 2 | 0.01 | 0.001 | 1000 | (−0.0230, 1.2645, 1.4009) | 9.0229 × 10−4 |
| Conditions 3 | 0.00001 | 0.001 | 1000 | (−0.0333, 1.2627, 1.4056) | 8.3069 × 10−4 |
| Conditions 4 | 0.00001 | 0.01 | 1000 | (−0.0339, 1.2649, 1.3999) | 7.7486 × 10−4 |
| Conditions 5 | 0.01 | 0.01 | 500 | (−0.0338, 1.2650, 1.4002) | 7.3606 × 10−4 |
| Conditions 6 | 0.01 | 0.01 | 1500 | (−0.0338, 1.2648, 1.4001) | 6.1370 × 10−4 |
| Conditions 7 | 0.01 | 0.001 | 1500 | (−0.0336, 1.2650, 1.3998) | 7.4714 × 10−4 |
Table 6. Results of Case 3: F3 at different conditions.

| Conditions | z0 | ε | k | Best Position (x1, x2) | Best F3 |
|---|---|---|---|---|---|
| Conditions 1 | 0.01 | 0.01 | 1000 | (−1.3659, 3.1273) | 6.4579 × 10−4 |
| Conditions 2 | 0.01 | 0.001 | 1000 | (−1.3657, 3.1273) | 4.3137 × 10−4 |
| Conditions 3 | 0.00001 | 0.001 | 1000 | (1.4419, 3.1273) | 5.3103 × 10−4 |
| Conditions 4 | 0.00001 | 0.01 | 1000 | (−1.3657, 3.1273) | 4.0623 × 10−4 |
| Conditions 5 | 0.01 | 0.01 | 500 | (1.4417, 3.1273) | 1.8692 × 10−4 |
| Conditions 6 | 0.01 | 0.01 | 1500 | (1.4419, 3.1273) | 4.0073 × 10−4 |
| Conditions 7 | 0.01 | 0.001 | 1500 | (1.4416, 3.1273) | 1.4685 × 10−5 |
Table 7. Results of Case 4: F4 at different conditions.

| Conditions | z0 | ε | k | Best Position (x1, x2) | Best F4 |
|---|---|---|---|---|---|
| Conditions 1 | 0.01 | 0.01 | 1000 | (0.1570, 0.4935) | 7.8956 × 10−4 |
| Conditions 2 | 0.01 | 0.001 | 1000 | (0.1565, 0.4934) | 4.7106 × 10−6 |
| Conditions 3 | 0.00001 | 0.001 | 1000 | (0.1566, 0.4934) | 7.0050 × 10−5 |
| Conditions 4 | 0.00001 | 0.01 | 1000 | (−2.9851, −2.6482) | 6.6633 × 10−5 |
| Conditions 5 | 0.01 | 0.01 | 500 | (6.4402, 6.7768) | 8.2988 × 10−4 |
| Conditions 6 | 0.01 | 0.01 | 1500 | (0.1568, 0.4935) | 4.0707 × 10−4 |
| Conditions 7 | 0.01 | 0.001 | 1500 | (0.1570, 0.4936) | 7.1143 × 10−5 |
Table 8. Results of Case 5: F5 at different conditions.

| Conditions | z0 | ε | k | Best Position (x1, …, x10) | Best F5 |
|---|---|---|---|---|---|
| Conditions 1 | 0.01 | 0.01 | 1000 | (−5.0135 × 10−6, 2.8680 × 10−5, −10, 0.0037, 6.0578 × 10−12, 10, −0.0018, 10, −3.4240 × 10−6, −10) | 3.9154 × 10−4 |
| Conditions 2 | 0.01 | 0.001 | 1000 | (0.0038, −0.0054, −0.0104, −0.0044, 2.1600 × 10−11, 0.0095, 0.0019, 0.0145, −0.0307, 0.0082) | 2.7376 × 10−4 |
| Conditions 3 | 0.00001 | 0.001 | 1000 | (−1.1033 × 10−8, −3.1127 × 10−6, −10, −5.5679 × 10−4, −1.8332 × 10−12, 10, 2.8015 × 10−4, 10, 2.1671 × 10−5, −10) | 6.0646 × 10−5 |
| Conditions 4 | 0.00001 | 0.01 | 1000 | (−0.0104, −1.7648 × 10−4, −0.0222, 3.1607 × 10−4, 1.2687 × 10−10, −0.0152, −5.5569 × 10−5, 0.0251, −0.0671, 0.0487) | 2.6557 × 10−4 |
| Conditions 5 | 0.01 | 0.01 | 500 | (1.3712 × 10−7, −2.1896 × 10−4, −9.9991, −7.0970 × 10−7, −1.1578 × 10−13, 10, −1.4509 × 10−5, 10, −2.5461 × 10−4, −10) | 3.4767 × 10−4 |
| Conditions 6 | 0.01 | 0.01 | 1500 | (1.5492 × 10−7, −2.7488 × 10−4, −10, −4.9143 × 10−6, −2.9417 × 10−12, 10, 3.0756 × 10−6, 10, 2.9057 × 10−4, −10) | 3.5273 × 10−4 |
| Conditions 7 | 0.01 | 0.001 | 1500 | (1.2119 × 10−6, −3.0283 × 10−4, 10, 3.6210 × 10−5, 2.3696 × 10−12, −10, −1.1715 × 10−5, −10, 3.0054 × 10−4, 10) | 3.5469 × 10−4 |
Table 9. Results of Case 6: F6 at different conditions.

| Conditions | z0 | ε | k | Best Position (x1, …, x6) | Best F6 |
|---|---|---|---|---|---|
| Conditions 1 | 0.01 | 0.01 | 1000 | (0.1933, −0.9973, −0.9804, 0.0730, −5.0824 × 10−4, 0.0031) | 8.7958 × 10−4 |
| Conditions 2 | 0.01 | 0.001 | 1000 | (0.1431, 0.9524, −0.9896, 0.3050, 7.5318 × 10−4, −0.0023) | 6.0153 × 10−4 |
| Conditions 3 | 0.00001 | 0.001 | 1000 | (−1.0000, 0.9687, 0.0030, −0.2495, −0.0017, −0.0013) | 2.4897 × 10−4 |
| Conditions 4 | 0.00001 | 0.01 | 1000 | (1.7130 × 10−5, −0.9636, 1.0019, 0.2684, 3.3332 × 10−4, 4.3633 × 10−5) | 7.9744 × 10−4 |
| Conditions 5 | 0.01 | 0.01 | 500 | (0.7161, 1.0010, −0.6963, −0.0190, −1.5278 × 10−4, 4.4880 × 10−6) | 8.2790 × 10−4 |
| Conditions 6 | 0.01 | 0.01 | 1500 | (0.1862, 0.6602, −0.9835, 0.7511, 0.0015, 8.7707 × 10−4) | 6.8913 × 10−4 |
| Conditions 7 | 0.01 | 0.001 | 1500 | (0.0092, 0.0114, 1.0000, 1.0013, −9.9631 × 10−4, −0.0012) | 8.3455 × 10−4 |
Table 10. Results of Case 7: F7 at different conditions.

| Conditions | z0 | ε | k | Best Position (x1, …, x10) | Best F7 |
|---|---|---|---|---|---|
| Conditions 1 | 0.01 | 0.01 | 1000 | (0.2317, 0.3962, 0.2888, 0.2005, 0.4093, 0.1540, 0.4427, 0.0634, 0.2999, 0.4428) | 0.0176 |
| Conditions 2 | 0.01 | 0.001 | 1000 | (0.2317, 0.3962, 0.2888, 0.2005, 0.4093, 0.1540, 0.4427, 0.0634, 0.2999, 0.4428) | 0.0176 |
| Conditions 3 | 0.00001 | 0.001 | 1000 | (0.2317, 0.3962, 0.2888, 0.2005, 0.4093, 0.1540, 0.4427, 0.0634, 0.2999, 0.4428) | 0.0176 |
| Conditions 4 | 0.00001 | 0.01 | 1000 | (0.2317, 0.3962, 0.2888, 0.2005, 0.4093, 0.1540, 0.4427, 0.0634, 0.2999, 0.4428) | 0.0176 |
| Conditions 5 | 0.01 | 0.01 | 500 | (0.2317, 0.3962, 0.2888, 0.2005, 0.4093, 0.1540, 0.4427, 0.0634, 0.2999, 0.4428) | 0.0176 |
| Conditions 6 | 0.01 | 0.01 | 1500 | (0.2066, 0.4182, 0.2583, 0.1698, 0.4791, 0.1494, 0.4275, 0.0728, 0.3549, 0.4234) | 0.0190 |
| Conditions 7 | 0.01 | 0.001 | 1500 | (0.2317, 0.3962, 0.2888, 0.2005, 0.4093, 0.1540, 0.4427, 0.0634, 0.2999, 0.4428) | 0.0176 |
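The seven condition sets used throughout Tables 4–10 vary only z0, ε, and k. Assuming the chaotic_local_search sketch given after Figure 3, a hypothetical driver for one test case F (with bounds lb, ub and incumbent x_best supplied by the SCA run) could sweep them as follows; none of these names come from the paper's code.

```python
import numpy as np

conditions = [  # (z0, eps, k), matching Conditions 1-7 of Tables 4-10
    (0.01, 0.01, 1000), (0.01, 0.001, 1000), (0.00001, 0.001, 1000),
    (0.00001, 0.01, 1000), (0.01, 0.01, 500), (0.01, 0.01, 1500),
    (0.01, 0.001, 1500),
]
for i, (z0, eps, k) in enumerate(conditions, 1):
    x, fx = chaotic_local_search(F, x_best, lb, ub, z0=z0, eps=eps, k=k)
    print(f"Conditions {i}: best F = {fx:.4e} at x = {np.round(x, 4)}")
```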
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
