Article

A Novel Hybrid Algorithm Based on Jellyfish Search and Particle Swarm Optimization

by Husham Muayad Nayyef 1, Ahmad Asrul Ibrahim 1,*, Muhammad Ammirrul Atiqi Mohd Zainuri 1, Mohd Asyraf Zulkifley 1 and Hussain Shareef 2

1 Department of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
2 Department of Electrical and Communication Engineering, United Arab Emirates University, Al Ain 15551, United Arab Emirates
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3210; https://doi.org/10.3390/math11143210
Submission received: 20 June 2023 / Revised: 18 July 2023 / Accepted: 19 July 2023 / Published: 21 July 2023
(This article belongs to the Special Issue Metaheuristic Algorithms)

Abstract: Metaheuristic optimization is considered one of the most efficient and powerful techniques of recent decades, as it can deal effectively with complex optimization problems. The performance of an optimization technique relies on two main components: exploration and exploitation. Unfortunately, performance is limited whenever either component is weak. This study aims to tackle the weak exploration of the existing jellyfish search optimizer (JSO) by introducing a hybrid jellyfish search and particle swarm optimization (HJSPSO). HJSPSO is mainly based on the JSO structure, but the operator for following the ocean current is replaced with PSO to benefit from its exploration capability. The search process alternates between PSO and JSO operators through a time control mechanism. Furthermore, nonlinear time-varying inertia weight, cognitive, and social coefficients are added to the PSO and JSO operators to balance exploration and exploitation. Sixty benchmark test functions, including 10 CEC-C06 2019 large-scale benchmark test functions with various dimensions, are used to showcase the optimization performance. Then, the traveling salesman problem (TSP) is used to validate the performance of HJSPSO on a nonconvex optimization problem. Results demonstrate that, compared to the existing JSO and PSO techniques, HJSPSO improves both exploration and exploitation, and it outperforms other well-known metaheuristic optimization techniques, including a hybrid algorithm. HJSPSO secures the first rank in the classical and large-scale benchmark test functions by achieving the highest hit rates of 64% and 30%, respectively. Moreover, HJSPSO demonstrates good applicability in solving an exemplar TSP, attaining the shortest distance with the lowest mean and best fitness at 37.87 and 36.12, respectively. Overall, HJSPSO shows superior performance in solving most benchmark test functions compared to the other optimization techniques, including JSO and PSO. In conclusion, HJSPSO is a robust technique that can be applied to most optimization problems with promising solutions.

1. Introduction

Most problems in critical applications such as engineering, science, and economics require a decision-making mechanism or optimization technique. An optimization technique is used to choose among various possible solutions to reach the best decision based on a designated objective function [1]. Various types of optimization techniques are used to successfully solve problems in many applications. In the last two decades, metaheuristic optimization techniques have received more attention than classical optimization techniques [2]. The main reason behind this shift is that classical optimization techniques are commonly trapped in a local optimum [3]. On the contrary, metaheuristic techniques can escape from the local optimum and search for a better solution, which leads to the global optimum. Furthermore, a classical optimization technique is highly dependent on its starting point to give a good solution. By contrast, the performance of metaheuristic optimization techniques is not significantly affected by the starting point [3].
The two major components in metaheuristic techniques are known as exploration and exploitation. Exploration is a search for possible solutions in promising areas within a search space, while exploitation is a search for a better solution within the surrounding area of the optimum solution so far. A good balance between exploration and exploitation should be arranged to achieve effective performance in solving an optimization problem [4]. However, many recent studies revealed that exploration and exploitation in most existing metaheuristic techniques are still not treated well [5]. For instance, particle swarm optimization (PSO), grey wolf optimizer (GWO), and heap-based optimizer (HBO) have poor exploitation but better exploration [6]. By contrast, artificial bee colony [7], firefly algorithm (FA) [8,9], and cuckoo search [10] have poor exploration but better exploitation. Consequently, several new hybrid techniques have been suggested to improve the balance between exploitation and exploration. The hybrid techniques consist of two or more metaheuristic algorithms to complement one another and take advantage of their good features to produce a more accurate technique [11,12].
In general, metaheuristic optimization techniques can be classified into four main categories: evolutionary-based, physical-based, human-based, and swarm-intelligence-based techniques [13]. Evolutionary-based techniques imitate the processes of biological evolution, namely selection, reproduction, mutation, and recombination. Popular evolutionary-based techniques include the genetic algorithm (GA) [14], differential evolution (DE) [15], and the biogeography-based optimizer (BBO) [16]. Physical-based techniques are inspired by physical laws and phenomena such as gravitational force, inertia force, and lightning. Popular physical-based techniques include the gravitational search algorithm [17], simulated annealing [18], and the lightning search algorithm (LSA) [19]. Human-based techniques, such as teaching–learning-based optimization (TLBO) [20], HBO [13], and the coronavirus herd immunity optimizer (CHIO) [21], mimic human social behavior. Swarm-intelligence-based techniques are inspired by the collective food-foraging behavior of social creatures in nature, such as flocks of birds, schools of fish, and colonies of insects. Popular optimization techniques from this category include PSO [22], the jellyfish search optimizer (JSO) [23], and the rat swarm optimizer (RSO) [24].
JSO is one of the most recent swarm-intelligence-based techniques. The algorithm is inspired by the foraging behavior of jellyfish in the ocean [23]. Its competitive features have made it popular, and it has been used to solve various optimization and engineering problems [25]. However, JSO has shown poor exploration capability [26]. Therefore, hybridizing JSO with a technique that has good exploration will balance exploration and exploitation and improve the search process. In the last two decades, PSO has proven to be a robust metaheuristic optimization technique with many desirable features, such as simplicity, fast convergence, and excellent exploration capability [27]. Therefore, the present paper proposes a novel hybrid optimization algorithm based on JSO and PSO, called hybrid jellyfish search and particle swarm optimization (HJSPSO). The main contributions of this paper are as follows:
  • A significant improvement in accuracy at fast convergence rates compared with the original PSO and JSO techniques.
  • The superiority of HJSPSO is verified by comparing it with nine well-known optimization techniques, including an existing hybrid algorithm.
  • The robustness of HJSPSO is validated through unimodal, multimodal, and large-scale benchmark test functions.
The remaining sections of this paper are structured as follows: Section 2 reviews the recent literature on PSO and JSO, including their existing hybridization. Section 3 explains the original formulations of PSO and JSO, and Section 4 presents and discusses the proposed HJSPSO. Section 5 evaluates the performance of HJSPSO by using benchmark test functions and an exemplar TSP to showcase its effectiveness compared to other optimization techniques. Finally, Section 6 draws a conclusion.

2. Review of PSO and JSO Applications

Numerous studies have been published recently in the field of metaheuristic techniques, and this trend is expected to continue as more new techniques are introduced. Metaheuristic optimization techniques are popular and widely used in engineering applications because of their good performance in providing a promising solution [28]. Introduced by Kennedy and Eberhart [22] in 1995, PSO is now considered one of the most popular optimization techniques. In the last two decades, many studies have been conducted to improve the performance of PSO, whether by modifying the original form of PSO or hybridizing it with other metaheuristic techniques. Cui et al. [29] introduced a disturbance factor in PSO to improve its performance. In that case, a few particles are selected when no improvement is observed for a period longer than the disturbance factor, and then their velocities are modified to escape from the local optimum. Ibrahim et al. [30] adopted an artificial immune system in PSO to tackle the issue with constraint violations. Gupta and Devi [31] presented a modified PSO where the inertia weight and acceleration coefficients are varied nonlinearly along with iterations to balance global exploration and local exploitation. Yan et al. [32] modified PSO with an exponential decay weight where a constraint factor is inserted into the velocity-updating procedure. Al-Bahrani and Patra [33] presented an orthogonal PSO where the swarm particles are divided into two groups to enhance the diversity in the population. The first group comprises the active best personal experience, while the remaining particles form the second group, representing passive personal experiences. In each iteration, the positions of the active group are orthogonally diagonalized to enhance the exploration capability. Numerous researchers have also sought balance between exploration and exploitation in PSO by hybridizing it with other metaheuristic techniques. For instance, PSO is hybridized with the spotted hyena optimizer (HPSSHO), differential evolution (HPSO-DE), fireworks algorithm (PS-FW), and gravitational search algorithm (HGSPSO) [4,34,35,36].
JSO is one of the more recent swarm-based optimization techniques. It was introduced by Chou and Truong [23] in 2020, and therefore only a few studies have been carried out to enhance its performance. Most previous studies focused on modifying the formulation of JSO. Abdel-Basset et al. [37] proposed a premature convergence strategy in the JSO algorithm to improve its capability to search for the optimal solution. This strategy is based on a control mechanism that accelerates convergence toward the best solution and decreases the probability of being stuck in the local optimum. It consists of two steps: (i) randomly selecting two particles from the population to relocate their positions and (ii) seeking better solutions between the current best and the random positions in the population. Manita and Zermani [26] proposed an orthogonal JSO that uses an orthogonal learning strategy to improve the exploration capability. This strategy helps the search for the best solution by forecasting the best combination of two solution vectors based on a limited number of trials. In another work, Juhaniya et al. [38] improved the exploration capability of JSO by using an opposition-based learning strategy to solve optimal stator and stator slot design problems. Rajpurohit and Sharma [39] introduced seven chaotic maps into JSO by inserting them into the active movement step to improve its accuracy and efficiency. Only one study has been found that hybridizes JSO with another metaheuristic technique: introduced by Ginidi et al. [40], the hybrid heap-based and jellyfish search algorithm (HBJSA) combines HBO and JSO to solve the combined heat and power economic dispatch problem. On the other hand, Chou et al. [41] presented a hybrid model that integrates JSO with a machine learning algorithm, a convolutional neural network (CNN), to improve the predictive accuracy of energy consumption in 20 cities in Taiwan. The best CNN model was selected by evaluating available models on different training datasets before it was integrated with JSO. In this case, the role of JSO was limited to determining the best internal parameters of the CNN model, so the contribution of JSO to finding the best solution was minimal. Therefore, work similar to that in [40] should be carried out to improve the performance of JSO and benefit from its advantages.
The prior literature clearly shows that numerous attempts have been made to achieve a balance between exploration and exploitation in JSO and PSO, either by modifying their operators or by hybridizing them with other metaheuristic optimization techniques. However, no hybrid technique that combines JSO and PSO has been found in the literature. It is worthwhile to combine the advantages of PSO's exploration capability and JSO's exploitation capability through hybridization to achieve a balance between exploration and exploitation. The existing formulations of PSO and JSO are explained in the next section to provide a better understanding before they are hybridized.

3. The Existing Optimization Formulation

3.1. PSO Formulation

PSO is a famous swarm-intelligence-based optimization technique inspired by the social behavior of animals during food collection, such as bird flocking and fish schooling [22]. It is an iterative algorithm consisting of a swarm of particles where the position of each particle, $X$, is a potential solution to the optimization problem. A group of particles (population) is initialized by randomly locating each particle in the search space. In each iteration, the velocity of the i-th particle, $V_i$, is updated according to the individual best performance so far, known as the personal best ($P_{best}$), and the best performance of the entire swarm, known as the global best ($G_{best}$), using the following expression [22]:
$$V_i^{t+1} = w V_i^t + c_1 r_1 \left( P_{best,i}^t - X_i^t \right) + c_2 r_2 \left( G_{best}^t - X_i^t \right) \quad (1)$$
where $w$ is the inertia weight, which balances exploration and exploitation by decreasing the velocity as solutions approach the global optimum, $c_1$ and $c_2$ are the cognitive and social coefficients, and $r_1$ and $r_2$ are real random vectors in the range [0, 1]. The position of each particle is updated as follows [22]:
$$X_i^{t+1} = X_i^t + V_i^{t+1}, \quad 1 \le i \le N \quad (2)$$
$P_{best}$ is updated whenever an individual particle improves on its best performance so far, and $G_{best}$ is updated whenever the swarm finds a better solution. This process is repeated until the termination criterion is satisfied.
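To make the update concrete, the following minimal NumPy sketch implements (1) and (2) for a whole swarm at once; the function name, array shapes, and default parameter values are illustrative assumptions rather than prescriptions from [22]:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO iteration: velocity update (1) and position update (2).

    X, V, pbest: arrays of shape (N, D); gbest: array of shape (D,).
    """
    N, D = X.shape
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)  # random vectors in [0, 1]
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. (1)
    X = X + V                                                  # Eq. (2)
    return X, V
```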

3.2. JSO Formulation

JSO, which is also a swarm-intelligence-based technique, simulates the foraging behavior of jellyfish in the ocean as they search for food [23]. JSO consists of two main movements: (i) following the ocean current and (ii) moving inside the swarm of jellyfish. A time control mechanism is used to switch between the movements. At the beginning, the swarm of jellyfish (population) is initialized using a chaotic map known as the logistic map. The logistic map provides better initialization by distributing jellyfish across the search space, which reduces the chance of being trapped in a local optimum and improves convergence accuracy. The initialization of JSO can be expressed as follows:
$$X_i = LB + \left( UB - LB \right) L_i, \quad 1 \le i \le N \quad (3)$$
where $X_i$ represents the position of the i-th jellyfish, and $UB$ and $LB$ are the upper and lower bounds of the search space, respectively. $L_i$ refers to the logistic value of the i-th jellyfish, which can be expressed as
$$L_i^{t+1} = \eta L_i^t \left( 1 - L_i^t \right), \quad 0 \le L_i^0 \le 1 \quad (4)$$
where $L_i^0$ is the initial logistic value of the i-th jellyfish, $L_i^0 \notin \{0, 0.25, 0.5, 0.75, 1\}$, and $\eta$ is set to 4.
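As a minimal sketch of this initialization, the logistic map in (4) can be iterated across the population index and then scaled into the search space by (3); running the map over the population with a single random seed value kept away from the fixed points is a common implementation assumption, not a detail fixed by the text:

```python
import numpy as np

def logistic_init(N, D, lb, ub, eta=4.0):
    """Chaotic logistic-map initialization following Eqs. (3) and (4)."""
    L = np.empty((N, D))
    # seed in (0, 1), away from the fixed points {0, 0.25, 0.5, 0.75, 1}
    L[0] = np.random.uniform(0.01, 0.99, D)
    for i in range(1, N):
        L[i] = eta * L[i - 1] * (1.0 - L[i - 1])  # Eq. (4)
    return lb + (ub - lb) * L                     # Eq. (3)
```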
The position of the jellyfish in each iteration is updated either by following the ocean current or moving inside the swarm of jellyfish with respect to the time control mechanism. A new location of the i-th jellyfish can be updated in the following ocean current as follows [23]:
$$X_i^{t+1} = X_i^t + r_1 \left( X^* - 3 r_2 X_i^t \right), \quad 1 \le i \le N \quad (5)$$
where $X_i^{t+1}$ and $X_i^t$ are the updated and current positions of the i-th jellyfish, respectively, $r_1$ and $r_2$ are random vectors generated between 0 and 1, and $X^*$ refers to the best location found by the swarm so far.
On the other hand, movements inside the swarm are divided into two types: passive and active motions. In passive motion (type A), jellyfish move around their own positions to find better positions using the following expression [23]:
$$X_i^{t+1} = X_i^t + r_1 \gamma \left( UB - LB \right), \quad 1 \le i \le N \quad (6)$$
where $\gamma$ is the motion coefficient associated with the length of motion around the jellyfish's own location and is usually set to 0.1 [23].
In active motion (type B), a random position of the j-th jellyfish is selected to compare with the current position of the i-th jellyfish to determine the direction of movement based on the food quality. If the food quality at the j-th jellyfish is better, then the i-th jellyfish moves toward the direction of the j-th jellyfish; otherwise, the i-th jellyfish moves away from the j-th jellyfish. The position-updating mechanism can be expressed as follows [23]:
$$X_i^{t+1} = X_i^t + r_1 \cdot Step, \quad 1 \le i \le N \quad (7)$$
$$Step = \begin{cases} X_i^t - X_j^t, & \text{if } f(X_i^t) < f(X_j^t) \\ X_j^t - X_i^t, & \text{if } f(X_j^t) < f(X_i^t) \end{cases} \quad (8)$$
As mentioned earlier, a time control mechanism is used to switch between following the ocean current and moving inside the swarm. This mechanism consists of a time control function $c(t)$ and a constant $c_o$. $c(t)$ gives a random value that fluctuates between 0 and 1, while $c_o$ is a constant set to 0.5 [23]. The time control function $c(t)$ is computed using the following expression [23]:
$$c(t) = \left| \left( 1 - \frac{t}{T} \right) \times \left( 2r - 1 \right) \right| \quad (9)$$
where $T$ is the total number of iterations and $r$ is a random number uniformly distributed in the range [0, 1].
The decided movement is to follow the ocean current when $c(t)$ is higher than $c_o$; otherwise, it is to move inside the swarm. At the same time, the time control function is used to switch between the passive and active motions inside the swarm. The value of $1 - c(t)$ is compared with a random variable in the range [0, 1]: the passive motion (type A) is executed if the random variable is higher than $1 - c(t)$, and the active motion (type B) is executed otherwise. Since $1 - c(t)$ increases gradually from 0 to 1 over the iterations, the probability of the passive motion (type A) is higher at the beginning, while the active motion (type B) becomes more likely to be selected as time goes on [23]. In JSO, following the ocean current represents exploration (global search), while moving inside the swarm represents exploitation (local search).
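The switching logic of a single JSO iteration can be sketched as follows; the per-jellyfish loop, the bound clipping, and the function signature are implementation assumptions, while the branch conditions follow (5)–(9) with $c_o = 0.5$:

```python
import numpy as np

def jso_step(X, fitness, X_best, t, T, lb, ub, gamma=0.1, c0=0.5):
    """One JSO iteration driven by the time control function c(t)."""
    N, D = X.shape
    c_t = abs((1 - t / T) * (2 * np.random.rand() - 1))   # Eq. (9)
    for i in range(N):
        r1, r2 = np.random.rand(D), np.random.rand(D)
        if c_t > c0:                                      # follow the ocean current, Eq. (5)
            X[i] = X[i] + r1 * (X_best - 3 * r2 * X[i])
        elif np.random.rand() > 1 - c_t:                  # passive motion (type A), Eq. (6)
            X[i] = X[i] + gamma * r1 * (ub - lb)
        else:                                             # active motion (type B), Eqs. (7)-(8)
            j = np.random.randint(N)
            step = X[i] - X[j] if fitness[i] < fitness[j] else X[j] - X[i]
            X[i] = X[i] + r1 * step
    return np.clip(X, lb, ub)
```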

4. Proposed Optimization Formulation

The main goal of this study is to tackle the poor exploration capability of the existing JSO to achieve a proper balance between the exploration and exploitation through a combination with PSO, resulting in a hybrid algorithm called HJSPSO. In addition, the fusion between JSO and PSO enables HJSPSO to escape from the local optimum and avoid premature convergence. The basic structure of the HJSPSO algorithm is based on JSO but with some modifications to adopt PSO operators as follows:
  • The movement of following the ocean current in JSO is replaced with the velocity- and position-updating mechanism of PSO to take advantage of its exploration capability (referred to as the PSO phase).
  • The passive motion in JSO is modified by introducing a new formulation with respect to the global solution to improve the exploration capability (referred to as the JSO phase).
  • Nonlinear time-varying inertia weight and cognitive and social coefficients are added to enable the technique to escape from the local optimum.
  • The time control mechanism of JSO is used to switch between PSO and JSO phases.
An inertia weight is usually used in optimization techniques to adjust the trade-off between exploration and exploitation. A low inertia weight gives high exploitation and low exploration, whereas a high inertia weight gives low exploitation and high exploration. The linear transition from exploration to exploitation in the original PSO [22] is fixed and cannot be adjusted. As an alternative, a nonlinear decreasing inertia weight enables the emphasis to be shifted toward either exploration or exploitation. Since JSO is lacking in exploration, the introduced nonlinear decreasing inertia weight helps to improve exploration and balance it with the existing strong exploitation. Apart from that, cognitive and social coefficients are used in PSO to control the influence of exploration and exploitation, respectively. Similar to the inertia weight, time-varying cognitive and social coefficients help to ensure high diversity for global exploration at the early stage and exploitation around the global solution at the later stage. Sine and cosine functions are used, as in [42], to make them nonlinear and complementary to each other. The parameters $w$, $c_1$, and $c_2$ in (1) are modified as follows:
$$w = w_{min} + \left( w_{max} - w_{min} \right) \left( 1 - \frac{t}{T} \right)^{\beta} \quad (10)$$
$$c_1 = c_{min} + \left( c_{max} - c_{min} \right) \sin \left[ \frac{\pi}{2} \left( 1 - \frac{t}{T} \right) \right] \quad (11)$$
$$c_2 = c_{min} + \left( c_{max} - c_{min} \right) \cos \left[ \frac{\pi}{2} \left( 1 - \frac{t}{T} \right) \right] \quad (12)$$
where $w_{min} = 0.4$, $w_{max} = 0.9$, $\beta = 0.1$, $c_{min} = 0.5$, and $c_{max} = 2.5$. As more emphasis should be given to exploration at the early stage, the cognitive coefficient starts at its maximum while the social coefficient starts at its minimum. At the later stage, toward the end of the search process, more emphasis is given to exploitation. In this case, $c_1$ varies from 2.5 to 0.5 and $c_2$ from 0.5 to 2.5 [42]. The inertia weight $w$ decreases nonlinearly from 0.9 to 0.4 along with the iterations.
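These schedules are straightforward to compute; a small helper function, sketched below with the stated parameter values as defaults (the function name is illustrative), returns all three coefficients for a given iteration:

```python
import numpy as np

def time_varying_coefficients(t, T, w_min=0.4, w_max=0.9, beta=0.1,
                              c_min=0.5, c_max=2.5):
    """Nonlinear time-varying coefficients of Eqs. (10)-(12)."""
    frac = 1.0 - t / T                                        # decreases from 1 to 0
    w = w_min + (w_max - w_min) * frac ** beta                # Eq. (10)
    c1 = c_min + (c_max - c_min) * np.sin(np.pi / 2 * frac)   # Eq. (11): 2.5 -> 0.5
    c2 = c_min + (c_max - c_min) * np.cos(np.pi / 2 * frac)   # Eq. (12): 0.5 -> 2.5
    return w, c1, c2
```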
Figure 1 presents the flowchart of HJSPSO. At the beginning, the position of each member in the population is initialized randomly using the chaotic logistic map in (3) to avoid premature convergence, while the velocity of each member is set to 0. Next, the position is updated by either the PSO or the JSO phase, depending on the time control mechanism in (9). If the PSO phase is selected, the position is updated using (1) and (2), with the inertia weight, cognitive coefficient, and social coefficient given by (10)–(12). Otherwise, the JSO phase is selected, and the position is updated using the movement inside the swarm. The JSO phase retains the movement inside the swarm, which consists of passive motion (type A) and active motion (type B). However, the passive motion is replaced with the ocean-current movement in (5), and the nonlinear time-varying inertia weight $w$ is added to the equation to standardize between the phases. The new expression of the passive motion in HJSPSO is obtained as follows:
$$X_i^{t+1} = X_i^t + w \, r_1 \left( X^* - 3 r_2 X_i^t \right), \quad 1 \le i \le N \quad (13)$$
The nonlinear time-varying inertia weight $w$ is also introduced in the active motion. As a result, the step in the active motion reduces gradually over time to avoid moving far away from the optimal solution in the later period. At the same time, the JSO phase has a higher probability of being selected according to the time control mechanism during this period. Therefore, the process of local exploitation to find the optimal solution is improved. The new active motion in HJSPSO can be expressed as follows:
$$X_i^{t+1} = X_i^t + w \, r_1 \cdot Step, \quad 1 \le i \le N \quad (14)$$
The search process of HJSPSO alternates between the PSO and JSO phases with respect to the time control function $c(t)$. If $c(t)$ is higher than or equal to $c_o$, the PSO phase is selected; otherwise, the JSO phase is executed. According to the time control function, only the JSO phase is involved once the search passes half of the total iterations. As a result, HJSPSO benefits from the advantage of PSO at the early stage to explore the search space and from the advantage of JSO at the later stage to exploit the global optimum solution.
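Putting the pieces together, a condensed sketch of the HJSPSO main loop is shown below, reusing the helper functions sketched earlier (logistic_init and time_varying_coefficients); the greedy personal/global best bookkeeping and the bound clipping are assumptions about details not spelled out in the text:

```python
import numpy as np

def hjspso(obj, lb, ub, N=30, T=500, c0=0.5):
    """Skeleton of HJSPSO: PSO phase vs. modified JSO phase via c(t)."""
    D = len(lb)
    X = logistic_init(N, D, lb, ub)                    # chaotic start, Eq. (3)
    V = np.zeros_like(X)
    fit = np.apply_along_axis(obj, 1, X)
    pbest, pfit = X.copy(), fit.copy()
    gbest = X[fit.argmin()].copy()
    for t in range(T):
        w, c1, c2 = time_varying_coefficients(t, T)    # Eqs. (10)-(12)
        c_t = abs((1 - t / T) * (2 * np.random.rand() - 1))   # Eq. (9)
        if c_t >= c0:                                  # PSO phase, Eqs. (1)-(2)
            r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
            V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
            X = X + V
        else:                                          # JSO phase (in-swarm motion)
            for i in range(N):
                r1, r2 = np.random.rand(D), np.random.rand(D)
                if np.random.rand() > 1 - c_t:         # new passive motion, Eq. (13)
                    X[i] = X[i] + w * r1 * (gbest - 3 * r2 * X[i])
                else:                                  # new active motion, Eq. (14)
                    j = np.random.randint(N)
                    step = X[i] - X[j] if fit[i] < fit[j] else X[j] - X[i]
                    X[i] = X[i] + w * r1 * step
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        better = fit < pfit                            # update personal and global bests
        pbest[better], pfit[better] = X[better], fit[better]
        gbest = pbest[pfit.argmin()].copy()
    return gbest, pfit.min()
```

Note that, under this control rule, $c(t) \le 1 - t/T$, so once $t$ exceeds $T/2$ the PSO phase can no longer be selected, matching the behavior described above.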

5. Results and Discussion

This section showcases the performance of HJSPSO in solving 60 benchmark test functions and a TSP. The performance is compared with nine well-known metaheuristic optimization techniques, including PSO and JSO. The 60 benchmark test functions to evaluate performance are described in the next subsection.

5.1. Benchmark Test Functions

The performance of a new technique is typically evaluated using benchmark test functions with different characteristics and compared with various optimization techniques to showcase its effectiveness. A set of 50 classical benchmark test functions (Table 1) and 10 CEC-C06 2019 benchmark test functions (Table 2) are used to evaluate the performance of HJSPSO. The classical test functions consist of unimodal, multimodal, regular, irregular, separable, and nonseparable functions with dimensions in the range of 2–30 variables [43]. The first 17 functions (F1–F17) are unimodal and commonly used to evaluate the exploitation capability. The remaining functions (F18–F50) are multimodal and have many local optima to test the exploration capability. On the other hand, the CEC-C06 2019 benchmark test functions (CEC01–CEC10) are multimodal large-scale optimization problems where the dimensions of the first three functions (CEC01–CEC03) are fixed, whereas the other seven functions (CEC04–CEC10) are shifted and rotated within their dimensions [44].

5.2. Metaheuristic Techniques for Comparison

The seven well-known techniques besides the original techniques (i.e., PSO and JSO) that are used for comparison are as follows:
  • Grey Wolf Optimizer [45]: GWO was introduced in 2014. It is one of the swarm-intelligence-based techniques inspired by the hunting strategy of grey wolves, which includes searching, surrounding, and attacking the prey.
  • Lightning Search Algorithm [19]: LSA was proposed in 2015. It is one of the physical-based techniques; it simulates the lightning phenomenon and the mechanism of step leader propagation using the concept of fast particles known as projectiles.
  • Hybrid Heap-Based and Jellyfish Search Algorithm [40]: HBJSA was proposed in 2021. It is a hybrid optimization technique based on HBO and JSO that benefits from the exploration feature of HBO and the exploitation feature of JSO.
  • Rat Swarm Optimizer [24]: RSO was introduced in 2020. It is one of the swarm-intelligence-based algorithms that imitates rats’ behavior in chasing and attacking prey.
  • Ant Colony Optimization [46]: Ant colony optimization (ACO) was introduced in 1999. It is one of the swarm-intelligence-based algorithms; it simulates the foraging behavior of ants, which find food and deposit pheromones on the ground to guide other ants to the food.
  • Biogeography-based Optimizer [16]: BBO was introduced in 2008. It is an evolutionary-based technique closely related to GA and DE. BBO is inspired by the migration behavior of species between habitats.
  • Coronavirus Herd Immunity Optimizer [21]: CHIO was proposed in 2020. It is one of the human-based techniques that mimics the concept of herd immunity to face the coronavirus.
The parameter settings of the selected optimization techniques, including PSO, JSO, and HJSPSO, are tabulated in Table 3.

5.3. Comparison of Optimization Performance

The proposed HJSPSO and the other optimization techniques were implemented in the MATLAB environment on a PC with a 2.7 GHz Core i7 processor and 20 GB of memory. All optimization techniques were executed for 30 runs on each benchmark function to evaluate their effectiveness and robustness. The statistical results, consisting of the mean, standard deviation, and best and worst fitness values for each benchmark function, are tabulated in Table A1 and Table A2 in Appendix A. The mean and standard deviation of fitness are considered the key indicators of the best performance. The lowest mean value is considered the best performance, and the lowest standard deviation is used as a tie-breaker when mean values are equal (the best-performing techniques are highlighted in bold). A hit rate is used to determine the overall best performance by counting how many times an individual optimization technique achieves the best performance score out of the total number of test functions [23]. A fitness value below $10^{-12}$ is normally assumed to be 0 in [43,47] for simplification purposes. However, this criterion can mislead the selection of the best performance and cause confusion in the comparison, especially among very competitive optimization techniques. Therefore, the actual mean values presented in the tables are used. Table 4 and Table 5 simplify the presentation by ranking the techniques based on their performance, where the same rank is given if they share the same mean and standard deviation values. In the tables, the optimization techniques that provide the best solution (i.e., first rank) are highlighted in bold.
Table 4 clearly shows that the exploration capability of HJSPSO is superior to that of its competitors: it achieves the best solution in 22 out of 33 multimodal classical test functions (F18–F50), while the others perform as follows: HBJSA (17/33), JSO (16/33), PSO (9/33), LSA (9/33), ACO (8/33), GWO (6/33), RSO (6/33), BBO (5/33), and CHIO (1/33). At the same time, HJSPSO demonstrates better exploitation capability than eight of the other optimization techniques by achieving the best solution in 10 out of 17 unimodal test functions (F1–F17), while the others perform as follows: RSO (9/17), JSO (7/17), ACO (5/17), PSO (5/17), LSA (4/17), GWO (3/17), BBO (3/17), and CHIO (1/17). In this case, HBJSA shows better exploitation, performing best in 12 of the 17 unimodal test functions compared to HJSPSO's 10. Note that HBJSA and HJSPSO provide similar solutions in eight of these test functions. In other words, HJSPSO has better exploitation than HBJSA in two other functions (F9 and F10), where it is solely the best performer, whereas HBJSA shares the best performance with RSO in three different test functions (F12–F14) and secures the first rank alone in only one function (F5). Therefore, HJSPSO has a good and unique exploitation capability. Apart from the above results, HJSPSO shows superiority in solving the large-scale, shifted, and rotated benchmark test functions (CEC-C06 2019) compared to the other optimization techniques, providing the best solution in three functions out of 10, while the others perform as follows (Table 5): JSO (2/10), RSO (2/10), ACO (2/10), HBJSA (1/10), PSO (0/10), GWO (0/10), LSA (0/10), BBO (0/10), and CHIO (0/10). Although HBJSA is a strong competitor, securing the first and second places in the classical test functions, it is far behind HJSPSO in solving the large-scale test functions.
Overall, HJSPSO outperforms the other optimization techniques by securing the first rank in the classical and large-scale benchmark test functions. This outcome clearly indicates the ability of HJSPSO to efficiently solve unimodal, multimodal, separable, nonseparable, rotated, and shifted composite functions. HJSPSO attains a 64% hit rate on the classical benchmark test functions, higher than all the competing techniques: HBJSA (58%), JSO (46%), RSO (30%), PSO (28%), LSA (26%), ACO (26%), GWO (18%), BBO (16%), and CHIO (6%). On the other hand, HJSPSO attains a 30% hit rate on the large-scale benchmark test functions, higher than JSO (20%), RSO (20%), ACO (20%), HBJSA (10%), PSO (0%), GWO (0%), LSA (0%), BBO (0%), and CHIO (0%). Therefore, HJSPSO can be considered a robust metaheuristic optimization technique based on its ability to explore the search space and exploit unvisited areas to avoid the local optimum and find a better solution efficiently [23].

5.4. Convergence Performance Analysis

Six functions are selected from the benchmark test functions, with two taken at random from each of the three main categories: unimodal (F7 and F9), multimodal (F33 and F50), and large-scale (CEC03 and CEC10). The convergence curves of HJSPSO and the other optimization techniques on the selected benchmark functions are shown in Figure 2. The figure shows that HJSPSO converges faster in most cases than the other optimization techniques. A notable ability of HJSPSO to exploit promising areas can be observed in its solving of the unimodal benchmark functions (F7 and F9). Likewise, HJSPSO demonstrates the ability to escape quickly from the local optimum when solving the multimodal functions (F33, F50, CEC03, and CEC10). It is important to note that PSO performs well at the beginning because it converges quickly, but it shows no improvement in the later period. This behavior can be considered premature convergence, as PSO is unable to exploit the search space for a better solution. On the other hand, JSO shows good exploitation in the unimodal test functions, where the solution improves slowly until the maximum iteration is reached. Therefore, combining JSO and PSO in the form of HJSPSO, supported by some modifications, improves the performance significantly.
A computational time analysis of the results in Figure 2 is carried out to examine the time complexity of HJSPSO compared to the other optimization techniques. Table 6 shows the computational time and convergence point of each optimization technique in solving the selected six benchmark test functions. The results generally show that HJSPSO has a slightly higher computational time than JSO due to the combination with PSO. On a positive note, the computational time of HJSPSO is significantly less than that of PSO. This computational performance is achieved because HJSPSO alternates between the operators of JSO and PSO instead of cascading them. The convergence point is also important, as it gives the actual time taken to obtain the optimal solution. Although LSA shows a high computational burden, it usually converges within the fewest iterations, which leads to a significantly lower time taken if it stops at the convergence point. From this perspective, HJSPSO is comparable to most of the selected optimization techniques while requiring less computational time. Accordingly, HJSPSO gives better results in terms of time taken compared to the other optimization techniques.

5.5. Nonparametric Statistical Test

The performance of metaheuristic techniques is normally evaluated using basic statistics such as the mean, standard deviation, best fitness, and worst fitness, as presented earlier. However, this evaluation method should be complemented by a more rigorous statistical test [48]. The best-performing technique should give a mean value that is as small as possible (for minimization problems) and a standard deviation of 0. It is difficult to decide which technique performs better when one has a slightly smaller mean value but a higher standard deviation. As an alternative, a nonparametric statistical test is used to evaluate the performance of metaheuristic techniques. Such a test is considered more suitable for metaheuristic techniques owing to their stochastic behavior [49]. Nonparametric statistical tests can be divided into two types: (i) pairwise comparisons and (ii) multiple comparisons. A pairwise comparison is used to compare two techniques, whereas multiple comparisons are used for more than two techniques [49]. Two well-known nonparametric tests are used here, namely, the Wilcoxon signed-rank test (pairwise comparison) and the Friedman test (multiple comparisons). Only the large-scale (CEC-C06 2019) benchmark test functions are used to highlight the effectiveness of HJSPSO in this section.
The Wilcoxon signed-rank test is based on a significance level $\alpha$ of 5%. If the p value is higher than $\alpha$, the null hypothesis $H_0$ is accepted, meaning there is no difference between HJSPSO and the compared optimization technique. Otherwise, when the p value is less than or equal to $\alpha$, the alternative hypothesis $H_1$ is accepted, meaning a significant difference exists between the two optimization techniques. Table 7 presents the p values of HJSPSO compared to the nine other optimization techniques in solving the CEC-C06 2019 benchmark test functions. HJSPSO scores p values less than or equal to $\alpha$ (i.e., the alternative hypothesis $H_1$) in most cases, and only a few cases confirm the null hypothesis $H_0$ (bold). This finding indicates that the performance of HJSPSO reported in Section 5.3 is significantly different from that of the existing optimization techniques, especially RSO, for which the p values of all test functions are below 5%. Furthermore, HJSPSO is significantly improved compared to the original JSO and PSO, as the majority of test functions fall under $H_1$.
The Friedman test is likewise used to compare the performance of HJSPSO with all the selected optimization techniques. The lowest value in the Friedman test indicates the best performance among the selected techniques. Table 8 shows the Friedman test scores, with the first ranks in bold. It clearly shows that HJSPSO secures the highest number of first ranks (3/10), which is consistent with the hit rate in the previous subsection. This confirms the previous performance evaluation and supports the conclusion that HJSPSO outperforms the other selected optimization techniques.
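For readers who wish to reproduce such tests, both are available in SciPy; the sketch below uses placeholder arrays in place of the actual per-run fitness results reported in this paper:

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
# placeholder best-fitness values over 30 runs for one benchmark function
hjspso, jso, pso = rng.random(30), rng.random(30), rng.random(30)

# pairwise comparison: Wilcoxon signed-rank test at alpha = 0.05
stat, p = wilcoxon(hjspso, jso)
print("Wilcoxon p =", p, "->", "H1 (significant)" if p <= 0.05 else "H0")

# multiple comparison: Friedman test across the three techniques
chi2, p_f = friedmanchisquare(hjspso, jso, pso)
print("Friedman p =", p_f)
```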

5.6. Case Study: Traveling Salesman

A simple traveling salesman problem (TSP) is used to find the shortest route for visiting all cities $u$, as given in Table 9 [19]. The control variable in this problem decides whether or not to travel between the i-th city ($u_i$) and the j-th city ($u_j$), as given by the following expression:
$$c_{ij} = \begin{cases} 1, & \text{if there is a path between } u_i \text{ and } u_j \\ 0, & \text{otherwise} \end{cases} \quad (15)$$
The objective is to minimize the total traveling distance, subject to the condition of visiting each city only once and then returning to the initial city where the trip began [19] as follows:
$$f_{tsp} = \min \sum_{i=1}^{n_c} \sum_{j=1, \, j \ne i}^{n_c} d_{ij} \, c_{ij} \quad (16)$$
$$d_{ij} = \sqrt{ \left[ u_i(x) - u_j(x) \right]^2 + \left[ u_i(y) - u_j(y) \right]^2 } \quad (17)$$
subject to
$$\sum_{i=1, \, i \ne j}^{n_c} c_{ij} = 1, \quad \forall j \quad (18)$$
$$\sum_{j=1, \, j \ne i}^{n_c} c_{ij} = 1, \quad \forall i \quad (19)$$
$$\sum_{i \in Q} \sum_{j \in Q, \, j \ne i} c_{ij} \le |Q| - 1, \quad \forall Q \subset \{1, \ldots, n_c\}, \; |Q| \ge 2 \quad (20)$$
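When a metaheuristic is applied to this problem, a candidate solution is typically encoded as a permutation of the cities, which satisfies the visit-once and subtour constraints (18)–(20) by construction; the sketch below evaluates the objective (16)–(17) for such a tour, using placeholder coordinates rather than the Table 9 data:

```python
import numpy as np

def tour_length(order, coords):
    """Closed-tour distance: sums Euclidean d_ij (17) along the tour (16).

    order:  permutation of city indices, each city visited exactly once
    coords: array of shape (n_c, 2) holding the (x, y) position of each city
    """
    total = 0.0
    for k in range(len(order)):
        i, j = order[k], order[(k + 1) % len(order)]  # wrap back to the start city
        total += np.hypot(*(coords[i] - coords[j]))
    return total

# example with 20 hypothetical cities
coords = np.random.rand(20, 2) * 10
print(tour_length(np.random.permutation(20), coords))
```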
The optimization techniques were executed for 30 runs as in the previous analysis, but the total number of iterations $T$ and the population size $N$ were set to 500 and 50, respectively. The results show that HJSPSO outperforms the other optimization techniques, giving the lowest best, worst, and mean fitness values (bold), as tabulated in Table 10. However, HJSPSO is behind CHIO and HBJSA in terms of standard deviation. The performance comparison is also presented in a statistical box plot in Figure 3, where the red + represents outliers. In this case, HJSPSO shows the best fitness within the box without any outliers. Therefore, HJSPSO can be considered the best-performing technique among those compared. Figure 4 shows the shortest path obtained by the optimization techniques, where HJSPSO provides the shortest distance at 36.12. The solution is the same for GWO and ACO, but their starting points are different (HJSPSO at city 3, ACO at city 7, and GWO at city 13). Nevertheless, the solutions provided by HJSPSO are more reliable than those of GWO and ACO because HJSPSO gives significantly lower mean and standard deviation values.

6. Conclusions

This paper presents a novel HJSPSO that is based on JSO, to benefit from its exploitation (local search) capability, and adopts a PSO operator to tackle exploration (global search). The movement of following the ocean current in JSO is replaced with the PSO operator, while the movement inside the swarm retains the JSO operator with some modifications. A time control mechanism is used to switch between the two operators to gain a good balance between exploration and exploitation. The effectiveness of HJSPSO was tested using a set of 50 classical and 10 large-scale (CEC-C06 2019) benchmark test functions and compared with nine well-known metaheuristic optimization techniques, including PSO and JSO. In addition, a TSP case study was used to demonstrate the effectiveness of HJSPSO in solving a nonconvex optimization problem. The results show that HJSPSO improves exploration and exploitation compared to the existing JSO and PSO techniques, ultimately securing the first rank in 64% and 30% of the classical and large-scale benchmark test functions, respectively. The Wilcoxon signed-rank and Friedman tests also confirm that HJSPSO is significantly better at obtaining the optimal solution for complex optimization problems (large-scale benchmark test functions). In the TSP case study, HJSPSO outperforms the other selected optimization techniques, ranking first in finding the shortest route between 20 cities with the lowest mean and best fitness at 38.82 and 36.12, respectively. The results clearly show that HJSPSO is a robust technique that can be applied to most optimization problems. Nevertheless, this work can be extended by conducting a sensitivity analysis on the parameter settings to attain the highest performance. Afterwards, HJSPSO can be applied in real-world settings, especially in solving nonlinear and nonconvex power system problems, such as optimal power flow, transmission line planning, electric vehicle scheduling, and economic load dispatch.

Author Contributions

Conceptualization, H.M.N. and A.A.I.; methodology, H.M.N. and A.A.I.; software, H.M.N. and A.A.I.; validation, A.A.I., M.A.A.M.Z., M.A.Z. and H.S.; formal analysis, H.M.N.; investigation, H.M.N., A.A.I., M.A.A.M.Z., M.A.Z. and H.S.; resources, H.M.N. and A.A.I.; data curation, H.M.N.; writing—original draft preparation, H.M.N.; writing—review and editing, H.M.N., A.A.I., M.A.A.M.Z., M.A.Z. and H.S.; visualization, H.M.N., A.A.I. and H.S.; supervision, A.A.I., M.A.A.M.Z. and M.A.Z.; project administration, A.A.I.; funding acquisition, A.A.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universiti Kebangsaan Malaysia under grant GP-2021-K017238.

Data Availability Statement

All required data are described in the article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ACO: Ant colony optimization
BBO: Biogeography-based optimizer
CHIO: Coronavirus herd immunity optimizer
CNN: Convolutional neural network
DE: Differential evolution
FA: Firefly algorithm
GA: Genetic algorithm
GWO: Grey wolf optimizer
HBO: Heap-based optimizer
HBJSA: Hybrid heap-based and jellyfish search algorithm
HGSPSO: Hybrid gravitational search particle swarm optimization
HJSPSO: Hybrid jellyfish search and particle swarm optimization
HPSO-DE: Hybrid algorithm based on PSO and DE
HPSSHO: Hybrid particle swarm optimization spotted hyena optimizer
JSO: Jellyfish search optimizer
LSA: Lightning search algorithm
PSO: Particle swarm optimization
PS-FW: Hybrid algorithm based on particle swarm and fireworks
RSO: Rat swarm optimizer
TLBO: Teaching–learning-based optimization
TSP: Traveling salesman problem

Appendix A

Table A1. A performance comparison of classical test functions.
Function | Indicator | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO | HJSPSO
F1Mean0000000000
Std.0000002.1909000
Best0000000000
Worst0000005000
F2Mean00.06666701000.2333300.0333330
Std.00.2537101.0828000.5040100.182570
Best0000000000
Worst0104002010
F3Mean 4.5 × 10 134 1.6 × 10 258 6.7 × 10 262 1.5 × 10 151 00 3.5 × 10 170 0.0242730.172840
Std. 1.4 × 10 133 00 4.1 × 10 151 0000.0087280.224560
Best 4.2 × 10 136 9.3 × 10 274 7.7 × 10 268 1.0 × 10 161 00 2.5 × 10 174 0.011924 1.2 × 10 11 0
Worst 7.2 × 10 133 4.7 × 10 257 1.4 × 10 260 1.5 × 10 150 00 4.2 × 10 169 0.041231.13170
F4Mean 3.8 × 10 135 6.6 × 10 259 5.5 × 10 264 1.2 × 10 150 00 5.0 × 10 171 0.0034110.0207980
Std. 7.5 × 10 135 00 6.1 × 10 150 0000.0013010.023490
Best 9.4 × 10 138 7.1 × 10 269 1.1 × 10 269 1.2 × 10 160 00 1.0 × 10 174 0.001423 1.6 × 10 8 0
Worst 3.0 × 10 134 2.0 × 10 257 6.8 × 10 263 3.3 × 10 149 00 5.2 × 10 170 0.0057140.0889230
F5Mean0.0001770.0018990.0000650.0110810.0000110.0000230.0020010.0007680.082110.000113
Std.0.0000620.0006350.0000410.002462 7.0 × 10 6 0.0000230.0009940.0002030.0192750.000052
Best0.0000930.0008130.0000120.006049 1.2 × 10 6 1.2 × 10 6 0.0007440.0003730.0414710.000045
Worst0.0003380.0033810.0001690.0157180.0000270.0000840.0054500.0011880.112180.000255
F6Mean00 1.5 × 10 9 000.0001760 1.7 × 10 10 0.0000970
Std.00 1.5 × 10 9 000.0001860 2.7 × 10 10 0.0001130
Best00 3.7 × 10 11 00 2.2 × 10 6 0 2.3 × 10 14 5.5 × 10 6 0
Worst00 5.3 × 10 9 000.0006110.76207 1.2 × 10 9 0.0004270
F7Mean−1−1−1−1−1−0.9987−1−1−0.9916−1
Std.00 3.8 × 10 9 000.001276000.0310890
Best−1−1−1−1−1−0.99995−1−1−1−1
Worst−1−1−1−1−1−0.9938−1−1−0.83808−1
F8Mean0000000 5.8 × 10 13 0.0000250
Std.0000000 1.5 × 10 12 0.0000300
Best0000000 3.3 × 10 20 9.2 × 10 8 0
Worst0000000 6.9 × 10 12 0.0001320
F9Mean 7.8 × 10 17 5.5 × 10 12 0.13307 5.5 × 10 12 2.8 × 10 7 1.54070.0012420.0072840.4057 4.8 × 10 18
Std. 3.7 × 10 16 9.0 × 10 12 0.406 6.6 × 10 12 3.7 × 10 7 0.29540.0049560.0068380.30376 1.3 × 10 17
Best 1.5 × 10 21 6.3 × 10 14 3.6 × 10 7 2.1 × 10 16 1.6 × 10 10 0.29806 2.5 × 10 7 0.0001470.065115 1.9 × 10 23
Worst 2.0 × 10 15 4.8 × 10 11 1.3345 2.5 × 10 11 1.5 × 10 6 2.38170.0250510.0284741.0787 6.4 × 10 17
F10Mean−50−50−50−50−50−19.3568−50−50−49.952−50
Std. 3.0 × 10 14 4.6 × 10 14 2.5 × 10 6 7.8 × 10 14 1.7 × 10 6 10.2918 1.0 × 10 13 3.0 × 10 11 0.055344 2.8 × 10 14
Best−50−50−50−50−50−38.6815−50−50−49.9988−50
Worst−50−50−50−50−500.48768−50−50−49.7493−50
F11Mean−210−210−204.5386−210−206.7294−6.0603−210−209.9979−206.3257−210
Std. 4.8 × 10 11 6.3 × 10 12 16.6674 2.2 × 10 9 0.92096.1459 6.3 × 10 12 0.000573.8949 4.5 × 10 9
Best−210−210−209.9999−210−209.0753−22.3292−210−209.9992−209.7321−210
Worst−210−210−154.3612−210−204.67054.6506−210−209.9964−192.7169−210
F12Mean 2.7 × 10 112 1.1 × 10 278 4.9 × 10 324 6.8 × 10 242 00 4.0 × 10 209 2.3 × 10 7 3.2438 2.3 × 10 216
Std. 9.1 × 10 112 000000 1.8 × 10 7 1.76010
Best 4.0 × 10 118 1.9 × 10 292 0 6.8 × 10 254 00 1.3 × 10 217 2.6 × 10 8 0.43572 3.7 × 10 227
Worst 4.3 × 10 111 2.77 × 10 277 1.5 × 10 323 2.0 × 10 240 00 1.1 × 10 207 7.9 × 10 7 7.1901 6.8 × 10 215
F13Mean 2.57 × 10 6 0.000212 2.1 × 10 6 0.000038000.0001070.0153930.44318 2.4 × 10 7
Std. 7.3 × 10 6 0.000277 2.6 × 10 6 0.00007400 2.2 × 10 5 0.0044260.24949 5.7 × 10 7
Best 2.2 × 10 8 0.000031 1.6 × 10 8 4.8 × 10 6 00 5.3 × 10 5 0.0070040.085003 3.2 × 10 105
Worst0.0000400.001514 9.6 × 10 6 0.000381000.0001340.0222891.0397 3.0 × 10 6
F14Mean 4.2 × 10 70 0.000021 1.7 × 10 150 1.1 × 10 6 00 2.9 × 10 102 0.0445770.19129 5.6 × 10 177
Std. 1.9 × 10 69 0.000077 2.2 × 10 150 4.0 × 10 6 00 6.1 × 10 102 0.0088460.0794410
Best 2.9 × 10 74 1.6 × 10 13 1.6 × 10 152 1.1 × 10 26 00 9.1 × 10 104 0.0294450.000365 9.1 × 10 180
Worst 1.1 × 10 68 0.000347 9.1 × 10 150 0.00002100 3.3 × 10 101 0.0637030.34191 5.9 × 10 176
F15Mean 3.7 × 10 133 2.1 × 10 260 2.1 × 10 263 6.6 × 10 149 00 2.3 × 10 168 0.344512.3140
Std. 1.0 × 10 132 00 3.5 × 10 148 0000.117444.70750
Best 6.3 × 10 135 1.3 × 10 272 8.9 × 10 268 2.8 × 10 160 00 1.1 × 10 172 0.17032 1.5 × 10 6 0
Worst 5.6 × 10 132 3.7 × 10 259 3.0 × 10 262 1.9 × 10 147 00 4.5 × 10 167 0.6392425.45990
F16Mean0.0582729.614426.08253.708915.54628.319421.644159.0824134.02820.3876
Std.0.1910724.2660.732513.817211.26230.3711228.084237.739165.87680.32964
Best0.0000180.9317824.98430.0001410.00003527.78260.00065325.0157.064819.7712
Worst1.027676.678427.908715.160828.172428.9608111.2683142.212265.622821.3463
F17Mean0.010680.666670.666670.666670.543690.666670.666670.98422.43860.66667
Std.0.05605 3.2 × 10 16 2.3 × 10 8 6.0 × 10 16 0.18008 4.3 × 10 8 2.1 × 10 17 0.832491.6745 8.4 × 10 16
Best 1.4 × 10 9 0.666670.666670.666670.0706220.666670.666670.668870.0873390.66667
Worst0.307170.666670.666670.666670.683810.666670.666674.81285.54490.66667
F18Mean0.9981.32912.43750.9980.9981.92392.07812.30970.9980.998
Std. 1.1 × 10 16 0.602112.9447 1.4 × 10 16 1.1 × 10 16 1.00682.58522.5508 2.7 × 10 12 1.1 × 10 16
Best0.9980.9980.9980.9980.9980.9980.9980.9980.9980.998
Worst0.9982.982110.76320.9980.9982.982110.763210.76320.9980.998
F19Mean0.397890.397890.397890.397890.397890.577240.397890.397890.397890.39789
Std.00 4.8 × 10 8 000.15066 2.9 × 10 14 1.9 × 10 15 3.3 × 10 8 0
Best0.397890.397890.397890.397890.397890.398050.397890.397890.397890.39789
Worst0.397890.397890.397890.397890.397890.937250.397890.397890.397890.39789
F20Mean00000000 9.6 × 10 6 0
Std.000000000.0000470
Best0000000000
Worst000000000.0002590
F21Mean00 5.0 × 10 9 000.0001380 3.8 × 10 11 3.7 × 10 6 0
Std.00 5.5 × 10 9 000.0001850 9.8 × 10 11 0.0000130
Best00 1.4 × 10 10 00 3.2 × 10 6 00 1.3 × 10 10 0
Worst00 2.1 × 10 8 000.0007610 3.9 × 10 10 0.0000680
F22Mean4.587242.9158057.93970016.682121.87593.23920
Std.4.75212.9218013.4727004.12046.15091.75120
Best025.8689037.8084009.949612.95550.0414110
Worst13.929470.642080.59150025.868944.786.20760
F23Mean−8097.69−6548.23−5987.234−8279.335−10486.3−5822.831−8765.300−9168.879−11631.16−8249.76
Std.596.0855862.8825627.9572622.63471663.726701.2344635.8039504.1961190.2833460.9557
Best−9209.117−8339.086−7508.985−9544.808−12150.5−6951.771−9915.361−9959.251−12029.93−9248.716
Worst−6651.382−5101.433−4716.264−7156.004−7486.446−3579.892−7572.415−8123.383−11300.07−7432.332
F24Mean−1.8013−1.8013−1.8013−1.8013−1.8013−1.4896−1.8013−1.8013−1.8013−1.8013
Std. 9.0 × 10 16 9.0 × 10 16 4.6 × 10 8 9.0 × 10 16 9.0 × 10 16 0.27273 9.0 × 10 16 9.5 × 10 16 7.8 × 10 14 9.0 × 10 16
Best−1.8013−1.8013−1.8013−1.8013−1.8013−1.7815−1.8013−1.8013−1.8013−1.8013
Worst−1.8013−1.8013−1.8013−1.8013−1.8013−0.94607−1.8013−1.8013−1.8013−1.801
F25Mean−4.6793−4.538−4.5439−4.6003−4.6877−2.3764−4.5908−4.6491−4.6877−4.6701
Std.0.0169910.188970.20330.089854 1.8 × 10 15 0.315570.08970.054374 6.9 × 10 8 0.047281
Best−4.6877−4.6877−4.6876−4.6877−4.6877−2.8405−4.6877−4.6877−4.6877−4.6877
Worst−4.6459−3.8658−3.8446−4.3331−4.6877−1.7803−4.3331−4.4831−4.6877−4.5377
F26Mean−9.5319−8.8762−7.9497−8.9966−9.6602−3.7965−9.3772−9.2552−9.6589−9.5186
Std.0.0945230.575641.03230.30029 4.4 × 10 6 0.587150.169550.260290.0020790.093213
Best−9.6602−9.5527−9.3656−9.4641−9.6602−5.0334−9.6602−9.6176−9.6602−9.6184
Worst−9.3281−6.9144−5.7263−8.3181−9.6591−2.8528−9.0305−8.5856−9.6526−9.3356
F27Mean0000.002911000.004400.0026580
Std.0000.01108000.013300.0085710
Best00000000 1.5 × 10 10 0
Worst0000.043671000.04367100.0436710
F28Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Std. 6.8 × 10 16 6.8 × 10 16 2.4 × 10 10 6.8 × 10 16 6.8 × 10 16 7.8 × 10 6 6.8 × 10 16 6.1 × 10 16 8.7 × 10 10 6.8 × 10 16
Best−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Worst−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
F29Mean00000000 1.5 × 10 6 0
Std.00000000 3.9 × 10 6 0
Best0000000000
Worst000000000.0000170
F30Mean0000000 7.1 × 10 9 0.0011550
Std.0000000 1.3 × 10 8 0.0015760
Best0000000 4.9 × 10 15 9.2 × 10 8 0
Worst0000000 6.8 × 10 8 0.0079480
F31Mean−186.7309−186.7309−186.7284−186.7309−186.7309−186.7286−186.7309−186.7309−186.7308−186.7309
Std. 1.8 × 10 14 4.0 × 10 14 0.009404 3.4 × 10 14 2.0 × 10 14 0.007582 3.9 × 10 14 3.9 × 10 14 0.000254 2.8 × 10 14
Best−186.7309−186.7309−186.7309−186.7309−186.7309−186.7309−186.7309−186.7309−186.7309−186.7309
Worst−186.7309−186.7309−186.6817−186.7309−186.7309−186.6946−186.7309−186.7309−186.7297−186.7309
F32Mean3333333333
Std. 2.0 × 10 15 2.0 × 10 15 1.0 × 10 7 1.3 × 10 15 1.7 × 10 15 3.6 × 10 8 1.3 × 10 15 3.4 × 10 15 0.000013 1.3 × 10 15
Best3333333333
Worst333333333.00013
F33Mean0.0003070.0003680.0024350.0003380.0003070.0006750.0016750.0003540.0006770.000307
Std. 1.9 × 10 19 0.0002440.0060860.000167 5.6 × 10 9 0.0002670.0050830.000060.000123 1.2 × 10 19
Best0.0003070.0003070.0003070.0003070.0003070.0003540.0003070.0003070.0003170.000307
Worst0.0003070.0015940.0203630.0012230.0003080.00130.0203630.0005140.000870.000307
| Function | Indicator | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO | HJSPSO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F34 | Mean | −10.1532 | −5.9744 | −9.6449 | −8.3806 | −10.1532 | −1.3749 | −6.2478 | −7.228 | −10.1528 | −10.1532 |
| | Std. | 7.2 × 10^−15 | 3.37 | 1.5509 | 2.5859 | 7.2 × 10^−15 | 0.77448 | 3.7292 | 3.2898 | 0.000973 | 7.2 × 10^−15 |
| | Best | −10.1532 | −10.1532 | −10.1532 | −10.1532 | −10.1532 | −3.1593 | −10.1532 | −10.1532 | −10.1532 | −10.1532 |
| | Worst | −10.1532 | −2.6305 | −5.0552 | −2.6305 | −10.1532 | −0.4962 | −2.6305 | −2.6305 | −10.1489 | −10.1532 |
| F35 | Mean | −10.4029 | −7.6896 | −10.2257 | −8.1426 | −10.4029 | −1.5068 | −7.9193 | −7.7295 | −10.4024 | −10.4029 |
| | Std. | 1.8 × 10^−15 | 3.4513 | 0.97043 | 3.2909 | 1.6 × 10^−15 | 1.3311 | 3.5796 | 3.5849 | 0.00173 | 1.8 × 10^−15 |
| | Best | −10.4029 | −10.4029 | −10.4029 | −10.4029 | −10.4029 | −7.7302 | −10.4029 | −10.4029 | −10.4029 | −10.4029 |
| | Worst | −10.4029 | −2.7519 | −5.0877 | −2.7659 | −10.4029 | −0.58241 | −2.7519 | −2.7519 | −10.3939 | −10.4029 |
| F36 | Mean | −10.5364 | −7.9068 | −10.5364 | −9.4643 | −10.5364 | −1.6671 | −7.0608 | −7.9283 | −10.5349 | −10.5364 |
| | Std. | 1.8 × 10^−15 | 3.7907 | 0.000015 | 2.4485 | 1.8 × 10^−15 | 0.88478 | 3.8008 | 3.5144 | 0.003992 | 1.7 × 10^−15 |
| | Best | −10.5364 | −10.5364 | −10.5364 | −10.5364 | −10.5364 | −4.4413 | −10.5364 | −10.5364 | −10.5364 | −10.5364 |
| | Worst | −10.5364 | −2.4217 | −10.5363 | −3.8354 | −10.5364 | −0.67852 | −2.4273 | −1.6766 | −10.515 | −10.5364 |
| F37 | Mean | 0.00485 | 0.095574 | 0.33591 | 0.014761 | 0.016679 | 4.0769 | 0.1465 | 0.11938 | 0.11862 | 0.004616 |
| | Std. | 0.001181 | 0.15826 | 0.5384 | 0.036154 | 0.019922 | 5.7771 | 0.18854 | 0.13962 | 0.09118 | 0.001398 |
| | Best | 0.002045 | 2.9 × 10^−6 | 6.4 × 10^−6 | 7.1 × 10^−6 | 0.001936 | 0.06555 | 5.6 × 10^−6 | 0.00062 | 0.00169 | 0.000088 |
| | Worst | 0.006389 | 0.47231 | 1.5377 | 0.13066 | 0.090347 | 28.3114 | 0.47231 | 0.47231 | 0.37234 | 0.006388 |
| F38 | Mean | 0.000114 | 0.000221 | 0.030126 | 0.000192 | 0.005479 | 17.2902 | 0.000938 | 0.001726 | 0.010195 | 0.000083 |
| | Std. | 0.000173 | 0.000177 | 0.16084 | 0.000173 | 0.003202 | 24.8812 | 0.003020 | 0.002537 | 0.007209 | 0.000093 |
| | Best | 5.5 × 10^−7 | 1.0 × 10^−9 | 0.000026 | 6.4 × 10^−9 | 0.001122 | 0.38614 | 3.1 × 10^−14 | 6.9 × 10^−7 | 0.000441 | 7.3 × 10^−7 |
| | Worst | 0.000831 | 0.000428 | 0.88168 | 0.000419 | 0.011985 | 124.7192 | 0.015394 | 0.008246 | 0.030697 | 0.000325 |
| F39 | Mean | −3.8628 | −3.8628 | −3.8628 | −3.8628 | −3.8628 | −2.7081 | −3.8628 | −3.8628 | −3.8628 | −3.8628 |
| | Std. | 2.7 × 10^−15 | 2.7 × 10^−15 | 0.002405 | 2.7 × 10^−15 | 2.7 × 10^−15 | 0.924 | 2.7 × 10^−15 | 2.5 × 10^−15 | 5.2 × 10^−10 | 2.7 × 10^−15 |
| | Best | −3.8628 | −3.8628 | −3.8628 | −3.8628 | −3.8628 | −3.8537 | −3.8628 | −3.8628 | −3.8628 | −3.8628 |
| | Worst | −3.8628 | −3.8628 | −3.8549 | −3.8628 | −3.8628 | −0.57993 | −3.8628 | −3.8628 | −3.8628 | −3.8628 |
| F40 | Mean | −3.3224 | −3.2588 | −3.2579 | −3.2469 | −3.3224 | −2.1921 | −3.2826 | −3.3065 | −3.3224 | −3.3224 |
| | Std. | 5.9 × 10^−16 | 0.060487 | 0.067123 | 0.058427 | 6.4 × 10^−16 | 0.44001 | 0.057155 | 0.041215 | 3.0 × 10^−8 | 5.7 × 10^−16 |
| | Best | −3.3224 | −3.3224 | −3.3224 | −3.3224 | −3.3224 | −2.9142 | −3.3224 | −3.3224 | −3.3224 | −3.3224 |
| | Worst | −3.3224 | −3.2032 | −3.1376 | −3.2032 | −3.3224 | −1.3712 | −3.2032 | −3.2032 | −3.3224 | −3.3224 |
| F41 | Mean | 0 | 0.010826 | 0 | 0.008372 | 0 | 0 | 0.000986 | 0.062207 | 0.14432 | 0 |
| | Std. | 0 | 0.012326 | 0 | 0.011661 | 0 | 0 | 0.003077 | 0.022368 | 0.14044 | 0 |
| | Best | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027649 | 1.2 × 10^−7 | 0 |
| | Worst | 0 | 0.051369 | 0 | 0.044263 | 0 | 0 | 0.012321 | 0.12754 | 0.46368 | 0 |
| F42 | Mean | 4.9 × 10^−15 | 0.15683 | 8.0 × 10^−15 | 1.8126 | 8.9 × 10^−16 | 2.6 × 10^−15 | 5.9 × 10^−15 | 0.040649 | 0.11434 | 3.3 × 10^−15 |
| | Std. | 1.2 × 10^−15 | 0.41725 | 0 | 1.1327 | 0 | 1.8 × 10^−15 | 1.8 × 10^−15 | 0.007439 | 0.097559 | 1.5 × 10^−15 |
| | Best | 4.4 × 10^−15 | 8.0 × 10^−15 | 8.0 × 10^−15 | 8.0 × 10^−15 | 8.9 × 10^−16 | 8.9 × 10^−16 | 4.4 × 10^−16 | 0.025084 | 4.7 × 10^−6 | 4.4 × 10^−16 |
| | Worst | 8.0 × 10^−15 | 1.5017 | 8.0 × 10^−15 | 3.9346 | 8.9 × 10^−16 | 4.4 × 10^−15 | 8.0 × 10^−15 | 0.055848 | 0.31791 | 4.0 × 10^−15 |
| F43 | Mean | 3.3 × 10^−27 | 0.003456 | 0.009783 | 0.27412 | 3.6 × 10^−9 | 0.31638 | 0.027645 | 0.000057 | 0.002775 | 1.3 × 10^−27 |
| | Std. | 1.7 × 10^−26 | 0.018927 | 0.008449 | 0.6863 | 2.7 × 10^−9 | 0.12895 | 0.081371 | 0.000023 | 0.002182 | 2.9 × 10^−27 |
| | Best | 2.1 × 10^−30 | 1.6 × 10^−32 | 3.9 × 10^−8 | 2.1 × 10^−32 | 9.0 × 10^−10 | 0.091686 | 1.6 × 10^−32 | 0.000019 | 2.2 × 10^−9 | 1.7 × 10^−30 |
| | Worst | 9.2 × 10^−26 | 0.10367 | 0.039231 | 3.4496 | 1.2 × 10^−8 | 0.78704 | 0.41467 | 0.000119 | 0.008432 | 1.6 × 10^−26 |
| F44 | Mean | 0.006036 | 0.003383 | 0.1448 | 0.006264 | 2.7 × 10^−8 | 2.7347 | 4.1 × 10^−32 | 0.000697 | 0.036811 | 0.029482 |
| | Std. | 0.022972 | 0.018532 | 0.10267 | 0.023854 | 1.6 × 10^−8 | 0.055416 | 1.1 × 10^−31 | 0.000236 | 0.034919 | 0.051025 |
| | Best | 2.2 × 10^−29 | 1.5 × 10^−33 | 2.7 × 10^−7 | 2.5 × 10^−32 | 5.3 × 10^−9 | 2.6245 | 1.5 × 10^−33 | 0.000278 | 5.3 × 10^−12 | 2.0 × 10^−22 |
| | Worst | 0.090543 | 0.1015 | 0.38505 | 0.097371 | 7.2 × 10^−8 | 2.8429 | 6.1 × 10^−31 | 0.001363 | 0.10215 | 0.14521 |
| F45 | Mean | −1.0809 | −1.0809 | −1.0809 | −1.0809 | −1.0809 | −0.73561 | −1.0809 | −1.0494 | −1.0809 | −1.0809 |
| | Std. | 4.5 × 10^−16 | 4.5 × 10^−16 | 3.8 × 10^−9 | 4.5 × 10^−16 | 4.5 × 10^−16 | 0.23848 | 4.5 × 10^−16 | 0.058208 | 0.000013 | 4.5 × 10^−16 |
| | Best | −1.0809 | −1.0809 | −1.0809 | −1.0809 | −1.0809 | −1.0661 | −1.0809 | −1.0809 | −1.0809 | −1.0809 |
| | Worst | −1.0809 | −1.0809 | −1.0809 | −1.0809 | −1.0809 | −0.098209 | −1.0809 | −0.94563 | −1.0809 | −1.0809 |
| F46 | Mean | −1.5 | −1.1997 | −1.2323 | −1.2909 | −1.5 | −0.21821 | −1.3641 | −1.0043 | −1.4064 | −1.5 |
| | Std. | 6.8 × 10^−16 | 0.28879 | 0.29904 | 0.28109 | 6.8 × 10^−16 | 0.22497 | 0.25189 | 0.36935 | 0.21175 | 6.8 × 10^−16 |
| | Best | −1.5 | −1.5 | −1.5 | −1.5 | −1.5 | −0.90126 | −1.5 | −1.5 | −1.5 | −1.5 |
| | Worst | −1.5 | −0.73607 | −0.57409 | −0.79773 | −1.5 | −0.011193 | −0.79782 | −0.51319 | −0.90597 | −1.5 |
| F47 | Mean | −0.97768 | −0.71091 | −0.62771 | −0.58377 | −0.89084 | −0.000724 | −0.89442 | −0.56605 | −0.77381 | −0.72769 |
| | Std. | 0.35818 | 0.36188 | 0.36267 | 0.27362 | 0.2944 | 0.001587 | 0.24859 | 0.24859 | 0.093677 | 0.22809 |
| | Best | −1.5 | −1.5 | −1.5 | −1.5 | −1.4993 | −0.006644 | −1.5 | −0.79769 | −0.96436 | −1.5 |
| | Worst | −0.46585 | −0.27494 | −0.13427 | −0.27494 | −0.41215 | 8.0 × 10^−7 | −0.35577 | −0.14546 | −0.51318 | −0.27494 |
| F48 | Mean | 0 | 0 | 0.000029 | 0 | 1.9 × 10^−14 | 0.018916 | 5.5 × 10^−18 | 9.8 × 10^−6 | 0.007613 | 0 |
| | Std. | 0 | 0 | 0.000054 | 0 | 9.4 × 10^−14 | 0.030807 | 3.0 × 10^−17 | 0.000026 | 0.009496 | 0 |
| | Best | 0 | 0 | 7.0 × 10^−8 | 0 | 0 | 0.000327 | 0 | 3.3 × 10^−10 | 0.00021 | 0 |
| | Worst | 0 | 0 | 0.000194 | 0 | 5.1 × 10^−13 | 0.1483 | 1.7 × 10^−16 | 0.000121 | 0.03329 | 0 |
| F49 | Mean | 6.6 × 10^−28 | 463.6839 | 75.4101 | 158.0587 | 152.2686 | 1842.241 | 91.0691 | 231.8756 | 7.5535 | 6.5 × 10^−6 |
| | Std. | 4.1 × 10^−28 | 1188.208 | 167.4468 | 291.4037 | 273.25 | 1737.654 | 200.0792 | 324.0105 | 6.1844 | 0.000035 |
| | Best | 0 | 0 | 0.004088 | 0 | 0.000025 | 92.7308 | 1.1 × 10^−25 | 5.0 × 10^−14 | 0.16645 | 0 |
| | Worst | 1.0 × 10^−27 | 5066.931 | 692.4573 | 677.3945 | 692.4565 | 6188.255 | 677.3945 | 692.4565 | 29.5654 | 0.000194 |
| F50 | Mean | 7.1 × 10^−28 | 528.6916 | 81.3611 | 67.7395 | 426.0675 | 1824.006 | 63.7245 | 209.8042 | 6.1735 | 5.3 × 10^−28 |
| | Std. | 6.6 × 10^−28 | 1084.965 | 65.7652 | 206.6924 | 302.0439 | 2047.863 | 170.1124 | 317.1288 | 5.3754 | 5.2 × 10^−28 |
| | Best | 0 | 0 | 10.5397 | 0 | 3.1356 | 72.1616 | 2.8 × 10^−26 | 2.2 × 10^−14 | 0.23867 | 0 |
| | Worst | 3.4 × 10^−27 | 4348.837 | 692.4587 | 677.3945 | 692.4565 | 6139.581 | 692.4565 | 692.4565 | 23.6686 | 1.6 × 10^−27 |
| Number of best hits | | 23 | 14 | 9 | 13 | 29 | 15 | 13 | 8 | 3 | 32 |
| Hit rate (%) | | 46 | 28 | 18 | 26 | 58 | 30 | 26 | 16 | 6 | 64 |
Table A2. A performance comparison of CEC-C06 2019 test functions.
Function | Indicator | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO | HJSPSO
CEC01Mean1897.469347.033471.47357110.0521130345.2575567.76215989547.7927
Std.2867.53810741.981809.45510272.97 4.2 × 10 10 030030.0179953.371323728103.9644
Best7.89598.66312.623811397.1488278.5102329903.21
Worst12599.6835716.129008.05241270.071194955.56338853.25778371465.4388
CEC02Mean158.8989211.8816182.3192276.04124.95434.3106334.29274.26381456.03722.8269
Std.71.789598.4277157.820988.46750.162050.18938130.1392.8797336.342718.0358
Best28.416841.0255.359129.46014.2464.21954.2181140.061809.63884.2165
Worst347.462413.0268607.7932468.465555689.29562.20112146.50566.2613
CEC03Mean1.70181.60561.79671.61924.3164.74394.61792.24631.82731.3745
Std.0.389581.15571.15081.15050.48131.09330.51272.17070.280450.1407
Best1.4092111.40913.5061.56143.33811.40911.47131
Worst2.81967.71196.65947.71095.337.45245.38987.71042.53961.6115
CEC04Mean5.22716.620811.48924.547315.16158.79588.00119.35779.43728.7707
Std.2.29369.03685.939910.64683.106610.01956.99283.76472.58523.1094
Best2.07855.97482.99726.96989.866241.665512.98995.42163.9849
Worst11.944539.803324.727446.76822.485979.595572.056118.909215.577815.9244
CEC05Mean1.03721.1181.34181.12541.071537.02381.04001.08651.08321.0343
Std.0.023940.0768110.218810.0733220.0378149.0680.09540.046960.0465030.023769
Best1.00741.02961.08071.01721.008522.889711.01971.01141
Worst1.09351.35681.79451.37611.142566.79161.53561.21431.18411.0935
CEC06Mean1.12171.65651.67522.9911.03897.30741.30681.32172.56751.1953
Std.0.218170.878210.654291.30460.0333070.990910.46340.60390.486890.39333
Best111.0791.08131.00425.674111.00831.68621
Worst1.9654.13443.40225.52461.11999.69552.51433.49393.64442.5774
CEC07Mean176.394780.0061506.8545770.3265640.87521231.502103.99676.6212310.6974309.9096
Std.157.7418226.8214243.8092325.8487119.8646199.7113108.2556299.134198.4024145.5422
Best1.1249416.05991.4188126.5957388.7151837.71977.89244.602338.27681.2498
Worst544.36411284.3861073.911705.947855.20981662.029506.73671138.152508.1458593.3495
CEC08Mean2.29953.45443.07413.40373.77884.53512.28043.35043.32752.2913
Std.0.430680.573270.605970.507270.19740.20130.437760.533450.262030.41965
Best1.24092.43562.2012.00713.13424.18571.68912.05062.8341.6462
Worst3.14374.4754.5244.47854.09835.08613.21394.40733.68963.1452
CEC09Mean1.07751.13291.09511.24091.20191.50091.08041.09431.16461.1155
Std.0.0252390.0522330.0441480.120480.0359460.410230.0170940.0381490.033670.029268
Best1.03691.05221.05091.04511.13241.37551.03461.02731.11441.0635
Worst1.15241.26641.20251.64251.2573.67011.11321.1771.23181.1796
CEC10Mean5.800420.32719.974419.122815.822821.117620.539720.999819.63324.2985
Std.7.91613.65034.98025.75268.96730.515253.70050.0007334.31116.833
Best111.08362.15511.004418.764120.99615.85471
Worst21.36982121.391821.141721.20921.435321.41052121.058721.3619
Number of best hits | 2 | 0 | 0 | 0 | 1 | 2 | 2 | 0 | 0 | 3
Hit rate (%) | 20 | 0 | 0 | 0 | 10 | 20 | 20 | 0 | 0 | 30

References

1. Chong, E.K.; Zak, S.H. An Introduction to Optimization; John Wiley & Sons: Hoboken, NJ, USA, 2013; Volume 75.
2. Yang, X.S. Metaheuristic optimization: Algorithm analysis and open problems. In Proceedings of the Experimental Algorithms: 10th International Symposium (SEA 2011), Kolimpari, Chania, Greece, 5–7 May 2011; pp. 21–32.
3. Vasiljević, D.; Vasiljević, D. Classical algorithms in the optimization of optical systems. In Classical and Evolutionary Algorithms in the Optimization of Optical Systems; Springer: Boston, MA, USA, 2002; pp. 11–39.
4. Dhiman, G.; Kaur, A. A hybrid algorithm based on particle swarm and spotted hyena optimizer for global optimization. In Proceedings of the Soft Computing for Problem Solving (SocProS 2017), Bhubaneswar, India, 23–24 December 2017; pp. 599–615.
5. Dahmani, S.; Yebdri, D. Hybrid algorithm of particle swarm optimization and grey wolf optimizer for reservoir operation management. Water Resour. Manag. 2020, 34, 4545–4560.
6. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683.
7. Singh, S.; Chauhan, P.; Singh, N. Capacity optimization of grid connected solar/fuel cell energy system using hybrid ABC-PSO algorithm. Int. J. Hydrogen Energy 2020, 45, 10070–10088.
8. Wong, L.A.; Shareef, H.; Mohamed, A.; Ibrahim, A.A. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation. Sci. World J. 2014, 2014, 752096.
9. Wong, L.A.; Shareef, H.; Mohamed, A.; Ibrahim, A.A. Novel quantum-inspired firefly algorithm for optimal power quality monitor placement. Front. Energy 2014, 8, 254–260.
10. Long, W.; Cai, S.; Jiao, J.; Xu, M.; Wu, T. A new hybrid algorithm based on grey wolf optimizer and cuckoo search for parameter extraction of solar photovoltaic models. Energy Convers. Manag. 2020, 203, 112243.
11. Ting, T.; Yang, X.S.; Cheng, S.; Huang, K. Hybrid metaheuristic algorithms: Past, present, and future. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Springer: Cham, Switzerland, 2015; pp. 71–83.
12. Farnad, B.; Jafarian, A.; Baleanu, D. A new hybrid algorithm for continuous optimization problem. Appl. Math. Model. 2018, 55, 652–673.
13. Askari, Q.; Saeed, M.; Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 2020, 161, 113702.
14. Ibrahim, A.A.; Mohamed, A.; Shareef, H. Optimal placement of power quality monitors in distribution systems using the topological monitor reach area. In Proceedings of the 2011 IEEE International Electric Machines & Drives Conference (IEMDC), Niagara Falls, ON, Canada, 15–18 May 2011; pp. 394–399.
15. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341.
16. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
17. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
18. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
19. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. 2015, 36, 315–333.
20. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
21. Al-Betar, M.A.; Alyasseri, Z.A.A.; Awadallah, M.A.; Abu Doush, I. Coronavirus herd immunity optimizer (CHIO). Neural Comput. Appl. 2021, 33, 5011–5042.
22. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN’95), Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
23. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535.
24. Dhiman, G.; Garg, M.; Nagar, A.; Kumar, V.; Dehghani, M. A novel algorithm for global optimization: Rat swarm optimizer. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 8457–8482.
25. Khare, A.; Kakandikar, G.M.; Kulkarni, O.K. An insight review on jellyfish optimization algorithm and its application in engineering. Rev. Comput. Eng. Stud. 2022, 9, 31–40.
26. Manita, G.; Zermani, A. A modified jellyfish search optimizer with orthogonal learning strategy. Procedia Comput. Sci. 2021, 192, 697–708.
27. Nguyen, T.T.; Li, Z.; Zhang, S.; Truong, T.K. A hybrid algorithm based on particle swarm and chemical reaction optimization. Expert Syst. Appl. 2014, 41, 2134–2143.
28. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040.
29. Cui, G.; Qin, L.; Liu, S.; Wang, Y.; Zhang, X.; Cao, X. Modified PSO algorithm for solving planar graph coloring problem. Prog. Nat. Sci. 2008, 18, 353–357.
30. Ibrahim, A.A.; Mohamed, A.; Shareef, H.; Ghoshal, S.P. Optimal power quality monitor placement in power systems based on particle swarm optimization and artificial immune system. In Proceedings of the 2011 3rd Conference on Data Mining and Optimization (DMO), Putrajaya, Malaysia, 28–29 June 2011; pp. 141–145.
31. Gupta, S.; Devi, S. Modified PSO algorithm with high exploration and exploitation ability. Int. J. Softw. Eng. Res. Pract. 2011, 1, 15–19.
32. Yan, C.M.; Lu, G.Y.; Liu, Y.T.; Deng, X.Y. A modified PSO algorithm with exponential decay weight. In Proceedings of the 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Guilin, China, 29–31 July 2017; pp. 239–242.
33. Al-Bahrani, L.T.; Patra, J.C. A novel orthogonal PSO algorithm based on orthogonal diagonalization. Swarm Evol. Comput. 2018, 40, 1–23.
34. Yu, X.; Cao, J.; Shan, H.; Zhu, L.; Guo, J. An adaptive hybrid algorithm based on particle swarm optimization and differential evolution for global optimization. Sci. World J. 2014, 2014, 215472.
35. Chen, S.; Liu, Y.; Wei, L.; Guan, B. PS-FW: A hybrid algorithm based on particle swarm and fireworks for global optimization. Comput. Intell. Neurosci. 2018, 2018, 6094685.
36. Khan, T.A.; Ling, S.H. A novel hybrid gravitational search particle swarm optimization algorithm. Eng. Appl. Artif. Intell. 2021, 102, 104263.
37. Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.K.; Ryan, M.J.; El-Fergany, A. An improved artificial jellyfish search optimizer for parameter identification of photovoltaic models. Energies 2021, 14, 1867.
38. Juhaniya, A.I.S.; Ibrahim, A.A.; Mohd Zainuri, M.A.A.; Zulkifley, M.A.; Remli, M.A. Optimal stator and rotor slots design of induction motors for electric vehicles using opposition-based jellyfish search optimization. Machines 2022, 10, 1217.
39. Rajpurohit, J.; Sharma, T.K. Chaotic active swarm motion in jellyfish search optimizer. Int. J. Syst. Assur. Eng. Manag. 2022, 1–17.
40. Ginidi, A.; Elsayed, A.; Shaheen, A.; Elattar, E.; El-Sehiemy, R. An innovative hybrid heap-based and jellyfish search algorithm for combined heat and power economic dispatch in electrical grids. Mathematics 2021, 9, 2053.
41. Chou, J.S.; Truong, D.N.; Kuo, C.C. Imaging time-series with features to enable visual recognition of regional energy consumption by bio-inspired optimization of deep learning. Energy 2021, 224, 120100.
42. Chen, K.; Zhou, F.; Yin, L.; Wang, S.; Wang, Y.; Wan, F. A hybrid particle swarm optimizer with sine cosine acceleration coefficients. Inf. Sci. 2018, 422, 218–241.
43. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132.
44. Rahman, C.M.; Rashid, T.A. Dragonfly algorithm and its applications in applied science survey. Comput. Intell. Neurosci. 2019, 2019, 9293617.
45. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
46. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–7 July 1999; pp. 1470–1477.
47. Chou, J.S.; Ngo, N.T. Modified firefly algorithm for multidimensional optimization in structural design problems. Struct. Multidiscip. Optim. 2017, 55, 2013–2028.
48. Gomes, W.J.; Beck, A.T.; Lopez, R.H.; Miguel, L.F. A probabilistic metric for comparing metaheuristic optimization algorithms. Struct. Saf. 2018, 70, 59–70.
49. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
Figure 1. Flowchart of HJSPSO algorithm.
Figure 2. Convergence curves of the selected test functions: (a) Easom (F7); (b) Colville (F9); (c) Kowalik (F33); (d) Fletcher Powell 10 (F50); (e) Lennard–Jones energy cluster (CEC03); (f) Ackley function (CEC10).
Figure 3. Box plots for performance comparison in solving TSP.
Figure 4. Path variations of the best solution in TSP by each optimization technique: (a) the shortest path by JSO: 3→17→10→12→9→8→16→2→18→14→20→7→1→11→5→15→6→13→4→19→3; (b) the shortest path by PSO: 13→1→11→5→15→6→19→3→7→20→14→18→2→16→8→9→12→10→17→4→13; (c) the shortest path by GWO: 13→6→15→5→11→1→19→7→20→14→18→2→16→8→9→12→10→17→3→4→13; (d) the shortest path by LSA: 18→9→10→12→17→8→2→16→7→19→3→4→13→6→1→15→5→20→11→14→18; (e) the shortest path by HBJSA: 14→20→1→15→5→11→6→13→4→3→17→10→8→9→12→19→7→16→2→18→14; (f) the shortest path by RSO: 5→11→1→6→13→4→19→3→17→8→9→10→12→16→7→20→18→2→14→15→5; (g) the shortest path by ACO: 7→19→1→11→5→15→6→13→4→3→17→10→12→9→8→16→2→18→14→20→7; (h) the shortest path by BBO: 10→17→3→4→13→19→7→1→6→11→5→15→20→14→18→2→16→12→8→9→10; (i) the shortest path by CHIO: 8→10→17→3→4→13→6→15→5→11→1→19→7→2→18→14→20→16→12→9→8; (j) the shortest path by HJSPSO: 3→17→10→12→9→8→16→2→18→14→20→7→19→1→11→5→15→6→13→4→3.
Table 1. Description of 50 classical benchmark test functions.
| No. | Function's Name | Type | Optimal Value | Dimension | Range |
|---|---|---|---|---|---|
| 1 | Stepint | US | 0 | 5 | [−5.12, 5.12] |
| 2 | Step | US | 0 | 30 | [−100, 100] |
| 3 | Sphere | US | 0 | 30 | [−100, 100] |
| 4 | SumSquares | US | 0 | 30 | [−10, 10] |
| 5 | Quartic | US | 0 | 30 | [−1.28, 1.28] |
| 6 | Beale | UN | 0 | 2 | [−4.5, 4.5] |
| 7 | Easom | UN | −1 | 2 | [−100, 100] |
| 8 | Matyas | UN | 0 | 2 | [−10, 10] |
| 9 | Colville | UN | 0 | 4 | [−10, 10] |
| 10 | Trid 6 | UN | −50 | 6 | [−D², D²] |
| 11 | Trid 10 | UN | −210 | 10 | [−D², D²] |
| 12 | Zakharov | UN | 0 | 10 | [−5, 10] |
| 13 | Powell | UN | 0 | 24 | [−4, 5] |
| 14 | Schwefel 2.22 | UN | 0 | 30 | [−10, 10] |
| 15 | Schwefel 1.2 | UN | 0 | 30 | [−100, 100] |
| 16 | Rosenbrock | UN | 0 | 30 | [−30, 30] |
| 17 | Dixon-Price | UN | 0 | 30 | [−10, 10] |
| 18 | Foxholes | MS | 0.998 | 2 | [−65.536, 65.536] |
| 19 | Branin | MS | 0.398 | 2 | [−5, 10], [0, 15] |
| 20 | Bohachevsky 1 | MS | 0 | 2 | [−100, 100] |
| 21 | Booth | MS | 0 | 2 | [−10, 10] |
| 22 | Rastrigin | MS | 0 | 30 | [−5.12, 5.12] |
| 23 | Schwefel | MS | −12,569.5 | 30 | [−500, 500] |
| 24 | Michalewicz 2 | MS | −1.8013 | 2 | [0, π] |
| 25 | Michalewicz 5 | MS | −4.6877 | 5 | [0, π] |
| 26 | Michalewicz 10 | MS | −9.6602 | 10 | [0, π] |
| 27 | Schaffer | MS | 0 | 2 | [−100, 100] |
| 28 | Six Hump Camel Back | MS | −1.03163 | 2 | [−5, 5] |
| 29 | Bohachevsky 2 | MS | 0 | 2 | [−100, 100] |
| 30 | Bohachevsky 3 | MS | 0 | 2 | [−100, 100] |
| 31 | Shubert | MS | −186.73 | 2 | [−10, 10] |
| 32 | Goldstein-Price | MS | 3 | 2 | [−2, 2] |
| 33 | Kowalik | MS | 0.00031 | 4 | [−5, 5] |
| 34 | Shekel 5 | MS | −10.15 | 4 | [0, 10] |
| 35 | Shekel 7 | MS | −10.4 | 4 | [0, 10] |
| 36 | Shekel 10 | MS | −10.53 | 4 | [0, 10] |
| 37 | Perm | MS | 0 | 4 | [−D, D] |
| 38 | Powersum | MS | 0 | 4 | [0, 1] |
| 39 | Hartman 3 | MS | −3.86 | 3 | [0, D] |
| 40 | Hartman 6 | MS | −3.32 | 6 | [0, 1] |
| 41 | Griewank | MS | 0 | 30 | [−600, 600] |
| 42 | Ackley | MS | 0 | 30 | [−32, 32] |
| 43 | Penalized | MS | 0 | 30 | [−50, 50] |
| 44 | Penalized 2 | MS | 0 | 30 | [−50, 50] |
| 45 | Langermann 2 | MS | −1.08 | 2 | [0, 10] |
| 46 | Langermann 5 | MS | −1.5 | 5 | [0, 10] |
| 47 | Langermann 10 | MS | NA | 10 | [0, 10] |
| 48 | Fletcher Powell 2 | MS | 0 | 2 | [−π, π] |
| 49 | Fletcher Powell 5 | MS | 0 | 5 | [−π, π] |
| 50 | Fletcher Powell 10 | MS | 0 | 10 | [−π, π] |
US: unimodal and separable function; UN: unimodal and nonseparable function; MS: multimodal and separable function; and MN: multimodal and nonseparable function.
Table 2. Description of CEC-C06 2019 benchmark test functions.
| No. | Function | Function's Name | Optimal Value | Dimension | Range |
|---|---|---|---|---|---|
| 1 | CEC01 | Storn's Chebyshev polynomial fitting problem | 1 | 9 | [−5.12, 5.12] |
| 2 | CEC02 | Inverse Hilbert matrix problem | 1 | 16 | [−100, 100] |
| 3 | CEC03 | Lennard–Jones minimum energy cluster | 1 | 18 | [−100, 100] |
| 4 | CEC04 | Rastrigin's function | 1 | 10 | [−10, 10] |
| 5 | CEC05 | Griewank's function | 1 | 10 | [−1.28, 1.28] |
| 6 | CEC06 | Weierstrass function | 1 | 10 | [−4.5, 4.5] |
| 7 | CEC07 | Modified Schwefel's function | 1 | 10 | [−100, 100] |
| 8 | CEC08 | Expanded Schaffer's F6 function | 1 | 10 | [−10, 10] |
| 9 | CEC09 | Happy CAT function | 1 | 10 | [−10, 10] |
| 10 | CEC10 | Ackley function | 1 | 10 | [−D², D²] |
Table 3. Parameter settings of the optimization techniques.
| Technique | Parameter Settings |
|---|---|
| HJSPSO | N = 50; T = 3000; c_min = 0.5; c_max = 2.5; w_min = 0.4; w_max = 0.9; β = 0.1; γ = 0.1; c0 = 0.5 |
| PSO [22] | N = 50; T = 3000; c1 = 0.5; c2 = 2.5; w_min = 0.4; w_max = 0.9 |
| JSO [23] | N = 50; T = 3000; γ = 0.1; c0 = 0.5 |
| GWO [45] | N = 50; T = 3000; control parameter a linearly decreases from 2 to 0 |
| LSA [19] | N = 50; T = 3000; channel time is set to 10 |
| HBJSA [40] | N = 50; T = 3000; adaptive coefficient φ increases gradually until reaching 0.5 |
| RSO [24] | N = 50; T = 3000; ranges of R and C are set to [1, 5] and [0, 2], respectively |
| ACO [46] | N = 50; T = 3000; pheromone evaporation rate ρ = 0.5; pheromone exponential weight α = 1; heuristic exponential weight β = 2 |
| BBO [16] | N = 50; T = 3000; habitat modification probability = 1; immigration probability bound per iteration = [0, 1]; step size for numerical integration of probability = 1; mutation probability M_max = 0.005; maximal immigration rate I = 1; maximal emigration rate E = 1; elitism parameter = 2 |
| CHIO [21] | N = 50; T = 3000; number of initial infected cases = 1; basic reproduction rate BPr and maximum infected cases age MaxAge are positive integers |
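To connect these settings to the search behaviour, the sketch below shows one plausible wiring of the HJSPSO schedules in Table 3: an inertia weight decaying nonlinearly from w_max to w_min, time-varying cognitive and social coefficients bounded by c_min and c_max, and the JSO time control c(t) that alternates between the PSO-style operator (which replaces the ocean-current movement) and JSO swarm motion around the threshold c0 = 0.5. The cosine decay and linear coefficient ramps are illustrative assumptions, not the paper's exact expressions.

```python
import math
import random

# Parameter settings taken from the HJSPSO row of Table 3.
N, T = 50, 3000
C_MIN, C_MAX = 0.5, 2.5          # bounds of the cognitive/social coefficients
W_MIN, W_MAX = 0.4, 0.9          # bounds of the inertia weight
BETA, GAMMA, C0 = 0.1, 0.1, 0.5  # JSO coefficients and time-control threshold

def inertia_weight(t: int) -> float:
    """Nonlinear decay from W_MAX to W_MIN (cosine shape assumed)."""
    return W_MIN + (W_MAX - W_MIN) * (1 + math.cos(math.pi * t / T)) / 2

def cognitive(t: int) -> float:
    """c1 shrinks over time: strong personal-best pull early (exploration)."""
    return C_MAX - (C_MAX - C_MIN) * t / T

def social(t: int) -> float:
    """c2 grows over time: strong global-best pull late (exploitation)."""
    return C_MIN + (C_MAX - C_MIN) * t / T

def time_control(t: int) -> float:
    """JSO time control c(t) = |(1 - t/T)(2r - 1)| with r ~ U(0, 1)."""
    return abs((1 - t / T) * (2 * random.random() - 1))

for t in (1, T // 2, T):
    op = "PSO operator" if time_control(t) >= C0 else "JSO swarm motion"
    print(f"t={t:4d}  w={inertia_weight(t):.3f}  "
          f"c1={cognitive(t):.3f}  c2={social(t):.3f}  -> {op}")
```

Because 1 − t/T shrinks toward zero, the PSO-style move dominates early iterations while JSO swarm motion takes over late in the run, which is the intended exploration-to-exploitation handover.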
Table 4. Performance comparison of classical test functions.
| Function | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO | HJSPSO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | 1 | 8 | 1 | 10 | 1 | 9 | 1 | 1 | 7 | 1 |
| F3 | 9 | 6 | 5 | 8 | 1 | 1 | 7 | 9 | 10 | 1 |
| F4 | 8 | 6 | 4 | 7 | 1 | 1 | 5 | 9 | 10 | 1 |
| F5 | 5 | 7 | 3 | 9 | 1 | 2 | 8 | 6 | 10 | 4 |
| F6 | 1 | 1 | 8 | 1 | 1 | 10 | 1 | 7 | 9 | 1 |
| F7 | 1 | 1 | 8 | 1 | 1 | 9 | 1 | 1 | 10 | 1 |
| F8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 9 | 10 | 1 |
| F9 | 2 | 4 | 8 | 3 | 5 | 10 | 6 | 7 | 9 | 1 |
| F10 | 2 | 3 | 8 | 4 | 7 | 10 | 5 | 6 | 9 | 1 |
| F11 | 3 | 1 | 9 | 4 | 8 | 10 | 1 | 6 | 7 | 5 |
| F12 | 7 | 4 | 3 | 5 | 1 | 1 | 8 | 9 | 10 | 6 |
| F13 | 5 | 8 | 3 | 6 | 1 | 1 | 7 | 9 | 10 | 4 |
| F14 | 6 | 8 | 4 | 7 | 1 | 1 | 5 | 9 | 10 | 3 |
| F15 | 8 | 5 | 4 | 7 | 1 | 1 | 6 | 9 | 10 | 1 |
| F16 | 1 | 8 | 6 | 2 | 3 | 7 | 5 | 9 | 10 | 4 |
| F17 | 1 | 3 | 6 | 4 | 8 | 7 | 2 | 9 | 10 | 5 |
| F18 | 1 | 6 | 10 | 4 | 1 | 7 | 8 | 9 | 5 | 1 |
| F19 | 1 | 1 | 9 | 1 | 1 | 10 | 7 | 6 | 8 | 1 |
| F20 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 |
| F21 | 1 | 1 | 8 | 1 | 1 | 10 | 1 | 7 | 9 | 1 |
| F22 | 6 | 9 | 1 | 10 | 1 | 1 | 7 | 8 | 5 | 1 |
| F23 | 7 | 8 | 9 | 5 | 2 | 10 | 4 | 3 | 1 | 6 |
| F24 | 1 | 1 | 9 | 1 | 1 | 10 | 1 | 7 | 8 | 1 |
| F25 | 3 | 9 | 8 | 6 | 1 | 10 | 7 | 5 | 2 | 4 |
| F26 | 3 | 8 | 9 | 7 | 1 | 10 | 5 | 6 | 2 | 4 |
| F27 | 1 | 1 | 1 | 9 | 1 | 1 | 10 | 1 | 8 | 1 |
| F28 | 2 | 2 | 8 | 2 | 2 | 10 | 2 | 1 | 9 | 2 |
| F29 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 |
| F30 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 9 | 10 | 1 |
| F31 | 1 | 7 | 10 | 4 | 2 | 9 | 5 | 5 | 8 | 3 |
| F32 | 5 | 5 | 9 | 1 | 4 | 8 | 1 | 7 | 10 | 1 |
| F33 | 2 | 6 | 10 | 4 | 3 | 7 | 9 | 5 | 8 | 1 |
| F34 | 1 | 9 | 5 | 6 | 1 | 10 | 8 | 7 | 4 | 1 |
| F35 | 2 | 9 | 5 | 6 | 1 | 10 | 7 | 8 | 4 | 2 |
| F36 | 2 | 8 | 4 | 6 | 2 | 10 | 9 | 7 | 5 | 1 |
| F37 | 2 | 5 | 9 | 3 | 4 | 10 | 8 | 7 | 6 | 1 |
| F38 | 2 | 4 | 9 | 3 | 7 | 10 | 5 | 6 | 8 | 1 |
| F39 | 2 | 2 | 9 | 2 | 2 | 10 | 2 | 1 | 8 | 2 |
| F40 | 2 | 7 | 8 | 9 | 3 | 10 | 6 | 5 | 4 | 1 |
| F41 | 1 | 8 | 1 | 7 | 1 | 1 | 6 | 9 | 10 | 1 |
| F42 | 4 | 9 | 6 | 10 | 1 | 2 | 5 | 7 | 8 | 3 |
| F43 | 2 | 6 | 7 | 9 | 3 | 10 | 8 | 4 | 5 | 1 |
| F44 | 5 | 4 | 9 | 6 | 2 | 10 | 1 | 3 | 8 | 7 |
| F45 | 1 | 1 | 7 | 1 | 1 | 10 | 1 | 9 | 8 | 1 |
| F46 | 1 | 8 | 7 | 6 | 1 | 10 | 5 | 9 | 4 | 1 |
| F47 | 1 | 6 | 7 | 8 | 3 | 10 | 2 | 9 | 4 | 5 |
| F48 | 1 | 1 | 8 | 1 | 6 | 10 | 5 | 7 | 9 | 1 |
| F49 | 1 | 9 | 4 | 7 | 6 | 10 | 5 | 8 | 3 | 2 |
| F50 | 2 | 9 | 6 | 5 | 8 | 10 | 4 | 7 | 3 | 1 |
| No. best hits | 23 | 14 | 9 | 13 | 29 | 15 | 13 | 8 | 2 | 32 |
| Hit rate (%) | 46 | 28 | 18 | 26 | 58 | 30 | 26 | 16 | 4 | 64 |
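The two summary rows follow mechanically from the rank matrix: an algorithm scores a best hit on a function when it holds rank 1 (ties included), and the hit rate is the hit count divided by the 50 functions. A minimal recomputation, shown here on the first three rows of the table only:

```python
# Recompute "No. best hits" and "Hit rate (%)" from the rank rows of Table 4.
ALGOS = ["JSO", "PSO", "GWO", "LSA", "HBJSA", "RSO", "ACO", "BBO", "CHIO", "HJSPSO"]

ranks = [
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],   # F1
    [1, 8, 1, 10, 1, 9, 1, 1, 7, 1],  # F2
    [9, 6, 5, 8, 1, 1, 7, 9, 10, 1],  # F3
]

hits = [sum(row[i] == 1 for row in ranks) for i in range(len(ALGOS))]
for name, h in zip(ALGOS, hits):
    print(f"{name:7s} best hits = {h}, hit rate = {100 * h / len(ranks):.0f}%")
```

Applied to all 50 rows, this procedure yields the 32 hits (64%) reported for HJSPSO.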
Table 5. Performance comparison of CEC-C06 2019 test functions.
| Function | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO | HJSPSO |
|---|---|---|---|---|---|---|---|---|---|---|
| CEC01 | 5 | 7 | 4 | 6 | 2 | 1 | 8 | 9 | 10 | 3 |
| CEC02 | 4 | 6 | 5 | 8 | 2 | 1 | 9 | 7 | 10 | 3 |
| CEC03 | 4 | 2 | 5 | 3 | 8 | 10 | 9 | 7 | 6 | 1 |
| CEC04 | 1 | 8 | 6 | 9 | 7 | 10 | 2 | 4 | 5 | 3 |
| CEC05 | 2 | 7 | 9 | 8 | 4 | 10 | 3 | 6 | 5 | 1 |
| CEC06 | 2 | 6 | 7 | 9 | 1 | 10 | 4 | 5 | 8 | 3 |
| CEC07 | 2 | 9 | 5 | 8 | 6 | 10 | 1 | 7 | 4 | 3 |
| CEC08 | 3 | 8 | 4 | 7 | 9 | 10 | 1 | 6 | 5 | 2 |
| CEC09 | 1 | 6 | 4 | 9 | 8 | 10 | 2 | 3 | 7 | 5 |
| CEC10 | 2 | 7 | 6 | 4 | 3 | 10 | 8 | 9 | 5 | 1 |
| No. best hits | 2 | 0 | 0 | 0 | 1 | 2 | 2 | 0 | 0 | 3 |
| Hit rate (%) | 20 | 0 | 0 | 0 | 10 | 20 | 20 | 0 | 0 | 30 |
Table 6. Time complexity analysis of the selected test functions.
Time taken per iteration (ms):

| Technique | F7 | F9 | F33 | F50 | CEC03 | CEC10 |
|---|---|---|---|---|---|---|
| JSO | 0.228 | 0.255 | 0.216 | 0.475 | 0.957 | 0.883 |
| PSO | 1.329 | 2.218 | 1.531 | 1.519 | 2.980 | 3.049 |
| GWO | 0.359 | 0.344 | 0.362 | 0.595 | 0.756 | 0.646 |
| LSA | 1.367 | 3.490 | 3.792 | 6.153 | 16.40 | 7.193 |
| HBJSA | 0.344 | 0.316 | 0.438 | 0.482 | 1.076 | 1.043 |
| RSO | 0.147 | 0.123 | 0.126 | 0.233 | 0.507 | 0.524 |
| ACO | 1.162 | 2.674 | 2.676 | 3.249 | 10.47 | 9.009 |
| BBO | 0.719 | 1.437 | 1.548 | 2.239 | 5.988 | 4.252 |
| CHIO | 1.064 | 1.049 | 1.127 | 1.109 | 2.666 | 2.522 |
| HJSPSO | 0.268 | 0.255 | 0.233 | 0.467 | 0.870 | 1.001 |

Convergence point (iteration):

| Technique | F7 | F9 | F33 | F50 | CEC03 | CEC10 |
|---|---|---|---|---|---|---|
| JSO | 183 | 2989 | 2978 | 1750 | 2998 | 2115 |
| PSO | 114 | 3000 | 2969 | 2034 | 324 | 714 |
| GWO | 2997 | 2999 | 2999 | 3000 | 3000 | 1999 |
| LSA | 37 | 2998 | 1059 | 2253 | 975 | 22 |
| HBJSA | 435 | 2198 | 2231 | 2238 | 403 | 2163 |
| RSO | 1933 | 2321 | 2359 | 2597 | 2913 | 1369 |
| ACO | 38 | 3000 | 3000 | 2315 | 2447 | 169 |
| BBO | 724 | 2999 | 3000 | 2605 | 2960 | 2939 |
| CHIO | 2800 | 2322 | 2903 | 2922 | 2869 | 2997 |
| HJSPSO | 229 | 2971 | 1616 | 1755 | 2994 | 2367 |
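Both indicators in Table 6 can be obtained with a small profiling harness: the time per iteration from a high-resolution wall-clock timer, and the convergence point taken as the last iteration at which the best fitness still improved. The `optimizer.step()` interface below is a hypothetical stand-in for any of the compared techniques, not an API defined in the paper.

```python
import time

def profile(optimizer, T: int = 3000) -> tuple[float, int]:
    """Return (average ms per iteration, convergence iteration) for one run.

    `optimizer` is assumed to expose step() -> best fitness so far;
    this interface is illustrative only.
    """
    best, conv_iter = float("inf"), T
    start = time.perf_counter()
    for t in range(1, T + 1):
        fitness = optimizer.step()
        if fitness < best:        # still improving at iteration t
            best, conv_iter = fitness, t
    elapsed_ms = 1000.0 * (time.perf_counter() - start)
    return elapsed_ms / T, conv_iter
```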
Table 7. Statistical results of the Wilcoxon signed-rank test.
| Function | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO |
|---|---|---|---|---|---|---|---|---|---|
| CEC01 | 9 × 10^−6 | 2 × 10^−6 | 0.943 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 |
| CEC02 | 3 × 10^−6 | 2 × 10^−6 | 7 × 10^−6 | 2 × 10^−6 | 4 × 10^−5 | 6 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 |
| CEC03 | 2 × 10^−6 | 0.053 | 0.002 | 0.059 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 5 × 10^−5 | 2 × 10^−6 |
| CEC04 | 8 × 10^−5 | 6 × 10^−5 | 0.079 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 0.015 | 0.393 | 0.658 |
| CEC05 | 0.704 | 2 × 10^−6 | 2 × 10^−6 | 3 × 10^−6 | 0.001 | 2 × 10^−6 | 0.043 | 4 × 10^−6 | 8 × 10^−5 |
| CEC06 | 0.544 | 0.045 | 5 × 10^−4 | 4 × 10^−6 | 0.171 | 2 × 10^−6 | 0.688 | 0.382 | 2 × 10^−6 |
| CEC07 | 0.003 | 2 × 10^−6 | 0.002 | 2 × 10^−6 | 3 × 10^−6 | 2 × 10^−6 | 1 × 10^−5 | 5 × 10^−5 | 0.910 |
| CEC08 | 0.910 | 2 × 10^−6 | 3 × 10^−5 | 5 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 0.471 | 2 × 10^−6 | 2 × 10^−6 |
| CEC09 | 2 × 10^−4 | 0.229 | 0.052 | 5 × 10^−6 | 3 × 10^−6 | 2 × 10^−6 | 0.072 | 0.03 | 2 × 10^−5 |
| CEC10 | 0.295 | 7 × 10^−6 | 2 × 10^−5 | 9 × 10^−6 | 2 × 10^−4 | 2 × 10^−6 | 3 × 10^−6 | 5 × 10^−6 | 6 × 10^−6 |
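Each cell of Table 7 is the p-value of a pairwise Wilcoxon signed-rank test between the per-run results of HJSPSO and one competitor on the same function; values below 0.05 reject the hypothesis that the two result samples are equivalent. A minimal sketch with SciPy, using synthetic stand-ins for the recorded runs (30 samples assumed):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Stand-ins for per-run fitness values on one CEC function; the actual study
# would plug in the recorded results of HJSPSO and the compared technique.
hjspso_runs = rng.normal(loc=1.03, scale=0.02, size=30)
rival_runs = rng.normal(loc=1.12, scale=0.07, size=30)

stat, p = wilcoxon(hjspso_runs, rival_runs)
verdict = "significant" if p < 0.05 else "not significant"
print(f"W = {stat:.1f}, p = {p:.2e} ({verdict} at the 0.05 level)")
```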
Table 8. Statistical results of the Friedman test.
| Function | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO | HJSPSO |
|---|---|---|---|---|---|---|---|---|---|---|
| CEC01 | 5.60 | 7.00 | 3.63 | 6.13 | 2.03 | 1.12 | 7.47 | 8.47 | 10 | 3.55 |
| CEC02 | 5.20 | 6.03 | 5.57 | 6.93 | 2.17 | 1.17 | 7.93 | 7.20 | 10 | 2.80 |
| CEC03 | 5.83 | 2.43 | 4.90 | 2.42 | 8.43 | 8.93 | 8.40 | 4.87 | 6.40 | 2.35 |
| CEC04 | 2.28 | 6.60 | 5.30 | 7.95 | 7.00 | 9.97 | 2.48 | 4.58 | 4.53 | 4.30 |
| CEC05 | 2.62 | 6.17 | 8.53 | 6.43 | 4.67 | 10 | 3.45 | 5.60 | 5.20 | 2.33 |
| CEC06 | 3.38 | 4.73 | 6.20 | 7.77 | 3.30 | 10 | 3.38 | 4.77 | 7.93 | 3.53 |
| CEC07 | 2.33 | 7.50 | 5.33 | 7.60 | 6.77 | 9.73 | 1.57 | 6.77 | 3.83 | 3.57 |
| CEC08 | 2.57 | 6.43 | 5.13 | 6.37 | 8.10 | 9.93 | 1.80 | 6.23 | 6.10 | 2.33 |
| CEC09 | 2.67 | 5.07 | 3.23 | 7.77 | 7.97 | 9.40 | 3.70 | 3.40 | 6.77 | 4.50 |
| CEC10 | 3.08 | 3.52 | 8.30 | 4.32 | 5.73 | 8.17 | 9.17 | 4.67 | 5.63 | 2.42 |
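Table 8 reports mean Friedman ranks: within every run the ten techniques are ranked from 1 (best) to 10 (worst), and the ranks are averaged over the runs, so lower is better. SciPy's friedmanchisquare provides the accompanying test statistic. A sketch on three synthetic result columns:

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)

# Per-run fitness of three techniques over the same 30 runs (stand-in data).
a = rng.normal(1.05, 0.02, 30)
b = rng.normal(1.10, 0.05, 30)
c = rng.normal(1.30, 0.10, 30)

stat, p = friedmanchisquare(a, b, c)

# Mean Friedman rank per technique (rank 1 = best within a run), as in Table 8.
results = np.column_stack([a, b, c])
ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
print("mean ranks:", ranks.mean(axis=0), f" chi2 = {stat:.2f}, p = {p:.2e}")
```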
Table 9. Coordinates of cities in a sample TSP [19].
| City | x | y | City | x | y | City | x | y | City | x | y |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4.38 | 7.51 | 6 | 4.89 | 9.59 | 11 | 2.76 | 8.41 | 16 | 4.98 | 3.49 |
| 2 | 3.81 | 2.55 | 7 | 4.45 | 5.47 | 12 | 6.79 | 2.54 | 17 | 9.59 | 1.96 |
| 3 | 7.65 | 5.05 | 8 | 6.46 | 1.38 | 13 | 6.55 | 8.14 | 18 | 3.4 | 2.51 |
| 4 | 7.9 | 6.99 | 9 | 7.09 | 1.49 | 14 | 1.62 | 2.43 | 19 | 5.85 | 6.16 |
| 5 | 1.86 | 8.9 | 10 | 7.54 | 2.57 | 15 | 1.19 | 9.29 | 20 | 2.23 | 4.73 |
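Given these coordinates, a candidate TSP solution is scored by the Euclidean length of the closed tour through all 20 cities. The helper below evaluates the best HJSPSO tour from Figure 4(j); with the coordinates as transcribed in Table 9, it returns approximately 36.12, matching the best fitness reported for HJSPSO in Table 10.

```python
import math

# City coordinates from Table 9, keyed by city number.
CITIES = {
    1: (4.38, 7.51), 2: (3.81, 2.55), 3: (7.65, 5.05), 4: (7.9, 6.99),
    5: (1.86, 8.9), 6: (4.89, 9.59), 7: (4.45, 5.47), 8: (6.46, 1.38),
    9: (7.09, 1.49), 10: (7.54, 2.57), 11: (2.76, 8.41), 12: (6.79, 2.54),
    13: (6.55, 8.14), 14: (1.62, 2.43), 15: (1.19, 9.29), 16: (4.98, 3.49),
    17: (9.59, 1.96), 18: (3.4, 2.51), 19: (5.85, 6.16), 20: (2.23, 4.73),
}

def tour_length(tour: list[int]) -> float:
    """Euclidean length of the closed tour (the last city links back to the first)."""
    return sum(
        math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

# Best HJSPSO tour from Figure 4(j).
best_tour = [3, 17, 10, 12, 9, 8, 16, 2, 18, 14, 20, 7, 19, 1, 11, 5, 15, 6, 13, 4]
print(f"{tour_length(best_tour):.2f}")  # ≈ 36.12, the best fitness in Table 10
```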
Table 10. Optimization performance in solving TSP.
| Indicator | JSO | PSO | GWO | LSA | HBJSA | RSO | ACO | BBO | CHIO | HJSPSO |
|---|---|---|---|---|---|---|---|---|---|---|
| Mean | 39.67 | 48.84 | 41.83 | 48.94 | 44.08 | 50.42 | 39.51 | 45.71 | 42.61 | 37.87 |
| Std. | 2.98 | 5.08 | 4.00 | 3.35 | 1.80 | 5.51 | 2.58 | 4.16 | 1.53 | 1.87 |
| Best | 36.97 | 41.22 | 36.12 | 43.32 | 39.90 | 41.91 | 36.12 | 37.42 | 38.60 | 36.12 |
| Worst | 48.47 | 61.80 | 53.77 | 55.33 | 47.70 | 61.98 | 45.93 | 55.26 | 46.05 | 45.36 |