Article

A Modified Particle Swarm Optimization Algorithm for Optimizing Artificial Neural Network in Classification Tasks

1 Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
2 Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
3 Department of Computer Science, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
4 Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
5 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Authors to whom correspondence should be addressed.
Processes 2022, 10(12), 2579; https://doi.org/10.3390/pr10122579
Submission received: 25 October 2022 / Revised: 28 November 2022 / Accepted: 1 December 2022 / Published: 3 December 2022

Abstract:
Artificial neural networks (ANNs) have achieved great success in performing machine learning tasks, including classification, regression, prediction, image processing, image recognition, etc., due to their outstanding ability to train on, learn from, and organize data. Conventionally, a gradient-based algorithm known as backpropagation (BP) is used to train the parameter values of an ANN. However, this method has the inherent drawbacks of slow convergence speed, sensitivity to initial solutions, and a high tendency to become trapped in local optima. This paper proposes a modified particle swarm optimization (PSO) variant with two-level learning phases to train ANNs for image classification. A multi-swarm approach and a social learning scheme are designed into the primary learning phase to enhance the population diversity and the solution quality, respectively. Two modified search operators with different search characteristics are incorporated into the secondary learning phase to improve the algorithm’s robustness in handling various optimization problems. Finally, the proposed algorithm is formulated as a training algorithm of ANN to optimize its neuron weights, biases, and selection of activation function based on the given classification dataset. The ANN model trained by the proposed algorithm is reported to outperform those trained by existing PSO variants in terms of classification accuracy when solving the majority of selected datasets, suggesting its potential applications in challenging real-world problems, such as intelligent condition monitoring of complex industrial systems.

1. Introduction

An artificial neural network (ANN) is inspired by the human nervous system, in which many artificial neurons act as interconnected processing elements that emulate the structure of the cerebral cortex of the brain. In contrast to other conventional machine learning methods, ANNs demonstrate outstanding capabilities in generalizing, organizing, and learning the nonlinear data that are commonly observed in real-world scenarios [1]. Due to these appealing features, ANNs have been widely implemented to solve various real-world applications, including intelligent condition monitoring [2], speech recognition [3], fault diagnosis [4], pothole classification [5], etc. Generally, an ANN structure can be separated into three parts: the input layer, the hidden layer, and the output layer. Information flows in a single direction within an ANN model, i.e., from the input layer to the hidden layer, followed by the output layer [6]. Each neuron of a layer is incorporated with an activation function to convert the summed weighted input received by the neuron into a nonlinear output. This nonlinear characteristic serves as the cornerstone for ANNs to have competitive performance in tackling various challenging machine learning tasks, such as the learning of complex data, function approximation, and prediction [7].
Before deploying an ANN model to solve machine learning tasks, a training process is performed to determine the optimal combinations of weight and bias values for all neurons that can achieve minimum error. A gradient-based algorithm known as the backpropagation (BP) method is conventionally used to train ANN models [7]. At the beginning stage of ANN training, the initial weight and bias values of neurons are randomly generated, and the actual output values of the network are determined. The error signals between the desired and actual outputs are then calculated and backpropagated to the network to adjust the weight and bias values of each neuron. Despite its popularity, some drawbacks were reported when using the classical BP method to train ANN models, especially when solving complex nonlinear problems. For instance, the BP method has a high tendency to be trapped in local optima and fail to reach global optimum when dealing with solution regions with complex characteristics. The performance of the ANN model when solving complex problems can be significantly degraded due to suboptimal weight and bias values assigned to all neurons [7]. The convergence characteristic of the classical BP method in ANN training is also sensitive to the initial values of weight, bias, and network parameters (e.g., activation function). There might be scenarios where the classical BP method produces poor initial weight and bias values for ANN during the training process, leading to compromised network performances [7]. These limitations of classical BP methods have motivated researchers to seek robust alternatives that can address ANN training problems more efficiently and effectively.
In recent years, metaheuristic search algorithms (MSAs) have emerged as promising solutions to tackle complex optimization problems, such as those reported in [8,9,10,11,12,13]. The excellent global search ability and stochastic characteristic of these MSAs can also be harnessed for training ANN models to solve classification or regression problems. Particle swarm optimization (PSO) is one of the most popular MSAs, and it is motivated by the collective behavior of bird flocks in locating food sources [14]. Each PSO particle can memorize its previous best searching experiences, and this unique feature distinguishes PSO from other MSAs [15]. The search trajectory of each PSO particle is adjusted based on its previous best experience and the best experience achieved by the entire population during the optimization process. Since the inception of the original PSO in 1995, many new variants have been proposed to solve various real-world optimization problems. Despite its appealing features, such as simplicity in implementation and fast convergence speed, the capability of original PSO to solve complex optimization problems such as ANN training remain questionable due to its high tendency to suffer from premature convergence issues when solving complex problems with high-dimensional search space. Therefore, more robust search mechanisms must be incorporated into the original PSO to balance the algorithm’s exploration and exploitation searches.

1.1. Research Motivations

A PSO variant known as particle swarm optimization without velocity (PSOWV) was proposed in [16] by discarding the velocity component of each particle during the search process. PSOWV has demonstrated more competitive performance than the conventional PSO by solving simple benchmark problems with better accuracy in fewer iteration numbers. Despite its promising performance, PSOWV still suffers from some inherent drawbacks that might restrict its feasibility to solve more challenging optimization problems, such as the training of ANN models for classification or regression tasks.
Similar to most MSAs, PSOWV employs a conventional initialization scheme to randomly generate the initial population of particles at the start of the search process. This conventional initialization scheme produces the initial position of each particle using a uniform distribution without considering the characteristics or fitness landscapes of the given optimization problems [17]. When particles are mistakenly initialized in local optima, premature convergence and poor solution accuracy may follow. Conversely, the convergence speed of the algorithm can deteriorate significantly if particles are initialized in solution regions far away from the global optimum. It is therefore desirable to have a robust initialization scheme that generates the initial position of each particle more systematically so that the initial population has better solution quality.
When carefully inspecting the search operator of PSOWV, the new position of each particle is directly affected by the directional information of historical best positions, i.e., the personal best position of the particle itself and the global best position of the population. While these historical best positions are beneficial to accelerate the convergence speed of PSOWV at the early stage of optimization, they are less frequently updated at the latter stages [18]. When both personal and global best positions are overemphasized during the search process of PSOWV, the entrapment of these historical best positions into local optima at the early stage of the search process could misguide the remaining particles converging towards the inferior solution regions and lead to premature convergence. To prevent these undesirable scenarios, it is necessary for PSOWV to be incorporated with a more robust diversity preservation scheme when dealing with complex problems. The directional information offered by other non-historical best positions should be leveraged during the search process to prevent the potential negative impacts brought by personal and global best positions.
Finally, it is observed that PSOWV has limited ability to achieve proper trade-off between exploration and exploitation searches because only one search operator is used to perform the search process. Hence, PSOWV can only perform well in certain types of optimization problems (i.e., unimodal problem), and it exhibits poor optimization performance in other problem categories. When solving optimization problems with different complexity levels as governed by the characteristics of fitness landscapes, it is crucial for an algorithm to intelligently regulate its exploration and exploitation strength to locate the global optima [19]. The incorporation of multiple search operators with different levels of exploration and exploitation strengths can be envisioned as an alternative to enhance the robustness of the algorithm to solve different complex optimization problems competitively.

1.2. Research Contributions

To address the shortcomings of PSOWV in solving complex optimization problems such as the ANN model training, an enhanced variant known as multi-swarm-based particle swarm optimization with two-level learning phases (MSPSOTLP) is proposed. Apart from optimizing the weight and bias values of ANN models during the training process, the proposed MSPSOTLP can also determine the optimal activation function of an ANN model to solve the given classification problem. The main modifications introduced into MSPSOTLP to highlight its research contributions are summarized as follows:
  • A modified initialization scheme is incorporated into MSPSOTLP to generate an initial population with better robustness and diversity by leveraging the benefits of the chaotic system (CS) and oppositional-based learning (OBL).
  • In the primary learning phase of MSPSOTLP, both the multi-swarm concept and social learning concept are incorporated to promote rapid convergence of the population towards the optimal regions by enabling particles to learn from other superior population members while preserving the diversity level of population.
  • In the secondary learning phase of MSPSOTLP, two modified search operators with different characteristics are designed for each particle to perform searching with different levels of exploration and exploitation strengths, hence enabling the proposed algorithm to solve different types of optimization problems more competitively.
  • The performance of MSPSOTLP in solving global optimization problems is investigated using CEC 2014 benchmark functions. The classification performances of ANN models trained using MSPSOTLP are also evaluated with 16 datasets selected from UCI Machine Learning Repository. The proposed MSPSOTLP is proven more competitive than its peer algorithms at solving the benchmark functions and ANN training.
The remaining sections of this article are organized as follows. Related works of this study are explained in Section 2. The detailed search mechanisms of MSPSOTLP are described in Section 3. Extensive simulation studies performed to investigate the performance of MSPSOTLP in solving global optimization problems and its capability to train ANN models in solving classification problems are presented in Section 4. Finally, Section 5 concludes the research findings and future works.

2. Related Works

2.1. Particle Swarm Optimization (PSO)

PSO is inspired by the collective behavior of bird flocking or fish schooling to search for food sources [14]. Suppose that N is the population size and D is the dimensional size of an optimization problem. Each PSO particle is a potential solution to the optimization problem, represented by a velocity vector V_n = [V_{n,1}, …, V_{n,d}, …, V_{n,D}] and a position vector X_n = [X_{n,1}, …, X_{n,d}, …, X_{n,D}], where n = 1, …, N and d = 1, …, D refer to the population index and dimension index, respectively. Unlike other MSAs, each PSO particle can memorize its previous best experience and the best experience achieved by the population, denoted as the personal best position X_n^{Pbest} = [X_{n,1}^{Pbest}, …, X_{n,d}^{Pbest}, …, X_{n,D}^{Pbest}] and the global best position G^{best} = [G_1^{best}, …, G_d^{best}, …, G_D^{best}], respectively. During the t-th iteration of the search process, the new velocity V_{n,d}^{t+1} of each n-th particle in any d-th dimension is adjusted based on the corresponding dimensional components of the personal best position X_{n,d}^{Pbest,t} (i.e., self-cognitive component) and global best position G_d^{best,t} (i.e., social component) as follows:
V_{n,d}^{t+1} = ω V_{n,d}^t + c_1 r_1 (X_{n,d}^{Pbest,t} - X_{n,d}^t) + c_2 r_2 (G_d^{best,t} - X_{n,d}^t)    (1)
where ω is an inertia weight;   c 1 and c 2 are acceleration coefficients;   r 1 , r 2 [ 0 , 1 ] are two random numbers generated from a uniform distribution. Referring to V n , d t + 1 , the new position of each n-th particle in every d-th dimension is updated as:
X_{n,d}^{t+1} = V_{n,d}^{t+1} + X_{n,d}^t    (2)
The fitness of every n-th particle with updated position is evaluated as f ( X n t + 1 ) and compared with those of the personal best position and global best position denoted as f ( X n P b e s t , t ) and   f ( G b e s t , t ) , respectively. Both of X n P b e s t , t and G b e s t , t will be updated as X n t + 1 if the latter solution is superior. The search process of PSO using Equations (1) and (2) is iterated until the predefined termination criteria are satisfied.
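For clarity, the velocity and position updates of Equations (1) and (2) can be sketched in a few lines of NumPy. This sketch is illustrative only; the parameter values shown are common defaults rather than settings taken from this study.

import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO iteration following Equations (1) and (2).
    X, V, pbest: (N, D) arrays; gbest: (D,) array."""
    N, D = X.shape
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Equation (1)
    X_new = X + V_new                                              # Equation (2)
    return X_new, V_new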

2.2. Particle Swarm Optimization without Velocity (PSOWV)

PSOWV is a velocity-discarded version of PSO proposed in [16], aiming to enhance the ability of PSO to locate the global optimum of a given problem with a lesser iteration number. During the search process, the d-th dimension of position for every n-th PSOWV particle (i.e., X n , d t + 1 ) can be updated based on the random linear combination between its personal best position (i.e., X n , d P b e s t , t ) and the global best position (i.e., G d b e s t , t ) of the population in the same dimensional component as follows:
X_{n,d}^{t+1} = c_1 r_1 X_{n,d}^{Pbest,t} + c_2 r_2 G_d^{best,t}    (3)
The fitness evaluation and procedure to update historically best positions (i.e., X n P b e s t , t and G b e s t , t ) of PSOWV are similar to those of the original PSO. Although PSOWV can solve the simple benchmark functions with a faster convergence speed than the original PSO, its feasibility to solve more challenging real-world optimization problems, such as training ANN models, remains unexplored. Furthermore, preliminary studies also revealed the high tendency of PSOWV to suffer premature convergence issues because its search behavior is governed by a single search operator that highly relies on the directional information provided by historically best positions.
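Under the same assumptions as the PSO sketch above, the PSOWV operator of Equation (3) reduces to a single random linear combination of the two historical best positions; the coefficient values are again illustrative.

import numpy as np

def psowv_step(pbest, gbest, c1=2.0, c2=2.0):
    """PSOWV position update (Equation (3)): no velocity term is used."""
    N, D = pbest.shape
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
    return c1 * r1 * pbest + c2 * r2 * gbest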

2.3. PSO Variants

Despite appealing characteristics, such as rapid convergence speed and simplistic implementation, the original PSO tends to suffer from rapid loss of population diversity and premature convergence issues when solving more complex optimization problems. Proper balancing of exploration and exploitation searches is considered a fundamental cornerstone for MSAs such as PSO to solve different optimization problems competitively. Therefore, various modification schemes were proposed by researchers over the years to address the demerits of the original PSO.
Parameter adaptation is a popular enhancement strategy for PSO that determines a proper combination of the control parameters governing its search trajectories. The control parameters to be adjusted include the inertia weight, constriction factor, acceleration coefficients, and combinations of these parameters. Some notable PSO variants proposed with parameter adaptation approaches are reported in [20,21,22,23]. Neighborhood structure modification is another promising technique for PSO because it governs the rate at which information is broadcast between population members. Particularly, a fully connected population topology tends to be more exploitative, whereas a partially connected one has stronger exploration strength. The PSO variants reported in [19,24,25,26,27,28] can achieve proper tradeoffs between exploration and exploitation searches by varying their population topology with time or based on the current search environment. Apart from modifying the neighborhood structure, it is also feasible to introduce single or multiple modified learning strategies into PSO to achieve performance enhancements, such as those proposed in [29,30,31,32,33,34,35,36]. To alleviate the potential negative impacts brought by the historical best solutions (e.g., personal and global best positions), novel exemplars can be constructed by these modified learning strategies from the non-fittest solutions to guide the search process of particles more effectively while preserving the population diversity. Finally, the robustness of PSO in solving high-complexity real-world problems can be enhanced using hybridization approaches, such as those reported in [37,38,39,40]. Different hybridization frameworks [41] can be designed to leverage the strengths of search operators incorporated in other MSAs to compensate for the drawbacks of PSO in solving certain problem classes. The strengths and limitations of selected PSO variants are analyzed and summarized in Table 1.

2.4. Application of MSAs in Training ANN Models

Given the drawbacks of the conventional BP method, numerous MSAs were designed as promising alternatives to train ANN models with more robust network performances [42]. A two-layer PSO (PSO-PSO) algorithm was proposed in [43] to train a multilayer perceptron (MLP) neural network by interleaving two PSO algorithms. The first layer of PSO was used to optimize the number of perceptrons in the network architecture, whereas the second layer of PSO optimized the weight and bias values based on the network architecture obtained by the first layer. Nevertheless, simulation studies revealed that the ANN models optimized by PSO-PSO did not significantly outperform their peers when solving the classification benchmark datasets. A hybridized PSO and gravitational search algorithm (PSOGSA) was proposed [44] to optimize the weights and biases of the ANN model by leveraging the unique searching behaviors of PSO and GSA. When training the ANN model, PSO was incorporated to alleviate the inherent drawbacks of GSA, i.e., low convergence rate and high tendency to be trapped in local optima. A hybrid improved opposition-based PSO with a backpropagation algorithm (IOPSO-BPA) [45] was introduced to optimize the weight values of ANNs. Both oppositional-based learning and mutation schemes were incorporated as diversity preservation schemes of IOPSO-BPA.
Furthermore, the concept of time-varying parameters was introduced to improve the convergence characteristic of the algorithm in optimizing the weights of ANN models. Kandasamy and Rajendran [46] proposed a hybrid algorithm to overcome the drawbacks of the conventional BP method, namely overfitting and entrapment in local optima, when training ANN models. PSO was first employed to search for the optimal combination of trainable weights of the ANN model by minimizing the classification error function. Subsequently, the steepest descent method was applied to fine-tune the near-optimal weight values encoded in the global best position to further improve classification accuracy. In [47], a self-adaptive and strategy-based PSO (SPS-PSO) was designed to solve the large-scale feature selection and ANN training problems for large-scale datasets. Five search operators with different characteristics were introduced in SPS-PSO, and an adaptive mechanism was used to assign a suitable operator to each particle to ensure that it performs searching with balanced exploration and exploitation strengths. A comprehensive adaptive PSO (ACPSO) was designed in [48] to tackle the denoising issue of ultrasound images by optimizing a functional-link neural network (FLNN). The velocity of ACPSO depends only on the global best position, and its control parameters are adaptively adjusted based on the personal and global best positions. To mitigate flooding issues, an ANN model was trained using PSO in [49] to formulate river flow modelling based on weather and meteorological data. It was revealed that the river flow strongly correlates with selected variables, such as temperature, evaporation, and rainfall. In [50], a hybrid intelligent model was developed using PSO and ANN (PSO-ANN) to predict soil matric suction, a useful metric of soil shear strength for addressing sudden landslide issues, with improved prediction accuracy.
The genetic algorithm (GA) [51] is another popular MSA used to train ANN models. A hybrid model of ANN and GA (ANN-GA) was proposed in [52] to optimize the weights and biases of the ANN model used for modeling the slump of ready-mix concrete. The initial weights and biases of ANN-GA were determined using GA and fine-tuned with the BP algorithm. ANN-GA was reported to outperform its peers by leveraging the benefits of GA and BP to promote its global and local search abilities, respectively. In [53], GABPNN was proposed by integrating GA into a BP-trained ANN to optimize the thickness of blow-molded polypropylene bellows. In GABPNN, the BP algorithm was first applied to train the weights of the ANN model using lesser learning samples, and these weights were further evolved in the feasible solution regions using GA. Contrary to ANN-GA, GA promoted the local search behavior of GABPNN by adopting an elitist strategy and a simulated annealing algorithm. GABPNN demonstrated its effectiveness and competitive performance in solving blow molding problems. A hybrid MLP ANN and GA approach (MLPANN-GA) was introduced in [54] to predict sludge bulking for water treatment plants. GA was incorporated to train the weights, activation functions, and thresholds of the MLPANN model. Simulation studies reported that the incorporation of GA to train MLPANN can increase the accuracy of the ANN model in estimating the sludge volume index.
In addition to GA, teaching-learning-based optimization (TLBO) [55] is another popular MSA widely used for ANN optimization. A TLBO-based ANN (TLBOANN) was proposed in [56] to estimate the energy consumption in Turkey. TLBO was applied to search for the optimal weights and biases of the ANN model by minimizing the error function based on the input data provided, i.e., gross domestic product, population, and import and export data in Turkey. An improved hybrid TLBO and ANN (iTLBO-ANN) was proposed in [57] to solve real-world building energy consumption forecasting problems. Three modifications, known as feedback stage, accuracy factor, and worst solution elimination, were introduced to improve the performance of TLBO in optimizing the weights and thresholds of the ANN model within a shorter time. The iTLBO-ANN outperformed other ANN models trained by GA and PSO by predicting building energy consumption with better accuracy and computational speed. Another TLBO-optimized ANN [58] was proposed to improve the performance in predicting the axial capacity of pile foundations. TLBO was used to train the weights of the ANN model by minimizing the mean square error produced when predicting the ultimate capacity of both driven and drilled shaft piles embedded in uncemented soils. The ANN model trained by TLBO outperformed that trained by BP by producing better variance accounted for and determination coefficient. A new TLBO variant known as TLBO-MLPs was used to train ANN model for data classification [59]. Additional mechanisms were introduced in both the teacher and learner phases of TLBO-MLPs to achieve realistic emulation of classroom teaching and learning that can lead to performance gain in solving complex optimization problems [60].

3. Proposed Methodology

3.1. Formulation of ANN Training as an Optimization Problem

An ANN model to be optimized in this study consists of a three-layer structure with P input neurons, Q hidden neurons, and R output neurons in input, hidden, and output layers, respectively, as illustrated in Figure 1. The neurons in each layer are considered a set of processing elements that are connected by weights with other layers. Suppose that I p refers to the value of p -th neuron at the input layer, H q represents the value of   q -th neuron at the hidden layer, and O r is the r-th neuron at the output layer, where p = 1 , , P ,   q = 1 , , Q , and r = 1 , , R . Denote W p , q H as the connection weight between I p and H q ; W q , r O as the connection weight between H q and O r ; B q H and B r O as the biases of H q and O r , respectively. The values of each q-th hidden neuron H q and r-th output neuron O r can be produced by computing the sum of input weight with the presence of biases, followed by the non-linearization process of this weighted summation with an activation function of Φ ( · ) expressed as follows:
H_q = Φ( ∑_{p=1}^{P} W_{p,q}^H I_p + B_q^H )    (4)
O_r = Φ( ∑_{q=1}^{Q} W_{q,r}^O H_q + B_r^O )    (5)
Contrary to the majority of existing ANN training algorithms that only focus on searching for optimal weights and biases, this study also considers the optimal selection of the activation function used to solve a given classification task. The decision variables to be optimized by the proposed MSPSOTLP when training the ANN model include: (a) weights W_{p,q}^H, W_{q,r}^O ∈ [-1, 1], (b) biases B_q^H, B_r^O ∈ [-1, 1], and (c) an index K ∈ {1, 2, 3, 4, 5} that refers to the candidate activation function, where the Binary Step, Sigmoid [7], Hyperbolic Tangent (Tanh) [7], Inverse Tangent (ATan), and Rectified Linear Unit (ReLU) [61] functions are assigned indices of 1, 2, 3, 4, and 5, respectively. The mathematical formulation of the candidate activation functions considered in this study is presented in Table 2.
The decision variables to be optimized by the proposed MSPSOTLP in ANN training are encoded into the position vector X of each particle as follows:
X = [ W_{1,1}^H, …, W_{p,q}^H, …, W_{P,Q}^H, W_{1,1}^O, …, W_{q,r}^O, …, W_{Q,R}^O, B_1^H, …, B_q^H, …, B_Q^H, B_1^O, …, B_r^O, …, B_R^O, K ]    (6)
Referring to Equation (6), the ANN training considered in the current study can be formulated as an optimization problem with a dimensional size of D, where
D = PQ + QR + Q + R + 1    (7)
In this study, the fitness of each MSPSOTLP particle when it is used to train the ANN model can be evaluated by measuring the mean square error between the predicted and expected output values. Given a dataset with a total of G data samples, the predicted output of the g-th data sample produced by an ANN model trained by MSPSOTLP and its corresponding expected output stored in the dataset are indicated as O_g^{pred} and O_g^{exp}, respectively, where g = 1, …, G. The mean square error ε(X) produced by an ANN model constructed using the decision variables stored in the position vector X of the particle is calculated as the fitness value f(X) as follows:
f(X) = ε(X) = (1/G) ∑_{g=1}^{G} [ O_g^{pred}(X) - O_g^{exp} ]^2    (8)
Based on Equation (8), the ANN training is considered a minimization problem because it is more desirable to produce an ANN model with a minor error, implying its high classification accuracy in solving a given dataset.
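To make the encoding of Equations (4) to (8) concrete, the sketch below decodes a particle's position vector into an ANN and evaluates its mean square error on a dataset. The function and variable names are hypothetical, the weight and bias bounds of [-1, 1] are assumed to be enforced elsewhere, and the per-element MSE is used as a simplification of Equation (8).

import numpy as np

ACTIVATIONS = {                                   # candidate activation functions (Table 2)
    1: lambda x: (x >= 0).astype(float),          # Binary Step
    2: lambda x: 1.0 / (1.0 + np.exp(-x)),        # Sigmoid
    3: np.tanh,                                   # Tanh
    4: np.arctan,                                 # ATan
    5: lambda x: np.maximum(0.0, x),              # ReLU
}

def decode_and_evaluate(X, inputs, targets, P, Q, R):
    """Decode a particle (Equation (6)) into an ANN and return its MSE fitness (Equation (8)).
    inputs: (G, P) array of samples; targets: (G, R) array of expected outputs."""
    idx = 0
    WH = X[idx:idx + P * Q].reshape(P, Q); idx += P * Q   # input-to-hidden weights
    WO = X[idx:idx + Q * R].reshape(Q, R); idx += Q * R   # hidden-to-output weights
    BH = X[idx:idx + Q]; idx += Q                         # hidden-layer biases
    BO = X[idx:idx + R]; idx += R                         # output-layer biases
    K = int(np.clip(round(X[idx]), 1, 5))                 # activation function index
    phi = ACTIVATIONS[K]
    H = phi(inputs @ WH + BH)                             # Equation (4)
    O = phi(H @ WO + BO)                                  # Equation (5)
    return np.mean((O - targets) ** 2)                    # Equation (8)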

3.2. Proposed MSPSOTLP Algorithm

In this study, a new PSO variant known as MSPSOTLP is proposed to solve challenging optimization problems, including the ANN training problem formulated in Equation (8), with improved performances. For the latter problem, the proposed MSPSOTLP is used to optimize the weights, biases, and selection of activation functions of the ANN model when solving the given datasets. The essential modifications introduced to enhance the search performance of MSPSOTLP are described in the following subsections.

3.2.1. Modified Initialization Scheme of MSPSOTLP

Population initialization is considered a crucial process to develop robust MSAs because the quality of initial candidate solutions can influence the algorithm’s convergence rate and searching accuracy [17]. Most PSO variants employed random initialization to generate the initial population without considering any meaningful information about the search environment [17]. The stochastic behavior of the random initialization scheme might produce particles at inferior solution regions at the beginning stage of optimization. This undesirable scenario can prevent the algorithm’s convergence towards the global optimum, thus compromising the algorithm’s overall performance.
In this study, a modified initialization scheme incorporated with the chaotic system (CS) and oppositional-based learning (OBL), namely the CSOBL initialization scheme, is designed for the proposed MSPSOTLP to overcome the drawbacks of the conventional initialization scheme. Unlike a random system that demonstrates completely unpredictable behaviors, CS is considered a more powerful initialization scheme that can produce an initial swarm with better diversity by leveraging its ergodicity and non-repetition natures. Denote ϑ 0 as the initial condition of a chaotic variable that is randomly generated in each independent run. ϑ z refers to the value of the chaotic variable in z-th iteration with z = 1 , , Z , where Z represents the maximum sequence number. Given the bifurcation coefficient of μ = π , the chaotic sequence is updated using a chaotic sine map [62] as:
ϑ_{z+1} = sin(μ ϑ_z),  where z = 1, …, Z    (9)
Let X_d^U and X_d^L be the upper and lower limits of the decision variable in each d-th dimension, respectively, where d = 1, …, D. Given the chaotic variable ϑ_Z produced in the final iteration Z, the d-th dimension of each n-th chaotic swarm member X_{n,d}^{CS} can be initialized as:
X_{n,d}^{CS} = X_d^L + ϑ_Z (X_d^U - X_d^L)    (10)
Referring to Equation (10), a chaotic population with a swarm size of N can be produced and represented as a population set of P C S = [ X 1 C S , , X n C S , , X N C S ] .
Despite the benefits of the chaotic map in enhancing population diversity, these chaotic swarm members can be initialized in solution regions far away from the global optimum, leading to the low convergence rate of the algorithm. To overcome this drawback, a solution set opposite with P C S is generated by leveraging the OBL concept [63]. For every d-th dimension of n-th chaotic swarm member represented as X n , d C S , the corresponding opposite swarm member X n , d O L is calculated using OBL strategy [17,64] as follows:
X_{n,d}^{OL} = X_d^L + X_d^U - X_{n,d}^{CS}    (11)
Similarly, an opposite population with a swarm size of N can be generated using Equation (11) and represented as another population set of P O L = [ X 1 O L , , X n O L , , X N O L ] .
To produce an initial population with better fitness and wider coverage in the solution space, both of P C S and P O L are merged as a combined population set of P M R G with a swarm size of 2N as follows:
P^{MRG} = P^{CS} ∪ P^{OL}    (12)
Subsequently, the fitness of all solution members in P M R G are evaluated, and a sorting operator of Ψ ( · ) is then applied to rearrange these solution members from the best to worst based on their fitness to produce a sorted population set of P S o r t , where
P^{Sort} = Ψ( P^{MRG} )    (13)
Finally, a truncation operator Γ(·) is applied to select the top N solution members of P^{Sort} to construct the initial population of MSPSOTLP, i.e., P = [X_1, …, X_n, …, X_N], where
P = Γ( P^{Sort} )    (14)
The pseudocode used to describe the CSOBL initialization scheme of MSPSOTLP is presented in Algorithm 1.
Algorithm 1. Pseudocode of CSOBL initialization scheme for MSPSOTLP
Input:  N , D , X U , X L , Z
01:  Initialize P C S   and P O L ;
02: for each n-th particle do
03:   for each d-th dimension do
04:   Randomly generate initial chaotic variable ϑ 0 [ 0 , 1 ] ;
05:   Initialize the chaotic sequence as z = 1 ;
06:   while z is smaller than Z do
07:    Update chaotic variable ϑ_{z+1} using Equation (9) and set z ← z + 1;
08:    end while
09:   Compute X n , d C S using Equation (10);
10:   Compute X n , d O L using Equation (11);
11:    end for
12:    P C S P C S X n C S ;/* Store new chaotic swarm member */
13:    P O L P O L X n O L ;/* Store new opposite swarm member */
14:    end for
15:   Construct P M R G using Equation (12);
16:   Evaluate the fitness of all solution members in P M R G ;
17:   Sort all solution members of P M R G from the best to worst using Equation (13);
18:   Produce the initial population P using Equation (14);
Output:  P = [ X 1 , , X n , , X N ]
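A compact NumPy realization of Algorithm 1 might look as follows, assuming a minimization problem; the maximum chaotic sequence number Z, the scalar bounds, and the fully vectorized form are implementation choices rather than prescriptions from the paper.

import numpy as np

def csobl_init(fitness, N, D, x_low, x_up, Z=300):
    """CSOBL initialization (Algorithm 1): chaotic swarm, opposite swarm, merge, sort, truncate."""
    mu = np.pi                                   # bifurcation coefficient
    theta = np.random.rand(N, D)                 # initial chaotic variables in [0, 1]
    for _ in range(Z):
        theta = np.sin(mu * theta)               # chaotic sine map, Equation (9)
    P_cs = x_low + theta * (x_up - x_low)        # chaotic population, Equation (10)
    P_ol = x_low + x_up - P_cs                   # opposite population, Equation (11)
    P_mrg = np.vstack([P_cs, P_ol])              # merged 2N population, Equation (12)
    f = np.array([fitness(x) for x in P_mrg])    # evaluate all 2N members
    order = np.argsort(f)                        # sort from best (smallest) to worst, Equation (13)
    return P_mrg[order[:N]], f[order[:N]]        # keep the top N members, Equation (14)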

3.2.2. Primary Learning Phase of MSPSOTLP

Most PSO variants, including PSOWV, rely on the global best position to adjust the search trajectories of particles during the optimization process without considering the useful information of other non-fittest particles in the population. Although the directional information of the global best position might be useful to solve simple unimodal problems, it might not necessarily be the best option to handle complex problems with multiple numbers of local optima due to the possibility of the global best position being trapped at the local optima in an earlier stage of optimization. Without a proper diversity preservation scheme, other population members tend to be attracted by misleading information about the global best position and converge towards the inferior region, leading to premature convergence and poor optimization results.
To address the aforementioned issues, several modifications are incorporated into the primary learning phase of MSPSOTLP to achieve a proper balance of exploration and exploitation searches. The multiswarm concept is first introduced as a diversity preservation scheme at the beginning of the primary learning phase by randomly dividing the main population P = [X_1, …, X_n, …, X_N] into S subswarms. Each s-th subswarm, denoted by P_s^{sub} = [X_1^{sub}, …, X_n^{sub}, …, X_{N^{sub}}^{sub}], consists of N^{sub} = N/S particles, where s = 1, …, S. To produce each s-th subswarm P_s^{sub} from the main population P, a reference point X_s^{ref} = [X_{s,1}^{ref}, …, X_{s,d}^{ref}, …, X_{s,D}^{ref}] is randomly generated in the search space. The normalized Euclidean distance between the reference point X_s^{ref} and the personal best position of each n-th particle, i.e., X_n^{Pbest} = [X_{n,1}^{Pbest}, …, X_{n,d}^{Pbest}, …, X_{n,D}^{Pbest}], is measured as Ξ(s, n), i.e.,
Ξ(s, n) = √( ∑_{d=1}^{D} [ (X_{s,d}^{ref} - X_{n,d}^{Pbest}) / (X_d^U - X_d^L) ]^2 )    (15)
Referring to the Ξ(s, n) values computed for all N particles, the N^{sub} particles with the nearest distances from X_s^{ref} are identified as the members of the s-th subswarm and stored in P_s^{sub} before they are discarded from the main population P. As shown in Algorithm 2, the reference-point-based population division scheme used to generate the multiswarm for the primary learning phase of MSPSOTLP is repeated until all S subswarms are generated.
Algorithm 2. Pseudocode of reference-point-based population division scheme used to generate the multiswarm for the primary learning phase of MSPSOTLP
Input:  P , N , S , D , N s u b , X U , X L
01:  Initialize s 1 ;
02: while main population P   is not empty do
03:  Randomly generate X_s^{ref} = [X_{s,1}^{ref}, …, X_{s,d}^{ref}, …, X_{s,D}^{ref}] in the search space;
04: for each n-th particle do
05:  Calculate Ξ ( s , n )   using Equation (15);
06: end for
07:  Select the N^{sub} particles with the nearest Ξ(s, n) from X_s^{ref} to construct P_s^{sub};
08:  Eliminate the members of P s s u b from P ;
09:  s s + 1 ; /* Update the index of subswarm*/
10: end while
Output:  P s s u b = [ X 1 s u b , , X n s u b , , X N s u b s u b ]   where s = 1 , , S
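A possible NumPy sketch of Algorithm 2 is given below. It returns index sets into the main population instead of particle copies, which is an implementation convenience rather than a requirement of the scheme.

import numpy as np

def divide_into_subswarms(pbest, S, x_low, x_up):
    """Reference-point-based population division (Algorithm 2, Equation (15)).
    pbest: (N, D) personal best positions; returns a list of S arrays of particle indices."""
    N, D = pbest.shape
    n_sub = N // S
    remaining = list(range(N))
    subswarms = []
    for _ in range(S):
        ref = x_low + np.random.rand(D) * (x_up - x_low)               # random reference point
        diff = (ref - pbest[remaining]) / (x_up - x_low)               # normalized differences
        dist = np.sqrt(np.sum(diff ** 2, axis=1))                      # Equation (15)
        members = [remaining[i] for i in np.argsort(dist)[:n_sub]]     # N_sub nearest particles
        subswarms.append(np.array(members))
        remaining = [i for i in remaining if i not in members]         # remove them from P
    return subswarms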
Define f(X_n^{Pbest,s}) as the personal best fitness of each n-th particle stored in the s-th subswarm P_s^{sub}, where n = 1, …, N^{sub} and s = 1, …, S. All N^{sub} particles stored in each s-th subswarm P_s^{sub} are then sorted from the worst to the best based on their personal best fitness values, as shown in Figure 2. Accordingly, any k-th particle stored in the sorted P_s^{sub} is considered to have better or equally good personal best fitness than that of the n-th particle if the condition n ≤ k ≤ N^{sub} is satisfied. Referring to Figure 2, the first particle stored in P_s^{sub} has the worst personal best fitness, whereas the final particle stored in P_s^{sub} has the most competitive personal best fitness after the sorting process. Therefore, the personal best position of the final particle is also considered as the subswarm best position of P_s^{sub}, represented as X_s^{Sbest} = X_{N^{sub}}^{Pbest,s} for s = 1, …, S.
For each s-th sorted subswarm, define Ω n s = { X k P b e s t , s | k [ n , N s u b ] } as a set variable used to store the personal best position of all solution members that are superior to that of n-th solution member for n = 1 , , N s u b 1 . Notably, the set variable Ω N s u b s is not constructed for the final solution member because none of the solution members stored in P s s u b has better personal best fitness than X N s u b P b e s t , s or X s S b e s t . Referring to the solution members stored in each Ω n s , a unique mean position denoted as X n m e a n , s is then specifically constructed to guide the search process of every n-th solution member in the s-th subswarm, where:
X_n^{mean,s} = [1/(N^{sub} - n + 1)] ∑_{k=n}^{N^{sub}} X_k^{Pbest,s}    (16)
Apart from X_n^{mean,s}, a social exemplar X_n^{SOC,s} = [X_{n,1}^{SOC,s}, …, X_{n,d}^{SOC,s}, …, X_{n,D}^{SOC,s}] that plays a crucial role in adjusting the search trajectory of each n-th particle stored in P_s^{sub} is also formulated. In contrast to the global best position, the social exemplar constructed for each n-th particle is unique, and it can guide the search process with better diversity by fully utilizing the promising directional information of other particles stored in Ω_n^s with better personal best fitness values. Specifically, each d-th dimension of the n-th social learning exemplar for the s-th subswarm, i.e., X_{n,d}^{SOC,s}, can be contributed by the same dimensional component of any randomly selected solution member of Ω_n^s. The procedure used to construct the social exemplar X_n^{SOC,s} for each n-th particle stored in P_s^{sub} is described in Algorithm 3, where α refers to a random integer generated between the indices n and N^{sub}.
Algorithm 3. Pseudocode used to generate the social exemplar for each non-fittest solution member in each subswarm
Input:  D , N s u b , n ,   Ω n s
01: for each d-th dimension do
02:  Randomly generate an integer α between indices of n and N s u b ;
03:  Extract the associated component of X α , d P b e s t , s from Ω n s ;
04:  X n , d S O C , s X α , d P b e s t , s ;
05: end for
Output:  X n S O C , s
Given the subswarm best position X s S b e s t , mean position X n m e a n , s and social exemplar X n S O C , s , the new position X n s of each n-th non-fittest solution member stored in the s-th subswarm, where n = 1 , , N s u b 1 and s = 1 , , S , is updated as follows:
X_n^s = X_n^s + c_1 r_1 (X_n^{SOC,s} - X_n^s) + c_2 r_2 (X_s^{Sbest} - X_n^s) + c_3 r_3 (X_n^{mean,s} - X_n^s)    (17)
where c_1, c_2, and c_3 represent the acceleration coefficients; r_1, r_2, r_3 ∈ [0, 1] are random numbers generated from uniform distributions. Referring to Equation (17), the directional information contributed by X_n^{SOC,s} and X_n^{mean,s} is unique for each n-th non-fittest solution member of P_s^{sub} because the better solution members stored in every set variable Ω_n^s are different for n = 1, …, N^{sub} - 1. The social learning concept incorporated in Equation (17) also ensures that only the useful information brought by better-performing solutions is used to guide the search process of each n-th particle, thereby accelerating the algorithm's convergence rate. Furthermore, this learning strategy does not consider the global best position in updating the new position of each n-th particle; therefore, it has better robustness against premature convergence issues.
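Assuming the members of a subswarm have already been sorted from worst to best personal best fitness, the update of an n-th non-fittest member (Equations (16) and (17), together with the exemplar construction of Algorithm 3) could be sketched as follows; the acceleration coefficient values are illustrative and 0-based indexing is used.

import numpy as np

def update_non_fittest(X, pbest, sbest, n, c1=1.0, c2=1.0, c3=1.0):
    """Primary learning update for the n-th member of a sorted subswarm.
    X, pbest: (N_sub, D) arrays sorted from worst to best personal best fitness;
    sbest: (D,) subswarm best position; n is not the last (best) index."""
    N_sub, D = pbest.shape
    x_mean = pbest[n:].mean(axis=0)                  # mean of no-worse members, Equation (16)
    donors = np.random.randint(n, N_sub, size=D)     # one donor member per dimension
    x_soc = pbest[donors, np.arange(D)]              # social exemplar, Algorithm 3
    r1, r2, r3 = np.random.rand(3)
    return (X[n] + c1 * r1 * (x_soc - X[n])          # Equation (17)
                 + c2 * r2 * (sbest - X[n])
                 + c3 * r3 * (x_mean - X[n]))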
On the other hand, different approaches are proposed to generate the mean position and social exemplar used for guiding the search process of the final particle stored in every s-th subswarm because none of the solution members of P_s^{sub} can have better personal best fitness than that of X_{N^{sub}}^{Pbest,s}. Define Ω_{N^{sub}}^s as a set variable used to store the subswarm best position of any b-th subswarm P_b^{sub} if f(X_b^{Sbest}) is better than f(X_s^{Sbest}), i.e.,
Ω_{N^{sub}}^s = { X_b^{Sbest} | f(X_b^{Sbest}) is better than f(X_s^{Sbest}), ∀ b ∈ B_s }    (18)
where B_s is a set containing the indices of all subswarms that have better subswarm best fitness than that of the s-th subswarm; |B_s| refers to the size of the set B_s and lies in the range of 0 to S - 1. Obviously, |B_s| = 0 implies that the subswarm best position of the s-th subswarm is the same as the global best position G^{best} of the population; therefore, the empty sets B_s = Ω_{N^{sub}}^s = ∅ are obtained under this circumstance.
Except for the subswarm containing G^{best}, the unique mean position X_{N^{sub}}^{mean,s} used to guide the search process of the final particle X_{N^{sub}}^s stored in each s-th subswarm is calculated based on Ω_{N^{sub}}^s as follows:
X_{N^{sub}}^{mean,s} = (1/|B_s|) ∑_{b ∈ B_s} X_b^{Sbest},  |B_s| ≠ 0    (19)
Similarly, a social exemplar X_{N^{sub}}^{SOC,s} = [X_{N^{sub},1}^{SOC,s}, …, X_{N^{sub},d}^{SOC,s}, …, X_{N^{sub},D}^{SOC,s}] is also derived to adjust the search trajectory of the final particle X_{N^{sub}}^s stored in each s-th subswarm, except for the one containing G^{best}. As shown in Algorithm 4, each d-th dimension of the social exemplar assigned to the final particle of the s-th subswarm P_s^{sub}, i.e., X_{N^{sub},d}^{SOC,s}, is contributed by the same dimensional component of a subswarm best position X_b^{Sbest} randomly selected from Ω_{N^{sub}}^s, where b refers to a subswarm index randomly selected from B_s.
Algorithm 4. Social Exemplar Scheme for the Best Particle in Each Subswarm
Input:  D , Ω N s u b s , B s , N s u b
01: for each d-th dimension do
02:  Randomly generated a subswarm index of b B s ;
03:  Extract the corresponding component of X b , d S b e s t from Ω N s u b s ;
04:  X N s u b , d S O C , s X b , d S b e s t ;
05: end for
Output:  X N s u b S O C , s = [ X N s u b , 1 S O C , s , , X N s u b , d S O C , s , , X N s u b , D S O C , s ]
Except for the subswarm consisting of G b e s t with B s = Ω N s u b s = , the position X N s u b s of final particle stored in each s-th subswarm can be updated as follows:
X_{N^{sub}}^s = X_{N^{sub}}^s + c_1 r_4 (X_{N^{sub}}^{SOC,s} - X_{N^{sub}}^s) + c_2 r_5 (G^{best} - X_{N^{sub}}^s) + c_3 r_6 (X_{N^{sub}}^{mean,s} - X_{N^{sub}}^s)    (20)
where r 4 , r 5 , r 6 [ 0 , 1 ] are random numbers generated from a uniform distribution. Similarly, the directional information provided by X N s u b S O C , s and X N s u b m e a n , s are unique for each final particle X N s u b s in each s-th subswarm P s s u b because the solution members stored in each Ω N s u b s set are different. Contrary to Equation (17), the social learning concept introduced in Equation (20) allows each subswarm to converge towards the promising solution regions without experiencing rapid loss of population diversity by facilitating the information exchanges between different subswarms. In addition, the employment of G b e s t in Equation (20) is expected to improve the convergence rate of MSPSOTLP.
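A corresponding sketch for the final (best) particle of each subswarm, combining Equations (18) to (20) and Algorithm 4, is shown below under the same assumptions; the subswarm that already holds G^{best} is left unchanged by this operator, as stated above.

import numpy as np

def update_subswarm_best(x_final, sbest_all, f_sbest, s, gbest, c1=1.0, c2=1.0, c3=1.0):
    """Primary learning update for the final particle of the s-th subswarm (minimization assumed).
    sbest_all: (S, D) subswarm best positions; f_sbest: (S,) their fitness values."""
    S, D = sbest_all.shape
    B_s = np.where(f_sbest < f_sbest[s])[0]              # indices of better subswarms, Equation (18)
    if B_s.size == 0:                                     # this subswarm holds Gbest
        return x_final                                    # no update in this step
    x_mean = sbest_all[B_s].mean(axis=0)                  # Equation (19)
    donors = B_s[np.random.randint(0, B_s.size, size=D)]  # one donor subswarm per dimension
    x_soc = sbest_all[donors, np.arange(D)]               # social exemplar, Algorithm 4
    r4, r5, r6 = np.random.rand(3)
    return (x_final + c1 * r4 * (x_soc - x_final)         # Equation (20)
                    + c2 * r5 * (gbest - x_final)
                    + c3 * r6 * (x_mean - x_final))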
The overall procedure of the primary learning phase proposed for MSPSOTLP is described in Algorithm 5. For each new position X_n^s obtained from Equation (17) or (20), boundary checking is first performed to ensure that the upper and lower limits of all decision variables are not violated. The fitness value corresponding to the updated X_n^s of each particle in the s-th subswarm is then evaluated as f(X_n^s) and compared with those of its personal best position and the global best position, denoted as f(X_n^{Pbest,s}) and f(G^{best}), respectively. Both X_n^{Pbest,s} and G^{best} are replaced by the updated X_n^s if it is superior to them.
Algorithm 5. Pseudocode of primary learning phase for MSPSOTLP
Input:  P , N , S , D , N s u b , X U , X L , G b e s t
01:  Divide main population into multiple subswarm using Algorithm 2;
02:  Sort the solution members of P s s u b from the worst to best based on their personal best fitness values and create Ω n s for each s-th subswarm;
03:  Identify the subswarm best position X s S b e s t of each s-th subswarm and create Ω N s u b s for the final particle in all S subswarms using Equation (18);
04:  Determine B s and | B s   | for the final particle of all S subswarms based on Ω N s u b s ;
05: for each s-th subswarm do
06: for each n-th particle do
07: if n N s u b   then
08:  Calculate X n m e a n , s based on Ω n s using Equation (16);
09:  Generate X n S O C , s based on Ω n s using Algorithm 3;
10:  Update X n s using Equation (17);
11: else if n = N s u b   then
12: if B s   and   Ω N s u b s then
13:  Calculate X N s u b m e a n , s based on Ω N s u b s using Equation (19);
14:  Generate X N s u b S O C , s based on Ω N s u b s using Algorithm 4;
15:  Update X N s u b s using Equation (20);
16: end if
17: end if
18:  Evaluate f ( X n s ) of the updated X n s ;
19: if f ( X n s ) is better than f ( X n P b e s t , s ) then
20:  X n P b e s t , s X n s ,   f ( X n P b e s t , s ) f ( X n s ) ;
21: if f ( X n s ) is better than f ( G b e s t ) then
22:  G b e s t X n s ,   f ( G b e s t ) f ( X n s ) ;
23: end if
24: end if
25: end for
26: end for
Output:  X n s ,   f ( X n s ) ,   X n P b e s t , s ,   f ( X n P b e s t , s ) ,   G b e s t   and   f ( G b e s t )

3.2.3. Secondary Learning Phase of MSPSOTLP

Substantial studies [19,65] reported that most PSO variants employed single search operators that can only solve specific optimization problems with good results, but fail to perform well in the remaining problems due to the limited variations of exploration and exploitation strengths. For some challenging optimization problems, the fitness landscapes contained in different subregions of search space can be significantly different. Therefore, the particles need to adjust their exploration and exploitation strengths dynamically when searching in different regions of the solution space to locate global optimum and deliver good optimization results.
Motivated by these findings, a secondary phase is designed as an alternative framework of MSPSOTLP, where two search operators with different search characteristics are incorporated to guide the search process of particles with varying levels of exploration and exploitation strengths. Unlike the primary learning phase, both search operators assigned in the secondary learning phase aim to further refine those already found promising regions by searching around the personal best positions of all MSPSOTLP particles. Before initiating the secondary learning phase, all S subswarms constructed during the primary learning phase, i.e., P s s u b for s = 1 , , S , are merged to form the main population P with N particles as shown below:
P = P_1^{sub} ∪ ⋯ ∪ P_s^{sub} ∪ ⋯ ∪ P_S^{sub}    (21)
To assign a search operator to each n-th particle during the secondary learning phase of MSPSOTLP, a peer particle with a population index of e is randomly selected, where e ∈ [1, N] and e ≠ n. Define X_e^{Pbest} as the personal best position of this randomly selected e-th particle, whose personal best fitness is evaluated as f(X_e^{Pbest}). If the e-th particle has better personal best fitness than that of X_n^{Pbest}, the new personal best position of the latter particle can be updated as X_n^{Pbest,new}, where
X_n^{Pbest,new} = X_n^{Pbest} + r_7 (X_e^{Pbest} - X_n^{Pbest}),  if f(X_e^{Pbest}) is better than f(X_n^{Pbest})    (22)
where r_7 ∈ [0, 1] is a random number generated from a uniform distribution. The search operator of Equation (22) can attract the n-th particle towards the promising solution regions covered by the e-th peer particle; hence, it behaves more exploratively.
If the e-th particle has inferior personal best fitness compared with that of the n-th particle, the former solution is discarded instead. Another four distinct particles with population indices w, x, y, and z are randomly selected, where w, x, y, z ∈ [1, N] and w ≠ x ≠ y ≠ z. Denote X_{w,d}^{Pbest}, X_{x,d}^{Pbest}, X_{y,d}^{Pbest}, and X_{z,d}^{Pbest} as the d-th dimensions of the personal best positions of the w-th, x-th, y-th, and z-th particles, respectively. Let G_d^{best} be the same d-th dimension of the global best position. For every n-th particle, each d-th dimension of its new personal best position can be calculated as:
X_{n,d}^{Pbest,new} = G_d^{best} + τ_1 (X_{w,d}^{Pbest} - X_{x,d}^{Pbest}) + τ_2 (X_{y,d}^{Pbest} - X_{z,d}^{Pbest}),  if r_8 > 0.5;  X_{n,d}^{Pbest},  otherwise    (23)
where τ 1 , τ 2 , r 8 [ 0 , 1 ] are random numbers generated from a uniform distribution. From Equation (23), there is a probability for each X n , d P b e s t , n e w to inherit its original information from X n , d P b e s t or to perform searching around the nearby region of G d b e s t with small perturbations based on the information of X w , d P b e s t , X x , d P b e s t , X y , d P b e s t and X z , d P b e s t . Hence, the search operator of Equation (23) is considered more exploitative than that of Equation (22). The procedures used to implement the secondary learning phase of MSPSOTLP is shown in Algorithm 6. For each new X n P b e s t , n e w obtained from Equations (22) or (23), boundary checking is performed. For each n-th particle, the fitness value of its updated X n P b e s t , n e w is obtained as f ( X n P b e s t , n e w ) and compared with f ( X n P b e s t ) and f ( G b e s t ) . If X n P b e s t , n e w is better than X n P b e s t   and G b e s t , the latter two solutions are replaced by the former one.
Algorithm 6. Pseudocode of secondary learning phase for MSPSOTLP
Input:  P s s u b for s = 1 , , S , G d b e s t ,   f ( G b e s t )
01:  Reconstruct main population P   using Equation (21);
02: for each n-th particle do
03:  Randomly select e-th particle from P with X e P b e s t and f ( X e P b e s t ) ;
04: if f ( X e P b e s t ) is better than f ( X n P b e s t ) then
05:  Calculate X n P b e s t , n e w using Equation (22);
06: else if f ( X e P b e s t ) is not better than f ( X n P b e s t ) then
07: for each d-th dimension do
09:  Calculate X n , d P b e s t , n e w using Equation (23);
10: end for
11: end if
12:  Evaluate f ( X n P b e s t , n e w ) ;
13: if f ( X n P b e s t , n e w ) is better than f ( X n P b e s t ) then
14:  X n P b e s t X n P b e s t , n e w ,   f ( X n P b e s t ) f ( X n P b e s t , n e w ) ;
15: if f ( X n P b e s t , n e w ) is better than f ( G b e s t ) then
16:  G b e s t X n P b e s t ,   f ( G b e s t ) f ( X n P b e s t ) ;
17: end if
18: end if
19: end for
Output:  X n P b e s t , f ( X n P b e s t ) , G b e s t , f ( G b e s t )
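The two operators of the secondary learning phase (Equations (22) and (23)) and the greedy replacement of Algorithm 6 can be sketched as follows, assuming a minimization problem; boundary checking is omitted for brevity.

import numpy as np

def secondary_learning(pbest, f_pbest, gbest, f_gbest, fitness):
    """Secondary learning phase (Algorithm 6).
    pbest: (N, D) personal best positions; f_pbest: (N,) their fitness values."""
    N, D = pbest.shape
    for n in range(N):
        e = np.random.choice([i for i in range(N) if i != n])        # random peer particle
        if f_pbest[e] < f_pbest[n]:                                   # exploratory operator, Equation (22)
            candidate = pbest[n] + np.random.rand() * (pbest[e] - pbest[n])
        else:                                                         # exploitative operator, Equation (23)
            w, x, y, z = np.random.choice(N, size=4, replace=False)
            tau1, tau2 = np.random.rand(2)
            perturbed = gbest + tau1 * (pbest[w] - pbest[x]) + tau2 * (pbest[y] - pbest[z])
            candidate = np.where(np.random.rand(D) > 0.5, perturbed, pbest[n])
        f_new = fitness(candidate)
        if f_new < f_pbest[n]:                                        # greedy personal best update
            pbest[n], f_pbest[n] = candidate, f_new
            if f_new < f_gbest:                                       # global best update
                gbest, f_gbest = candidate.copy(), f_new
    return pbest, f_pbest, gbest, f_gbest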

3.2.4. Overall Framework of MSPSOTLP

The overall framework of MSPSOTLP is described in Algorithm 7. The initial population of MSPSOTLP is first generated using the CSOBL initialization scheme, where the chaotic map and oppositional-based learning concepts are incorporated to enhance the quality and coverage of the initial solutions in the search space, respectively. The primary learning phase is performed to update the positions of particles by leveraging the benefits of the multiswarm and social learning concepts. The secondary learning phase is then executed to fine-tune the personal best position of each particle based on two newly proposed search operators with different search characteristics. These iterative search processes are repeated until the termination criterion of γ > Γ m a x is satisfied, where γ is a counter used to record the fitness evaluation number consumed by MSPSOTLP and Γ m a x is a predefined maximum fitness evaluation number. At the end of the optimization process, G b e s t is returned as the best-optimized result found by the proposed algorithm. If MSPSOTLP is used to train the ANN model, then G b e s t can be decoded as the optimal combination of weights, biases, and activation functions used by an ANN model to solve a given dataset.
Algorithm 7 (Main Algorithm). MSPSOTLP
Input:  N, D, X^U, X^L, Γ_max
01:  Initialize G b e s t as an empty vector and f ( G b e s t ) ;
02:  Initialize γ 0 ;
03:  Generate the initial population P   using Algorithm 1;
04:  γ γ + 2 N ;
05: for each n-th particle do
06:  X n P b e s t X n ,   f ( X n P b e s t ) f ( X n ) ;
07: If f ( X n ) is better than f ( G b e s t ) then
08:  G b e s t   X n ,   f ( G b e s t ) f ( X n ) ;
09: end if
10: end for
11: while γ Γ m a x do
12:  Perform the primary learning phase using Algorithm 5;
13:  γ γ + N ;
14:  Perform the secondary learning phase using Algorithm 6;
15:  γ γ + N ;
16: end while
Output:  G b e s t , f ( G b e s t )

4. Performance Analysis of MSPSOTLP

The performance of the proposed MSPSOTLP in solving various types of challenging optimization problems is investigated and compared with well-established PSO variants. This includes the performance of MSPSOTLP in solving CEC 2014 benchmark functions, followed by its performance in optimizing the weights, biases, and activation functions of ANN models for solving classification problems.

4.1. Performance Evaluation of MSPSOTLP in Solving Global Optimization Problems

4.1.1. CEC 2014 Benchmark Functions

The optimization performance of the proposed MSPSOTLP is evaluated using 30 benchmark functions of CEC 2014 [66]. As described in Table 3, the benchmark functions with different fitness landscape characteristics can be classified into four categories, known as (i) unimodal functions (F1–F3), (ii) simple multimodal functions (F4–F16), (iii) hybrid functions (F17–F22) and (iv) composition functions (F23–F30). For all benchmark functions in CEC 2014 with D dimensions, the search range X of each decision variable is constrained between −100 to 100. Furthermore, the fitness value of theoretical global optimum f ( X * ) of each function is presented in Table 3.

4.1.2. Performance Metrics for Solving Benchmark Functions

Performances of all compared algorithms are measured using the mean fitness Fmean and the standard deviation SD. Specifically, Fmean is the mean error between the fitness of the best solution produced by an algorithm and the actual global optimum of a benchmark function over multiple runs. Meanwhile, the SD value measures the consistency of an algorithm in solving a given problem. Smaller Fmean and SD values imply the capability of an algorithm to solve a function with better accuracy and consistency, respectively.
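For clarity, both metrics can be computed directly from the best fitness values recorded over multiple independent runs, as in the short sketch below; best_fitness is a hypothetical array holding the best fitness found in each run and f_star is the theoretical optimum of the benchmark function.

```python
import numpy as np

def fmean_and_sd(best_fitness, f_star):
    """Mean error against the theoretical optimum and its standard deviation over runs."""
    errors = np.asarray(best_fitness, dtype=float) - f_star   # error of each independent run
    return errors.mean(), errors.std()

# Example: three runs on a function whose theoretical optimum is 100 (e.g., F1 of CEC 2014).
f_mean, sd = fmean_and_sd([100.2, 100.5, 100.1], 100.0)
```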
A set of non-parametric statistical analysis procedures [67,68] are applied to analyze the performance of the proposed MSPSOTLP and its competitors from a statistical point of view. The Wilcoxon signed rank test [68] is applied to conduct a pairwise comparison between the proposed MSPSOTLP and each competitor at a significance level of α = 0.05 . The results generated by the Wilcoxon signed rank test are presented in terms of R + , R , p, and h values. R + and R summarize the sum of ranks where MSPSOTLP outperforms or underperforms its peer algorithm, respectively. The p-value indicates the minimum significance level required to identify the performance deviations between the two algorithms. If the p-value is smaller than   α = 0.05 , the better result achieved by an algorithm is considered statistically significant. Based on the obtained p-value and predefined α , the corresponding h value is concluded to be significantly better (i.e., h = “+”), statistically insignificant (i.e., h = “=”), or significantly worse (i.e., h = “-”).
Multiple comparisons among the proposed MSPSOTLP and its competitors are also conducted using the Friedman test [67]. The Friedman test first produces the average ranking of each algorithm. The p-value obtained from the Friedman test measures the global differences among all compared algorithms at a significance level of α = 0.05. If significant global differences are observed, three post-hoc analyses [67], known as the Bonferroni-Dunn, Holm, and Hochberg procedures, are performed to analyze the substantial differences among all compared algorithms based on the adjusted p-values (APVs).
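As a hedged illustration, both non-parametric procedures can be reproduced with SciPy; the arrays below are hypothetical Fmean vectors (one entry per benchmark function) for three algorithms, and the simple R+/R− rank sums ignore tied differences for brevity.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare, rankdata

# Hypothetical Fmean vectors (one entry per benchmark function) for three algorithms.
fmean_a = np.array([1.2, 0.8, 3.4, 2.2, 5.1, 0.9])
fmean_b = np.array([1.5, 1.1, 3.9, 2.0, 6.0, 1.3])
fmean_c = np.array([2.0, 1.4, 4.2, 2.5, 6.3, 1.1])

# Pairwise Wilcoxon signed rank test between algorithm A and one competitor.
stat, p_value = wilcoxon(fmean_a, fmean_b)

# R+ / R- rank sums of the paired differences (ties ignored in this sketch).
diffs = fmean_b - fmean_a                  # positive where A achieves the smaller error
ranks = rankdata(np.abs(diffs))
r_plus, r_minus = ranks[diffs > 0].sum(), ranks[diffs < 0].sum()

# Friedman test for global performance differences among all compared algorithms.
chi_square, p_global = friedmanchisquare(fmean_a, fmean_b, fmean_c)
```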

4.1.3. Parameter Settings for Solving Benchmark Functions

The performance of the proposed MSPSOTLP in solving CEC 2014 benchmark functions is compared with seven well-established PSO variants. The selected PSO variants include the conventional PSO (PSO) [14], PSO without velocity (PSOWV) [16], unconstrained version of multi-swarm PSO without velocity (MPSOWV) [15], competitive swarm optimizer (CSO) [69], social learning PSO (SLPSO) [18], hybridized PSO with gravitational search algorithm (PSOGSA) [44], and accelerated PSO (APSO) [70].
The parameter settings of all compared algorithms are set to the values recommended in their respective literature and are presented in Table 4. All compared algorithms are configured with a population size of N = 100 to solve each benchmark function at D = 30 for 30 independent runs. The maximum fitness evaluation number of all algorithms is set as Γ_max = 10,000 × D. All compared algorithms are simulated using Matlab 2019b on a personal computer with an Intel® Core i7-7500 CPU @ 2.70 GHz.

4.1.4. Performance Comparison in Solving CEC 2014 Benchmark Functions

The simulation results in terms of Fmean and SD produced by the proposed MSPSOTLP and the other selected PSO variants in solving the CEC 2014 benchmark functions are presented in Table 5. The best and second-best Fmean values obtained by the algorithms are indicated in boldface and underlined, respectively. Moreover, the performance comparison between the proposed MSPSOTLP and the selected PSO variants is summarized in #BMF and w/t/l. Specifically, #BMF indicates the number of best Fmean values obtained by an algorithm in solving all functions, while w/t/l reports that the proposed MSPSOTLP has better performance in w functions, similar performance in t functions, and worse performance in l functions as compared to the particular compared algorithm.
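The two summary statistics can be tallied directly from the Fmean table, as in the following sketch; results is a hypothetical dictionary mapping each algorithm name to its vector of Fmean values, and ties are judged with a small numerical tolerance.

```python
import numpy as np

def summarize(results, proposed="MSPSOTLP", tol=1e-12):
    """Count the best Fmean per algorithm (#BMF) and w/t/l of the proposed algorithm."""
    names = list(results)
    table = np.array([results[name] for name in names])       # algorithms x functions
    best_per_function = table.min(axis=0)

    # #BMF: how many functions each algorithm solves with the (possibly tied) best Fmean.
    bmf = {name: int(np.sum(np.isclose(results[name], best_per_function, atol=tol)))
           for name in names}

    # w/t/l of the proposed algorithm against every competitor (smaller error wins).
    wtl = {}
    for name in names:
        if name == proposed:
            continue
        diff = np.asarray(results[name]) - np.asarray(results[proposed])
        w = int(np.sum(diff > tol))
        l = int(np.sum(diff < -tol))
        wtl[name] = (w, len(diff) - w - l, l)
    return bmf, wtl
```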
For unimodal functions (i.e., F1–F3), the proposed MSPSOTLP has the most dominating search performance by producing the best Fmean values to solve these three functions. MSPSOTLP is also the only PSO variant to locate the global and near-global optimum solutions of F2 and F3, respectively. Apart from MSPSOTLP, both PSO and PSOGSA are considered to have relatively better search performance than the rest of the algorithms in solving unimodal functions by producing one second-best Fmean. Meanwhile, PSOWV, MPSOWV and APSO have inferior search performance by producing Fmean values that are generally larger than those of the other algorithms.
For simple multimodal functions (i.e., F4–F16), the proposed MSPSOTLP has the most competitive performance in solving these 13 functions, with seven best Fmean values (including F4, F7, F10, F13, and F14) and three second-best Fmean values (i.e., F5, F8, and F9). MSPSOTLP is also the only algorithm that successfully locates the global optimum of function F7. In contrast to the unimodal functions, CSO and SLPSO exhibit relatively better performance in solving several simple multimodal functions, such as F6, F8, F9, and F11. Although PSO and PSOGSA perform well on the unimodal functions, their performances in solving F4, F6, F7, F8, F9, F10, F11, F13, F15, and F16 are relatively inferior. The performance degradations of PSO and PSOGSA reflect the limitation of both algorithms in tackling optimization problems with multiple local optima. Meanwhile, APSO shows relatively better search performance than PSOWV and MPSOWV in solving F5, F12, and F14.
The excellent optimization performance of the proposed MSPSOTLP is also demonstrated in the hybrid function category (i.e., F17–F22) by solving all six functions with the best Fmean values. PSOGSA follows this, producing three second-best Fmean values for F17, F18, and F21. However, the performance of PSOGSA in solving hybrid functions is not consistent, as shown by its relatively inferior results in F19 and F22. A similar scenario is observed from PSO, CSO, and SLPSO, producing mediocre performance in solving most hybrid functions. Specifically, PSO can solve F20 and F21 with relatively good performance, but it performs poorly in F17, F18, F19, and F22. Meanwhile, CSO is reported to solve F19 and F22 with competitive performance, but delivers poor results for F17, F18, F20, and F21. On the other hand, APSO, PSOWV, and MPSOWV are reported to have inferior search performance in solving all hybrid functions with higher complexity levels.
In the category of composition functions (i.e., F23–F30), PSOGSA shows competitive performance in dealing with these more complex functions, followed by the proposed MSPSOTLP. PSOGSA can solve all eight composition functions with the best Fmean values, while MSPSOTLP produces two best Fmean values (i.e., F25 and F26) and five second-best Fmean values (i.e., F23, F24, F28, F29, and F30). Although PSOGSA performs better than MSPSOTLP when solving the composition functions, it performs worse than MSPSOTLP in the three other problem categories. CSO produces one best Fmean value (i.e., F26) and three second-best Fmean values (i.e., F23, F25, and F27), implying its competitive performance in solving composition functions. Meanwhile, SLPSO is observed to have relatively good performance in solving F23 and F25, but mediocre performance in the remaining composition functions.
Overall, the proposed MSPSOTLP demonstrates the best search accuracy among all compared PSO variants by producing 18 best Fmean values in solving the 30 functions, implying that its incorporated search mechanisms are sufficiently robust to handle optimization problems with different levels of complexity as compared to most of its peer algorithms. It is followed by PSOGSA and CSO, which are reported to have 10 and 5 best Fmean values, respectively. On the other hand, PSOWV is identified as the worst algorithm by producing 26 worst Fmean values in solving the 30 CEC 2014 benchmark functions, implying its limitations in solving even the benchmark functions with simpler fitness landscapes.

4.1.5. Non-Parametric Statistical Analyses

Based on the reported Fmean values, the Wilcoxon signed rank test [68] is applied to perform a pairwise comparison between the proposed MSPSOTLP and the selected PSO variants. The results, in terms of R+, R−, p, and h values, are presented in Table 6. Accordingly, MSPSOTLP performs significantly better than all other PSO variants at a significance level of α = 0.05, as indicated by the h values of “+”. Notably, the proposed MSPSOTLP completely dominates PSO, PSOWV, and MPSOWV in solving the CEC 2014 benchmark functions, based on the promising R+, R−, p, and h values reported.
The Friedman test [67] is further conducted for multiple comparisons between the proposed MSPSOTLP and the selected PSO variants based on their Fmean values. The results, in terms of average ranking, chi-square statistics, and p-value, are reported in Table 7. The Friedman test reports that MSPSOTLP has the best performance by scoring an average rank of 1.6833, followed by CSO, PSOGSA, SLPSO, PSO, APSO, MPSOWV, and PSOWV with average ranks of 3.0500, 3.0667, 3.7667, 4.4167, 5.6500, 6.6500, and 7.7167, respectively. The p-value determined by the Friedman test through the chi-square statistics is smaller than the predefined significance level of α = 0.05. Therefore, significant global performance deviations among all compared algorithms are observed.
Given the global performance differences observed from the Friedman test, three post-hoc statistical analyses [67], known as the Bonferroni-Dunn, Holm, and Hochberg procedures, are utilized to identify the concrete performance differences between the proposed MSPSOTLP and the different PSO variants. The results, in terms of z values, unadjusted p-values, and adjusted p-values (APVs), produced by the three procedures are reported in Table 8. All post-hoc procedures confirm the significant performance enhancement of MSPSOTLP against PSOWV, MPSOWV, APSO, SLPSO, and PSO at α = 0.05. The Hochberg procedure has higher sensitivity in detecting the significant performance differences between MSPSOTLP and both CSO and PSOGSA. Notably, the Holm procedure can detect the significant performance improvement of MSPSOTLP against CSO and PSOGSA if the threshold level is adjusted to α = 0.10.

4.1.6. Performance Analysis of Proposed Improvement Strategies

A further performance analysis is conducted in this subsection to investigate the contribution brought by each improvement strategy introduced into MSPSOTLP, i.e., the modified initialization scheme (i.e., chaotic map and oppositional-based learning), the primary learning phase (i.e., multiswarm concept and construction of social exemplars), and the secondary learning phase (i.e., two search operators with different search characteristics). The original PSOWV is chosen as the baseline method for comparison in this subsection. Another three variants of MSPSOTLP, i.e., MSPSOTLP-1, MSPSOTLP-2, and MSPSOTLP-3, are also introduced to analyze the performance gains brought by the modified initialization scheme, the primary learning phase, and the secondary learning phase, respectively. Particularly, MSPSOTLP-1 refers to PSOWV enhanced with the CSOBL initialization scheme. Meanwhile, MSPSOTLP-2 refers to PSOWV enhanced with the CSOBL initialization scheme and the primary learning phase. Finally, MSPSOTLP-3 refers to PSOWV enhanced with the CSOBL initialization scheme and the secondary learning phase, where its primary phase is replaced with the original search operator of PSOWV. The performance gain achieved by each MSPSOTLP variant against the original PSOWV when solving every benchmark function is measured as ΔG as follows:
ΔG = [Fmean(PSOWV) − Fmean(MSPSOTLP variant)] / |Fmean(PSOWV)| × 100%
Referring to Equation (24), it is evident that a positive value of ΔG is obtained if a particular MSPSOTLP variant solves a benchmark function with a better (i.e., smaller) Fmean value than that of PSOWV, and vice versa.
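A small sketch of this computation is shown below; it assumes Fmean values for which smaller is better and follows the sign convention stated above (positive ΔG means the variant improves on PSOWV).

```python
def performance_gain(fmean_variant, fmean_psowv):
    """Percentage gain of an MSPSOTLP variant over PSOWV; positive when the variant is better."""
    return (fmean_psowv - fmean_variant) / abs(fmean_psowv) * 100.0

# Example: a variant that halves the mean error of PSOWV yields a gain of 50%.
delta_g = performance_gain(fmean_variant=5.0, fmean_psowv=10.0)
```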
The simulation results in terms of Fmean and ΔG obtained by PSOWV and all MSPSOTLP variants when solving the CEC 2014 benchmark functions are presented in Table 9. Accordingly, all MSPSOTLP variants successfully solve the majority of the CEC 2014 benchmark functions with different degrees of performance gain. MSPSOTLP-1 is observed to outperform PSOWV in the majority of the CEC 2014 benchmark functions, except for F1, F5, F6, F11, F12, F16, F17, F21, F29, and F30. Although CSOBL can produce an initial population with better solution quality in terms of fitness and diversity, which contributes to the performance gain of the algorithm, it is not sufficient for solving complex problems because CSOBL is executed only once during the search process. Some interesting findings can be observed when the variants MSPSOTLP-2 and MSPSOTLP-3 are used to solve the CEC 2014 benchmark functions. In particular, MSPSOTLP-2 performs better than MSPSOTLP-3 when solving the unimodal (i.e., F1 to F3), simple multimodal (i.e., F4 to F16), and hybrid (i.e., F17 to F22) functions. Meanwhile, MSPSOTLP-3 is more competitive than MSPSOTLP-2 in dealing with the most complex composition functions. The performance differences between MSPSOTLP-2 and MSPSOTLP-3 can be justified based on their inherent search mechanisms. For MSPSOTLP-2, the primary learning phase incorporates the multiswarm and social learning concepts to accelerate the convergence of the algorithm without compromising its population diversity. When dealing with optimization functions with less complex fitness landscapes (i.e., unimodal, simple multimodal, and hybrid functions), the benefits brought by both the multiswarm and social learning concepts can still suppress the potential negative effects of the historically best positions (i.e., the global best position in this case). When the complexity of the fitness landscape increases further, such as in the composition functions, the number of local optima in the solution space grows exponentially. Under this circumstance, the diversity maintenance scheme introduced in the primary learning phase of MSPSOTLP-2 is not sufficient to curb the high tendency of the historically best positions to be trapped in these local optima. On the other hand, the secondary learning phase of MSPSOTLP-3 can leverage the useful directional information of other non-fittest solutions to perform searches with greater exploration strength; thus, it has a higher chance of escaping from inferior regions of the solution space. Finally, the complete MSPSOTLP exhibits the best performance when solving all CEC 2014 benchmark functions, with 26 best and 4 second-best Fmean values. The simulation results reported in Table 9 verify that each improvement strategy incorporated into MSPSOTLP indeed makes a distinct contribution to enhancing the search performance of the proposed algorithm.

4.2. Performance Evaluation of MSPSOTLP in Training ANN Models

4.2.1. Classification Datasets for Training ANN Models

Apart from general optimization performance, the capability of the proposed MSPSOTLP in training ANN models for data classification tasks is also evaluated using sixteen standard datasets extracted from the University of California Irvine (UCI) machine learning repository [71]. The sixteen datasets selected for performance evaluation are Iris, Liver Disorder, Blood Transfusion, Statlog Heart, Hepatitis, Wine, Breast Cancer, Seeds, Australian Credit Approval, Haberman’s Survival, New Thyroid, Glass, Balance Scale, Dermatology, Landsat, and Bank Note. The properties of each dataset are summarized in Table 10. Each selected dataset is split into two parts: 70% training samples and 30% testing samples. Specifically, the training samples are used by MSPSOTLP and the other PSO variants to optimize the parameters of the ANN models (i.e., weights, biases, and activation functions), while the testing samples are used to evaluate the generalization performance of the ANN models trained by all compared algorithms.
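For instance, the 70/30 split can be reproduced with scikit-learn, as sketched below for the Iris dataset (one of the sixteen selected benchmarks); the stratified split and fixed random seed are assumptions for reproducibility, not settings stated in the paper.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Feature matrix X and class labels y of the Iris dataset.
X, y = load_iris(return_X_y=True)

# 70% training samples for optimizing the ANN parameters,
# 30% testing samples for assessing generalization performance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
```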

4.2.2. Performance Metrics for ANN Training

Classification accuracy C is a popular performance metric used to measure the classification performance of an ANN model. Suppose that R̃ refers to the number of data samples correctly classified by the ANN model, while R represents the total number of data samples in each dataset. The C value is calculated as follows:
C = (R̃ / R) × 100%
An ANN model with a larger value of C is more desirable because it produces better results in terms of classification accuracy. In addition, an ANN model that produces a larger C value when solving the testing samples is also considered to have better generalization performance, owing to its capability to accurately classify unseen data into the correct classes based on what it has learned from the existing data. Furthermore, the standard deviation (SD) values are also recorded to observe the consistency of the ANN models trained by the compared PSO variants in solving the classification datasets.
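The accuracy metric reduces to a simple ratio, as in the short sketch below with hypothetical label arrays:

```python
import numpy as np

def classification_accuracy(y_true, y_pred):
    """Percentage of correctly classified samples, i.e., (R_tilde / R) * 100%."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(y_true == y_pred) * 100.0

# Example: 3 of 4 samples classified correctly gives C = 75%.
c = classification_accuracy([0, 1, 2, 1], [0, 1, 2, 0])
```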

4.2.3. Parameter Settings for ANN Training

Similar to the global optimization experiments, the performance of the ANN model trained by the proposed MSPSOTLP in solving the sixteen datasets extracted from the UCI machine learning repository is compared with the seven PSO variants reported in Section 4.1.3. The parameters of all compared algorithms are configured as reported in Table 4, following the recommendations of their original literature. The same values of N = 100 and Γ_max = 10,000 × D are also configured for all compared PSO variants in solving the ANN training problems, where the value of D is calculated based on Equation (7). The ANN model to be trained in this study is constructed from an input layer, a hidden layer, and an output layer. The numbers of input and output neurons of each ANN model are configured based on the number of attributes and classes of a given dataset, respectively, as presented in Table 10. Meanwhile, the number of neurons in the hidden layer is set to 15. Similarly, all compared algorithms for training the ANN models are simulated using Matlab 2019b on a personal computer with an Intel® Core i7-7500 CPU @ 2.70 GHz.
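For illustration, a common way to encode a single-hidden-layer ANN inside one particle is to concatenate all of its weights and biases into a flat vector; the sketch below shows such a decoding and a forward pass for a network with 15 hidden neurons. This is only an assumed mapping for readability: the exact dimensionality follows Equation (7) of the paper, and the gene(s) that select the activation function would add to the count shown here.

```python
import numpy as np

def decode_particle(particle, n_inputs, n_hidden, n_outputs):
    """Hypothetical decoding of a flat particle vector into weights and biases
    of a single-hidden-layer ANN (the paper's exact encoding is given by Equation (7))."""
    particle = np.asarray(particle, dtype=float)
    n_w1, n_w2 = n_inputs * n_hidden, n_hidden * n_outputs
    w1 = particle[:n_w1].reshape(n_inputs, n_hidden)                     # input-to-hidden weights
    b1 = particle[n_w1:n_w1 + n_hidden]                                  # hidden biases
    w2 = particle[n_w1 + n_hidden:n_w1 + n_hidden + n_w2].reshape(n_hidden, n_outputs)
    b2 = particle[n_w1 + n_hidden + n_w2:]                               # output biases
    return w1, b1, w2, b2

def forward(X, w1, b1, w2, b2, activation=np.tanh):
    """Forward pass; the activation function itself is also a decision made by the optimizer."""
    hidden = activation(X @ w1 + b1)
    return hidden @ w2 + b2

# Example size for Iris (4 attributes, 3 classes, 15 hidden neurons),
# counting weights and biases only.
D = 4 * 15 + 15 + 15 * 3 + 3
w1, b1, w2, b2 = decode_particle(np.random.rand(D), n_inputs=4, n_hidden=15, n_outputs=3)
```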

4.2.4. Performance Comparison in Training ANN Models

The C and SD values produced by the ANN models optimized by all compared PSO variants when classifying the training and testing samples are presented in Table 11 and Table 12, respectively. Similarly, the best and second-best C values produced by the compared methods for each dataset are indicated in boldface and underlined, respectively. The comparative studies between the ANN models trained by MSPSOTLP and the other PSO variants are summarized in #BC and w/t/l, similar to those in Table 5. Specifically, #BC records the number of best C values produced by each algorithm in training the ANN models for the 16 datasets extracted from the UCI machine learning repository.
According to Table 11, the ANN models trained by the proposed MSPSOTLP are reported to have the best performance, being able to solve 12 out of 16 sets of training samples with the best C values. Specifically, MSPSOTLP emerges as the best training algorithm for ANN models in dealing with the datasets of Iris, Statlog Heart, Hepatitis, Wine, Seeds, Australian Credit Approval, New Thyroid, Glass, Balance Scale, Dermatology, Landsat, and Bank Note. It is also noteworthy that the proposed MSPSOTLP is the only algorithm that successfully trains ANN models with C values of 100% in classifying the training sets of Wine and Bank Note. The ANN models trained by PSO and PSOGSA can occasionally deliver good performance, with each producing two best C values in solving the training samples. Although the C values produced by the ANN models trained by the proposed MSPSOTLP in classifying the training samples of Liver Disorder, Blood Transfusion, Breast Cancer, and Haberman’s Survival are slightly lower than those of PSO and PSOGSA, the performance differences between these algorithms are marginal and not more than 1%. On the other hand, the ANN models trained by MSPSOTLP are reported to have more notable performance advantages over PSO and PSOGSA in terms of C values when classifying the training samples of Iris, Statlog Heart, Hepatitis, Seeds, New Thyroid, Glass, and Landsat. The ANN models trained by PSOWV are reported to have the most inferior performance, producing eight worst and seven second-worst C values in solving all 16 classification datasets except for Landsat.
In addition to the training samples, 30% of each dataset is extracted as testing samples to evaluate the generalization performance of the ANN models optimized by all compared PSO variants. According to Table 12, the ANN models trained by the proposed MSPSOTLP are reported to have the best generalization performance, producing 12 best and two second-best C values when classifying the testing samples of the 16 selected datasets. Specifically, the ANN models trained by MSPSOTLP successfully produce the best C values in solving the testing samples of Iris, Statlog Heart, Hepatitis, Wine, Breast Cancer, Seeds, Haberman’s Survival, New Thyroid, Glass, Balance Scale, Landsat, and Bank Note. Moreover, MSPSOTLP is also reported to be the only algorithm that can train ANN models to solve the testing samples of Iris and Seeds with C values of 100%. On the other hand, the ANN models trained by PSO, MPSOWV, SLPSO, and APSO are reported to produce the best C values when classifying the testing samples of Dermatology, Blood Transfusion, Australian Credit Approval, and Liver Disorder, respectively. Although the ANN models trained by MSPSOTLP are observed to produce relatively inferior C values in solving the testing samples of these four datasets, some of these performance differences are insignificant. On the contrary, the ANN models trained by MSPSOTLP produce much better C values than PSO, MPSOWV, SLPSO, and APSO in solving the testing samples of certain datasets, such as Hepatitis, Wine, Seeds, New Thyroid, etc. The ANN models trained by PSOWV perform the worst, producing the seven lowest C values (i.e., testing samples of Liver Disorder, Breast Cancer, Seeds, Australian Credit Approval, New Thyroid, Glass, and Balance Scale) and eight second-worst C values (i.e., testing samples of Iris, Statlog Heart, Hepatitis, Wine, Haberman’s Survival, Dermatology, Landsat, and Bank Note).

4.2.5. Non-Parametric Statistical Analyses

Similar to those reported in Section 4.1.5, the Wilcoxon signed rank test [68] is applied to perform a pairwise comparison between the proposed MSPSOTLP and the selected PSO variants based on the reported C values. The R+, R−, p, and h values produced by the ANN models trained by the compared algorithms in classifying the training and testing samples are presented in Table 13 and Table 14, respectively. From Table 13, the ANN models trained by the proposed MSPSOTLP have significantly better performance than those of the other PSO variants at a significance level of α = 0.05, as indicated by the h values of “+”. Notably, the ANN models optimized by the proposed MSPSOTLP completely dominate those of PSOWV, CSO, SLPSO, and APSO when solving the training samples of the selected datasets. Similarly, the ANN models trained by MSPSOTLP are observed to perform significantly better than those of the other PSO variants in solving all testing samples of the chosen datasets, as indicated by the h values of “+” in Table 14. These pairwise comparison results imply the excellent generalization ability of the ANN models trained by MSPSOTLP, given their ability to handle the unseen data of the testing samples effectively.
Apart from the pairwise comparisons, multiple comparisons among the ANN models trained by all compared algorithms are also conducted using the Friedman test [67]. The average ranking values produced by the ANN models trained by all compared algorithms in solving the training and testing samples are reported in Table 15 and Table 16, respectively. Table 15 shows that the ANN models trained by MSPSOTLP score the best average ranking when classifying the training datasets, followed by PSOGSA, PSO, MPSOWV, CSO, SLPSO, APSO, and PSOWV. Although Table 7 reports that MPSOWV has a relatively poor ranking in solving the CEC 2014 benchmark functions, it performs relatively well in training ANN models to solve the training samples. Conversely, CSO does not perform well in training the ANN models despite producing relatively competitive performance in solving the CEC 2014 benchmark functions. Table 16 reveals that the ANN models trained by MSPSOTLP have the best average ranking value in classifying the testing samples, followed by MPSOWV, PSO, PSOGSA, SLPSO, CSO, APSO, and PSOWV. Similarly, MPSOWV shows competitive performance in training the ANN models despite its inferior performance in solving the CEC 2014 benchmark functions. Although the ANN models trained by PSOGSA have the second-best average ranking in solving the training samples, as reported in Table 15, they do not perform well in solving the testing samples, as reported in Table 16, implying the tendency of PSOGSA to produce ANN models that suffer from overfitting and have poor generalization performance in handling unseen data.
Referring to the p-values reported in Table 15 and Table 16, significant global performance differences among the ANN models trained by all compared algorithms in solving the training and testing samples are observed at a significance level of α = 0.05. The concrete differences between the ANN models trained by MSPSOTLP and the other PSO variants in classifying the training and testing samples are further analyzed using the Bonferroni-Dunn, Holm, and Hochberg procedures, as shown in Table 17 and Table 18, respectively. According to the APVs for solving the training samples, as reported in Table 17, all post-hoc procedures confirm the significant performance improvement of the ANN models trained by MSPSOTLP against those of PSOWV, APSO, SLPSO, and CSO at α = 0.05. On the other hand, all post-hoc procedures detect the significant performance improvement of the ANN models trained by MSPSOTLP against those of PSOWV, APSO, CSO, SLPSO, and PSOGSA in solving the testing samples, as indicated by the APV values in Table 18.

5. Conclusions

This study proposes a new PSO variant known as multi-swarm-based particle swarm optimization with two-level learning phases (MSPSOTLP) to address the potential drawbacks of PSOWV. Three significant modifications are introduced into the proposed MSPSOTLP to ensure that proper balancing of the exploration and exploitation searches of the algorithm can be achieved in handling more challenging optimization problems, including the training process of the ANN model. A new population initialization scheme, the CSOBL initialization scheme, is incorporated to replace the conventional random initialization scheme in generating initial solutions with better diversity and broader coverage in the solution space. Both multiswarm and social learning concepts are incorporated into the primary learning phase of MSPSOTLP to guide the search process of particles more effectively without losing population diversity by leveraging the directional information contributed by other non-fittest particles in the population. Additionally, a secondary learning phase is introduced with the adoption of two search operators with different levels of exploration and exploitation strengths, aiming to address the limitations of a single search operator adopted by many existing PSO variants. Extensive simulation studies report that the proposed MSPSOTLP outperforms the selected seven PSO variants in solving benchmark problems from CEC 2014 by producing 18 best mean fitness values out of 30 functions. Moreover, the training process of the ANN model is also formulated as an optimization problem, where the objective is to produce the optimal values of weights and biases and the selection of activation functions. The proposed MSPSOTLP is reported to have the best overall performance in training ANN models to solve classification datasets extracted from the UCI machine learning repository.
While MSPSOTLP has demonstrated competitive performance in solving the CEC 2014 benchmark functions and in training ANN models for classifying UCI machine learning datasets, the proposed work still has room for improvement in terms of its search mechanisms and potential real-world applications. First, the main population of MSPSOTLP is divided by the reference-point-based population division scheme into a predefined number of subswarms during the primary learning phase. It is nontrivial to determine the optimal subswarm number for optimization problems with different types of fitness landscapes. Second, the solution update process of MSPSOTLP is performed by comparing the fitness values of the current and new particles. It is noteworthy that such a greedy selection scheme tends to suppress the survival of novel particles that might have temporarily poor performance at the earlier stages of the search process, but could contribute to long-term success given sufficient iterations. Third, the performance of the ANN optimized by MSPSOTLP is currently evaluated using datasets obtained from a public database. Despite exhibiting promising classification accuracy on most selected datasets, the feasibility of the proposed method for solving more challenging real-world classification and regression problems remains unexplored. Some potential future works are suggested to address these aforementioned limitations. First, the population division scheme of MSPSOTLP can be further enhanced such that the optimal subswarm number can be determined adaptively based on the types of fitness landscapes encountered by the population. Second, other criteria, such as the fitness improvement rate and population diversity, should be considered by MSPSOTLP during the solution update process to preserve the novel particles that can bring long-term success to the algorithm. Finally, it is worthwhile to investigate the feasibility of the ANN optimized by MSPSOTLP in addressing challenging issues encountered in the intelligent condition monitoring of complex industrial systems [2], such as the remaining useful life prediction of gear pumps [72], the time series prognosis of fuel cells [73], and the predictive maintenance of renewable energy systems [74].

Author Contributions

Conceptualization, W.H.L., S.S.T. and E.-S.M.E.-K.; methodology, K.M.A., W.H.L. and E.-S.M.E.-K.; software, K.M.A., C.E.C., A.A.A. and A.I.; validation, K.M.A., C.E.C., F.K.K. and D.S.K.; formal analysis, K.M.A., S.S.T. and W.H.L.; investigation, K.M.A., W.H.L. and E.-S.M.E.-K.; resources, E.-S.M.E.-K., C.E.C., F.K.K. and D.S.K.; data curation, K.M.A., C.E.C., A.A.A., A.I. and F.K.K.; writing—original draft preparation, K.M.A., C.E.C., A.A.A., A.I. and F.K.K.; writing—review and editing, W.H.L., S.S.T. and E.-S.M.E.-K.; visualization, K.M.A., C.E.C. and D.S.K.; supervision, W.H.L., S.S.T. and E.-S.M.E.-K.; project administration, W.H.L. and E.-S.M.E.-K.; funding acquisition, E.-S.M.E.-K. and D.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R300), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data will be provided upon reasonable request.

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R300), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Khashei, M.; Bijari, M. An artificial neural network (p, d, q) model for timeseries forecasting. Expert Syst. Appl. 2010, 37, 479–489.
2. Berghout, T.; Benbouzid, M. A Systematic Guide for Predicting Remaining Useful Life with Machine Learning. Electronics 2022, 11, 1125.
3. Abdelhamid, A.A.; El-Kenawy, E.-S.M.; Alotaibi, B.; Amer, G.M.; Abdelkader, M.Y.; Ibrahim, A.; Eid, M.M. Robust Speech Emotion Recognition Using CNN+ LSTM Based on Stochastic Fractal Search Optimization Algorithm. IEEE Access 2022, 10, 49265–49284.
4. El-kenawy, E.-S.M.; Albalawi, F.; Ward, S.A.; Ghoneim, S.S.; Eid, M.M.; Abdelhamid, A.A.; Bailek, N.; Ibrahim, A. Feature selection and classification of transformer faults based on novel meta-heuristic algorithm. Mathematics 2022, 10, 3144.
5. Alhussan, A.A.; Khafaga, D.S.; El-Kenawy, E.-S.M.; Ibrahim, A.; Eid, M.M.; Abdelhamid, A.A. Pothole and Plain Road Classification Using Adaptive Mutation Dipper Throated Optimization and Transfer Learning for Self Driving Cars. IEEE Access 2022, 10, 84188–84211.
6. Wu, H.; Zhou, Y.; Luo, Q.; Basset, M.A. Training feedforward neural networks using symbiotic organisms search algorithm. Comput. Intell. Neurosci. 2016, 2016, 9063065.
7. Feng, J.; Lu, S. Performance analysis of various activation functions in artificial neural networks. J. Phys. Conf. Ser. 2019, 1237, 022030.
8. Abu-Shams, M.; Ramadan, S.; Al-Dahidi, S.; Abdallah, A. Scheduling Large-Size Identical Parallel Machines with Single Server Using a Novel Heuristic-Guided Genetic Algorithm (DAS/GA) Approach. Processes 2022, 10, 2071.
9. Sharma, A.; Khan, R.A.; Sharma, A.; Kashyap, D.; Rajput, S. A Novel Opposition-Based Arithmetic Optimization Algorithm for Parameter Extraction of PEM Fuel Cell. Electronics 2021, 10, 2834.
10. Singh, A.; Sharma, A.; Rajput, S.; Mondal, A.K.; Bose, A.; Ram, M. Parameter Extraction of Solar Module Using the Sooty Tern Optimization Algorithm. Electronics 2022, 11, 564.
11. El-Kenawy, E.-S.M.; Mirjalili, S.; Abdelhamid, A.A.; Ibrahim, A.; Khodadadi, N.; Eid, M.M. Meta-heuristic optimization and keystroke dynamics for authentication of smartphone users. Mathematics 2022, 10, 2912.
12. Khafaga, D.S.; Alhussan, A.A.; El-Kenawy, E.-S.M.; Ibrahim, A.; Eid, M.M.; Abdelhamid, A.A. Solving Optimization Problems of Metamaterial and Double T-Shape Antennas Using Advanced Meta-Heuristics Algorithms. IEEE Access 2022, 10, 74449–74471.
13. El-Kenawy, E.-S.M.; Mirjalili, S.; Alassery, F.; Zhang, Y.-D.; Eid, M.M.; El-Mashad, S.Y.; Aloyaydi, B.A.; Ibrahim, A.; Abdelhamid, A.A. Novel Meta-Heuristic Algorithm for Feature Selection, Unconstrained Functions and Engineering Problems. IEEE Access 2022, 10, 40536–40555.
14. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 1944, pp. 1942–1948.
15. Ang, K.M.; Lim, W.H.; Isa, N.A.M.; Tiang, S.S.; Wong, C.H. A constrained multi-swarm particle swarm optimization without velocity for constrained optimization problems. Expert Syst. Appl. 2020, 140, 112882.
16. El-Sherbiny, M.M. Particle swarm inspired optimization algorithm without velocity equation. Egypt. Inform. J. 2011, 12, 1–8.
17. Tian, D.; Zhao, X.; Shi, Z. DMPSO: Diversity-guided multi-mutation particle swarm optimizer. IEEE Access 2019, 7, 124008–124025.
18. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inf. Sci. 2015, 291, 43–60.
19. Lim, W.H.; Isa, N.A.M.; Tiang, S.S.; Tan, T.H.; Natarajan, E.; Wong, C.H.; Tang, J.R. A self-adaptive topologically connected-based particle swarm optimization. IEEE Access 2018, 6, 65347–65366.
20. Isiet, M.; Gadala, M. Self-adapting control parameters in particle swarm optimization. Appl. Soft Comput. 2019, 83, 105653.
21. Li, M.; Chen, H.; Wang, X.; Zhong, N.; Lu, S. An improved particle swarm optimization algorithm with adaptive inertia weights. Int. J. Inf. Technol. Decis. Mak. 2019, 18, 833–866.
22. Ghasemi, M.; Akbari, E.; Rahimnejad, A.; Razavi, S.E.; Ghavidel, S.; Li, L. Phasor particle swarm optimization: A simple and efficient variant of PSO. Soft Comput. 2019, 23, 9701–9718.
23. Liu, W.; Wang, Z.; Zeng, N.; Yuan, Y.; Alsaadi, F.E.; Liu, X. A novel randomised particle swarm optimizer. Int. J. Mach. Learn. Cybern. 2021, 12, 529–540.
24. Ang, K.M.; Juhari, M.R.M.; Cheng, W.-L.; Lim, W.H.; Tiang, S.S.; Wong, C.H.; Rahman, H.; Pan, L. New Particle Swarm Optimization Variant with Modified Neighborhood Structure. In Proceedings of the 2022 International Conference on Artificial Life and Robotics (ICAROB2022), Oita, Japan, 20–23 January 2022.
25. Wu, D.; Jiang, N.; Du, W.; Tang, K.; Cao, X. Particle swarm optimization with moving particles on scale-free networks. IEEE Trans. Netw. Sci. Eng. 2018, 7, 497–506.
26. Xu, Y.; Pi, D. A reinforcement learning-based communication topology in particle swarm optimization. Neural Comput. Appl. 2020, 32, 10007–10032.
27. Chen, K.; Xue, B.; Zhang, M.; Zhou, F. Novel chaotic grouping particle swarm optimization with a dynamic regrouping strategy for solving numerical optimization tasks. Knowl. Based Syst. 2020, 194, 105568.
28. Roshanzamir, M.; Balafar, M.A.; Razavi, S.N. A new hierarchical multi group particle swarm optimization with different task allocations inspired by holonic multi agent systems. Expert Syst. Appl. 2020, 149, 113292.
29. Yang, Q.; Chen, W.-N.; Da Deng, J.; Li, Y.; Gu, T.; Zhang, J. A level-based learning swarm optimizer for large-scale optimization. IEEE Trans. Evol. Comput. 2017, 22, 578–594.
30. Li, W.; Meng, X.; Huang, Y.; Fu, Z.-H. Multipopulation cooperative particle swarm optimization with a mixed mutation strategy. Inf. Sci. 2020, 529, 179–196.
31. Liu, H.; Zhang, X.-W.; Tu, L.-P. A modified particle swarm optimization using adaptive strategy. Expert Syst. Appl. 2020, 152, 113353.
32. Xia, X.; Gui, L.; He, G.; Wei, B.; Zhang, Y.; Yu, F.; Wu, H.; Zhan, Z.-H. An expanded particle swarm optimization based on multi-exemplar and forgetting ability. Inf. Sci. 2020, 508, 105–120.
33. Xu, G.; Cui, Q.; Shi, X.; Ge, H.; Zhan, Z.-H.; Lee, H.P.; Liang, Y.; Tai, R.; Wu, C. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51.
34. Wang, C.; Song, W. A modified particle swarm optimization algorithm based on velocity updating mechanism. Ain Shams Eng. J. 2019, 10, 847–866.
35. Karim, A.A.; Isa, N.A.M.; Lim, W.H. Modified particle swarm optimization with effective guides. IEEE Access 2020, 8, 188699–188725.
36. Karim, A.A.; Isa, N.A.M.; Lim, W.H. Hovering Swarm Particle Swarm Optimization. IEEE Access 2021, 9, 115719–115749.
37. Wei, B.; Zhang, W.; Xia, X.; Zhang, Y.; Yu, F.; Zhu, Z. Efficient feature selection algorithm based on particle swarm optimization with learning memory. IEEE Access 2019, 7, 166066–166078.
38. Şenel, F.A.; Gökçe, F.; Yüksel, A.S.; Yiğit, T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2019, 35, 1359–1373.
39. Zhang, M.; Long, D.; Qin, T.; Yang, J. A chaotic hybrid butterfly optimization algorithm with particle swarm optimization for high-dimensional optimization problems. Symmetry 2020, 12, 1800.
40. Ang, K.M.; Juhari, M.R.M.; Lim, W.H.; Tiang, S.S.; Ang, C.K.; Hussin, E.E.; Pan, L.; Chong, T.H. New Hybridization Algorithm of Differential Evolution and Particle Swarm Optimization for Efficient Feature Selection. In Proceedings of the 2022 International Conference on Artificial Life and Robotics (ICAROB2022), Oita, Japan, 20–23 January 2022; Volume 27, p. 5.
41. Grosan, C.; Abraham, A. Hybrid evolutionary algorithms: Methodologies, architectures, and reviews. In Hybrid Evolutionary Algorithms; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1–17.
42. Abdolrasol, M.G.; Hussain, S.S.; Ustun, T.S.; Sarker, M.R.; Hannan, M.A.; Mohamed, R.; Ali, J.A.; Mekhilef, S.; Milad, A. Artificial neural networks based optimization techniques: A review. Electronics 2021, 10, 2689.
43. Carvalho, M.; Ludermir, T.B. Particle swarm optimization of neural network architectures and weights. In Proceedings of the 7th International Conference on Hybrid Intelligent Systems (HIS 2007), Kaiserslautern, Germany, 17–19 September 2007; pp. 336–339.
44. Mirjalili, S.; Hashim, S.Z.M.; Sardroudi, H.M. Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Appl. Math. Comput. 2012, 218, 11125–11137.
45. Yaghini, M.; Khoshraftar, M.M.; Fallahi, M. A hybrid algorithm for artificial neural network training. Eng. Appl. Artif. Intell. 2013, 26, 293–301.
46. Kandasamy, T.; Rajendran, R. Hybrid algorithm with variants for feed forward neural network. Int. Arab J. Inf. Technol. 2018, 15, 240–245.
47. Xue, Y.; Tang, T.; Liu, A.X. Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization. IEEE Access 2019, 7, 52473–52483.
48. Kumar, M.; Mishra, S.K.; Joseph, J.; Jangir, S.K.; Goyal, D. Adaptive comprehensive particle swarm optimisation-based functional-link neural network filtre model for denoising ultrasound images. IET Image Process. 2021, 15, 1232–1246.
49. Hayder, G.; Solihin, M.I.; Mustafa, H.M. Modelling of river flow using particle swarm optimized cascade-forward neural networks: A case study of Kelantan River in Malaysia. Appl. Sci. 2020, 10, 8670.
50. Davar, S.; Nobahar, M.; Khan, M.S.; Amini, F. The Development of PSO-ANN and BOA-ANN Models for Predicting Matric Suction in Expansive Clay Soil. Mathematics 2022, 10, 2825.
51. Melanie, M. An Introduction to Genetic Algorithms; Massachusetts Institute of Technology: Cambridge, MA, USA, 1996; p. 158.
52. Chandwani, V.; Agrawal, V.; Nagar, R. Modeling slump of ready mix concrete using genetic algorithms assisted training of Artificial Neural Networks. Expert Syst. Appl. 2015, 42, 885–893.
53. Huang, H.-X.; Li, J.-C.; Xiao, C.-L. A proposed iteration optimization approach integrating backpropagation neural network with genetic algorithm. Expert Syst. Appl. 2015, 42, 146–155.
54. Bagheri, M.; Mirbagheri, S.A.; Bagheri, Z.; Kamarkhani, A.M. Modeling and optimization of activated sludge bulking for a real wastewater treatment plant using hybrid artificial neural networks-genetic algorithm approach. Process Saf. Environ. Prot. 2015, 95, 12–25.
55. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
56. Uzlu, E.; Kankal, M.; Akpınar, A.; Dede, T. Estimates of energy consumption in Turkey using neural networks with the teaching–learning-based optimization algorithm. Energy 2014, 75, 295–303.
57. Li, K.; Xie, X.; Xue, W.; Dai, X.; Chen, X.; Yang, X. A hybrid teaching-learning artificial neural network for building electrical energy consumption prediction. Energy Build. 2018, 174, 323–334.
58. Benali, A.; Hachama, M.; Bounif, A.; Nechnech, A.; Karray, M. A TLBO-optimized artificial neural network for modeling axial capacity of pile foundations. Eng. Comput. 2021, 37, 675–684.
59. Ang, K.M.; Lim, W.H.; Tiang, S.S.; Ang, C.K.; Natarajan, E.; Ahamed Khan, M. Optimal Training of Feedforward Neural Networks Using Teaching-Learning-Based Optimization with Modified Learning Phases. In Proceedings of the 12th National Technical Seminar on Unmanned System Technology 2020; Springer: Singapore, 2022; pp. 867–887.
60. Chong, O.T.; Lim, W.H.; Isa, N.A.M.; Ang, K.M.; Tiang, S.S.; Ang, C.K. A Teaching-Learning-Based Optimization with Modified Learning Phases for Continuous Optimization. In Science and Information Conference; Springer: Cham, Switzerland, 2020; pp. 103–124.
61. Lin, G.; Shen, W. Research on convolutional neural network based on improved Relu piecewise activation function. Procedia Comput. Sci. 2018, 131, 977–984.
62. Gao, W.; Liu, S.; Huang, L. A global best artificial bee colony algorithm for global optimization. J. Comput. Appl. Math. 2012, 236, 2741–2753.
63. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; pp. 695–701.
64. Dong, N.; Wu, C.-H.; Ip, W.-H.; Chen, Z.-Q.; Chan, C.-Y.; Yung, K.-L. An opposition-based chaotic GA/PSO hybrid algorithm and its application in circle detection. Comput. Math. Appl. 2012, 64, 1886–1902.
65. Vrugt, J.A.; Robinson, B.A.; Hyman, J.M. Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Trans. Evol. Comput. 2008, 13, 243–259.
66. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report 201311; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013.
67. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
68. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 2009, 15, 617–644.
69. Cheng, R.; Jin, Y. A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 2014, 45, 191–204.
70. Yang, X.-S.; Deb, S.; Fong, S. Accelerated particle swarm optimization and support vector machine for business optimization and applications. In International Conference on Networked Digital Technologies; Springer: Berlin/Heidelberg, Germany, 2011; pp. 53–66.
71. Lichman, M. UCI Machine Learning Repository. 2013. Available online: https://archive.ics.uci.edu/ml/index.php (accessed on 3 June 2022).
72. Zhang, P.; Jiang, W.; Shi, X.; Zhang, S. Remaining Useful Life Prediction of Gear Pump Based on Deep Sparse Autoencoders and Multilayer Bidirectional Long and Short Term Memory Network. Processes 2022, 10, 2500.
73. Wang, P.; Liu, H.; Hou, M.; Zheng, L.; Yang, Y.; Geng, J.; Song, W.; Shao, Z. Estimating the Remaining Useful Life of Proton Exchange Membrane Fuel Cells under Variable Loading Conditions Online. Processes 2021, 9, 1459.
74. Benbouzid, M.; Berghout, T.; Sarma, N.; Djurović, S.; Wu, Y.; Ma, X. Intelligent Condition Monitoring of Wind Power Systems: State of the Art Review. Energies 2021, 14, 5967.
Figure 1. The network architecture of an ANN model with three-layer structure.
Figure 2. Graphical illustration of sorting all particles stored in each subswarm based on their personal best fitness values.
Table 1. Strengths and limitations of selected PSO variants.
Reference | Year | Strengths | Limitations
[20] | 2019
  • Acceleration coefficients and inertia weight of each particle were adjusted adaptively based on current search environment.
  • High computation costs incurred by the novel method used to estimate the search environment in each iteration.
[21] | 2019
  • Inertia weights of particles were adjusted based on their personal best fitness.
  • Mutation scheme was performed on stagnated particles to preserve swarm diversity.
  • Both adaptive inertia weight and mutation scheme were highly relying on the directional information of global best position.
[22] | 2019
  • Periodic trigonometric functions were used to adjust the inertia weight and acceleration coefficient of particle.
  • Limited ability to adjust exploration and exploitation searches with single search operator.
  • Search process only relied on personal and global best positions.
[23] | 2021
  • Gaussian white noise with different intensity was used to adjust the acceleration coefficients of particles adaptively.
  • Wider exploration search.
  • Limited ability to adjust exploration and exploitation searches with single search operator.
  • Search process only relied on personal and global best positions.
[24] | 2022
  • Neighborhood structure of each particle was gradually increased from ring topology to fully connected topology.
  • Limited flexibility to regulate exploration and exploitation searches of algorithm because the swarm diversity level is increased monotonically.
[19] | 2018
  • Neighborhood structure of each particle can be adaptively maintained, decreased, increased or shuffled by referring to the search track record of population.
  • Expensive computation cost used to adaptively adjust the neighborhood structure of each particle.
  • Laborious works to fine tune the newly introduced parameters.
[25] | 2018
  • Flexible neighborhood structure concept was introduced to achieve proper tradeoff between exploration and exploitation searches.
  • Expensive computation cost used to adaptively adjust the neighborhood structure of each particle.
  • Relatively poor performances when dealing with unimodal problems.
[26] | 2020
  • A reinforcement learning concept (i.e., Q-learning) was used to select the optimal topology of particle.
  • Expensive computation cost due to the involvement of Q-learning and computation of swarm diversity.
  • Search process only relied on personal and global best positions.
[27] | 2020
  • Multiple good quality subswarms were constructed based on correlations between group sequences.
  • A dynamic regrouping strategy was introduced to promote information sharing between different subswarms and accelerate their convergence speed.
  • Overemphasized on the influences of historically best positions to guide the search process of subswarms.
  • Complex population division scheme.
  • Laborious works to fine tune the newly introduced parameters.
[28] | 2020
  • Benefits of holonic organization in multiagent system were leveraged to achieve proper tradeoff between exploration and exploitation searches.
  • Relatively slow convergence speed when locating the global optima of unimodal problems.
[29] | 2017
  • Multiple subswarms were constructed based on the fitness levels of particles.
  • Directional information of non-fittest particles was used to guide the search process.
  • Relatively poor performances when dealing with low and medium scale optimization problems.
[30] | 2020
  • Exemplar used to guide the general swarm was derived from the mean positions of elitist swarm.
  • Limited enhancement of swarm diversity because mean position used to guide the population search was shared by all particles.
[31] | 2020
  • Positions of other particles were updated using mainstream and stochastic learning strategies.
  • Global worst position was handled using terminal replacement mechanism.
  • Limited enhancement of swarm diversity because mean position used to guide the population search was shared by all particles.
[32] | 2020
  • Forgetting ability was introduced to maintain the population diversity of algorithm.
  • Neglected the potential benefits brought by non-fittest particles to guide the search process.
[33] | 2019
  • Dimensional learning strategy and comprehensive learning strategy were introduced to achieve proper tradeoff between exploration and exploitation searches.
  • High fitness evaluation numbers were consumed by dimensional learning strategy when generating the exemplar.
[34] | 2019
  • Oppositional-based learning and convex combination concepts were used to generate exemplar for the fittest particle.
  • Strong dependency on historically best positions used to guide the non-fittest particles.
[35] | 2020
  • Optimal guide creation module was designed to generate a global exemplar based on two nearest neighbors of global best position.
  • Expensive computation cost due to the construction of global exemplar in every iteration.
[36] | 2021
  • Construction of main swarm and hover swarm as diversity maintenance scheme.
  • Construction of unique exemplar for main swarm and hover swarm,
  • Expensive computation cost due to the binary population division scheme and construction of unique exemplar in every iteration.
[37] | 2019
  • Crossover and mutation of genetic algorithm were used to enhance the exploitation and exploration searches of PSO, respectively.
  • Huge memory consumption to store the individual solutions that offered significant performance gains.
[38] | 2019
  • Grey wolf optimizer was used to update the positions of some particles to enhance exploration.
  • Increasing execution time due to sequential cascading of grey wolf optimizer and PSO.
[39] | 2020
  • Butterfly optimization algorithm was hybridized with PSO to improve the exploration ability.
  • Neglected the potential benefits brought by non-fittest particles to guide the search process.
[40] | 2022
  • Differential evolution was hybridized with PSO to achieve better balancing of exploration and exploitation searches of algorithm
  • Increasing execution time due to sequential cascading of differential evolution and PSO.
Table 2. The mathematical formulation of the considered activation functions.
Activation Function | Mathematical Formulation
Binary Step | Φ(X) = 0 for X < 0; 1 for X ≥ 0
Sigmoid | Φ(X) = 1 / (1 + e^(−X))
Hyperbolic Tangent (Tanh) | Φ(X) = (e^X − e^(−X)) / (e^X + e^(−X))
Inverse Tangent (ATan) | Φ(X) = tan^(−1)(X)
Rectified Linear Unit (ReLU) | Φ(X) = X for X ≥ 0; 0 for X < 0
Table 3. CEC 2014 benchmark functions and the fitness values of their theoretical global optima.
Category | No. | Function Name | f(X*)
Unimodal | F1 | Rotated High Conditioned Elliptic Function | 100
Unimodal | F2 | Rotated Bent Cigar Function | 200
Unimodal | F3 | Rotated Discus Function | 300
Simple Multimodal | F4 | Shifted and Rotated Rosenbrock’s Function | 400
Simple Multimodal | F5 | Shifted and Rotated Ackley’s Function | 500
Simple Multimodal | F6 | Shifted and Rotated Weierstrass Function | 600
Simple Multimodal | F7 | Shifted and Rotated Griewank’s Function | 700
Simple Multimodal | F8 | Shifted Rastrigin’s Function | 800
Simple Multimodal | F9 | Shifted and Rotated Rastrigin’s Function | 900
Simple Multimodal | F10 | Shifted Schwefel’s Function | 1000
Simple Multimodal | F11 | Shifted and Rotated Schwefel’s Function | 1100
Simple Multimodal | F12 | Shifted and Rotated Katsuura Function | 1200
Simple Multimodal | F13 | Shifted and Rotated HappyCat Function | 1300
Simple Multimodal | F14 | Shifted and Rotated HGBat Function | 1400
Simple Multimodal | F15 | Shifted and Rotated Expanded Griewank’s plus Rosenbrock’s Function | 1500
Simple Multimodal | F16 | Shifted and Rotated Expanded Schaffer’s F6 Function | 1600
Hybrid | F17 | Hybrid Function 1 | 1700
Hybrid | F18 | Hybrid Function 2 | 1800
Hybrid | F19 | Hybrid Function 3 | 1900
Hybrid | F20 | Hybrid Function 4 | 2000
Hybrid | F21 | Hybrid Function 5 | 2100
Hybrid | F22 | Hybrid Function 6 | 2200
Composition | F23 | Composition Function 1 | 2300
Composition | F24 | Composition Function 2 | 2400
Composition | F25 | Composition Function 3 | 2500
Composition | F26 | Composition Function 4 | 2600
Composition | F27 | Composition Function 5 | 2700
Composition | F28 | Composition Function 6 | 2800
Composition | F29 | Composition Function 7 | 2900
Composition | F30 | Composition Function 8 | 3000
Table 4. Parameter settings of all compared algorithms.
Algorithms | Parameter Settings
PSO | Inertia weight ω: 0.9 → 0.2; acceleration coefficients c1 = c2 = 2.05
PSOWV | c1 = 1.00, c2 = 1.70
MPSOWV | Subpopulation size N_s^sub = 10, where s = 1, …, 10; acceleration coefficients c1 = c2 = c3 = 4.1/3
CSO | Parameter controlling the influence of the mean position: φ ∈ [0, 0.3]
SLPSO | Exponential component to adjust the learning probability: α̃ = 0.5; parameter to control the social influence factor: β̃ = 0.01
PSOGSA | Initial gravitational constant G0 = 1; descending coefficient of the gravitational constant: α̂ = 20
APSO | Additional acceleration coefficient A = 0.3; ω = 1; c1 = c2 = 2.05
MSPSOTLP | N_s^sub = 10, where s = 1, …, 10; c1 = c2 = c3 = 4.1/3
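To make the roles of these parameters concrete, the sketch below implements one canonical PSO velocity and position update using the settings listed for PSO in Table 4 (inertia weight decreasing from 0.9 to 0.2, c1 = c2 = 2.05); it is an illustration of the parameters only, not the authors’ MSPSOTLP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest, gbest, t, t_max,
             w_start=0.9, w_end=0.2, c1=2.05, c2=2.05):
    """One canonical PSO update with an inertia weight decreasing from w_start to w_end."""
    w = w_start - (w_start - w_end) * t / t_max
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# Toy usage: 5 particles in 2 dimensions on the sphere function.
pos = rng.uniform(-5.0, 5.0, (5, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
gbest = pos[np.argmin((pos ** 2).sum(axis=1))]
pos, vel = pso_step(pos, vel, pbest, gbest, t=10, t_max=100)
print(pos)
```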
Table 5. Fmean and SD values produced by the proposed MSPSOTLP and other PSO variants in solving CEC 2014 benchmark functions.
Func. | Criteria | MSPSOTLP | PSO | PSOWV | CSO | MPSOWV | SLPSO | PSOGSA | APSO
F1Fmean5.61 × 1036.69 × 1064.70 × 1083.12 × 1051.61 × 1084.88 × 1052.47 × 1051.25 × 108
SD2.57 × 1038.99 × 1063.79 × 1081.32 × 1057.38 × 1072.86 × 1057.24 × 1042.02 × 107
F2Fmean0.00 × 1007.27 × 1016.94 × 10106.84 × 1032.12 × 10101.57 × 1041.25 × 1043.01 × 107
SD0.00 × 1002.90 × 1021.56 × 10105.63 × 1032.74 × 1091.10 × 1046.97 × 1035.81 × 107
F3Fmean2.18 × 10−114.87 × 1013.18 × 1057.85 × 1038.20 × 1047.26 × 1036.93 × 1031.97 × 105
SD2.52 × 10−116.61 × 1015.70 × 1045.62 × 1032.44 × 1045.50 × 1031.33 × 1033.56 × 104
F4Fmean1.44 × 10−21.79 × 1028.66 × 1035.71 × 1011.39 × 1032.56 × 1014.49 × 1012.71 × 102
SD1.23 × 10−25.12 × 1013.26 × 1032.31 × 1013.56 × 1023.04 × 1012.91 × 1013.29 × 101
F5Fmean2.01 × 1012.09 × 1012.09 × 1012.10 × 1012.09 × 1012.09 × 1012.00 × 1012.00 × 101
SD5.32 × 10−28.52 × 10−29.48 × 10−23.65 × 10−23.01 × 10−26.49 × 10−22.99 × 10−43.26 × 10−2
F6Fmean3.25 × 1002.12 × 1013.83 × 1014.08 × 10−13.38 × 1015.43 × 10−11.80 × 1012.45 × 101
SD1.02 × 1005.40 × 1002.41 × 1006.73 × 10−19.46 × 10−16.38 × 10−15.22 × 1003.50 × 100
F7Fmean0.00 × 1002.62 × 10−25.61 × 1021.48 × 10−31.62 × 1021.97 × 10−31.23 × 10−21.25 × 100
SD0.00 × 1002.21 × 10−22.05 × 1023.12 × 10−31.44 × 1014.41 × 10−37.38 × 10−36.16 × 10−1
F8Fmean1.15 × 1011.15 × 1023.87 × 1028.66 × 1002.68 × 1021.37 × 1011.70 × 1021.37 × 102
SD1.57 × 1002.51 × 1013.23 × 1011.88 × 1002.21 × 1013.25 × 1009.24 × 1002.75 × 101
F9Fmean1.20 × 1011.39 × 1024.85 × 1029.65 × 1003.29 × 1021.65 × 1011.91 × 1021.98 × 102
SD3.16 × 1001.42 × 1013.67 × 1013.78 × 1001.25 × 1015.52 × 1007.06 × 1002.18 × 101
F10Fmean1.52 × 1022.63 × 1037.66 × 1031.78 × 1026.22 × 1035.47 × 1023.89 × 1033.71 × 103
SD7.91 × 1014.52 × 1024.67 × 1021.36 × 1022.45 × 1022.60 × 1023.20 × 1025.04 × 102
F11Fmean2.42 × 1033.76 × 1037.56 × 1032.85 × 1027.24 × 1037.21 × 1024.36 × 1034.16 × 103
SD3.73 × 1026.08 × 1024.57 × 1022.28 × 1024.44 × 1022.29 × 1021.82 × 1028.01 × 102
F12Fmean1.62 × 1001.74 × 1002.70 × 1002.39 × 1002.65 × 1002.50 × 1002.07 × 10−15.61 × 10−1
SD2.07 × 10−13.84 × 10−11.22 × 10−13.10 × 10−11.63 × 10−13.68 × 10−11.66 × 10−12.68 × 10−1
F13Fmean1.29 × 10−14.82 × 10−16.12 × 1001.34 × 10−13.09 × 1001.97 × 10−15.62 × 10−15.92 × 10−1
SD7.49 × 10−31.19 × 10−15.58 × 10−11.61 × 10−22.33 × 10−12.41 × 10−27.52 × 10−21.17 × 10−1
F14Fmean1.93 × 10−12.90 × 10−11.46 × 1023.94 × 10−14.48 × 1014.42 × 10−12.82 × 10−12.57 × 10−1
SD8.61 × 10−39.55 × 10−26.54 × 1014.61 × 10−27.08 × 1007.32 × 10−22.23 × 10−25.35 × 10−2
F15Fmean2.30 × 1001.10 × 1003.58 × 1063.14 × 1009.90 × 1045.07 × 1008.15 × 1003.16 × 101
SD2.08 × 10−13.82 × 1007.86 × 1054.29 × 10−12.26 × 1044.93 × 1004.94 × 1002.06 × 101
F16Fmean7.60 × 1001.16 × 1011.34 × 1011.08 × 1011.30 × 1011.21 × 1011.25 × 1011.28 × 101
SD3.86 × 10−15.88 × 10−12.05 × 10−14.73 × 10−11.38 × 10−11.79 × 10−12.12 × 10−16.66 × 10−1
F17Fmean2.10 × 1032.86 × 1051.77 × 1071.49 × 1056.01 × 1061.05 × 1051.86 × 1042.05 × 106
SD5.36 × 1023.25 × 1051.08 × 1076.66 × 1041.12 × 1065.42 × 1045.76 × 1032.37 × 106
F18Fmean1.22 × 1023.79 × 1035.88 × 1082.17 × 1032.58 × 1088.42 × 1023.84 × 1021.41 × 105
SD6.67 × 1014.62 × 1032.87 × 1083.17 × 1038.05 × 1074.97 × 1021.85 × 1023.12 × 105
F19Fmean4.08 × 1001.24 × 1013.64 × 1025.30 × 1001.41 × 1026.13 × 1001.59 × 1012.93 × 101
SD2.68 × 10−13.21 × 1001.51 × 1026.33 × 10−12.38 × 1015.50 × 10−12.89 × 1002.74 × 101
F20Fmean7.96 × 1013.74 × 1022.78 × 1051.26 × 1042.90 × 1042.48 × 1042.84 × 1031.02 × 105
SD8.58 × 1002.38 × 1021.88 × 1059.04 × 1039.09 × 1031.36 × 1042.73 × 1024.65 × 104
F21Fmean8.02 × 1026.12 × 1046.03 × 1066.92 × 1041.56 × 1068.18 × 1041.18 × 1041.16 × 106
SD2.07 × 1026.19 × 1046.96 × 1063.80 × 1045.59 × 1053.84 × 1041.29 × 1039.69 × 105
F22Fmean3.52 × 1016.77 × 1021.08 × 1031.28 × 1029.76 × 1021.51 × 1027.94 × 1026.23 × 102
SD1.30 × 1012.68 × 1022.42 × 1025.43 × 1011.96 × 1021.32 × 1011.59 × 1021.73 × 102
F23Fmean3.15 × 1023.16 × 1023.15 × 1023.15 × 1024.17 × 1023.15 × 1022.00 × 1023.62 × 102
SD3.20 × 10−133.52 × 10−18.04 × 1019.16 × 10−121.26 × 1014.99 × 10−131.06 × 10−71.51 × 101
F24Fmean2.24 × 1022.30 × 1024.66 × 1022.26 × 1023.20 × 1022.33 × 1022.01 × 1022.50 × 102
SD1.50 × 10−16.64 × 1003.90 × 1014.74 × 1001.13 × 1017.32 × 1001.09 × 10−12.84 × 100
F25Fmean2.00 × 1022.13 × 1022.59 × 1022.06 × 1022.25 × 1022.06 × 1022.00 × 1022.22 × 102
SD0.00 × 1004.81 × 1002.21 × 1011.55 × 1005.87 × 1002.72 × 1008.94 × 10−102.99 × 100
F26Fmean1.00 × 1021.70 × 1021.06 × 1021.00 × 1021.03 × 1021.40 × 1021.00 × 1021.83 × 102
SD2.32 × 10−24.81 × 1011.57 × 1001.83 × 10−25.44 × 10−15.47 × 1011.35 × 10−14.59 × 101
F27Fmean3.94 × 1028.49 × 1021.28 × 1033.55 × 1021.14 × 1033.69 × 1022.00 × 1028.02 × 102
SD2.91 × 1013.58 × 1023.92 × 1015.77 × 1016.81 × 1012.77 × 1015.10 × 10−92.03 × 102
F28Fmean8.55 × 1024.04 × 1031.82 × 1038.57 × 1021.36 × 1039.77 × 1022.00 × 1022.50 × 103
SD1.80 × 1019.59 × 1025.12 × 1024.42 × 1013.69 × 1021.07 × 1022.07 × 10−85.80 × 102
F29Fmean8.60 × 1021.43 × 1031.04 × 1071.63 × 1032.15 × 1061.55 × 1032.02 × 1021.23 × 107
SD5.36 × 1017.53 × 1029.14 × 1064.77 × 1024.02 × 1064.57 × 1022.00 × 10−11.88 × 107
F30Fmean1.72 × 1033.56 × 1033.79 × 1052.69 × 1034.04 × 1043.17 × 1032.00 × 1029.88 × 104
SD5.39 × 1021.93 × 1033.39 × 1051.06 × 1038.14 × 1033.17 × 1033.62 × 10−33.25 × 104
#BMF1800500101
w/t/l-29/1/030/0/022/3/530/0/026/1/319/2/928/0/2
Table 6. Pairwise comparisons by Wilcoxon signed rank test between MSPSOTLP and each peer algorithm.
MSPSOTLP vs. | R+ | R− | p Value | h Value
PSO | 465.0 | 0.0 | 2.00 × 10^−6 | +
PSOWV | 465.0 | 0.0 | 2.00 × 10^−6 | +
CSO | 382.5 | 82.5 | 1.91 × 10^−3 | +
MPSOWV | 465.0 | 0.0 | 2.00 × 10^−6 | +
SLPSO | 390.0 | 45.0 | 1.76 × 10^−4 | +
PSOGSA | 347.5 | 117.5 | 1.75 × 10^−2 | +
APSO | 459.0 | 6.0 | 3.00 × 10^−6 | +
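For readers who wish to reproduce this kind of pairwise analysis, the Wilcoxon signed-rank test is available in SciPy, as sketched below with placeholder numbers (not the Fmean values of Table 5); the R+ and R− sums reported in Table 6 are derived from the signed ranks rather than returned directly by this call.

```python
from scipy.stats import wilcoxon

# Paired Fmean values of two algorithms on the same benchmark functions (toy numbers).
alg_a = [5.6e3, 1.2e-2, 20.1, 3.3, 12.0]
alg_b = [6.7e6, 1.8e2, 20.9, 21.2, 139.0]

stat, p_value = wilcoxon(alg_a, alg_b, alternative="two-sided")
print(stat, p_value)  # the null hypothesis is rejected at the 5% level when p_value < 0.05
```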
Table 7. Average ranking and p-value produced by Friedman test.
Algorithm | Ranking | Chi-Square Statistics | p Value
MSPSOTLP | 1.6833 | 144.636111 | 0.00 × 10^0
PSOGSA | 3.0667
CSO | 3.0500
PSO | 4.4167
SLPSO | 3.7667
APSO | 5.6500
MPSOWV | 6.6500
PSOWV | 7.7167
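A Friedman test of this kind can be reproduced with SciPy as sketched below; each list holds one algorithm’s results over the same set of problems, and the values here are toy numbers rather than the data behind Table 7.

```python
from scipy.stats import friedmanchisquare

# One list of Fmean values per algorithm, measured over the same problems (toy numbers).
alg_a = [1.2, 0.8, 3.4, 2.2, 5.0, 0.9]
alg_b = [1.9, 1.1, 3.9, 2.8, 5.6, 1.4]
alg_c = [2.5, 1.6, 4.4, 3.1, 6.2, 2.0]

chi_sq, p_value = friedmanchisquare(alg_a, alg_b, alg_c)
print(chi_sq, p_value)
```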
Table 8. Adjusted p values produced by each algorithm through three post-hoc analysis procedures.
MSPSOTLP vs. | z | Unadjusted p | Bonferroni-Dunn p | Holm p | Hochberg p
PSOWV | 9.54 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
MPSOWV | 7.85 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
APSO | 6.27 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
SLPSO | 4.32 × 10^0 | 1.50 × 10^−5 | 1.08 × 10^−4 | 6.20 × 10^−3 | 6.20 × 10^−5
PSO | 3.29 × 10^0 | 9.88 × 10^−4 | 6.91 × 10^−3 | 2.96 × 10^−3 | 2.96 × 10^−3
CSO | 2.19 × 10^0 | 2.87 × 10^−2 | 2.01 × 10^−1 | 5.75 × 10^−2 | 3.07 × 10^−2
PSOGSA | 2.16 × 10^0 | 3.07 × 10^−2 | 2.15 × 10^−1 | 5.75 × 10^−2 | 3.07 × 10^−2
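Of the three post-hoc procedures, Bonferroni–Dunn simply multiplies each unadjusted p by the number of comparisons, while Holm and Hochberg are its step-down and step-up refinements. A generic sketch of the Holm adjustment is given below; it is a textbook procedure, not code taken from the paper.

```python
def holm_adjust(p_values):
    """Holm step-down adjustment; returns adjusted p-values in the input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p_values[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Toy unadjusted p-values for four pairwise comparisons.
print(holm_adjust([0.001, 0.020, 0.040, 0.300]))  # -> [0.004, 0.06, 0.08, 0.3]
```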
Table 9. Simulation results of Fmean and ΔP obtained by PSOWV and all MSPSOTLP variants when solving CEC 2014 benchmark functions.
Func. | Fmean (ΔP) of: PSOWV | MSPSOTLP-1 | MSPSOTLP-2 | MSPSOTLP-3 | MSPSOTLP
F1 4.70 × 10 8 (−) 8.08 × 10 8   (−71.93%) 8.42 × 10 6 (98.21%) 1.37 × 10 8 (70.83%) 5.61 × 10 3 (100.00%)
F2 6.94 × 10 10   (−) 1.47 × 10 10 (78.79%) 2.70 × 10 7 (99.96%) 4.95 × 10 9 (92.87%) 0.00 × 10 0 (100.00%)
F3 3.18 × 10 5 (−) 8.56 × 10 4 (73.09%) 1.10 × 10 4 (96.53%) 4.09 × 10 4 (87.13%) 2.18 × 10 −11 (100.00%)
F4 8.66 × 10 3 (−) 6.80 × 10 3 (21.52%) 1.48 × 10 2 (98.29%) 2.43 × 10 2 (97.20%) 1.44 × 10 −2 (100.00%)
F5 2.09 × 10 1 (−) 2.10 × 10 1 (−0.60%) 2.09 × 10 1 (0.04%) 2.08 × 10 1 (0.43%) 2.01 × 10 1 (3.83%)
F6 3.83 × 10 1 (−) 4.34 × 10 1 (−13.26%) 2.45 × 10 1 (35.92%) 2.78 × 10 1 (27.39%) 3.25 × 10 0 (91.51%)
F7 5.61 × 10 2 (−) 2.23 × 10 2 (60.17%) 1.19 × 10 0 (99.79%) 3.68 × 10 0 (99.34%) 0.00 × 10 0 (100.00%)
F8 3.87 × 10 2 (−) 2.49 × 10 2 (35.67%) 4.13 × 10 1 (89.34%) 1.53 × 10 2 (60.36%) 1.15 × 10 1 (97.03%)
F9 4.85 × 10 2 (−) 3.16 × 10 2 (34.90%) 9.96 × 10 1 (79.47%) 1.67 × 10 2 (65.50%) 1.20 × 10 1 (97.53%)
F10 7.66 × 10 3 (−) 6.85 × 10 3 (10.55%) 2.12 × 10 3 (72.36%) 4.47 × 10 3 (41.70%) 1.52 × 10 2 (98.02%)
F11 7.56 × 10 3 (−) 8.54 × 10 3 (−12.92%) 3.60 × 10 3 (52.35%) 4.82 × 10 3 (36.24%) 2.42 × 10 3 (67.99%)
F12 2.70 × 10 0 (−) 3.61 × 10 0 (−33.67%) 2.15 × 10 0 (20.26%) 2.15 × 10 0 (20.48%) 1.62 × 10 0 (40.00%)
F13 6.12 × 10 0 (−) 5.14 × 10 0 (16.06%) 2.64 × 10 −1 (95.68%) 7.40 × 10 −1 (87.91%) 1.29 × 10 −1 (97.89%)
F14 1.46 × 10 2 (−) 1.40 × 10 2 (3.96%) 9.63 × 10 −1 (99.34%) 2.98 × 10 −1 (99.80%) 1.93 × 10 −1 (99.87%)
F15 3.58 × 10 6 (−) 3.64 × 10 4 (98.98%) 1.89 × 10 1 (100.00%) 9.12 × 10 1 (100.00%) 2.30 × 10 0 (100.00%)
F16 1.34 × 10 1 (−) 1.37 × 10 1 (−2.00%) 1.31 × 10 1 (2.56%) 1.36 × 10 1 (−1.63%) 7.60 × 10 0 (43.28%)
F17 1.77 × 10 7 (−) 5.56 × 10 7 (−100.00%) 8.27 × 10 5 (95.33%) 6.08 × 10 5 (95.56%) 2.10 × 10 3 (99.99%)
F18 5.88 × 10 8 (−) 4.07 × 10 8 (30.77%) 2.59 × 10 4 (100.00%) 7.51 × 10 5 (99.87%) 1.22 × 10 2 (100.00%)
F19 3.64 × 10 2 (−) 1.44 × 10 2 (60.49%) 1.06 × 10 1 (97.09%) 2.26 × 10 1 (93.80%) 4.08 × 10 0 (98.88%)
F20 2.78 × 10 5 (−) 2.54 × 10 5 (8.77%) 1.98 × 10 4 (92.88%) 5.06 × 10 3 (98.18%) 7.96 × 10 1 (99.97%)
F21 6.03 × 10 6 (−) 1.22 × 10 7 (−100.00%) 1.21 × 10 5 (97.99%) 7.86 × 10 5 (86.97%) 8.02 × 10 2 (99.99%)
F22 1.08 × 10 3 (−) 8.96 × 10 2 (17.07%) 3.15 × 10 2 (70.81%) 4.25 × 10 2 (60.65%) 3.52 × 10 1 (96.74%)
F23 6.74 × 10 2 (−) 2.00 × 10 2 (70.33%) 3.15 × 10 2 (53.21%) 2.00 × 10 2 (70.33%) 3.15 × 10 2 (53.26%)
F24 4.66 × 10 2 (−) 2.00 × 10 2 (57.08%) 2.12 × 10 2 (54.45%) 2.00 × 10 2 (57.08%) 2.24 × 10 2 (51.93%)
F25 2.59 × 10 2 (−) 2.00 × 10 2 (22.78%) 2.07 × 10 2 (19.92%) 2.00 × 10 2 (22.78%) 2.00 × 10 2 (22.78%)
F26 1.06 × 10 2 (−) 1.04 × 10 2 (1.80%) 1.01 × 10 2 (5.19%) 1.01 × 10 2 (5.13%) 1.00 × 10 2 (5.66%)
F27 1.28 × 10 3 (−) 2.00 × 10 2 (84.38%) 7.58 × 10 2 (40.76%) 2.00 × 10 2 (84.38%) 3.94 × 10 2 (69.22%)
F28 1.82 × 10 3 (−) 2.00 × 10 2 (89.01%) 8.98 × 10 2 (50.65%) 2.00 × 10 2 (89.01%) 8.55 × 10 2 (53.02%)
F29 1.04 × 10 7 (−) 3.21 × 10 7 (−100.00%) 1.47 × 10 3 (99.99%) 5.95 × 10 4 (99.43%) 8.60 × 10 2 (99.99%)
F30 3.79 × 10 5 (−) 8.38 × 10 5 (−100.00%) 2.54 × 10 3 (99.33%) 9.93 × 10 4 (73.80%) 1.72 × 10 3 (99.55%)
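The percentage improvements in parentheses appear consistent with ΔP = (Fmean_PSOWV − Fmean_variant) / Fmean_PSOWV × 100%, i.e., the relative reduction in Fmean with respect to the PSOWV baseline; the sketch below reflects this reading of the table (our interpretation, not a formula quoted from this excerpt), and small discrepancies arise from the rounding of the reported Fmean values.

```python
def percentage_improvement(baseline, value):
    """Relative Fmean reduction with respect to the PSOWV baseline, in percent.
    Assumed reading of the Delta_P column in Table 9."""
    return 100.0 * (baseline - value) / baseline

# F2: PSOWV = 6.94e10, MSPSOTLP-1 = 1.47e10 -> about 78.8%, close to the 78.79% reported.
print(round(percentage_improvement(6.94e10, 1.47e10), 2))
```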
Table 10. Properties of datasets selected for ANN model training.
No. | Dataset | # Attributes | # Classes | # Samples
DS1 | Iris | 4 | 3 | 150
DS2 | Liver Disorder | 6 | 2 | 345
DS3 | Blood Transfusion | 4 | 2 | 748
DS4 | Statlog Heart | 13 | 2 | 270
DS5 | Hepatitis | 19 | 2 | 80
DS6 | Wine | 13 | 3 | 178
DS7 | Breast Cancer | 9 | 2 | 277
DS8 | Seeds | 7 | 3 | 210
DS9 | Australian Credit Approval | 14 | 2 | 690
DS10 | Haberman’s Survival | 3 | 2 | 306
DS11 | New Thyroid | 5 | 3 | 215
DS12 | Glass | 9 | 6 | 214
DS13 | Balance Scale | 4 | 3 | 625
DS14 | Dermatology | 34 | 6 | 338
DS15 | Landsat | 36 | 6 | 4435
DS16 | Bank Note | 4 | 2 | 1372
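For orientation, the sketch below loads one of the listed datasets (DS1, Iris) through scikit-learn and produces a stratified training/testing split; the 70/30 ratio is an arbitrary illustration and not necessarily the split protocol used in this study.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# DS1 (Iris): 4 attributes, 3 classes, 150 samples, as listed in Table 10.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)
print(X_train.shape, X_test.shape)  # (105, 4) (45, 4)
```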
Table 11. The C and SD values produced by ANN models trained by the proposed MSPSOTLP and other PSO variants when solving the training samples.
Dataset | Criteria | MSPSOTLP | PSO | PSOWV | CSO | MPSOWV | SLPSO | PSOGSA | APSO
DS1 C 99.5894.5867.4279.0898.6777.0090.9219.83
SD4.39 × 10−12.25 × 1001.11 × 1016.30 × 1003.73 × 10−12.20 × 1009.84 × 1004.01 × 101
DS2 C 72.7273.4452.6858.5569.6057.0373.2658.59
SD2.45 × 10−15.37 × 10−15.30 × 1001.05 × 1009.12 × 10−12.67 × 1002.09 × 10−12.43 × 100
DS3 C 80.5980.5578.6378.9379.4578.9381.2978.90
SD5.29 × 10−24.83 × 10−12.70 × 1000.00 × 1003.34 × 10−10.00 × 1001.67 × 10−10.00 × 100
DS4 C 96.2094.0769.8280.5691.9077.6992.1852.45
SD8.11 × 10−12.09 × 1007.87 × 1002.12 × 1005.35 × 10−12.41 × 1001.17 × 1001.86 × 101
DS5 C 96.4194.5362.1975.9495.0080.3192.8160.94
SD1.81 × 1001.80 × 1009.55 × 1000.00 × 1001.80 × 1003.61 × 1009.02 × 10−11.18 × 101
DS6 C 100.0094.0959.5882.3299.7966.4199.5128.80
SD0.00 × 1001.08 × 1002.22 × 1013.33 × 1004.07 × 10−14.30 × 1004.07 × 10−14.64 × 101
DS7 C 82.4883.2967.3474.3279.6974.7379.5569.73
SD3.82 × 10−16.88 × 10−12.31 × 1009.38 × 10−14.50 × 10−16.88 × 10−12.74 × 1004.82 × 100
DS8 C 97.5087.6260.3075.7796.3777.2093.1069.05
SD1.04 × 1007.35 × 1002.03 × 1012.81 × 1003.44 × 10−14.81 × 1009.09 × 10−14.70 × 101
DS9 C 89.6789.4073.5582.6388.9783.9789.1088.46
SD2.96 × 10−11.41 × 1007.60 × 1001.73 × 1004.56 × 10−12.20 × 1003.14 × 10−11.56 × 101
DS10 C 75.1475.4370.9472.3375.1472.0475.7172.20
SD2.32 × 10−12.36 × 10−13.89 × 1008.16 × 10−12.36 × 10−14.08 × 10−11.08 × 1000.00 × 100
DS11 C 98.5593.2080.3592.8595.8785.2394.4871.28
SD1.29 × 1001.01 × 1015.53 × 1003.20 × 1001.34 × 1006.40 × 1006.71 × 10−11.11 × 101
DS12 C 54.3945.7313.2215.9741.1713.7449.0645.61
SD1.04 × 1011.32 × 1011.14 × 1014.39 × 1002.06 × 1012.05 × 1001.88 × 1001.49 × 101
DS13 C 91.1886.9665.0680.7889.4281.4689.8277.00
SD1.50 × 1002.31 × 1004.63 × 1001.93 × 1001.15 × 10−14.05 × 1007.02 × 10−12.70 × 101
DS14 C 29.0228.5024.3427.2029.0224.8627.9421.33
SD9.90 × 10−33.03 × 1002.29 × 1002.13 × 1004.35 × 10−156.06 × 10−13.03 × 1006.06 × 10−1
DS15 C 76.6371.1633.3552.3075.5735.5369.5033.74
SD3.67 × 10−11.30 × 1003.48 × 1012.20 × 1008.15 × 1001.10 × 10−18.03 × 1002.55 × 101
DS16 C 100.0098.4873.9696.0799.4692.7199.6863.37
SD0.00 × 1001.90 × 10−16.01 × 1006.59 × 10−10.00 × 1008.71 × 1001.58 × 10−11.46 × 101
# B C 122001020
w/t/l-13/0/316/0/016/0/014/2/016/0/013/0/316/0/0
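The classification accuracy C reported in Tables 11 and 12 is the percentage of correctly classified samples, and SD is its standard deviation over independent runs; a minimal sketch of this bookkeeping (with toy predictions, not the paper’s data) is shown below.

```python
import numpy as np

def classification_accuracy(y_true, y_pred):
    """Percentage of correctly classified samples."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(y_true == y_pred)

# Accuracies collected over independent training runs (toy predictions).
run_acc = [classification_accuracy([0, 1, 2, 1, 0], [0, 1, 2, 2, 0]),
           classification_accuracy([0, 1, 2, 1, 0], [0, 1, 2, 1, 0])]
print(np.mean(run_acc), np.std(run_acc))  # mean accuracy C and its SD over runs
```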
Table 12. The C and SD values produced by ANN models trained by the proposed MSPSOTLP and other PSO variants when solving the testing samples.
Dataset | Criteria | MSPSOTLP | PSO | PSOWV | CSO | MPSOWV | SLPSO | PSOGSA | APSO
DS1 C 100.0099.6756.6788.3397.3392.3387.6716.67
SD0.00 × 1001.49 × 1003.16 × 1012.23 × 1010.00 × 1000.00 × 1003.35 × 1014.88 × 101
DS2 C 50.0047.6835.8040.7347.8337.1049.2855.22
SD7.64 × 10−16.48 × 10−11.51 × 1017.67 × 1001.45 × 1001.35 × 1015.23 × 1003.89 × 100
DS3 C 58.5358.5360.6763.8065.3365.4749.7363.13
SD2.81 × 10−11.19 × 1011.27 × 1012.40 × 1004.37 × 1007.70 × 10−14.23 × 1002.31 × 100
DS4 C 80.3775.9361.3075.0077.2272.7873.1542.04
SD1.56 × 1003.85 × 1002.38 × 1012.14 × 1001.07 × 1003.21 × 1003.85 × 1001.57 × 101
DS5 C 72.5060.0051.8855.6360.6360.6353.1346.88
SD3.23 × 1003.61 × 1007.22 × 1001.30 × 1019.55 × 1001.08 × 1010.00 × 1001.44 × 101
DS6 C 90.2876.9434.4461.1182.5042.7883.0616.11
SD1.46 × 1003.21 × 1001.58 × 1018.93 × 1002.78 × 1001.79 × 1016.99 × 1004.01 × 101
DS7 C 73.8272.5559.4665.0970.7366.3666.9169.27
SD9.39 × 10−11.05 × 1007.57 × 1003.15 × 1001.05 × 1004.81 × 1002.78 × 1004.58 × 100
DS8 C 100.0078.3330.2432.8696.1940.2474.2954.76
SD7.53 × 10−15.22 × 1013.23 × 1015.23 × 1013.71 × 1012.75 × 1004.76 × 1004.81 × 101
DS9 C 84.5782.3967.7576.3882.5487.3187.0586.86
SD6.87 × 10−12.54 × 1000.00 × 1003.16 × 1002.09× 1002.09 × 1003.83 × 1001.51 × 101
DS10 C 78.6978.0368.8573.1278.5376.3967.3878.53
SD0.00 × 1000.00 × 1004.41 × 1011.64 × 1000.00 × 1005.91 × 1001.46 × 1011.89 × 100
DS11 C 84.1967.2138.8448.3754.1942.7947.6740.47
SD1.05 × 1014.65 × 1003.55 × 1004.65 × 1001.28 × 1018.70 × 10−158.06 × 1004.03 × 100
DS12 C 46.5144.198.1414.1929.0715.5829.0725.58
SD6.39 × 1005.85 × 1002.01 × 1015.37 × 1006.15 × 1007.48 × 1009.30 × 1007.48 × 100
DS13 C 88.7285.8454.8082.0887.3680.2486.8880.80
SD1.22 × 1002.01 × 1008.09 × 1003.23 × 1001.67 × 1007.26 × 1004.62 × 10−13.80 × 101
DS14 C 65.8365.9757.7862.6465.1459.1762.7854.17
SD9.71 × 10−14.88 × 1007.13 × 1006.56 × 1001.39 × 1004.01 × 1002.41 × 1000.00 × 100
DS15 C 80.4772.6612.2225.7878.1910.6763.8421.76
SD1.49 × 1001.14 × 1019.85 × 1002.22 × 1003.03 × 1017.81 × 1003.06 × 1013.41 × 101
DS16 C 92.1590.1833.984.981.3555.9986.2831.68
SD3.30 × 1002.64 × 1000 × 1003.39 × 1001.48 × 1013.45 × 1001.16 × 1013.16 × 101
# B C 121001101
w/t/l-14/1/115/0/115/0/115/0/114/0/215/0/113/0/3
Table 13. Wilcoxon signed rank test as a pairwise comparison between the ANN models optimized by MSPSOTLP and each PSO variant when classifying the training samples.
MSPSOTLP vs. | R+ | R− | p Value | h Value
PSO | 122.0 | 14.0 | 3.36 × 10^−3 | +
PSOWV | 136.0 | 0.0 | 3.05 × 10^−5 | +
CSO | 136.0 | 0.0 | 3.05 × 10^−5 | +
MPSOWV | 119.0 | 1.0 | 1.22 × 10^−4 | +
SLPSO | 136.0 | 0.0 | 3.05 × 10^−5 | +
PSOGSA | 122.0 | 14.0 | 3.36 × 10^−3 | +
APSO | 136.0 | 0.0 | 3.05 × 10^−5 | +
Table 14. Wilcoxon signed rank test as a pairwise comparison between the ANN models optimized by MSPSOTLP and each PSO variant when classifying the testing samples.
MSPSOTLP vs. | R+ | R− | p Value | h Value
PSO | 134.0 | 2.0 | 9.16 × 10^−5 | +
PSOWV | 135.0 | 1.0 | 6.10 × 10^−5 | +
CSO | 134.0 | 2.0 | 9.16 × 10^−5 | +
MPSOWV | 125.0 | 11.0 | 1.68 × 10^−3 | +
SLPSO | 130.0 | 6.0 | 4.27 × 10^−4 | +
PSOGSA | 133.0 | 3.0 | 1.53 × 10^−4 | +
APSO | 125.0 | 11.0 | 1.68 × 10^−3 | +
Table 15. The average ranking values of ANN models trained by all compared algorithms in solving training samples.
Algorithm | Ranking | Chi-Square Statistics | p Value
MSPSOTLP | 1.4688 | 94.875000 | 0.00 × 10^0
PSOGSA | 2.8125
PSO | 2.8750
MPSOWV | 2.9062
CSO | 5.5312
SLPSO | 5.9062
APSO | 6.9375
PSOWV | 7.5625
Table 16. The average ranking values of ANN models trained by all compared algorithms in solving testing samples.
Algorithm | Ranking | Chi-Square Statistics | p Value
MSPSOTLP | 1.6250 | 59.864583 | 0.00 × 10^0
MPSOWV | 2.9688
PSO | 3.3750
PSOGSA | 4.4688
SLPSO | 5.2188
CSO | 5.3125
APSO | 5.7188
PSOWV | 7.3125
Table 17. Adjusted p values produced by trained ANN models in solving training samples through three post-hoc analysis procedures.
MSPSOTLP vs. | z | Unadjusted p | Bonferroni-Dunn p | Holm p | Hochberg p
PSOWV | 7.04 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
APSO | 6.31 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
SLPSO | 5.12 × 10^0 | 0.00 × 10^0 | 2.00 × 10^−6 | 1.00 × 10^−5 | 1.00 × 10^−5
CSO | 4.69 × 10^0 | 3.00 × 10^−6 | 1.70 × 10^−5 | 1.10 × 10^−5 | 1.10 × 10^−5
MPSOWV | 1.66 × 10^0 | 9.69 × 10^−2 | 6.79 × 10^−1 | 2.91 × 10^−1 | 1.21 × 10^−1
PSO | 1.62 × 10^0 | 1.04 × 10^−1 | 7.31 × 10^−1 | 2.91 × 10^−1 | 1.21 × 10^−1
PSOGSA | 1.55 × 10^0 | 1.21 × 10^−1 | 8.45 × 10^−1 | 2.91 × 10^−1 | 1.21 × 10^−1
Table 18. Adjusted p values produced by trained ANN models in solving testing samples through three post-hoc analysis procedures.
MSPSOTLP vs. | z | Unadjusted p | Bonferroni-Dunn p | Holm p | Hochberg p
PSOWV | 6.57 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
APSO | 4.73 × 10^0 | 0.00 × 10^0 | 1.60 × 10^−5 | 1.40 × 10^−5 | 1.40 × 10^−5
CSO | 4.26 × 10^0 | 2.10 × 10^−5 | 1.44 × 10^−4 | 1.03 × 10^−4 | 1.03 × 10^−4
SLPSO | 4.15 × 10^0 | 3.30 × 10^−5 | 2.33 × 10^−4 | 1.33 × 10^−4 | 1.33 × 10^−4
PSOGSA | 3.28 × 10^0 | 1.03 × 10^−3 | 7.17 × 10^−3 | 3.07 × 10^−3 | 3.07 × 10^−3
PSO | 2.02 × 10^0 | 4.33 × 10^−2 | 3.03 × 10^−1 | 8.66 × 10^−1 | 8.66 × 10^−1
MPSOWV | 1.55 × 10^0 | 1.21 × 10^−1 | 8.45 × 10^−1 | 1.21 × 10^−1 | 1.21 × 10^−1