#### *2.1. Proper Adjustment of Parameters*

Researchers have modified the three basic PSO parameters to achieve a sound balance between exploration and exploitation. Eberhart and Shi first introduced an inertia weight into the PSO algorithm to control the search capabilities of the particles [4].

With the inertia weight *w* incorporated, the velocity equation becomes:

$$V\_i^{k+1} = wV\_i^k + c\_1r\_1(pbest\_i^k - X\_i^k) + c\_2r\_2(gbest\_i^k - X\_i^k) \tag{3}$$
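As a minimal sketch of Equation (3), the update below computes one particle's new velocity from its current velocity, its personal best, and the global best; the parameter values are illustrative defaults, not values prescribed by the cited works.

```python
import numpy as np

def update_velocity(v, x, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One application of Eq. (3): inertia term plus cognitive (pbest)
    and social (gbest) pulls. r1 and r2 are fresh uniform random
    numbers drawn each call."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(), rng.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

With c1 = c2 = 0 the update reduces to pure inertia, which makes the role of *w* in damping or sustaining the previous velocity easy to see.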

To control population diversity and improve PSO performance, the authors of [14] presented a strategy in which the inertia weight is determined from the Euclidean distance between particles. In [15], an updated version of PSO was reported that addresses the drawbacks of traditional PSO for photovoltaic (PV) parameter estimation. In that work, two mechanisms are designed to control the inertia weight and the acceleration coefficients, improving PSO performance and ensuring an adequate balance between local and global search: a sine chaotic inertia-weight mechanism is applied first, and a tangent chaotic technique then steers the acceleration coefficients toward an optimal solution. In [16], an improved multi-strategy particle swarm optimization (IMPSO) approach is described. It optimizes the structure and parameters to better map the highly nonlinear characteristics of railway traction braking, employing multi-strategy evolution methods with a nonlinearly decreasing inertia weight to enhance the global optimization performance of the swarm. In [17], an adaptive inertia weight factor (AIWF) is added to the PSO velocity update equation. Its main feature is that, unlike traditional PSO, where the inertia weight is held constant during optimization, the weight is adjusted adaptively based on the particle's success rate in reaching the optimal solution.
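Two of the inertia-weight schedules mentioned above can be sketched as follows: the classic linearly decreasing weight of Shi and Eberhart, and a chaotic update in the spirit of the sine chaotic mechanism of [15]. The exact map and constants used in [15] are not given here, so the sine-map form and its coefficient are assumptions for illustration only.

```python
import math

def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: starts at w_max (exploration)
    and falls to w_min (exploitation) over t_max iterations."""
    return w_max - (w_max - w_min) * t / t_max

def sine_chaotic_inertia(w_prev, r=0.99):
    """One step of a sine chaotic map, w_next = r * sin(pi * w_prev).
    The map form and r = 0.99 are illustrative assumptions; [15] may
    use a different chaotic map or coefficient."""
    return r * math.sin(math.pi * w_prev)
```

The chaotic variant trades the monotone decay of the linear schedule for a bounded but non-repeating sequence of weights, which is the diversity-preserving property such mechanisms aim for.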

#### *2.2. Mutation Methods*

Many scholars have worked to update the traditional PSO by introducing mutation operators that preserve population diversity and counter premature convergence. Some of the updated mutation mechanisms are reviewed in the following paragraph. An adaptive mutation strategy using an extended nonuniform mutation operator has been described, in which adaptive mutation helps trapped particles escape from local optima [18]. A hybrid inertia-weight modification tactic, based on a new particle-diversity measure and an adaptive mutation strategy, has been used to escape local convergence in complex networks [19]. In [20], different mutation operators are applied to particles in order to increase their search capability and prevent stagnation. In [21], the author proposes an adaptive mutation-selection strategy that conducts a local search around the globally best particle in the current population, which improves exploration of the search domain and accelerates the convergence of the candidates. In [22], the aim is to find the best solution by combining stochastic methods with PSO and an adaptive Cauchy mutation method to design a new algorithm. In [23], the author presents a multiple-scale self-adaptive cooperative mutation strategy-based particle swarm optimization algorithm (MSCPSO) to address two fundamental drawbacks of PSO. To search the whole solution space sufficiently, the approach uses multiscale Gaussian mutations with varying standard deviations. Equations (4) and (5) give the mathematical representation:

$$G\_d(t) = G\_d(t-1) + \sum\_{i=1}^{N} c\_{id}(t) \tag{4}$$

in which

$$c\_{id}(t) = \begin{cases} 0, & v\_{id}(t) > T\_d \\ 1, & v\_{id}(t) < T\_d \end{cases}$$

if $G\_d(t) > k\_1$, then

$$G\_d(t) = 0; \ T\_d = \frac{T\_d}{k\_2} \tag{5}$$
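The counting mechanism of Equations (4) and (5) can be sketched directly: per dimension *d*, count how many particles' velocities have fallen below a threshold *T<sub>d</sub>*, accumulate that count in *G<sub>d</sub>*, and once it exceeds *k*<sub>1</sub>, reset the counter and shrink the threshold by *k*<sub>2</sub>. The values of *k*<sub>1</sub> and *k*<sub>2</sub> below are illustrative assumptions, not the settings of [23].

```python
def update_stagnation_counter(G_d, T_d, velocities_d, k1=50, k2=2.0):
    """One step of Eqs. (4)-(5) for dimension d.

    velocities_d: velocity component v_id(t) of each particle i.
    Each particle with v_id(t) < T_d contributes c_id(t) = 1 (Eq. 4);
    when the accumulated count G_d exceeds k1, reset G_d to 0 and
    divide the threshold T_d by k2 (Eq. 5)."""
    G_d += sum(1 for v in velocities_d if v < T_d)  # Eq. (4)
    if G_d > k1:                                    # Eq. (5)
        G_d, T_d = 0, T_d / k2
    return G_d, T_d
```

Shrinking *T<sub>d</sub>* on each reset makes the stagnation test progressively stricter, so the detector keeps firing only if velocities keep collapsing, which is when the multiscale Gaussian mutations are triggered.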

In [24], the authors proposed a novel approach to the learning parameters: the two learning coefficients are dynamically modified to help the particles escape from local optima and converge to the global optimal solution. In [25], the application of Cauchy mutation and Gaussian mutation in a modified PSO is investigated. The main aim is to achieve faster convergence and better results on various real-world problems. In the domain of swarm intelligence, PSO serves as the basis for this work; the proposed PSO uses an improved weight factor compared with the traditional PSO to achieve better convergence.
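The two mutation operators compared in [25] can be sketched as simple position perturbations; the step-size parameter and the way the operator is applied within PSO are assumptions here, since the cited works differ in those details.

```python
import numpy as np

def mutate(position, scale=0.1, kind="gauss", rng=None):
    """Perturb a particle's position with Gaussian or Cauchy noise.
    Gaussian noise gives small local steps; the Cauchy distribution's
    heavy tails occasionally produce long jumps, which helps escape
    local optima. 'scale' is an assumed step-size parameter."""
    rng = rng or np.random.default_rng()
    if kind == "gauss":
        step = rng.normal(0.0, scale, size=position.shape)
    else:  # Cauchy mutation
        step = scale * rng.standard_cauchy(size=position.shape)
    return position + step
```

In practice such an operator is typically applied to the global best particle or to stagnating particles, replacing the original only if the mutated position improves the fitness.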
