*2.3. Topological Structure*

When dealing with complex, high-dimensional optimization problems, researchers are currently working on changing the topological structure of particle swarm optimization to mitigate premature convergence. In [26], an example-based learning PSO (ELPSO) has been reported to improve swarm diversity and convergence speed. In ELPSO, several of the current best candidates, rather than the single *gbest* particle, serve as examples that participate in the velocity update equation. The proposed velocity update is expressed mathematically as:

$$V_i^{k+1} = wV_i^k + c_1 rand1_i^k (pbest_{r_i}^k - X_i^k) + c_2 rand2_i^k (gbest_{r_i}^k - X_i^k) \tag{6}$$
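A minimal sketch of this example-based velocity update follows. The selection rule for the index $r_i$ is not fully specified in the text, so here it is assumed to be drawn uniformly at random per particle; the function name, parameter defaults, and the shape of the example pool are likewise illustrative assumptions, not the exact ELPSO implementation.

```python
import numpy as np

def elpso_velocity(V, X, pbest, gbest_examples, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Sketch of the example-based velocity update in Eq. (6).

    V, X           : (n, d) current velocities and positions
    pbest          : (n, d) personal best positions
    gbest_examples : (m, d) pool of current-best candidates; each particle
                     learns from a drawn example instead of a single gbest.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    r1 = rng.random((n, d))   # rand1_i^k
    r2 = rng.random((n, d))   # rand2_i^k
    # Assumption: r_i is drawn uniformly, independently for the pbest term
    # and for the example pool (the paper's exact rule may differ).
    idx_p = rng.integers(0, n, size=n)
    idx_g = rng.integers(0, len(gbest_examples), size=n)
    return (w * V
            + c1 * r1 * (pbest[idx_p] - X)
            + c2 * r2 * (gbest_examples[idx_g] - X))
```

With `w = c1 = c2 = 0` the update correctly yields a zero velocity, which is a quick sanity check on the three-term structure of Eq. (6).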

In [27], instead of the *pbest* and *gbest* particles, only the historical best information is used in the conventional PSO velocity update equation to maintain population diversity. In [28], the particle position is updated directly, without a separate velocity term, to adjust the balance between exploration and exploitation during the search; the update is expressed mathematically as:

$$X_i^{k+1} = (1 - \beta(t))p_i^k + \beta(t)p_g^k + \alpha(t)R_i^k \tag{7}$$
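The velocity-free update of Eq. (7) can be sketched as below. The schedules for $\beta(t)$ and $\alpha(t)$ and the distribution of $R_i^k$ are not given in the text, so this sketch assumes $\beta$ grows linearly from 0 to 1 over the run, $\alpha$ decays linearly to 0, and $R_i^k$ is standard-normal noise.

```python
import numpy as np

def position_update(X, pbest, gbest, t, T, rng=None):
    """Sketch of the direct position update in Eq. (7).

    Assumptions: beta(t) = t/T shifts the weight from each particle's
    pbest toward gbest as the run progresses, while alpha(t) = 1 - t/T
    shrinks the random perturbation R_i^k (exploration early,
    exploitation late).
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = t / T            # weight on the global best
    alpha = 1.0 - t / T     # scale of the random step
    R = rng.standard_normal(X.shape)
    return (1.0 - beta) * pbest + beta * gbest + alpha * R
```

At $t = T$ the perturbation vanishes and every particle collapses onto the global best position, which illustrates how this formulation trades exploration for exploitation over the run.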

In [29], an advanced particle swarm optimization (APSO) algorithm is presented. It modifies the velocity update equation so that particles reach the best solution more quickly than in traditional PSO. In [30], PSO with combined local and global expanding neighborhood topology (PSOLGENT) is proposed, which employs a novel expanding neighborhood topology. In [31], a local search strategy is developed in which each candidate first tries to reach a better position during the search and then attempts to become the best in the whole swarm.
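To illustrate the expanding-neighborhood idea behind topologies such as PSOLGENT, the sketch below finds each particle's neighborhood best on a ring whose radius the caller grows during the run until it covers the whole swarm. This is a generic illustration of an expanding ring topology, not the exact PSOLGENT scheme; the function name and minimization convention are assumptions.

```python
def neighborhood_best(pbest_vals, i, radius):
    """Index of the best personal-best fitness within a ring neighborhood
    of the given radius around particle i (minimization assumed).

    Expanding the topology means calling this with a growing `radius`:
    radius = 1 gives a local ring, radius >= n // 2 covers the swarm,
    so the search shifts from local to global influence over time.
    """
    n = len(pbest_vals)
    neigh = [(i + off) % n for off in range(-radius, radius + 1)]
    return min(neigh, key=lambda j: pbest_vals[j])
```

A small radius early in the run slows information spread and preserves diversity; widening it later recovers the fast convergence of the global-best topology.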
