**1. Introduction**

Inverse problems, or real-world design problems, have recently become an active research topic in academia and the engineering sciences, and finding the optimal solution to such problems is difficult due to the presence of multimodal cost functions. Because traditional optimization methods are incapable of resolving inverse or real-world problems, a wealth of studies has contributed to the development of nature-inspired algorithmic models that improve computational capability and increase the diversity of the search space in complex engineering optimization problems.

When engineering optimization problems arising from electromagnetics are to be solved, the choice of optimization technique deserves particular attention. Previous work shows that such problems have many local minima and a single global optimum, and existing stochastic algorithms attempt to reach the region of the global optimum. A limitation of these methods is that they converge slowly or require additional computational modifications. To avoid unnecessary computational cost and develop a robust method for the case study, such techniques play an imperative role in making the algorithms more efficient while maintaining a sound balance between clarity, reliability, and computational performance.

**Citation:** Khan, R.A.; Yang, S.; Khan, S.; Fahad, S.; Kalimullah. A Multimodal Improved Particle Swarm Optimization for High Dimensional Problems in Electromagnetic Devices. *Energies* **2021**, *14*, 8575. https://doi.org/10.3390/en14248575

Academic Editors: Marcin Kamiński and Angel A. Juan

Received: 25 November 2021; Accepted: 9 December 2021; Published: 20 December 2021



A series of metaheuristic algorithms exists for finding the global best solution of inverse problems, but there is still no single evolutionary method that solves most multimodal optimization problems. Many efforts have therefore been made by scientists and researchers to optimize the general structure of such algorithms for real-world engineering optimization problems. In this regard, various algorithms have been developed, as reported in the following paragraphs.

In the field of engineering, a variety of optimization algorithms are used, including ant colony optimization, differential evolution, glowworm swarm optimization, artificial bee colony, the genetic algorithm, the cuckoo search algorithm, and particle swarm optimization (PSO). Among these methods, PSO is one of the most recent and simplest [1]. During the search process of the PSO, each candidate shares information with the other candidates to expand the explored space [2]. The PSO algorithm iteratively optimizes a problem starting from a set, or population, of candidate solutions, referred to in this context as a swarm of particles. Each particle knows both the global best position found within the swarm (and its fitness value for the problem at hand) and its own personal best position (and its fitness value) discovered so far during the search [3]. The particles move stochastically through the search space in an iterative process until the entire swarm converges toward the global minimum.

The PSO comprises three parameters: one control parameter and two learning parameters, each of which plays a significant role in the search process. The cognitive constant *c*1 and the social constant *c*2 weight the contributions of the personal best *pbest* and the global best *gbest*, while the inertia weight *w* balances exploration and exploitation of the search domain [4].

The fundamental equations for updating position and velocity in a PSO are:

$$V_i^{k+1} = wV_i^k + c_1r_1(pbest_i^k - X_i^k) + c_2r_2(gbest^k - X_i^k) \tag{1}$$

$$X_i^{k+1} = X_i^k + V_i^{k+1} \tag{2}$$

where *i* denotes the *i*th particle, *k* is the generation number, *V<sub>i</sub><sup>k</sup>* is the *i*th particle's velocity, and *X<sub>i</sub><sup>k</sup>* is its position. Among the learning parameters, the cognitive constant *c*1 pulls the particle toward *pbest*, the social constant *c*2 pushes the particle toward *gbest*, and *r*1 and *r*2 are random values drawn uniformly between 0 and 1.
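Equations (1) and (2) can be sketched in vectorized form as follows. This is a minimal illustrative implementation, not the code used in this paper; the array shapes, default parameter values, and function name are assumptions for the example.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO velocity/position update per Equations (1) and (2).

    X, V      : (n_particles, n_dims) current positions and velocities
    pbest     : (n_particles, n_dims) personal best positions
    gbest     : (n_dims,) best position found by the whole swarm
    w, c1, c2 : inertia weight, cognitive constant, social constant
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(X.shape)  # fresh U(0, 1) draws each generation
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. (1)
    X = X + V                                                  # Eq. (2)
    return X, V
```

In a full run, this step would be repeated each generation, with *pbest* and *gbest* refreshed from the fitness function after every position update.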

Many researchers and scientists have developed various formulations and strategies for these three basic parameters, as described in [5]. When solving a high-dimensional optimization problem, the basic PSO converges prematurely because the parameters are chosen inappropriately and the mutation operators are incapable of optimizing the problem. Researchers have recently modified the traditional PSO by adding mutation operators, hybridizing it with other algorithms, changing the topological structure, and introducing new inertia weight approaches for various problems, producing better results.
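One widely cited inertia weight approach from the literature (not the dynamic scheme proposed in this paper) decreases *w* linearly over the run, so early generations explore with a large *w* and later generations exploit with a small one. The bounds 0.9 and 0.4 below are the commonly quoted defaults, used here only as an assumption for the sketch:

```python
def linear_inertia(k, k_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at generation 0,
    w_min at generation k_max, interpolated linearly in between."""
    return w_max - (w_max - w_min) * k / k_max
```

Such a schedule would simply replace the constant `w` passed into each generation's velocity update.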

In order to control premature convergence, many researchers have used different mutation operators to make the algorithm more robust and to improve the exploration and exploitation capabilities of the particles. However, most of these strategies are problem-oriented; for example, the Student's t mutation is used in local search, but it may fail if the distance between the current search point and the optimal position is too wide [6]. The literature shows that the performance of a PSO depends on three basic parameters, i.e., the inertia weight *w*, the cognitive constant *c*1, and the social constant *c*2. In the basic PSO, however, the values of *w*, *c*1, and *c*2 are not designed to maintain a sound balance between local and global search, so the parameter values must be adjusted appropriately. A new concept known as smart particle swarm optimization (SPSO) is applied in [7] to address the aforementioned problems. The smart particle is based on the convergence factor (CF) technique, which combines a memory of particle positions, a comparison stage, and finally a leader declaration to find the best optimal solution. Furthermore, some researchers have worked on energy system management and design algorithms for the purpose of developing smart artificial intelligence [8–13].

In this paper, a new approach is proposed that combines a dynamic inertia weight, based on novel mathematical equations, with a mutation mechanism. The personal best and global best particles are mutated through a uniquely designed roulette wheel selection method, which overcomes the premature convergence problem by establishing a proper balance between exploration and exploitation.
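The paper's roulette wheel design is a bespoke variant detailed in Section 3; for orientation, a standard fitness-proportionate roulette wheel, the generic technique on which such selection schemes are built, can be sketched as follows. The scoring transform for minimization is an assumption of the example, not taken from this paper.

```python
import numpy as np

def roulette_select(fitness, rng=None):
    """Standard fitness-proportionate (roulette-wheel) selection.

    `fitness` holds nonnegative scores where larger is better; for a
    minimization problem, pass transformed scores such as 1 / (1 + cost).
    Returns the index of the selected candidate.
    """
    rng = rng or np.random.default_rng()
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()                      # selection probabilities
    return int(rng.choice(len(p), p=p))  # spin the wheel once
```

Candidates with higher scores occupy larger slices of the wheel and are therefore chosen more often, while low-score candidates retain a small, nonzero chance, which preserves diversity.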

The remainder of this paper is organized as follows: the related work is reviewed in Section 2; the novel IPSO is described in Section 3; the numerical results are analyzed in Section 4; a discussion is presented in Section 5; the application of the work is reported in Section 6; and the conclusion is given in Section 7.
