#### *3.2. Population Initialization*

When no prior information is available, the initial population is usually generated randomly, which often leads to revisiting hopeless areas of the search space [37]. Opposition-based learning (OBL) considers candidate solutions together with their opposite solutions [38]: for each randomly generated solution, OBL constructs the corresponding inverse solution, so that two distinct solutions are obtained from a single random draw. This raises the chance that the initial population already contains promising solutions. OBL has been successfully applied in various population-based evolutionary algorithms [39,40]. To effectively increase the diversity of the initial population, we use the OBL strategy to generate the initial population, which comprises two parts: a random initial population and an anti-population. The random initial population {*xid*, *d* = 1, 2, . . . , *D*} is generated randomly in the form shown in Equation (5). Denoting the inverse population by {*x'id*, *d* = 1, 2, . . . , *D*}, *x'id* can be expressed as:

$$\mathbf{x}'\_{id} = \mathbf{x}\_{\text{max},d} + \mathbf{x}\_{\text{min},d} - \mathbf{x}\_{id}, \tag{10}$$

where *x*max,*d* and *x*min,*d* are the maximum and minimum values of the *d*th dimension of particle *xi* in the *D*-dimensional search space. After merging the random initial population and the anti-population, we select the particles with the smallest fitness values to form the new initial population.
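The OBL initialization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `obl_initialize`, the random seed, and the sphere toy fitness are assumptions, and minimization (smaller fitness is better) is assumed as in the text.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def obl_initialize(pop_size, dim, x_min, x_max, fitness):
    """Opposition-based population initialization (sketch).

    Generates a random population, builds the anti-population via
    x' = x_max + x_min - x (Equation (10)), merges the two, and keeps
    the pop_size individuals with the smallest fitness values.
    """
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    # Random initial population, uniform in [x_min, x_max] per dimension.
    pop = x_min + rng.random((pop_size, dim)) * (x_max - x_min)
    # Anti-population, Equation (10): reflection about the interval midpoint.
    anti = x_max + x_min - pop
    merged = np.vstack([pop, anti])
    # Keep the pop_size particles with the best (lowest) fitness.
    scores = np.apply_along_axis(fitness, 1, merged)
    best = np.argsort(scores)[:pop_size]
    return merged[best]

# Toy usage: sphere function as an assumed fitness for illustration.
init = obl_initialize(20, 5,
                      x_min=np.full(5, -10.0),
                      x_max=np.full(5, 10.0),
                      fitness=lambda x: np.sum(x**2))
print(init.shape)  # → (20, 5)
```

Note that the reflection in Equation (10) maps the box [*x*min, *x*max] onto itself, so the anti-population never leaves the feasible region.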

#### *3.3. Dynamic Inertia Weight*

The inertia weight *ω* controls the search range of the particles, allowing them to maintain their inertia of motion and search in new areas. This means that the new evolution carries over old evolutionary habits and experience. When the inertia weight is relatively large, the current velocity dominates and the influence of previous evolutionary experience is weakened. This is conducive to expanding the search field, but the convergence speed easily slows during optimization. When the inertia weight is relatively small, the particle follows the evolutionary direction suggested by previous experience. When that direction is correct, this helps speed up convergence toward the global optimum, but the optimization easily falls into local extremes [35].

The inertia weight *ω* affects the search speed and accuracy. For strongly nonlinear optimization problems, a fixed inertia weight results in fast but premature convergence, and the global optimum often cannot be reached. For the algorithm to obtain the best results during optimization, a varying inertia weight is needed. A linearly decreasing inertia weight is the traditional variable-weight strategy and can perform well [16]. When the initial inertia weight is large, the region containing the optimal solution can be located quickly; as the inertia weight decreases, the particles begin to search more finely.
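The traditional linearly decreasing schedule can be written compactly. The endpoint values 0.9 and 0.4 below are common defaults from the PSO literature, not values taken from this paper:

```python
def linear_inertia(t, T, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: starts at w_max at t = 0
    and decays with constant slope to w_min at t = T.
    The 0.9/0.4 endpoints are assumed common defaults."""
    return w_max - (w_max - w_min) * t / T

w_start = linear_inertia(0, 100)    # large weight: coarse global search
w_end = linear_inertia(100, 100)    # small weight: fine local search
```

Because the slope is constant, the weight decays at the same rate regardless of search progress, which motivates the nonlinear strategy discussed next.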

However, because the slope is constant, the rate of change always remains the same. If the early iterations do not produce better points, then the accumulation of iterations and the rapid decay of the weight may trap the search at a local optimum. Therefore, we use a nonlinear strategy, sine mapping, with ergodicity, non-repetition and irregularity [34,41], to adjust the inertia weight *ω* of PSO. This strategy not only enhances population diversity during the search but also strengthens the ability to converge to the global optimum. The dynamic inertia weight based on sine mapping can be expressed as:

$$
\omega = k\_t = \frac{q}{4} \sin(\pi k\_{t-1}), \quad k\_t \in (0, 1), \quad t = 1, 2, 3, \dots, T,\tag{11}
$$

where the control parameter *q* takes values in the range (0, 4].
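Iterating Equation (11) yields one inertia weight per iteration. The sketch below assumes the function name `sine_map_weights` and an initial state *k*0 in (0, 1); both are illustrative choices, not specified by the text:

```python
import math

def sine_map_weights(q, k0, T):
    """Iterate the sine map k_t = (q/4) * sin(pi * k_{t-1}) for T steps
    (Equation (11)), returning one inertia weight per PSO iteration.
    q in (0, 4] is the control parameter; k0 in (0, 1) is the assumed
    initial state of the map."""
    weights = []
    k = k0
    for _ in range(T):
        k = (q / 4.0) * math.sin(math.pi * k)
        weights.append(k)
    return weights

ws = sine_map_weights(q=4.0, k0=0.3, T=5)
```

Unlike the linear schedule, the sequence of weights is irregular rather than monotonically decreasing, which is what injects diversity into the search.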
