*3.1. Framework*

In the memetic strategy, the overall optimization search uses PSO. To optimize an objective function, the first task is to generate the initial population. In standard PSO the initial population is generated randomly, which can affect both the convergence speed of the algorithm and the accuracy of the final solution. In the absence of prior knowledge, we therefore replace the purely random initial positions with a population generated by opposition-based learning. This increases the chance of reaching the global optimal solution [36].
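The section does not spell out the exact opposition-based initialization, but a common form generates a random population, forms the opposite of each point within the bounds, and keeps the fitter half of the union. A minimal sketch of that idea (function name, bound handling, and selection rule are assumptions, not the paper's specification):

```python
import numpy as np

def obl_initialize(pop_size, dim, lb, ub, objective, rng=None):
    """Opposition-based initialization (illustrative sketch):
    sample a random population, build its opposite population
    x_opp = lb + ub - x, and keep the best pop_size of the union."""
    rng = np.random.default_rng(rng)
    X = lb + rng.random((pop_size, dim)) * (ub - lb)  # random candidates
    X_opp = lb + ub - X                               # opposite candidates
    union = np.vstack([X, X_opp])
    fitness = np.apply_along_axis(objective, 1, union)
    keep = np.argsort(fitness)[:pop_size]             # fitter half survives
    return union[keep]
```

For a minimization problem this starts the swarm from points that are, on average, at least as good as a purely random start.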

Over the entire iterative search, we want the search range in the early stage to be as large as possible, to enhance the global optimization capability, and the search range in the later stage to change only slightly, to enhance the local optimization capability. This means we need to vary the inertia weight, which scales the first term on the right-hand side of Equation (8). The dynamic inertia weight is applied to the previous population cognition, providing a reference for the optimization process in the current iteration.
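A widely used realization of this idea, shown here only as an illustration since the section does not give the schedule, is a linearly decreasing inertia weight (the endpoint values 0.9 and 0.4 are conventional defaults, not values taken from the paper):

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: large early for wide
    global exploration, small late for fine local exploitation.
    t is the current iteration, t_max the iteration budget."""
    return w_max - (w_max - w_min) * t / t_max
```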

In population evolution, the second term on the right-hand side of Equation (8) represents the synchronization between the particle's current position and its individual best position in the iteration. This is the process by which the particle revises its own evolutionary path, reflecting the effect of the particle's own experience on its next step. The third term on the right-hand side of Equation (8) represents the synchronization of the particle's current position with the best position of the group. This is the corrective behavior of the particle after observing the evolution of the surrounding particles; such social behavior reflects the group's information sharing and cooperation.
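The three terms described above are the standard PSO velocity update. Assuming Equation (8) takes the usual form with acceleration coefficients c1, c2 and uniform random vectors r1, r2 (symbol names here are assumptions), one update step can be sketched as:

```python
import numpy as np

def pso_velocity_update(v, x, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """One velocity update with the three terms of Equation (8):
    inertia (previous velocity), cognitive pull toward the particle's
    own best, and social pull toward the group's best."""
    rng = np.random.default_rng(rng)
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    inertia = w * v                        # first term: momentum
    cognitive = c1 * r1 * (pbest - x)      # second term: own experience
    social = c2 * r2 * (gbest - x)         # third term: group experience
    return inertia + cognitive + social
```

A particle sitting at both its own best and the group best with zero velocity stays at rest, which is consistent with the synchronization interpretation above.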

After obtaining the best population in the current iteration, we apply mutation to individual particles to further enhance the diversity of the population. The aim of this operation is to let the optimization process converge stably while retaining a certain ability to jump out of a local optimum.
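The section does not specify the mutation operator; one plausible sketch, assuming a Gaussian perturbation applied coordinate-wise with probability p_m (both the operator and the parameter names are assumptions), is:

```python
import numpy as np

def mutate_population(X, lb, ub, p_m=0.1, sigma=0.1, rng=None):
    """Illustrative Gaussian mutation: with probability p_m, perturb each
    coordinate by noise scaled to the search range, then clip to bounds.
    Small p_m preserves stable convergence while keeping an escape route
    from local optima."""
    rng = np.random.default_rng(rng)
    mask = rng.random(X.shape) < p_m
    noise = rng.normal(0.0, sigma, X.shape) * (ub - lb)
    return np.clip(np.where(mask, X + noise, X), lb, ub)
```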
