*3.3. PSOs*

PSO is a widely used metaheuristic in artificial intelligence. Each particle's new position is the sum of its current position and its updated velocity. The velocity update comprises three parts: the first is the current velocity scaled by the inertia weight (*w*); the second steers the particle toward its individual best position, weighted by the learning factor (*c*1) and a random number (*r*1); the third steers it toward the global best position, weighted by the learning factor (*c*2) and a random number (*r*2). *r*1 and *r*2 are independent random variables, and *c*1 and *c*2 are set independently of each other [77,78].
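The three-part velocity update described above can be sketched as follows. This is a minimal illustrative implementation, not the exact scheme of any cited variant; the function name `pso_step` and the default values of *w*, *c*1, and *c*2 are assumptions chosen for the example.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration over 1-D particles (illustrative sketch).

    positions/velocities/pbest are per-particle lists; gbest is the
    swarm-wide best position. w, c1, c2 defaults are assumed values.
    """
    new_positions, new_velocities = [], []
    for x, v, p in zip(positions, velocities, pbest):
        # r1 and r2 are drawn independently for each particle
        r1, r2 = random.random(), random.random()
        # velocity = inertia term + individual-best term + global-best term
        v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        new_velocities.append(v_new)
        # new position = current position + updated velocity
        new_positions.append(x + v_new)
    return new_positions, new_velocities
```

In practice each particle is a vector over all model parameters (e.g., the five parameters of the SDM), and the same update is applied component-wise.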


**Table 2.** DEs' essential information and metrics.

Ben et al. [79] applied PSO to the SDM and compared it with other methods, concluding, with supporting data, that PSO outperformed them. In [80], Ni et al. presented an adaptive elite mutation technique for PSO (PSO-AEM) to search the neighborhood of PSO's global best position, and found that PSO-AEM converged faster with higher accuracy. Merchaoui et al. [81] found that PSO was prone to premature convergence, so an adaptive mutation technique was proposed and incorporated into PSO to form an improved variant, MPSO. MPSO achieved good IAE and RMSE values and fitted the characteristic curves well at different temperatures and light intensities. In [82], Guaranteed Convergence Particle Swarm Optimization (GCPSO) was presented to avoid premature convergence. In [83], an enhanced leader PSO (ELPSO) using five mutation operators to enhance the leader was designed, following the idea that a high-quality leader could pull the solutions toward the promising region. The identification results showed that ELPSO effectively improved the quality of PSO solutions. In [84], the authors presented an improved PSO (SAIW-PSO) that used a simulated annealing technique to control *w* and introduced a deterministic method for optimizing the current values. The fitting results supported the view that SAIW-PSO was accurate, fast, and effective. Kiani et al. [85] designed a dynamic inertia weight PSO (DEDIWPSO) with a double exponential function to mitigate premature convergence. This method demonstrated excellent validity, reliability, and accuracy on the problem covered in this work. The authors in [86] implemented PSO in parallel (PPSO) on a modern graphics processing unit (GPU). They demonstrated the very high accuracy and short elapsed time of PPSO by estimating the parameters of multiple PV models. In [87], an enhanced PSO (PSO-ST) was developed using sinusoidal chaos and tangential chaos techniques to adjust the inertia weight and learning factors. Inspired by the way cuckoo search randomly reselects parasitic nests, Fan et al. [88] developed a new method (PSOCS) by combining the random reselection strategy with PSO. The application results showed PSOCS's stability and effectiveness.


**Table 3.** DEs' experimental results.

"N/A" means that there is insufficient data to compute an average algorithm ranking with the Friedman test across the three cases: SDM, DDM, and Photowatt-PWP201.

Tables 4 and 5 summarize the essential information and numerical metrics of the PSO variants. In the past five years, there have been numerous studies on PSO. Regarding resource consumption, PSO-AEM has the lowest TNFES (10,000), followed by PSOCS, PSO, ELPSO, MPSO, SAIW-PSO, DEDIWPSO, PSO-ST, GCPSO, and PPSO. Regarding the MIN RMSE metric, DEDIWPSO ranks first, followed by PSO-ST, GCPSO, MPSO, PPSO, and PSOCS. Although DEDIWPSO has the highest accuracy, it consumes substantial computational resources. Hence, considerably reducing computational resource consumption while maintaining accuracy is worthy of further research.


**Table 4.** PSOs' essential information and metrics.


**Table 5.** PSOs' experimental results.

"N/A" means that there is insufficient data to compute an average algorithm ranking with the Friedman test across the three cases: SDM, DDM, and Photowatt-PWP201.
