#### *3.8. WOAs*

WOA consists of a prey-attacking phase responsible for exploitation and a prey-searching phase responsible for exploration [121,122]. The bubble-net attack comprises two mechanisms, i.e., encircling prey and spiral position updating, each selected with equal probability. The encircling-prey mechanism can place an individual anywhere between its current position and the best individual, within a range governed by the parameter *a*, which decreases from 2 to 0 as the optimization proceeds. In the spiral position update, the individual's position follows a spiral equation between the whale and the prey. In the search phase, individuals are updated as in the encircling-prey mechanism, except that a randomly chosen individual replaces the best individual.
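The three update rules above can be sketched as a single per-iteration step. This is a minimal illustration, assuming the common scalar forms of the coefficients *A* and *C*; the function and parameter names are our own, not from the cited works:

```python
import numpy as np

def woa_update(pop, best, t, t_max, b=1.0, rng=None):
    """One sketched iteration of the canonical WOA position update."""
    rng = rng or np.random.default_rng()
    a = 2.0 * (1.0 - t / t_max)              # a decreases linearly from 2 to 0
    new_pop = np.empty_like(pop)
    for i, x in enumerate(pop):
        A = 2.0 * a * rng.random() - a       # coefficient drawn from [-a, a]
        C = 2.0 * rng.random()
        if rng.random() < 0.5:               # shrinking-encirclement branch
            if abs(A) < 1:                   # exploit: move toward the best whale
                target = best
            else:                            # explore: move toward a random whale
                target = pop[rng.integers(len(pop))]
            new_pop[i] = target - A * np.abs(C * target - x)
        else:                                # spiral position update
            l = rng.uniform(-1, 1)
            new_pop[i] = np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
    return new_pop
```

Note how a single parameter, |*A*| < 1 or not, switches the encirclement branch between exploitation and exploration, which is why shrinking *a* over time shifts the search from global to local.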

An improved WOA (IWOA) was developed in [123] to address the premature convergence of WOA. IWOA adjusted the encircling-prey mechanism and modified the search-phase update to enhance exploration, diversity, and robustness. Experiments on different PV models showed that IWOA extracted parameters with fast convergence, high quality, and good robustness, making it competitive. In [124], Elazab et al. pioneered the application of WOA to the studied problem; comparisons with other algorithms demonstrated that WOA can fit PV data more accurately. To further strengthen WOA for this problem, Xiong et al. [18] developed a WOA variant (MCSWOA) by modifying WOA's search strategy with DE's mutation equation. A crossover operator was designed to improve the algorithm's applicability across different dimensions, and a selection operator was designed to ensure that the optimization process never worsens. The convergence curves, RMSE values, SIAE values, and rankings indicated that MCSWOA offered high accuracy, strong competitiveness, and fast convergence. Pourmousa et al. [125] designed a Springy WOA (SWOA) by adding a deletion stage to WOA. Peng et al. [126] developed a new approach (ISNMWOA) by combining the Nelder-Mead simplex technique with WOA. The results demonstrated that ISNMWOA performed significantly better than WOA and ran faster than other high-performance methods.
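The mutation-crossover-selection pipeline that MCSWOA grafts onto WOA's search phase can be illustrated with a DE-style step. The sketch below assumes DE/rand/1 mutation, binomial crossover, and greedy selection; the parameter values and names are illustrative, not taken from [18]:

```python
import numpy as np

def de_search_step(pop, fitness, objective, F=0.5, CR=0.9, rng=None):
    """Sketch of a DE-style mutation/crossover/greedy-selection step."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(idx, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
        mask = rng.random(d) < CR                    # binomial crossover
        mask[rng.integers(d)] = True                 # keep at least one mutant gene
        trial = np.where(mask, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fitness[i]:                    # greedy selection: never worsen
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```

The greedy selection at the end is what guarantees the monotone behavior the authors aimed for: an individual is replaced only when the trial vector is at least as good.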

The essential information and experimental results of the WOA variants are summarized in Tables 14 and 15. WOA consumes the fewest computational resources, followed in order by ISNMWOA, MCSWOA, IWOA, and SWOA. In Table 15, SWOA has the highest overall MIN RMSE ranking, followed by ISNMWOA, IWOA, and MCSWOA. SWOA achieves high accuracy but consumes substantial computational resources, with 5000 iterations at a population size of 30. The accuracy of ISNMWOA is close to that of SWOA, and its TNFES of 20,000 is much lower than SWOA's, although it still leaves room for improvement.


**Table 14.** WOAs' essential information and metrics.

**Table 15.** WOAs' experiment results.


#### *3.9. Hybrids*

The methods above are each driven largely by a single metaheuristic. Beyond them, hybrid approaches that combine two or more metaheuristics are also popular for the studied problem. The motivation behind hybridization is to integrate the complementary features of different algorithms and thereby balance global and local search abilities.

In [127], Xiong et al. devised an approach (DE/WOA) that took full advantage of DE and WOA to balance diversity and convergence. Long et al. [128] developed an approach (GWOCS) that introduced the opposition-based learning mechanism of cuckoo search (CS) for the three elite individuals preserved by GWO to achieve improved performance. The results on benchmark functions and PV models supported the authors' expectations of performance improvement. Rizk et al. [129] developed a new method (PSOGWO) by mixing GWO and PSO to make full use of their respective exploration and exploitation advantages; different PV models demonstrated the excellent performance of PSOGWO. Li et al. [130] designed a DE-based adaptive TLBO (ATLDE) by mixing DE with TLBO and adjusting the teaching and learning stages using a ranking probability mechanism; experimental results supported ATLDE's competitiveness. In [131], the authors combined DE with Harris Hawks Optimization (HHO) to form a new method (HHODE) and demonstrated the effectiveness of the improvement using the RMSE values of the extracted PV parameters. Yu et al. [132] devised a new method (HAJAYADE) by adaptively tuning the two parameters of JAYA; the method then incorporated DE and introduced a mutation operator and an adaptive chaos mechanism to ensure its performance. Devarapalli et al. [133] improved the update strategy of a hybrid of GWO and the sine cosine algorithm (HGWOSCA) to obtain an enhanced method (EHGWOSCA). Singh et al. [47] hybridized the Dingo Optimizer and PSO to form a new hybrid algorithm (HPSODOX) and developed a four-diode PV model to reveal HPSODOX's performance; the results supported the validity of the algorithmic improvement. Weng et al. [134] integrated a Backtracking Search Algorithm with TLBO to form a new method (TLBOBSA) and verified the method's effectiveness.
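The opposition-based learning idea that GWOCS applies to GWO's three leader wolves can be sketched compactly: for each elite solution, the point reflected through the center of the search box is evaluated, and the better of the pair is kept. The function name and the exact acceptance rule below are illustrative assumptions, not the precise formulation of [128]:

```python
import numpy as np

def opposition_refresh(leaders, lb, ub, objective):
    """Opposition-based refresh of elite solutions (GWOCS-style sketch)."""
    refreshed = []
    for x in leaders:
        x_opp = lb + ub - x    # opposite point inside the box [lb, ub]
        refreshed.append(x_opp if objective(x_opp) < objective(x) else x.copy())
    return refreshed
```

Because the opposite point lies in a region the current elite has not explored, this cheap one-evaluation check gives the hybrid an extra chance to escape a local optimum without disturbing GWO's main update loop.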

The essential information and experimental results of the hybrid methods are summarized in Tables 16 and 17. TLBOBSA has the lowest computational resource consumption, followed by ATLDE, DE/WOA, GWOCS, and HAJAYADE. TLBOBSA also has the highest overall ranking for MIN RMSE, followed by DE/WOA, HAJAYADE, and GWOCS. Ranking first in both resource efficiency and accuracy, TLBOBSA indicates that a well-designed hybrid scheme can achieve significant performance gains. It should be noted that, although the MIN RMSE of HPSODOX is very small, more basic information is needed and the experiment includes no repeated runs, so the performance of this method cannot be evaluated for the time being.
