#### *3.5. GWOs*

GWO is a population-based metaheuristic with only two control parameters [97]. The hunt comprises four phases: chasing, encircling, harassing, and attacking. Based on wolf rank, GWO distinguishes four types of wolves, with alpha being the strongest, followed by beta, delta, and omega. GWO retains only the three best solutions (alpha, beta, and delta); the remaining wolves reposition within the solution space toward the mean of the positions dictated by these three leaders.
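As a minimal sketch of the mechanism just described (not any specific variant surveyed here), the canonical GWO position update averages three candidate positions, each steered by one of the alpha, beta, and delta leaders. The function name and signature below are illustrative; in the full algorithm the coefficient `a` decreases linearly from 2 to 0 over the iterations.

```python
import numpy as np

def gwo_step(wolves, alpha, beta, delta, a, rng):
    """One GWO position update: each wolf moves to the mean of three
    positions guided by the alpha, beta, and delta leaders."""
    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        guided = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(x.shape) - a   # balances exploration/exploitation
            C = 2 * rng.random(x.shape)           # encircling coefficient
            D = np.abs(C * leader - x)            # distance to the leader
            guided.append(leader - A * D)
        new_wolves[i] = np.mean(guided, axis=0)   # average of the three guided moves
    return new_wolves
```

In practice this step is wrapped in a loop that re-evaluates fitness, re-selects the three leaders, and decays `a` each iteration.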


**Table 6.** ABCs' essential information and metrics.


**Table 7.** ABCs' experiment results.

The "N/A" means that there is insufficient data to support an average algorithm ranking using the Friedman Test on the three cases: SDM, DDM, and Photowatt-PWP201.

Vinod et al. [98] pioneered the use of GWO for the SDM, and the results showed that GWO achieved high accuracy. The study in [99] found that larger populations performed better, and accordingly developed a multi-group grey wolf optimizer (MGGWO); the results showed that MGGWO excelled in both speed and accuracy. A new GWO (OLBGWO) was designed in [100], which incorporated an orthogonal learning mechanism to improve the local exploration capability of GWO. OLBGWO's performance was evaluated on different PV models, and the results showed its excellent speed and accuracy. In [101], an improved GWO (I-GWO) was developed by introducing a hunting search mechanism based on dimensional learning. Ramadan et al. [102] introduced a domain search strategy to implement an improved GWO (IGWO) and demonstrated the algorithm's accuracy in two PV cases.

The relevant information and experimental results of the GWO variants are summarized in Tables 8 and 9. I-GWO has the lowest resource consumption, followed by OLBGWO, GWO, MGGWO, and IGWO. In the overall accuracy ranking, OLBGWO is first and I-GWO is second. It is worth noting that MGGWO achieves a MIN RMSE of 4 × 10<sup>−4</sup> on the SDM, a value not matched by any of the other algorithms surveyed. The GWO variants consume considerable computational resources, so there is much room for improvement in reducing GWO's computational cost.

#### *3.6. JAYAs*

JAYA, which means "victory" in Sanskrit, combines survival of the fittest with a leader guiding the population [103]. A key feature of JAYA is that it requires no algorithm-specific control parameters and no initial derivative information. During each iterative update, solutions move quickly toward the best solution and away from the worst.
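A minimal sketch of this update rule, assuming a minimization objective and greedy selection (the function name, signature, and NumPy framing are illustrative, not taken from any variant surveyed below):

```python
import numpy as np

def jaya_step(pop, fitness, objective, rng):
    """One JAYA iteration: each solution moves toward the current best
    and away from the current worst; a candidate replaces its parent
    only if it improves the objective (greedy selection)."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    for i, x in enumerate(pop):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # Attraction to the best, repulsion from the worst
        cand = x + r1 * (best - np.abs(x)) - r2 * (worst - np.abs(x))
        f = objective(cand)
        if f < fitness[i]:
            pop[i], fitness[i] = cand, f
    return pop, fitness
```

Because no algorithm-specific parameters appear (only the random vectors `r1`, `r2`), the method needs no tuning beyond population size and iteration count, which is the feature the variants below build upon.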


**Table 8.** GWOs' essential information and metrics.

**Table 9.** GWOs' experiment results.


The "N/A" means that there is insufficient data to support an average algorithm ranking using the Friedman Test on the three cases: SDM, DDM, and Photowatt-PWP201.

In [104], the authors designed an improved JAYA (IJAYA) that adaptively adjusted weights and optimized algorithm performance using a chaotic elite learning method. IJAYA showed highly competitive performance on several PV models, with excellent accuracy and reliability. An improved JAYA (EOJAYA) was developed in [105] by introducing an elite opposition mechanism to modify the update scheme. In [106], the Nelder-Mead algorithm was introduced to boost JAYA, and the method's effectiveness was well verified on the SDM. In [107], PGJAYA was designed to quantify the performance of individuals probabilistically as a guide to improve the search method; adaptive chaotic perturbation was also employed to raise the overall quality of solutions. The PV model parameters estimated by PGJAYA proved its accuracy and robustness. Luu and Nguyen [108] introduced an adaptive population size mechanism to form a modified JAYA (MJA) and verified its performance and feasibility on the SDM and DDM. Jian et al. [109] developed a modified JAYA (LCJAYA) by introducing a logistic chaotic mapping mechanism and a chaotic mutation mechanism into the update phase and search strategy of JAYA, respectively; LCJAYA's reliability and accuracy were verified in different PV cases. In [110], a simple improved JAYA (CLJAYA) was designed by integrating learning techniques, and its efficiency and accuracy were demonstrated on benchmark functions and PV models. In [111], the authors developed a new JAYA (EJAYA) using an adaptive operator mechanism, a population size adjustment mechanism, and an opposition-based learning technique; the extraction of PV parameters demonstrated the effectiveness of EJAYA under different conditions. An enhanced chaotic JAYA (CJAYA) was developed in [112] by introducing an adaptive weighting strategy and three chaotic mechanisms: sine, tent, and logistic mappings. Saadaoui et al. [113] improved JAYA (MLJAYA) through three techniques: adaptive weighting, multiple learning, and chaotic perturbation. Jian and Cao [114] developed a chaotic second-order oscillation JAYA (CSOOJAYA) using second-order oscillation factors, chaotic logistic mapping, and a mutation mechanism; CSOOJAYA showed good reliability and accuracy in solving the studied problem.

The essential information and experimental results of the JAYA variants are summarized in Tables 10 and 11. Among them, EJAYA ranks first in TNFES with 30,000, followed by CLJAYA, IJAYA, PGJAYA, LCJAYA, CJAYA, CSOOJAYA, EOJAYA, and Jaya-NM. In the overall accuracy ranking, CLJAYA is first, followed in order by LCJAYA, EJAYA, MLJAYA, PGJAYA, CSOOJAYA, and IJAYA. The JAYA variants consume relatively large amounts of computational resources, and since the differences in their specific FT values are small, further research on JAYA could focus on reducing computational cost.


**Table 10.** JAYAs' essential information and metrics.


**Table 11.** JAYAs' experiment results.

The "N/A" means that there is insufficient data to support an average algorithm ranking using the Friedman Test on the three cases: SDM, DDM, and Photowatt-PWP201.
