1. Introduction
Swarm intelligence is a significant concept in the field of artificial intelligence, and existing theory and application research have shown that population-based metaheuristic algorithms are an effective new approach to solving optimization problems [1]. A swarm intelligence algorithm simulates the behavior of a biological population or a natural phenomenon: a group of simple individuals follows specific interaction mechanisms to solve a given complex optimization problem. Faced with increasingly complex optimization problems, especially those in which continuous and discrete variables coexist and those that are multi-dimensional and nonlinear, swarm intelligence algorithms exhibit advantages such as robustness and economy [2]. The relevant theoretical achievements have been widely applied to path planning [3], machine learning [4], workshop scheduling [5], and other optimization problems. Over the past half century, many intelligent optimization algorithms have emerged. Kennedy et al. [6], inspired by the regularity of bird flocking behavior, proposed Particle Swarm Optimization (PSO). The Whale Optimization Algorithm (WOA) [7] simulates how whale pods search for, encircle, pursue, and attack prey to achieve the optimization objective. Arora et al. [8] imitated the foraging process of butterflies and proposed the Butterfly Optimization Algorithm (BOA). In addition, various emerging swarm intelligence algorithms, such as the Harris Hawks Optimizer (HHO) [9], Artificial Ecosystem-based Optimization (AEO) [10], and the African Vultures Optimization Algorithm (AVOA) [11], have been proposed successively and have attracted widespread attention.
The Hippopotamus Optimization Algorithm (HO) simulates hippopotamuses' defense and evasion strategies against predators to perform position updates. It has the advantages of high accuracy, strong local search ability, and good practicality [12]. Nevertheless, the algorithm still leaves significant room for improvement in global search, local exploitation, and avoidance of local optima. This article proposes a hybrid Improved Hippopotamus Optimization Algorithm (IHO), which combines Latin hypercube sampling, the benefit-seeking and harm-avoiding idea of the Jaya algorithm, and the three strategies of unordered dimensional sampling, random crossover, and sequential mutation. These improvements strengthen the global search capability of IHO and accelerate its convergence.
Power prediction is an important component of power generation planning and the foundation of the economic operation of the power system. Prediction accuracy plays a crucial role in the operation, maintenance, and planning of the entire power system [13]. The integration of diverse loads, such as large-scale distributed power sources and electric vehicle charging stations, has increased load volatility, temporal variability, and randomness, further increasing the difficulty of prediction [14]. Traditional prediction methods, such as regression analysis [15], time series methods [16], exponential smoothing [17], and Kalman filtering [18], have low prediction accuracy and cannot adapt to the nonlinear characteristics of the sequences. Considering the temporal and nonlinear characteristics of load sequences, scholars have conducted extensive in-depth research. The Back Propagation (BP) neural network is one of the most widely used neural network models; however, it suffers from slow training and easily becomes stuck in local minima [19]. The extreme learning machine (ELM), a single-hidden-layer feedforward neural network, has fewer model parameters, learns significantly faster than support vector machines (SVM) and traditional neural networks, and offers high generalization ability with small prediction error [20]. Therefore, this article chooses the ELM as the basis of the prediction model. Hyperparameter optimization for predictive models has been studied, but the optimization algorithms themselves often depend on uncertain control parameters. For example, the two learning factors of particle swarm optimization strongly affect its global optimization and local search abilities, leading to unstable prediction performance and poor model applicability. Likewise, the input weights and neuron thresholds of the ELM are randomly generated, which affects the accuracy and stability of the whole prediction. Therefore, this article uses the proposed IHO algorithm to optimize these two ELM parameters and improve the accuracy and stability of the ELM. We then feed the power and related data into the IHO-ELM model for training and prediction and compare it with other models such as the BP, ELM, CSO-ELM, and HO-ELM. Finally, the proposed model is applied to PV prediction in another region to further validate the generalization ability of the IHO-ELM model. The contributions of this paper are summarized as follows:
An Improved Hippopotamus Optimization Algorithm (IHO) is proposed to remedy the shortcomings of HO and improve its performance and convergence speed. Latin hypercube sampling is used to initialize a more uniform population, improving the global performance of the algorithm (a sketch of this initialization is given after this list); the Jaya algorithm is then used to improve solution quality, strengthen global search ability, and accelerate convergence; finally, a combination of unordered dimensional sampling, random crossover, and sequential mutation refines the optimization process of the HO algorithm.
The developed IHO is used to optimize the weight and threshold parameters in the extreme learning machine to improve its accuracy.
The proposed IHO-ELM is used to predict fluctuating solar photovoltaic output to validate the accuracy and generalization ability of the proposed optimization model.
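To make the population-initialization strategy concrete, the following is a minimal sketch (our own illustration under simple box constraints, not the authors' code; all names are hypothetical) of Latin hypercube initialization of the kind IHO uses in place of HO's purely random initialization:

```python
# Illustrative Latin hypercube initialization of a population.
import numpy as np

def lhs_init(pop_size, dim, lb, ub, rng):
    """Each dimension is split into pop_size equal strata; exactly one
    sample is drawn per stratum, then strata are shuffled per dimension."""
    pop = np.empty((pop_size, dim))
    for d in range(dim):
        # one uniform point inside each of the pop_size strata of [0, 1)
        strata = (np.arange(pop_size) + rng.random(pop_size)) / pop_size
        rng.shuffle(strata)  # decouple stratum order across dimensions
        pop[:, d] = lb[d] + strata * (ub[d] - lb[d])
    return pop

rng = np.random.default_rng(0)
population = lhs_init(30, 5, np.full(5, -10.0), np.full(5, 10.0), rng)
```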
4. Experiments and Results of the Algorithm
Nine benchmark functions, listed in Table 1, were selected for testing, and IHO was compared with five optimization algorithms: HO, ZOA [24], CSO [25], BWO [26], and BES [27]. The population size was N = 30, and the maximum number of iterations was T = 500. The performance of the algorithms is compared using the optimal value (Best), mean (Mean), standard deviation (Std), and convergence curves.
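This comparison protocol can be summarized by a small harness such as the following (a hypothetical sketch; the function and variable names are ours), which collects the Best, Mean, and Std statistics over repeated independent runs:

```python
# Hypothetical benchmark harness for comparing optimizers on a test function.
import numpy as np

def compare(algorithm, objective, runs=30, pop_size=30, max_iter=500):
    # `algorithm` is assumed to return the best objective value of one run.
    finals = np.array([algorithm(objective, pop_size, max_iter)
                       for _ in range(runs)])
    return {"Best": finals.min(), "Mean": finals.mean(), "Std": finals.std()}

# Example unimodal objective in the spirit of F1 (sphere function):
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
```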
Figure 4 presents the results of the six algorithms on the nine test functions. F1, F2, and F3 are unimodal functions used to test an algorithm's convergence rate and local search ability. Table 2 shows that IHO significantly outperformed the other five optimization algorithms on the unimodal functions: for F1, F2, and F3, both the optimal value and the mean of IHO are the best, demonstrating its excellent local search ability.
Multimodal test functions assess an algorithm's performance on multimodal, complex optimization problems. Specifically, F10, F12, and F13 are multimodal test functions that probe global search ability and the capacity to escape local optima. According to Table 2, compared with the other algorithms, IHO achieves the best mean on F10, F12, and F13 and finds the smallest optimal value on all three functions. On F12 and F13 in particular, the standard deviation and mean of IHO are orders of magnitude smaller than those of the remaining algorithms. Comparing the optimization results on the multimodal problems shows that IHO has good global search ability and can escape local optima.
A composite benchmark function is composed of multiple simple functions: each component is a simple univariate function, but their combination forms a high-dimensional composite function. Such functions test how well algorithms tolerate infeasible solutions and their ability to solve large-scale optimization problems. IHO also showed a clear advantage on these three composite functions, with its mean, optimal value, and standard deviation all being the lowest among the compared algorithms.
5. Photovoltaic Power Prediction Based on Objective Optimization IHO-ELM
5.1. Extreme Learning Machine (ELM)
The ELM algorithm randomly assigns the input weights and hidden-layer biases, solving the problems of slow learning, long iteration times, and the need to manually set learning parameters in traditional neural networks [28]. Assume a set of training samples $\{(x_i, t_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^n$ are the input vectors of the network samples and $t_i \in \mathbb{R}^m$ are the output vectors of the network. The general form of the standard ELM with $L$ hidden-layer neurons is shown in Equation (25):

$$\sum_{j=1}^{L} \beta_j \, g\left(w_j \cdot x_i + b_j\right) = t_i, \quad i = 1, 2, \ldots, N \tag{25}$$

In the formula, $\beta_j$ are the output weights, $g(\cdot)$ is the activation function, $w_j$ is the input weight of the network, and $b_j$ is the threshold of the hidden-layer unit. However, the connection weights and thresholds between the input layer and the hidden layer of the ELM affect its prediction accuracy. Therefore, this paper uses the IHO to optimize these two parameters.
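For illustration, a minimal NumPy implementation of the standard ELM of Equation (25) might look as follows (our own sketch, assuming a sigmoid activation; in the optimized variant, IHO would supply w and b instead of drawing them at random):

```python
# Minimal single-hidden-layer ELM: random input weights w and thresholds b,
# output weights beta solved in closed form via the Moore-Penrose pseudo-inverse.
import numpy as np

class ELM:
    def __init__(self, n_hidden, rng=None):
        self.L = n_hidden
        self.rng = rng or np.random.default_rng()

    def fit(self, X, Y):
        n_in = X.shape[1]
        self.w = self.rng.uniform(-1.0, 1.0, (n_in, self.L))  # input weights
        self.b = self.rng.uniform(-1.0, 1.0, self.L)          # hidden thresholds
        H = 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))      # hidden output matrix
        self.beta = np.linalg.pinv(H) @ Y                     # output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))
        return H @ self.beta
```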
The steps for optimizing the weight and threshold parameters of the ELM using the IHO algorithm are as follows; the flowchart is shown in Figure 5.
Step 1: Initialize the population and randomly set the initial position and velocity of each individual in the population.
Step 2: Calculate the fitness of each individual.
Step 3: Compare each individual's fitness with that of its historical best position, and update the historical best position if the current position is better.
Step 4: Compare each individual's fitness with that of the population's global best position, and update the global best position if the current position is better.
Step 5: Adjust the positions and velocities.
Step 6: If the end condition is met, i.e., the position is good enough or the maximum number of iterations is reached, the iteration ends. Otherwise, return to Step 2 and continue iterating. A sketch of the fitness evaluation used in these steps is given below.
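A plausible fitness function for this loop, assuming the candidate solution encodes the flattened input weights and thresholds and the training RMSE is minimized (our formulation; the paper's exact fitness may differ), is sketched here:

```python
# Fitness wrapper used when IHO tunes the ELM's input weights and thresholds.
import numpy as np

def make_fitness(X, Y, n_hidden):
    n_in = X.shape[1]

    def fitness(candidate):
        # decode the candidate vector into input weights w and thresholds b
        w = candidate[: n_in * n_hidden].reshape(n_in, n_hidden)
        b = candidate[n_in * n_hidden:]
        H = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        beta = np.linalg.pinv(H) @ Y        # closed-form output weights
        err = H @ beta - Y
        return float(np.sqrt(np.mean(err ** 2)))  # training RMSE

    return fitness

# Search-space dimension seen by the optimizer: n_in * n_hidden + n_hidden
```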
5.2. Example Analysis
To verify the validity of the proposed method, PV power data from the Yulara Solar System (38.3 kW, mono-Si, roof-mounted, 2016, Sails in the Desert-2), from 23:45 on 1 April to 3:35 on 6 April 2016, were selected for verification (Data Download | DKA Solar Centre). The sampling interval was 5 min, giving a total of 1200 sampling points; these were combined with the corresponding meteorological data, including temperature, radiation, humidity, and other indicators, to form the example data set.
5.3. Evaluation Indicators
MAPE (mean absolute percentage error), MSE (mean squared error), MAE (mean absolute error), and RMSE (root mean squared error) were selected to evaluate the prediction accuracy of the model:

$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{\hat{y}_i - y_i}{y_i}\right|, \quad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2, \quad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|, \quad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}$$

The coefficient of determination $R^2$ is as follows:

$$R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}} = 1 - \frac{\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$

where $\hat{y}_i$ is the predicted value of PV power at the $i$-th prediction point and $y_i$ is the true value of PV power at the $i$-th prediction point; the total sum of squares (TSS) measures the variance of the true values $y$, and the residual sum of squares (RSS) measures the unexplained variation. The value range of the coefficient of determination $R^2$ is [0, 1]; the closer the value is to 1, the more of the data variance is explained by the model and the better the performance.
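For reference, the five indicators reduce to a few lines of NumPy under their standard definitions (our sketch; the MAPE term assumes no zero-power points are included):

```python
# Standard definitions of the evaluation indicators used in this section.
import numpy as np

def metrics(y_true, y_pred):
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    rss = np.sum(err ** 2)
    tss = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "MAE": np.mean(np.abs(err)),
        "MAPE": np.mean(np.abs(err / y_true)) * 100.0,  # assumes y_true != 0
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "R2": 1.0 - rss / tss,
    }
```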
5.4. Results and Discussion
Before making predictions, the sample data are first normalized, as shown in Equation (31):

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \tag{31}$$

where $x'$ is the normalized value, $x$ is the raw data to be normalized, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum of the raw values, respectively.
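In code, Equation (31) and its inverse (needed to map predictions back to the original scale) reduce to a minimal sketch such as the following:

```python
# Min-max normalization per Equation (31), with the inverse transform.
import numpy as np

def minmax_scale(x):
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    scaled = (x - x_min) / (x_max - x_min)
    inverse = lambda s: s * (x_max - x_min) + x_min
    return scaled, inverse
```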
The flow chart of power prediction with the IHO-ELM is shown in Figure 6.
The computational efficiency of an optimization algorithm is a common measure of its performance, so this article first analyzes the efficiency of the algorithms. When optimizing the connection weights and thresholds of the ELM model, the population size of each optimization algorithm was set to 20 and the maximum number of iterations to 150. The fitness values of the proposed IHO algorithm were compared with those of the HO and CSO algorithms; the results are shown in Figure 7. According to Figure 7, the CSO algorithm converges after about 94 iterations with a fitness of approximately 0.013988; the HO algorithm converges after approximately 68 iterations with a fitness of approximately 0.0044711; and the proposed IHO algorithm converges after approximately 26 iterations with a fitness of approximately 0.003473. The IHO algorithm thus converges faster and has stronger optimization ability, which increases the probability of escaping local optima, finds better solutions, and improves computational efficiency.
Secondly, to assess the performance of the proposed model, we compared it with three models from the two perspectives of optimization algorithm and single model. For the single model, a standalone ELM is selected for comparison. For the optimization algorithms, the CSO-optimized ELM and the ELM optimized by the unimproved HO algorithm are compared. The comparison results are presented in Table 3 and Table 4.
Table 3 shows the performance of different comparison models, and the results are analyzed as follows:
- (1)
Compared with the ELM, CSO-ELM, and HO-ELM, the IHO-ELM prediction model achieved the best accuracy, with MAPE, RMSE, MAE, and MSE values of 11.6626%, 0.06129, 0.047593, and 0.0037564, respectively.
- (2)
From Figure 8, Figure 9, Figure 10 and Figure 11, the accuracy of the ELM models that use optimization methods is significantly higher than that of the single ELM model without optimization, indicating the necessity of optimizing the connection weights and thresholds of the ELM model.
- (3)
The optimized models achieve better accuracy than the unoptimized model, and the proposed model outperforms the traditional CSO-optimized model. From Table 4, the MAPE, RMSE, MAE, and MSE indicators of the IHO-ELM were reduced by 17.62%, 21.61%, 22.78%, and 38.5%, respectively, compared to the HO-ELM, significantly outperforming the unimproved HO optimization model.
5.5. Further Analysis
To further validate the performance and generalization ability of the proposed model in improving prediction accuracy, additional PV power data from Yulara 3 (mono-Si, roof-mounted, 2016, Sails in the Desert-3 | DKA Solar Centre), covering 00:00 on 1 July to 9:55 on 11 July 2016, were selected. Sampling was conducted every 15 min, resulting in a total of 3000 sampling points. The first 80% of the data were used as the training set and the remaining 20% as the testing set. The decomposition steps and experimental environment are the same as above, and the results obtained are shown in Table 5 and Table 6. The predicted results are presented more intuitively in Figure 12, Figure 13, Figure 14 and Figure 15.
According to Table 5, the proposed model's MAE, MAPE, MSE, and RMSE are 0.041205, 0.69879, 0.0044853, and 0.066973, respectively, which are still superior to those of the other benchmark models. Compared to the HO-ELM, these indicators improved by 19.9%, 4.1%, 25%, and 13.4%, respectively. The proposed IHO algorithm can thus improve the prediction accuracy and has good optimization accuracy.