Article

Novel Hybrid Optimization Technique for Solar Photovoltaic Output Prediction Using Improved Hippopotamus Algorithm

by Hongbin Wang, Nurulafiqah Nadzirah Binti Mansor *,† and Hazlie Bin Mokhlis
Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2024, 14(17), 7803; https://doi.org/10.3390/app14177803
Submission received: 10 July 2024 / Revised: 19 August 2024 / Accepted: 29 August 2024 / Published: 3 September 2024

Abstract
This paper introduces a novel hybrid optimization technique aimed at improving the prediction accuracy of solar photovoltaic (PV) outputs using an Improved Hippopotamus Optimization Algorithm (IHO). The IHO enhances the traditional Hippopotamus Optimization (HO) algorithm by addressing its limitations in search efficiency, convergence speed, and global exploration. The IHO algorithm uses Latin hypercube sampling (LHS) for population initialization, significantly enhancing the diversity and global search potential of the optimization process. The integration of the Jaya algorithm further refines solution quality and accelerates convergence. Additionally, a combination of unordered dimensional sampling, random crossover, and sequential mutation is employed to enhance the optimization process. The effectiveness of the proposed IHO is demonstrated through the optimization of weights and neuron thresholds in the extreme learning machine (ELM), a model known for its rapid learning capabilities but often affected by the randomness of initial parameters. The IHO-optimized ELM (IHO-ELM) is tested against benchmark models, including BP, the traditional ELM, the HO-ELM, TCN, and LSTM, showing significant improvements in prediction accuracy and stability. Moreover, the IHO-ELM model is validated in a different region to assess its generalization ability for solar PV output prediction. The results confirm that the proposed hybrid approach not only improves prediction accuracy but also demonstrates robust generalization capabilities, making it a promising tool for predictive modeling in solar energy systems.

1. Introduction

Swarm intelligence is a significant concept in the field of artificial intelligence, and existing theoretical and applied research has shown that population-based metaheuristic algorithms are an effective method for solving optimization problems [1]. A swarm intelligence algorithm simulates the behavior of a biological population or a natural phenomenon: a group of simple individuals follows specific interaction mechanisms to solve a given complex optimization problem. Faced with increasingly complex optimization problems, especially those in which continuous and discrete variables coexist and those that are multi-dimensional and nonlinear, swarm intelligence algorithms exhibit advantages such as robustness and economy [2]. The relevant theoretical achievements have been widely applied in path planning [3], machine learning [4], workshop scheduling [5], and other optimization problems. Over the past half century, many intelligent optimization algorithms have emerged. Kennedy et al. [6], inspired by the regularity of bird flocking, proposed Particle Swarm Optimization (PSO). The Whale Optimization Algorithm (WOA) [7] simulates how whale swarms search for, encircle, pursue, and attack prey to achieve optimization objectives. Arora et al. [8] imitated the foraging process of butterflies and proposed the Butterfly Optimization Algorithm (BOA). In addition, various emerging swarm intelligence algorithms, such as the Harris Hawks Optimizer (HHO) [9], Artificial Ecosystem-based Optimization (AEO) [10], and the African Vultures Optimization Algorithm (AVOA) [11], have been proposed successively and have attracted widespread attention.
The Hippopotamus Optimization (HO) algorithm simulates hippopotamuses' defense and evasion strategies against predators to perform location updates. It has the advantages of high accuracy, strong local search ability, and good practicality [12]. Nevertheless, the algorithm still leaves significant room for improvement in global search, local exploitation, and the avoidance of local optima. This article proposes a hybrid Improved Hippopotamus Optimization Algorithm (IHO), which combines Latin hypercube sampling, the Jaya algorithm's idea of seeking benefits and avoiding harm, and the three strategies of unordered dimensional sampling, random crossover, and sequential mutation. These improvements strengthen its global search capability and accelerate its convergence.
Power prediction is an important component of power generation planning and the foundation of the economic operation of a power system. Prediction accuracy plays a crucial role in the operation, maintenance, and planning of the entire power system [13]. The integration of diverse loads, such as large-scale distributed power sources and electric vehicle charging stations, has brought about greater load volatility, temporal variability, and randomness, further increasing the difficulty of prediction [14]. Traditional prediction methods, such as regression analysis [15], the time series method [16], the exponential smoothing method [17], and the Kalman filtering method [18], have low prediction accuracy and cannot adapt to the nonlinear characteristics of the sequences. Considering the temporal and nonlinear characteristics of load sequences, scholars have conducted extensive in-depth research. The Back Propagation (BP) neural network is one of the most widely used neural network models; however, it suffers from slow training and easily becomes stuck in local minima [19]. The extreme learning machine (ELM), a single-hidden-layer feedforward neural network, has fewer model parameters, a learning speed significantly better than those of support vector machines (SVM) and traditional neural networks, high generalization ability, and small prediction errors [20]. Therefore, this article chooses the ELM as the basis for the prediction model. Hyperparameter optimization in predictive models has been studied, but the optimization algorithms themselves often contain uncertain control parameters; for example, the two learning factors of the particle swarm optimization algorithm greatly affect its global optimization and local search abilities, leading to unstable prediction performance and poor model applicability. The weights and neuron thresholds in the ELM are randomly generated, which affects the accuracy and stability of the entire prediction. Therefore, this article uses the proposed IHO algorithm to optimize these two parameters of the ELM to improve its accuracy and stability. The power and related data are then input into the IHO-ELM model for training and prediction, and the model is compared and analyzed against other models such as BP, the ELM, the CSO-ELM, and the HO-ELM. Finally, the proposed model is used for PV prediction in another region to further validate the generalization ability of the IHO-ELM model in improving prediction accuracy. The contributions of this paper are summarized as follows:
An Improved Hippopotamus Optimization Algorithm (IHO) is proposed to address the shortcomings of HO and improve its performance and convergence speed. The Latin hypercube sampling method is used to obtain a more uniform initial population, thus improving the global performance of the algorithm. The Jaya algorithm is then used to improve the quality of the solutions, strengthen global search ability, and accelerate the convergence of the algorithm. Finally, a combination of unordered dimensional sampling, random crossover, and sequential mutation is used to improve the optimization process of the HO algorithm.
The developed IHO is used to optimize the weight and threshold parameters in the extreme learning machine to improve its accuracy.
The proposed IHO-ELM is used to predict solar photovoltaic fluctuating outputs to validate the proposed optimization model’s accuracy and generalization ability.

2. Hippopotamus Optimization Algorithm

HO is a novel metaheuristic algorithm (intelligent optimization algorithm) inspired by the inherent behavior of hippopotamuses. The flowchart is shown in Figure 1.

2.1. Population Initialization

The initial solutions of HO are generated randomly, using the following formula to produce a vector of decision variables:
$$x_{ij} = lb_j + r \times \left( ub_j - lb_j \right) \tag{1}$$
wherein $x_{ij}$ indicates the position of the $i$-th candidate solution in the $j$-th dimension, $r$ is a random number in (0, 1), and $lb_j$ and $ub_j$ represent the lower and upper bounds of the $j$-th decision variable.
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_i \\ \vdots \\ x_n \end{bmatrix}_{n \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{n,1} & \cdots & x_{n,j} & \cdots & x_{n,m} \end{bmatrix}_{n \times m} \tag{2}$$
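For concreteness, the following is a minimal Python sketch of the random initialization in Equations (1) and (2); the function and variable names are illustrative and not taken from the authors' code.

```python
import numpy as np

def init_population(n, m, lb, ub, rng=np.random.default_rng()):
    """Random initialization per Eqs. (1)-(2): n candidate solutions with
    m decision variables, each drawn uniformly from [lb_j, ub_j]."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    r = rng.random((n, m))        # r ~ U(0, 1), one draw per variable
    return lb + r * (ub - lb)     # x_ij = lb_j + r * (ub_j - lb_j)

# Example: a population of 30 candidates in a 10-dimensional space over [-100, 100]
pop = init_population(30, 10, [-100.0] * 10, [100.0] * 10)
```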

2.2. Location Update (Exploration Phase)

As adults, male hippopotamuses will be driven away by the dominant male. Then, they will continue to compete with other male hippopotamuses and establish their own advantages. The following equation represents the position of a male hippopotamus:
$$x_{ij}^{M} = x_{ij} + y_1 \left( D - I_1 x_{ij} \right) \tag{3}$$
In Equation (3), $x_{ij}^{M}$ indicates the position of the male hippopotamus, $y_1$ is a random number in [0, 1], and $D$ indicates the position of the dominant hippopotamus. $I_1$ and $I_2$ are random integers in {1, 2} (Equations (3) and (6)).
$$h = \begin{cases} I_2 \times r_1 + (\sim Q_1) \\ 2 \times r_2 - 1 \\ r_3 \\ I_1 \times r_4 + (\sim Q_2) \\ r_5 \end{cases} \tag{4}$$
$$T = \exp\left( -\frac{l}{\Gamma} \right) \tag{5}$$
$$x_{ij}^{FB} = \begin{cases} x_{ij} + h_1 \left( D - I_2\, MG_1 \right), & T > 0.6 \\ E, & \text{else} \end{cases} \tag{6}$$
$$E = \begin{cases} x_{ij} + h_2 \left( MG_1 - D \right), & r_6 > 0.5 \\ lb_j + r_7 \left( ub_j - lb_j \right), & \text{else} \end{cases} \tag{7}$$
$r_1$–$r_4$ are random vectors in [0, 1] and $r_5$ is a random number in [−1, 1] (Equation (4)); $T$ in Equation (6) is the threshold computed by Equation (5), which takes values in (0, 1].
Equations (6) and (7) describe the positions of female or immature hippopotamuses in the population.
$r_6$ is a random number in [0, 1] (Equation (7)); if $r_6$ is greater than 0.5, the immature hippopotamus is still near the population; otherwise, it has already moved far away from the population. $h_1$ and $h_2$ are numbers or vectors randomly selected from Equation (4). In Equation (7), $r_7$ is a random number in [0, 1].
$$x_i = \begin{cases} x_i^{M}, & F_i^{M} < F_i \\ x_i, & \text{else} \end{cases} \tag{8}$$
$$x_i = \begin{cases} x_i^{FB}, & F_i^{FB} < F_i \\ x_i, & \text{else} \end{cases} \tag{9}$$
The updated positions of the male hippopotamuses within the population, as well as those of the other vulnerable (female and immature) hippopotamuses, are selected by Equations (8) and (9), where $F_i$ represents the value of the objective function.

2.3. Hippo Defense against Predators (Exploration Phase)

Vulnerable hippopotamuses in the population may leave the group for various reasons and easily become targets of attack by large predators.
$$P_j = lb_j + r_8 \left( ub_j - lb_j \right) \tag{10}$$
$$\overrightarrow{D} = \left| P_j - x_{ij} \right| \tag{11}$$
Equation (11) represents the distance from the predator to the $i$-th hippopotamus. Meanwhile, the hippopotamus adopts defensive behavior based on $F_{P_j}$ to protect itself from the predator; $r_8$ is a random vector in [0, 1]. If $F_{P_j}$ is less than $F_i$, the hippopotamus is at risk of predation, in which case it quickly moves toward the predator and forces it to retreat. If $F_{P_j}$ is larger, Equation (12) indicates that the predator or invading hippopotamus is farther away; in this case, the hippopotamus does not move toward the predator and only moves within a limited range, its purpose being to warn the predator or invader not to enter its territory.
$$x_{ij}^{HR} = \begin{cases} \overrightarrow{RL} \cdot P_j + \left( \dfrac{f}{\delta - d \cos(2\pi y)} \right) \left( \dfrac{1}{D} \right), & F_{P_j} < F_i \\[2mm] \overrightarrow{RL} \cdot P_j + \left( \dfrac{f}{\delta - d \cos(2\pi y)} \right) \left( \dfrac{1}{2 \times D + r_9} \right), & F_{P_j} \geq F_i \end{cases} \tag{12}$$
$x_{ij}^{HR}$ is the position of a hippopotamus when facing a predator, and $\overrightarrow{RL}$ represents the Levy movement of the predator when attacking the hippopotamus. The Levy distribution is calculated by Equation (13), where $\omega$ and $\upsilon$ are random numbers in [0, 1] and $\sigma_\omega$ is obtained from Equation (14).
$$L(\upsilon) = 0.05 \times \frac{\omega \times \sigma_\omega}{|\upsilon|^{\frac{1}{\upsilon}}} \tag{13}$$
$$\sigma_\omega = \left[ \frac{\Gamma(1+\upsilon) \sin\left( \frac{\pi \upsilon}{2} \right)}{\Gamma\left( \frac{1+\upsilon}{2} \right) \upsilon \times 2^{\frac{\upsilon - 1}{2}}} \right]^{\frac{1}{\upsilon}} \tag{14}$$
In Equation (12), $f$ is a random number in [2, 4], $d$ is a random number in [1, 1.5], $D$ is a random number in [2, 3], and $\delta$ is a random number in [−1, 1]; $r_9$ is an m-dimensional random vector. From Equation (15), if $F_i^{HR}$ is smaller than $F_i$, the position of the hippopotamus is replaced; otherwise, the hippopotamus returns to the population.
$$x_i = \begin{cases} x_i^{HR}, & F_i^{HR} < F_i \\ x_i, & F_i^{HR} \geq F_i \end{cases} \tag{15}$$

2.4. Hippo Escape from Predators (Development Phase)

Another behavior of hippopotamuses when facing predators is to escape and seek a safer position nearby.
A random position is generated near the current position of the hippopotamus, and this behavior is modeled by Equations (16)–(19). $T$ represents the maximum number of iterations, while $t$ represents the current iteration. If the newly generated position reduces $F_i$, it means that the hippopotamus has found a safer location nearby and moves to it.
$$lb_j^{local} = \frac{lb_j}{t}, \quad ub_j^{local} = \frac{ub_j}{t}, \quad t = 1, 2, \ldots, T \tag{16}$$
$$x_{ij}^{HE} = x_{ij} + r_{10} \left( lb_j^{local} + s_1 \left( ub_j^{local} - lb_j^{local} \right) \right) \tag{17}$$
In Equation (17), $x_{ij}^{HE}$ is the position obtained when the hippopotamus searches for the nearest safe location, with $s_1$ selected from Equation (18). This scheme gives the algorithm stronger local search capability.
$$s_1 = \begin{cases} 2 \times r_{11} - 1 \\ r_{12} \\ r_{13} \end{cases} \tag{18}$$
In Equation (18), $r_{11}$ is a random vector in [0, 1], $r_{10}$ and $r_{13}$ are random numbers in [0, 1], and $r_{12}$ is a random variable following a normal distribution.
$$x_i = \begin{cases} x_i^{HE}, & F_i^{HE} < F_i \\ x_i, & F_i^{HE} \geq F_i \end{cases} \tag{19}$$
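To make the escape phase concrete, the following is a minimal Python sketch of Equations (16)–(19), assuming a minimization objective; the greedy replacement at the end corresponds to Equation (19). All names are illustrative, and the boundary clipping is an added assumption.

```python
import numpy as np

def escape_phase(pos, fitness, obj, lb, ub, t, rng=np.random.default_rng()):
    """HO escape step (Eqs. (16)-(19)) for a population pos of shape (n, m)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    lb_local, ub_local = lb / t, ub / t              # Eq. (16): shrinking local bounds
    for i in range(len(pos)):
        # Eq. (18): s1 is drawn from one of three random forms
        s1 = rng.choice([2 * rng.random() - 1,       # 2*r11 - 1
                         rng.normal(),               # r12 ~ N(0, 1)
                         rng.random()])              # r13
        # Eq. (17): candidate position near the current one
        cand = pos[i] + rng.random() * (lb_local + s1 * (ub_local - lb_local))
        cand = np.clip(cand, lb, ub)                 # keep within the search space
        f_cand = obj(cand)
        if f_cand < fitness[i]:                      # Eq. (19): greedy replacement
            pos[i], fitness[i] = cand, f_cand
    return pos, fitness
```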

3. Multi-Strategy Improved HO

The multi-strategy improved HO uses Latin hypercube sampling instead of random initialization in the initialization stage, which generates a more uniform initial population than random numbers. In the exploitation stage of escaping from predators, the Jaya idea of seeking benefits and avoiding harm is adopted, so that safer locations are constantly approached while the location of the predator is avoided. In the final selection stage, the population generated by unordered dimensional sampling, random crossover, and sequence mutation is compared with the most recently generated optimal value to select the result. The flowchart of the improved HO is shown in Figure 2.

3.1. Using Latin Hypercube Sampling to Improve Population Diversity

Latin hypercube sampling (LHS) is a method of uniform sampling in a multi-dimensional space, used to generate a set of samples that are as uniform as possible and as rarely repeated as possible in each parameter space [21]. The process of Latin hypercube sampling is as follows:
(1) Divide [0, 1] evenly into n equal intervals and randomly select one point within each interval.
(2) Randomly permute the positions of the sample points so that the sample points on each parameter axis are evenly distributed and not repeated.
The HO uses Latin hypercube samples drawn uniformly from the interval (0, 1) to initialize the population positions, as shown in the following formula:
$$Coot_{pos}(i) = (ub - lb) \cdot lhs(1, D) + lb \tag{20}$$
Here, $lhs(\cdot)$ is the Latin hypercube sampling function, which returns a $1 \times D$ matrix. LHS samples are more evenly distributed than random ones, so initializing the population with LHS helps the algorithm avoid becoming trapped in local optimal solutions during the initialization phase and improves its global search capability. Moreover, the wider initial distribution of the population also helps the algorithm explore favorable regions of the search space faster and improves the efficiency of finding the optimal solution.
To demonstrate this, Figure 3 shows a statistical histogram comparison of random initialization and Latin hypercube sampling initialization, with the number of generated samples set to 1000. The results show that the distribution of values initialized using LHS is more uniform and the histogram is more even.
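The following is a minimal Python sketch of LHS initialization under the stratify-and-shuffle scheme described above; the names are illustrative and unrelated to the authors' implementation.

```python
import numpy as np

def lhs_init(n, m, lb, ub, rng=np.random.default_rng()):
    """Latin hypercube initialization: split [0, 1] into n equal strata per
    dimension, draw one point in each stratum, then shuffle the strata
    independently per dimension so every axis is covered evenly."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    samples = np.empty((n, m))
    for j in range(m):
        samples[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return lb + samples * (ub - lb)   # map the unit hypercube onto [lb, ub]

pop = lhs_init(30, 10, [-100.0] * 10, [100.0] * 10)
```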

3.2. The Jaya Idea of Seeking Benefits and Avoiding Harm

The Jaya algorithm was proposed by Rao et al. [22] as a new metaheuristic algorithm built on the idea of continuous improvement: continuously approaching the optimal individual while moving away from the worst individual, thereby continuously improving the quality of the solution. The Jaya algorithm has fewer control parameters and stronger global search ability than many other intelligent optimization algorithms and is widely applied to parameter optimization, scheduling, and other fields. The formula is as follows:
$$h'_{m,n,k} = h_{m,n,k} + r_1 \left( h_{m,best,k} - \left| h_{m,n,k} \right| \right) - r_2 \left( h_{m,worst,k} - \left| h_{m,n,k} \right| \right) \tag{21}$$
wherein $h_{m,n,k}$ is the value of the $m$-th variable of individual $n$ in the $k$-th iteration; $r_1$ and $r_2$ are two random variables in [0, 1]; $h'_{m,n,k}$ is the updated value of the $m$-th variable of individual $n$ in the $k$-th iteration; $h_{m,best,k}$ is the $m$-th variable value of the optimal individual in the $k$-th iteration; and $h_{m,worst,k}$ is the $m$-th variable value of the worst individual in the $k$-th iteration.
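A vectorized Python sketch of the Jaya update in Equation (21); pop holds the population while best and worst are the current best and worst individuals (names are illustrative).

```python
import numpy as np

def jaya_step(pop, best, worst, rng=np.random.default_rng()):
    """Jaya update (Eq. (21)): move toward the best individual and away
    from the worst one, with fresh random weights per variable."""
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    return pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
```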

3.3. Smooth Development Variation

Smooth development variation [23] includes unordered dimensional sampling, random crossover, and sequence mutation. In the fourth stage of the HO algorithm, this article utilizes smooth development mutation to find the minimum value and enhance the convergence efficiency and optimization accuracy of the algorithm.

3.3.1. Random Sampling

Random dimensional sampling can also prevent a loss of population diversity and promotes vectorization in programming, reducing runtime.
$$rate = \mathrm{ceil}\left( \max\left( \frac{t}{maxIter},\ e \right) \right) \times dimension \tag{22}$$
$rate$ is the sampling rate, $t$ is the current iteration, $maxIter$ is the maximum number of iterations, and $dimension$ is the dimension of each position vector.

3.3.2. Random Crossover

Random crossover can improve exploration ability by recombining randomly selected individuals.
$$h_i(t+1) = h_{r_1} - \left( h_{r_3} - h_{r_2} \right), \quad r_1, r_2, r_3 \leq rate \ \ \text{and} \ \neq i \tag{23}$$
where $r_1$, $r_2$, and $r_3$ are randomly selected indexes.

3.3.3. Sequence Mutation

Random crossover relies on two random individuals; when the search radius is too small, it can prematurely fall into local optima. We therefore use sequence mutation to address this issue, so that sequence mutation and random crossover complement each other and jointly improve exploration ability.
$$h_i(t+1) = \left( h_i(t) + h_{i-1}(t) \right) / 2 \tag{24}$$
where $h_i(t)$ is the $i$-th individual, $h_{i-1}(t)$ is the preceding individual, and $h_i(t+1)$ is the new position vector.
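A minimal Python sketch of the two variation operators, under the reconstructed forms of Equations (23) and (24); the index handling and the sign convention in the crossover are assumptions.

```python
import numpy as np

def random_crossover(pop, i, rate, rng=np.random.default_rng()):
    """Eq. (23): recombine three distinct randomly chosen individuals whose
    indexes lie within the sampled range 'rate' and differ from i."""
    candidates = [k for k in range(min(rate, len(pop))) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] - (pop[r3] - pop[r2])

def sequence_mutation(pop, i):
    """Eq. (24): average the i-th individual with its predecessor."""
    return (pop[i] + pop[i - 1]) / 2
```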

4. Experiments and Results of the Algorithm

Nine functions were selected from the benchmark test set, as shown in Table 1, and IHO was compared with five optimization algorithms: HO, ZOA [24], CSO [25], BWO [26], and BES [27]. The population size was N = 30 and the maximum number of iterations was T = 500. The performance of the algorithms is compared in terms of the optimal value (Best), mean (Mean), standard deviation (Std), and convergence curves.
Figure 4 presents the running results of the six algorithms in nine test functions.
F1, F2, and F3 are unimodal functions used to test the convergence rate and local search ability of the algorithms. Table 2 shows that IHO significantly outperformed the other five optimization algorithms on the unimodal functions: for F1, F2, and F3, both the optimal value and the mean value of IHO are the best, demonstrating its excellent local search ability.
Multimodal test functions are used to test an algorithm's performance in solving multimodal, complex optimization problems. Specifically, F10, F12, and F13 are multimodal test functions that probe global search ability and the capacity to jump out of local optimal solutions. According to Table 2, IHO achieves the best mean on F10, F12, and F13 compared with the other algorithms and finds the smallest optimal value on all three test functions. On F12 and F13 in particular, the standard deviation and mean of IHO differ from those of the remaining algorithms by orders of magnitude. Comparing the optimization results on the multimodal problems shows that IHO has good global search ability and can jump out of local optima.
The composite benchmark functions are composed of multiple simple functions; each simple function is univariate, but together they form a high-dimensional composite function. Such functions test how well algorithms tolerate infeasible solutions and their ability to solve large-scale optimization problems. IHO also showed a significant advantage on these three composite functions, with the mean, optimal value, and standard deviation being the best among all compared algorithms.

5. Photovoltaic Power Prediction Based on the IHO-Optimized ELM (IHO-ELM)

5.1. Extreme Learning Machine (ELM)

The ELM algorithm randomly assigns the hidden-layer parameters and input weight biases, solving the problems of slow learning rates, long iteration times, and the need to manually set learning parameters in traditional neural networks [28]. Assume there is a set of training samples $(h_i, l_i)$, where $h_i = (h_{i1}, h_{i2}, \ldots, h_{in})^T \in R^n$ is the input vector of a network sample and $l_i = (l_{i1}, l_{i2}, \ldots, l_{im})^T \in R^m$ is the output vector of the network. The general form of the standard ELM with $L$ hidden-layer neurons is shown in Equation (25).
$$f_L(h) = \sum_{i=1}^{L} \beta_i G_i(h) = \sum_{i=1}^{L} \beta_i G_i\left( a_i, b_i, H \right) \tag{25}$$
In the formula, $\beta_i$ are the output weights, $G_i(a_i, b_i, H)$ is the activation function, $a_i$ is the input weight of the network, and $b_i$ is the threshold of the $i$-th hidden-layer unit. However, the connection weights and thresholds between the hidden layer and the input layer of the ELM affect its prediction accuracy; therefore, this paper uses the IHO to optimize these two parameters.
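The closed-form ELM training step can be sketched in a few lines of Python; the sigmoid activation and the [−1, 1] initialization ranges are assumptions for illustration, and a and b may instead be supplied by an optimizer such as the IHO.

```python
import numpy as np

def elm_train(X, Y, n_hidden, a=None, b=None, rng=np.random.default_rng()):
    """Train a single-hidden-layer ELM: input weights a and thresholds b are
    random unless given; output weights beta are solved in closed form."""
    n, d = X.shape
    a = rng.uniform(-1, 1, (d, n_hidden)) if a is None else a
    b = rng.uniform(-1, 1, n_hidden) if b is None else b
    G = 1.0 / (1.0 + np.exp(-(X @ a + b)))   # hidden-layer output (sigmoid)
    beta = np.linalg.pinv(G) @ Y             # least-squares output weights
    return a, b, beta

def elm_predict(X, a, b, beta):
    G = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return G @ beta
```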
The steps to optimize the weight and threshold parameters in the ELM using the IHO optimization algorithm are as follows; the flowchart is shown in Figure 5.
Step 1: Initialize the population and randomly set the initial position and velocity of each individual in the population.
Step 2: Determine fitness and calculate the fitness of each individual.
Step 3: Compare the fitness of each individual with its historical best position and update its position if the new position is better.
Step 4: Compare the fitness of each individual with the global optimal position of the population and update the global optimum if the new position is better.
Step 5: Adjust position and velocity.
Step 6: If the end condition is met, that is, the position is good enough or the maximum number of iterations is reached, the iteration ends. Otherwise, return to Step 2.
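In this scheme, each IHO individual encodes a full set of ELM input weights and thresholds, and its fitness is the resulting prediction error. A hedged Python sketch, reusing the elm_train/elm_predict helpers above; the flat encoding and the hypothetical iho_minimize driver are illustrative assumptions.

```python
import numpy as np

def elm_fitness(theta, X_tr, y_tr, X_val, y_val, n_hidden):
    """Decode a candidate vector theta into ELM input weights and thresholds
    and return the validation RMSE, which IHO minimizes."""
    d = X_tr.shape[1]
    a = theta[: d * n_hidden].reshape(d, n_hidden)   # input weights
    b = theta[d * n_hidden :]                        # hidden-layer thresholds
    _, _, beta = elm_train(X_tr, y_tr, n_hidden, a=a, b=b)
    pred = elm_predict(X_val, a, b, beta)
    return np.sqrt(np.mean((pred - y_val) ** 2))

# theta has length d * n_hidden + n_hidden; e.g. (hypothetical driver):
# best_theta = iho_minimize(lambda th: elm_fitness(th, X_tr, y_tr, X_val, y_val, 20),
#                           dim=X_tr.shape[1] * 20 + 20, bounds=(-1.0, 1.0))
```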

5.2. Example Analysis

To verify the validity of the proposed method, PV power data from the Yulara Solar System (38.3 kW, mono-Si, roof-mounted, 2016, Sails in the Desert-2), from 1 April 23:45 to 6 April 3:35 in 2016, were selected for verification (Data Download | DKA Solar Centre). The sampling interval was 5 min, giving a total of 1200 sampling points; these were combined with the corresponding meteorological data, including temperature, radiation, humidity, and other indicators, to form the example data set.

5.3. Evaluation Indicators

MAPE (mean absolute percentage error), MSE (mean square error), MAE (mean absolute error), and RMSE (root mean square error), together with the coefficient of determination R², were selected to evaluate the model's prediction accuracy, as follows:
$$e_{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{a_i - b_i}{b_i} \right| \times 100\% \tag{26}$$
$$e_{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| a_i - b_i \right| \tag{27}$$
$$e_{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( a_i - b_i \right)^2 } \tag{28}$$
$$R^2 = 1 - (RSS / TSS) \tag{29}$$
$$e_{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( a_i - b_i \right)^2 \tag{30}$$
where $a_i$ is the predicted value of PV power at the $i$-th prediction point and $b_i$ is the true value of PV power at the $i$-th prediction point; the total sum of squares (TSS) measures the variance of the true values $y$, and RSS is the residual sum of squares. The coefficient of determination R² takes values in [0, 1]; the closer the value is to 1, the more of the data variance is explained by the model and the better the performance.
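These indicators translate directly into a few lines of Python (a sketch; b must be nonzero for the MAPE term):

```python
import numpy as np

def evaluate(a, b):
    """Eqs. (26)-(30): a = predicted values, b = true values."""
    err = a - b
    mae  = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / b)) * 100.0
    mse  = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    r2   = 1.0 - np.sum(err ** 2) / np.sum((b - np.mean(b)) ** 2)  # 1 - RSS/TSS
    return mae, mape, mse, rmse, r2
```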

5.4. Results and Discussion

Before making predictions, the sample data are first normalized, as shown in Equation (31):
$$x^{*} = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}} \tag{31}$$
wherein $x^{*}$ is the normalized value, $x_i$ is the raw data to be normalized, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum of the raw values, respectively.
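A short sketch of Equation (31) that also keeps the extrema, so that model outputs can be mapped back to physical units before computing the error indicators (an illustrative helper, not from the paper):

```python
import numpy as np

def minmax_scale(x):
    """Eq. (31): map raw data into [0, 1] and return the extrema."""
    x_min, x_max = float(np.min(x)), float(np.max(x))
    return (x - x_min) / (x_max - x_min), x_min, x_max

def minmax_inverse(x_star, x_min, x_max):
    """Undo Eq. (31) to recover values in physical units."""
    return x_star * (x_max - x_min) + x_min
```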
The flowchart of power prediction with the IHO-ELM is shown in Figure 6.
The computational efficiency of an optimization algorithm is an important measure of its performance, so this article first analyzes algorithm efficiency. When optimizing the connection weights and thresholds of the ELM model, the population size of each optimization algorithm was set to 20 and the maximum number of iterations to 150. The fitness values of the proposed IHO algorithm were compared with those of the HO and CSO algorithms; the results are shown in Figure 7.
According to Figure 7, the CSO algorithm converges after 94 iterations with a fitness of approximately 0.013988; the HO algorithm converges after approximately 68 iterations with a fitness of approximately 0.0044711; and the proposed IHO algorithm converges after approximately 26 iterations with a fitness of approximately 0.003473. The IHO algorithm therefore has a faster convergence speed and stronger optimization ability, which increases the probability of escaping local optimal solutions, finding better solutions, and improving computational efficiency.
Secondly, to assess the performance of the proposed model, we compared it with three other models from the two perspectives of optimization algorithm and single model. For the single model, an unoptimized ELM is selected for comparison. For the optimization algorithms, comparisons were made with the CSO-optimized ELM and the unimproved HO-optimized ELM. The comparison results are presented in Table 3 and Table 4.
Table 3 shows the performance of the different comparison models; the results are analyzed as follows:
(1) Compared with the ELM, CSO-ELM, and HO-ELM, the IHO-ELM prediction model achieves the best accuracy, with MAPE, RMSE, MAE, and MSE values of 11.6626%, 0.06129, 0.047593, and 0.0037564, respectively.
(2) From Figure 8, Figure 9, Figure 10 and Figure 11, the accuracy of the ELM models using optimization methods is significantly higher than that of the single ELM model without optimization, indicating the necessity of optimizing the connection weights and thresholds of the ELM model.
(3) The optimized models achieve better accuracy than the unoptimized model, and the proposed method outperforms the traditional CSO-optimized model. From Table 4, the MAPE, RMSE, MAE, and MSE indicators of the IHO-ELM are reduced by 17.62%, 21.61%, 22.78%, and 38.5%, respectively, compared to the HO-ELM, significantly better than the unimproved HO optimization model.

5.5. Further Analysis

To further validate the performance and generalization ability of the proposed model in improving prediction accuracy, other PV power data from Yulara (3 mono-Si, roof-mounted, 2016, Sails in the Desert-3 | DKA Solar Centre), from 1 July 00:00 to 11 July 9:55 in 2016, were selected. Sampling was conducted every 15 min, resulting in a total of 3000 sampling points. The first 80% of the data were used as the training set and the remaining 20% as the testing set. The modeling steps and experimental environment are the same as above, and the results are shown in Table 5 and Table 6. The predicted results are presented more intuitively in Figure 12, Figure 13, Figure 14 and Figure 15.
According to Table 5, the proposed model's MAE, MAPE, MSE, and RMSE are 0.041205, 0.69879, 0.0044853, and 0.066973, respectively, still superior to those of the other benchmark models. Compared to the HO-ELM, these errors are reduced by 19.9%, 4.1%, 25%, and 13.4%, respectively. The proposed IHO algorithm thus improves prediction accuracy and has good optimization accuracy.

6. Conclusions

This paper presents a hybrid Improved Hippopotamus Optimization (IHO) algorithm that effectively enhances global search and convergence efficiency by integrating Latin hypercube sampling, Jaya's optimization principles, unordered dimensional sampling, random crossover, and sequential mutation. The IHO algorithm demonstrates superior optimization accuracy, speed, and stability compared to HO and four other algorithms across nine test functions. When applied to the extreme learning machine model, IHO significantly improves prediction accuracy for solar photovoltaic power generation, outperforming seven other prediction models. Further validation in a different region confirms the IHO-ELM model's enhanced prediction accuracy and generalization capability. Future work will focus on integrating decomposition denoising techniques with updated optimization models to further refine prediction accuracy.

Author Contributions

H.W.: Conceptualization; methodology; formal analysis; writing—original draft. N.N.B.M.: Conceptualization; writing; validation; visualization. H.B.M.: Resources; validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors have no conflicts of interest to disclose.

References

  1. Boussaïd, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar] [CrossRef]
  2. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  3. Aslan, M.F.; Durdu, A.; Sabanci, K. Goal distance-based UAV path planning approach, path optimization and learning-based path estimation: GDRRT*, PSO-GDRRT* and BiLSTM-PSO-GDRRT. Appl. Soft Comput. 2023, 137, 110156. [Google Scholar] [CrossRef]
  4. Lu, Z.; Whalen, I.; Dhebar, Y.; Deb, K.; Goodman, E.D.; Banzhaf, W.; Boddeti, V.N. Multiobjective evolutionary design of deep convolutional neural networks for image classification. IEEE Trans. Evol. Comput. 2020, 25, 277–291. [Google Scholar] [CrossRef]
  5. Hu, H.; Lei, W.; Gao, X.; Zhang, Y. Job-shop scheduling problem based on improved cuckoo search algorithm. Int. J. Simul. Model 2018, 17, 337–346. [Google Scholar] [CrossRef]
  6. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  7. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  8. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  9. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  10. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2020, 32, 9383–9425. [Google Scholar] [CrossRef]
  11. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  12. Amiri, M.H.; Mehrabi Hashjin, N.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032. [Google Scholar] [CrossRef]
  13. Zhang, Z.; Xu, W.; Gong, Q. Short-Term Load Forecasting Based on TLBGA-GRU Neural Network. Comput. Eng. 2022, 48, 69–76. [Google Scholar]
  14. Han, J.; Yan, L.; Li, Z. LfEdNet: A Task-based Day-ahead Load Forecasting Model for Stochastic Economic Dispatch. arXiv 2020, arXiv:2008.07025. [Google Scholar]
  15. Akhtar, S.; Shahzad, S.; Zaheer, A.; Ullah, H.S.; Kilic, H.; Gono, R.; Jasiński, M.; Leonowicz, Z. Short-term load forecasting models: A review of challenges, progress, and the road ahead. Energies 2023, 16, 4060. [Google Scholar] [CrossRef]
  16. Acquah, M.A.; Jin, Y.; Oh, B.-C.; Son, Y.-G.; Kim, S.-Y. Spatiotemporal Sequence-to-Sequence Clustering for Electric Load Forecasting. IEEE Access 2023, 11, 5850–5863. [Google Scholar] [CrossRef]
  17. Yan, H.; Yu, X.; Li, D.; Xiang, Y.; Chen, J.; Lin, Z.; Shen, J. Research on commercial sector electricity load model based on exponential smoothing method. In International Conference on Adaptive and Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2022; pp. 189–205. [Google Scholar]
  18. Liang, Z.; Chengyuan, Z.; Zhengang, Z.; Dacheng, Z. Short-term load forecasting based on kalman filter and nonlinear autoregressive neural network. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; pp. 3747–3751. [Google Scholar]
  19. Bian, H.; Wang, Q.; Xu, G.; Zhao, X. Research on short-term load forecasting based on accumulated temperature effect and improved temporal convolutional network. Energy Rep. 2022, 8, 1482–1491. [Google Scholar] [CrossRef]
  20. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), Budapest, Hungary, 25–29 July 2004; pp. 985–990. [Google Scholar]
  21. Loh, W.-L. On Latin hypercube sampling. Ann. Stat. 1996, 24, 2058–2080. [Google Scholar] [CrossRef]
  22. Rao, R. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34. [Google Scholar]
  23. Wu, L.; Chen, E.; Guo, Q.; Xu, D.; Xiao, W.; Guo, J.; Zhang, M. Smooth Exploration System: A novel ease-of-use and specialized module for improving exploration of whale optimization algorithm. Knowl.-Based Syst. 2023, 272, 110580. [Google Scholar] [CrossRef]
  24. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  25. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A new bio-inspired algorithm: Chicken swarm optimization. In Advances in Swarm Intelligence, Proceedings of the 5th International Conference, ICSI 2014, Hefei, China, 17–20 October 2014; Part I 5; Springer: Berlin/Heidelberg, Germany, 2014; pp. 86–94. [Google Scholar]
  26. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  27. Alsattar, H.A.; Zaidan, A.; Zaidan, B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264. [Google Scholar] [CrossRef]
  28. Wang, Y.; Wang, W. Research on E-commerce GMV prediction based on LSTM-RELM combination model. Comput. Eng. Appl. 2023, 59, 321–327. [Google Scholar]
Figure 1. Flowchart of HO.
Figure 2. Flowchart of IHO.
Figure 3. Population distribution map.
Figure 4. Running results of the six algorithms.
Figure 5. Flowchart of IHO-ELM.
Figure 6. Flowchart of IHO-ELM prediction.
Figure 7. Iteration curves of three models.
Figure 8. Radar chart of IHO-ELM and HO-ELM evaluation index results.
Figure 9. Bar chart of evaluation indicators for eight prediction models.
Figure 10. Scatter plot of evaluation indicators for eight prediction models.
Figure 11. Curve plots of 8 models’ PV power training and prediction sets.
Figure 12. Radar chart of PV prediction evaluation index results.
Figure 13. Bar chart of PV prediction evaluation index results.
Figure 14. Scatter plot of PV prediction evaluation index results.
Figure 15. Curve plots of 8 models’ PV training and prediction sets.
Table 1. Test function.

Function Type | Function | Function Name | Minimum Value
Unimodal testing function | F1 | Sphere | 0
Unimodal testing function | F2 | Schwefel’s 2.22 | 0
Unimodal testing function | F3 | Powell Sum | 0
Multimodal testing function | F10 | Ackley 1 | 0
Multimodal testing function | F12 | Penalized | 0
Multimodal testing function | F13 | Penalized 2 | 0
Composite test function | F17 | Branin Function | 0.398
Composite test function | F21 | Shekel 5 | −10
Composite test function | F22 | Shekel 7 | −10
Table 2. Comparison of results in nine test functions.

Function | Indicator | IHO | HO | ZOA | BWO | BES | CSO
F1 | Best | 0 | 0 | 6.79 × 10−257 | 0.0012855 | 3.5607 × 10−49 | 2.0899 × 10−24
F1 | Mean | 0 | 0 | 5.7194 × 10−248 | 0.0085168 | 1.4081 × 10−40 | 3.3275 × 10−20
F1 | Std | 0 | 0 | 0 | 0.009002 | 4.4204 × 10−40 | 6.2509 × 10−20
F2 | Best | 3.8064 × 10−237 | 4.0835 × 10−196 | 3.9486 × 10−136 | 0.021046 | 5.265 × 10−28 | 4.5561 × 10−21
F2 | Mean | 3.2568 × 10−227 | 4.0886 × 10−184 | 3.3091 × 10−131 | 0.062712 | 1.526 × 10−26 | 1.1554 × 10−19
F2 | Std | 0 | 0 | 7.3205 × 10−131 | 0.027747 | 1.4162 × 10−26 | 1.0822 × 10−19
F3 | Best | 0 | 0 | 5.8723 × 10−170 | 0.17059 | 3.2083 × 10−18 | 789.3893
F3 | Mean | 0 | 0 | 5.6762 × 10−157 | 1.9486 | 3.5527 × 10−9 | 5458.8957
F3 | Std | 0 | 0 | 1.7211 × 10−156 | 2.1075 | 1.0771 × 10−8 | 3255.1196
F10 | Best | 4.4410 × 10−16 | 4.4410 × 10−16 | 4.4410 × 10−16 | 0.0013155 | 3.9968 × 10−15 | 9.3157 × 10−12
F10 | Mean | 4.4410 × 10−16 | 4.4410 × 10−16 | 4.4410 × 10−16 | 0.0098595 | 0.066907 | 3.5665 × 10−11
F10 | Std | 0 | 0 | 0 | 0.0057589 | 0.21158 | 4.9603 × 10−11
F12 | Best | 1.5705 × 10−32 | 1.3134 × 10−6 | 0.072344 | 0.00012093 | 3.3141 × 10−22 | 0.12557
F12 | Mean | 1.5705 × 10−32 | 0.00059838 | 0.18427 | 0.0010186 | 1.0599 × 10−18 | 1.8138
F12 | Std | 2.885 × 10−482 | 0.00071235 | 0.073413 | 0.00077814 | 1.6606 × 10−18 | 4.3809
F13 | Best | 1.3498 × 10−32 | 1.2145 × 10−6 | 1.8168 | 5.7503 × 10−6 | 9.0528 × 10−16 | 1.717
F13 | Mean | 1.3498 × 10−32 | 0.0033994 | 2.3005 | 0.00026124 | 0.071829 | 7743.1699
F13 | Std | 2.885 × 10−481 | 0.0098188 | 0.34603 | 0.00022535 | 0.062312 | 12,768.7936
F17 | Best | 0.39790 | 0.39790 | 0.39790 | 0.39835 | 0.39789 | 0.39789
F17 | Mean | 0.39790 | 0.39790 | 0.39790 | 0.40044 | 0.39789 | 0.39795
F17 | Std | 1.6739 × 10−6 | 3.8556 × 10−10 | 9.9536 × 10−8 | 0.0021123 | 2.005 × 10−8 | 9.318 × 10−5
F21 | Best | −10.1530 | −10.1530 | −10.1530 | −10.1518 | −10.1532 | −10.0264
F21 | Mean | −10.1530 | −10.1530 | −10.1528 | −10.1321 | −7.3826 | −8.404
F21 | Std | 1.5003 × 10−7 | 5.7726 × 10−7 | 0.00071352 | 0.014468 | 3.6363 | 2.4768
F22 | Best | −10.4029 | −10.4029 | −10.4029 | −10.3978 | −10.4029 | −10.394
F22 | Mean | −10.4029 | −10.4029 | −10.4029 | −10.3795 | −8.3033 | −5.4696
F22 | Std | 6.9042 × 10−6 | 2.1279 × 10−6 | 8.0885 × 10−5 | 0.013886 | 3.3903 | 3.1227
Table 3. Prediction error results of 8 models.

Prediction Model | MAE | MAPE | MSE | RMSE | R2
IHO-ELM | 0.04760 | 0.11662 | 0.00376 | 0.06129 | 0.99998
HO-ELM | 0.06164 | 0.14158 | 0.00611 | 0.07819 | 0.99996
CSO-ELM | 0.11791 | 0.12216 | 0.03314 | 0.18203 | 0.99979
ELM | 0.23916 | 0.31099 | 0.10427 | 0.32291 | 0.99934
LSTM | 0.43945 | 0.33520 | 0.39491 | 0.62842 | 0.99750
TCN | 0.27854 | 0.66627 | 0.39963 | 0.63216 | 0.99747
BP | 0.90367 | 0.86687 | 1.83062 | 1.35310 | 0.98843
SVM | 1 | 1.30972 | 1.89581 | 1.37690 | 0.98801
Table 4. Prediction error results of the two optimal models.

Prediction Model | MAE | MAPE | MSE | RMSE
IHO-ELM | 0.04760 | 0.11662 | 0.00376 | 0.06129
HO-ELM | 0.06164 | 0.14158 | 0.00611 | 0.07819
Percentage reduction in error | 22.78% | 17.62% | 38.5% | 21.61%
Table 5. The 8 models’ prediction error results.

Prediction Model | MAE | MAPE | MSE | RMSE | R2
IHO-ELM | 0.04121 | 0.69879 | 0.00449 | 0.06697 | 0.99996
HO-ELM | 0.05150 | 0.72837 | 0.00598 | 0.07733 | 0.99995
CSO-ELM | 0.06093 | 1.00370 | 0.00759 | 0.08713 | 0.99993
ELM | 0.26328 | 6.52800 | 0.10918 | 0.33043 | 0.99901
LSTM | 0.48613 | 1.28880 | 0.70688 | 0.84076 | 0.99359
TCN | 0.79034 | 6.85980 | 2.58130 | 1.60660 | 0.97658
SVM | 0.99275 | 26.0436 | 1.71340 | 1.30900 | 0.98445
BP | 0.92858 | 19.1984 | 1.79300 | 1.33900 | 0.98370
Table 6. The two optimal models’ PV prediction error results.

Prediction Model | MAE | MAPE | MSE | RMSE
IHO-ELM | 0.04121 | 0.69879 | 0.00449 | 0.06697
HO-ELM | 0.05150 | 0.72837 | 0.00598 | 0.07733
Percentage reduction in error | 19.9% | 4.1% | 25% | 13.4%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
