Article

Hybridizing Chaotic and Quantum Mechanisms and Fruit Fly Optimization Algorithm with Least Squares Support Vector Regression Model in Electric Load Forecasting

1
College of Shipbuilding Engineering, Harbin Engineering University, Harbin 150001, Heilongjiang, China
2
School of Education Intelligent Technology, Jiangsu Normal University/No. 101, Shanghai Rd., Tongshan District, Xuzhou 221116, Jiangsu, China
*
Author to whom correspondence should be addressed.
Energies 2018, 11(9), 2226; https://doi.org/10.3390/en11092226
Submission received: 13 August 2018 / Revised: 20 August 2018 / Accepted: 22 August 2018 / Published: 24 August 2018
(This article belongs to the Special Issue Short-Term Load Forecasting by Artificial Intelligent Technologies)

Abstract

Compared with a large power grid, a microgrid electric load (MEL) exhibits strong nonlinearity, multiple influencing factors, and large fluctuations, which make accurate forecasting difficult. To handle these characteristics of a MEL time series, the least squares support vector regression (LS-SVR) model, hybridized with meta-heuristic algorithms, is applied to simulate the nonlinear system of the MEL time series. Since the fruit fly optimization algorithm (FOA) suffers from several embedded drawbacks, this paper applies a quantum computing mechanism (QCM) to empower each fruit fly with quantum behavior during the searching processes, yielding the QFOA algorithm. The cat chaotic mapping function is then introduced into the QFOA algorithm, namely CQFOA, to implement a chaotic global perturbation strategy that helps fruit flies escape from local optima when the population's diversity is poor. Finally, a new MEL forecasting method, the LS-SVR-CQFOA model, is established by hybridizing the LS-SVR model with CQFOA. The experimental results illustrate that, on three datasets, the proposed LS-SVR-CQFOA model is superior to the alternative models, including BPNN (back-propagation neural networks), LS-SVR-CQPSO (LS-SVR with chaotic quantum particle swarm optimization algorithm), LS-SVR-CQTS (LS-SVR with chaotic quantum tabu search algorithm), LS-SVR-CQGA (LS-SVR with chaotic quantum genetic algorithm), LS-SVR-CQBA (LS-SVR with chaotic quantum bat algorithm), LS-SVR-FOA, and LS-SVR-QFOA, in terms of forecasting accuracy indexes. In addition, it passes the significance test at the 97.5% confidence level.

1. Introduction

1.1. Motivation

MEL forecasting is the basis of microgrid operation scheduling and energy management, and an important prerequisite for the intelligent management of distributed energy. The forecasting performance directly affects the microgrid system's energy trading, power supply planning, and power supply quality. However, MEL forecasting accuracy is influenced not only by the mathematical model but also by the associated historical dataset. In addition, compared with a large power grid, a microgrid electric load (MEL) exhibits strong nonlinearity, multiple influencing factors, and large fluctuations, which make accurate forecasting difficult. Along with the development of artificial intelligence technologies, new forecasting methods have been continuously applied to load forecasting. Furthermore, hybridizing or combining intelligent algorithms also provides new models for improving load forecasting performance. These hybrid or combined models either employ a novel intelligent algorithm or framework to remedy embedded drawbacks, or exploit the advantages of two such models to achieve more satisfactory results. Load forecasting approaches cover a wide range and are mainly divided into two categories: traditional forecasting models and intelligent forecasting models.

1.2. Relevant Literature Reviews

Conventional load forecasting models include exponential smoothing models [1], time series models [2], and regression analysis models [3]. An exponential smoothing model is a curve-fitting method that assigns different coefficients to the historical load data: observations close to the forecast time have a large influence on the future load, while observations far from the forecast time have a small influence [1]. The time series model is characterized by a fast forecasting speed and can reflect the continuity of the load, but it requires the time series to be stationary; its disadvantage is that it cannot reflect the impact of external environmental factors on the load [2]. The regression model seeks a causal relationship between the independent variables and the dependent variable according to the historical load behavior, determining the regression equation and the model parameters. Its disadvantage is that too many factors affect the forecasting accuracy; it is influenced not only by the model parameters themselves but also by the quality of the data. When there are too many external influencing factors, or the relevant factor data are difficult to analyze, the regression forecasting model produces large errors [3].
Intelligent forecasting models include the wavelet analysis method [4,5], grey forecasting theory [6,7], neural network models [8,9], and the support vector regression (SVR) model [10]. In load forecasting, the wavelet analysis method is combined with external factors to establish a suitable forecasting model by decomposing the load data into sequences on different scales [4,5]. The grey model is easy to implement and employs few influencing factors; however, its processed data sequence retains considerable greyness, which results in large forecasting errors [6,7]. Therefore, when this model is applied to load forecasting, only a few recent data points can be forecasted accurately; more distant data can only be reflected as trend or planned values [7]. Owing to their superior nonlinear modeling ability, many models based on artificial neural networks (ANNs) have been applied to improve load forecasting accuracy [8,9]. To achieve even more accurate forecasting performance, these models have been hybridized or combined with other new forecasting approaches [9]. For example, an adaptive network-based fuzzy inference system has been combined with an RBF neural network [11], the Monte Carlo algorithm with a Bayesian neural network [12], fuzzy behavior with a neural network (WFNN) [13], and a knowledge-based feedback tuning fuzzy system with a multi-layer perceptron artificial neural network (MLPANN) [14], and so on. However, these ANN-based models suffer from serious problems, such as easily becoming trapped in local optima, being time-consuming to achieve a functional approximation, and the difficulty of selecting the structural parameters of a network [15,16], which largely limits their application in load forecasting.
The SVR model is based on statistical learning theory, as proposed by Vapnik [17]. It has a solid mathematical foundation, good generalization ability, and a relatively fast convergence rate, and it can find global optimal solutions [18]. Because the underlying theory of the SVR model is well established and the model is easy to build, it has attracted extensive attention in the load forecasting field. In recent years, scholars have applied the SVR model to load forecasting [18] and achieved superior results. One study [19] proposes the EMD-PSO-GA-SVR model to improve forecasting accuracy by hybridizing empirical mode decomposition (EMD) with both particle swarm optimization (PSO) and the genetic algorithm (GA). In addition, a modified version of the SVR model, the LS-SVR model, considers only equality constraints instead of inequalities [20,21]. Motivated by the advantages of the LS-SVR model in dealing with such problems, this paper uses it to simulate the nonlinear system of the MEL time series, to produce forecasting values and improve the forecasting accuracy. However, the disadvantages of SVR-based models in load forecasting are that when the load sample size is large, system learning and training are highly time-consuming, and the determination of the parameters depends mainly on the experience of the researchers, which affects the forecasting accuracy to a certain degree. Therefore, exploring more suitable parameter determination methods has always been an effective way to improve the forecasting accuracy of SVR-based models. To determine more appropriate parameters for SVR-based models, Hong and his colleagues have conducted research using different evolutionary algorithms hybridized with an SVR model [22,23,24]. In the meantime, Hong and his successors have also applied different chaotic mapping functions (including the logistic function [22,23] and the cat mapping function [10]) to diversify the population during the modeling processes, and cloud theory to ensure that the temperature decreases continuously during the annealing process, eventually determining the most appropriate parameters and achieving more satisfactory forecasting accuracy [10].
The fruit fly optimization algorithm (FOA) is a swarm intelligent optimization algorithm proposed in 2011 that searches for the global optimum by mimicking fruit fly foraging behavior [25,26]. The algorithm has only four control parameters [27]. Compared with other algorithms, FOA has the advantages of being easy to program, having fewer parameters, requiring less computation, and achieving high accuracy [28,29]. FOA belongs to the domain of evolutionary computation; it solves complex optimization problems by simulating fruit flies searching for food sources using olfaction and vision. It has been successfully applied in predictive control fields [30,31]. However, like other swarm intelligent optimization algorithms with iterative searching mechanisms, the standard FOA also has drawbacks, such as a tendency toward premature convergence, a slow convergence rate in the later searching stage, and poor local search performance [32].
Quantum computing has become one of the leading branches of science in the modern era due to its powerful computing ability. This has not only prompted the study of new quantum algorithms, but also inspired researchers to re-examine some traditional optimization algorithms from the perspective of the quantum computing mechanism. The quantum computing mechanism (QCM) makes full use of the superposition and coherence of quantum states. Compared with other evolutionary algorithms, the QCM uses a novel encoding method, quantum bit encoding. Through qubit encoding, an individual can represent any linear superposition of states, whereas traditional encoding methods can only represent one specific state. As a result, it is easier to maintain population diversity with QCM than with traditional evolutionary algorithms. Hybridizing QCM with evolutionary algorithms to obtain more satisfactory searching results has therefore become a hot research topic. Narayanan and Moore [33] introduced QCM into genetic algorithms and proposed the quantum-inspired genetic algorithm (QIGA); from the point of view of its algorithmic mechanism, it is very similar to the isolated-niche genetic algorithm. Han and Kim [34] proposed a genetic quantum algorithm (GQA) based on QCM; compared with traditional evolutionary algorithms, its greatest advantage is its better ability to maintain population diversity. Han and Kim [35] further introduced a population migration mechanism into the algorithm of Ref. [34], and renamed it the quantum evolutionary algorithm (QEA). Huang [36], Lee and Lin [37,38], and Li et al. [39] hybridized the particle swarm optimization (PSO) algorithm, the Tabu search (TS) algorithm, the genetic algorithm (GA), and the bat algorithm (BA) with the QCM and the cat mapping function, proposing the CQPSO, CQTS, CQGA, and CQBA algorithms, which were employed to select appropriate parameters of an SVR model. The application results indicate that the improved algorithms obtain more appropriate parameters and achieve higher forecasting accuracy. These applications also reveal that algorithms improved by hybridizing with QCM can effectively avoid local optima and premature convergence.

1.3. Contributions

Considering the inherent drawback of the FOA, i.e., its tendency toward premature convergence, this paper hybridizes the FOA with the QCM and the cat chaotic mapping function to solve the premature convergence problem of FOA and, eventually, to determine more appropriate parameters of an LS-SVR model. The major contributions are as follows:
(1)
QCM is employed to empower the search ability of each fruit fly during the searching processes of QFOA. The cat chaotic mapping function is then introduced into QFOA to implement the chaotic global perturbation strategy, which helps a fruit fly escape from local optima when the population's diversity is poor.
(2)
We propose a novel hybrid optimization algorithm, namely CQFOA, to be hybridized with an LS-SVR model, namely the LS-SVR-CQFOA model, to conduct the MEL forecasting. Other similar alternative hybrid algorithms (hybridizing chaotic mapping function, QCM, and evolutionary algorithms) in existing papers, such as the CQPSO algorithm used by Huang [36], the CQTS and CQGA algorithms used by Lee and Lin [37,38], and the CQBA algorithm used by Li et al. [39], are selected as alternative models to test the superiority of the LS-SVR-CQFOA model in terms of forecasting accuracy.
(3)
The forecasting results illustrate that, in three datasets, the proposed LS-SVR-CQFOA model is superior to other alternative models in terms of forecasting accuracy indexes; in addition, it passes the significance test at a 97.5% confidence level.

1.4. The Organization of This Paper

The rest of this paper is organized as follows. The modeling details of the LS-SVR model, the proposed CQFOA, and the proposed LS-SVR-CQFOA model are introduced in Section 2. Section 3 presents a numerical example and a comparison of the proposed LS-SVR-CQFOA model with other alternative models. Further discussion is provided in Section 4. Finally, the conclusions are given in Section 5.

2. Materials and Methods

2.1. Least Squares Support Vector Regression (LS-SVR)

The SVR model is an algorithm based on pattern recognition within statistical learning theory; it is a machine learning approach proposed by Vapnik in the mid-1990s [17]. The LS-SVR model was put forward by Suykens [20]. It is an improvement and extension of the standard SVR model that replaces the inequality constraints of an SVR model with equality constraints [21]. The LS-SVR model converts the quadratic programming problem into solving a set of linear equations, which reduces the computational complexity and improves the convergence speed. It is well suited to load forecasting problems, which are characterized by nonlinearity, high dimensionality, and local minima.

2.1.1. Principle of the Standard SVR Model

Given a dataset $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in R^n$ is the n-dimensional input vector of the system and $y_i \in R$ is the corresponding output, the basic idea of the SVR model can be summarized as follows: the n-dimensional input samples are mapped from the original space to a high-dimensional feature space F by a nonlinear transformation $\varphi(\cdot)$, and the optimal linear regression function is constructed in this space, as shown in Equation (1) [17]:
$f(x) = w^{T}\varphi(x) + b$,   (1)
where f ( x ) represents the forecasting values; the weight, w, and the coefficient, b, would be determined during the SVR modeling processes.
The standard SVR model takes the ε insensitive loss function as an estimation problem for risk minimization, thus the optimization objective can be expressed as in Equation (2) [17]:
$\min \ \tfrac{1}{2} w^{T} w + c \sum_{i=1}^{N} (\xi_i + \xi_i^{*}) \quad \text{s.t.} \ \ y_i - w^{T}\varphi(x_i) - b \le \varepsilon + \xi_i, \ \ w^{T}\varphi(x_i) + b - y_i \le \varepsilon + \xi_i^{*}, \ \ \xi_i \ge 0, \ \xi_i^{*} \ge 0, \ i = 1, \ldots, N$,   (2)
where c is the balance factor, usually set to 1, and $\xi_i$ and $\xi_i^{*}$ are slack variables introduced for the training set, which represent the extent to which a sample point exceeds the fitting precision $\varepsilon$.
Equation (2) could be solved according to quadratic programming processes; the solution of the weight, w, in Equation (2) is calculated as in Equation (3) [17]:
$w^{*} = \sum_{i=1}^{N} (\alpha_i - \alpha_i^{*})\, \varphi(x_i)$,   (3)
where α i and α i * are Lagrange multipliers.
The SVR function is eventually constructed as in Equation (4) [17]:
$y(x) = \sum_{i=1}^{N} (\alpha_i - \alpha_i^{*})\, \Psi(x_i, x) + b$,   (4)
where Ψ ( x i , x ) , the so-called kernel function, is introduced to replace the nonlinear mapping function, φ ( · ) , as shown in Equation (5) [15]:
$\Psi(x_i, x_j) = \varphi(x_i)^{T} \varphi(x_j)$.   (5)

2.1.2. Principle of the LS-SVR Model

The LS-SVR model is an extension of the standard SVR model. It selects the squared error, $\xi_i^2$, as the loss function; the optimization problem can then be described as in Equation (6) [20]:
$\min \ \tfrac{1}{2} w^{T} w + \tfrac{1}{2}\gamma \sum_{i=1}^{N} \xi_i^{2} \quad \text{s.t.} \ \ y_i = w^{T}\varphi(x_i) + b + \xi_i, \ i = 1, 2, \ldots, N$,   (6)
where the bigger the positive real number γ is, the smaller the regression error of the model is.
The LS-SVR model defines a loss function different from that of the standard SVR model, and changes its inequality constraints into an equality constraint so that w can be obtained in the dual space. After obtaining the parameters α and b by solving the resulting set of linear equations, the LS-SVR model is described as in Equation (7) [20]:
$y(x) = \sum_{i=1}^{N} \alpha_i \Psi(x_i, x) + b$.   (7)
It can be seen that an LS-SVR model contains two parameters: the regularization parameter γ and the radial basis kernel parameter σ². The forecasting performance of an LS-SVR model is related to the selection of γ and σ². The role of γ is to balance the confidence range and the empirical risk of the learning machine. If γ is too large, the goal is only to minimize the empirical risk; on the contrary, when the value of γ is too small, the penalty for the empirical error is small, which increases the empirical risk. σ controls the width of the Gaussian kernel function and the distribution range of the training data; the smaller σ is, the greater the structural risk, which leads to overfitting. Therefore, the parameter selection of an LS-SVR model has always been the key to improving its forecasting accuracy.
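To make the role of (γ, σ²) concrete, the following minimal sketch trains an LS-SVR with an RBF kernel by solving the dual linear system that follows from Equation (6) and predicts with Equation (7). The kernel-width convention exp(−‖x_i − x_j‖²/(2σ²)) and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """RBF kernel matrix; K_ij = exp(-||x_i - x_j||^2 / (2*sigma^2)) (assumed convention)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    """Solve the LS-SVR dual linear system for (alpha, b), cf. Equations (6)-(7)."""
    N = len(y)
    K = rbf_kernel(X, X, sigma)
    # KKT system: [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                      # alpha, b

def lssvr_predict(X_train, alpha, b, sigma, X_new):
    """Equation (7): y(x) = sum_i alpha_i * K(x_i, x) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

In this sketch, X must be a 2-D array (samples × features), so a lag-vector input such as the 24-h window used later should be reshaped accordingly before calling lssvr_fit.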

2.2. Chaotic Quantum Fruit Fly Algorithm (CQFOA)

FOA is a population intelligent evolutionary algorithm that simulates the foraging behavior of fruit flies [26]. Fruit flies are superior to other species in smell and vision. In the process of foraging, firstly, fruit flies rely on smell to find the food source. Secondly, they visually locate the specific location of food and the current position of other fruit flies, and then fly to the location of food through population interaction. At present, FOA has been applied to the forecasting of traffic accidents, export trade, and other fields [40].

2.2.1. Fruit Fly Optimization Algorithm (FOA)

According to the characteristics of fruit flies searching for food, FOA includes the following main steps.
  • Step 1. Randomly initialize the location ($X_0$ and $Y_0$) of the fruit fly population.
  • Step 2. Give individual fruit flies the random direction and distance for searching for food by smell, as in Equations (8) and (9) [26]:
      $X_i = X_0 + \text{Random Value}$   (8)
      $Y_i = Y_0 + \text{Random Value}$.   (9)
  • Step 3. Because the location of the food is unknown, the distance from the origin (Dist) is first estimated as in Equation (10) [25]; then the determination value of the taste concentration (S) is calculated as in Equation (11) [25], i.e., as the inverse of the distance.
      $Dist_i = \sqrt{X_i^2 + Y_i^2}$   (10)
      $S_i = 1 / Dist_i$   (11)
  • Step 4. The determination value of the taste concentration (S) is substituted into the taste concentration determination function (or fitness function) to determine the individual taste concentration of the fruit fly ($Smell_i$), as shown in Equation (12) [26]:
    $Smell_i = \text{Function}(S_i)$.   (12)
  • Step 5. Find the individual (Best_index and Best_Smell values) with the highest odor concentration in this population, as in Equation (13) [26]:
    $[Best\_Smell, Best\_index] = \max(Smell_i)$.   (13)
  • Step 6. The optimal flavor concentration value (Optimal_Smell) is retained along with the x and y coordinates (with Best_index) as in Equations (14)–(16) [25], then the Drosophila population uses vision to fly to this position.
      $Optimal\_Smell = Best\_Smell_{i=\text{current}}$   (14)
      $X_0 = X_{Best\_index}$   (15)
      $Y_0 = Y_{Best\_index}$   (16)
  • Step 7. Enter the iterative optimization: repeat Steps 2 to 5 and judge whether the flavor concentration is better than that of the previous iteration; if so, execute Step 6.
The FOA is highly adaptable and can search efficiently without calculating partial derivatives of the target function, which helps it avoid becoming trapped in local optima. However, as a swarm intelligence optimization algorithm, FOA still tends to fall into local optimal solutions, owing to the declining population diversity in the late evolutionary stage.
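As a concrete illustration of Steps 1-7, the following Python sketch implements the basic FOA loop for a one-dimensional problem. The minimization form (instead of the max(Smell) form of Equation (13)), the search ranges, and the fitness interface are illustrative assumptions chosen to match the error-minimization use later in this paper.

```python
import numpy as np

def foa_minimize(fitness, popsize=20, max_gen=100, seed=0):
    """Minimal FOA sketch following Steps 1-7; `fitness` maps the taste
    concentration S to an error value to be minimized (assumed interface)."""
    rng = np.random.default_rng(seed)
    x0, y0 = rng.uniform(-1, 1, 2)                   # Step 1: initial swarm location
    best_smell, best_s = np.inf, None
    for _ in range(max_gen):
        # Step 2: random direction and distance for each fruit fly
        x = x0 + rng.uniform(-1, 1, popsize)
        y = y0 + rng.uniform(-1, 1, popsize)
        dist = np.sqrt(x ** 2 + y ** 2)              # Step 3: distance to the origin
        s = 1.0 / dist                               # taste concentration value
        smell = np.array([fitness(si) for si in s])  # Step 4: fitness per fly
        idx = smell.argmin()                         # Step 5: best fly (minimization)
        if smell[idx] < best_smell:                  # Step 6: keep best, swarm flies there
            best_smell, best_s = smell[idx], s[idx]
            x0, y0 = x[idx], y[idx]
        # Step 7: iterate
    return best_s, best_smell

# Example: find the S that minimizes (S - 0.5)^2
print(foa_minimize(lambda s: (s - 0.5) ** 2))
```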
It should be noted that there are some significant differences between the FOA and PSO algorithms. In FOA, the taste concentration (S) determines the individual position of each fruit fly, the position with the highest odor concentration in the population is retained along with its x and y coordinates, and the population then uses vision to fly to this position; the search direction is thus controlled by the taste concentration. In the PSO algorithm, the inertia weight controls the impact of a particle's previous velocity on its current one, together with two positive constants called acceleration coefficients and two independent, uniformly distributed random variables; the search toward the optimal solution is thus controlled by the inertia weight acting on the velocity.
Thus, aiming to deal with the inherent drawback of FOA, i.e., suffering from premature convergence or trapping into local optima easily, this paper tries to use the QCM to empower each fruit fly to possess quantum behavior (namely QFOA) during the modeling processes. At the same time, the cat mapping function is introduced into QFOA (namely CQFOA) to implement the chaotic global perturbation strategy to help a fruit fly escape from the local optima when the population’s diversity is poor. Eventually, the proposed CQFOA is employed to determine the appropriate parameters of an LS-SVR model and increase the forecasting accuracy.

2.2.2. Quantum Computing Mechanism for FOA

(1) Quantization of Fruit Flies
In the quantum computing process, a traditional sequence is replaced by a sequence consisting of quantum bits. A quantum fruit fly is a linear combination of the states $|0\rangle$ and $|1\rangle$, which can be expressed as in Equation (17) [34,35]:
$|\varphi\rangle = \alpha|0\rangle + \beta|1\rangle$,   (17)
where $\alpha^2$ and $\beta^2$ are the probabilities of the states $|0\rangle$ and $|1\rangle$, respectively, satisfying $\alpha^2 + \beta^2 = 1$, and (α, β) constitutes a qubit.
A quantum sequence, i.e., a feasible solution, can be expressed as an arrangement of l qubits, as shown in Equation (18) [34,35]:
$q = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_l \\ \beta_1 & \beta_2 & \cdots & \beta_l \end{bmatrix}$,   (18)
where the initial values of $\alpha_j$ and $\beta_j$ are all set to $1/\sqrt{2}$ to satisfy the equity principle, $\alpha_j^2 + \beta_j^2 = 1$ ($j = 1, 2, \ldots, l$); they are updated through the quantum revolving gate during the iterations.
Conversion between the quantum sequence and a binary sequence is the key to converting FOA into QFOA. Generate a random number $rand_j$ in [0,1]; if $rand_j \ge \alpha_j^2$, the corresponding binary bit value is 1, otherwise it is 0, as shown in Equation (19):
$x_j = \begin{cases} 1, & rand_j \ge \alpha_j^2 \\ 0, & \text{otherwise} \end{cases}$   (19)
Using the above method, the quantum sequence q can be transformed into a binary sequence x; the optimal parameters of an LS-SVR model can then be determined using QFOA.
(2) Quantum Fruit Fly Position Update Strategy
In the QFOA process, the position of quantum fruit flies represented by a quantum sequence is updated to find more feasible solutions and the best parameters. This paper uses quantum rotation to update the position of quantum fruit flies. The quantum position of individual i (there are in total N quantum fruit flies) can be extended from Equation (18) and is expressed as in Equation (20):
$q_i = \begin{bmatrix} \alpha_{i1} & \alpha_{i2} & \cdots & \alpha_{il} \\ \beta_{i1} & \beta_{i2} & \cdots & \beta_{il} \end{bmatrix}$,   (20)
where $\alpha_{ij}^2 + \beta_{ij}^2 = 1$; $i = 1, 2, \ldots, N$; $j = 1, 2, \ldots, l$; and $0 \le \alpha_{ij} \le 1$, $0 \le \beta_{ij} \le 1$.
Quantum rotation is performed by a quantum revolving gate determined by the quantum rotation angle; it updates the quantum sequence and conducts a random search around the positions of the quantum fruit flies to explore the local optimal solution. Let $\theta_{ij}^{g}$ be the jth quantum rotation angle of the ith fruit fly of generation g. The quantum bit $q_{ij}^{g}$ (because of the non-negativity constraint on $q_{ij}^{g}$, the absolute-value function abs() is applied to each element of the calculation result) is updated according to the quantum revolving gate $U(\theta_{ij}^{g})$, as shown in Equations (21) and (22) [34,35]:
$q_{ij}^{g+1} = \mathrm{abs}\left( U(\theta_{ij}^{g+1}) \times q_{ij}^{g} \right)$   (21)
$U(\theta_{ij}^{g}) = \begin{bmatrix} \cos\theta_{ij}^{g} & -\sin\theta_{ij}^{g} \\ \sin\theta_{ij}^{g} & \cos\theta_{ij}^{g} \end{bmatrix}$.   (22)
In the special case where the quantum rotation angle $\theta_{ij}^{g+1}$ equals 0, the quantum bit $q_{ij}^{g+1}$ is updated with some small probability by the quantum NOT gate $\bar{N}$, as indicated in Equation (23) [35]:
$q_{ij}^{g+1} = \bar{N} \times q_{ij}^{g} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \times q_{ij}^{g}$.   (23)
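The observation rule of Equation (19) and the revolving-gate update of Equations (21) and (22) can be sketched as follows, assuming each quantum fruit fly is stored as a 2 × l array of non-negative amplitudes; the rotation-angle range in the usage example is an illustrative assumption.

```python
import numpy as np

def observe(q, rng):
    """Collapse a qubit sequence to a binary string, Equation (19):
    bit j is 1 when rand_j >= alpha_j^2, else 0."""
    alpha = q[0]                                 # first row holds the |0> amplitudes
    return (rng.random(alpha.size) >= alpha ** 2).astype(int)

def rotate(q, theta):
    """Update each qubit with the revolving gate U(theta), Equations (21)-(22);
    abs() keeps the amplitudes non-negative as required in the text."""
    c, s = np.cos(theta), np.sin(theta)
    alpha = np.abs(c * q[0] - s * q[1])
    beta = np.abs(s * q[0] + c * q[1])
    norm = np.sqrt(alpha ** 2 + beta ** 2)       # safeguard; rotation already preserves the norm
    return np.vstack([alpha, beta]) / norm

rng = np.random.default_rng(0)
l = 8                                            # number of qubits per fruit fly
q = np.full((2, l), 1 / np.sqrt(2))              # equal-amplitude initialization
q = rotate(q, theta=rng.uniform(-0.05 * np.pi, 0.05 * np.pi, l))
print(observe(q, rng))
```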

2.2.3. Chaotic Quantum Global Perturbation

For a bionic evolutionary algorithm, it is a general phenomenon that the population's diversity deteriorates as the iterations increase, which can cause the search to become trapped in local optima during the modeling processes. As mentioned, a chaotic mapping function can be employed to maintain the population's diversity and avoid such trapping. Many studies have applied chaos theory to improve the performance of bionic evolutionary algorithms, such as the artificial bee colony (ABC) algorithm [41] and the particle swarm optimization (PSO) algorithm [42]. The authors have also employed the cat chaotic mapping function to improve the genetic algorithm (GA) [43], the PSO algorithm [44], and the bat algorithm [39], and the results demonstrate that the searching quality of the GA, PSO, ABC, and BA algorithms can be improved by employing chaotic mapping functions. Hence, the cat chaotic mapping function is used again in this paper as a global chaotic perturbation strategy (GCPS) and hybridized with QFOA, yielding CQFOA: the GCPS is invoked whenever QFOA suffers from being trapped in local optima during the iterative modeling processes.
The two-dimensional cat mapping function is shown in Equation (24) [39]:
$\begin{cases} y_{t+1} = \mathrm{frac}(y_t + z_t) \\ z_{t+1} = \mathrm{frac}(y_t + 2 z_t) \end{cases}$,   (24)
where the frac function returns the fractional part of a real number by subtracting its integer part.
The global chaotic perturbation strategy (GCPS) is illustrated as follows; a code sketch of these steps is given after the list.
(1)
Generate 2popsize chaotic disturbance fruit flies. For each $Fruit\ fly_i$ (i = 1, 2, …, 2popsize), Equation (24) is applied to generate d random numbers, $z_j$, j = 1, 2, …, d. Then, the amplitude $\cos\theta_j^i$ of the qubit (for quantum state $|0\rangle$) of $Fruit\ fly_i$ is given by Equation (25):
$\cos\theta_j^i = y_j = 2 z_j - 1$.   (25)
(2)
Select the 0.5popsize better chaotic disturbance fruit flies. Compute the fitness value of each fruit fly among the 2popsize chaotic disturbance fruit flies, and sort them by fitness. Then, select the 0.5popsize fruit flies ranking highest in fitness; these are the 0.5popsize better chaotic disturbance fruit flies.
(3)
Determine the 0.5popsize current fruit flies with better fitness. Compute the fitness value of each fruit fly in the current QFOA population, sort them by fitness, and select the 0.5popsize fruit flies ranking highest.
(4)
Form the new CQFOA population. Mix the 0.5popsize better chaotic disturbance fruit flies with the 0.5popsize current fruit flies with better fitness from the current QFOA to form a new population containing popsize fruit flies, and name it the new CQFOA population.
(5)
Complete global chaotic perturbation. After obtaining the new population of CQFOA, take it as the new population of QFOA and continue to execute the QFOA process.
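The cat map of Equation (24) and steps (1)-(5) of the GCPS can be summarized in the following sketch. The fitness interface (a function returning an error to be minimized from a 2 × d amplitude array), the per-fly seeding of the chaotic sequence, and taking the absolute value of cos θ to keep the amplitudes non-negative are illustrative assumptions.

```python
import numpy as np

def cat_map(y, z, steps):
    """Iterate the 2-D cat map of Equation (24); returns the z-sequence."""
    zs = []
    for _ in range(steps):
        y, z = (y + z) % 1.0, (y + 2.0 * z) % 1.0
        zs.append(z)
    return np.array(zs)

def global_chaotic_perturbation(population, fitness, popsize, d, y0=0.3, z0=0.7):
    """GCPS sketch: build 2*popsize chaotic flies (|0> amplitudes from Equation (25)),
    then keep the best half of them plus the best half of the current population."""
    chaotic = []
    for i in range(2 * popsize):
        z = cat_map(y0 + i * 1e-3, z0, d)            # d chaotic numbers per fly (assumed seeding)
        cos_theta = 2.0 * z - 1.0                    # Equation (25)
        alpha = np.abs(cos_theta)                    # non-negative amplitudes (assumption)
        beta = np.sqrt(1.0 - np.clip(alpha ** 2, 0.0, 1.0))
        chaotic.append(np.vstack([alpha, beta]))
    best_chaotic = sorted(chaotic, key=fitness)[: popsize // 2]     # step (2)
    best_current = sorted(population, key=fitness)[: popsize // 2]  # step (3)
    return best_chaotic + best_current                              # steps (4)-(5): new CQFOA population
```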

2.2.4. Implementation Steps of CQFOA

The steps of the proposed CQFOA for the parameter optimization of an LS-SVR model are as follows, as shown in Figure 1; a sketch of the parameter decoding used in Step 3 is given after the step list.
  • Step 1. Initialization. The population size of quantum fruit flies is popsize; the maximum number of iterations is Gen-max; the random search radius is R; and the chaotic disturbance control coefficient is $N_{GCP}$.
  • Step 2. Random searching. A quantum rotation angle, $\theta_{ij}$, is drawn for the random search; according to the quantum rotation angle, the fruit fly locations in each dimension are updated, and a quantum revolving gate is then applied to update the quantum sequence, as shown in Equations (26) and (27) [34,35]:
      $\theta_{ij} = \theta(j) + R \times \mathrm{rand}(1)$   (26)
      $q_{ij} = \mathrm{abs}\left( \begin{bmatrix} \cos\theta_{ij} & -\sin\theta_{ij} \\ \sin\theta_{ij} & \cos\theta_{ij} \end{bmatrix} \times Q(j) \right)$,   (27)
    where i indexes the quantum fruit flies, $i = 1, 2, \ldots, popsize$; j is the position dimension of the quantum fruit flies, $j = 1, 2, \ldots, l$. As mentioned above, the position $q_{ij}$ is constrained to be non-negative; thus, the absolute-value function abs() is applied to each element of the calculation result.
  • Step 3. Calculating fitness. Map each Drosophila location, $q_i$, to the feasible domain of the LS-SVR model parameters to obtain the parameters $(\gamma_i, \sigma_i)$. The training data are used to complete the training of the corresponding $LS\text{-}SVR_i$ model and to calculate the forecasting values in the training stage for each set of parameters. The forecasting error, which serves as the fitness function of Equation (12) in CQFOA, is then calculated by the mean absolute percentage error (MAPE), as shown in Equation (28):
      $\mathrm{MAPE} = \frac{1}{N}\sum_{i=1}^{N} \left| \frac{f_i(x) - \hat{f}_i(x)}{f_i(x)} \right| \times 100\%$,   (28)
    where N is the total number of data points; $f_i(x)$ is the actual load value at point i; and $\hat{f}_i(x)$ is the forecasted load value at point i.
  • Step 4. Choosing the current optimum. Calculate the taste concentration of fruit fly, S m e l l i , by using Equation (12), and find the best flavor concentration of individual, B e s t _ S m e l l i , by Equation (13), as the optimal fitness value.
  • Step 5. Updating the global optimum. Compare whether the contemporary odor concentration, $Best\_Smell_{i=\text{current}}$, is better than the global optimum, $Best\_Smell_i$. If so, update the global value by Equation (14) and let the individual quantum fruit flies fly to the optimal position with vision, as in Equations (29) and (30); then go to Step 6. Otherwise, go to Step 6 directly.
      $\theta_0 = \theta_{Best\_index}$   (29)
      $q_0 = q_{Best\_index}$   (30)
  • Step 6. Global chaos perturbation judgment. If the distance from the last disturbance is equal to N G C P , go to Step 7; otherwise, go to Step 8.
  • Step 7. Global chaos perturbation operations. Based on the current population, conduct the global chaos perturbation algorithm to obtain the new CQFOA population. Then, take the new CQFOA population as the new population of QFOA, and continue to execute the QFOA process.
  • Step 8. Iterative refinements. Determine whether the current population satisfies the condition of evolutionary termination. If so, stop the optimization process and output the optimal results. Otherwise, repeat Steps 2 to 8.
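As noted above, the following sketch illustrates the parameter decoding of Step 3, i.e., one way an observed bit string could be mapped into the feasible domain γ ∈ [0, 1000], σ ∈ [0, 500] used in Section 3.3.1. The binary-to-real decoding itself is an illustrative assumption; the paper only states that each location q_i is mapped into the feasible parameter domain.

```python
import numpy as np

def decode_parameters(bits, gamma_range=(0.0, 1000.0), sigma_range=(0.0, 500.0)):
    """Map an observed bit string to (gamma, sigma) within the search bounds of
    Section 3.3.1; splitting the string in half is an illustrative assumption."""
    half = len(bits) // 2
    def to_unit(b):                              # bits -> integer -> value in [0, 1]
        return int("".join(map(str, b)), 2) / (2 ** len(b) - 1)
    gamma = gamma_range[0] + to_unit(bits[:half]) * (gamma_range[1] - gamma_range[0])
    sigma = sigma_range[0] + to_unit(bits[half:]) * (sigma_range[1] - sigma_range[0])
    return gamma, sigma

# Hypothetical 12-bit observation of a quantum fruit fly
print(decode_parameters([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]))
```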

3. Forecasting Results

3.1. Dataset of Experimental Examples

To test the performance of the proposed LS-SVR-CQFOA model, this paper employs the MEL data from an island data acquisition system in 2014 (IDAS 2014) [45] and the data of GEFCom2014-E [46] to carry out numerical forecasts. Sampling hourly over whole 24-h days, each dataset contains 168 hourly load values in total: from 01:00 14 July 2014 to 24:00 20 July 2014 in IDAS 2014 (namely IDAS 2014), and two further load datasets with the same 168 hourly load values, from 01:00 1 January 2014 to 24:00 7 January 2014 (namely GEFCom2014 (Jan.)) and from 01:00 1 July 2014 to 24:00 7 July 2014 (namely GEFCom2014 (July)) in GEFCom2014-E, respectively.
The preciseness and integrity of the historical data directly impact the forecasting accuracy. The historical load data are collected by electrical equipment, and, to some extent, data transmission and measurement introduce some "bad data", mainly missing and abnormal values. If these data are used directly, the establishment of the load forecasting model and the forecasts themselves are adversely affected. Thus, preprocessing the historical data is essential for load forecasting. In this paper, before the numerical tests, the MEL data are preprocessed by completing missing data, identifying abnormal data, eliminating and replacing unreasonable data, and normalizing the data. When the input of an LS-SVR model is multidimensional with a large spread of magnitudes (e.g., several orders of magnitude), using the raw data for model training directly may cause problems. Therefore, the sample data are normalized so that all values lie in a common interval (here, [0, 1]), ensuring that all of the data have the same order of magnitude.
The load data are normalized according to Equation (31), where $i = 1, 2, \ldots, N$ (N is the number of samples); $x_i$ and $y_i$ represent the values of the sample data before and after normalization, respectively; and $\min(x_i)$ and $\max(x_i)$ represent the minimal and maximal values of the sample data, respectively.
$y_i = \dfrac{x_i - \min(x_i)}{\max(x_i) - \min(x_i)}$   (31)
After forecasting, the inverse normalization in Equation (32) is used to recover the actual load values:
$x_i = (\max(x_i) - \min(x_i))\, y_i + \min(x_i)$.   (32)
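A minimal sketch of the normalization of Equation (31) and the inverse transform of Equation (32); the example load values are purely illustrative.

```python
import numpy as np

def normalize(x):
    """Equation (31): scale the raw load series into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()), x.min(), x.max()

def denormalize(y, x_min, x_max):
    """Equation (32): map forecasts back to the original load scale."""
    return (x_max - x_min) * np.asarray(y) + x_min

load = [812.0, 756.0, 901.0, 878.0]      # illustrative raw load values
y, lo, hi = normalize(load)
print(y, denormalize(y, lo, hi))
```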
The normalized data of the values in IDAS 2014, GEFCom2014 (Jan.) and GEFCom2014 (July) are collected and shown in Table 1, Table 2 and Table 3, respectively.
During the modeling processes, the load data are divided into three parts: the training set with the first 120 h, the validation set with the middle 24 h, and the testing set with the last 24 h. Then, the rolling-based modeling procedure proposed by Hong [18,47] is applied to help CQFOA search for appropriate parameters, (γ, σ), of the LS-SVR model during the training stage. This modeling procedure is repeated until all forecasting loads are obtained. The training error and the validation error can be calculated simultaneously, and the adjusted parameters, (γ, σ), are selected as the most suitable ones only when both the training and validation errors are smallest. The testing dataset is never used during the training and validation stages; it is only used to calculate the forecasting accuracy. Eventually, the 24 h of load data are forecasted by the proposed LS-SVR-CQFOA model.
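The exact rolling-based procedure is described in [18,47]; the following sketch shows one plausible reading of it, a one-step-ahead rolling forecast over the 24-h test window using a lag window of 24 h. The fit/predict wrappers, the lag length, and the rolling update with actual values are assumptions, not the authors' exact implementation.

```python
import numpy as np

def rolling_one_step_forecast(series, fit, predict, n_train=120, n_val=24, n_test=24, lag=24):
    """Hedged sketch: refit on the available history, forecast the next hour,
    then roll forward with the actual observation."""
    series = np.asarray(series, dtype=float)
    history = list(series[: n_train + n_val])            # training + validation part
    forecasts = []
    for t in range(n_test):
        X = np.array([history[i - lag: i] for i in range(lag, len(history))])
        y = np.array(history[lag:])
        model = fit(X, y)                                 # e.g., wraps lssvr_fit with fixed (gamma, sigma)
        x_new = np.array(history[-lag:])[None, :]
        forecasts.append(float(predict(model, x_new)))
        history.append(series[n_train + n_val + t])       # roll forward with the actual value
    return np.array(forecasts)
```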

3.2. Forecasting Accuracy Indexes and Performance Tests

3.2.1. Forecasting Accuracy Index

This study uses the MAPE (mentioned in Equation (28)), the root mean square error (RMSE), and the mean absolute error (MAE) as forecasting accuracy indexes. The RMSE and MAE are defined as in Equations (33) and (34), respectively:
$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} \left( f_i(x) - \hat{f}_i(x) \right)^2}$   (33)
$\mathrm{MAE} = \dfrac{1}{N}\sum_{i=1}^{N} \left| f_i(x) - \hat{f}_i(x) \right|$,   (34)
where N is the total number of data points; f i ( x ) is the actual value at point i; and f ^ i ( x ) is the forecasting value at point i.
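The three accuracy indexes can be computed directly from Equations (28), (33), and (34), as in the following sketch.

```python
import numpy as np

def mape(actual, forecast):
    """Equation (28): mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

def rmse(actual, forecast):
    """Equation (33): root mean square error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mae(actual, forecast):
    """Equation (34): mean absolute error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))
```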

3.2.2. Forecasting Performance Improvement Tests

To demonstrate the significance of the forecasting performance of the proposed model, Diebold and Mariano [48] and Derrac et al. [49] suggest that, for a small sample size (24-h load forecasting), the Wilcoxon signed-rank test [50] is suitable; thus, it is applied here. For the same data size, the Wilcoxon test detects the significance of the difference in central tendency between the forecasting errors of two models. Let $d_i$ be the difference between the absolute forecasting errors of the two models on the ith forecasting value, $R^+$ the sum of ranks for which $d_i > 0$, and $R^-$ the sum of ranks for which $d_i < 0$. If $d_i = 0$, that comparison is removed and the sample size is decreased accordingly. The Wilcoxon test statistic, W, is calculated as in Equation (37):
$W = \min\{R^+, R^-\}$.   (37)
If W is smaller than or equal to the critical value of the Wilcoxon distribution for the given sample size, then the null hypothesis (equal performance of the two compared forecasting models) cannot be accepted, i.e., the proposed model achieves a significant improvement.
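A minimal sketch of the test statistic in Equation (37), computed from the absolute forecasting errors of two models over the same test hours; comparing W with the one-tailed critical values quoted in Section 3.3.2 (e.g., W** = 81 at α = 0.025 and W* = 91 at α = 0.05) decides significance. The average-rank handling of ties is an implementation assumption.

```python
import numpy as np

def wilcoxon_w(abs_errors_a, abs_errors_b):
    """Equation (37): W = min(R+, R-) for the signed-rank test comparing two
    models' absolute forecasting errors over the same test points."""
    d = np.asarray(abs_errors_a, float) - np.asarray(abs_errors_b, float)
    d = d[d != 0]                                      # drop zero differences, reduce sample size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0    # 1-based ranks of |d|
    for v in np.unique(np.abs(d)):                     # average ranks for tied |d| values
        mask = np.abs(d) == v
        ranks[mask] = ranks[mask].mean()
    r_plus = ranks[d > 0].sum()
    r_minus = ranks[d < 0].sum()
    return min(r_plus, r_minus)

# Usage (hypothetical error arrays): significant = wilcoxon_w(err_cqfoa, err_cqpso) <= 81
```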

3.3. The Forecasting Results of the LS-SVR-CQFOA Model

3.3.1. Parameter Setting of the CQFOA Algorithm

The parameters of the proposed CQFOA algorithm for the numerical example are set as follows: the population size, popsize, is set to 200; the maximum number of iterations, gen-max, is set to 1000; and the control coefficient of the chaotic disturbance, $N_{GCP}$, is set to 15. The two parameters of the LS-SVR model are searched within $\gamma \in [0, 1000]$ and $\sigma \in [0, 500]$, respectively. The iteration time of each algorithm is set to be the same to ensure the reliability of the forecasting results.

3.3.2. Results and Analysis

Since the CQPSO, CQTS, CQGA, and CQBA algorithms have been used to determine the parameters of SVR-based load forecasting models in [36,37,38,39], these existing algorithms are also hybridized with an LS-SVR model here to provide forecasting values for comparison with the proposed model. The alternative models include LS-SVR-FOA, LS-SVR-QFOA, LS-SVR-CQPSO (LS-SVR hybridized with the chaotic quantum particle swarm optimization algorithm [36]), LS-SVR-CQTS (LS-SVR hybridized with the chaotic quantum Tabu search algorithm [37]), LS-SVR-CQGA (LS-SVR hybridized with the chaotic quantum genetic algorithm [38]), and LS-SVR-CQBA (LS-SVR hybridized with the chaotic quantum bat algorithm [39]). To compare the forecasting performance of the LS-SVR-based models comprehensively, this article also selects the BPNN method as a contrast model. The parameters of the LS-SVR model are selected by the CQPSO, CQTS, CQGA, CQBA, FOA, QFOA, and CQFOA algorithms, respectively. The suitable parameters of all models for the IDAS 2014, GEFCom2014 (Jan.), and GEFCom2014 (July) data are detailed in Table 4, Table 5 and Table 6, respectively.
Based on the same training settings, another representative model, the back-propagation neural network (BPNN), is compared with the proposed model. The forecasting results of the models mentioned above and the actual values for IDAS 2014, GEFCom2014 (Jan.), and GEFCom2014 (July) are given in Figure 2, Figure 3 and Figure 4, respectively. The figures indicate that the proposed LS-SVR-CQFOA model achieves better performance than the other alternative models, i.e., its forecasts are closer to the actual load values.
Table 7, Table 8 and Table 9 indicate the evaluation results from different forecasting accuracy indexes for IDAS 2014, GEFCom2014 (Jan.) and GEFCom2014 (July), respectively. For Table 7, the proposed LS-SVR-CQFOA model achieves smaller values for all employed accuracy indexes than the seven other models: RMSE (14.10), MAPE (2.21%), and MAE (13.88), respectively. For Table 8, similarly, the proposed LS-SVR-CQFOA model also achieves smaller values for all employed accuracy indexes compared to the seven other models: RMSE (40.62), MAPE (1.02%), and MAE (39.76), respectively. Similarly in Table 9, the proposed LS-SVR-CQFOA model also achieves smaller values for all employed accuracy indexes than the other seven models: RMSE (38.70), MAPE (1.01%), and MAE (37.48), respectively. The details of the analysis results are as follows.
Finally, to test the significance of the forecasting accuracy improvements of the proposed LS-SVR-CQFOA model, the Wilcoxon signed-rank test is conducted at two significance levels, α = 0.025 and α = 0.05, using a one-tailed test. The test results for the IDAS 2014, GEFCom2014 (Jan.), and GEFCom2014 (July) datasets are reported in Table 10, Table 11 and Table 12, respectively. In all three tables, the results demonstrate that the proposed LS-SVR-CQFOA model achieves significantly better forecasting performance than the other alternative models. For example, in the IDAS 2014 dataset, for LS-SVR-CQFOA vs. LS-SVR-CQPSO, the Wilcoxon statistic, W = 72, is smaller than the critical values, W** = 81 (under α = 0.025) and W* = 91 (under α = 0.05); thus we can conclude that the proposed LS-SVR-CQFOA model significantly outperforms the LS-SVR-CQPSO model. In addition, the p-value = 0.022 is also smaller than both α = 0.025 and α = 0.05, which supports the conclusion.

4. Discussion

Taking the IDAS 2014 dataset as an example, firstly, the forecasting results of the LS-SVR-based models are all closer to the actual load values than those of the BPNN model. This shows that LS-SVR-based models can simulate the nonlinear system of the microgrid load more accurately than the BPNN model, owing to their advantages in dealing with nonlinear problems.
Secondly, in Table 4, the FOA and QFOA algorithms achieve best solutions of (γ, σ) = (581, 109) and (γ, σ) = (638, 124), with forecasting errors of (RMSE = 15.93, MAPE = 2.48%, MAE = 15.63) and (RMSE = 14.87, MAPE = 2.32%, MAE = 14.61), respectively. However, the solution is further improved by the proposed CQFOA algorithm to (γ, σ) = (734, 104), with a more accurate forecasting performance (RMSE = 14.10, MAPE = 2.21%, MAE = 13.88). Similar results can also be observed for GEFCom2014 (Jan.) and GEFCom2014 (July) in Table 5 and Table 6, respectively. This illustrates that the proposed approach, i.e., hybridizing the FOA with the QCM and a chaotic mapping function to determine more appropriate parameters of an LS-SVR model, is feasible and improves the forecasting accuracy.
Comparing the LS-SVR-QFOA model with the LS-SVR-FOA model, the forecasting accuracy of the LS-SVR-QFOA model is superior. This demonstrates that the QCM empowers the fruit flies with quantum behaviors, i.e., the QFOA finds more appropriate parameters of an LS-SVR model than the FOA, thereby improving the forecasting accuracy over the LS-SVR-FOA model. For example, in Table 4, using the QCM in FOA improves the forecasting performance from (RMSE = 15.93, MAPE = 2.48%, MAE = 15.63) for the LS-SVR-FOA model to (RMSE = 14.87, MAPE = 2.32%, MAE = 14.61) for the LS-SVR-QFOA model. Similar results are demonstrated for GEFCom2014 (Jan.) and GEFCom2014 (July) in Table 5 and Table 6, respectively.
For the comparison between the LS-SVR-CQFOA and LS-SVR-QFOA models, the RMSE, MAPE, and MAE values of the LS-SVR-CQFOA model are smaller than those of the LS-SVR-QFOA model. This reveals that introducing the cat chaotic mapping function into QFOA plays a positive role in searching for appropriate parameters when the QFOA population is trapped in local optima, so that CQFOA finds more appropriate parameters. As shown in Table 4, employing CQFOA to select the parameters of the LS-SVR model markedly improves the performance from (RMSE = 14.87, MAPE = 2.32%, MAE = 14.61) for the LS-SVR-QFOA model to (RMSE = 14.10, MAPE = 2.21%, MAE = 13.88) for the LS-SVR-CQFOA model. Similar results are illustrated for GEFCom2014 (Jan.) and GEFCom2014 (July) in Table 5 and Table 6, respectively.
Comparing the time consumed during the parameter searching processes on the IDAS 2014, GEFCom2014 (Jan.), and GEFCom2014 (July) datasets, the proposed CQFOA requires less time than the CQGA and CQBA algorithms, but more than the CQPSO and CQTS algorithms. However, considering the time requirements of the actual application, the additional time compared with CQPSO (more than 7 s) and CQTS (more than 23 s) is acceptable.
Finally, some limitations should be noted. This paper only employs existing datasets to establish the proposed model, whereas electricity load patterns change with the season, month, week, and date. For real-world applications, this paper should serve as a good starting point to guide planners and decision-makers in establishing electricity load forecasting models that span seasons, months, and weeks to achieve more comprehensive results. Thus, our planned future research direction is to explore the feasibility of hybridizing more powerful optimization frameworks (e.g., chaotic mapping functions, the quantum computing mechanism, and hourly, daily, weekly, and monthly adjustment mechanisms) and novel meta-heuristic algorithms with an LS-SVR model, to overcome the drawbacks of evolutionary algorithms and achieve excellent forecasting accuracy.

5. Conclusions

This paper proposes a novel hybrid forecasting model by hybridizing an LS-SVR model with the QCM, the cat chaotic mapping function, and the FOA. The forecasting results show that the proposed model achieves better performance than the alternative forecasting models that hybridize a chaotic mapping function, the QCM, and other evolutionary algorithms with an LS-SVR-based model. Employing the cat chaotic mapping function to enrich the diversity of the searching scope and enhance the ergodicity of the population successfully avoids trapping in local optima. The results also prove that applying the QCM to overcome the limitations of the fruit fly's searching behavior empowers the fruit fly to undertake quantum searching behaviors, thereby achieving more satisfactory MEL forecasting results. The global chaotic perturbation strategy based on the cat mapping function enables the search to jump out of local minima when the QFOA population suffers from premature convergence, and also helps improve the forecasting performance.

Author Contributions

M.-W.L. and W.-C.H. conceived and designed the experiments; G.J. and Z.Y. performed the experiments; M.-W.L. and W.-C.H. analyzed the data and wrote the paper.

Funding

This research was funded by the National Natural Science Foundation of China (51509056); the Heilongjiang Province Natural Science Fund (E2017028); the Fundamental Research Funds for the Central Universities (HEUCFG201813); the Open Fund of the State Key Laboratory of Coastal and Offshore Engineering (LP1610); Heilongjiang Sanjiang Project Administration Scientific Research and Experiments (SGZL/KY-08); and the Jiangsu Distinguished Professor Project (no. 9213618401), Jiangsu Normal University, Jiangsu Provincial Department of Education, China.

Acknowledgments

Ming-Wei Li, Jing Geng, and Yang Zhang acknowledge the support from the project grants: the National Natural Science Foundation of China (51509056); the Heilongjiang Province Natural Science Fund (E2017028); the Fundamental Research Funds for the Central Universities (HEUCFG201813); the Open Fund of the State Key Laboratory of Coastal and Offshore Engineering (LP1610); and Heilongjiang Sanjiang Project Administration Scientific Research and Experiments (SGZL/KY-08). Wei-Chiang Hong acknowledges the support from the Jiangsu Distinguished Professor Project (no. 9213618401) of Jiangsu Normal University, Jiangsu Provincial Department of Education, China.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maçaira, P.M.; Souza, R.C.; Oliveira, F.L.C. Modelling and forecasting the residential electricity consumption in Brazil with pegels exponential smoothing techniques. Procedia Comput. Sci. 2015, 55, 328–335. [Google Scholar] [CrossRef]
  2. Pappas, S.S.; Ekonomou, L.; Karampelas, P.; Karamousantas, D.C.; Katsikas, S.K.; Chatzarakis, G.E.; Skafidas, P.D. Electricity demand load forecasting of the Hellenic power system using an ARMA model. Electr. Power Syst. Res. 2010, 80, 256–264. [Google Scholar] [CrossRef]
  3. Dudek, G. Pattern-based local linear regression models for short-term load forecasting. Electr. Power Syst. Res. 2016, 130, 139–147. [Google Scholar] [CrossRef]
  4. Chen, Y.; Luh, P.B.; Guan, C.; Zhao, Y.; Michel, L.D.; Coolbeth, M.A.; Friedland, P.B.; Rourke, S.J. Short-term load forecasting: Similar day-based wavelet neural networks. IEEE Trans. Power Syst. 2010, 25, 322–330. [Google Scholar] [CrossRef]
  5. Li, S.; Wang, P.; Goel, L. Short-term load forecasting by wavelet transform and evolutionary extreme learning machine. Electr. Power Syst. Res. 2015, 122, 96–103. [Google Scholar] [CrossRef]
  6. Fan, G.F.; Wang, A.; Hong, W.C. Combining grey model and self-adapting intelligent grey model with genetic algorithm and annual share changes in natural gas demand forecasting. Energies 2018, 11, 1625. [Google Scholar] [CrossRef]
  7. Ma, X.; Liu, Z. Application of a novel time-delayed polynomial grey model to predict the natural gas consumption in China. J. Comput. Appl. Math. 2017, 324, 17–24. [Google Scholar] [CrossRef]
  8. Lou, C.W.; Dong, M.C. A novel random fuzzy neural networks for tackling uncertainties of electric load forecasting. Int. J. Electr. Power Energy Syst. 2015, 73, 34–44. [Google Scholar] [CrossRef]
  9. Ertugrul, Ö.F. Forecasting electricity load by a novel recurrent extreme learning machines approach. Int. J. Electr. Power Energy Syst. 2016, 78, 429–435. [Google Scholar] [CrossRef]
  10. Geng, J.; Huang, M.L.; Li, M.W.; Hong, W.C. Hybridization of seasonal chaotic cloud simulated annealing algorithm in a SVR-based load forecasting model. Neurocomputing 2015, 151, 1362–1373. [Google Scholar] [CrossRef]
  11. Hooshmand, R.A.; Amooshahi, H.; Parastegari, M. A hybrid intelligent algorithm Based short-term load forecasting approach. Int. J. Electr. Power Energy Syst. 2013, 45, 313–324. [Google Scholar] [CrossRef]
  12. Niu, D.X.; Shi, H.; Wu, D.D. Short-term load forecasting using Bayesian neural networks learned by hybrid Monte Carlo algorithm. Appl. Soft Comput. 2012, 12, 1822–1827. [Google Scholar] [CrossRef]
  13. Hanmandlu, M.; Chauhan, B.K. Load forecasting using hybrid models. IEEE Trans. Power Syst. 2011, 26, 20–29. [Google Scholar] [CrossRef]
  14. Mahmoud, T.S.; Habibi, D.; Hassan, M.Y.; Bass, O. Modelling self-optimised short term load forecasting for medium voltage loads using tunning fuzzy systems and artificial neural networks. Energy Convers. Manag. 2015, 106, 1396–1408. [Google Scholar] [CrossRef]
  15. Suykens, J.A.K.; Vandewalle, J.; De Moor, B. Optimal control by least squares support vector machines. Neural Netw. 2001, 14, 23–35. [Google Scholar] [CrossRef] [Green Version]
  16. Sankar, R.; Sapankevych, N.I. Time series prediction using support vector machines: A survey. IEEE Comput. Intell. Mag. 2009, 4, 24–38. [Google Scholar]
  17. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995. [Google Scholar]
  18. Hong, W.C. Electric load forecasting by seasonal recurrent LS-SVR (support vector regression) with chaotic artificial bee colony algorithm. Energy 2011, 36, 5568–5578. [Google Scholar] [CrossRef]
  19. Fan, G.F.; Peng, L.L.; Zhao, X.; Hong, W.C. Applications of hybrid EMD with PSO and GA for an SVR-based load forecasting model. Energies 2017, 10, 1713. [Google Scholar] [CrossRef]
  20. Suykens, J.A.K.; Vandewalle, J. Least squares support vector machines classifiers. Neural Netw. Lett. 1999, 19, 293–300. [Google Scholar] [CrossRef]
21. Wang, J.; Hu, J. A robust combination approach for short-term wind speed forecasting and analysis—Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model. Energy 2015, 93, 41–56.
22. Hong, W.C.; Dong, Y.; Zhang, W.; Chen, L.Y.; Panigrahi, B.K. Cyclic electric load forecasting by seasonal LS-SVR with chaotic genetic algorithm. Int. J. Electr. Power Energy Syst. 2013, 44, 604–614.
23. Ju, F.Y.; Hong, W.C. Application of seasonal SVR with chaotic gravitational search algorithm in electricity forecasting. Appl. Math. Model. 2013, 37, 9643–9651.
24. Fan, G.; Peng, L.L.; Hong, W.C.; Sun, F. Electric load forecasting by the SVR model with differential empirical mode decomposition and auto regression. Neurocomputing 2016, 173, 958–970.
25. Pan, W.T. Fruit Fly Optimization Algorithm; Tsanghai Publishing: Taipei, Taiwan, China, 2011.
26. Pan, W.T. A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowl.-Based Syst. 2012, 26, 69–74.
27. Mitić, M.; Vuković, N.; Petrović, M.; Miljković, Z. Chaotic fruit fly optimization algorithm. Knowl.-Based Syst. 2015, 89, 446–458.
28. Wu, L.; Liu, Q.; Tian, X.; Zhang, J.; Xiao, W. A new improved fruit fly optimization algorithm IAFOA and its application to solve engineering optimization problems. Knowl.-Based Syst. 2018, 144, 153–173.
29. Han, X.; Liu, Q.; Wang, H.; Wang, L. Novel fruit fly optimization algorithm with trend search and co-evolution. Knowl.-Based Syst. 2018, 141, 1–17.
30. Zhang, X.; Lu, X.; Jia, S.; Li, X. A novel phase angle-encoded fruit fly optimization algorithm with mutation adaptation mechanism applied to UAV path planning. Appl. Soft Comput. 2018, 70, 371–388.
31. Han, S.Z.; Pan, W.T.; Zhou, Y.Y.; Liu, Z.L. Construct the prediction model for China agricultural output value based on the optimization neural network of fruit fly optimization algorithm. Future Gener. Comput. Syst. 2018, 86, 663–669.
32. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483.
33. Narayanan, A.; Moore, M. Quantum-inspired genetic algorithms. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 61–66.
34. Han, K.H.; Kim, J.H. Genetic quantum algorithm and its application to combinatorial optimization problem. In Proceedings of the 2000 Congress on Evolutionary Computation, La Jolla, CA, USA, 16–19 July 2000; pp. 1354–1360.
35. Han, K.H.; Kim, J.H. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Trans. Evol. Comput. 2002, 6, 580–593.
36. Huang, M.L. Hybridization of chaotic quantum particle swarm optimization with SVR in electric demand forecasting. Energies 2016, 9, 426.
37. Lee, C.W.; Lin, B.Y. Application of hybrid quantum tabu search with support vector regression for load forecasting. Energies 2016, 9, 873.
38. Lee, C.W.; Lin, B.Y. Applications of the chaotic quantum genetic algorithm with support vector regression in load forecasting. Energies 2017, 10, 1832.
39. Li, M.W.; Geng, J.; Wang, S.; Hong, W.C. Hybrid chaotic quantum bat algorithm with SVR in electric load forecasting. Energies 2017, 10, 2180.
40. Shi, D.Y.; Lu, L.J. A judge model of the impact of lane closure incident on individual vehicles on freeways based on RFID technology and FOA-GRNN method. J. Wuhan Univ. Technol. 2012, 34, 63–68.
41. Yuan, X.; Wang, P.; Yuan, Y.; Huang, Y.; Zhang, X. A new quantum inspired chaotic artificial bee colony algorithm for optimal power flow problem. Energy Convers. Manag. 2015, 100, 1–9.
42. Peng, A.N. Particle swarm optimization algorithm based on chaotic theory and adaptive inertia weight. J. Nanoelectron. Optoelectron. 2017, 12, 404–408.
43. Li, M.W.; Geng, J.; Hong, W.C.; Chen, Z.Y. A novel approach based on the Gauss-vLS-SVR with a new hybrid evolutionary algorithm and input vector decision method for port throughput forecasting. Neural Comput. Appl. 2017, 28, S621–S640.
44. Li, M.W.; Hong, W.C.; Geng, J.; Wang, J. Berth and quay crane coordinated scheduling using chaos cloud particle swarm optimization algorithm. Neural Comput. Appl. 2017, 28, 3163–3182.
45. Xiong, Y. Study on Short-Term Micro-Grid Load Forecasting Based on IGA-PSO RBF Neural Network. Master's Thesis, South China University of Technology, Guangzhou, China, 2016.
46. Hong, T.; Pinson, P.; Fan, S.; Zareipour, H.; Troccoli, A.; Hyndman, R.J. Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond. Int. J. Forecast. 2016, 32, 896–913.
47. Hong, W.C. Application of seasonal SVR with chaotic immune algorithm in traffic flow forecasting. Neural Comput. Appl. 2012, 21, 583–593.
48. Diebold, F.X.; Mariano, R.S. Comparing predictive accuracy. J. Bus. Econ. Stat. 1995, 13, 134–144.
49. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
50. Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1945, 1, 80–83.
Figure 1. Chaotic quantum FOA algorithm flowchart.
Figure 2. Forecasting values of LS-SVR-CQFOA and other models for IDAS 2014.
Figure 3. Forecasting values of LS-SVR-CQFOA and other models for GEFCom2014 (Jan.).
Figure 4. Forecasting values of LS-SVR-CQFOA and other models for GEFCom2014 (July).
Table 1. Normalization values of load data for IDAS 2014.

Time    14 July   15 July   16 July   17 July   18 July   19 July   20 July
01:00   0.1617    0.1245    0.1526    0.2246    0.1870    0.3354    0.3669
02:00   0.0742    0.0000    0.0826    0.1590    0.1386    0.1924    0.1878
03:00   0.0000    0.0109    0.0000    0.0395    0.0381    0.1022    0.0919
04:00   0.0071    0.1278    0.0937    0.0000    0.0000    0.0000    0.0000
05:00   0.0531    0.1944    0.1419    0.1106    0.1218    0.1570    0.1770
06:00   0.0786    0.0611    0.0920    0.1428    0.1728    0.2558    0.2497
07:00   0.2636    0.1786    0.2724    0.3096    0.3788    0.4038    0.3943
08:00   0.3709    0.4417    0.3464    0.3586    0.4361    0.5129    0.4692
09:00   0.6872    0.5894    0.6549    0.7426    0.7970    0.6051    0.5829
10:00   0.9520    0.8746    0.9028    0.9055    0.9842    0.7632    0.7530
11:00   1.0000    0.9342    0.9650    0.9683    1.0000    0.8130    0.8332
12:00   0.9632    0.9730    0.9087    0.9217    0.9450    0.8935    0.8803
13:00   0.8552    1.0000    0.8135    0.8256    0.8821    0.8077    0.8122
14:00   0.8288    0.9152    0.9257    0.7377    0.8370    0.7185    0.7410
15:00   0.8224    0.8104    0.7663    0.7468    0.7961    0.6037    0.6882
16:00   0.8655    0.9448    0.8542    0.8099    0.8420    0.7347    0.7567
17:00   0.8552    0.7966    0.8340    0.8104    0.8323    0.7593    0.8439
18:00   0.9440    0.8809    0.9155    0.8976    0.9567    0.9286    0.9539
19:00   0.9574    0.8677    1.0000    0.9779    0.9694    0.9734    0.9741
20:00   0.9746    0.9693    0.9657    1.0000    0.9808    1.0000    1.0000
21:00   0.9372    0.8784    0.9236    0.9419    0.9546    0.9575    0.9664
22:00   0.8704    0.7697    0.7977    0.7889    0.8417    0.8634    0.8824
23:00   0.6328    0.5519    0.7193    0.6425    0.6655    0.5858    0.6035
24:00   0.3127    0.2114    0.2794    0.2559    0.3357    0.1080    0.0975
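The values in Tables 1–3 lie in [0, 1], with 0.0000 and 1.0000 both attained within each daily column, which is consistent with a per-day min–max normalization of the raw hourly loads. The sketch below illustrates that transformation only; the raw loads in it are hypothetical placeholders rather than values from the paper, and the paper's exact scaling convention may differ.

```python
import numpy as np

def min_max_normalize(loads):
    """Scale a 1-D array of hourly loads into [0, 1] via min-max normalization."""
    loads = np.asarray(loads, dtype=float)
    lo, hi = loads.min(), loads.max()
    return (loads - lo) / (hi - lo)

# Hypothetical raw hourly loads (kW) for one day; not data from the paper.
raw_day = np.array([310.0, 295.0, 280.0, 282.0, 290.0, 296.0, 330.0, 350.0,
                    410.0, 460.0, 470.0, 462.0, 442.0, 437.0, 436.0, 444.0,
                    442.0, 459.0, 461.0, 465.0, 458.0, 445.0, 400.0, 340.0])
print(min_max_normalize(raw_day))  # 0.0 at the daily minimum, 1.0 at the daily maximum
```

Applying this mapping column by column would reproduce the layout of Table 1 from the raw daily load series.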
Table 2. Normalization values of load data for GEFCom2014 (Jan.).

Time    1 January  2 January  3 January  4 January  5 January  6 January  7 January
01:00   0.1769     0.0568     0.1127     0.1314     0.1648     0.0769     0.0532
02:00   0.0877     0.0206     0.0338     0.0480     0.0765     0.0222     0.0123
03:00   0.0234     0.0000     0.0000     0.0000     0.0087     0.0000     0.0000
04:00   0.0000     0.0084     0.0035     0.0044     0.0063     0.0076     0.0140
05:00   0.0175     0.0746     0.0634     0.0497     0.0268     0.0565     0.0862
06:00   0.0863     0.2155     0.2134     0.1368     0.0938     0.2122     0.2569
07:00   0.1835     0.4382     0.4345     0.3082     0.2090     0.4740     0.5389
08:00   0.2763     0.5802     0.5894     0.4813     0.3517     0.6277     0.6503
09:00   0.4028     0.6453     0.6972     0.6705     0.5039     0.6849     0.6581
10:00   0.5212     0.7110     0.7683     0.7860     0.6136     0.7300     0.6693
11:00   0.5819     0.7455     0.8106     0.8073     0.6333     0.7446     0.6861
12:00   0.6016     0.7751     0.8042     0.7726     0.6080     0.7573     0.6900
13:00   0.6089     0.7684     0.7592     0.6936     0.5623     0.7300     0.6788
14:00   0.5789     0.7712     0.7176     0.5950     0.5221     0.7078     0.6754
15:00   0.5563     0.7634     0.6887     0.5400     0.4937     0.6842     0.6676
16:00   0.5768     0.7556     0.6852     0.5560     0.5560     0.7109     0.6928
17:00   0.8165     0.8836     0.8479     0.7913     0.8060     0.8558     0.8411
18:00   1.0000     1.0000     1.0000     1.0000     1.0000     1.0000     1.0000
19:00   0.9810     0.9605     0.9845     0.9423     0.9416     0.9778     0.9955
20:00   0.8984     0.8686     0.8859     0.8188     0.8036     0.8920     0.9379
21:00   0.7807     0.7723     0.7908     0.7087     0.6672     0.7903     0.8489
22:00   0.5885     0.6114     0.6289     0.4982     0.4219     0.6112     0.6933
23:00   0.3596     0.4399     0.4303     0.2860     0.1774     0.4180     0.4980
24:00   0.1923     0.2957     0.2542     0.0719     0.0000     0.2764     0.3553
Table 3. Normalization values of load data for GEFCom2014 (July).

Time    1 July    2 July    3 July    4 July    5 July    6 July    7 July
01:00   0.1562    0.1612    0.1583    0.2747    0.2636    0.1699    0.1063
02:00   0.0728    0.0882    0.0763    0.1302    0.1266    0.0857    0.0394
03:00   0.0238    0.0348    0.0232    0.0456    0.0554    0.0302    0.0054
04:00   0.0000    0.0000    0.0000    0.0000    0.0063    0.0000    0.0000
05:00   0.0222    0.0186    0.0181    0.0190    0.0000    0.0021    0.0302
06:00   0.0945    0.0957    0.1040    0.0589    0.0554    0.0154    0.1187
07:00   0.2811    0.2781    0.3143    0.2091    0.1872    0.0955    0.2972
08:00   0.4692    0.4736    0.5172    0.4316    0.4153    0.2521    0.4903
09:00   0.6244    0.6212    0.6637    0.6873    0.7008    0.4459    0.6424
10:00   0.7396    0.7516    0.7733    0.8878    0.9017    0.6131    0.7476
11:00   0.8306    0.8479    0.8722    0.9734    0.9561    0.7163    0.8425
12:00   0.8979    0.9209    0.9389    1.0000    0.9561    0.7570    0.9051
13:00   0.9378    0.9673    0.9678    0.9876    0.9111    0.7809    0.9434
14:00   0.9737    1.0000    0.9938    0.9287    0.8515    0.7928    0.9865
15:00   0.9879    0.9829    1.0000    0.8546    0.8243    0.8111    0.9995
16:00   0.9970    0.9290    0.9881    0.8032    0.8462    0.8574    1.0000
17:00   1.0000    0.8564    0.9423    0.8004    0.9195    0.9199    0.9962
18:00   0.9960    0.8101    0.9005    0.8279    0.9937    0.9853    0.9833
19:00   0.9687    0.7567    0.8672    0.8203    1.0000    1.0000    0.9579
20:00   0.9176    0.6907    0.7756    0.7386    0.9435    0.9579    0.9213
21:00   0.9044    0.6489    0.7377    0.6787    0.9362    0.9417    0.8975
22:00   0.8291    0.5461    0.6354    0.5428    0.8692    0.8687    0.7875
23:00   0.6138    0.3572    0.4262    0.3279    0.6883    0.6426    0.5701
24:00   0.4095    0.1678    0.2272    0.0913    0.4341    0.4213    0.3927
Table 4. LS-SVR parameters, MAPE, and computing times of CQFOA and other algorithms for IDAS 2014.

Optimization Algorithms   γ     σ     MAPE of Validation (%)   Computing Times (s)
LS-SVR-CQPSO [36]         685   125   1.17                     129
LS-SVR-CQTS [37]          357   118   1.13                     113
LS-SVR-CQGA [38]          623   137   1.11                     152
LS-SVR-CQBA [39]          469   116   1.07                     227
LS-SVR-FOA                581   109   1.29                     87
LS-SVR-QFOA               638   124   1.32                     202
LS-SVR-CQFOA              734   104   1.02                     136
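For context, γ in Tables 4–6 is the LS-SVR regularization constant and σ is the RBF kernel width, both selected by the respective algorithm against the validation MAPE. The following NumPy sketch illustrates the standard LS-SVR dual formulation with an RBF kernel under common conventions (e.g., K(x, z) = exp(-||x − z||²/(2σ²))); it is an illustrative re-implementation, not the authors' code, and the toy inputs are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 * sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    """Solve the LS-SVR linear system for the bias b and dual coefficients alpha."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma      # regularization enters as I / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                 # bias b, dual coefficients alpha

def lssvr_predict(X_train, alpha, b, sigma, X_new):
    """Forecast new points from the fitted dual representation."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage with synthetic data; gamma=734 and sigma=104 echo Table 4's CQFOA row.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(48, 5))        # e.g., five lagged normalized loads
y = X @ np.array([0.4, 0.2, 0.1, 0.2, 0.1]) + 0.05 * rng.standard_normal(48)
b, alpha = lssvr_fit(X, y, gamma=734, sigma=104)
print(lssvr_predict(X, alpha, b, 104, X[:3]))
```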
Table 5. Parameter combinations of LS-SVR determined by CQFOA and other algorithms for GEFCom2014 (Jan.).

Optimization Algorithms   γ     σ    MAPE of Validation (%)   Computation Times (s)
LS-SVR-CQPSO [36]         574   87   0.98                     134
LS-SVR-CQTS [37]          426   68   1.02                     109
LS-SVR-CQGA [38]          653   98   0.95                     155
LS-SVR-CQBA [39]          501   82   0.92                     31
LS-SVR-FOA                482   94   1.54                     82
LS-SVR-QFOA               387   79   1.13                     205
LS-SVR-CQFOA              688   88   0.86                     132
Table 6. Parameter combinations of LS-SVR determined by CQFOA and other algorithms for GEFCom2014 (July).

Optimization Algorithms   γ     σ    MAPE of Validation (%)   Computation Times (s)
LS-SVR-CQPSO [36]         375   92   0.96                     139
LS-SVR-CQTS [37]          543   59   1.04                     107
LS-SVR-CQGA [38]          684   62   0.98                     159
LS-SVR-CQBA [39]          498   90   0.95                     239
LS-SVR-FOA                413   48   1.51                     79
LS-SVR-QFOA               384   83   1.07                     212
LS-SVR-CQFOA              482   79   0.79                     147
Table 7. Forecasting indexes of LS-SVR-CQFOA and other models for IDAS 2014.

Compared Models      RMSE    MAPE (%)   MAE
BPNN                 24.89   3.92       24.55
LS-SVR-CQPSO [36]    14.40   2.27       14.21
LS-SVR-CQTS [37]     14.50   2.26       14.24
LS-SVR-CQGA [38]     14.41   2.24       14.13
LS-SVR-CQBA [39]     14.45   2.25       14.18
LS-SVR-FOA           15.90   2.48       15.62
LS-SVR-QFOA          15.03   2.32       14.69
LS-SVR-CQFOA         14.10   2.21       13.88
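The indexes in Tables 7–9 are the root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) of the test-period forecasts. The sketch below shows how these three indexes are conventionally computed; the sample arrays are hypothetical, and averaging over the full forecast horizon is assumed.

```python
import numpy as np

def forecasting_indexes(actual, forecast):
    """Return (RMSE, MAPE in %, MAE) for two equally sized arrays."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    err = actual - forecast
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / actual))   # assumes no zero actual values
    mae = np.mean(np.abs(err))
    return rmse, mape, mae

# Hypothetical hourly loads (kW) and forecasts; not the paper's data.
actual   = np.array([620.0, 655.0, 700.0, 742.0, 731.0, 690.0])
forecast = np.array([611.0, 668.0, 693.0, 735.0, 745.0, 684.0])
print(forecasting_indexes(actual, forecast))
```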
Table 8. Forecasting indexes of LS-SVR-CQFOA and other models for GEFCom2014 (Jan.).

Compared Models      RMSE    MAPE (%)   MAE
BPNN                 92.30   2.34       90.74
LS-SVR-CQPSO [36]    51.46   1.31       50.69
LS-SVR-CQTS [37]     50.85   1.27       49.70
LS-SVR-CQGA [38]     46.36   1.16       45.31
LS-SVR-CQBA [39]     42.76   1.07       41.80
LS-SVR-FOA           75.55   1.89       73.88
LS-SVR-QFOA          59.74   1.47       57.96
LS-SVR-CQFOA         40.62   1.02       39.76
Table 9. Forecasting indexes of LS-SVR-CQFOA and other models for GEFCom2014 (July).

Compared Models      RMSE    MAPE (%)   MAE
BPNN                 88.24   2.31       85.51
LS-SVR-CQPSO [36]    51.03   1.33       49.35
LS-SVR-CQTS [37]     45.73   1.22       44.68
LS-SVR-CQGA [38]     46.18   1.19       44.46
LS-SVR-CQBA [39]     40.75   1.09       39.85
LS-SVR-FOA           72.00   1.88       69.69
LS-SVR-QFOA          56.33   1.49       54.81
LS-SVR-CQFOA         38.70   1.01       37.48
Table 10. Results of Wilcoxon signed-rank test for IDAS 2014.

Compared Models                   T0.025 = 81   T0.05 = 91   p-Value
LS-SVR-CQFOA vs. BPNN             0 T           0 T          0.000 **
LS-SVR-CQFOA vs. LS-SVR-CQPSO     72 T          72 T         0.022 **
LS-SVR-CQFOA vs. LS-SVR-CQTS      64 T          64 T         0.017 **
LS-SVR-CQFOA vs. LS-SVR-CQGA      67 T          67 T         0.018 **
LS-SVR-CQFOA vs. LS-SVR-CQBA      60 T          60 T         0.012 **
LS-SVR-CQFOA vs. LS-SVR-FOA       50 T          50 T         0.009 **
LS-SVR-CQFOA vs. LS-SVR-QFOA      68 T          68 T         0.019 **
T denotes that the LS-SVR-CQFOA model significantly outperforms the compared model. ** implies the p-value is lower than α = 0.025; * implies the p-value is lower than α = 0.05.
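Tables 10–12 report the Wilcoxon signed-rank test [50] on the paired forecasting errors of the proposed model and each competitor, with the test statistic judged against the critical values T0.025 = 81 and T0.05 = 91. A minimal SciPy sketch of such a paired test follows; the error vectors are synthetic, and the paper's exact pairing and tail convention are assumptions here.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Hypothetical absolute forecasting errors of two models over 24 hourly points.
err_cqfoa = np.abs(rng.normal(10.0, 3.0, size=24))
err_other = err_cqfoa + np.abs(rng.normal(2.0, 1.0, size=24))  # deliberately worse

# Paired Wilcoxon signed-rank test on the hour-by-hour error differences.
stat, p_value = wilcoxon(err_cqfoa, err_other)
print(f"W = {stat:.1f}, p = {p_value:.4f}")

# Compare W with the small-sample critical values used in Tables 10-12;
# a statistic at or below the critical value rejects the equal-accuracy hypothesis.
for label, crit in (("T0.025", 81), ("T0.05", 91)):
    print(f"{label}: {'significant' if stat <= crit else 'not significant'}")
```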
Table 11. Results of Wilcoxon signed-rank test for GEFCom2014 (Jan.).

Compared Models                   T0.025 = 81   T0.05 = 91   p-Value
LS-SVR-CQFOA vs. BPNN             0 T           0 T          0.000 **
LS-SVR-CQFOA vs. LS-SVR-CQPSO     74 T          74 T         0.023 **
LS-SVR-CQFOA vs. LS-SVR-CQTS      75 T          75 T         0.024 **
LS-SVR-CQFOA vs. LS-SVR-CQGA      78 T          78 T         0.026 **
LS-SVR-CQFOA vs. LS-SVR-CQBA      80 T          80 T         0.027 **
LS-SVR-CQFOA vs. LS-SVR-FOA       65 T          65 T         0.018 **
LS-SVR-CQFOA vs. LS-SVR-QFOA      72 T          72 T         0.022 **
T denotes that the LS-SVR-CQFOA model significantly outperforms the compared model. ** implies the p-value is lower than α = 0.025; * implies the p-value is lower than α = 0.05.
Table 12. Results of Wilcoxon signed-rank test for GEFCom2014 (July).

Compared Models                   T0.025 = 81   T0.05 = 91   p-Value
LS-SVR-CQFOA vs. BPNN             0 T           0 T          0.000 **
LS-SVR-CQFOA vs. LS-SVR-CQPSO     73 T          73 T         0.023 **
LS-SVR-CQFOA vs. LS-SVR-CQTS      76 T          76 T         0.024 **
LS-SVR-CQFOA vs. LS-SVR-CQGA      77 T          77 T         0.026 **
LS-SVR-CQFOA vs. LS-SVR-CQBA      79 T          79 T         0.027 **
LS-SVR-CQFOA vs. LS-SVR-FOA       65 T          65 T         0.018 **
LS-SVR-CQFOA vs. LS-SVR-QFOA      71 T          71 T         0.022 **
T denotes that the LS-SVR-CQFOA model significantly outperforms the compared model. ** implies the p-value is lower than α = 0.025; * implies the p-value is lower than α = 0.05.
