*Article* **A General Intelligent Optimization Algorithm Combination Framework with Application in Economic Load Dispatch Problems**

#### **Jinghua Zhang \* and Ze Dong**

Hebei Engineering Research Center of Simulation Optimized Control for Power Generation, North China Electric Power University, Baoding 071003, China; dongze33@126.com

**\*** Correspondence: 52151053@ncepu.edu.cn; Tel.: +86-139-3086-8002

Received: 17 May 2019; Accepted: 4 June 2019; Published: 6 June 2019

**Abstract:** Recently, research on population-based intelligent optimization algorithms has turned to combining multiple algorithms or algorithm components in order to improve the performance and robustness of an optimization algorithm. This paper introduces this idea into real-world application. Different from traditional algorithm research, this paper implements the idea as a general framework. The combination of multiple algorithms or algorithm components is regarded as a complex multi-behavior population, and a unified multi-behavior combination model is proposed. A general agent-based algorithm framework is designed to support the model, under which various multi-behavior combination algorithms can be customized. The paper then customizes a multi-behavior combination algorithm and applies it to solve economic load dispatch problems. The algorithm has been tested on four test systems. The test results show that the multi-behavior combination idea is meaningful, which also indicates the significance of the framework.

**Keywords:** population-based intelligent optimization algorithm; multi-behavior combination; algorithm framework; economic load dispatch (ELD)

#### **1. Introduction**

Many real-world problems can be modeled as optimization tasks. Population-based intelligent optimization algorithms, such as evolutionary computation and swarm intelligence, are an important means of solving such optimization problems.

In the early stage, population-based intelligent optimization algorithms tended to be concise, and population intelligence was acquired through the simple behavior of individuals. However, some optimization problems in the real world are more complicated, and existing algorithms may expose their shortcomings. Economic load dispatch (ELD) is a typical optimization problem in power systems. The goal of ELD is to rationally arrange the power output of each generating unit in a power plant or power system so as to minimize fuel costs on the premise of meeting system load and operational constraints. Due to the valve point effect of the thermal generating units and the operational constraints, the ELD problem presents non-convex, high-dimensional, nonlinear, and discontinuous characteristics, which make the problem complicated. As the scale of the power system increases and the problem model becomes more sophisticated by considering more conditions, the optimization task becomes more difficult. In recent years, population-based intelligent optimization algorithms have been applied to solve ELD problems. Early research directly adopted classic intelligent optimization algorithms such as the genetic algorithm (GA) [1], particle swarm optimization (PSO) [2,3], or evolutionary programming (EP) [4]. Some studies improve the classical algorithms for the ELD problem, such as improved GA [5,6], quantum PSO (QPSO) [7], new PSO with local random search (NPSO-LRS) [8], and distributed Sobol PSO and TSA (DSPSO-TSA) [9]. Algorithms proposed in recent years have also been applied to ELD problems, such as ant colony optimization (ACO) [10], the differential evolution algorithm (DE) [11], the artificial immune system (AIS) [12], the modified artificial bee colony (MABC) [13], improved harmony search (IHS) [14], tournament-based harmony search (THS) [15], biogeography-based optimization (BBO) [16], chaotic teaching-learning-based optimization with Lévy flight (CTLBO) [17], and grey wolf optimization (GWO) [18]. Additionally, hybrid optimization algorithms have been proposed that combine multiple intelligent optimization algorithms or algorithm components to deal with the ELD problem, such as a hybrid PSO (HPSO) [19], the modified TLA (MTLA) [20], a DE-PSO method [21], a differential harmony search algorithm [22], and hybrid differential evolution with biogeography-based optimization (BBO-DE) [23].

The current research on population-based intelligent optimization algorithms presents a new trend. Novel algorithms proposed recently tend to be complicated, for example simulating more complicated biological populations, natural phenomena, or social organizations. A population no longer simply adopts one behavior; conversely, multiple behaviors are performed during the search process. In [24–27], individuals play different roles in the population and perform different behaviors; in [28–30], the population performs different operations step by step during the optimization task; or the search process is divided into different stages and the behavior of the population differs between stages [25]. In short, a research direction of current new algorithms is to use complex mechanisms to deal with diverse and complex real-world problems.

On the other hand, researchers try to improve algorithms through the combination of multiple algorithms or algorithm components with different characteristics. Because algorithms or components with different characteristics may be suitable for different problems, the robustness of the algorithm may be improved; furthermore, the combination of algorithms with different characteristics is likely to complement each other and produce better results. Therefore, an ensemble of multiple algorithms or algorithm components could make an optimization algorithm particularly efficient for complicated optimization problems [31].

Some studies combine multiple search operators in an algorithm. In the literature [24,25,27,28], different search operators are switched with specific probabilities. Some studies execute the operators in a specific order [29]. In [32], several operators are all calculated and the results with good fitness are selected. In the research on differential evolution algorithms, a variety of mutation operators have been proposed, and the adaptive ensemble of multiple mutation operators has attracted the interest of researchers [33–37]. Some studies have further explored the combination of multiple crossover operators and multiple mutation operators in differential evolution algorithms [38]. Beyond differential evolution, there are also studies on ensembles of multiple search operators based on other algorithms, such as PSO [39,40], ABC [41], BBO [42], and TLA [20]. There are also studies on ensembles of other algorithm components, such as individual topology [43,44] and constraint processing components [45]. In addition to the combination of algorithm components, there are studies with algorithm-level combinations, which directly combine or adaptively select different algorithms [46,47].

From the current research on improving existing algorithms or designing new ones, an important trend is to combine multiple search behaviors and construct high-level population running mechanisms. Therefore, in this paper, we introduce the idea of algorithm combination into application practice. Different from traditional algorithm design, this paper focuses on the ideas and methods of algorithm combination and implements them as a general framework. The no-free-lunch theorem [48] indicates that, no matter how an algorithm is improved, it cannot be better than all other algorithms on all problems. Therefore, an algorithm designed for one problem may not be suitable for another, and researchers may need to design a corresponding optimization algorithm for each type of optimization problem. Thus, a flexible framework may be useful, since various algorithms (or components) can be combined and different combination modes and strategies can be adopted, which brings diversity to algorithm design.

We adopt the idea of multi-behavior combination and propose a unified multi-behavior combination model. Then, an agent-based framework is designed to support the model. The framework has three functions: (1) existing algorithms and techniques can be extracted and reused as multiple behaviors; (2) the performance of an algorithm can be improved through an appropriate multi-behavior combination; and (3) through the framework, a corresponding combination algorithm can be customized for a specific problem. Finally, we customize a multi-behavior combination algorithm for power system economic load dispatch problems. The test results show the effectiveness of the multi-behavior combination algorithm and the feasibility of the framework.

The remainder of this paper is organized as follows. Section 2 defines an intelligent optimization algorithm combination model based on the idea of multi-behavior combination. Section 3 gives an agent-based general framework for the model. Section 4 defines the ELD problem and gives our algorithm customized with the framework. Section 5 presents tests on four classical ELD cases. Finally, Section 6 presents the conclusions.

#### **2. A Unified Multi-Behavior Combination Model for Intelligent Optimization Algorithm**

The ensemble of multiple algorithms or multiple algorithm components is a research hotspot of intelligent optimization algorithms. In this paper, we try to give a general algorithm combination framework that is intended to support various algorithm combination modes. Since both algorithms and algorithm components represent specific search behaviors, this kind of research tries to introduce different behaviors into the search process of a population. Therefore, this paper analyzes the characteristics of population-based intelligent optimization algorithms in order to determine a unified multi-behavior combination model.

#### *2.1. Multi-Behavior Combination*

Any population-based intelligent optimization algorithm requires a population with multiple individuals. The algorithm performs iterations in the search process, and the individuals of the population evolve according to the algorithm mechanism in each iteration. Some algorithms simulate complex multi-population models, in which multiple populations coexist and communicate through specific communication mechanisms. Therefore, a population-based intelligent optimization algorithm can be considered as a three-dimensional model, as shown in Figure 1: x represents the individual dimension of the population, y represents the iterative dimension of the loop, and z represents the parallel population dimension. The x-y plane forms an algorithm plane that can search independently. Multiple algorithm planes are parallel along the z-dimension and influence each other by exchanging individuals. A simple algorithm degenerates to a point in the z-dimension, meaning a single population, and has the same behavior in both the x- and y-dimensions, meaning that the algorithm has only one behavior. Current algorithm research tends to improve algorithm performance and robustness through a combination of multiple behaviors. From the three-dimensional model, it can be seen that different behaviors can be combined along each dimension, and a combination of behaviors in different dimensions embodies a different optimization idea that should bring different effects. This article divides multi-behavior combination into three levels:


(1) Individual level: the combination of multiple behaviors at the individual level means that each individual (or individual group) within one population may have its own behavior.

(2) Iteration level: the combination of multiple behaviors at the iteration level means that the behavior of the population may change between evolutionary generations.

(3) Population level: the combination of multiple behaviors at the population level means that each sub-population may have its own behavior. In this combination, each sub-population is independent, and a sub-population communication mechanism is needed.
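As a concrete illustration, the three combination levels can be captured in a small scheme descriptor. The following Python sketch is purely illustrative; the field names (`level`, `behaviors`, `group_sizes`, `migration_interval`, etc.) are our own assumptions, not identifiers from the paper's framework.

```python
def make_scheme(level, behaviors, **options):
    """Build a minimal combination-scheme record for a given level."""
    assert level in ("individual", "iteration", "population")
    scheme = {"level": level, "behaviors": list(behaviors)}
    scheme.update(options)  # level-specific settings, e.g. group sizes
    return scheme

# Individual level: each behavior owns a group of individuals.
s1 = make_scheme("individual", ["DE/rand/1", "PSO"], group_sizes=[30, 20])
# Iteration level: behaviors alternate over generations.
s2 = make_scheme("iteration", ["teach", "learn"], order="sequential")
# Population level: independent sub-populations plus a migration rule.
s3 = make_scheme("population", ["DE", "ABC"], migration_interval=10)
```

Such a descriptor is the kind of data a combination agent (Section 3.3) could interpret.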

**Figure 1.** A three-dimensional model for population-based intelligent optimization algorithm.

#### *2.2. Behavior*

Different algorithms or algorithm components are used to represent particular behaviors. The combination of multiple behaviors can be a combination of algorithms or of algorithm components. An algorithm component is a constituent element of an algorithm. For multi-behavior combination, it is necessary to extract the behavior pattern from an algorithm in units of one iteration, which we call an operator of the algorithm. Since the algorithms considered are population-based, the core operator is the individual (population) evolution operator, which represents an evolutionary behavior from the parent individuals (population) to the child individuals (population). Some algorithms can be further decomposed to extract auxiliary operators, which are certain technological elements of an algorithm, such as a parameter control strategy or constraint processing technology, and can also be used as combination elements. This paper categorizes the behaviors into the following six types of operator:


(1) Parent individual selection operator: selects the parent individuals that participate in evolution.

(2) Individual (population) evolution operator: generates the offspring from the parent individuals (population); this is the core operator of an algorithm.

(3) Population selection operator: selects the individuals that enter the next generation from the parent and offspring populations.

(4) Parameter control operator: most algorithms have parameters that are adjusted by various strategies, such as random values, or values descending linearly with the evolution generation. Some studies proposed adaptive parameter control strategies. Some of these strategies are general and can be extracted as operators.

(5) Constraint processing operator: the constraint handling technology of an algorithm, which can often be reused across algorithms.

(6) Evaluation operator: evaluates the population or the behavior according to an evaluation model.


The multi-behavior combination can be an ensemble of multiple individual evolution operators, or multiple auxiliary operators, or an ensemble of multi-type multiple operators.

#### *2.3. Combination Strategy*

In addition to the three combination modes, a combination of multiple behaviors also needs corresponding combination strategies. For an individual level combination, the behavior of each individual must be determined; for an iteration level combination, the behavior of each evolutionary generation must be determined; and a population level combination should determine the behavior of each sub-population and establish a sub-population communication mechanism. In this paper, we adopt the reverse view and take the behavior as the core. For an ensemble of behaviors, if an individual level combination is used, an individual group should be determined for each behavior, including group size and grouping method; if an iteration level combination is used, the evolutionary generations should be determined for each behavior; and if a population level combination is used, a sub-population (which is also an individual group) should be determined for each behavior, including its communication mechanism. Thus, each behavior has its individual group, evolutionary generations, and change strategy, and thereby the behavior combination can be unified in a general framework. The combination of multiple behaviors can be performed in a competitive or collaborative manner.

#### 2.3.1. Collaborative Strategy

The collaborative strategy means that the related behaviors have a cooperative relationship in which they support each other. The collaborative relationship is generally pre-customized, and the goal of collaboration is achieved through the different search characteristics of the multiple behaviors. Therefore, according to the behavior characteristics, the individual groups, generations, or communication mechanism can be defined under a specific combination mode. The communication mechanism between individual groups can be re-grouping (more suitable for individual level combination), or migrating and mixing individuals between different individual groups (more suitable for population level combination).

#### 2.3.2. Competitive Strategy

The competitive strategy means that multiple behaviors compete with each other to decide a winner, and the winner acquires more resources as the reward. The competitive resource of the individual level combination is the number of individuals, and the resource of the iteration level combination is the execution probability. The population level combination is generally suited to the collaborative strategy, but the sub-populations can also compete for the amount of communication resources.

Although the resources competed for are different, they can be unified. We define the resource for competition as a probability, that is, the resource occupancy rate of each behavior. It is defined as a vector *P* = {*P*1, *P*2, ... , *Pm*}, where *m* is the number of behaviors and *Pi* is the resource occupancy of the *i*th behavior, 1 ≤ *i* ≤ *m*. The basis of the competitive strategy is the evaluation model of behavior. Define the evaluation model data for all behaviors as a vector *soState* = {*soState*1, *soState*2, ... , *soStatem*}, where *soStatei* is the model score of the *i*th behavior, and *soStatebest* is the score of the optimal behavior.

The paper gives two competitive strategies: a full-competitive strategy and a semi-competitive strategy. The full-competitive strategy maximizes the resources of the excellent behavior set. The definition of the excellent behavior set is shown in Equation (1), and *Pi* is adjusted according to Equations (2) and (3) [50], where *rate* is the maximum adjustment ratio.

$$\text{best} = \left\{ i \,\middle|\, soState\_i > \text{avg}(soState) \wedge \frac{soState\_i - \text{avg}(soState)}{soState\_{best} - \text{avg}(soState)} > 0.5,\ i \in [1, m] \right\} \tag{1}$$

$$\Delta\_i = rate \cdot \frac{soState\_{best} - soState\_i}{soState\_{best}}, \quad i \notin \text{best} \tag{2}$$

$$P\_i = \begin{cases} P\_i + \frac{\sum\_{j \notin \text{best}} \Delta\_j}{|\text{best}|} & \text{if } i \in \text{best} \\ P\_i - \Delta\_i & \text{otherwise} \end{cases} \tag{3}$$
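Under the stated definitions, Equations (1)–(3) can be sketched in Python as follows. The function name and the guard for the degenerate case where all scores are equal are our own additions, not part of the paper's strategy.

```python
def full_competitive_update(P, soState, rate):
    """Full-competitive resource update, Equations (1)-(3).

    P: current resource occupancy of each behavior (sums to 1).
    soState: evaluation-model score of each behavior (higher is better).
    rate: maximum adjustment ratio.
    """
    m = len(P)
    avg = sum(soState) / m
    best_score = max(soState)
    if best_score == avg:          # all scores equal: nothing to adjust
        return list(P)
    # Equation (1): behaviors clearly above average form the "best" set.
    best = {i for i in range(m)
            if soState[i] > avg
            and (soState[i] - avg) / (best_score - avg) > 0.5}
    # Equation (2): resource share taken from each non-best behavior.
    delta = [0.0 if i in best
             else rate * (best_score - soState[i]) / best_score
             for i in range(m)]
    # Equation (3): redistribute the taken share equally among the best set.
    bonus = sum(delta) / len(best)
    return [P[i] + bonus if i in best else P[i] - delta[i] for i in range(m)]
```

Note that the update conserves total occupancy: whatever is subtracted from the non-best behaviors is added, in equal shares, to the behaviors in the best set.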

The semi-competitive strategy defines a base ratio vector *Pbase* = {*Pbase*,1, *Pbase*,2, ... , *Pbase,m*}, where *Pbase,i* defines the resource ratio that the *i*th behavior occupies at least. The remaining ratio is assigned proportionally based on the model scores of the behaviors. The adjustment method of *Pi* is as follows:

$$P\_i = P\_{\text{base},i} + \left(1 - \sum\_{j=1}^{m} P\_{\text{base},j}\right) \cdot \frac{soState\_i}{\sum\_{j=1}^{m} soState\_j} \tag{4}$$
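Equation (4) is simpler; a minimal sketch (the function name is ours):

```python
def semi_competitive_update(P_base, soState):
    """Semi-competitive resource update, Equation (4).

    Each behavior keeps at least its base ratio P_base[i]; the remaining
    share (1 - sum of base ratios) is split in proportion to the scores.
    """
    free = 1.0 - sum(P_base)
    total = sum(soState)
    return [pb + free * s / total for pb, s in zip(P_base, soState)]
```

Unlike the full-competitive strategy, no behavior can ever fall below its base ratio, so every behavior keeps some chance to execute.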

#### *2.4. Evaluation Model*

Two kinds of evaluation may be needed in the process of behavior combination. One is evaluation of the population, in order to understand the evolution state and select the appropriate algorithm. The other is evaluation of behavior, since the optimization performance of each behavior needs to be known for the competitive strategy; it may be necessary to decide which behavior is the winner based on the relative performance of multiple behaviors. Both types of evaluation are judged according to the population: various evaluation mechanisms are established using the relevant information of the parent population and its offspring population. We divide the evaluation mechanisms into two categories:

#### 2.4.1. Fitness Based Evaluation

The evaluation mechanism based on fitness is a more intuitive evaluation mechanism. Obtaining the optimal fitness is the goal of the algorithm. Therefore, the fitness evaluation mechanism is suitable for evaluating the performance of the behavior.

(a) Success rate (*SR*). The *SR* is the rate at which offspring successfully improve upon the parent population. It is defined as follows:

$$SR = \frac{|SI|}{NP} \tag{5}$$

where *NP* is the number of individuals in the population, *SI* is the set of offspring individuals that enter the next generation, and |*SI*| is the number of individuals in *SI*.

(b) Successful individual fitness improvement mean (*SFIM*). The *SFIM* is the mean fitness improvement over all successful individuals. If the population selection operator is the binary selection used by DE, it can be computed by Equations (6) and (7). If the population selection operator is not binary selection, Equation (8) can be used.

$$\Delta f\_k = \left| f(\mathbf{u}\_{k,g}) - f(\mathbf{x}\_{k,g}) \right| \tag{6}$$

$$SFIM = \frac{\sum\_{k=1}^{|SI|} \Delta f\_k}{|SI|} \tag{7}$$

$$SFIM = \frac{\left| \sum\_{k=1}^{|PI|} f(\mathbf{x}\_{k,g}) - \sum\_{k=1}^{|SI|} f(\mathbf{u}\_{k,g}) \right|}{|SI|} \tag{8}$$

where Δ*fk* is the fitness improvement of the offspring individual *uk,g* over the parent individual *xk,g*, *f* is the fitness function (objective function), *SI* has the same meaning as in Equation (5), and *PI* is the set of parent individuals being replaced.
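For illustration, the two fitness-based measures can be sketched as follows, assuming a minimization problem and binary (one-to-one greedy) selection; the function names are our own.

```python
def success_rate(num_success, NP):
    """Equation (5): |SI| / NP."""
    return num_success / NP

def sfim_binary(parent_f, child_f):
    """Equations (6)-(7): mean improvement over the successful offspring.

    parent_f, child_f: objective values of parents x_{k,g} and children
    u_{k,g}; minimization assumed, so a child succeeds when its value
    is lower than its parent's.
    """
    improvements = [pf - cf for pf, cf in zip(parent_f, child_f) if cf < pf]
    if not improvements:
        return 0.0              # no successful individual this generation
    return sum(improvements) / len(improvements)
```

For example, with parent objectives [5, 4, 3] and child objectives [3, 5, 2], two of three children succeed, so *SR* = 2/3 and *SFIM* = (2 + 1)/2 = 1.5.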

#### 2.4.2. Evaluation Based on Other Information

Although the evaluation mechanism based on fitness is the most direct and effective method, it ignores other state information of the population. Considering the population performing the optimization as a stochastic system, information entropy can be used to evaluate the state of the population. Information entropy can be seen as a measure of system ordering, and changes in entropy can be used to observe changes in the population. We therefore also adopt an evaluation model based on an information entropy measurement in the objective space. The information entropy of discrete information is as follows:

$$H = -\sum\_{i=1}^{n} p\_i \log\_2 p\_i \tag{9}$$

where *n* represents the number of discrete random variables, *pi* represents the probability of the *i*th discrete random variable, *pi* ∈ [0,1], and *p*1 + *p*2 + ... + *pn* = 1.

Let *NP* be the number of individuals in the population, and let the minimum and maximum objective values of the population be *fmin* and *fmax*. Divide [*fmin*, *fmax*] in the objective space into *NP* sub-domains as the *NP* discrete random variables. Then *pi* is the fraction of individuals in the *i*th sub-domain:

$$p\_i = c\_i / NP \tag{10}$$

where *ci* represents the number of individuals with objective values in the *i*th sub-domain. *H* is the information entropy of the population, representing the population diversity.
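The entropy measure of Equations (9)–(10) can be sketched as below; clamping the maximum objective value into the last sub-domain is our implementation choice.

```python
import math

def population_entropy(objectives):
    """Equations (9)-(10): entropy of the objective-value distribution.

    [fmin, fmax] is split into NP equal sub-domains; p_i is the fraction
    of individuals whose objective value falls in the i-th sub-domain.
    """
    NP = len(objectives)
    fmin, fmax = min(objectives), max(objectives)
    if fmax == fmin:
        return 0.0                  # identical individuals: zero diversity
    width = (fmax - fmin) / NP
    counts = [0] * NP
    for f in objectives:
        i = min(int((f - fmin) / width), NP - 1)  # keep fmax in the last bin
        counts[i] += 1
    return -sum((c / NP) * math.log2(c / NP) for c in counts if c > 0)
```

When every sub-domain holds exactly one individual, *H* reaches its maximum log2(*NP*); when all individuals coincide, *H* is 0.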

According to the multi-behavior combination model, a multi-behavior combination algorithm can be defined according to Table 1.


**Table 1.** Multi-behavior combination algorithm attributes.

#### **3. Agent-based General Algorithm Combination Framework**

In order to customize the multi-behavior combination algorithm conveniently, the model needs to be designed as a framework. In this section we construct an agent-based general framework.

First, an operator component library is built for the operators that represent the various candidate behaviors. The core of the framework is the search agent, which represents a behavior at runtime. A set of multiple behaviors is expressed as a list of search agents, and the operators that implement a behavior are encapsulated in a search agent. Each search agent is autonomous and can perform its search independently in a centralized or distributed environment. The combination of algorithms or operator components is thus transformed into a combination of search agents. Multi-behavior combination is modeled by a combination agent, which describes the combination scheme and determines the execution logic in order to schedule the search agents. A combination component library includes the related combination modes and strategies. A multi-agent system environment is defined for information exchange among search agents, including the entire population information, the general settings of the algorithm, and an information communication region. The framework structure is shown in Figure 2.

**Figure 2.** Agent-based framework structure.

#### *3.1. Operator Component Library*

The operator components in the operator component library are designed anew or extracted from existing algorithms. The six types of operators summarized in Section 2.2 can be embedded in the framework, and a uniform interface is defined for each type of operator. Each operator takes one iteration as its extraction unit. A data structure corresponding to an operator component should be constructed if some information needs to be maintained between two iterations of the operator. Most algorithms have the same structure of evolution operation: first parent individual selection, then individual evolution to generate offspring, and finally population selection. Such algorithms can be decomposed into the three corresponding types of operator components if these are general and reusable, and the algorithms can then be assembled from the corresponding operator components at runtime. There are also algorithms that cannot be decomposed into general components; in that case, the entire evolution operation of one iteration can be extracted into an independent individual evolution operator component.
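A uniform interface of this kind might look as follows in Python; the class names, the `apply(population, context)` signature, and the tuple representation of individuals are all illustrative assumptions, with minimization assumed.

```python
class OperatorComponent:
    """Base interface: one iteration's worth of a behavior. Subclasses keep
    any state that must survive between iterations in instance attributes."""
    def apply(self, population, context):
        raise NotImplementedError

class GreedySelection(OperatorComponent):
    """A population selection component: binary greedy selection as used by
    DE -- keep whichever of parent/offspring has the lower objective value.
    Individuals are (solution, objective) tuples here."""
    def apply(self, population, context):
        offspring = context["offspring"]
        return [child if child[1] < parent[1] else parent
                for parent, child in zip(population, offspring)]

# The library maps component IDs to reusable component instances.
library = {"select/greedy": GreedySelection()}
```

A search agent would then look components up by ID and call `apply` once per iteration, without knowing which concrete algorithm they came from.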

#### *3.2. Search Agent*

Each search agent represents a behavior that can search independently; therefore, it is an iterative unit of a complete algorithm. It contains the information needed by a complete algorithm and is defined as a five-tuple:

SAinfo = (AlInfo, GroupInfo, RuntimeInfo, OffspringInfo, ModelInfo)
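In Python, the five-tuple could be held in a simple record like the one below; the field contents are our guesses at what each part holds, based on how the tuple is used in Sections 3.3 and 3.4.

```python
from dataclasses import dataclass, field

@dataclass
class SAInfo:
    al_info: dict                                       # operator component IDs assembling the algorithm
    group_info: dict                                    # which individuals this agent's behavior acts on
    runtime_info: dict = field(default_factory=dict)    # state kept between iterations
    offspring_info: list = field(default_factory=list)  # offspring produced by the last step
    model_info: dict = field(default_factory=dict)      # behavior-evaluation model data

# Example: a DE-flavored search agent acting on the first 30 individuals.
agent = SAInfo(al_info={"evolve": "DE/rand/1", "select": "greedy"},
               group_info={"indices": list(range(30))})
```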


A single-step search behavior is defined to enable the search agent to run step by step. The algorithm is identical for all search agents; the difference lies in the operators and the group. The algorithm is shown in Algorithm 1.


#### *3.3. Combination Agent*

The combination agent includes the data structure for describing the combination scheme and the algorithm for executing the combination mechanism. Each search agent represents a specific behavior. The combination scheme defines the combination mode and combination strategy for the multiple behaviors (multiple search agents). In the framework, we predefine three basic combination modes: {individual level combination, iteration level combination, population level combination}. Each combination mode needs to define its population grouping strategy and execution strategy, which determine the individual group and execution manner for all search agents in that combination mode. The predefined strategies are shown in Table 2 and are supported by the combination component library, which includes the three combination mode components, grouping components, competitive strategy components, and group communication components. The flow charts of the three combination mode components in one iteration are shown in Figure 3. It can be seen that the other strategy components are called by the combination mode components. The combination agent executes the corresponding combination mode component according to the combination scheme it describes.



**Figure 3.** The flow charts of three combination mode components in one iteration: (**a**) Individual level combination; (**b**) Iteration level combination; (**c**) Population level combination.

#### *3.4. Algorithm Reuse and Algorithm Customization under the Framework*

Most population-based optimization algorithms can be integrated into the framework. In this paper, an algorithm that adopts the same behavior in all three dimensions is called a single-behavior algorithm, and an algorithm that adopts different behaviors in any dimension is called a multi-behavior combination algorithm. Most existing algorithms can be classified as one or the other.

Both single-behavior and multi-behavior algorithms can be decomposed to extract the six types of technical-unit operators where present. For multi-behavior combination algorithms, the combination modes adopted generally belong to the three basic combination modes; however, the combination strategies may vary. We predefined some combination strategies, which may also be extracted as components from current multi-behavior algorithms and reused by the combination agent.

By analyzing the behaviors and combination methods of existing algorithms, operator components and combination strategy components are extracted, and the existing algorithms can be recombined in the framework. More importantly, new algorithms can be customized under the framework.

Customizing an algorithm first requires determining the optimization idea of the multi-behavior combination, selecting the corresponding operators, and combining the related operators into multiple search agents. Then, the combination scheme is designed, including the combination mode, combination strategy, grouping strategy, and some fixed parameters. The algorithm can then be executed by the framework. The algorithm of the framework is given in Algorithm 2.

#### **Algorithm 2.** Algorithm of the framework.


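Since Algorithm 2 is not reproduced here, the outer loop below is only an illustrative reading of this section: the framework repeatedly lets the combination agent apply the combination scheme and then steps the scheduled search agents. The stub classes exist only to make the sketch runnable.

```python
def run_framework(combination_agent, search_agents, env, max_iterations):
    """Illustrative outer loop of the framework (not the paper's Algorithm 2)."""
    for _ in range(max_iterations):
        # Apply the combination mode/strategy: grouping, order, resources ...
        combination_agent.coordinate(search_agents, env)
        # ... then each scheduled search agent performs one single-step search.
        for agent in combination_agent.scheduled(search_agents):
            agent.step(env)
    return env["best"]

class SequentialCombination:
    """Stub combination agent: iteration level, fixed sequential order."""
    def coordinate(self, agents, env):
        pass                    # nothing to adapt in this stub
    def scheduled(self, agents):
        return agents           # execute all agents, one by one

class DecrementAgent:
    """Stub search agent whose 'search step' just improves a shared value."""
    def step(self, env):
        env["best"] -= 1
```

Swapping `SequentialCombination` for a competitive scheduler would change which agents run each iteration without touching the loop itself.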
To demonstrate the working principle of the framework, we give examples of introducing DE and its variants jDE [51] and SHADE [52], and the teaching-learning-based optimization algorithm (TLBO) [29], into the framework:

#### 3.4.1. DE, jDE and SHADE

First, the individual evolution operator, the parent individual selection operator, the population selection operator, and the parameter control operator can be extracted from the three algorithms as operator components. DE and jDE have the same individual evolution operator: the DE/rand/1 mutation strategy with the binomial crossover strategy. The individual evolution operator of SHADE adopts the DE/current-to-pBest/1 mutation strategy with the binomial crossover strategy. DE and jDE also have the same parent individual selection operator: random selection. SHADE has two types of parent individuals: the best parent individual and the ordinary parent individual. The best parent individual selection operator adopts random selection among the top *p*% individuals (sorted by fitness), and the ordinary parent individual selection operator adopts random selection from the population and the archive of failed individuals. For the population selection operator, all three algorithms use binary greedy selection. For the parameter control operator, DE uses a fixed-value strategy, while jDE and SHADE use adaptive parameter control strategies [51,52], which can be reused and are suitable to be extracted as operator components.

The three algorithms are all single-behavior algorithms. When one of them is used, the related operator components are selected and assembled into a search agent. For example, the SHADE algorithm has four operator components. The identifier (ID) of each component is specified in the AlInfo structure of the search agent, and Algorithm 1 automatically calls the corresponding component for an iterative execution. The predefined execution order of the operator components is: (1) execute the first call of the parameter control component and generate the parameter value set; (2) execute the best parent individual selection component to generate the best parent individual set; (3) execute the ordinary parent individual selection component to generate the other parent individual set; (4) execute the individual evolution component to generate the offspring; (5) execute the population selection component to generate the population of the next generation; (6) execute the second call of the parameter control component to generate the parameter strategy model (needed by adaptive parameter control strategies); and (7) execute the algorithm evaluation component and generate the algorithm evaluation model. If any step is not required, its component ID is set to 0 and the step is skipped. The search agent is called by the framework iteratively; thus, the SHADE algorithm can be supported by the framework.
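The seven-step execution order, with the skip-on-zero convention, can be dispatched generically; the step names and the callable representation of components below are illustrative, not the framework's actual identifiers.

```python
# Predefined execution order of operator components (one iteration).
ORDER = ["param_first_call", "best_parent_select", "parent_select",
         "evolve", "pop_select", "param_second_call", "evaluate"]

def run_one_iteration(al_info, components, state):
    """Dispatch one iteration: al_info maps step names to component IDs,
    components maps IDs to callables; ID 0 (or an absent step) is skipped."""
    for step in ORDER:
        cid = al_info.get(step, 0)
        if cid == 0:
            continue            # step not required by this algorithm
        state = components[cid](state)
    return state
```

A plain DE agent, for instance, would leave the best-parent-selection and second-parameter-call entries at 0, and those steps would simply be skipped.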

#### 3.4.2. Teaching-Learning-Based Optimization (TLBO)

The TLBO algorithm is a multi-behavior combination algorithm. It contains two behaviors at the iteration level: a teaching behavior and a learning behavior. It can be introduced into the framework in four steps: (1) extract the operators for each behavior; (2) select the operator components and construct a search agent for each behavior, described in the AlInfo of each search agent; (3) define the combination method of the two behaviors and describe it in the data structure of the combination agent. For TLBO, the combination mode is an iteration level combination, which does not need to divide the population among the search agents; it only assigns individuals according to the GroupInfo of each search agent, and the group of both behaviors in TLBO is the entire population, which is described by the GroupInfo. The multi-behavior combination strategy is a collaborative strategy, and the search agents are executed in a predefined fixed order, one by one. (4) The framework executes the combination agent iteratively, and the combination agent calls the component for iteration level combination, whose algorithm flow is shown in Figure 3b.

Most algorithms adopt one combination mode for multiple behaviors and can be customized with simple settings under the framework. However, a few studies explore more complex combinations, which can be seen as further combinations of the three basic combination modes. The components of the framework can also be reused to construct complex multi-level combination algorithms; however, the combination agent may need some modification.

#### **4. Economic Load Dispatch Model and Algorithm Customizing**

#### *4.1. The Model of Economic Load Dispatch Problem*

Economic load dispatch is a typical optimization problem in power systems. The goal is to minimize fuel costs by rationally arranging the active power output of each generating unit in a power plant or power system on the premise of meeting system load demand and operational constraints.

#### 4.1.1. Problem Definition

The objective function of the optimization problem is as follows:

$$\min F = \sum\_{i=1}^{N\_g} F\_i(P\_i) \tag{11}$$

where *F* is the total fuel cost, *Ng* is the total number of online generating units of the system, *Pi* is the power output (in MW) of the *i*th generator, and *Fi(Pi)* is the cost function of the *i*th generator as in Equation (12); if the valve point effect is considered, *Fi(Pi)* is described by Equation (13).

$$F\_i(P\_i) = a\_i P\_i^2 + b\_i P\_i + c\_i \tag{12}$$

$$F\_i(P\_i) = a\_i P\_i^2 + b\_i P\_i + c\_i + \left| e\_i \sin\left( f\_i \left( P\_i^{\min} - P\_i \right) \right) \right| \tag{13}$$

where *ai*, *bi*, *ci*, *ei*, and *fi* are the cost coefficients of the *i*th generator.

In addition to the objective function, the problem needs to meet some constraints:

#### 4.1.2. Generator Output Constraint

$$P\_i^{\text{min}} \le P\_i \le P\_i^{\text{max}} \tag{14}$$

where *Pi min* and *Pi max* are lower and upper bounds for power output of the *i*th generator.

4.1.3. Power Balance Constraints

$$\sum\_{i=1}^{N\_g} P\_i = P\_D + P\_L \tag{15}$$

where *PD* is the total system loads and *PL* is the total power loss in all transmission lines. *PL* can be obtained by B-coefficient method as Equation (16).

$$P\_L = \sum\_{i=1}^{N\_g} \sum\_{j=1}^{N\_g} P\_i B\_{ij} P\_j + \sum\_{i=1}^{N\_g} B\_{0i} P\_i + B\_{00} \tag{16}$$

where *Bij*, *B0i*, and *B00* are coefficients of the power loss matrix.
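A minimal sketch of the B-coefficient loss calculation of Equation (16), using plain Python lists; the coefficient values in the test data are purely illustrative:

```python
def transmission_loss(P, B, B0, B00):
    """Equation (16): P_L = P^T B P + B0 . P + B00, for outputs P (MW),
    loss matrix B, linear coefficients B0, and constant B00."""
    n = len(P)
    quad = sum(P[i] * B[i][j] * P[j] for i in range(n) for j in range(n))
    lin = sum(B0[i] * P[i] for i in range(n))
    return quad + lin + B00
```

For an *Ng*-unit system, `B` is the *Ng* × *Ng* loss matrix; in practice the coefficients come from the test-system data.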

#### 4.1.4. Ramp Rate Limit Constraints

$$P\_i - P\_i^{t-1} \le UR\_i \; , \; P\_i^{t-1} - P\_i \le DR\_i \tag{17}$$

$$\max\left( P\_i^{\min}, P\_i^{t-1} - DR\_i \right) \le P\_i \le \min\left( P\_i^{\max}, P\_i^{t-1} + UR\_i \right) \tag{18}$$

where *Pi t*−*1* is the previous power output, and *URi* and *DRi* are the up-ramp and down-ramp limits of the *i*th generator.
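Equation (18) simply intersects the static limits with the ramp-reachable interval; a one-line sketch (all numbers in the test are illustrative):

```python
def ramp_limited_bounds(p_min, p_max, p_prev, UR, DR):
    """Equation (18): effective bounds on P_i given the previous output
    p_prev and the up-/down-ramp limits UR and DR."""
    lo = max(p_min, p_prev - DR)
    hi = min(p_max, p_prev + UR)
    return lo, hi
```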

#### 4.1.5. Prohibited Operating Zones Constraints

The generating units may have certain zones where operation is restricted. Consequently, the feasible operating zones are discontinuities as follows:

$$\begin{cases} P\_i^{\min} \le P\_i \le P\_{i,1}^{pzl} \\\ P\_{i,j-1}^{pzu} \le P\_i \le P\_{i,j}^{pzl}, \quad j = 2, \ldots, n\_i \\\ P\_{i,n\_i}^{pzu} \le P\_i \le P\_i^{\max} \end{cases} \tag{19}$$

where *ni* is the total number of prohibited operating zones of the *i*th generating unit, and *Pi,jpzl* and *Pi,jpzu* are the lower and upper limits of the *j*th prohibited zone for the *i*th generating unit, respectively.
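A feasibility test against Equation (19) only needs to verify that the output avoids every prohibited interval; a sketch with illustrative zone data:

```python
def in_feasible_zone(p, p_min, p_max, zones):
    """True if output p respects the static limits and falls outside every
    prohibited zone; zones is a list of (lower, upper) prohibited intervals
    assumed sorted and contained in [p_min, p_max]."""
    if not (p_min <= p <= p_max):
        return False
    return all(not (lo < p < hi) for lo, hi in zones)
```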

#### *4.2. Customizing Multi-Behavior Combination Algorithm*

The algorithm we customized for the ELD problem is a multi-behavior combination algorithm. The behaviors come from the operator component library. At present, we provide only a small number of operators, including some individual evolution operators (mainly the DE algorithm operator and its variants, including the mutation step and crossover step), some parent individual selection operators, some parameter control operators, and a population selection operator (the one adopted by DE) together with its two constraint-handling versions based on the penalty function method [45] and the ε-constraint handling method [45].

The ε-constraint handling method gives the constraints a relaxation scope: if the constraint violation value of an individual is less than ε, the individual is considered feasible. Then, in the population selection operator (a one-to-one choice between a parent individual and its offspring), two feasible individuals are compared and selected according to the objective value; in all other cases, individuals are selected according to the constraint violation degree. The value of ε varies as follows [45]:

$$\varepsilon(0) = v(X\_{\theta}), \qquad \varepsilon(k) = \begin{cases} \varepsilon(0) \left( 1 - \frac{k}{T\_c} \right)^{cp} & 0 < k < T\_c \\\ 0 & k \ge T\_c \end{cases} \tag{20}$$

where ε(0) is the initial ε value, *X*θ is the top θth individual with θ = 0.05 × *NP*, and *v* is the constraint violation function. *cp* and *Tc* are parameters, with *Tc* a specific generation in the iteration.
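A sketch of the ε schedule of Equation (20) and the one-to-one ε-comparison used by the population selection operator (function and parameter names are illustrative):

```python
def epsilon_at(k, eps0, Tc, cp):
    """Equation (20): relaxation level at generation k, dropping to 0 at Tc."""
    return eps0 * (1.0 - k / Tc) ** cp if k < Tc else 0.0

def eps_better(f_a, v_a, f_b, v_b, eps):
    """True if solution a is preferred over b: compare objectives f when both
    violations v are within eps, otherwise compare violation degrees."""
    if v_a <= eps and v_b <= eps:
        return f_a < f_b
    return v_a < v_b
```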

For the multi-behavior algorithm, we provide three behaviors which come from the DE algorithm and its variants, jDE [51], SHADE [52], etc. The three behaviors are constructed with the operators in the operator component library. The design of the behaviors is shown in Table 3. Behavior1 and Behavior2 use similar algorithm operators but differ in the parameter control operator. Behavior2 and Behavior3 use a similar parameter control operator but differ in the algorithm operation. Behavior1 provides randomness and diversity. Behavior2 is a fitness-improvement-instructed search, because its parameter control strategy favors parameter values that bring larger fitness improvement. Behavior3 is an optimal-individual-set-instructed search and is used for gathering in the optimal regions. The combination method of the three behaviors is designed as in Table 4: the three behaviors are executed in a fixed order from Behavior1 to Behavior3 at the iteration level, so the search process is a repeated small loop from diversity exploration to exploitation.




Just like the example in Section 3.4, the three behaviors are organized as search agents. One only needs to describe the operator components designed in Table 3 in the AlInfo of each search agent, and the search agent can then execute these components in order within one iteration. The execution manner of the multiple search agents is determined by the design in Table 4: the combination agent interprets the combination design and executes the search agents in the designed manner. The combination method is similar to that of the TLBO algorithm, but with three behaviors. The behavior combination happens at the iteration level, and the combination strategy is a collaborative strategy that adopts a fixed execution manner: the three behaviors are executed one by one.

The algorithm of this paper is a general constrained optimization algorithm. The optimization objective is Equation (11). *Xi* = {*Pi,*1, *Pi,*2, ... , *Pi,Ng*} is an individual vector and also a solution of the ELD problem, representing the power outputs of all the online generating units. In current population-based intelligent optimization algorithms for solving ELD problems, constraint processing can be divided into two categories. One is to process the individuals that violate the constraints, using problem-specific knowledge to modify the individuals so that they satisfy the constraints as much as possible; however, because the constraints are difficult to satisfy fully, the algorithm still needs to provide constraint handling techniques. The other type performs no special processing and only uses general constraint handling techniques. In this paper, the constraint of Equation (14) is processed as a boundary. When the transmission loss is not considered, the equality constraint of Equation (15) is handled simply by recomputing the power output of the last generating unit, and the other inequality constraints are not processed. When the transmission loss is considered, the constraint relationship is complicated; we make no special treatment and process the constraints only through the ε-constraint handling method. The tolerance of the equality constraint is set to 10−4, which can be adjusted according to the problem. To facilitate comparison, the algorithm proposed in this paper is named the multi-behavior combination differential evolution algorithm (MBC-DE).
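The lossless equality-constraint repair described above can be sketched as follows (variable names are illustrative; if the recomputed output violates the last unit's limits, it is clipped and the residual mismatch is left to the constraint handling):

```python
def repair_last_unit(P, P_D, p_min_last, p_max_last):
    """Recompute the last unit's output so that sum(P) == P_D (lossless case),
    then clip it to the unit's limits."""
    P = list(P)
    P[-1] = P_D - sum(P[:-1])
    P[-1] = min(max(P[-1], p_min_last), p_max_last)  # clip to [p_min, p_max]
    return P
```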

#### **5. Experiment and Discussion**

In this section, the proposed algorithm and framework are applied to ELD problems. The ELD problem is a non-convex, nonlinear, discontinuous constrained optimization problem. The equality constraint involved is the power balance constraint; the inequality constraints are the ramp rate limit constraints and prohibited operating zone constraints. The valve point effect may or may not be considered in the objective function, and transmission losses may or may not be considered in the equality constraint. The transmission losses are calculated by the B-coefficient method. We use four test systems of different difficulty levels. For each test case, 20 independent runs are performed, and the best, worst, and mean fuel costs are recorded and compared with other algorithms. Each behavior of the multi-behavior algorithm is also tested independently in order to analyze the effects of the multi-behavior combination; the three single-behavior algorithms are named Behavior1~Behavior3. The population size is set to 50 for all cases, and the maximum number of iterations is 2000 generations.

(1) Test Case 1: Test case 1 is a small system comprising six generators; the ramp rate limits, prohibited operating zones, and transmission losses are all considered, without the valve point loading effect. The system data are from [2,3]. The load demand is 1263 MW.

Table 5 gives the best output for this case; it can be seen that the equality constraint error is less than 10−4, and all the inequality constraints are satisfied (see the system data in [2]). Table 6 gives the comparison between the single-behavior and multi-behavior algorithms. Among the three behaviors, Behavior3 performs best, and MBC-DE inherits its characteristics and obtains a smaller standard deviation. Table 7 gives the comparison between MBC-DE and other algorithms. From the data in the table, MBC-DE goes beyond some algorithms, but some other algorithms are better than MBC-DE. In test case 1, MBC-DE falls into a local extremum: the three sub-algorithms of MBC-DE are all differential evolution with similar search trajectories, so no behavior can help MBC-DE jump out of this local extremum. This also gives us confidence in algorithm combination research, by introducing new algorithms or algorithm components into the framework to enrich the algorithm behaviors.

**Table 5.** Best outputs for test case 1 with PD = 1263 MW (6-units system).



**Table 6.** Algorithm comparison between multi-behavior and single-behavior for test case 1.

**Table 7.** Algorithm comparison between MBC\_DE and other algorithms for test case 1.


(2) Test Case 2: Test case 2 is a 15-unit system. The ramp rate limits and transmission losses are considered, and four generators have prohibited operating zones. The valve point loading effect is neglected. The system data are also from [2,3]. The load demand of the system is 2630 MW.

Table 8 gives the best output of MBC-DE for this case; it can be seen that the equality constraint error is also less than 10−4, and all the inequality constraints are satisfied (see the system data in [2]). Table 9 gives the comparison between the single-behavior and multi-behavior algorithms. Among the three behaviors, Behavior3 still performs best, and MBC-DE is better than Behavior3 in the mean and worst values. We believe this is due to the combination of multiple behaviors, although Behavior1 and Behavior2 are worse than Behavior3. Table 10 shows the comparison between MBC-DE and other algorithms. The data in the table show that the algorithm we proposed achieved better results than the others in case 2.

**Table 8.** Best outputs for test case 2 with PD = 2630 MW (15-units system).



**Table 9.** Algorithms comparison between multi-behavior and single-behavior for test case 2.

**Table 10.** Algorithms comparison between MBC\_DE and other algorithms for test case 2.


(3) Test Case 3: Test case 3 is a 40-unit system with the valve point effect considered. The ramp rate limits and transmission losses are neglected. The load demand is 10,500 MW. The system data are from [4].

Table 11 gives the best output of MBC-DE for this case; it can be seen that the equality constraint is satisfied. Table 12 gives the comparison between the single-behavior and multi-behavior algorithms. Among the three behaviors, Behavior1 and Behavior2 both perform well, and MBC-DE is better than both. We think Behavior3 acts in a supporting role, although it works poorly on its own. Table 13 gives the comparison between MBC-DE and other algorithms. The data in the table show that the algorithm we proposed achieved better results: for the best value, MBC-DE is better than the others, but for the mean value, MDE and DE/BBO are better.

**Table 11.** Best outputs for test case 3 with PD = 10,500 MW (40-units system).



**Table 12.** Algorithm comparison between multi-behavior and single-behavior for test case 3.

**Table 13.** Algorithm comparison between MBC\_DE and other algorithms for test case 3.


(4) Test Case 4: Test case 4 is a large-scale power system with 140 generators. The valve point effect is considered, while the ramp rate limits and transmission losses are neglected. The load demand is 49,342 MW. The system data are from [59].

Table 14 gives the best output of MBC-DE for this case, and the equality constraint is satisfied. Table 15 gives the comparison between the single-behavior and multi-behavior algorithms. Among the three behaviors, Behavior3 obtains both the best and the worst results, which means the algorithm fluctuates widely and is unstable. MBC-DE improves both the optimal value and the stability, which represents the effect of the algorithm combination. Table 16 gives the comparison between MBC-DE and other algorithms. The data in the table show that the algorithm we proposed achieved better results.


**Table 14.** Best outputs for test case 4 with PD = 49,342 MW (140-units system).



**Table 15.** Algorithms comparison between multi-behavior and single-behavior for test case 4.


**Table 16.** Algorithms comparison between MBC\_DE and other algorithms for test case 4.


Implementing the algorithm in the form of a framework may lose some efficiency. The reason is that, in order to achieve generality and easy assembly, the various technical units need to be designed as components, which are implemented as functions in Matlab, so the overhead of function calls is incurred. This gap can be reduced through programming technology. In fact, the running time of the algorithm is less than 30 s in the Matlab R2018a environment on a 1.99 GHz, 16 GB RAM notebook computer. This time is feasible for some online systems. If the system has strict real-time requirements, the framework can also be used as a debugging tool to determine the algorithm, and code refactoring can then be performed. The algorithm we proposed is not restricted by the constraint form and has a certain versatility.

#### **6. Conclusions**

Through the experiments in this paper, we can see that the performance of a population-based intelligent optimization algorithm can be improved by an appropriate multi-behavior combination. A single behavior has a specific search trajectory, so it easily becomes trapped in the same local extremum. Multiple behaviors have different search trajectories and lead to more diversity, so the combination of multiple behaviors is more likely to suit complex optimization problems. The ELD problem has a complex solution space caused by multiple constraints, and algorithms are prone to becoming trapped in local extrema. From the related research, it can be seen that different algorithms have gradually improved the optimization results, which also shows that previous algorithms had fallen into local extrema. Therefore, combining the search characteristics of different algorithms makes it possible to obtain good results. The algorithm in this paper is a combination of three DE variants. From the test results, the proposed algorithm with multiple behaviors is better than any single behavior it uses. However, the sub-algorithms all belong to the DE algorithm family; their search trajectories are thus similar, the results are improved only a little, and some studies have achieved better values. This makes us more convinced of the significance of implementing a multi-behavior combination framework: with more candidate behaviors, the algorithm may be improved further. Therefore, our follow-up work is to introduce more algorithms into the framework to enrich the candidate behaviors. Another research opportunity involves improving the multi-behavior combination strategy.

Multi-behavior combination is an important research idea in current intelligent optimization algorithms. This article hopes to introduce this idea into practical applications. Different from traditional research, the focus of this paper is to realize the idea as a general framework. Various algorithm components can be extracted from all kinds of algorithms and introduced into the framework; by designing the behaviors and their combination method, a new algorithm can be assembled. This brings convenience to the customization of new algorithms and to algorithm combination research more generally. Although the algorithm is tailored to solve the ELD problem, the framework can obviously be applied to other practical problems with simple customization.

**Author Contributions:** J.Z. did the programming and writing; Z.D. instructed the idea and writing.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **An Improved DA-PSO Optimization Approach for Unit Commitment Problem**

#### **Sirote Khunkitti 1, Neville R. Watson 2, Rongrit Chatthaworn 1, Suttichai Premrudeepreechacharn 3 and Apirat Siritaratiwat 1,\***


Received: 18 May 2019; Accepted: 17 June 2019; Published: 18 June 2019

**Abstract:** Solving the Unit Commitment problem is an important step in optimally dispatching the available generation and involves two stages: deciding which generators to commit, and then deciding their power output (economic dispatch). The Unit Commitment problem is a mixed-integer combinatorial optimization problem that traditional optimization techniques struggle to solve, and metaheuristic techniques are better suited to. The dragonfly algorithm (DA) and particle swarm optimization (PSO) are two such metaheuristic techniques, and recently a hybrid (DA-PSO) has been proposed to make use of the best features of both. The original DA-PSO optimization is unable to solve the Unit Commitment problem because this is a mixed-integer optimization problem. However, this paper proposes a new and improved DA-PSO optimization (referred to as iDA-PSO) for solving the unit commitment and economic dispatch problems. The iDA-PSO employs a sigmoid function to find the optimal on/off status of units, which is the mixed-integer part of the Unit Commitment problem. To verify the effectiveness of the iDA-PSO approach, it was tested on four different-sized systems (5-unit, 6-unit, 10-unit, and 26-unit systems). The unit commitment, generation schedule, total generation cost, and computation time were compared with those obtained by other algorithms in the literature. The simulation results show that iDA-PSO is a promising technique and is superior to many other algorithms in the literature.

**Keywords:** dragonfly algorithm; metaheuristic; particle swarm optimization; unit commitment

#### **1. Introduction**

The development of electricity markets has made it even more crucial to determine the optimal generator schedule to minimize costs while meeting load demand. Traditional economic dispatch (ED) does not decide which generators to commit and assumes all generators must be dispatched within their minimum and maximum generation limits. Unit Commitment (UC) is the optimization problem of determining the optimal set of in-service and out-of-service generating units, and their output during the scheduling period, to minimize the total production costs while satisfying all the constraints [1]. The UC problem involves two decision processes: unit scheduling and ED. The unit scheduling process determines the on/off status of the generating units in each hour of the planning horizon while considering the minimum up- and down-times of the units. ED aims to find the optimal power generation of the in-service generating units to meet the load demand and spinning reserve during each hour while maintaining the generating unit limits.

The UC problem has been considered to be a large-scale, non-convex, mixed-integer non-linear combinatorial optimization problem, which makes it difficult to solve. In the past, many methods have been proposed to solve the UC problem [2]. Some of the proposed techniques are: integer programming [3,4], branch-and-bound methods [5], dynamic programming (DP) [6–11], mixed-integer programming [12], Lagrangian relaxation methods (LR) [13,14], and the priority list method [15]. However, each of these methods has some drawbacks when solving the UC problem. For instance, the integer and mixed-integer programming methods, which use linear programming to find the integer part of the solution, require too much memory for large systems, which results in a large computational burden. The computation time of branch-and-bound increases exponentially with system size. Although DP is flexible, it sometimes requires a large amount of computation time when various constraints are considered. The disadvantage of LR is the difficulty of providing optimal solutions when solving complex problems. The priority list method is fast and easy to implement, but it cannot guarantee the quality of the solution, for the same reason as LR.

Apart from these traditional techniques, many metaheuristic algorithms have been applied, such as: genetic algorithms (GA) [16], particle swarm optimization combined with Lagrangian relaxation (PSO-LR) [17], evolutionary programming (EP) [18], the new genetic approach (NGA) [19], local convergence averse binary particle swarm optimization (LCA-PSO) [20], improved binary particle swarm optimization (IPSO) [20], mutation-based particle swarm optimization (MPSO) [20], a two-stage genetic-based technique (TSGA) [21], the integer-coded genetic algorithm (ICGA) [22], the binary-coded genetic algorithm (BCGA) [22], simulated annealing (SA) [23], the seeded memetic algorithm (SM) [23], a hybrid algorithm comprising particle swarm optimization and the grey wolf optimizer (PSO-GWO) [24], and hybrid particle swarm optimization (HPSO) [25]. These have been successfully applied to the UC problem due to their ability to find a near-global solution and deal with large-scale non-linear problems. Moreover, several works have previously studied the scheduling of generation units in small to large power systems.
For example, fuzzy-based particle swarm optimization (FPSO) has been proposed to minimize the operation cost and emissions for ships [26], the conditional value-at-risk (CVaR) method has been introduced to maximize the expected profit of a microgrid operator [27], a hybrid PSO and selective PSO method (PSO&SPSO) has been used to solve a proposed day-ahead operational scheduling framework for reconfigurable microgrids (RMGs) [28], a metaheuristic approach based on PSO has been applied to solve an optimal simultaneous hourly reconfiguration and day-ahead scheduling framework in smart distribution systems [29], a stochastic model for optimal scheduling of security-constrained UC associated with demand response (AC-SUCDR) has been presented in [30], a two-stage stochastic programming model has been developed to minimize the expected cost of a microgrid under different time-based rate programs [31], and fuzzy self-adaptive particle swarm optimization (FSAPSO) has been applied to multi-operation management of a typical microgrid and of a renewable microgrid [32,33].

Many metaheuristic optimization algorithms have been proposed to solve other types of complex optimization problems, such as optimal power flow (OPF). Examples are: the grey wolf optimizer (GWO) [34], the dragonfly algorithm (DA) [35], ant colony optimization (ACO) [36], and artificial bee colony (ABC) [37]. However, these algorithms cannot solve a mixed-integer combinatorial optimization problem in their native form. The hybrid dragonfly algorithm and particle swarm optimization (DA-PSO) is a recent optimization method which has been applied to efficiently solve a complex multi-objective optimization problem [38]. Nevertheless, it is unable to solve mixed-integer combinatorial optimization problems. Therefore, this paper proposes an improved DA-PSO algorithm (iDA-PSO) that can solve the UC problem. This is achieved by applying a sigmoid function to DA-PSO to find the optimal on/off status of generating units, which is the mixed-integer part of the UC problem. The algorithm is tested on four test systems of differing sizes: five-unit, six-unit, ten-unit, and 26-unit generating systems are used to investigate the effectiveness of the proposed approach. The simulation results are compared with other algorithms in the literature.
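The sigmoid mapping from a continuous search variable to an on/off commitment decision is commonly sketched as in binary PSO variants; the rule below is such a generic sketch, and the exact iDA-PSO formulation may differ in detail:

```python
import math
import random

def sigmoid(x):
    """Standard logistic function mapping a real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def to_on_off(x, rng=random.random):
    """Commit the unit (1) when a uniform draw falls below sigmoid(x),
    otherwise leave it off (0) -- a generic binarization rule."""
    return 1 if rng() < sigmoid(x) else 0
```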

#### **2. Formulation of the UC Problem**

The UC problem aims to find the optimal generation schedule, which is gauged by the value of the objective function while satisfying a set of constraints.

#### *2.1. Objective Function*

The objective function is the total production costs over the scheduling horizon, and this must be minimized to obtain the optimal generator schedule. The total production costs consist of fuel cost and start-up cost of the operating units. Therefore, the objective function is:

$$TPC = \sum\_{t=1}^{T} \sum\_{i=1}^{N\_g} \left[ f\_{Cost}(P\_{gi}^t) + ST\_i^t \left( 1 - u\_i^{t-1} \right) \right] u\_i^t \tag{1}$$

where *TPC* is the total production cost (\$), *T* is the total scheduling period, *Ng* is the number of generating units, *Pgi<sup>t</sup>* is the active power generation of the *i*th unit at time *t*, *STi <sup>t</sup>* is the start-up cost of the *i*th unit at time *t*, *ui <sup>t</sup>* is the on or off status of the *i*th unit at time *t*, and *fCost*(*Pgi<sup>t</sup>* ) is the fuel cost function of the *i*th unit for the generator power output *Pgi<sup>t</sup>* which is calculated as:

$$f\_{Cost}(P\_{gi}^t) = a\_i (P\_{gi}^t)^2 + b\_i P\_{gi}^t + c\_i \tag{2}$$

where *ai*, *bi*, and *ci* are the fuel cost coefficients of the *i*th generator.

The start-up cost is the cost of bringing the off-line unit on-line. It depends on the time that the unit has been off-line before starting up which is presented as follows:

$$ST\_i^t = \begin{cases} HSC\_i & \text{if} \quad MDT\_i \le T\_{i,off}^t \le (MDT\_i + CSH\_i) \\\ CSC\_i & \text{if} \quad T\_{i,off}^t > (MDT\_i + CSH\_i) \end{cases} \tag{3}$$

where *HSCi* is the hot start-up cost of the *i*th unit, *CSCi* is the cold start-up cost of the *i*th unit, *MDTi* is the minimum down-time of the *i*th unit, *Tt i,o*ff is the number of off hours of the *i*th unit until time *t* and *CSHi* is the cold start hour of the *i*th unit.
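Equations (1)–(3) can be assembled into a small cost evaluation routine; the following sketch uses illustrative coefficient values and array layouts, and assumes the unit status before the first hour is off:

```python
def fuel_cost(p, a, b, c):
    """Equation (2): quadratic fuel cost of one unit at output p."""
    return a * p * p + b * p + c

def startup_cost(t_off, MDT, CSH, HSC, CSC):
    """Equation (3): hot start while off-time <= MDT + CSH, cold start after.
    Off-times below MDT would violate the minimum down-time and are not expected."""
    return HSC if t_off <= MDT + CSH else CSC

def total_production_cost(P, U, coeff, ST):
    """Equation (1): P[t][i] outputs, U[t][i] on/off status, coeff[i] = (a, b, c),
    ST[t][i] precomputed start-up costs; status before t = 0 is assumed off."""
    cost = 0.0
    for t in range(len(P)):
        for i, (a, b, c) in enumerate(coeff):
            if U[t][i]:
                prev = U[t - 1][i] if t > 0 else 0
                cost += fuel_cost(P[t][i], a, b, c) + ST[t][i] * (1 - prev)
    return cost
```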

#### *2.2. Constraints*

The optimization of the objective function must satisfy constraints imposed by the operational requirements. The set of constraints are as follows:

#### 2.2.1. Power Balance Constraint

$$\sum\_{i=1}^{N\_g} P\_{gi}^t u\_i^t = P\_D^t \tag{4}$$

where *P<sup>t</sup> <sup>D</sup>* is the active power demand at time *t*.

2.2.2. Spinning Reserve Constraint

$$\sum\_{i=1}^{N\_g} P\_{gi(\max)} u\_i^t \ge P\_D^t + P\_R^t \tag{5}$$

where *Pgi*(max) is the maximum active power of the *i*th unit, and *Pt <sup>R</sup>* is the active power reserve at time *t*.

2.2.3. Generation Limit Constraints

$$P\_{gi(\min)} \le P\_{gi}^t \le P\_{gi(\max)} \tag{6}$$

where *Pgi*(min) is the minimum active power of the *i*th unit.

2.2.4. Minimum Up-Time Constraint

$$T\_{i,on}^t \ge MUT\_i \tag{7}$$

where *T<sup>t</sup> i,on* is the number of on hours of the *i*th unit until time *t*, and *MUTi* is the minimum up-time of the *i*th unit.

2.2.5. Minimum Down-time Constraint

$$T\_{i,off}^t \ge MDT\_i \tag{8}$$
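The hourly feasibility checks of Equations (4)–(8) can be sketched as follows (the data layout is illustrative, and the on/off duration counters are assumed to be tracked elsewhere):

```python
def hour_feasible(P, U, Pmax, Pmin, P_D, P_R, T_on, T_off, MUT, MDT):
    """Check Equations (4)-(8) for one hour; all inputs are per-unit lists."""
    balance = abs(sum(p * u for p, u in zip(P, U)) - P_D) < 1e-6        # Eq. (4)
    reserve = sum(pm * u for pm, u in zip(Pmax, U)) >= P_D + P_R        # Eq. (5)
    limits = all(pmin <= p <= pmax                                     # Eq. (6)
                 for p, pmin, pmax, u in zip(P, Pmin, Pmax, U) if u)
    uptime = all(t >= m for t, m, u in zip(T_on, MUT, U) if u)         # Eq. (7)
    downtime = all(t >= m for t, m, u in zip(T_off, MDT, U) if not u)  # Eq. (8)
    return balance and reserve and limits and uptime and downtime
```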

#### **3. Overview of DA-PSO Optimization Algorithm and Related Algorithms**

The DA-PSO optimization algorithm is a hybrid algorithm which originally combined the frameworks of the DA and PSO algorithms. This section describes the formulations and concepts of the related algorithms, including DA, PSO, and DA-PSO.

*3.1. DA*

DA is a metaheuristic method motivated by the flocking behavior of dragonflies in nature [35], and it has been successfully applied to solve complicated optimization problems, such as the OPF problem [39]. There are two main swarming goals of dragonflies: hunting (static swarm) and migrating (dynamic swarm). These can be related to the two main phases of optimization, the exploitation and exploration phases. The behavior of swarms follows three traditional rules [40]. The first rule is separation, which ensures collision avoidance; that is, individuals avoid colliding with others in the neighborhood. The second is alignment, referring to the matching of an individual's velocity to that of other individuals in the neighborhood. The third is cohesion, meaning the attraction of individuals toward the center of mass of the neighborhood. Moreover, since survival is the main purpose of any swarm, all of the population should be attracted to food sources and repelled by the presence of enemies. Accordingly, the position updating of individuals imitates the aforementioned behavior and can be mathematically formulated as follows:

Separation is formulated as follows:

$$S\_i = -\sum\_{j=1}^{N} (X - X\_j) \tag{9}$$

where *Si* is the separation of the *i*th individual, *N* is the number of neighboring individuals, *X* is the current individual position, *Xj* is the position of the *j*th neighboring individual.

Alignment is formulated as:

$$A\_i = \frac{\sum\_{j=1}^{N} V\_j}{N} \tag{10}$$

where *Ai* is the alignment of the *i*th individual, *Vj* is the velocity of the *j*th neighboring individual.

Cohesion is formulated as:

$$C\_i = \frac{\sum\_{j=1}^{N} X\_j}{N} - X \tag{11}$$

where *Ci* is the cohesion of the *i*th individual.

Attraction towards a food source is formulated as:

$$F\_i = X^+ - X \tag{12}$$

where *Fi* is the food source of the *i*th individual, *X*<sup>+</sup> is the food source position.

Repulsion from an enemy is formulated as:

$$E\_i = X^- + X \tag{13}$$

where *Ei* is the enemy repulsion of the *i*th individual and *X*<sup>−</sup> is the position of the enemy.

The velocity of artificial dragonflies can be simulated by considering step vector (**Δ***X*) representing the direction of their movement, which is calculated by the following equation:

$$\Delta X^{t+1} = \left( sS\_i + aA\_i + cC\_i + fF\_i + eE\_i \right) + \omega^t \Delta X^t \tag{14}$$

where **Δ***X* is the step vector of an artificial dragonfly, *t* is the current iteration, *s* is the separation weight, *a* is the alignment weight, *c* is the cohesion weight, *f* is the food factor, *e* is the enemy factor, and ω*<sup>t</sup>* is the inertia weight factor, given by:

$$\omega^t = \omega\_{\max} - \frac{\omega\_{\max} - \omega\_{\min}}{Iter\_{\max}} \times Iter \tag{15}$$

where *Iter* is the current iteration number and *Iter*<sub>max</sub> is the maximum number of iterations.

The position of the artificial dragonflies is another factor to be considered to simulate their movement, which is computed using:

$$X^{t+1} = X^t + \Delta X^{t+1} \tag{16}$$

where *X* is the position of an artificial dragonfly.

In the case of no neighboring solutions, the artificial dragonflies employ a *Levy* flight, a random walk that improves the exploration phase. The position of the dragonflies in this situation is given by:

$$X^{t+1} = X^t + Levy(d) \times X^t \tag{17}$$

where the following equation is used to calculate the *Levy* flight:

$$Levy(d) = 0.01 \times \frac{r\_1 \times \sigma}{|r\_2|^{1/\beta}} \tag{18}$$

where *r1* and *r2* are two uniformly generated random numbers in [0,1], and β is a constant set equal to 1.5 in this work. The parameter σ is calculated using the following equation:

$$\sigma = \left( \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}} \right)^{1/\beta} \tag{19}$$

where Γ(*x*)=(*x* − 1)!
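To make the update rules concrete, the following is a minimal Python sketch of one DA iteration for a single individual, combining Equations (9) through (17). The function names and calling convention are illustrative assumptions, not from the original paper:

```python
import numpy as np
from math import gamma, sin, pi

def levy(d, beta=1.5):
    # Levy flight step for a d-dimensional position, Eqs. (18)-(19)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r1, r2 = np.random.rand(d), np.random.rand(d)
    return 0.01 * r1 * sigma / np.abs(r2) ** (1 / beta)

def da_step(X, dX, neighbors_X, neighbors_V, X_food, X_enemy, s, a, c, f, e, w):
    """One DA update for a single dragonfly, Eqs. (9)-(17)."""
    if len(neighbors_X) > 0:
        S = -np.sum([X - Xj for Xj in neighbors_X], axis=0)  # separation, Eq. (9)
        A = np.mean(neighbors_V, axis=0)                     # alignment, Eq. (10)
        C = np.mean(neighbors_X, axis=0) - X                 # cohesion, Eq. (11)
        F = X_food - X                                       # food attraction, Eq. (12)
        E = X_enemy + X                                      # enemy repulsion, Eq. (13)
        dX = s * S + a * A + c * C + f * F + e * E + w * dX  # step vector, Eq. (14)
        X = X + dX                                           # position update, Eq. (16)
    else:
        X = X + levy(len(X)) * X                             # Levy flight, Eq. (17)
    return X, dX
```

In a full run, the neighborhood, food (best solution) and enemy (worst solution) positions are refreshed every iteration before calling this step.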

*3.2. PSO*

PSO is one of the most well-known population-based swarm intelligence algorithms, and it has been successfully applied to solve many problems in different fields [41–43]. Moreover, PSO has been effectively hybridized with many other optimization algorithms because of its simplicity and fast convergence speed [24,38,44]. PSO was originally proposed by Eberhart and Kennedy in 1995 by mimicking the concepts of bird flocking and fish schooling behaviors [45]. In PSO, each particle flies around a multi-dimensional search space and represents a possible solution to the optimization problem. Each particle comprises a position *Xi* and a velocity *Vi*. The particles are initialized in the search space with random velocity and position values. In each iteration, the velocity of each particle is updated based on its personal best experience, *X<sup>t</sup><sub>pbest,i</sub>*, and the best experience of the whole swarm, *X<sup>t</sup><sub>gbest</sub>*, found so far. Therefore, the velocity and position of each particle can be mathematically formulated as follows:

$$\mathbf{V\_{i}^{t+1}} = \boldsymbol{\omega^{t}} \times \mathbf{V\_{i}^{t}} + \mathbf{C\_{1}} \times rand\_{1} \times (\mathbf{X\_{pbest\_{i}}^{t}} - \mathbf{X\_{i}^{t}}) + \mathbf{C\_{2}} \times rand\_{2} \times (\mathbf{X\_{gbest}^{t}} - \mathbf{X\_{i}^{t}}) \tag{20}$$

$$\mathbf{X}\_{i}^{t+1} = \mathbf{X}\_{i}^{t} + \mathbf{V}\_{i}^{t+1} \tag{21}$$

where *Vi* is the velocity of the *i*th particle, *t* is the iteration number, ω*<sup>t</sup>* is defined as in (15), *C1* and *C2* are acceleration coefficients, *rand1* and *rand2* are uniformly generated random numbers in [0,1], *Xi* is the position of the *i*th particle, *Xpbest,i* is the personal best position of the *i*th particle, and *Xgbest* is the global best position among the whole swarm.
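Equations (20) and (21) can be sketched for a single particle as follows; the function name and calling convention are illustrative:

```python
import numpy as np

def pso_step(X, V, X_pbest, X_gbest, w, c1=2.0, c2=2.0):
    """Velocity and position update of one particle, Eqs. (20)-(21)."""
    r1 = np.random.rand(*X.shape)
    r2 = np.random.rand(*X.shape)
    # cognitive term pulls towards the personal best, social term towards the global best
    V = w * V + c1 * r1 * (X_pbest - X) + c2 * r2 * (X_gbest - X)  # Eq. (20)
    return X + V, V                                                # Eq. (21)
```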

#### *3.3. DA-PSO*

DA-PSO is a recently developed hybrid metaheuristic algorithm motivated by combining the advantages of the DA and PSO algorithms [38]. PSO applies both the personal and global best experiences of the particles to find the optimal solution; it is consequently good at exploitation and often converges quickly. However, PSO is sometimes trapped in a local optimum rather than the global one precisely because it converges so quickly. Conversely, DA is good at exploration since it employs the Levy flight to increase the stochastic behavior of the searching process, but it takes a long time to converge on the optimal solution. The hybrid DA-PSO algorithm was proposed to overcome these problems by merging the good exploration of DA with the good exploitation of PSO, and it has been proven to successfully solve complicated optimization problems such as the multi-objective optimal power-flow (MO-OPF) problem, as is evident in [38]. The idea of the DA-PSO algorithm is that, in the exploration phase, DA initially explores the solution space to locate the globally promising area and provides its best position found so far. In the exploitation phase, the PSO equations are applied, but the velocity equation of PSO, Equation (20), is modified by replacing the global best position with the best position found so far by DA. PSO then searches for a better solution from this starting point. Thus, the modified version of the PSO equations can be written as:

$$\mathbf{V\_{i}^{t+1}} = \boldsymbol{\omega^{t}} \times \mathbf{V\_{i}^{t}} + \mathbf{C\_{1}} \times rand\_{1} \times \left(\mathbf{X\_{pbest\_{i}}^{t}} - \mathbf{X\_{i}^{t}}\right) + \mathbf{C\_{2}} \times rand\_{2} \times \left(\mathbf{X\_{DA}^{t+1}} - \mathbf{X\_{i}^{t}}\right) \tag{22}$$

$$\mathbf{X}\_{i}^{t+1} = \mathbf{X}\_{i}^{t} + \mathbf{V}\_{i}^{t+1} \tag{23}$$
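The substitution in Equation (22) amounts to a one-term change in the PSO velocity update. A hedged sketch with illustrative names, where `X_DA` stands for the best position provided by the DA phase:

```python
import numpy as np

def dapso_velocity(X, V, X_pbest, X_DA, w, c1=2.0, c2=2.0):
    """Modified PSO velocity, Eq. (22): the best position found by DA
    replaces the swarm's global best in the social term."""
    r1 = np.random.rand(*X.shape)
    r2 = np.random.rand(*X.shape)
    return w * V + c1 * r1 * (X_pbest - X) + c2 * r2 * (X_DA - X)
```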

#### **4. An Improved DA-PSO Optimization Approach (iDA-PSO) for UC Problem**

The iDA-PSO algorithm is proposed to solve the UC problem by improving the traditional DA-PSO algorithm. An approach for the improvement, the related computational formulations, and the application of the approach are explained below.

#### *4.1. An Approach of Improving DA-PSO to Solve a Binary Problem*

Although many efficient metaheuristic algorithms have been proposed in recent years, most of them cannot be applied to solve problems involving binary values, such as the UC problem, which is the objective of this work. The contribution of this work is to include binary values in the optimization, thereby developing an efficient metaheuristic algorithm able to solve the UC problem. The hybrid metaheuristic DA-PSO algorithm operates only on real values; however, it was taken as the starting point to develop the improved DA-PSO (iDA-PSO) approach proposed in this paper.

The binary PSO (BPSO) was proposed by Kennedy and Eberhart as a modification of the traditional PSO to enable solving binary problems [46]. They also showed that BPSO could successfully solve the test functions from [47]. In BPSO, a particle is seen to move by flipping bits; consequently, the velocity of the particle can be represented by the probability of a bit changing per iteration. In other words, a particle moves in a search space by taking on only the values 0 or 1, where each velocity *V<sup>t</sup><sub>i,gi</sub>* represents the probability that the corresponding bit of the position *X<sup>t</sup><sub>i,gi</sub>* takes the value 1. Since the position *X<sup>t</sup><sub>i,gi</sub>* and the personal best *X<sup>t</sup><sub>pbest,i,gi</sub>* are integers (0 or 1), the velocity *V<sup>t</sup><sub>i,gi</sub>*, being a probability, needs to be limited to the range [0,1]. The function used to accomplish this is called the sigmoid function and is mathematically formulated as follows:

$$S(V\_{i,gi}^t) = \frac{1}{1 + \exp(-V\_{i,gi}^t)} \tag{24}$$

The sigmoid function limits the velocity within the appropriate range to be used as a probability. The change in position is defined by comparing with the random uniformly generated numbers between 0 and 1 which is formulated as follows:

$$\text{If } rand() < S(V\_{i,gi}^t), \text{ then } X\_{i,gi}^t = 1; \text{ else } X\_{i,gi}^t = 0 \tag{25}$$

In the UC problem, *V<sub>gi,max</sub>* is set to limit the range of *V<sub>i,gi</sub>* so that *S*(*V<sup>t</sup><sub>i,gi</sub>*) is not too close to 0 or 1. A higher value of *V<sub>gi,max</sub>* represents a lower frequency of changing the state of a generator.
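The sigmoid mapping of Equations (24) and (25) can be sketched as follows, where the clipping bound `Vmax` plays the role of *V<sub>gi,max</sub>* (the default value 4.0 is an illustrative choice, not taken from the paper):

```python
import numpy as np

def binarize(V, Vmax=4.0):
    """Map velocities to on/off bits via the sigmoid, Eqs. (24)-(25)."""
    V = np.clip(V, -Vmax, Vmax)          # keep S(V) away from exactly 0 or 1
    S = 1.0 / (1.0 + np.exp(-V))         # Eq. (24)
    return (np.random.rand(*V.shape) < S).astype(int)  # Eq. (25)
```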

To make the DA-PSO algorithm able to solve the UC problem, the sigmoid function described above is applied within the DA-PSO process. The equations updating the position of dragonflies, Equations (16) and (17), are both replaced by the sigmoid-based update, Equation (25). Similarly, the position equation of PSO, Equation (23), is also replaced by the sigmoid-based update to find the on/off status of each generator.

#### *4.2. Priority List*

A unit operating at its maximum power output normally has a lower cost per unit of produced power than one operating at other output levels; hence, units should preferably be operated at their maximum power output. The priority list in this case is based on the average full-load cost (α) of a unit, defined as the cost per unit of maximum power as follows:

$$\alpha\_{i} = \frac{f\_{cost}(P\_{gi(\max)})}{P\_{gi(\max)}} = a\_{i}P\_{gi(\max)} + b\_{i} + \frac{c\_{i}}{P\_{gi(\max)}} \tag{26}$$

where the unit with the least α*<sup>i</sup>* is prioritized to be dispatched first.
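Building the priority list from Equation (26) can be sketched as follows; the tuple layout of the unit data is an assumption for illustration:

```python
def priority_list(units):
    """Return unit indices sorted by average full-load cost alpha, Eq. (26),
    cheapest first. Each unit is a tuple (a, b, c, Pmax) of fuel-cost
    coefficients and maximum power output."""
    def alpha(u):
        a, b, c, Pmax = u
        return a * Pmax + b + c / Pmax
    return sorted(range(len(units)), key=lambda i: alpha(units[i]))
```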

#### *4.3. Spinning Reserve Constraint Satisfaction*

The unit scheduling from the heuristic search may not satisfy the spinning reserve constraint. There are two main ways to deal with constraint-violating results. The first is a penalty function, which transforms the constrained problem into an unconstrained one; however, when the problem is highly constrained, it may be hard to find the near-global solution because of the reduction of the search space. The other is to repair the violations that have occurred, which is the approach used in this paper. The implementation of repairing the spinning reserve violation is expressed below:
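As an illustration of this repair, the following is a minimal sketch of one common strategy, consistent with the commitment logic described in Section 4.5: at each hour, commit additional units in priority order (lowest α first) until the committed capacity covers the load demand plus the spinning reserve. All function and variable names are illustrative assumptions, not the paper's exact procedure:

```python
def repair_spinning_reserve(u_t, Pmax, PD_t, PR_t, priority):
    """Commit extra units (cheapest alpha first) until the committed
    capacity covers the load demand plus spinning reserve at one hour."""
    capacity = sum(Pmax[i] for i, on in enumerate(u_t) if on)
    for i in priority:                  # indices in ascending-alpha order
        if capacity >= PD_t + PR_t:
            break
        if not u_t[i]:
            u_t[i] = 1                  # commit the unit
            capacity += Pmax[i]
    return u_t
```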


#### *4.4. Minimum Up-Time and Down-Time Constraints Satisfaction*

The unit scheduling obtained from the previous process may violate the minimum up- and down-time constraints required in the UC problem. To repair the violations of these constraints, the following implementation is employed.

Step 1. At each hour *t*, calculate the accumulated current on/off hours of the *i*th unit, *T<sup>t</sup><sub>i,cur</sub>*, by referring to the accumulated hours of the previous state, *T<sup>t</sup><sub>i,prev</sub>*. If *t* = 1, *T<sup>t</sup><sub>i,prev</sub>* = initial state; else *T<sup>t</sup><sub>i,prev</sub>* = accumulated on/off hours of the previous hour, *T<sup>t−1</sup><sub>i,cur</sub>*.

Step 2. For each unit *i*:

Step 2.1. If *u<sup>t</sup><sub>i</sub>* = 1 and *T<sup>t</sup><sub>i,prev</sub>* ≥ 1, then *T<sup>t</sup><sub>i,cur</sub>* = *T<sup>t</sup><sub>i,prev</sub>* + 1
Step 2.2. If *u<sup>t</sup><sub>i</sub>* = 1 and *T<sup>t</sup><sub>i,prev</sub>* ≤ −*MDT<sub>i</sub>*, then *T<sup>t</sup><sub>i,cur</sub>* = 1
Step 2.3. If *u<sup>t</sup><sub>i</sub>* = 0 and *T<sup>t</sup><sub>i,prev</sub>* ≤ −1, then *T<sup>t</sup><sub>i,cur</sub>* = *T<sup>t</sup><sub>i,prev</sub>* − 1
Step 2.4. If *u<sup>t</sup><sub>i</sub>* = 0 and *T<sup>t</sup><sub>i,prev</sub>* ≥ *MUT<sub>i</sub>*, then *T<sup>t</sup><sub>i,cur</sub>* = −1
Step 2.5. If *u<sup>t</sup><sub>i</sub>* = 0 and *T<sup>t</sup><sub>i,prev</sub>* < *MUT<sub>i</sub>*, then set *u<sup>t</sup><sub>i</sub>* = 1 and *T<sup>t</sup><sub>i,cur</sub>* = *T<sup>t</sup><sub>i,prev</sub>* + 1
Step 2.6. If *u<sup>t</sup><sub>i</sub>* = 1 and *T<sup>t</sup><sub>i,prev</sub>* > −*MDT<sub>i</sub>*, then set *u<sup>t</sup><sub>i</sub>* = 0 and *T<sup>t</sup><sub>i,cur</sub>* = *T<sup>t</sup><sub>i,prev</sub>* − 1

Step 3. If *i* < *Ng*, set *i* = *i* + 1 and go to Step 2; otherwise, go to Step 4.
Step 4. If *t* < *T*, set *t* = *t* + 1 and go to Step 1; otherwise, stop this process.
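The repair procedure above can be sketched as follows, with `u[t][i]` as the on/off status and signed accumulated hours as in the text (positive while on, negative while off); names and data layout are illustrative:

```python
def repair_min_updown(u, T_init, MUT, MDT):
    """Repair minimum up-/down-time violations per Steps 1-4.
    u[t][i]: on/off status; T_init[i]: initial on(+)/off(-) hours."""
    T, Ng = len(u), len(u[0])
    T_prev = list(T_init)
    for t in range(T):
        T_cur = [0] * Ng
        for i in range(Ng):
            if u[t][i] == 1 and T_prev[i] >= 1:
                T_cur[i] = T_prev[i] + 1              # Step 2.1: stays on
            elif u[t][i] == 1 and T_prev[i] <= -MDT[i]:
                T_cur[i] = 1                          # Step 2.2: legal start-up
            elif u[t][i] == 0 and T_prev[i] <= -1:
                T_cur[i] = T_prev[i] - 1              # Step 2.3: stays off
            elif u[t][i] == 0 and T_prev[i] >= MUT[i]:
                T_cur[i] = -1                         # Step 2.4: legal shut-down
            elif u[t][i] == 0 and T_prev[i] < MUT[i]:
                u[t][i] = 1                           # Step 2.5: force on
                T_cur[i] = T_prev[i] + 1
            else:  # u == 1 and -MDT < T_prev <= -1
                u[t][i] = 0                           # Step 2.6: force off
                T_cur[i] = T_prev[i] - 1
        T_prev = T_cur
    return u
```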

#### *4.5. Economic Dispatch*

Repairing the minimum up- and/or down-time constraints may result in either excessive generation or spinning reserve, which leads to a high generation cost, or insufficient generation, which cannot meet the load demand and spinning reserve. In the case of excessive spinning reserve, the committed units with the lowest priority are decommitted, while simultaneously respecting the minimum up- and down-time constraints and the spinning reserve constraint, until no further unit can be decommitted. In other words, the minimum up- and down-time constraints and the spinning reserve constraint must be checked before decommitting a unit. Moreover, after decommitting a unit, the accumulated current on/off hours, *T<sup>t</sup><sub>i,cur</sub>*, must be updated according to the change. In the case of insufficient generation, which cannot meet the load demand and spinning reserve, conversely, the uncommitted units with the highest priority are committed, without violating the minimum up- and down-time constraints, until the committed units satisfy the spinning reserve constraint (i.e., Equation (5)). Similarly, after committing a unit, *T<sup>t</sup><sub>i,cur</sub>* must be updated according to the change. After updating the status of the units without any constraint violations, the lambda-iteration method [1] is employed to solve the ED problem, i.e., to find the optimal values of *P<sup>t</sup><sub>gi</sub>* of all committed units that meet the load demand while satisfying the power balance and generation limit constraints. The implementation of these processes can be explained as follows:

Step 1. At each hour *t*, check whether $\sum\_{i=1}^{N\_g} P\_{gi(\max)} u\_i^t \geq P\_D^t + P\_R^t$; if so, go to Step 2; otherwise, go to Step 8.

Step 2. Calculate α*<sup>i</sup>* using (26) for all committed units at hour *t*, sort them in descending order, and name the result the descending order list (*DOL<sup>t</sup>*). Name the first unit in *DOL<sup>t</sup>* as the lowest-priority unit (*LP<sup>t</sup>*).

$$\text{Step 3. Compute the excessive spinning reserve by } ExcessReserve = \sum\_{i=1}^{N\_g} P\_{gi(\max)} u\_i^t - P\_D^t - P\_R^t$$


$$\text{Step 9. Compute the lacking spinning reserve by } LackReserve = P\_D^t + P\_R^t - \sum\_{i=1}^{N\_g} P\_{gi(\max)} u\_i^t$$
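The lambda-iteration dispatch referred to above can be sketched as a bisection on the system incremental cost, assuming quadratic fuel costs with the coefficients *a<sub>i</sub>*, *b<sub>i</sub>*, *c<sub>i</sub>* of Equation (26); the implementation details below are illustrative, not taken from [1]:

```python
def lambda_iteration(units, PD, tol=1e-6):
    """Lambda-iteration economic dispatch for committed units with
    quadratic cost a*P^2 + b*P + c; bisection on the incremental cost.
    units: list of (a, b, Pmin, Pmax)."""
    def dispatch(lam):
        P = []
        for a, b, Pmin, Pmax in units:
            p = (lam - b) / (2 * a)          # dF/dP = 2aP + b = lambda
            P.append(min(max(p, Pmin), Pmax))  # clip to generation limits
        return P
    lo, hi = 0.0, 1000.0                     # illustrative bracket on lambda
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(dispatch(lam)) < PD:          # total output monotone in lambda
            lo = lam
        else:
            hi = lam
    return dispatch((lo + hi) / 2)
```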


The application of the iDA-PSO approach for solving the UC problem is as follows:


The flowchart of the iDA-PSO approach for solving the UC problem is presented in Figure 1.


**Figure 1.** Flowchart of Improved Dragonfly Algorithm-Particle Swarm Optimization (iDA-PSO) approach for Unit Commitment (UC) problem.

#### **5. Numerical Results**

The effectiveness of the iDA-PSO algorithm is now examined by solving the UC problem over a 24-hour scheduling time horizon for four systems of different sizes: the 5-unit system [48], 6-unit system [48], 10-unit system [16] and 26-unit system [49]. The spinning reserve requirement is equal to 10% of the total load demand of each hour in the 5-unit, 6-unit, and 10-unit systems, whereas in the 26-unit system it is equal to 5% of the total load demand of each hour, as in [49]. The data of each system, comprising the generator maximum and minimum limits, fuel cost coefficients, minimum up- and down-time limits, hot and cold start costs, cold start hours, and initial status of the units, can be found in Tables 1–4. The 24-hour load demands for the 5-unit, 6-unit, 10-unit and 26-unit systems are provided in Tables 5–8, respectively. For each test system, the proposed approach was run for 30 independent runs, and the population size and maximum number of iterations were set to 100 and 200, respectively.

**Table 1.** System data for 5-unit system.


**Table 2.** System data for 6-unit system.



**Table 3.** System data for 10-unit system.


**Table 4.** System data for 26-unit system.

**Table 5.** 24-hour load demand for 5-unit system.


**Table 6.** 24-hour load demand for 6-unit system.


**Table 7.** 24-hour load demand for 10-unit system.


**Table 8.** 24-hour load demand for 26-unit system.


The simulation results of the proposed iDA-PSO approach for the 5-unit system are shown in Table 9, and the convergence curve is presented in Figure 2. The unit schedule, the generation schedule for the 24-hour duration, and the total generation cost are presented in the table. The total generation cost over the scheduling duration obtained by the proposed iDA-PSO algorithm is \$11,830.94, which is better than the \$12,281 obtained by PSO-GWO for this UC problem in the literature [24].


**Table 9.** Commitment and generation schedule of the 5-unit system by the Improved Dragonfly Algorithm-Particle Swarm Optimization (iDA-PSO) approach.

**Figure 2.** Convergence curve of the iDA-PSO approach for the 5-unit system.

Table 10 presents the unit schedule, generation schedule for the 24-hour duration and the total generation cost obtained by the proposed algorithm for the 6-unit system, and Figure 3 demonstrates the convergence curve of the algorithm for this system. The total generation cost of the proposed approach, which is \$13,292.28, is once again better than that of the PSO-GWO, which is \$13,600 [24].


**Table 10.** Commitment and generation schedule of the 6-unit system by iDA-PSO.

**Figure 3.** Convergence curve of the iDA-PSO approach for the 6-unit system.

For the 10-unit system, the simulation results, including the unit and generation schedule for the 24-hour duration and the total generation cost of the iDA-PSO approach, are given in Table 11, and its convergence curve is provided in Figure 4. Over the scheduling duration, the total generation cost provided by the iDA-PSO is \$565,807.3094, which is slightly worse than those obtained by some algorithms in the literature, but significantly better than those of many others. The algorithms GA [16], DP [16], LR [16], PSO-LR [17], EP [18], NGA [19], LCA-PSO [20], IPSO [20], MPSO [20], TSGA [21], ICGA [22], BCGA [22], SA [23], SM [23], PSO-GWO [24], HPSO [25], the improved Lagrangian relaxation method (ILR) [25] and the greedy randomized adaptive search procedure (GRASP) [50] are compared with the proposed approach in Table 12. The best, average and worst generation costs and the computation times of the proposed iDA-PSO and the other algorithms are also presented in Table 12. The computation time of the proposed iDA-PSO is slightly longer than those of some algorithms because of the sequential execution of both DA and PSO.

**Table 11.** Commitment and generation schedule of the 10-unit system by iDA-PSO.


**Figure 4.** Convergence curve of the iDA-PSO approach for the 10-unit system.

**Table 12.** Simulation results of the iDA-PSO approach compared with other algorithms in the literature for the 10-generating unit system.


In the larger 26-unit system, the unit commitment outcome for the 24-hour duration together with the total generation cost provided by the proposed approach is shown by the non-zero numbers in Table 13, and Figure 5 displays the convergence curve of the proposed approach for this system. The total generation cost obtained by the iDA-PSO is \$741,587.7088, which is better than those of the other algorithms in the literature, including GA [49], discrete binary particle swarm optimization (BPSO) [49], and modified particle swarm optimization (MPSO) [51], as presented in Table 14.


**Table 13.** Generation schedule of the 26-unit system by iDA-PSO.

**Figure 5.** Convergence curve of the iDA-PSO approach for the 26-unit system.

**Table 14.** Simulation results of the iDA-PSO approach compared with other algorithms for the 26-generating unit system.


From the generation schedule of each system, it can be noticed that the units are dispatched in different ways. This is because the units have different fuel cost coefficients, generation limits, minimum up- and down-time constraints, hot and cold start-up costs, cold start hours, etc. Therefore, the units with the cheapest fuel cost coefficients should be dispatched first, and the units with the highest fuel cost coefficients should be dispatched only in the high-demand hours; however, this also depends on the start-up cost of each unit. Another noticeable point is that most of the units keep a constant level of production over different time intervals. This is because whenever a unit is turned off and on again, its start-up cost is added to the total generation cost, causing a higher cost. Thus, if the units have low fuel cost coefficients and high maximum power generation, it is unnecessary to turn them off and on again.

According to all simulation results presented in Tables 9–14, the proposed approach can efficiently find the optimal unit schedule over a 24-hour time horizon for four different system sizes. The total generation cost obtained by the proposed iDA-PSO approach is better than that of the recently proposed PSO-GWO algorithm for the 5- and 6-unit systems. For the 10-unit system, the iDA-PSO provided a considerably better total generation cost than many algorithms in the literature, as it also did for the larger 26-unit system. Thus, adopting the sigmoid function in the recently proposed DA-PSO optimization algorithm makes it able to solve the UC problem, which is a mixed-integer combinatorial optimization problem. The optimal on/off status of the generating units, which is the integer part of the UC problem, was efficiently provided for all studied systems, and the optimal total generation costs obtained are significantly better than those of many algorithms in the literature.

#### **6. Conclusions**

This paper has presented an improved DA-PSO algorithm capable of solving the UC problem in an electrical power system. DA-PSO is a recent and efficient optimization algorithm that has been proven to successfully solve complicated optimization problems such as the multi-objective OPF problem. However, DA-PSO cannot solve mixed-integer combinatorial optimization problems such as the UC problem. To overcome this limitation, a new iDA-PSO algorithm has been proposed, which employs the sigmoid function to find the optimal on/off status of generating units while satisfying the system constraints. Four test systems of different sizes (5-unit, 6-unit, 10-unit and 26-unit systems) were used to demonstrate the effectiveness of the iDA-PSO algorithm. The proposed approach proved reliable by successfully finding the optimal generation schedule for a 24-hour duration for all test systems. The total generation costs over the scheduled time horizon obtained by iDA-PSO are less than those of many algorithms reported in the literature. Thus, applying the sigmoid function to the DA-PSO algorithm enables it to solve the UC problem, which is a mixed-integer combinatorial problem, and the iDA-PSO is superior to many algorithms reported in the literature. In future work, the iDA-PSO approach could be improved and tested against other hybrid metaheuristic approaches such as fuzzy adaptive PSO.

**Author Contributions:** Conceptualization, S.K. and N.R.W.; Methodology, S.K.; Software, S.K.; Validation, S.K., N.R.W., A.S., and R.C.; Formal Analysis, S.K. and N.R.W.; Investigation, S.K.; Resources, N.R.W.; Data Curation, S.R.; Writing-Original Draft Preparation, S.R.; Writing-Review and Editing, S.K., N.R.W., A.S., and S.P.; Visualization, S.K.; Supervision, N.R.W., and A.S.; Project Administration, N.R.W., and A.S.; Funding Acquisition, A.S.

**Funding:** This research was funded by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (Grant no. PHD/0192/2557) to Mr Sirote Khunkitti and Professor Dr Apirat Siritaratiwat.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

