Article

A Novel Optimization Algorithm Inspired by Egyptian Stray Dogs for Solving Multi-Objective Optimal Power Flow Problems

Department of Electrical Energy Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport (AASTMT), Smart Village Campus, Giza 12577, Egypt
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2024, 7(6), 122; https://doi.org/10.3390/asi7060122
Submission received: 6 August 2024 / Revised: 30 October 2024 / Accepted: 26 November 2024 / Published: 3 December 2024

Abstract

One of the most important issues affecting the ability of an electric power network to operate sustainably is the optimal power flow (OPF) problem. It involves reaching the most efficient operating conditions for the electrical network while maintaining reliability and satisfying system constraints. Solving the OPF problem in transmission networks lowers three critical quantities: operating costs, transmission losses, and voltage drops. The OPF problem is nonlinear and nonconvex because of the power flow equations, which define the relationship between power generation, load demand, and the physical constraints of network components. The solution space for OPF is massive and multimodal, making optimization challenging and calling for advanced mathematical and computational methods. This paper introduces an innovative metaheuristic algorithm, the Egyptian Stray Dog Optimization (ESDO), inspired by the behavior of Egyptian stray dogs and used for solving both single- and multi-objective optimal power flow problems in transmission networks. The proposed technique is compared with the particle swarm optimization (PSO), multi-verse optimization (MVO), grasshopper optimization (GOA), Harris hawk optimization (HHO), and hippopotamus optimization (HO) algorithms through MATLAB simulations by applying them to the IEEE 30-bus system under various operational circumstances. The results obtained indicate that, in comparison with the other algorithms, the suggested technique gives significantly enhanced performance in solving the OPF problem.

1. Introduction

With the advancement of industry and the growth of society, modern power systems have become very complex. Sustainability and reliability in electric power networks can only be achieved by optimizing the control variables so that the OPF objectives are met. The control parameters cover the generation, transmission, and distribution systems, and they directly influence system stability, economic dispatch, and efficient power flow [1,2,3,4,5].
In the generation sector, the active and reactive power outputs and voltage set points are controlled to meet demand requirements within acceptable voltage levels at minimum operational cost. Transmission line control is enforced through transformer tap settings, phase-shifting transformers, and FACTS devices for voltage stability and power flow control [3,6,7,8]. On the other side, distribution networks perform load shedding and apply voltage regulators and capacitor banks to provide voltage control and reactive power management [9,10,11,12]. While each control variable can be optimized individually, their interaction is critical to ensure power balance and efficient power flow [13,14]. On the other hand, solving the power flow equations alone cannot provide a solution that achieves economic as well as reliable system operation [15,16].
OPF plays a critical role in optimizing power system control parameters to ensure efficient operations. In general, a single objective could be minimum voltage drop, minimum active power losses, or minimum fuel cost, while multi-objective optimization could include more than one among those stated above. The linear and nonlinear programming methods along with mixed-integer programming are considered classic versions of OPF techniques that failed in handling the highly complicated large-scale nonconvex problems. Therefore, metaheuristic algorithms became popular owing to their flexible and powerful techniques in solving optimization problems [17,18,19].
Among these, PSO is widely applied to power system optimization because of its simplicity and fast convergence. Inspired by bird flocking and fish schooling, PSO iteratively adjusts solutions according to both individual and group experience [20,21,22,23,24]. Another widely used technique is the genetic algorithm, which is based on the principles of natural selection to avoid local optima and has been very effective in hard optimization environments [25,26,27].
Recent developments in metaheuristics have also proposed hybrid systems that combine heuristic methods with nature-inspired algorithms. Examples include powerful and flexible optimization solutions such as the Dragonfly Algorithm, proposed based on dragonfly swarming behavior [28], or the Ant Lion Optimizer, which acts based on ant lion hunting strategies [29]. Similarly, GWO and HHO have taken inspiration from the social hierarchy in wolves and cooperative behaviors in hawks [30,31], while MVO has integrated several concepts borrowed from physics into its algorithm [32,33]. Modified multi-objective variants of ALO and GWO have been developed to handle complex multi-objective OPF problems [34,35].
Other popular algorithms are MFO, inspired by the navigation mechanisms of moths and showing excellent performance in escaping local optima [36], GOA, inspired by modeling the swarming of grasshoppers, and HO, inspired by the territorial walking style of hippopotamuses [37,38,39]. WOA, proposed with inspiration from whales’ bubble-net hunting strategy, has become one of the most well-known algorithms because of its simplicity and robustness across different optimization problems [40]. Other methods include, among many others, BBO [41], the hybrid PSO–Gravitational Search Optimization [42,43,44], and many other continuously evolving variants with proven efficiency over a wide application range [45,46,47,48,49,50,51,52]. Although many strategies are available, some still face difficulties in finding the optimal combination of control variables for OPF, which points to an ongoing need for further advancement.
This work presents a novel meta-heuristic optimization technique, ESDO, for solving the OPF problem. The OPF’s objective function is initially single-oriented to minimize fuel costs, power losses, and voltage drops individually, and then multi-objective to reduce all three variables simultaneously. The suggested technique is evaluated using MATLAB simulations on the IEEE 30-bus test system and compared to PSO, MVO, GOA, HHO, and HO algorithms.
The rest of this paper is organized as follows: Section 2 shows the mathematical modeling of the OPF objective function, including its relative constraints. Section 3 presents the proposed method for solving the OPF problem. Section 4 includes a detailed analysis of the four case studies, and Section 5 concludes the paper.

2. The Mathematical Modeling of OPF Objective Function

The OPF problem is crucial for power system designers and operators. It involves minimizing an objective function by adjusting the control variables. In this section, the standard mathematical representation of the OPF problem, its variables, constraints, and objectives is outlined.

2.1. Problem Formulation

The goal of the OPF problem is to minimize a selected objective function by controlling the power system control parameters while simultaneously adhering to several equality and inequality constraints. It can be formulated as shown in Equation (1):
$$\min F(x, u) \tag{1}$$
The objective function of the system optimization is denoted by $F(x,u)$, where $F$ is the function to be minimized or maximized. The control vector $x$ includes the generator active power outputs $P_G$ (except for the slack bus), generator voltages $V_G$, transformer tap settings $T$, and shunt VAR compensations $Q_C$. Therefore, the vector $x$ is stated below in Equation (2):
$$x = \left[ P_{G_2}, \dots, P_{G_{N_G}},\; V_{G_1}, \dots, V_{G_{N_G}},\; T_1, \dots, T_{N_T},\; Q_{C_1}, \dots, Q_{C_{N_C}} \right] \tag{2}$$
where $N_G$ is the number of generators, $N_T$ is the number of transformers, and $N_C$ is the number of reactive power compensators. $P_{G_i}$ controls the output power of the generator attached to bus $i$, $T_i$ represents the tap setting of the transformer connected to bus $i$, and $Q_{C_i}$ indicates the reactive power value of the reactive power compensation unit linked to bus $i$.
The state vector $u$ determines the status of the power system and can be written as presented in Equation (3):
$$u = \left[ P_{G_1},\; V_{L_1}, \dots, V_{L_{N_L}},\; Q_{G_1}, \dots, Q_{G_{N_G}},\; S_{l_1}, \dots, S_{l_{N_{TL}}} \right] \tag{3}$$
where $N_L$ is the number of load buses, $N_{TL}$ is the number of transmission lines, and $V_{L_i}$ is the load voltage of bus $i$. $Q_{G_i}$ is the reactive power of the generator linked to bus $i$, whereas $S_{l_i}$ is the load flowing over transmission line $i$.
It is important to note that $P_{G_1}$, the output power of the slack bus, is not included in the control vector because it depends on the state vector. For this reason, the generator output powers in the control vector start from bus 2.
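For illustration, the following minimal sketch (not the authors' code; the function names and the use of Python/NumPy are assumptions) shows how the 24 control variables of the IEEE 30-bus case described in Section 4 could be packed into and unpacked from a flat candidate vector $x$ of Equation (2):

```python
import numpy as np

# Hypothetical helpers: pack/unpack the control vector x of Equation (2) for the
# IEEE 30-bus case of Section 4, where NG = 6 generators (the slack generator's
# power is excluded), NT = 4 tap changers and NC = 9 shunt compensators give
# 5 + 6 + 4 + 9 = 24 control variables.
def pack_control_vector(P_G, V_G, T, Q_C):
    """Concatenate [P_G2..P_GNG, V_G1..V_GNG, T_1..T_NT, Q_C1..Q_CNC]."""
    return np.concatenate([P_G[1:], V_G, T, Q_C])  # P_G[0] (slack bus) stays in the state vector

def unpack_control_vector(x, NG=6, NT=4, NC=9):
    """Split a flat candidate solution back into its physical groups."""
    P_G = x[:NG - 1]                       # active power of generators 2..NG
    V_G = x[NG - 1:2 * NG - 1]             # generator voltage set points
    T = x[2 * NG - 1:2 * NG - 1 + NT]      # transformer tap settings
    Q_C = x[2 * NG - 1 + NT:]              # shunt VAR injections
    return P_G, V_G, T, Q_C
```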

2.2. Constraints

As established above, the OPF problem involves both equality and inequality constraints, both of which depend on the vectors $x$ and $u$, as shown in Equations (4) and (5):
$$g(x, u) = 0 \tag{4}$$
$$h(x, u) \le 0 \tag{5}$$
where the function $g(x,u)$ represents the equality constraints and the function $h(x,u)$ represents the inequality constraints.

2.2.1. Equality Constraints

In OPF, the equality constraints are expressed as the nonlinear load flow equations, which state that the total generated power equals the total load power plus the power losses, as indicated in Equations (6) and (7).
$$P_{G_i} - P_{D_i} - V_i \sum_{j=1}^{N_B} V_j \left[ G_{ij} \cos\theta_{ij} + B_{ij} \sin\theta_{ij} \right] = 0 \tag{6}$$
$$Q_{G_i} - Q_{D_i} - V_i \sum_{j=1}^{N_B} V_j \left[ G_{ij} \sin\theta_{ij} - B_{ij} \cos\theta_{ij} \right] = 0 \tag{7}$$
where $P_{G_i}$ is the active power generation, $Q_{G_i}$ is the reactive power generation, $P_{D_i}$ is the active load demand, $Q_{D_i}$ is the reactive load demand, $N_B$ is the number of buses, $\theta_{ij}$ is the voltage angle difference between buses $i$ and $j$, and $G_{ij}$ and $B_{ij}$ are the real and imaginary parts of the element of the bus admittance matrix, illustrated in Equation (8), corresponding to the $i$th row and $j$th column, respectively.
$$\begin{bmatrix} I_1 \\ I_2 \\ \vdots \\ I_i \end{bmatrix} = \begin{bmatrix} Y_{11} & Y_{12} & \cdots & Y_{1j} \\ Y_{21} & Y_{22} & \cdots & Y_{2j} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{i1} & Y_{i2} & \cdots & Y_{ij} \end{bmatrix} \begin{bmatrix} V_1 \\ V_2 \\ \vdots \\ V_i \end{bmatrix} \tag{8}$$
where $I_i$ and $V_i$ are the current and the voltage of bus $i$, respectively, while $Y_{ij}$ is the admittance element of the $i$th row and $j$th column.
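The following hedged sketch (variable and function names are illustrative assumptions, not the paper's implementation) evaluates the left-hand sides of Equations (6) and (7): given the bus admittance matrix $Y_{bus} = G + jB$, voltage magnitudes, angles, and the scheduled generation and demand, the active and reactive mismatches should be close to zero at a valid power flow solution.

```python
import numpy as np

# Sketch of the equality constraints in Equations (6) and (7) for all buses at once.
def power_mismatches(Ybus, V, theta, P_G, P_D, Q_G, Q_D):
    G, B = Ybus.real, Ybus.imag
    dtheta = theta[:, None] - theta[None, :]      # theta_ij for every bus pair
    P_inj = V * np.sum(V[None, :] * (G * np.cos(dtheta) + B * np.sin(dtheta)), axis=1)
    Q_inj = V * np.sum(V[None, :] * (G * np.sin(dtheta) - B * np.cos(dtheta)), axis=1)
    dP = P_G - P_D - P_inj                        # left-hand side of Equation (6)
    dQ = Q_G - Q_D - Q_inj                        # left-hand side of Equation (7)
    return dP, dQ
```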

2.2.2. Inequality Constraints

Although the power flow relations in Equations (6) and (7) appear to be simple equations, solving the OPF problem subject to the inequality constraints requires considerable computational effort because of the numerous operating limits of each system component. The system components’ constraints are listed below:
  • Generator constraints:
$$V_{G_i}^{min} \le V_{G_i} \le V_{G_i}^{max}, \quad i = 1, \dots, N_G \tag{9}$$
$$P_{G_i}^{min} \le P_{G_i} \le P_{G_i}^{max}, \quad i = 2, \dots, N_G \tag{10}$$
$$Q_{G_i}^{min} \le Q_{G_i} \le Q_{G_i}^{max}, \quad i = 1, \dots, N_G \tag{11}$$
    where $V_{G_i}^{min}$ and $V_{G_i}^{max}$ indicate the lowest and highest voltage boundaries of the $i$th generator bus, $P_{G_i}^{min}$ and $P_{G_i}^{max}$ define the active power limits of the $i$th generator, and $Q_{G_i}^{min}$ and $Q_{G_i}^{max}$ define the corresponding reactive power limits.
  • Transformer constraints:
$$T_i^{min} \le T_i \le T_i^{max}, \quad i = 1, \dots, N_T \tag{12}$$
    where $T_i^{min}$ and $T_i^{max}$ represent the minimum and maximum values of the $i$th tap-changing transformer $T_i$.
  • Shunt VAR compensator constraints:
$$Q_{C_i}^{min} \le Q_{C_i} \le Q_{C_i}^{max}, \quad i = 1, \dots, N_C \tag{13}$$
    where $Q_{C_i}^{min}$ is the lower limit and $Q_{C_i}^{max}$ is the upper limit of the compensator connected to bus $i$.
  • Security constraints:
$$V_{L_i}^{min} \le V_{L_i} \le V_{L_i}^{max}, \quad i = 1, \dots, N_L \tag{14}$$
$$S_{l_i} \le S_{l_i}^{max}, \quad i = 1, \dots, N_{TL} \tag{15}$$
    where $S_{l_i}$ is the apparent power flowing through the $i$th transmission line, $S_{l_i}^{max}$ is its maximum limit, and $V_{L_i}^{min}$ and $V_{L_i}^{max}$ are the voltage limits of the $i$th load bus.
It is worth mentioning that the state vector $u$, which determines the status of the power system, is subject to inequality constraints that are added to the general objective of Equation (1) as penalty terms. The limit values used for the $u$ vector are defined below in Equations (16)–(19).
$$P_{G_{sl}}^{lim} = \begin{cases} P_{G_{sl}}^{min}, & P_{G_{sl}} < P_{G_{sl}}^{min} \\ P_{G_{sl}}^{max}, & P_{G_{sl}} > P_{G_{sl}}^{max} \\ P_{G_{sl}}, & P_{G_{sl}}^{min} \le P_{G_{sl}} \le P_{G_{sl}}^{max} \end{cases} \tag{16}$$
$$V_{L_i}^{lim} = \begin{cases} V_{L_i}^{min}, & V_{L_i} < V_{L_i}^{min} \\ V_{L_i}^{max}, & V_{L_i} > V_{L_i}^{max} \\ V_{L_i}, & V_{L_i}^{min} \le V_{L_i} \le V_{L_i}^{max} \end{cases} \tag{17}$$
$$Q_{G_i}^{lim} = \begin{cases} Q_{G_i}^{min}, & Q_{G_i} < Q_{G_i}^{min} \\ Q_{G_i}^{max}, & Q_{G_i} > Q_{G_i}^{max} \\ Q_{G_i}, & Q_{G_i}^{min} \le Q_{G_i} \le Q_{G_i}^{max} \end{cases} \tag{18}$$
$$S_{l_i}^{lim} = \begin{cases} S_{l_i}^{max}, & S_{l_i} > S_{l_i}^{max} \\ S_{l_i}, & S_{l_i} \le S_{l_i}^{max} \end{cases} \tag{19}$$
where $P_{G_{sl}}$ is the active power generation at the slack bus, $V_{L_i}$ are the load bus voltages, $Q_{G_i}$ is the reactive power generation, and $S_{l_i}$ is the apparent power flow of branch $i$. Thus, the new expanded objective function to be solved is shown in Equation (20):
$$F_p(x,u) = F(x,u) + \lambda_V \sum_{i=1}^{N_L} \left( V_{L_i} - V_{L_i}^{lim} \right)^2 + \lambda_P \left( P_{G_{sl}} - P_{G_{sl}}^{lim} \right)^2 + \lambda_Q \sum_{i=1}^{N_G} \left( Q_{G_i} - Q_{G_i}^{lim} \right)^2 + \lambda_S \sum_{i=1}^{N_{TL}} \left( S_{l_i} - S_{l_i}^{lim} \right)^2 \tag{20}$$
where $\lambda_V$, $\lambda_P$, $\lambda_Q$, and $\lambda_S$ are the penalty factors for voltage, active power, reactive power, and apparent power, and $F_p(x,u)$ stands for the augmented objective function of the system optimization, in which the original function $F(x,u)$ carries the basic optimization goal.
The additional terms, weighted by the penalty factors $\lambda_V$, $\lambda_P$, $\lambda_Q$, and $\lambda_S$, are designed to enforce the constraints related to voltage $V$, active power $P$, reactive power $Q$, and apparent power $S$. The approach used to derive this equation follows the penalty function method in optimization. By incorporating quadratic penalty terms, the optimization process can handle both the main objective and constraint enforcement within the same framework. The quadratic form ensures that the penalty increases with the magnitude of a constraint violation, which drives the search towards feasible solutions where the constraints are satisfied or the violations are minimized. The terms of the equation are described as follows (a short illustrative sketch is given after the list below):
  • The term $\lambda_V \sum_{i=1}^{N_L} \left( V_{L_i} - V_{L_i}^{lim} \right)^2$ penalizes deviations of the load bus voltages $V_{L_i}$ from their limit values $V_{L_i}^{lim}$.
  • The term $\lambda_P \left( P_{G_{sl}} - P_{G_{sl}}^{lim} \right)^2$ imposes a penalty on the active power $P_{G_{sl}}$ of the slack generator, ensuring it stays within its limits.
  • The term $\lambda_Q \sum_{i=1}^{N_G} \left( Q_{G_i} - Q_{G_i}^{lim} \right)^2$ penalizes deviations of the generator reactive powers $Q_{G_i}$ from their limits $Q_{G_i}^{lim}$.
  • The term $\lambda_S \sum_{i=1}^{N_{TL}} \left( S_{l_i} - S_{l_i}^{lim} \right)^2$ enforces the apparent power constraints on $S_{l_i}$ across the transmission lines to avoid overloading.
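As an illustrative sketch of this penalty scheme (the helper names and data layout are assumptions, not the authors' implementation), the dependent quantities can be compared with the clamped limit values of Equations (16)–(19) and the squared violations, weighted by the penalty factors, added to the base objective:

```python
import numpy as np

# Hypothetical implementation of the augmented objective of Equation (20).
def augmented_objective(F, V_L, P_slack, Q_G, S_l, limits, penalties):
    def violation(value, lo, hi):
        clamped = np.clip(value, lo, hi)                    # the "lim" value of Equations (16)-(19)
        return np.sum((np.asarray(value) - clamped) ** 2)   # zero when the constraint holds

    Fp = F
    Fp += penalties["V"] * violation(V_L, *limits["V_L"])          # load bus voltages
    Fp += penalties["P"] * violation(P_slack, *limits["P_slack"])  # slack generator active power
    Fp += penalties["Q"] * violation(Q_G, *limits["Q_G"])          # generator reactive powers
    Fp += penalties["S"] * violation(S_l, 0.0, limits["S_max"])    # line loading (upper bound only)
    return Fp
```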

2.3. Objective Function

In this paper, three single-objective functions and one multi-objective function were examined as the objective functions of the OPF problem. The mathematical representation of each objective function is as follows:

2.3.1. Fuel Cost (FC)

The cost of fuel required by generators in thermal power plants is regarded as the major issue in electricity generation. OPF challenges focus on the entire fuel cost of the system, with generator cost characteristics stated as a quadratic function of the output power.
The OPF solution targets reducing the total generation cost (USD/h); the cost objective function is presented below in Equation (21):
$$F_1(x,u) = \sum_{i=1}^{N_G} \left( a_i + b_i P_{G_i} + c_i P_{G_i}^2 \right) \tag{21}$$
where $N_G$ is the number of generators, $a_i$, $b_i$, and $c_i$ are the fuel cost coefficients of the $i$th generating unit, and $P_{G_i}$ is the real power generation of the $i$th generator.

2.3.2. Active Power Loss (APL)

Transmission lines always experience power loss because of wire resistance. As a result, active power loss is a significant variable to consider. The mathematical model for power loss is provided below in Equation (22):
$$F_2(x,u) = \sum_{k=1}^{N_{TL}} g_{ij} \left( V_i^2 + V_j^2 - 2 V_i V_j \cos\theta_{ij} \right) \tag{22}$$
where $g_{ij}$ is the conductance of the transmission line between buses $i$ and $j$ (the $k$th line connecting them), $V_i$ and $\theta_i$ are the voltage magnitude and angle of bus $i$, and $V_j$ and $\theta_j$ are the voltage magnitude and angle of bus $j$.

2.3.3. Voltage Drops (VDs)

VD is a key indicator of network quality and system security. It is defined as the total deviation of the voltages at all load buses in the network from the target value (usually 1 p.u.). The overall voltage deviation is computed in Equation (23) as described below:
$$F_3(x,u) = \sum_{i=1}^{N_L} \left( V_i - V_{ref_i} \right)^2 = \sum_{i=1}^{N_L} \left( V_i - 1 \right)^2 \tag{23}$$
where $N_L$ is the number of load buses, $V_i$ is the voltage magnitude at bus $i$, and $V_{ref_i}$ is the reference voltage magnitude of the $i$th bus, which is usually set to 1 p.u.

2.3.4. Minimization of Fuel Cost, Active Power Loss, and Voltage Drops

As mentioned earlier in the paper, the three objectives above are all essential for solving the OPF problem. Therefore, integrating two or more targets in the same objective function is desirable. The multi-objective function $F_4$ is used to optimize the fuel cost, power loss, and voltage drop together in order to solve the optimal power flow. The mathematical model of the multi-objective function is given below in Equation (24):
$$F_4(x,u) = \min \left\{ F_1(x,u) + W_p F_2(x,u) + W_v F_3(x,u) \right\} \tag{24}$$
where $F_1(x,u)$ is the fuel cost function, $F_2(x,u)$ is the active power loss function, $F_3(x,u)$ is the voltage drop function, and $W_p$ and $W_v$ are suitable weighting factors for power loss and voltage drop, respectively.
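The following minimal sketch (an assumption-laden illustration, not the authors' code) collects Equations (21)–(24); the line data layout (from-bus/to-bus index arrays plus a per-line conductance $g$) is assumed, and the sum in Equation (22) is taken over the transmission lines.

```python
import numpy as np

def fuel_cost(P_G, a, b, c):                                    # Equation (21), USD/h
    return np.sum(a + b * P_G + c * P_G ** 2)

def active_power_loss(from_bus, to_bus, g, V, theta):           # Equation (22), summed over lines
    i, j = from_bus, to_bus
    return np.sum(g * (V[i] ** 2 + V[j] ** 2 - 2.0 * V[i] * V[j] * np.cos(theta[i] - theta[j])))

def voltage_deviation(V_load, V_ref=1.0):                       # Equation (23)
    return np.sum((V_load - V_ref) ** 2)

def multi_objective(P_G, a, b, c, from_bus, to_bus, g, V, theta, V_load, W_p, W_v):
    # Equation (24): weighted sum of the three single objectives
    return (fuel_cost(P_G, a, b, c)
            + W_p * active_power_loss(from_bus, to_bus, g, V, theta)
            + W_v * voltage_deviation(V_load))
```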

3. Egyptian Stray Dogs Algorithm

The proposed algorithm is inspired by the natural behaviors of Egyptian stray dogs. In this section, the metaheuristic algorithm is described in detail, including its natural inspiration, key components, and mathematical formulation.

3.1. Natural Inspiration

The ESDO algorithm joins a long line of animal-inspired optimization techniques. For example, PSO is inspired by the movement of flocks of birds or shoals of fish, in which particles (solutions) modify their positions according to their individual and collective memory [20,21,22,23,24]. GOA is inspired by the dynamics of grasshopper swarming [37,38], HHO resembles the cooperative hunting strategies of Harris hawks [30,31], and HO emulates the territorial behavior of hippos as they move between water and land [39]. Other algorithms, namely Prairie Dog Optimization (PDO) and German Shepherd Dog Optimization (GSDO), model canine behaviors: PDO utilizes a frequency-wave strategy similar to that of prairie dogs to perform efficient exploration [53], while GSDO emulates the trained behavior of search dogs, making use of scent detection to locate targets [54].
ESDO draws inspiration from the natural behaviors of stray dogs and utilizes this inspiration to solve optimization problems. Drawing inspiration from the way a stray dog defines its territory, scavenges opportunistically, adapts to environmental changes, and uses a complex social hierarchy, ESDO employs a different approach to navigating large search spaces [55]. These can be further categorized into scavenging, resting, social hierarchies, defensive actions, territoriality, and entertainment as shown in Figure 1. Similarly, these will guide the algorithm toward effective explorations and exploitations of search spaces for optimal solutions to the given problem.
One such major behavior is territoriality, whereby stray dogs mark and protect areas to ensure availability of and access to food, raising their survival chances. In ESDO, this territorial behavior is reflected by focusing the search effort on the most promising areas while defending these areas against less promising solutions.
The entertainment behavior, expressed through play, allows stray dogs to roam and gauge their environment while building social bonds. In optimization, this improves exploration and helps prevent premature convergence by allowing new approaches toward the solution. Similarly, the behavior of defending against intruders allows ESDO to maintain concentrated searches around high-potential areas and avoid being overridden by less optimal solutions.
Social hierarchy is an important aspect in the groups of stray dogs to ensure resource allocation goes well and that there is a semblance of order. ESDO mimics this by structuring solutions in the hierarchy, giving precedence to those with higher potential, which ensures that the allocation of resources is carried out in an efficient manner during optimization. Resting behavior, allowing dogs to recuperate, is also manifested in the algorithm through the avoidance of constant search to evade entrapment in local optima, making exploration more effective.
Finally, scavenging behavior refers to the opportunistic search for food by stray dogs across different environments, a feature that is crucial for survival. In ESDO, this trait is used to explore various sections of the search space with the aim of finding solutions that might otherwise remain hidden from traditional exploration methods.
In short, the survival behaviors of stray dogs provide rich inspiration for the ESDO algorithm. It makes use of territorial, playful, defensive, and scavenging behaviors to drive both the exploration and exploitation phases. As a result, ESDO enjoys a flexible and adaptive search process that can uncover novel solutions that would remain undiscovered by traditional optimization techniques.

3.2. Algorithm Key Components

The ESDO algorithm involves several phases and components, including search agents, a random walk mechanism, exploration, exploitation, social interaction, an energy function, and algorithm control parameters. These phases are the main elements driving the algorithm and are explained below:

3.2.1. Search Agents

In ESDO, the individual stray dogs represent the search agents. The algorithm iteratively updates the position of each agent in the search space based on the mechanism of territorial exploration: while exploring the territory, agents roam randomly, and during periods of exploitation, they refine promising solutions. The number of agents reflects the diversity of candidate solutions.

3.2.2. Random Walk Mechanism

The random walk mechanism is inspired by the erratic motion of stray dogs while scavenging. It allows the search agents to visit a wide span of the search space and adapt to dynamic environmental conditions, which enhances the possibility of convergence toward the global optimum. The random walk approach helps avoid the local optima problem, provides adaptability in dynamic problem environments, and is computationally simple yet effective for navigating large search spaces.

3.2.3. Exploration Phase

During the exploration phase in ESDO, agents move randomly in the search space. This process can be compared with stray dogs foraging throughout their territory. It is a crucial process in ESDO because it prevents the algorithm from converging prematurely to local optima. The agents cover most of the search space, thus increasing the possibility of finding the global optimum.

3.2.4. Exploitation Phase

The ESDO algorithm emphasizes both the exploration and exploitation phases to optimize the solution search. During exploration, the algorithm searches for new and diversified solutions by emulating the opportunistic behaviors of stray dogs. In contrast, during the exploitation phase, these solutions are refined, with inspiration from the way stray dogs defend and secure resources. This is the local search phase, in which the algorithm attempts to improve the current solutions and reach the best one.
The separation between these phases is crucial because, unlike the more random and broad exploration in search of novelty, exploitation is more structured and narrower, aiming at refinement. A proper balance between exploration and exploitation is necessary; too much exploration can result in slow convergence, while over-exploitation may result in being trapped in local optima. This trade-off between exploration and exploitation is central to the algorithm.

3.2.5. Social Interaction Phase

The ESDO algorithm is driven by an interaction phase related to a social hierarchy, reflecting the complex social structures and communication mechanisms in groups of stray dogs in Egypt. This phase is very important in ensuring efficient and effective exploration and exploitation of the search space through four primary behaviors: information sharing, dominance and submission, cooperation, and competition.
1. Information Sharing: Agents share information about the location of high-quality solutions and direct others toward the most promising areas for faster convergence.
2. Dominance and Submission: The overall process is governed by elitism, where the best solutions (the alpha and beta agents) drive the search.
3. Cooperation: Agents cooperate efficiently to explore the search space, covering all areas with as little duplication as possible.
4. Competition: Agents compete with one another for resources (more precisely, for solutions), keeping the search process continually improving.
These social dynamics amplify ESDO’s ability to balance exploration and exploitation in favor of diversity and focused refinement of solutions. It is due to these mimicked behaviors that ESDO can efficiently search out complex problem spaces and converge on high-quality solutions.

3.2.6. Energy Function

The fitness, or energy, function forms the basis for evaluating the quality of the solutions found by the ESDO search agents. This function is problem-dependent; it guides the search toward the optimal solution, and its implementation corresponds to the objective function that is to be optimized.

3.2.7. Algorithm Control Parameters

Algorithm control parameters in ESDO include the number of search agents, the maximum number of iterations, and the exploration and exploitation probabilities. These parameters are used to control the behavior of the search agents and the overall performance of the algorithm.

3.3. Mathematical Formulation

For a given optimization problem, the number of stray dogs ($N$) represents the number of search agents in every iteration, with each dog $i$ positioned within the lower and upper boundaries $[L, U]$. First, the initial positions of the search agents are defined by Equation (25):
$$P_{i,\text{initial}} = L + \text{rand}(0,1) \times (U - L) \tag{25}$$
where $L$ and $U$ are the lower and upper boundaries of the search space and rand(0,1) is a randomly generated number between 0 and 1. The energy, or fitness, is then calculated according to the objective function for each dog, as shown below in Equation (26):
$$E_i = F(P_i) \tag{26}$$
where $F(P_i)$ is the objective function that is required to be optimized.
At the end of this stage, the territory size for each dog is defined. The territory size acts as a hyperparameter that determines how much space each dog has in its territory to look for new positions. More precisely, if a dog explores a new position within its own territory, the new position is created by adding to the current position a random number multiplied by the territory size and by the difference between the upper and lower bounds of the search space.
This parameter is assigned a default value of 0.1; it can of course be changed depending on the problem. A small territory size provides more local exploration, while a large one leads to more global exploration.
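A small sketch of this initialization step (Equations (25) and (26)) together with the territory-size hyperparameter is shown below; the function name and the use of NumPy are assumptions made for illustration.

```python
import numpy as np

# Hypothetical initialization of the ESDO pack.
def initialize_pack(F, n_dogs, dim, L, U, territory_size=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    P = L + rng.random((n_dogs, dim)) * (U - L)    # Equation (25): random positions in [L, U]
    E = np.array([F(p) for p in P])                # Equation (26): energy of each dog
    return P, E, territory_size
```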
The main optimization loop runs for a maximum number of iterations. In every iteration, the algorithm updates the location and energy values of the dogs, recognizes the alpha and beta dogs (the dogs with the best and second-best energy values), and updates the locations by considering these leaders. A new position for each dog $i$ is explored either within its own territory or outside of it, which is controlled by a random number rand(0,1), as given in Equations (27) and (28):
$$P_{i,\text{new}} = P_i + \text{rand}(0,1) \times \text{territory\_size} \times (U - L) \tag{27}$$
$$P_{i,\text{new}} = L + \text{rand}(0,1) \times 0.5 \times (U - L) \tag{28}$$
Equation (27) is used when the random number is less than 0.95 to generate the dog’s new position within its territory, while Equation (28) is used when the random value is higher than 0.95 to place the new position outside the territory. This allows dogs to search mostly within their own territories while making occasional larger jumps to other areas of the solution space. After exploring a new position, the energy of the new position is evaluated using Equation (29):
$$E_{\text{new}} = F(P_{i,\text{new}}) \tag{29}$$
If the new position yields a better solution (lower energy), then the dog’s position and energy are updated as follows in Equation (30):
$$P_{\text{best},i} = P_i, \quad E_{\text{best},i} = E_i \tag{30}$$
Then, alpha and beta dogs are identified where the alpha is the dog with the best energy and the beta is the dog with the second-best energy, as shown in Equations (31) and (32), respectively:
$$\alpha = \arg\min_i E_{\text{best},i} \tag{31}$$
$$\beta = \arg\min_{i \neq \alpha} E_{\text{best},i} \tag{32}$$
After that, the positions of dogs are adjusted based on the positions of the alpha and beta dogs according to three scenarios:
  • If the dog’s energy is the same as the alpha dog’s energy, it moves toward the alpha dog’s position, as shown in Equation (33):
$$P_i = P_i + \text{rand}(0,1) \times (P_\alpha - P_i) \tag{33}$$
  • If the dog’s energy is between the alpha and beta dogs’ energies, it moves toward the beta dog’s position, as illustrated in Equation (34):
$$P_i = P_i + \text{rand}(0,1) \times (P_\beta - P_i) \tag{34}$$
  • If the dog’s energy is significantly higher than the alpha dog’s energy, it makes a small adjustment toward the alpha dog’s position, as given in Equation (35):
$$P_i = P_\alpha + \text{rand}(0,1) \times 0.05 \times (P_\alpha - P_i) \tag{35}$$
In this algorithm, the territory size is adapted based on the progress of the search. As the iterations progress, the territory size is reduced to encourage exploitation over exploration; this is described in Equation (36) below:
$$\text{If } l > 0.8 \times \text{Max\_iteration}, \text{ then } T = \frac{T}{2} \tag{36}$$
where $l$ is the current iteration and $T$ is the territory size defined in the initialization phase. The algorithm repeats the same sequence until a stopping criterion is reached, which might be a maximum number of iterations or a negligible change in the energy values. After reaching this criterion, the algorithm returns the best position and energy found, as illustrated in Equation (37):
$$\text{TargetEnergy} = E_\alpha, \quad \text{TargetPosition} = P_\alpha \tag{37}$$
This detailed mathematical formulation captures the essential steps and behaviors of the ESDO algorithm, including initialization, position updates, energy evaluations, identification of the alpha and beta dogs, and the adjustments based on their leadership. Algorithm 1 presents the pseudo-code of the proposed algorithm, and Figure 2 illustrates the corresponding flowchart.
Algorithm 1 Egyptian Stray Dog Algorithm Pseudo-code
Input: Number of dogs “D”, bounds “[L,U]”, maximum number of iterations “Max_iter”, number of dimensions (variables) “dim”, Objective function to minimize “F” and the Territory size.
Output: Best energy found “TargetEnergy”, Position corresponding to the best energy “TargetPosition”
Initialization:
1. Initialize the optimization problem with solution space bounds [L,U], number of dogs “D” and initial territory size.
2. For each dog i:
a. Initialize the dog’s position P_i,initial within its territory using Equation (25):
    P_i,initial = L + rand(0,1) × (U − L)
b. Compute the dog’s energy (objective function value) E_i,initial using Equation (26):
    E_i = F(P_i)
Main Optimization Loop:
For L = 1 to Max_iter:
1. Update Positions and Energies:
For each dog i:
  • Generate a random number r:
    r ← rand(0,1)
  • If r < 0.95:  // Explore a new position P_i,new within its own territory using Equation (27)
    P_i,new = P_i + rand(0,1) × territory_size × (U − L)
  • Else:     // Explore a new position P_i,new outside its own territory using Equation (28)
    P_i,new = L + rand(0,1) × 0.5 × (U − L)
  • Compute the new energy E_new for the new position using the objective function:
    E_new = F(P_i,new)
  • If E_new < E_i:
    P_i = P_i,new, E_i = E_new
2. Update Best-Known Positions and Energies:
For each dog i:
  • If E_i < E_best,i:
    P_best,i = P_i, E_best,i = E_i
3. Identify Alpha and Beta Dogs:
  • Alpha Dog:
    α = argmin_i E_best,i
  • Beta Dog:
    β = argmin_{i ≠ α} E_best,i
4. Adjust Positions Based on Leaders:
For each dog i:
  • If E_i = E_best,α:
// If the dog’s energy is the same as the alpha dog’s energy, it moves towards the alpha dog’s position
    P_i = P_i + rand(0,1) × (P_α − P_i)
  • Else if E_best,α < E_i < E_best,β:
// If the dog’s energy is between the alpha and beta dogs’ energies, it moves towards the beta dog’s position
    P_i = P_i + rand(0,1) × (P_β − P_i)
  • Else if E_i > 1.5 × E_best,α:
// If the dog’s current energy is significantly higher than the alpha dog’s energy, adjust its position closer to the alpha dog
    P_i = P_α + rand(0,1) × 0.05 × (P_α − P_i)
5. Adapt Territory Size:
If current iteration > (4/5) × Max_iter, then territory_size = territory_size / 2
6. At pre-defined intervals or iterations, have a subset of the dogs rest, ensuring that their positions remain unchanged for that iteration.
Repeat Main Optimization Loop until a stopping criterion is met.
Finalization:
After completing the iterations, the algorithm returns the best and second-best energies found and the corresponding best and second-best positions.
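The following Python sketch is a hedged translation of Algorithm 1 and Equations (25)–(37); it is not the authors' MATLAB implementation. The default parameter values, the clipping of new positions to the bounds, the resting fraction, and the example usage on a sphere function are assumptions added to keep the sketch self-contained and runnable.

```python
import numpy as np

def esdo(F, dim, L, U, n_dogs=30, max_iter=50, territory_size=0.1,
         rest_fraction=0.1, seed=None):
    """Hedged sketch of the Egyptian Stray Dog Optimization loop of Algorithm 1."""
    rng = np.random.default_rng(seed)
    L = np.full(dim, L, dtype=float) if np.isscalar(L) else np.asarray(L, dtype=float)
    U = np.full(dim, U, dtype=float) if np.isscalar(U) else np.asarray(U, dtype=float)

    # Initialization: Equations (25) and (26)
    P = L + rng.random((n_dogs, dim)) * (U - L)
    E = np.array([F(p) for p in P])
    P_best, E_best = P.copy(), E.copy()

    for it in range(1, max_iter + 1):
        resting = rng.random(n_dogs) < rest_fraction     # step 6: a subset of dogs rests

        for i in range(n_dogs):
            if resting[i]:
                continue
            if rng.random() < 0.95:                      # Equation (27): inside the territory
                P_new = P[i] + rng.random(dim) * territory_size * (U - L)
            else:                                        # Equation (28): jump outside the territory
                P_new = L + rng.random(dim) * 0.5 * (U - L)
            P_new = np.clip(P_new, L, U)                 # assumption: keep candidates within bounds
            E_new = F(P_new)                             # Equation (29)
            if E_new < E[i]:                             # greedy acceptance
                P[i], E[i] = P_new, E_new
            if E[i] < E_best[i]:                         # Equation (30): best-known memory
                P_best[i], E_best[i] = P[i].copy(), E[i]

        # Equations (31) and (32): alpha and beta dogs (best and second-best energies)
        order = np.argsort(E_best)
        alpha, beta = order[0], order[1]

        # Equations (33)-(35): adjust positions based on the leaders
        for i in range(n_dogs):
            if np.isclose(E[i], E_best[alpha]):
                P[i] = P[i] + rng.random(dim) * (P_best[alpha] - P[i])
            elif E_best[alpha] < E[i] < E_best[beta]:
                P[i] = P[i] + rng.random(dim) * (P_best[beta] - P[i])
            elif E[i] > 1.5 * E_best[alpha]:
                P[i] = P_best[alpha] + rng.random(dim) * 0.05 * (P_best[alpha] - P[i])
            P[i] = np.clip(P[i], L, U)
            E[i] = F(P[i])
            if E[i] < E_best[i]:
                P_best[i], E_best[i] = P[i].copy(), E[i]

        if it > 0.8 * max_iter:                          # Equation (36): favour exploitation late on
            territory_size /= 2.0

    best = int(np.argmin(E_best))
    return E_best[best], P_best[best]                    # Equation (37)

# Example usage (illustration only) on the sphere function
if __name__ == "__main__":
    energy, position = esdo(lambda x: float(np.sum(x ** 2)), dim=5, L=-10.0, U=10.0, seed=1)
    print(energy, position)
```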

4. Results and Discussion

This section presents the OPF results obtained by the ESDO algorithm and compares them with the PSO, GOA, HHO, MVO, and HO techniques on the IEEE 30-bus power system. The IEEE 30-bus evaluation system has 41 transmission lines; 6 generators at buses 1, 2, 5, 8, 11, and 13; 4 transformers with off-nominal tap ratios at lines 6–9, 6–10, 4–12, and 28–27; and 9 shunt VAR compensators at buses 10, 12, 15, 17, 20, 21, 23, 24, and 29. At a base of 100 MVA, the overall load demands are Pload = 2.834 p.u. and Qload = 1.262 p.u. There are 24 control variables, including 5 generator active power outputs, 6 generator voltage magnitudes, 4 transformer tap settings, and 9 shunt VAR compensator power injections. Figure 3 illustrates the single-line diagram of the network, including the upper and lower limits of the control variables. The system data were gathered from [56,57,58,59]. Table 1 lists the active and reactive power limits in addition to the generator cost coefficients, whereas Table 2 and Table 3 give the system load and line data.
In this part, four case studies are provided to evaluate performance across various operating modes. In the first scenario, the OPF aim is to reduce fuel costs, while the second case demonstrates the performance of the proposed algorithm when the intention is to minimize transmission losses. Case 3 illustrates the behavior of the suggested algorithm when seeking to reduce the voltage drop on the transmission lines. The last scenario presents the behavior of the suggested algorithm when using a multi-objective function that combines the three objectives: minimizing cost, power losses, and voltage drops. The proposed algorithm’s performance is compared with the PSO algorithm, which is a well-known technique for solving the OPF problem, as well as the MVO, GOA, HHO, and HO algorithms. All simulations are carried out using MATLAB R2024a.

4.1. Case 1: Minimization of Fuel Cost

The ESDO algorithm has been applied to find the minimum total fuel cost of generating units, which is needed to meet the system load demands at each time interval and satisfy the generator limits. The quadratic nature of the fuel cost function involves the optimization of active power output at each generator. ESDO served efficiently in balancing exploration and exploitation while determining an optimal set of generator outputs to minimize the fuel consumption cost over the power system. The solution of Case 1 illustrated that ESDO outperformed the traditional algorithms such as PSO and GA by determining a minimum total operating cost along with faster convergence and more reliable avoidance of local optima. Being able to dynamically explore the search space, ESDO found cost-effective generation schedules that minimized fuel expenditure overall and were equally applicable for large-scale power systems where fuel cost minimization is one of the major economic concerns.
The suggested algorithm attempts to decrease operating costs by decreasing the generator fuel consumption; hence, Equation (21) is used as the objective function for the OPF algorithm. In Table 4, the optimal values of the control parameters for ESDO are shown and compared with those of the other algorithms. These results show that the proposed algorithm reaches the lowest fuel cost most quickly, indicating its superior efficiency and effectiveness compared with all the other techniques. The convergence curves shown in Figure 4 express the performance of PSO, MVO, GOA, HHO, HO, and ESDO in reducing the fuel cost over the iterations. The y-axis is the fuel cost in USD/h, and the x-axis is the number of iterations. The black curve represents ESDO, which achieves the lowest fuel cost in the fewest iterations, demonstrating better efficiency and optimization efficacy. All the other nature-inspired optimization algorithms converge more slowly and result in a higher final cost than ESDO. HHO and HO show intermediate performance; they converge faster than PSO, MVO, and GOA but do not achieve as low a cost as ESDO. Overall, the obtained results show that ESDO has an edge over the other algorithms in reaching the optimal solution within a smaller number of iterations.
As previously mentioned in the pseudo-code of Algorithm 1, after completing the iterations, the algorithm returns the best and second-best energies found and the corresponding best and second-best positions, which are the alpha and beta dogs. Figure 5 presents the fuel cost values (the objective function values) for each algorithm and compares them with the alpha and beta dog values obtained by ESDO, which are highlighted in green and yellow, respectively. The figure shows that ESDO obtained the best values of the given objective function compared with the other algorithms.
Table 5 compares the fuel cost and the corresponding values of power loss and V.D. for all the techniques, including the best and second-best values of ESDO. The ESDO Alpha Dog is the best algorithm when cost and power loss are considered, but HHO is the best in voltage deviation. On the other hand, the ESDO Beta Dog also gives very good performance in terms of cost and power loss but has a high voltage deviation.

4.2. Case 2: Minimization of Power Loss

Case 2 targets the minimization of power loss, which arises from the resistance of the transmission lines carrying power from the generators to the loads. Power loss minimization directly enhances the efficiency of the system and strengthens the reliability of power delivery. ESDO optimizes the reactive power flows, transformer tap settings, and voltage control measures to reduce the active power losses in the network. In Case 2, ESDO reduces the transmission losses significantly compared with methods such as MVO and GOA. ESDO manages to balance global exploration and local exploitation, and it determines the voltage profiles and tap settings that give the minimum losses while maintaining system stability. This makes ESDO an important tool for enhancing energy efficiency in power systems, since a reduction in power losses implies more sustainable operation and reduced operational costs in the long run.
In this case, the proposed algorithm aims to reduce the power losses, which means that Equation (22) is selected as the objective function for the OPF algorithm. Table 6 shows the best control parameter values for ESDO and compares them with the other techniques. The presented results indicate that the suggested technique performs better in terms of achieving the lowest power loss, showing greater optimization capability in this case.
Figure 6 shows the convergence curve in Case 2, where ESDO achieves the lowest power loss (~4.5 MW) at the 25th iteration and continues to improve, indicating a strong capability for optimization. PSO and HO also perform well, soon converging at around 6 MW without additional improvement. MVO quickly converged to about 8 MW, but GOA and HHO showed a modest but steady decline in power loss, with a sharp drop around the 40th iteration in the case of HHO. In summary, ESDO’s superior performance demonstrated its ability to reduce power loss more efficiently than the other algorithms examined in this study.
Figure 7 shows that the ESDO variants deliver superior performance in terms of minimal power loss, with the lowest values of 4.62 MW and 4.67 MW for the ESDO Alpha Dog and ESDO Beta Dog, respectively, far better than the other tested algorithms. MVO has the highest power loss of 7.98 MW and is thus the least effective in this context. PSO, HO, HHO, and GOA show moderate performance, with power losses ranging from 5.74 MW to 6.15 MW. It can therefore be concluded that the ESDO variants are better at optimizing this specific case. Table 7 compares the power loss of all the algorithms and the corresponding values of cost and voltage drop, where the ESDO Alpha Dog has the least power loss of 4.617 MW, closely followed by the ESDO Beta Dog at 4.673 MW, demonstrating superior performance in reducing power loss compared with the other algorithms. While the ESDO variants (Alpha Dog and Beta Dog) perform best in power loss minimization, they are rather expensive in terms of cost and voltage deviation. PSO and HO provide a balance with lower costs and voltage stability, but at the cost of a higher power loss than the ESDO variants.

4.3. Case 3: Minimization of Voltage Drops

In this scenario, the proposed algorithm is intended to reduce the transmission voltage drop at each bus; hence, Equation (23) is chosen as the objective function for the OPF algorithm. Table 8 illustrates the optimal control parameter values for ESDO and how they compare with the other approaches. The findings reveal that the recommended strategy works better in terms of achieving the lowest voltage drop, demonstrating stronger optimization capabilities in this specific scenario. Figure 8 shows the convergence curve for Case 3. PSO, MVO, and HO all converge to stable voltage drops of approximately 0.20 p.u. within the first few iterations, showing fast optimization. HHO shows a gradual decrease, with a drop at the 40th iteration and stabilization at around 0.214 p.u. ESDO maintains a large voltage drop initially but drops rapidly after the 35th iteration, finally stabilizing at approximately 0.146 p.u., indicating improved voltage stability. GOA converges slowly and stabilizes at about 0.3 p.u. Overall, ESDO attains the best final voltage stability and is thus the most successful algorithm in this context.
Figure 9 is a bar chart of the voltage drop in p.u. for all algorithms, including the ESDO Beta Dog. The figure shows that the best performance is obtained by the ESDO Alpha Dog, with a voltage drop of 0.1466 p.u., indicating very good voltage stability. The PSO and ESDO Beta Dog performances are also strong, with voltage drops of 0.1834 p.u. and 0.1831 p.u., respectively. In contrast, MVO, HHO, and HO perform moderately, with voltage drops ranging from 0.2127 p.u. to 0.2146 p.u. GOA is the worst performer in this regard, yielding a voltage drop of 0.3048 p.u. In other words, the ESDO Alpha Dog attained the best values in minimizing the voltage drop in this scenario, followed by the ESDO Beta Dog and PSO. Table 9 compares the optimized voltage drop (V.D.) of the different optimization algorithms and the respective cost and power loss. The best performance in voltage stability is achieved by the ESDO Alpha Dog, which has the lowest voltage drop of 0.1466 p.u., while both ESDO variants achieve the lowest costs, with the ESDO Beta Dog reaching USD 816.5565/h. However, they give very high power losses of 9.550 MW and 9.459 MW. In contrast, HO has the lowest power loss of 6.470 MW, though its voltage drop is moderate and its cost is higher than that of the ESDO variants. On the other hand, GOA has the worst voltage drop of 0.3048 p.u. and a high power loss of 8.192 MW, while MVO is the most expensive in terms of cost, at USD 885.9045/h. Overall, the ESDO Alpha Dog and Beta Dog are very good in terms of voltage stability and cost efficiency, but at the expense of higher power losses.

4.4. Case 4: Minimization of Fuel Cost, Power Losses, and Voltage Drops

Previously, the proposed algorithm optimized a specific objective, such as fuel cost, power loss, or voltage drop. However, this may impact the other attributes, as can be seen in Table 5, Table 7 and Table 9. In Table 5, ESDO succeeded in achieving the lowest fuel cost but did not simultaneously achieve the lowest V.D. Table 7 shows that ESDO reaches the best value of power loss at the expense of a higher fuel cost and voltage drop. In the third scenario, the ESDO variants attained a very low V.D. compared with the other techniques, despite having the highest power loss, as shown in Table 9. To handle this issue, the approach suggested in Case 4 uses Equation (24) as a multi-objective function to update the parameters with the aim of reducing the fuel cost, power losses, and voltage drops simultaneously. Table 10 shows all the parameter values for all algorithms in this case.
Table 9. V.D. comparison.

Function | PSO | MVO | GOA | HHO | HO | ESDO Alpha Dog | ESDO Beta Dog
V.D. (p.u.) | 0.1834 | 0.2139 | 0.3048 | 0.2146 | 0.2127 | 0.1466 | 0.1831
Cost (USD/h) | 831.6215 | 885.9045 | 850.2231 | 834.5388 | 837.6748 | 816.9713 | 816.5565
Ploss (MW) | 6.97 | 5.718 | 8.192 | 7.487 | 6.470 | 9.550 | 9.459
Figure 10 illustrates the convergence curve of Case 4; this graph presents the objective function values over 50 iterations for the different optimization algorithms. The PSO curve has the highest initial objective function value, above 1180; it drops drastically between iterations 5 and 20 and then gradually stabilizes at around 1020 by the 50th iteration. MVO, represented by the cyan curve, stays low from the start, then decreases slowly with minor bumps before converging and stabilizing around 1020. GOA shows, in general, very slow convergence; it starts high and improves very little before stabilizing at a higher objective function value than the other algorithms. HHO starts at about 1140, drops sharply in the early iterations, quickly stabilizes, improves slightly after iteration 25, and finally stabilizes below 1040. HO starts high, drops rapidly within the first few iterations, and then converges smoothly to a value just above 1020. ESDO starts near 1080 with a gradual and steady decrease that results in the lowest objective function value of about 1010, showing the best overall performance in minimizing the objective function.
Table 11 provides a multi-objective comparison of the optimization algorithms based on cost (USD/h), power loss (MW), and voltage drop (p.u.). The ESDO Beta Dog yields the lowest cost at USD 819.8483/h with a low voltage drop of 0.2041 p.u., while its power loss is somewhat high at 8.01 MW. The ESDO Alpha Dog attains the lowest voltage drop (0.1851 p.u.) at a competitive cost of USD 825.7937/h, even though its power loss is a medium-high 7.65 MW. HO has the best power loss of 6.38 MW with a reasonable voltage drop of 0.2004 p.u., but at a higher cost of USD 824.1647/h compared with the ESDO Beta Dog. The results for PSO show a moderate power loss of 5.55 MW and a moderate cost of USD 873.1963/h, with a good voltage drop of 0.1899 p.u. The worst voltage drop is given by GOA, with a value of 0.4756 p.u. and a high power loss of 7.059 MW, but at a slightly lower cost of USD 827.2878/h. MVO shows a balanced performance, with a cost of USD 829.1404/h, a power loss of 7.22 MW, and a higher voltage drop of 0.2478 p.u. HHO is less cost-efficient, at USD 838.8745/h, and has a high voltage drop of 0.2681 p.u. with a power loss of 6.38 MW. Overall, the ESDO variants are cost-efficient and strong in voltage stability; however, they come with higher power losses than algorithms such as HO and PSO, which have better power loss metrics but higher costs.
Based on the results shown in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11 and the convergence curves for the four cases shown in Figure 4, Figure 6, Figure 8 and Figure 10, the performance of ESDO was compared with other well-known algorithms, namely PSO, MVO, GOA, HHO, and HO. A comparison of key characteristics of these algorithms is presented below in Table 12.
Table 12 shows the advantages of ESDO over PSO, MVO, GOA, HHO, and HO as follows: the ESDO algorithm, inspired by Egyptian stray dogs, offers fast convergence with an excellent dynamic balance between exploration and exploitation, which makes it highly effective at avoiding local optima. In contrast, PSO converges quickly but often stagnates in the final stages because of weaker exploration. MVO maintains a strong balance between exploration and exploitation with medium convergence speed, whereas GOA is effective in global exploration but converges more slowly than ESDO. HHO also offers fast convergence along with an adaptive, dynamic balance, but the mechanisms involved in ESDO are finer because they are behavior-based.
Finally, HO provides good adaptability, although without the dynamic balance offered by ESDO. All the algorithms of this class, including ESDO, are of medium computational complexity. There is an important difference, though: ESDO maintains a strong balance between exploration and exploitation and avoids local optima. This makes it especially fitting for complex optimization problems, and therefore more versatile and efficient than its counterparts.

5. Conclusions

A novel metaheuristic algorithm inspired by the behaviors of Egyptian stray dogs is introduced in this paper to solve the OPF problem. Four cases are studied, including minimizing fuel cost, power loss, and voltage drop each as a single objective function and minimizing all of them together as a multi-objective function.
ESDO demonstrates faster convergence, and a superior exploration–exploitation balance compared to the other algorithms, making it highly effective in dynamic and complex environments. Four objective functions are evaluated in this paper to show the performance of the ESDO technique in solving different OPF functions, summarized as follows:
  • Case 1: In terms of fuel cost minimization, the lowest fuel cost is obtained by ESDO in comparison with PSO, MVO, and GOA, which shows that ESDO is more efficient than the other algorithms in reducing the operating cost.
  • Case 2: ESDO’s Alpha Dog and Beta Dog solutions show the minimum power loss in comparison with the other algorithms, significantly outperforming them on this point and confirming ESDO’s capability to maintain low power loss for system stability.
  • Case 3: In voltage drop minimization, ESDO gave excellent voltage stability, presenting the lowest voltage drop compared to all the algorithms tested, proving its strength in dealing with voltage-related constraints.
  • Case 4: The convergence analysis shows that ESDO reduced the multi-objective function to its minimum value more consistently than the others and outperformed its competitors in terms of speed and final solution quality.
To summarize, these results clearly indicate that ESDO is a highly effective optimization technique for OPF, outperforming both classical and modern algorithms in terms of efficiency, accuracy, and robustness across various operational scenarios.

Author Contributions

Conceptualization, M.A. and H.Y.D.; methodology, M.A. and M.H.E.; software, M.A. and M.H.E.; validation, M.A., H.Y.D., M.F.M. and M.H.E.; formal analysis, H.Y.D. and M.H.E.; investigation, M.H.E. and H.Y.D.; resources, M.H.E., M.A., M.F.M. and H.Y.D.; writing—original draft preparation, M.H.E., M.A. and H.Y.D.; writing—review and editing, M.A., M.F.M. and H.Y.D.; supervision, M.A., H.Y.D. and M.F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Grigsby, L.L. Power System Stability and Control; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  2. Grainger, J.J.; Stevenson, W.D. Power System Analysis; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
  3. Woods, A.J.; Wollenberg, B.F. Power Generation, Operation and Control; John Wiley & Sons: New York, NY, USA, 1996. [Google Scholar]
  4. Mansouri, A.; Ammar, A.; el Magri, A.; Elaadouli, N.; Younes, E.K.; Lajouad, R.; Giri, F. An adaptive control strategy for integration of wind farm using a VSC-HVDC transmission system. Results Eng. 2024, 23, 102359. [Google Scholar] [CrossRef]
  5. Mansouri, A.; el Magri, A.; Lajouad, R.; Giri, F. Novel adaptive observer for HVDC transmission line: A new power management approach for renewable energy sources involving Vienna rectifier. IFAC J. Syst. Control. 2024, 27, 100255. [Google Scholar] [CrossRef]
  6. Berizzi, A.; Bovo, C.; Marannino, P. Multiobjective optimization techniques applied to modern power systems. In Proceedings of the Power Engineering Society Winter Meeting, Columbus, OH, USA, 28 January–1 February 2001. [Google Scholar]
  7. Hingorani, N.G.; Gyugyi, L. Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems; IEEE Press: New York, NY, USA, 2000. [Google Scholar]
  8. Zhang, X.P.; Rehtanz, C.; Pal, B. Flexible AC Transmission Systems: Modelling and Control; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  9. Glover, J.D.; Sarma, M.S.; Overbye, T.J. Power System Analysis and Design, 4th ed.; Thompson Corporation: Stamford, CT, USA, 2008; p. 255. [Google Scholar]
  10. Ahmad, S.; Asar, A.U. Reliability Enhancement of Electric Distribution Network Using Optimal Placement of Distributed Generation. Sustainability 2021, 13, 11407. [Google Scholar] [CrossRef]
  11. McDonald, J.D. Electric Power Substations Engineering; CRC Press: Boca Raton, FL, USA, 2003. [Google Scholar]
  12. Northcote-Green, J.; Wilson, R. Control and Automation of Electrical Power Distribution Systems; CRC Press: Boca Raton, FL, USA, 2006. [Google Scholar]
  13. Vedam, R.S.; Sarma, M.S. Power Quality; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  14. Machowski, J. Power System Dynamics and Stability; Prentice Hall: New Jersey, NY, USA, 1997. [Google Scholar]
  15. Carpentier, J. Contribution to the study of economic dispatching. Bull. Fr. Soc. Electr. 1962, 3, 431–447. [Google Scholar]
  16. Dommel, H.; Tinney, W. Optimal power flow solutions. IEEE Trans. Power Appar. Syst. 1968, PAS-87, 1866–1876. [Google Scholar] [CrossRef]
  17. Abido, M.A. Optimal power flow using particle swarm optimization. Electr. Power Energy Syst. 2002, 24, 563–571. [Google Scholar] [CrossRef]
  18. He, S.; Wen, J.Y.; Prempain, E.; Wu, Q.H.; Fitch, J.; Mann, S. An improved particle swarm optimization for optimal power flow. In Proceedings of the 2004 International Conference on Power System Technology, Singapore, 21–24 November 2004. [Google Scholar]
  19. Zhao, B.; Guo, C.X.; Cao, Y.J. Improved particle swarm optimization algorithm for OPF problems. In Proceedings of the IEEE/PES Power Systems Conference and Exposition, New York, NY, USA, 10–13 October 2004; pp. 233–238. [Google Scholar]
  20. Wang, C.-R.; Yuan, H.-J.; Huang, Z.-Q.; Zhang, J.-W.; Sun, C.-J. A modified particle swarm optimization algorithm and its application in optimal power flow problem. In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; Volume 5, pp. 2885–2889. [Google Scholar]
  21. Musirin, I.; Aminudin, N.; Othman, M.M.; Rahman, T.K.A. Particle Swarm Optimization Technique in Economic Power Dispatch Problems. In Proceedings of the 4th International Power Engineering and Optimization Conference, Shah Alam, Malaysia, 23–24 June 2010. [Google Scholar]
  22. Liang, R.H.; Tsai, S.R.; Chen, Y.T.; Tseng, W.T. Optimal power flow by a fuzzy based hybrid particle swarm optimization approach. Electr. Power Syst. Res. 2011, 81, 1466–1474. [Google Scholar] [CrossRef]
  23. Niknam, T.; Narimani, M.R.; Aghaei, J.; Azizipanah-Abarghooee, R. Improved particle swarm optimisation for multi-objective optimal power flow considering the cost, loss, emission and voltage stability index. IET Gener. Transm. Distrib. 2012, 11, 1012–1022. [Google Scholar] [CrossRef]
  24. Radosavljevi’c, J.; Klimenta, D.; Jevti´c, M.; Arsi´c, N. Optimal Power Flow Using a Hybrid Optimization Algorithm of Particle Swarm Optimization and Gravitational Search Algorithm. Electr. Power Compon. Syst. 2015, 43, 1958–1970. [Google Scholar] [CrossRef]
  25. Lai, L.L.; Ma, J.T.; Yokoyama, R.; Zhao, M. Improved genetic algorithms for optimal power flow under both normal and contingent operation states. Int. J. Electr. Power Energy Syst. 1997, 19, 287–292. [Google Scholar] [CrossRef]
  26. Bakirtzis, A.G.; Biskas, P.N.; Zoumas, C.E.; Petridis, V. Optimal power flow by enhanced genetic algorithm. IEEE Trans. Power Syst. 2002, 17, 229–236. [Google Scholar]
  27. Kumari, M.S.; Maheswarapu, S. Enhanced Genetic Algorithm based computation technique for multi-objective Optimal Power Flow solution. Int. J. Electr. Power Energy Syst. 2010, 32, 736–742. [Google Scholar] [CrossRef]
  28. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Applic. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  29. Mirjalili, S. The Ant Lion Optimizer; School of Information and Communication Technology, Griffith University: Nathan, Australia, 2015. [Google Scholar]
  30. El-Fergany, A.A.; Hasanien, H.M. Single and Multi-objective Optimal Power Flow Using Grey Wolf Optimizer and Differential Evolution Algorithms. Electr. Power Compon. Syst. 2015, 43, 1548–1559. [Google Scholar] [CrossRef]
  31. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  32. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Applic. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  33. Abdelsalam, M.; Diab, H.Y. Optimal Coordination of DOC Relays Incorporated into a Distributed Generation-Based Micro-Grid Using a Meta-Heuristic MVO Algorithm. Energies 2019, 12, 4115. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Jangir, P.; Saremi, S. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2017, 46, 79–95. [Google Scholar] [CrossRef]
  35. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.D.S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 2016, 47, 106–119. [Google Scholar] [CrossRef]
  36. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  37. Gad, Y.; Diab, H.; Abdelsalam, M.; Galal, Y. Smart Energy Management System of Environmentally Friendly Microgrid Based on Grasshopper Optimization Technique. Energies 2020, 13, 5000. [Google Scholar] [CrossRef]
  38. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Hammouri, A.I.; Faris, H.; Ala’M, A.-Z.; Mirjalili, S. Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems. Knowl.-Based Syst. 2018, 145, 25–45. [Google Scholar] [CrossRef]
  39. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus Optimization Algorithm: A Novel Nature-Inspired Optimization Algorithm. Sci. Rep. 2024, 14, 5032. [Google Scholar] [CrossRef] [PubMed]
  40. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  41. Bhattacharya, A.; Chattopadhyay, P.K. Application of biogeography-based optimisation to solve different optimal power flow problems. IET Gener. Transm. Distrib. 2011, 5, 70–80. [Google Scholar] [CrossRef]
  42. Duman, S.; Güvenc, U.; Sönmez, Y.; Yörükeren, N. Optimal power flow using gravitational search algorithm. Energy Convers. Manag. 2012, 59, 86–95. [Google Scholar] [CrossRef]
  43. Bhattacharya, A.; Roy, P.K. Solution of multi-objective optimal power flow using gravitational search algorithm. IET Gener. Transm. Distrib. 2012, 6, 751–763. [Google Scholar] [CrossRef]
  44. Jahan, M.S.; Amjady, N. Solution of large-scale security constrained optimal power flow by a new bi-level optimisation approach based on enhanced gravitational search algorithm. IET Gener. Transm. Distrib. 2013, 7, 1481–1491. [Google Scholar] [CrossRef]
  45. El-Sehiemy, R.A.; Shafiq, M.B.; Azmy, A.M. Multi-phase search optimisation algorithm for constrained optimal power flow problem. Int. J. Bio-Inspired Comput. 2014, 6, 275–289. [Google Scholar] [CrossRef]
  46. He, X.; Wang, W.; Jiang, J.; Xu, L. An improved artificial bee colony algorithm and its application to multi-objective optimal power flow. Energies 2015, 8, 2412–2437. [Google Scholar] [CrossRef]
  47. Arul, R.; Ravi, G.; Velusami, S. Solving optimal power flow problems using chaotic self-adaptive differential harmony search algorithm. Electr. Power Compon. Syst. 2013, 41, 782–805. [Google Scholar] [CrossRef]
  48. Lan, Z.; He, Q.; Jiao, H.; Yang, L. An Improved Equilibrium Optimizer for Solving Optimal Power Flow Problem. Sustainability 2022, 14, 4992. [Google Scholar] [CrossRef]
  49. Warid, W.; Hizam, H.; Mariun, N.; Abdul-Wahab, N.I. Optimal power flow using the Jaya algorithm. Energies 2016, 9, 678. [Google Scholar] [CrossRef]
  50. Bouchekara, H.R.E.H.; Abido, M.A. Optimal power flow using differential search algorithm. Electr. Power Compon. Syst. 2014, 42, 1683–1699. [Google Scholar] [CrossRef]
  51. Christy, A.A.; Raj, P.A.D.V. Adaptive biogeography based predator-prey optimization technique for optimal power flow. Int. J. Electr. Power Energy Syst. 2014, 62, 344–352. [Google Scholar] [CrossRef]
  52. Severino, E.R.; Di Silvestre, M.L.; Badalamenti, R.; Nguyen, N.Q.; Guerrero, J.M.; Meng, L. Optimal power flow in islanded microgrids using a simple distributed algorithm. Energies 2015, 8, 11493–11514. [Google Scholar] [CrossRef]
  53. Ezugwu, A.E.; Agushaka, J.O.; Abualigah, L.; Mirjalili, S.; Gandomi, A.H. Prairie Dog Optimization Algorithm. Neural Comput. Appl. 2022, 34, 20017–20065. [Google Scholar] [CrossRef]
  54. Lenin, K. Real power loss reduction by German shepherd dog, explore–save and line up search optimization algorithms. Ain Shams Eng. J. 2022, 13, 101688. [Google Scholar] [CrossRef]
  55. Martinez, E.; Cesário, C.S.; Dias, J.V.; Silva, I.O.; Souza, V.B. Community perception and attitudes about the behavior of stray dogs in a college campus. Acta Vet. Bras. 2018, 12, 10–16. [Google Scholar] [CrossRef]
  56. Alsac, O.; Stott, B. Optimal load flow with steady-state security. IEEE Trans. Power Appar. Syst. 1974, 93, 745–751. [Google Scholar] [CrossRef]
  57. Christie, R.D. Power Systems Test Case Archive; University of Washington, Department of Electrical Engineering: Seattle, WA, USA, 1993. [Google Scholar]
  58. Available online: http://labs.ece.uw.edu/pstca/pf30/pg_tca30bus.htm (accessed on 1 June 2024).
  59. Ismail, N.A.M.; Zin, A.A.M.; Khairuddin, A.; Khokhar, S. A comparison of voltage stability indices. In Proceedings of the 2014 IEEE 8th International Power Engineering and Optimization Conference (PEOCO2014), Langkawi, Malaysia, 24–25 March 2014; pp. 30–34. [Google Scholar]
Figure 1. Stray dogs’ common behaviors.
Figure 2. Egyptian Stray Dog Algorithm flowchart.
Figure 3. IEEE 30-bus single-line network.
Figure 4. Case 1 convergence curve.
Figure 5. Fuel cost for different algorithms.
Figure 6. Case 2 convergence curve.
Figure 7. Power loss for different algorithms.
Figure 8. Case 3 convergence curve.
Figure 9. Voltage deviation (V.D.) for different algorithms.
Figure 10. Case 4 convergence curve.
Table 1. Generator data for IEEE 30-bus test system.
Bus No. | PGmin (p.u.) | PGmax (p.u.) | QGmin (p.u.) | QGmax (p.u.) | Cost Factor a | Cost Factor b | Cost Factor c
1 | 0.5 | 2.5 | −0.2 | 2 | 0 | 200 | 37.5
2 | 0.2 | 0.8 | −0.2 | 1 | 0 | 175 | 175
5 | 0.15 | 0.5 | −0.15 | 0.8 | 0 | 100 | 625
8 | 0.1 | 0.35 | −0.15 | 0.6 | 0 | 325 | 83.4
11 | 0.1 | 0.3 | −0.1 | 0.5 | 0 | 300 | 250
13 | 0.12 | 0.4 | −0.15 | 0.6 | 0 | 300 | 250
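To make the cost-factor columns concrete, the short MATLAB sketch below evaluates the fuel-cost objective implied by Table 1. It is a minimal illustration, not the authors' code: it assumes the standard quadratic model Ci = ai + bi·PGi + ci·PGi² with PGi expressed in p.u., and the dispatch vector used here is purely illustrative rather than taken from the result tables.

% Minimal sketch (not the authors' code): total fuel cost from the Table 1
% cost factors, assuming Ci = ai + bi*PGi + ci*PGi^2 with PGi in p.u.
% Generator order: buses 1, 2, 5, 8, 11, 13.
a = [0    0    0    0    0    0];
b = [200  175  100  325  300  300];
c = [37.5 175  625  83.4 250  250];

PG = [1.60 0.52 0.22 0.23 0.16 0.18];   % illustrative dispatch in p.u.

fuelCost = sum(a + b.*PG + c.*PG.^2);   % USD/h
fprintf('Total fuel cost = %.4f USD/h\n', fuelCost);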
Table 2. Load data for IEEE 30-bus test system.
Bus Tag | Active Power P (p.u.) | Reactive Power Q (p.u.)
1 | 0 | 0
2 | 0.217 | 0.127
3 | 0.024 | 0.012
4 | 0.076 | 0.016
5 | 0.942 | 0.190
6 | 0 | 0
7 | 0.228 | 0.109
8 | 0.300 | 0.300
9 | 0 | 0
10 | 0.058 | 0.020
11 | 0 | 0
12 | 0.112 | 0.075
13 | 0 | 0
14 | 0.062 | 0.016
15 | 0.082 | 0.025
16 | 0.035 | 0.018
17 | 0.090 | 0.058
18 | 0.032 | 0.009
19 | 0.095 | 0.034
20 | 0.022 | 0.007
21 | 0.175 | 0.112
22 | 0 | 0
23 | 0.032 | 0.016
24 | 0.087 | 0.067
25 | 0 | 0
26 | 0.035 | 0.023
27 | 0 | 0
28 | 0 | 0
29 | 0.024 | 0.009
30 | 0.106 | 0.019
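As a quick consistency check on the per-unit load data, the snippet below sums the Table 2 columns. The 100 MVA system base is assumed (the usual choice for the IEEE 30-bus system); it is not stated in the table itself.

% Minimal check (not from the paper): total demand implied by Table 2,
% assuming a 100 MVA system base for the p.u. values.
Pd = [0 0.217 0.024 0.076 0.942 0 0.228 0.300 0 0.058 0 0.112 0 0.062 ...
      0.082 0.035 0.090 0.032 0.095 0.022 0.175 0 0.032 0.087 0 0.035 ...
      0 0 0.024 0.106];
Qd = [0 0.127 0.012 0.016 0.190 0 0.109 0.300 0 0.020 0 0.075 0 0.016 ...
      0.025 0.018 0.058 0.009 0.034 0.007 0.112 0 0.016 0.067 0 0.023 ...
      0 0 0.009 0.019];
fprintf('Total load: %.1f MW, %.1f MVAr\n', 100*sum(Pd), 100*sum(Qd));  % 283.4 MW, 126.2 MVAr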
Table 3. Transmission network line data for IEEE 30-bus test system.
Line Tag | From Bus | To Bus | R (p.u.) | X (p.u.) | B (p.u.) | Tap Setting
1 | 1 | 2 | 0.0192 | 0.0575 | 0.0264 | –
2 | 1 | 3 | 0.0452 | 0.1852 | 0.0204 | –
3 | 2 | 4 | 0.0570 | 0.1737 | 0.0184 | –
4 | 3 | 4 | 0.0132 | 0.0379 | 0.0042 | –
5 | 2 | 5 | 0.0472 | 0.1983 | 0.0209 | –
6 | 2 | 6 | 0.0581 | 0.1763 | 0.0187 | –
7 | 4 | 6 | 0.0119 | 0.0414 | 0.0045 | –
8 | 5 | 7 | 0.0460 | 0.1160 | 0.0102 | –
9 | 6 | 7 | 0.0267 | 0.0820 | 0.0085 | –
10 | 6 | 8 | 0.0120 | 0.0420 | 0.0045 | –
11 | 6 | 9 | 0 | 0.2080 | 0 | 1.078
12 | 6 | 10 | 0 | 0.5560 | 0 | 1.069
13 | 9 | 11 | 0 | 0.2080 | 0 | –
14 | 9 | 10 | 0 | 0.1100 | 0 | –
15 | 4 | 12 | 0 | 0.2560 | 0 | 1.032
16 | 12 | 13 | 0 | 0.1400 | 0 | –
17 | 12 | 14 | 0.1231 | 0.2559 | 0 | –
18 | 12 | 15 | 0.0662 | 0.1304 | 0 | –
19 | 12 | 16 | 0.0945 | 0.1987 | 0 | –
20 | 14 | 15 | 0.2210 | 0.1997 | 0 | –
21 | 16 | 17 | 0.0824 | 0.1932 | 0 | –
22 | 15 | 18 | 0.1070 | 0.2185 | 0 | –
23 | 18 | 19 | 0.0639 | 0.1292 | 0 | –
24 | 19 | 20 | 0.0340 | 0.0680 | 0 | –
25 | 10 | 20 | 0.0936 | 0.2090 | 0 | –
26 | 10 | 17 | 0.0324 | 0.0845 | 0 | –
27 | 10 | 21 | 0.0348 | 0.0749 | 0 | –
28 | 10 | 22 | 0.0727 | 0.1499 | 0 | –
29 | 21 | 22 | 0.0116 | 0.0236 | 0 | –
30 | 15 | 23 | 0.1000 | 0.2020 | 0 | –
31 | 22 | 24 | 0.1150 | 0.1790 | 0 | –
32 | 23 | 24 | 0.1320 | 0.2700 | 0 | –
33 | 24 | 25 | 0.1885 | 0.3292 | 0 | –
34 | 25 | 26 | 0.2544 | 0.3800 | 0 | –
35 | 25 | 27 | 0.1093 | 0.2087 | 0 | –
36 | 28 | 27 | 0 | 0.3960 | 0 | 1.068
37 | 27 | 29 | 0.2198 | 0.4153 | 0 | –
38 | 27 | 30 | 0.3202 | 0.6027 | 0 | –
39 | 29 | 30 | 0.2399 | 0.4533 | 0 | –
40 | 8 | 28 | 0.0636 | 0.2000 | 0.0214 | –
41 | 6 | 28 | 0.0169 | 0.0599 | 0.0065 | –
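The branch parameters in Table 3 enter the power-flow model through the bus admittance matrix. The sketch below shows one conventional way to assemble it from the R, X, B, and tap columns; it is a minimal illustration, not the authors' implementation, and it assumes that the B column is the shunt charging susceptance lumped at each line end and that the off-nominal tap acts on the "from" side of the branch.

% Minimal sketch (not the authors' implementation): bus admittance matrix
% from the Table 3 branch list. Assumptions: B is the shunt charging
% susceptance at each line end; the tap ratio acts on the "from" side.
% Columns: [from  to  R  X  B  tap]; tap = 1 where Table 3 lists no value.
branches = [ 1  2  0.0192  0.0575  0.0264  1;
             6  9  0       0.2080  0       1.078 ];   % two example rows only

nbus = 30;
Ybus = zeros(nbus);
for k = 1:size(branches,1)
    f  = branches(k,1);  t = branches(k,2);
    ys = 1/(branches(k,3) + 1j*branches(k,4));  % series admittance
    bc = 1j*branches(k,5);                      % shunt charging at each end
    a  = branches(k,6);                         % off-nominal tap ratio
    Ybus(f,f) = Ybus(f,f) + (ys + bc)/a^2;
    Ybus(t,t) = Ybus(t,t) + ys + bc;
    Ybus(f,t) = Ybus(f,t) - ys/a;
    Ybus(t,f) = Ybus(t,f) - ys/a;
end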
Table 4. Case 1 results.
Parameter | PSO | MVO | GOA | HHO | HO | ESDO
PG2 (MW) | 38.14 | 57.005 | 42.448 | 35.455 | 37.748 | 52.466
PG5 (MW) | 26.2 | 25.697 | 35.154 | 25.514 | 24.486 | 21.587
PG8 (MW) | 21.56 | 10 | 22.275 | 14.132 | 20.141 | 23.425
PG11 (MW) | 20.71 | 14.655 | 12.859 | 13.038 | 19.536 | 16.481
PG13 (MW) | 19.17 | 23.882 | 14.988 | 22.873 | 18.647 | 17.671
VG1 (p.u.) | 1.0508 | 1.0659 | 1.0306 | 1.0799 | 1.0724 | 1.0662
VG2 (p.u.) | 1.0445 | 1.0469 | 1.0125 | 1.0677 | 1.0592 | 1.0553
VG5 (p.u.) | 0.9857 | 1.0134 | 1.0238 | 1.0359 | 1.0211 | 1.0328
VG8 (p.u.) | 0.9985 | 1.0236 | 1.0068 | 1.0292 | 1.05 | 1.027
VG11 (p.u.) | 1.0062 | 1.0886 | 1.0319 | 1.0306 | 1.0587 | 1.0558
VG13 (p.u.) | 0.9894 | 1.0223 | 1.0301 | 1.0247 | 1.0478 | 1.0157
T11 (p.u.) | 0.9539 | 0.963 | 1.0714 | 0.9614 | 1.0717 | 1.0394
T12 (p.u.) | 1.0542 | 0.9984 | 1.0145 | 1.0057 | 1.0637 | 0.9512
T15 (p.u.) | 1.0119 | 1.0838 | 1.0184 | 1.0702 | 0.9866 | 0.9574
T36 (p.u.) | 0.9229 | 1.0111 | 0.9149 | 0.9542 | 0.9792 | 0.9945
Qc10 (MVAR) | 2.49 | 3.0136 | 0.9194 | 1.5328 | 1.6327 | 5.2471
Qc12 (MVAR) | 1.95 | 2.5849 | 3.0701 | 2.3717 | 1.5815 | 4.7483
Qc15 (MVAR) | 1.44 | 2.9703 | 2.626 | 0.6234 | 0.7791 | 1.54
Qc17 (MVAR) | 1.7 | 3.8087 | 2.7184 | 0.7575 | 0.7785 | 4.2336
Qc20 (MVAR) | 4.05 | 0.6324 | 2.1094 | 2.3747 | 1.0707 | 5.4377
Qc21 (MVAR) | 3.37 | 0.1162 | 1.7526 | 0.2875 | 1.4405 | 2.3971
Qc23 (MVAR) | 2.66 | 1.5749 | 4.1688 | 2.5882 | 3.5704 | 6.5002
Qc24 (MVAR) | 0.77 | 2.9851 | 2.1707 | 0.6439 | 1.1696 | 2.6493
Qc29 (MVAR) | 2.68 | 3.0164 | 1.1151 | 1.3092 | 2.5822 | 1.3045
Cost (USD/h) | 812.2991 | 811.2007 | 819.1972 | 810.3851 | 808.1815 | 804.8769
Ploss (MW) | 9.05 | 8.7858 | 9.0376 | 9.394 | 8.75 | 8.360
V.D. (p.u.) | 0.4778 | 0.39458 | 0.43427 | 0.39237 | 0.4276 | 0.42463
Table 5. Fuel cost comparison.
Function | PSO | MVO | GOA | HHO | HO | ESDO (Alpha Dog) | ESDO (Beta Dog)
Cost (USD/h) | 812.2991 | 811.2007 | 819.1972 | 810.3851 | 808.1815 | 804.8769 | 805.6415
Ploss (MW) | 9.05 | 8.7858 | 9.0376 | 9.394 | 8.75 | 8.360 | 8.132
V.D. (p.u.) | 0.4778 | 0.39458 | 0.43427 | 0.39237 | 0.4276 | 0.42463 | 0.58534
Table 6. Case 2 results.
Parameter | PSO | MVO | GOA | HHO | HO | ESDO
PG2 (MW) | 41.51 | 68.955 | 80 | 53.01 | 67.351 | 43.308
PG5 (MW) | 34.8 | 19.039 | 44.67 | 50 | 22.946 | 42.242
PG8 (MW) | 16.87 | 21.288 | 14.539 | 25.694 | 33.611 | 31.505
PG11 (MW) | 30 | 17.556 | 24.313 | 27.655 | 29.986 | 28.36
PG13 (MW) | 22.26 | 38.306 | 26.47 | 17.609 | 40 | 37.615
VG1 (p.u.) | 1.0658 | 1.029 | 1.0175 | 1.0887 | 1.0306 | 1.0586
VG2 (p.u.) | 1.0409 | 1.0121 | 1.0116 | 1.0747 | 1.0174 | 1.0471
VG5 (p.u.) | 1.0253 | 0.9607 | 0.9913 | 1.0047 | 0.9854 | 1.0378
VG8 (p.u.) | 1.0175 | 0.9924 | 0.9844 | 1.013 | 1.0066 | 1.0375
VG11 (p.u.) | 0.9756 | 0.9795 | 0.9584 | 1.0681 | 1.1 | 1.0823
VG13 (p.u.) | 1.0696 | 1.0705 | 1.0675 | 1.0647 | 1.09 | 1.0594
T11 (p.u.) | 1.0103 | 0.9587 | 1.0051 | 0.9821 | 1.0739 | 1.0137
T12 (p.u.) | 1.0133 | 1.0967 | 0.9994 | 0.9542 | 1.0819 | 0.9866
T15 (p.u.) | 1.0387 | 1.0026 | 1.0334 | 1.02 | 0.9831 | 1.0394
T36 (p.u.) | 0.9992 | 0.9637 | 0.9823 | 0.9462 | 1.0258 | 1.0225
Qc10 (MVAR) | 1.87 | 4.0627 | 2.0029 | 4.1989 | 4.8259 | 6.5128
Qc12 (MVAR) | 1.11 | 3.222 | 1.143 | 2.2796 | 3.773 | 5.7736
Qc15 (MVAR) | 3.4 | 0.5512 | 4.9276 | 2.229 | 1.3436 | 4.0751
Qc17 (MVAR) | 3.06 | 2.7073 | 4.757 | 2.4167 | 2.7258 | 4.3258
Qc20 (MVAR) | 3.69 | 3.2085 | 2.8317 | 1.151 | 4.9092 | 4.4399
Qc21 (MVAR) | 2.73 | 0.8877 | 3.2476 | 0.281 | 4.956 | 5.357
Qc23 (MVAR) | 1.51 | 3.8854 | 3.0158 | 3.5741 | 5 | 5.0097
Qc24 (MVAR) | 2.93 | 2.7022 | 3.5583 | 5 | 3.9178 | 2.995
Qc29 (MVAR) | 2.55 | 0.115 | 4.9797 | 0.5129 | 3.2519 | 7.6933
Ploss (MW) | 6.150 | 7.980 | 5.833 | 5.753 | 5.899 | 4.617
Cost (USD/h) | 830.8549 | 842.7470 | 890.4579 | 879.9424 | 869.1381 | 877.7591
V.D. (p.u.) | 0.4189 | 0.5270 | 0.5001 | 0.6740 | 0.4235 | 0.7154
Table 7. Power loss comparison.
Function | PSO | MVO | GOA | HHO | HO | ESDO (Alpha Dog) | ESDO (Beta Dog)
Ploss (MW) | 6.150 | 7.980 | 5.833 | 5.753 | 5.899 | 4.617 | 4.673
Cost (USD/h) | 830.8549 | 842.7470 | 890.4579 | 879.9424 | 869.1381 | 877.7591 | 875.6038
V.D. (p.u.) | 0.4189 | 0.5270 | 0.5001 | 0.6740 | 0.4235 | 0.7154 | 0.8116
Table 8. Case 3 results.
Parameter | PSO | MVO | GOA | HHO | HO | ESDO
PG2 (MW) | 47.44 | 20.155 | 23.466 | 54.822 | 58.048 | 26.137
PG5 (MW) | 38.3 | 45.481 | 45.129 | 38.159 | 30.547 | 21.297
PG8 (MW) | 21.48 | 34.892 | 22.86 | 12.108 | 31.922 | 22.968
PG11 (MW) | 17.59 | 21.796 | 13.901 | 26.986 | 20.923 | 22.007
PG13 (MW) | 24.36 | 36.976 | 12 | 16.792 | 30.202 | 18.583
VG1 (p.u.) | 1.0583 | 1.0424 | 1.0684 | 1.0576 | 1.0512 | 1.0335
VG2 (p.u.) | 1.0482 | 1.0177 | 1.0376 | 1.0546 | 1.0384 | 1.0169
VG5 (p.u.) | 1.0018 | 0.9972 | 0.9941 | 1.0068 | 1.0184 | 1.0043
VG8 (p.u.) | 1.0157 | 1.0058 | 1.0175 | 1.0039 | 0.9931 | 0.9941
VG11 (p.u.) | 1.0455 | 1.0079 | 1.0411 | 1.0275 | 1.0052 | 1.0166
VG13 (p.u.) | 1.0051 | 1.0466 | 1.0641 | 0.9961 | 1.0341 | 1.0185
T11 (p.u.) | 0.947 | 0.9758 | 1.025 | 0.9441 | 0.9376 | 0.9886
T12 (p.u.) | 0.9752 | 0.9998 | 0.9949 | 0.9436 | 1.0248 | 0.9378
T15 (p.u.) | 0.9776 | 0.923 | 1.0639 | 1.0097 | 0.9461 | 0.9685
T36 (p.u.) | 0.9971 | 0.9624 | 0.9475 | 0.9489 | 0.9557 | 0.9528
Qc10 (MVAR) | 2.43 | 1.7223 | 2.0907 | 3.5273 | 1.4147 | 2.3706
Qc12 (MVAR) | 1.07 | 2.2949 | 4.2781 | 2.7519 | 3.5379 | 0.8577
Qc15 (MVAR) | 2.77 | 1.2178 | 2.8611 | 4.8386 | 2.1059 | 4.3805
Qc17 (MVAR) | 2.44 | 0.536 | 2.6467 | 1.3469 | 4.1553 | 4.0145
Qc20 (MVAR) | 1.75 | 2.0567 | 2.9978 | 1.7484 | 1.1842 | 4.9584
Qc21 (MVAR) | 1.77 | 4.9961 | 0.9558 | 4.9585 | 3.6636 | 3.9269
Qc23 (MVAR) | 5 | 1.5199 | 2.2352 | 2.6712 | 2.0054 | 3.3669
Qc24 (MVAR) | 1.35 | 1.5641 | 0.2538 | 2.8402 | 2.3734 | 5.2544
Qc29 (MVAR) | 5 | 4.2558 | 4.553 | 0.1266 | 3.7256 | 1.9815
V.D. (p.u.) | 0.1834 | 0.2139 | 0.3048 | 0.2146 | 0.2127 | 0.1466
Cost (USD/h) | 831.6215 | 885.9045 | 850.2231 | 834.5388 | 837.6748 | 816.9713
Ploss (MW) | 6.97 | 5.718 | 8.192 | 7.487 | 6.470 | 9.550
Table 10. Case 4 results.
Parameter | PSO | MVO | GOA | HHO | HO | ESDO
PG2 (MW) | 74 | 37.679 | 52.834 | 40.925 | 49.328 | 61.185
PG5 (MW) | 43 | 27.4 | 31.536 | 34.88 | 23.033 | 25.226
PG8 (MW) | 27.98 | 27.57 | 25.644 | 29.799 | 12.381 | 25.964
PG11 (MW) | 20.46 | 16.724 | 23.034 | 19.042 | 27.337 | 17.551
PG13 (MW) | 22.17 | 36.527 | 24.915 | 33.37 | 31.2 | 29.078
VG1 (p.u.) | 1.0165 | 1.0568 | 1.0586 | 1.0425 | 1.0399 | 1.0254
VG2 (p.u.) | 1.0076 | 1.0361 | 1.0348 | 1.0235 | 1.0286 | 1.0116
VG5 (p.u.) | 1.0044 | 1.0007 | 0.9678 | 0.9978 | 1.0229 | 1.0229
VG8 (p.u.) | 0.998 | 1.0041 | 1.0071 | 1.0205 | 1.0074 | 0.9996
VG11 (p.u.) | 1.06 | 1.0858 | 1.0896 | 1.0505 | 1.0512 | 1.058
VG13 (p.u.) | 1.0625 | 1.0431 | 1.0375 | 1.014 | 1.0202 | 1.0205
T11 (p.u.) | 0.9904 | 0.9408 | 1.0332 | 0.9831 | 0.9333 | 0.9728
T12 (p.u.) | 0.9657 | 1.0889 | 0.9662 | 0.9714 | 1.0403 | 0.9575
T15 (p.u.) | 1.032 | 1.0504 | 0.9456 | 0.9606 | 0.9977 | 0.9811
T36 (p.u.) | 0.952 | 0.9525 | 0.932 | 0.9939 | 0.9345 | 0.9795
Qc10 (MVAR) | 3.78 | 3.1662 | 0.0399 | 2.1737 | 2.9341 | 1.4981
Qc12 (MVAR) | 4.89 | 1.3741 | 3.112 | 0.6344 | 2.9367 | 3.3776
Qc15 (MVAR) | 0.45 | 4.0924 | 4.095 | 0.8095 | 3.8171 | 1.5789
Qc17 (MVAR) | 2.92 | 3.4921 | 2.8313 | 1.5141 | 1.3053 | 3.6618
Qc20 (MVAR) | 4.41 | 3.3257 | 3.4526 | 3.1292 | 3.7579 | 2.1352
Qc21 (MVAR) | 1.82 | 2.1631 | 2.4151 | 2.6551 | 0.9492 | 1.9634
Qc23 (MVAR) | 1.64 | 0.2003 | 0.749 | 2.9791 | 0.0918 | 2.5902
Qc24 (MVAR) | 0.68 | 0.4166 | 3.8998 | 3.097 | 3.0236 | 2.8996
Qc29 (MVAR) | 3.87 | 3.3593 | 4.5961 | 1.0288 | 0.0338 | 3.7976
Cost (USD/h) | 873.1963 | 829.1404 | 827.2878 | 838.8745 | 824.1647 | 825.7937
Ploss (MW) | 5.55 | 7.22 | 7.059 | 6.38 | 8.13 | 7.65
V.D. (p.u.) | 0.1899 | 0.2478 | 0.4756 | 0.2681 | 0.2004 | 0.1851
Table 11. Multi-objective comparison.
Function | PSO | MVO | GOA | HHO | HO | ESDO (Alpha Dog) | ESDO (Beta Dog)
Cost (USD/h) | 873.1963 | 829.1404 | 827.2878 | 838.8745 | 824.1647 | 825.7937 | 819.8483
Ploss (MW) | 5.55 | 7.22 | 7.059 | 6.38 | 8.13 | 7.65 | 8.01
V.D. (p.u.) | 0.1899 | 0.2478 | 0.4756 | 0.2681 | 0.2004 | 0.1851 | 0.2041
Table 12. Algorithms key characteristics comparison.
Algorithm | Inspiration | Convergence | Exploration–Exploitation Balance | Computational Complexity | Susceptibility to Local Optima
PSO | Bird flocking/fish schooling | Fast, but may stagnate | Good, but weak in later stages | Low | High
MVO | Multiverse theory | Moderate | Strong balance | Medium | Low
GOA | Grasshopper swarming behavior | Moderate, but slow | Strong global exploration | Medium | Moderate
HHO | Harris hawks hunting strategy | Fast | Adaptive and dynamic | Medium | Low
HO | Hippopotamus water–land behavior | Good | Strong adaptability | Medium | Moderate
ESDO | Egyptian stray dogs’ behavior | Fast | Excellent dynamic balance | Medium | Low
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
