Article

Reliability Based Genetic Algorithm Applied to Allocation of Fiber Optics Links for Power Grid Automation

by Henrique Pires Corrêa *, Rafael Ribeiro de Carvalho Vaz, Flávio Henrique Teles Vieira and Sérgio Granato de Araújo

Information and Communication Engineering Group—INCOMM, School of Electrical, Mechanical and Computer Engineering, Federal University of Goiás, Goiânia 74605-010, Brazil

* Author to whom correspondence should be addressed.
Energies 2019, 12(11), 2039; https://doi.org/10.3390/en12112039
Submission received: 2 May 2019 / Revised: 21 May 2019 / Accepted: 25 May 2019 / Published: 28 May 2019
(This article belongs to the Section F: Electrical Engineering)

Abstract:
In this work, we address the problem of allocating optical links for connecting automatic circuit breakers in a utility power grid. We consider the application of multi-objective optimization for improving costs and power network reliability. To this end, we propose a novel heuristic for attributing reliability values to the optical links, which makes the optimization converge to network topologies in which nodes with higher power outage indexes receive greater communication resources. We combine our heuristic with a genetic algorithm in order to solve the optimization problem. In order to validate the proposed method, simulations are carried out with real data from the local utility. The obtained results validate the allocation heuristic and show that the proposed algorithm outperforms gradient descent optimization in terms of the provided Pareto front.

1. Introduction

The increasingly stringent requirements imposed on energy utilities imply greater investments by these companies in their main infrastructure. Such requirements are related to quality of service regulation by supervising agencies, which forces companies to invest in technologies that enable the operation of their grids according to established standards. Among the parameters that utilities are required to observe are continuity indexes, which measure quality of service in terms of the frequency and duration of blackout events registered in the distribution grid [1,2].
The attainment of adequate continuity indexes is associated with investments in automation mechanisms for the distribution grid. Intensely automated grids belong to the smart grid classification, which, among other characteristics, refers to power grids capable of operation and fault clearing during complex contingency situations, without the need for human assistance. In particular, the automation of breakers is very important, since it allows rapid supply reestablishment after the correction of faults along the distribution grid [3].
In this work, we consider the problem of optimizing the allocation of fiber optics links for connecting automatic breakers in a distribution grid. The optimization objective is to attain high reliability for breaker operation and, thus, improvement of supply quality perceived by consumers connected to the grid. It is assumed that the breakers communicate via fiber optics, whose allocation must reflect a compromise between reliability and cost. Results concerning the number of feasible solutions are established, suggesting the use of a genetic algorithm as adequate for the problem. A proposed allocation heuristic is incorporated into the objective function evaluation during the optimization process. Solution constraints that contemplate limitations associated with the physical problem are also considered.
This paper is organized as follows. In Section 2, we briefly review previous works regarding reliability optimization of automated power grids. In Section 3, we discuss the objective functions and constraints inherent to the problem. A simple method for estimating reliability with significantly reduced computational complexity is proposed. Furthermore, we propose a heuristic for attributing reliability values to optical links in order to improve the distribution of reliability gains throughout the power grid. In Section 4, we contemplate possible solution methods for the problem and conclude that using a genetic algorithm is the best option. In Section 5, we present results obtained in a case study, which was carried out by applying the proposed methods to real data of the local distribution utility.

2. Related Works

The increase in interest in implementing smart grids has given rise to multiple solutions regarding the reliable operation of automated grids. The design of communication links associated with automated grid elements and the adequate allocation of distributed generation are problems associated with modern grids. In these scenarios, reliability measures attributed to grid service become dependent on the distributed power sources and failure rates of the communication links [4].
In this sense, power grid control and planning are subject to a great number of stringent operation criteria that must be satisfied simultaneously. As a consequence, a recent trend in power systems research is the application of heuristics and multi-objective algorithms for optimizing modern grids with respect to reliability and other quality parameters. This procedure is convenient for grid management, since it contemplates multiple opposing parameters and extends the decision-making process by providing a set of optimal solutions.
The optimization of power dispatch and allocation of distributed generation has been considered in recent works from the viewpoint of various grid quality parameters. In [5], the control parameters of distributed DC-AC inverters were optimized to improve grid stability and power sharing. In [6], the reconfiguration strategy of grid switches was considered for simultaneously optimizing power losses, voltage deviation, and service outage. In [7], optimization of equipment cost and service reliability was achieved by optimizing the allocation of distributed generation and reconnecting switches.
Other works carried out multi-objective optimization considering external factors impacted by power system operation. In [8], a genetic algorithm was used to optimize power dispatch in order to minimize fuel costs and ambient impact due to gas emission by thermal units. Similarly, evolutionary algorithms were applied in [9] for optimizing polluting gas emission, grid voltage deviation, and active power losses. In [10], the ant-lion optimization heuristic was applied to distributed generation allocation to achieve optimal combinations of investment benefit, power losses, and voltage stability.
The above works considered system reliability as a function of power dispatch and allocated distributed generation. However, reliability associated with smart grid communication networks is of a different nature. The former provide knowledge concerning the grid capacity for consumer demand coverage, whereas the latter represent guarantees of appropriate automation triggering if a fault occurs. Hence, reliability measures related to communication networks must consider failure rates associated with the power feeders, which are measured by historical records of grid power outages.
Many recent works have considered the problem of planning the communication infrastructure deployed in the grid via optimization of phasor measurement unit (PMU) placement. This equipment acquires the instantaneous phasor estimates needed to assess the current grid state, which are subsequently relayed to control systems by means of communication links [11].
In [12,13], integer linear programming and backtracking search were respectively used for optimizing PMU allocation to minimize the number of units and maximize measurement redundancy. In [14], information theory was used to formulate the PMU placement in terms of mutual information, which was then optimized by means of a greedy algorithm in order to reduce the number of PMU units while guaranteeing complete system observability. In [15,16], respectively, variable neighborhood search heuristic and genetic algorithms were applied for optimizing PMU placement considering minimum observability constraints. In [17], a multi-objective differential evolution algorithm was used to optimize placement cost and measurement reliability.
We observe that the majority of works concerning optimization of grid communication infrastructure are related to the allocation of PMUs, whereas the existence of the links necessary for information relaying is tacitly assumed. However, complete deployment of the infrastructure depends on the choice of communication links adequate for reliable operation of the power grid.
In [18], a method was proposed for correlating communication link failures with power grid reliability. The approach considered the grid and associated network as given and established correlation parameters that enabled estimation of grid reliability based on topological parameters of the communication network. In [19], the authors proposed evaluating reliability via Monte Carlo simulation coupled with probabilistic failure models derived for the grid components. In [20], methods for evaluating the reliability of link topologies as a function of individual link reliabilities were proposed. However, no optimization procedure was proposed for link allocation.
The above methods are applicable when grid and network topologies are previously known. Therefore, they are not suitable for the scenario in which, for a given power grid, an optimal communication network must be deployed. Moreover, practical link allocation must also consider network cost, which is disregarded in these works.
The existing literature indicates that there is a lack of optimization methods applicable to the allocation of communication links for power grids with automated elements. In this work, we propose a method for optimizing link allocation taking into account cost and power system reliability. It will be shown that, for a problem of this nature, the use of genetic algorithms is not only natural, but also a good choice due to the size of the feasible solution space and the nature of the objective functions.

3. Problem Formulation

In this section, we break down the problem into multiple parts. We initially discuss the computation of network reliability for arbitrary topologies, proposing a simple method for estimating reliability with reduced complexity. Subsequently, we propose a novel heuristic for attributing reliability values to individual links, which takes into account power outage indexes of the subjacent power grid. Finally, we elaborate on network cost computation and constraints associated with the link allocation problem.

3.1. Reliability Computation

The value of reliability associated with a communication network is a measure of the probability that transmission does not fail. In normal operation, networks must provide one or more communication routes for any node pair. Considering the possible occurrence of contingencies in individual links, it is necessary to provide redundancies in the network design [21]. This procedure enables information flow between nodes during operation with faulted links.
Let us consider the computation of two-terminal reliability, which is the probability of successful transmission between a given node pair [22]. In other terms, the two-terminal reliability $C_{ij}$ associated with a node pair $(R_i, R_j)$ is equal to $P(R_i \to R_j)$, where $R_i \to R_j$ indicates successful transmission between nodes $R_i$ and $R_j$.
The general form of the two-terminal reliability is $C_{ij} = 1 - \prod_{k=1}^{\beta_{ij}} P_{L_k}$, where $P_{L_k}$ is the failure probability associated with one of the $\beta_{ij}$ existing paths between the nodes. Each path is composed of multiple links, whose individual reliability values are $c_{ij}$. We remark that $c_{ij} \neq C_{ij}$, since the latter is the global reliability between the nodes, whereas $c_{ij}$ is the reliability of a single link. It is clear that, for a given path $L_k$, $k \in \{1, 2, \ldots, \beta_{ij}\}$, we have $P_{L_k} = 1 - C_{L_k} = 1 - \prod_{L_k} c_{rs}$, where $c_{rs}$ is the reliability of a link belonging to path $L_k$. However, since we are considering all possible failures, the dependence of different paths must be taken into account. Let $L_k'$ be the greatest path such that $L_k' \subset L_k$ and $L_k' \neq L_k$. Since the failure of $L_k'$ implies that of $L_k$, it is clear that we must consider a reliability value $C_{L_k}/C_{L_k'}$ when accounting for the failure of $L_k$ [23]. Hence, $C_{ij}$ can be expressed as:

$$C_{ij} = 1 - \prod_{k=1}^{\beta_{ij}} \left[ 1 - \frac{\prod_{(r,s) \in L_k} c_{rs}}{\prod_{(u,v) \in L_k'} c_{uv}} \right] = 1 - \prod_{k=1}^{\beta_{ij}} \left[ 1 - \prod_{(r,s) \in [L_k \setminus L_k']} c_{rs} \right] \qquad (1)$$
We note that the above computations assume there is only one link per node pair. However, even if this is not the case (that is, at least one node pair possesses series and/or parallel links), Equation (1) may be used if the network is previously reduced to an equivalent topology with one link for each node pair. Reduction can be accomplished by using the following equations:
$$c_s = \prod_{i=1}^{m} c_i \qquad (2)$$

$$c_p = 1 - \prod_{i=1}^{m} (1 - c_i) \qquad (3)$$

where $c_s$ and $c_p$ are, respectively, the equivalent reliabilities of series and parallel associations of links with reliabilities $c_i$, $i \in \{1, 2, \ldots, m\}$.
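The reduction equations can be illustrated with a minimal Python sketch (the function names are ours, introduced only for illustration):

```python
from math import prod

def series_reliability(reliabilities):
    # Series links must all work: c_s = product of c_i (Eq. 2).
    return prod(reliabilities)

def parallel_reliability(reliabilities):
    # Parallel links fail only if all fail: c_p = 1 - product of (1 - c_i) (Eq. 3).
    return 1 - prod(1 - c for c in reliabilities)
```

For example, two links of reliability 0.9 give a series equivalent of 0.81 but a parallel equivalent of 0.99, which is why redundancy raises network reliability.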
Optimizing a single two-terminal reliability in the network is a method susceptible to the introduction of bias in the solutions obtained, since only one pair of nodes would be considered. In this sense, there would be no guarantee of convergence to an optimal network topology in terms of average reliability. Hence, we shall now consider the global network reliability, which is given by $C = P(R_i \to R_j,\ \forall i, j)$.
In order to compute C, the probability of each possible transmission attempt $R_i \to R_j$ must be known, since the relative frequency of a given attempt weights its contribution to C. Let $T_{ij} = P_T(R_i \to R_j)$ be the probability that transmission $R_i \to R_j$ will be attempted. It is clear that its contribution to C is $\Delta C(R_i \to R_j) = T_{ij} \cdot C_{ij}$. Hence, the network reliability can be computed with the following equation:

$$C = \sum_{i=1}^{n} \sum_{j=1}^{n} \Delta C(R_i \to R_j) = \sum_{i=1}^{n} \sum_{j=1}^{n} T_{ij} \cdot C_{ij} \qquad (4)$$
where n is the total number of network nodes, and we define $C_{ii} \equiv 0$. Assuming no previous information is available concerning transmission attempt probabilities, it is reasonable to suppose all attempts are equiprobable. Let $A_{n,2}$ denote the number of permutations of two elements chosen from a set of n objects; our assumption gives $T_{ij} = \frac{1}{A_{n,2}} = \frac{1}{n(n-1)}$. Substituting in Equation (4), we have:

$$C = \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{1}{n(n-1)} C_{ij} = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{1}{n-1} \sum_{j=1}^{n} C_{ij} \right] = \frac{1}{n} \sum_{i=1}^{n} \overline{C}_i \qquad (5)$$

where $\overline{C}_i$ is the average of two-terminal reliabilities taken over $R_i$. Hence, C can be computed by obtaining each $\overline{C}_i$ and computing the average of the $\overline{C}_i$ over all nodes. We note that $\overline{C}_i$ has an important interpretation, namely $\overline{C}_i = P(R_i \to R_j,\ \forall j)$, which is the probability of a successful transmission originating in $R_i$.
Based on this discussion, we highlight the following reliability values and their properties:
  • Node transmission reliability $\overline{C}_i$: Associated with a single node $R_i$, it is a measure of the average two-terminal reliabilities that involve the given node:

    $$\overline{C}_i = P(R_i \to R_j,\ \forall j) = \frac{1}{n-1} \sum_{j=1}^{n} C_{ij} \qquad (6)$$

  • Global transmission reliability C: Associated with a given network topology, it consists of the average of all $\overline{C}_i$ node reliability values:

    $$C = P(R_i \to R_j,\ \forall i, j) = \frac{1}{n} \sum_{i=1}^{n} \overline{C}_i \qquad (7)$$
We propose using C as one of the objective functions for optimizing link allocation, since it represents an average value of reliability associated with the network and takes into account the distribution of reliability throughout the network.
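In matrix terms, Equation (7) is a double average over the two-terminal reliabilities. A minimal Python sketch (assuming the matrix of $C_{ij}$ values has already been computed; the function name is ours):

```python
def global_reliability_from_matrix(C_matrix):
    # C_matrix[i][j] holds the two-terminal reliability C_ij, with C_ii = 0.
    n = len(C_matrix)
    # Node reliability: average of C_ij over the n-1 other nodes (Eq. 6).
    node_rel = [sum(row) / (n - 1) for row in C_matrix]
    # Global reliability: average of node reliabilities (Eq. 7).
    return sum(node_rel) / n
```

For instance, a three-node network with $C_{12} = 0.8$, $C_{13} = 0.6$, and $C_{23} = 0.7$ yields node reliabilities 0.7, 0.75, and 0.65, and hence $C = 0.7$.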
The possibly high number of paths $\beta_{ij}$ may pose a problem in terms of computational complexity. In order to reduce the processing effort for calculating each $C_{ij}$, we propose an approximation that consists of considering only the K paths with the greatest reliabilities among the $\beta_{ij}$ candidates. If K is such that $K \ll E[\beta_{ij}]$, computational effort can be saved. The approximate values obtained for $C_{ij}$ are reasonable, provided K is not exceedingly small.

We observe that in Equation (1), the smaller a given path's reliability $C_{L_k} = \prod_{L_k} c_{rs}$ is, the more negligible its contribution to $C_{ij}$ is, since its corresponding term in the product series tends to one. Hence, the procedure of ignoring paths with smaller reliabilities is justified.
The computation of each $C_{ij}$ presupposes knowledge of values for the link reliabilities $c_{ij}$. For now, let us assume such values are given and describe the proposed evaluation method. It consists of the following steps:
  1. Use Equations (2) and (3) to reduce all series/parallel link associations in order to obtain a topology with only one link per node pair;
  2. Use an enumeration algorithm to obtain the K paths from $R_i$ to $R_j$ with the highest associated reliability values $C_{L_k}$;
  3. Use Equation (1) and the obtained paths to compute $C_{ij}$.
Step 1 provides an equivalent graph with independent paths, for which Equation (1) may be applied for reliability computation, whereas Step 2 implements our proposal of only using the highest reliability paths for computing two-terminal reliabilities. Step 3 consists of applying Equation (1) to the obtained enumerated paths for computing C i j .
We propose using Yen's algorithm [24] as the enumeration procedure for Step 2. This algorithm outputs the K paths with the smallest sum of link weights. However, we want to enumerate the paths with the highest reliabilities, which are given by the products $\prod_{(r,s) \in L_k} c_{rs}$. To solve this problem, we adapt the algorithm as follows. Consider the function $f(x) = -\log_{10} x$; it is clear that $f(x) \geq 0$ and is strictly decreasing for $x \in (0, 1]$. Since $0 < c_{ij} \leq 1$, the function maps high-reliability links to lower weight values. Furthermore, the property $f(x_1 x_2) = f(x_1) + f(x_2)$ maps product terms into sums, so maximizing a path's reliability product is equivalent to minimizing its total transformed weight.

Hence, it is enough to supply all $f(c_{ij})$ as input weights to Yen's algorithm. After enumeration, the $c_{ij}$ themselves (and not their transformed values) may be used for computing $C_{ij}$ in Step 3.
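This transformation can be illustrated with the NetworkX library, whose shortest_simple_paths routine implements Yen's algorithm; the function name and the example links are ours, not from the paper:

```python
import math
import networkx as nx

def k_most_reliable_paths(links, source, target, K):
    """links: dict mapping (u, v) node pairs to link reliability c_uv in (0, 1]."""
    G = nx.Graph()
    for (u, v), c in links.items():
        # f(c) = -log10(c) turns reliability products into additive path weights.
        G.add_edge(u, v, weight=-math.log10(c))
    paths = []
    # Paths are yielded in order of increasing total weight,
    # i.e., decreasing reliability product.
    for path in nx.shortest_simple_paths(G, source, target, weight="weight"):
        # Recover the path reliability as the product of the original c_uv values.
        rel = math.prod(links.get((u, v), links.get((v, u)))
                        for u, v in zip(path, path[1:]))
        paths.append((path, rel))
        if len(paths) == K:
            break
    return paths
```

For example, with links (1,2) and (2,3) of reliability 0.9 and a direct link (1,3) of reliability 0.7, the two-hop path is ranked first, since its reliability product 0.81 exceeds 0.7.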
The proposed algorithm requires as input the nodes $R_i$, the possible links to be allocated, and their reliability values $c_{ij}$. We present this information as a list of links and their respective reliabilities $\Gamma = \{(R_i \to R_j, c_{ij});\ 1 \leq i, j \leq n\}$. This representation is adopted in order to account for multiple links in a single connection from $R_i$ to $R_j$ (in series or parallel). The list $\Gamma$ is then reduced, by means of Equations (2) and (3), to a new list $\Gamma_{reduced}$ in which there is only one link between each node pair, allowing the computations to be carried out.

Once this reduction is made, an adjacency matrix weighted by the equivalent $c_{ij}$ can be defined for simultaneously handling link reliability and allocation information. Let $A = [a_{ij}]_{n \times n}$ be the standard adjacency matrix, in which $a_{ij} = 1$ if a given topology contains the link from $R_i$ to $R_j$ and $a_{ij} = 0$ otherwise. Defining $A' = [a'_{ij}]_{n \times n}$, with $a'_{ij} = a_{ij} \cdot c_{ij}$, this matrix can be used for reliability computations and for defining link presence ($a'_{ij} \neq 0$) or absence ($a'_{ij} = 0$).
The proposed procedure for computing C is given in pseudocode form in Algorithm 1.
Algorithm 1: Computation of global reliability.
Input: $R_i$, $1 \leq i \leq n$; $\Gamma$; K.
Output: C.
 1: $C_{ij} \leftarrow 1$, $\forall i, j$.
 2: $\overline{C}_i \leftarrow 0$, $\forall i$.
 3: $C \leftarrow 0$.
 4: for $1 \leq i \leq n$ do
 5:   for $1 \leq j \leq n$ do
 6:     Use $\Gamma$ to reduce series/parallel associations.
 7:     return $\Gamma_{reduced}$ and A.
 8:     Generate $A'$ by using A and $\Gamma_{reduced}$.
 9:     Generate $A_{aux}$, in which $a_{ij}^{(aux)} = -\log_{10} a'_{ij}$.
10:     Execute Yen's algorithm over $A_{aux}$.
11:     return paths $L_u$, $1 \leq u \leq K$.
12:     for $1 \leq u \leq K$ do
13:       $C_{ij} \leftarrow C_{ij} \cdot \left[ 1 - \prod_{(r,s) \in (L_u \setminus L_u')} c_{rs} \right]$.
14:     end for
15:     $C_{ij} \leftarrow 1 - C_{ij}$.
16:     $\overline{C}_i \leftarrow \overline{C}_i + C_{ij}/(n-1)$.
17:   end for
18:   $C \leftarrow C + \overline{C}_i/n$.
19: end for
20: return C.
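The core of this computation can be sketched compactly in Python under the simplifying assumption that the K enumerated paths fail independently (i.e., dropping the $L_u'$ correction term of Equation (1)); the helper names and the path-enumeration callback are ours:

```python
import itertools
from math import prod

def two_terminal_reliability(path_reliabilities):
    # C_ij ~= 1 - product over enumerated paths of (1 - C_{L_u}),
    # treating the paths as independent (shared-link correction omitted).
    return 1 - prod(1 - r for r in path_reliabilities)

def approx_global_reliability(nodes, k_paths):
    # k_paths(i, j) must return the reliabilities of the K best paths
    # from node i to node j (e.g., obtained via Yen's algorithm).
    n = len(nodes)
    total = 0.0
    for i, j in itertools.permutations(range(n), 2):
        total += two_terminal_reliability(k_paths(i, j))
    # Average over the n(n-1) ordered node pairs, as in Eq. (5).
    return total / (n * (n - 1))
```

As a design note, the callback keeps path enumeration separate from the averaging step, mirroring the Step 2/Step 3 split of the evaluation method.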

3.2. Reliability Attribution Heuristic

In a communication network, the relative importance of individual nodes for the system determines how critical the maintenance of their connectivity is. This suggests that network redundancies must be implemented towards securing communication between the most critical system nodes. For communications among automated grid breakers, it is clear that node importance is related to power outage indexes. In fact, if a feeder protected by a given breaker has frequent contingencies, it is more important to guarantee the breaker connection to the network than it would be if the associated feeder were not faulty.
Taking this into account, we propose a novel heuristic for prioritizing breakers associated with faulty feeders during optimization of optical link allocation. The procedure works by attributing reliabilities c i j to the network links as functions of grid continuity indexes.
The main index used for evaluating power continuity in Brazil is named DEC. This parameter represents the monthly average blackout time (in hours) measured in a given consumer group supplied by a feeder and is given by [1]:

$$DEC = \frac{1}{N_{cons}} \sum_{i=1}^{N_{cons}} DIC(i) \qquad (8)$$

where $DIC(i)$ is the monthly blackout time for the ith consumer of the group and $N_{cons}$ denotes the total number of consumers in the group.
The measurement and storage of DEC values for all consumer groups is a regulatory requirement imposed on Brazilian utilities. Hence, this index is a reliable measurement of power continuity in the grid. Given the breakers and the DEC values of their associated feeders, we propose attributing each $c_{ij}$ value as a function of the DEC values associated with the feeders protected by breakers $R_i$ and $R_j$, as follows.
Considering that high values of DEC in a feeder imply frequent outages, it is clear that breakers protecting feeders with high DEC must be prioritized during communication link allocation. In this sense, we propose attributing link failure probabilities $p_{ij} = 1 - c_{ij}$ equal to the normalized value of the highest DEC associated with the feeders protected by nodes $R_i$ and $R_j$.
We justify this heuristic in the following manner: for a greater value of p i j attributed to a certain link, the optimization algorithm will select solutions that possess redundancies around this link in order to maximize global reliability C. Therefore, redundancy allocation in the communication network will prioritize breakers that protect feeders with higher DEC values. In other words, the selected solutions will be the ones that attain greater communication reliability for breakers operating in faulty feeders.
Before indicating how to attribute the c i j to the network links, we note that there are two main types of breaker connections in power grids, namely grid and tie breakers [25]. Their difference resides in the natural state of operation and number of feeders to which they are connected: grid breakers are normally closed and protect one feeder, whereas tie breakers are normally open and connected between two feeders. Due to this difference, we shall propose slightly different heuristics for attributing c i j values in each case.
Let $DEC(F_i)$ denote the DEC of a feeder associated with node $R_i$. Considering that DEC is measured monthly, we have $0\ \mathrm{h} \leq DEC(F_i) \leq 720\ \mathrm{h}$. Given the network nodes and their associated feeder(s) with respective DEC value(s), we propose the following strategy for attributing reliabilities $c_{ij}$ to the network links:
  • Normalize all $DEC(F_i)$ by $\Delta t = 720$ h. The new values will be such that $DEC(F_i) \in [0, 1]$;
  • For every breaker $R_i$, verify whether its type is grid or tie. In the first case, define $p_i = DEC(F_i)$. Otherwise, define $p_i = \max_{F_i} DEC(F_i)$;
  • For every link from $R_i$ to $R_j$, attribute $p_{ij} = \max\{p_i, p_j\}$. Hence, $c_{ij} = 1 - \max\{p_i, p_j\}$.
The parameter $p_i$ associated with $R_i$ is equal to the DEC of the worst performing feeder to which this breaker is connected. Hence, $p_i$ can be interpreted as a measure of the grid failure probability associated with $R_i$. For this attribution heuristic, it is clear that $p_{ij} - p_i \geq 0$ and $p_{ij} - p_j \geq 0$. Thus, the assigned $p_{ij}$ increase for greater node DEC values. This shows that nodes with unreliable feeders tend to decrease C, which makes the optimization process select network topologies that prioritize link redundancies between high-DEC nodes. The proposed strategy is summarized in Algorithm 2.
Algorithm 2: Computation of link reliabilities.
Input: $R_i$; $DEC(F_i)$, $1 \leq i \leq n$.
Output: $c_{ij}$, $1 \leq i, j \leq n$.
 1: $DEC(F_i) \leftarrow DEC(F_i)/\Delta t$, $\forall i$.
 2: for $1 \leq i \leq n$ do
 3:   if $R_i$ is grid-type then
 4:     $p_i \leftarrow DEC(F_i)$.
 5:   else
 6:     $p_i \leftarrow \max_{F_i} DEC(F_i)$.
 7:   end if
 8: end for
 9: $p_{ij} \leftarrow \max\{p_i, p_j\}$, $\forall i, j$.
10: $c_{ij} \leftarrow 1 - p_{ij}$, $\forall i, j$.
11: return $c_{ij}$.
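The heuristic above can be sketched directly in Python; the input encoding (dicts keyed by node labels) and the function name are our own illustrative choices:

```python
def link_reliabilities(dec, breaker_type, delta_t=720.0):
    """dec: dict node -> list of monthly DEC values (hours) of its feeder(s);
    breaker_type: dict node -> 'grid' (one feeder) or 'tie' (two feeders)."""
    p = {}
    for node, values in dec.items():
        norm = [v / delta_t for v in values]  # normalized DEC(F_i) in [0, 1]
        # Grid breakers protect one feeder; tie breakers take the worst feeder.
        p[node] = norm[0] if breaker_type[node] == "grid" else max(norm)
    nodes = list(dec)
    # Link failure probability is the worst endpoint's p_i;
    # reliability is its complement (c_ij = 1 - max{p_i, p_j}).
    return {(i, j): 1 - max(p[i], p[j]) for i in nodes for j in nodes if i != j}
```

For example, a grid breaker whose feeder has DEC = 72 h gives $p_i = 0.1$, and a tie breaker between feeders with DEC of 36 h and 144 h gives $p_j = 0.2$, so the link between them receives $c_{ij} = 0.8$.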
Considering that all $c_{ij}$ are attributed in a heuristic manner, we need a reference network topology against which we may compare C values computed for arbitrary networks. In this sense, we chose as a reference the shortest-perimeter ring topology, whose reliability we denote as $C_{ring}$. Thus, for any network topology, we can compare its C against $C_{ring}$ as a reliability measure, as will be shown in the computer experiments of Section 5.

3.3. Network Cost Computation

In the deployment of fiber optics links, cost is mainly associated with the required length of fiber. Hence, the network cost is a function of the distances between all node pairs that are connected by links. Let the node positions be identified in an arbitrary coordinate system by means of the set $R = \{(x_i, y_i),\ 1 \leq i \leq n\}$. Assuming the coordinates are known and denoting the distance between $R_i$ and $R_j$ by $d_{ij}$, we consider the Euclidean distance $d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$. Disregarding multiplicative factors for converting distance into monetary units, we compute the cost Q of a given topology as the sum of link lengths. Considering that optical links are implemented for bidirectional communication [26], we have $d_{ij} = d_{ji}$ and $a_{ij} = a_{ji}$. A factor $\frac{1}{2}$ must be used to avoid counting each bidirectional link twice. Hence, given the adjacency matrix A, the cost can be computed by:

$$Q = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} d_{ij} = \sum_{i=1}^{n} \sum_{j=1}^{i} a_{ij} d_{ij} \qquad (9)$$
We note that $a_{ii} = 0$, as a self-connecting link is meaningless in the problem context. We now address the handling of equivalent link costs in case the list $\Gamma$ contains series or parallel links. If there are series links to be reduced, the total distance is equal to the sum of the individual link lengths; hence, costs may simply be summed. For parallel links, the individual connections have approximately the same length; therefore, the equivalent cost is equal to the individual link length multiplied by the total number of links.
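Equation (9) amounts to summing Euclidean link lengths over the lower triangle of the adjacency matrix; a minimal sketch (the function name is ours):

```python
import math

def network_cost(adjacency, coords):
    # Q = sum of link lengths, counting each bidirectional link once (Eq. 9).
    n = len(coords)
    cost = 0.0
    for i in range(n):
        for j in range(i):  # j < i skips the diagonal and avoids double counting
            if adjacency[i][j]:
                (xi, yi), (xj, yj) = coords[i], coords[j]
                cost += math.hypot(xi - xj, yi - yj)
    return cost
```

For example, a three-node chain at (0, 0), (3, 4), and (0, 8) has two links of length 5 each, giving Q = 10.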
We shall use network cost Q as the second objective function to be optimized, alongside reliability C. Therefore, solutions obtained will represent different levels of compromise between these opposing factors. This approach is necessary in solving practical link allocation problems, since the amount of resources available for network deployment is limited.

3.4. Problem Constraints

Optimization problems usually present constraints in terms of the design variables and objective functions. In the link allocation problem, reasonable constraints would be a maximum cost and a minimum reliability value for all network solutions. However, this type of constraint disregards the connectivity of individual nodes.

For instance, a minimum value for C would not penalize a network that achieves high reliability for all nodes except one that is left isolated. Likewise, a maximum Q value may cause links with good reliability to never be chosen due to their greater lengths. Based on these considerations, in what follows, we define alternative constraints that naturally limit cost and reliability while still accounting for individual nodes.
We consider a physical constraint that can be given in terms of the design variables. Every automated breaker has a maximum number of communication ports. Hence, there is a limit $n_e$ on the number of links that can be connected to any network node. Supposing equal port limitation for all nodes, we address this constraint by defining a violation index [27] given by a step-form function:

$$d_i = (n_i - n_e) \cdot u(n_i - n_e) = \begin{cases} 0, & \text{if } n_i \leq n_e; \\ n_i - n_e, & \text{if } n_i > n_e \end{cases} \qquad (10)$$

where $n_i$ is the number of links connected to node $R_i$ and $u(\cdot)$ is the unit step function. Analogously, we consider a minimum connectivity constraint $n_i \geq n_f$ for all nodes. This can be interpreted as ensuring any given node is supplied with sufficient link redundancies. The associated violation index is defined as:

$$d_i' = (n_f - n_i) \cdot u(n_f - n_i) \qquad (11)$$
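Both step-form indexes reduce to clipping the constraint deviation at zero; a small sketch (the function name is ours):

```python
def violation_indexes(degree, n_e, n_f):
    # Step-form indexes: zero when the constraint holds,
    # otherwise the magnitude of the violation.
    over = max(degree - n_e, 0)   # too many links at the node (port limit n_e)
    under = max(n_f - degree, 0)  # too few links at the node (redundancy floor n_f)
    return over, under
```

For example, with $n_e = 4$ and $n_f = 2$, a node with 5 links yields indexes (1, 0), one with a single link yields (0, 1), and a node with 3 links satisfies both constraints.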
Depending on the type of optimization algorithm used to solve a problem, violation indexes can be implemented differently. For example, in gradient-based methods, they can be implemented as regularization terms in the objective function, whereas in genetic algorithms, they may be processed independently by means of binary tournament methods. The implementation of violation handling will be discussed in Section 4.

4. Choice of Optimization Method

The variables of the allocation problem are all $a_{ij}$ with $1 \leq i \leq n$ and $j < i$. The restriction on the second index is due to bidirectional communication in optical links ($a_{ij} = a_{ji}$) and the absence of self-connected nodes ($a_{ii} = 0$). Hence, the total number of variables is $\binom{n}{2} = \frac{n(n-1)}{2}$. Henceforth, we denote the set of all design variables by the vector $\mathbf{a}$.
Practical optimization problems such as the one considered in this work are frequently intractable from the analytical point of view. This is due to a high number of variables and the involved form of the objective functions and their derivatives. In such cases, numerical solutions based on optimization algorithms must be used for solving the problem.
There are multiple classes of optimization algorithms, whose adequacy for application depends on problem representation, the domain, constraints, the ease of calculating objective function derivatives, and the computational complexity. In what follows, we will describe these characteristics for the allocation problem and establish the use of a genetic algorithm as most adequate.
The considered problem demands multi-objective optimization, since the compromise between cost and network reliability must be considered. Therefore, the objective functions to be optimized are C and Q, with the constraints established in Section 3. The number of design variables is $\binom{n}{2}$, which may be high if the considered network has a large number of nodes. Problems with many variables can still be tractable if the objective functions are easy to compute. This is the case for Q, which is simply a linear combination of the variables. However, the computation of C is involved because all paths between every pair of nodes must be enumerated.
Given the computational cost of obtaining C and the large number of variables $a_{ij}$, second-order gradient-based optimization is impractical due to the large dimension of the Hessian matrix. Even if first-order optimization were considered, the burden imposed by the computation of derivatives would be excessive. To elaborate, let the reliability objective function be $E_C(\mathbf{a})$. Computing a single derivative $\frac{\partial E_C}{\partial a_{ij}}(\mathbf{a})$ would require enumeration of all paths for two different topologies, namely for $a_{ij} = 1$ and $a_{ij} = 0$. If $O(S)$ is the complexity of the enumeration algorithm, this procedure entails a complexity of $O(n^2) \cdot O(S)$, as there are $n^2 - n \approx n^2$ node pairs to consider. Finally, computing the full gradient $\nabla_{\mathbf{a}} E_C(\mathbf{a})$ consists of repeating the procedure for all components of $\mathbf{a}$. Hence, the complexity of a first-order method for optimizing C would be of order $O(n^4) \cdot O(S)$.
Furthermore, in gradient-based methods, E_C(a) should contain a regularization term for punishing unfeasible solutions. In this sense, we would have E_C(a) = E_C^o(a) + λ·R_C(a), where E_C^o is a function of C, R_C(a) is the regularization term, and λ is the weight given to this term. The choice of λ is not trivial and may be problematic, since its optimal value may vary for different instances of the optimization problem.
We now argue that genetic algorithms can be used for optimization without the difficulties previously presented. Since this type of algorithm consists of evaluating the fitness of stochastically-generated solutions, the problem of computing derivatives is immediately eliminated. Furthermore, the constraint handling procedures are usually independent of objective function evaluation, making weight parameters such as λ unnecessary. Since derivatives do not need to be computed, reliability evaluation presents only O(n^2)·O(S) complexity.
The allocation problem variables are naturally coded as a binary digit string given by a, since all variables satisfy a_ij ∈ {0, 1}. This type of representation is required for applying the genetic algorithm operators, such as mutation and crossover, to the solution ensemble. Since no conversion of the problem data into an abstract binary representation is needed, the problem is naturally suited to the data representation used in genetic algorithms.
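The correspondence between the bit string a and the adjacency matrix A can be sketched as follows. This is a minimal sketch with helper names of our own; for compactness it stores only the upper triangle (links are undirected), whereas the text indexes all n^2 entries of A.

```python
import numpy as np

def vector_to_adjacency(a, n):
    """Unpack a binary string a of length n*(n-1)//2 into a symmetric
    adjacency matrix. Only the upper triangle is stored, since links
    are undirected (illustrative layout, not the paper's exact indexing)."""
    A = np.zeros((n, n), dtype=int)
    iu = np.triu_indices(n, k=1)
    A[iu] = a
    return A + A.T  # mirror to obtain a symmetric matrix

def adjacency_to_vector(A):
    """Inverse mapping: flatten the upper triangle back to a bit string."""
    n = A.shape[0]
    return A[np.triu_indices(n, k=1)]
```

A round trip through both helpers recovers the original bit string, which is the property the genetic operators rely on.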
As a final argument for the use of genetic algorithms, we establish a loose lower bound, proportional to (n − 1)!, for the number of feasible solutions; the resulting size of the problem domain is itself a reason for considering genetic algorithms. We make the following proposition:
Theorem 1.
The number N_s of feasible solutions for the problem:
  • minimize_a Q(a);
  • maximize_a C(a);
  • subject to n_i ≤ n_e and n_i ≥ n_f, i = 1, 2, …, n
has a lower bound given by N_s ≥ K_s · (n − 1)!, where K_s < n · (n_e − n_f) + 1 and K_s ∈ ℕ*.
Proof of Theorem 1.
Let N_s(k) denote the number of solutions that satisfy Σ_{i=1}^{n} n_i = k, with n·n_f ≤ k ≤ n_max, where n_max < n·n_e. Clearly, N_s = Σ_{k=n·n_f}^{n_max} N_s(k). Now, we consider the particular case with the most severe node connection restrictions, namely n_e = n_f = 2. We disregard the case of nodes with only one connection because it would imply topologies with isolated node pairs.
For this particular case, denote the number of possible solutions by N_s^*. Since all nodes are forced to have two connections, the number of solutions equals the number of circular permutations of the nodes, N_s^* = (n − 1)!. Returning to the general problem, it is clear that N_s(k) ≥ N_s^* for every k. Therefore, we have:
N_s ≥ Σ_{k=n·n_f}^{n_max} N_s^* = (n_max − n·n_f + 1) · (n − 1)! = K_s · (n − 1)!
Since n_max ≥ n·n_f, we have K_s ∈ ℕ*. Furthermore, using the fact that n_max < n·n_e, we get K_s < n·(n_e − n_f) + 1, and the proof is concluded. □
The obtained lower bound is very loose, since for large k, the number of possible solutions tends to be much higher than in the restricted case n_e = n_f = 2. However, in spite of being conservative, the bound grows with (n − 1)!, which shows that the feasible domain is very large. Coupled with the previous discussions, this fact makes a strong case for using genetic algorithms.

4.1. Implementing the Optimization Method with NSGA-II

We opted for implementing the optimization routine with the non-dominated sorting genetic algorithm II (NSGA-II) [27]. The main reasons for selecting this genetic algorithm are its satisfactory computational complexity and lack of tuning parameters. Furthermore, when compared to NSGA-III, it has similar performance for two-objective problems and a simpler implementation of the solution crowding operator [28]. For M objectives and N individuals per generation, the complexity of NSGA-II is O ( M N 2 ) per generation. The algorithm manages constraints separately from objective function evaluation, which means a weight parameter for constraint violation is not needed. The ranking of solutions is done on a binary tournament basis, in which the criteria are solution fitness and constraint violations.
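The constrained binary tournament described above can be sketched as follows. This is a minimal illustration of the selection rule in the spirit of NSGA-II [27]; the representation of an individual as an (objectives, violation) pair is our assumption.

```python
import random

def dominates(f1, f2):
    """Pareto dominance for minimization: f1 dominates f2 if it is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(f1, f2)) and any(x < y for x, y in zip(f1, f2))

def binary_tournament(p1, p2):
    """Constrained tournament sketch: a feasible solution beats an
    infeasible one, lower total violation wins among infeasible
    solutions, and dominance decides among feasible ones.
    Individuals are (objectives, violation) pairs (assumed layout)."""
    (f1, v1), (f2, v2) = p1, p2
    if v1 == 0 and v2 > 0:
        return p1
    if v2 == 0 and v1 > 0:
        return p2
    if v1 > 0 and v2 > 0:
        return p1 if v1 < v2 else p2
    if dominates(f1, f2):
        return p1
    if dominates(f2, f1):
        return p2
    return random.choice([p1, p2])  # tie: pick either (crowding breaks ties in NSGA-II)
```

Note that no weight parameter appears anywhere: constraint violation is compared separately from the objective values, which is precisely the property that makes a penalty weight λ unnecessary.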
A weakness of many multi-objective genetic algorithms is that they require a prespecified sharing parameter σ for controlling the variety of the obtained solutions. Ideally, the Pareto front should be as diverse as possible. In order to obtain good results, trial and error with various σ values may be necessary; furthermore, the optimal parameter value is prone to change across problem instances. This difficulty is overcome in NSGA-II by a crowding operator that naturally selects the most diverse solutions: if two solutions are tied in the ranking process, the solution most distant from the rest is selected by the operator.
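The crowding operator can be illustrated by a minimal implementation of the crowding distance of Deb et al. [27]; the list-of-objective-tuples representation is our choice for the sketch.

```python
def crowding_distance(front):
    """Crowding distance as in NSGA-II: for each objective, sort the
    front, give the boundary solutions infinite distance, and add to
    each interior solution the normalized gap between its neighbors.
    `front` is a list of objective tuples; returns one distance per
    solution, in the original order."""
    n = len(front)
    dist = [0.0] * n
    for k in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundaries kept
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue  # degenerate objective: no contribution
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k] - front[order[pos - 1]][k]) / span
    return dist
```

Solutions with larger crowding distance sit in sparser regions of the front, so preferring them in ties naturally spreads the population without any tuning parameter.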
Aside from the ranking and crowding operators, which were implemented as given in Deb et al. [27], we used mutation and crossover operators for population evolution. We opted for uniform crossover and bitwise mutation with probabilities p c = 0.7 and p m = 0.03 , respectively. Using uniform crossover in this problem is more reasonable than k-point crossover, since exchanging large bit substrings of two solution vectors is very likely to generate network solutions with isolated nodes. The initial population was generated randomly with a uniform distribution, and the evolutionary process was set to run for G = 30 generations with N = 100 individuals each.
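The two variation operators can be sketched as follows; these are minimal versions in which solutions are plain Python bit lists, and the default probabilities mirror the values p_c = 0.7 and p_m = 0.03 used in the text.

```python
import random

def uniform_crossover(parent1, parent2, p_c=0.7):
    """Uniform crossover: with probability p_c, each bit position of the
    children is inherited independently from either parent; otherwise
    the parents are copied unchanged."""
    if random.random() >= p_c:
        return parent1[:], parent2[:]
    c1, c2 = [], []
    for b1, b2 in zip(parent1, parent2):
        if random.random() < 0.5:
            c1.append(b1); c2.append(b2)
        else:
            c1.append(b2); c2.append(b1)
    return c1, c2

def bitwise_mutation(bits, p_m=0.03):
    """Bitwise mutation: each bit flips independently with probability p_m."""
    return [1 - b if random.random() < p_m else b for b in bits]
```

Because uniform crossover mixes individual bits rather than swapping long substrings, it is less likely than k-point crossover to detach whole groups of links and produce isolated nodes, which is the rationale given above.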
All algorithm parameters were selected by means of preliminary manual tests. The values of N and G were gradually increased until further variation no longer contributed significantly to solution front variety and convergence, respectively. For selecting the mutation and crossover probabilities, we used the standard heuristic of starting with low values of p_m and high values of p_c, which avoids both stagnation and excessive randomness in the evolutionary process [29]. The selected values are those that yielded the best results in the preliminary tests.
The NSGA-II algorithm is designed to minimize all of its objective functions. In the considered allocation problem, cost must be minimized and reliability maximized. In order to adapt the reliability objective to NSGA-II, we ran the optimization on the objectives Q and −C.
The optimization algorithm implementation with NSGA-II is described in Algorithm 3. We denote by Ω the set of all lists Γ that represent possible solutions. Furthermore, G and N are the numbers of generations and individuals per generation, respectively.
Algorithm 3: Network optimization via NSGA-II.
Input: R_i, 1 ≤ i ≤ n; G; N; p_m; p_c.
Output: Pareto front.
1: Generate a random family of N lists Γ_i(0) ∈ Ω, i = 1, 2, …, N.
2: for 1 ≤ j ≤ G do
3:   for 1 ≤ i ≤ N do
4:     Compute Equation (9).
5:     Execute Algorithm 1.
6:     return (Q_i, C_i).
7:   end for
8:   Execute NSGA-II over objectives Q and (−C).
9:   return new family of N lists Γ_i(j) ∈ Ω, i = 1, 2, …, N.
10: end for
11: return Pareto front.

4.2. Alternative Implementation with Objective Priority

An important aspect of practical optimization problems is the attribution of priority levels to objective functions, which NSGA-II does not cover. The algorithm compensates for this by outputting a Pareto front containing solutions with different degrees of optimality for each objective. From the perspective of a decision maker, however, it may be desirable to have a priority mechanism that yields a single solution. In the problem context, this corresponds to the utility having a previously established reliability-cost trade-off.
In this sense, we propose a simple alternative implementation that can be substituted for the standard NSGA-II when desired. It consists of making a linear combination of Q and C and using the same routine as in Algorithm 3 for optimizing this linear combination in a mono-objective fashion. In other terms, the objective function is given by:
Z = r_Q · Q / Q_max − r_C · C,
where r_C > 0 and r_Q > 0 are, respectively, the weights associated with the original objective functions C and Q; these weights satisfy r_C + r_Q = 1. Cost is normalized by the cost Q_max of the maximum-cost topology so that both objective terms belong to the interval [0, 1], which is needed for correct weight attribution [30].
If multiple objective functions are linearly combined with weights that sum to one, the mono-objective optimization converges to a solution belonging to the Pareto front that would be obtained via multi-objective optimization [30]. In this sense, the weights establish a mechanism for automatically selecting desired regions of the solution space. This is of interest for a decision maker who possesses prior information concerning the desired solution, since the search is directed towards the most adequate region of the Pareto front from the start.
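The weighted scalarization above can be sketched as follows; the helper names are ours, and the selection over an already-evaluated list of (Q, C) pairs is purely illustrative.

```python
def scalarized_objective(Q, C, Q_max, r_Q, r_C):
    """The linear combination Z = r_Q * Q/Q_max - r_C * C described above,
    with the weights constrained to sum to one."""
    assert abs(r_Q + r_C - 1.0) < 1e-9
    return r_Q * Q / Q_max - r_C * C

def best_by_weights(solutions, Q_max, r_Q):
    """Pick, from a list of (Q, C) pairs, the solution minimizing Z:
    an illustrative mono-objective selection over evaluated solutions."""
    r_C = 1.0 - r_Q
    return min(solutions,
               key=lambda s: scalarized_objective(s[0], s[1], Q_max, r_Q, r_C))
```

Sliding r_Q from 0 to 1 moves the selected solution from the reliability-dominated end of the front towards the low-cost end, which is the priority mechanism discussed above.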

4.3. Computational Complexity

Now that the optimization method is established, we derive its computational complexity. We initially consider the complexity of path enumeration, namely that of Yen's algorithm, which depends on the auxiliary method used for obtaining the shortest path [24]. Due to its simplicity, we used Dijkstra's algorithm as the auxiliary method. In this case, the search complexity is O(Kn^3) [31], where n is the number of nodes and K is the maximum number of paths to be enumerated in each iteration.
We previously established a complexity of O(n^2)·O(S) for evaluating the reliability of a single solution. Substituting O(S) = O(Kn^3), we get a complexity of O(Kn^5). From Equation (9), the computation of a solution cost is seen to be O(n^2). Considering that the reliability and cost of a solution are computed sequentially, we have a complexity of O(Kn^5 + n^2) = O(Kn^5) per solution. The complexity of NSGA-II for processing one population is O(MN^2). Since each solution has an evaluation complexity of O(Kn^5), we get a total complexity of O(KMN^2·n^5) per generation.
Reasoning in a similar manner for first-order gradient algorithms, we obtain a complexity of O(Kn^7) for computing each solution. If the algorithm were of a stochastic gradient-descent type, it would require multiple iterations, increasing the complexity even further. Hence, using NSGA-II is advantageous in terms of complexity by at least a factor of order n^2 per solution.
It is clear that the complexity of the proposed method strongly depends on n. On the other hand, it advantageously has polynomial behavior. It can be seen that a good choice of K can help decrease complexity. However, this parameter must be kept sufficiently high in order to guarantee low approximation errors during reliability computations.

5. Case Study

The proposed heuristic and optimization method were applied to a practical problem considered in a research project by the local distribution utility (CELG-D) for automating a group of breakers in its grid. The considered scenario consisted of the allocation of optical links for n = 12 recently installed grid breakers, with physical constraints n_f = 2 and n_e = 4.
In Table 1, we present the coordinates of all nodes in the coordinate system adopted by the utility. The p_i parameters computed for each node using Algorithm 2 are also given. The computation was based on DEC data from the utility database. The positions of all breakers are illustrated in Figure 1, where we also display the reference ring network topology.
We then applied the proposed heuristic to the obtained p_i in order to attribute all link reliabilities c_ij. These values are given in the matrix C_o = [c_ij]_{12×12} as follows:
C_o =
[ 0      0.9175 0.9805 0.9765 0.8460 0.9805 0.9660 0.9175 0.9825 0.9815 0.9775 0.8460 ]
[ 0.9175 0      0.9280 0.9240 0.7935 0.9280 0.9135 0.8650 0.9300 0.9290 0.9250 0.7935 ]
[ 0.9805 0.9280 0      0.9870 0.8565 0.9910 0.9765 0.9280 0.9930 0.9920 0.9880 0.8565 ]
[ 0.9765 0.9240 0.9870 0      0.8525 0.9870 0.9725 0.9240 0.9890 0.9880 0.9840 0.8525 ]
[ 0.8460 0.7935 0.8565 0.8525 0      0.8565 0.8420 0.7935 0.8585 0.8575 0.8535 0.7220 ]
[ 0.9805 0.9280 0.9910 0.9870 0.8565 0      0.9765 0.9280 0.9930 0.9920 0.9880 0.8565 ]
[ 0.9660 0.9135 0.9765 0.9725 0.8420 0.9765 0      0.9135 0.9785 0.9775 0.9735 0.8420 ]
[ 0.9175 0.8650 0.9280 0.9240 0.7935 0.9280 0.9135 0      0.9300 0.9290 0.9250 0.7935 ]
[ 0.9825 0.9300 0.9930 0.9890 0.8585 0.9930 0.9785 0.9300 0      0.9940 0.9900 0.8585 ]
[ 0.9815 0.9290 0.9920 0.9880 0.8575 0.9920 0.9775 0.9290 0.9940 0      0.9890 0.8575 ]
[ 0.9775 0.9250 0.9880 0.9840 0.8535 0.9880 0.9735 0.9250 0.9900 0.9890 0      0.8535 ]
[ 0.8460 0.7935 0.8565 0.8525 0.7220 0.8565 0.8420 0.7935 0.8585 0.8575 0.8535 0      ]
Given the c_ij and node coordinates, we first computed the cost Q_ring and reliability C_ring of the ring network (Figure 1). As previously discussed, the reliability C_ring is used as a reference for comparison with the obtained solutions. It is also of interest to compare solution costs to Q_ring because this topology is considered low cost while attaining good reliability. The obtained values were Q_ring = 0.0497 and C_ring = 0.8283. Considering the good compromise between cost and reliability achieved by ring networks, the Pareto front is expected to lie in the neighborhood of the reference ring solution.
The proposed optimization algorithm was run for G = 30 generations with N = 100 individuals each, with the value K = 100 for the maximum path number parameter of Yen's algorithm. At first, standard multi-objective optimization was considered. In a second run of the algorithm, we used the alternative implementation with objective priority, testing different weights r_C and r_Q. The obtained results are described in what follows.

5.1. Weightless Optimization

Initially, regular multi-objective optimization was applied. The obtained Pareto front is depicted in Figure 2. In Figure 3, Figure 4 and Figure 5, we present the network topologies, costs, and reliabilities of three solutions sampled from the Pareto front (denoted S 1 , S 2 , and S 3 in Figure 2).
The sampled solutions allow us to visualize the solution behavior along the Pareto front: the closer a solution is to the Q-axis, the higher its reliability and cost. The relative reliability gains C_i/C_i^(ring) for each node, together with the overall gain C/C^(ring), are given in Table 2 for the sampled solutions S_1, S_2, and S_3.

5.2. Optimization with Weight Assignment

The algorithm with objective priority was run for the weight values shown in Table 3. The solutions obtained for each (r_Q, r_C) pair, with their respective reliability and cost values, are given in Figure 6, Figure 7 and Figure 8. The reliability gains for each solution are given in Table 4. The Pareto front points corresponding to the obtained solutions are labeled P_1, P_2, and P_3 in Figure 2. As expected, all solutions belong to the front obtained via multi-objective optimization.

5.3. Analysis of the Results

The results in Table 2 and Table 4 show greater reliability gains in nodes associated with higher DEC indexes. In this sense, the heuristic for allocating greater network resources to the high-DEC nodes is validated. To further illustrate this, in Figure 9, we plot node reliability gains together with DEC values for the weightless multi-objective optimization results.
The results depicted in Figure 9 show that link allocation was such that reliability gains were higher for nodes with worse associated outage values. The only exception to this relation is the set of Nodes 9–11. However, this behavior can be explained by the fact that, in spite of having lower outage values than Node 10, Nodes 9 and 11 had neighboring nodes (8 and 12) with significantly higher outage values. It is also important to note that the slopes of the gain curves had the same general behavior as the slope of the outage curve, which further suggests greater network resource allocation for worse performing feeders.
We note that, despite a relatively high value of N, the obtained Pareto front does not contain a very large number of solutions. This can be explained by the binary domain of the a_ij, which allows a small number of solutions to maintain dominance throughout the evolution process of NSGA-II. Furthermore, the constraints n_f ≤ n_i ≤ n_e reduce the number of feasible solutions.
As shown in Figure 2, one of the obtained solutions is the ring network itself. This is also a reasonable result: the ring topology has the least cost among all solutions that provide the minimum required connectivity between grid breakers. Therefore, it always maintains dominance with respect to other solutions in terms of the Q objective function.
The weight-assignment method converged to reasonable regions of the solution space for all weight pairs. Furthermore, convergence to a point belonging to the Pareto front was always verified, which is in accordance with the theoretical considerations. In particular, the attribution of high priority to cost correctly resulted in convergence to the ring topology.

5.4. Reliability Estimation Performance

As previously discussed, the parameter K must be adequately chosen for preventing large errors in reliability estimation. In order to validate the estimations of this case study in terms of error and to illustrate the dependence on K, we consider a test scenario in which estimation is prone to much higher errors than in the previously considered real case. For this scenario, we consider the case in which all links have equal and low-valued reliabilities c_ij.
If the individual reliabilities c_ij are equal, the reliability of any path L_k has no dominant terms, as can be seen in Equation (1). This means that excluding paths from the reliability computation causes more significant losses in accuracy. In addition, since the c_ij are low-valued, convergence of the external product series in Equation (1) is slow, and more terms are required to compute C_ij precisely.
In the test case, we used the same system with n = 12 nodes, but with link reliabilities c_ij = 0.1; this value is clearly much lower than those found in the real system, making reliability estimation more demanding. Fifty topologies were randomly generated with a uniform distribution, and their reliabilities were estimated using all values K = 10·i, i = 1, 2, …, 20.
All computed reliability values were normalized in relation to the ones obtained with the parameter K = 10. Denoting by C^(i)(K) the normalized value obtained for the ith topology, we computed the ensemble average C^(avg)(K) = (1/50)·Σ_{i=1}^{50} C^(i)(K) for every K. We also computed the increments ΔC^(avg)(K) = C^(avg)(K+1) − C^(avg)(K) in order to evaluate the convergence of the ensemble average of estimated reliabilities for growing K. The results are given in Figure 10.
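The ensemble-average diagnostic can be sketched as follows; synthetic estimates stand in for the Yen-based reliability values, and the dictionary layout keyed by K is our assumption.

```python
def ensemble_convergence(estimates):
    """Given a dict {K: [C^(i)(K) for each sample topology]}, normalize by
    the smallest K, compute the ensemble average C^(avg)(K), and take its
    increments between consecutive K values, mirroring the convergence
    diagnostic described in the text. Estimates are placeholders for
    path-enumeration-based reliability values."""
    Ks = sorted(estimates)
    base = estimates[Ks[0]]  # normalization reference (smallest K)
    avg = {}
    for K in Ks:
        normalized = [c / b for c, b in zip(estimates[K], base)]
        avg[K] = sum(normalized) / len(normalized)
    # Increments between consecutive K values, keyed by the larger K
    increments = {K2: avg[K2] - avg[K1] for K1, K2 in zip(Ks, Ks[1:])}
    return avg, increments
```

Vanishing increments indicate that additional enumerated paths no longer change the estimated reliability, which is the criterion used to justify the choice of K.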
The results show that the estimation error is rapidly reduced as K increases, which causes rapid convergence of the estimated reliability. As an example, the highlighted point in Figure 10 shows that, for K > 40, the ensemble average C^(avg)(K) increases by less than 0.01 per additional path enumerated with Yen's algorithm. Considering these results, and the fact that the selected test case is more demanding in terms of reliability computation than the optimization problem, our choice of K = 100 in the case study is justified.
We note that this type of test case can be used because the convergence of C^(avg)(K) implies that the estimation error ε(K) = |C^(avg)(K) − C^(avg)| converges to zero, where C^(avg) is the true reliability average. In fact, for growing K, the number of ignored paths decreases and, due to the enumeration of paths with decreasing reliabilities in Yen's algorithm, the increments ΔC^(avg)(K) are decreasing. Finally, since β_ij is finite, it is clear that ε(K) = 0 for K ≥ max_(i,j) β_ij.

5.5. Performance Comparison with the Steepest-Descent Gradient Method

In Section 4, we compared the proposed optimization with gradient-based methods in terms of theoretical computational complexity. To provide a concrete comparison, we also applied a local search-type algorithm to the allocation problem. This procedure was also adopted because, aside from complexity, it is important to test the capacity of different methods to avoid local minima and converge to high-quality solutions.
The chosen algorithm was the standard discrete-domain steepest-descent search [32], which we now briefly describe. Given an arbitrary solution vector a, we define its neighborhood N(a) as the set of all other solutions a_i at unit Hamming distance from a, i.e., ‖a − a_i‖ = 1. We then define the objective function f(a) to be minimized and randomly choose an initial solution a_o. The first step consists of evaluating the objective for all vectors in N(a_o) and returning a* = argmin_{a ∈ N(a_o)} f(a). This vector is then treated as the current solution, and the procedure is iterated by minimizing the objective function over its neighborhood. The algorithm may be stopped after a given number of iterations or according to a specified convergence criterion.
For this approach, we formulate the allocation problem with a single objective function that encompasses reliability, cost, and constraints. Considering that the algorithm is mono-objective and has no priority mechanisms, in each algorithm run, we randomly chose weights for Q and C in order to obtain more varied solutions. Hence, we used the following objective function:
f(a) = r_Q · Q(a)/Q_max − r_C · C(a) + Σ_{i=1}^{n} (d_i^− + d_i^+),
where r_Q ∼ U(0, 1) and r_C = 1 − r_Q, with the weights being regenerated in every algorithm run. The last term is the regularization term and equals the sum of constraint violations for all nodes. We attributed a weight λ = 1 to this term in order to harshly punish unfeasible solutions.
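A sketch of this penalized mono-objective steepest descent follows; the callables standing in for Q, C, and the node degrees are hypothetical, and the two-sided degree penalty is our reading of the constraint violations for n_f ≤ n_i ≤ n_e.

```python
def penalized_objective(a, Q, C, Q_max, n_e, n_f, degree, r_Q):
    """Mono-objective function of the form used above: weighted cost and
    reliability plus a penalty (weight lambda = 1) for every node whose
    connection count falls outside [n_f, n_e]. Q, C and degree are
    hypothetical callables standing in for the real evaluations."""
    r_C = 1.0 - r_Q
    penalty = sum(max(0, n_f - d) + max(0, d - n_e) for d in degree(a))
    return r_Q * Q(a) / Q_max - r_C * C(a) + penalty

def steepest_descent(a0, f, iterations=100):
    """Discrete steepest descent: evaluate f over the Hamming-distance-1
    neighborhood of the current bit vector and move to the best neighbor
    while it improves on the incumbent."""
    a = a0[:]
    for _ in range(iterations):
        neighbors = []
        for i in range(len(a)):
            nb = a[:]
            nb[i] = 1 - nb[i]  # flip one bit
            neighbors.append(nb)
        best = min(neighbors, key=f)
        if f(best) >= f(a):
            break  # local minimum reached
        a = best
    return a
```

Each iteration evaluates the full objective once per neighbor, which makes the per-solution cost of this method visibly heavier than a single NSGA-II fitness evaluation, in line with the complexity comparison of Section 4.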
The optimization was set to 30 steepest-descent runs, with 100 iterations in each run. These numbers were selected to match respectively the generation and population numbers used with NSGA-II.
In order to compare the optimization approaches statistically, each one was run a total of 40 times for the study case. In what follows, we compare the results in terms of time performance, solution dominance, and statistical properties of the obtained hypervolume indicators.

5.5.1. Comparison between Least-Dominated Solution Fronts

As a preliminary means of comparison, among the 40 solution fronts obtained for each optimization method, we selected the one with the smallest number of dominated solutions with respect to the remaining fronts. In Figure 11, we compare the selected least-dominated fronts.
Furthermore, we compare the selected fronts in terms of execution time per solution. Let t_NSGA(i) denote the average evaluation time of all solutions belonging to generation i. Furthermore, define t(i) as the time required for computing the ith solution of the steepest-descent method. In Figure 12, we plot the ratio t(i)/t_NSGA(i) for all i = 1, 2, …, G, where G = 30 is the total number of generations.
The comparisons show that steepest descent optimization yielded a worse solution front than the one obtained with NSGA-II. In fact, Figure 11 shows that all solutions found with the steepest descent were dominated by the NSGA-II solutions with respect to both objectives.
Figure 12 shows that the computational time per solution was significantly higher for steepest descent than for NSGA-II. It is interesting to note that the deduction from Section 4, namely that the per-solution complexity of steepest descent exceeds that of NSGA-II by a factor of order n^2, was validated by the results. Considering that each generation of NSGA-II has N = 100 individuals, the results showed that the execution time for one generation was of the same order of magnitude as the execution time for a single steepest-descent solution. Hence, NSGA-II had the significant advantage of providing a greater variety of solutions than steepest descent within a fixed time interval.

5.5.2. Statistical Comparison

In spite of the conclusions that may be drawn from comparing individual solution front instances, a statistical analysis of the optimization methods must be carried out to evaluate their capability to provide high-quality solution fronts consistently. In order to compare the considered algorithms, we computed the hypervolume indicator and its main statistical indexes for each set of 40 solution fronts.
The hypervolume of a solution front is a measure of its coverage of the solution space. Hence, it is an effective parameter for evaluating solution front quality. The computation of the hypervolume requires the choice of a reference point in the solution space. A heuristic criterion for selecting a reference point that attributes hypervolume values in proportion to solution front quality is given by [33]:
r_1 = r_2 = 1 + 1/(N_p − 1),
where R(r_1, r_2) is the reference point in the normalized solution space coordinates and N_p is the number of solutions in the Pareto front. Considering that N_p ≤ 30 in our experiments, we adopt a conservative value N_p = 10 in Equation (15), which yields a lower bound equal to 1.11. Hence, we chose the normalized reference point R(1.2, 1.2) for computing the front hypervolumes. The solution space cost and reliability values were normalized by their respective maximum values among all obtained solution fronts.
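For two objectives, the hypervolume with respect to a reference point reduces to a sum of rectangle areas swept along the sorted front; a minimal sketch for a minimization front follows (the tuple-based representation is our choice).

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front with respect to the
    reference point ref = (r1, r2): sort by the first objective and sum
    the rectangle each non-dominated point adds below the reference.
    Assumes all front points lie below ref in both coordinates."""
    pts = sorted(front)          # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:         # skip points dominated within the front
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For a single point (0.5, 0.5) and reference (1.2, 1.2), the hypervolume is the rectangle area 0.7 × 0.7 = 0.49; larger values indicate fronts that cover more of the normalized objective space.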
The following summary statistics of normalized hypervolume were computed: minimum value, maximum value, average, and standard deviation. The comparison of hypervolume statistics for both algorithms is given in Table 5, and the boxplot of the corresponding data is given in Figure 13.
In order to further compare the time performance of the algorithms, we also evaluated the execution time per solution for all solution fronts. The corresponding summary statistics and boxplot for this parameter are given in Table 6 and Figure 14, respectively.
The statistical results suggest that the proposed NSGA-II approach yields significantly better time performance and solution front quality than the steepest descent algorithm. The average front hypervolume for NSGA-II was 38% greater than that obtained with steepest descent, whereas the ratio between the average execution times of NSGA-II and steepest descent was 1.93%. As similarly noted in Section 5.5.1, the results also support the conclusion that the time complexity of the proposed method is smaller by a factor of order n^2.

6. Conclusions

In this work, we proposed a new reliability-based heuristic for optimizing optical link deployment in a network for connecting automated power grid breakers. The heuristic consists of attributing link reliability values as functions of grid outage indexes. The proposed heuristic was implemented with an approximate reliability computation method for reducing complexity. Multi-objective optimization considering reliability and network cost was then carried out with NSGA-II. The choice of a genetic algorithm was justified by the difficulties that would be encountered with gradient-descent methods, such as high computational complexity and the involved evaluation of reliability derivatives.
The method was validated with a 12-node case study using real information from the database of the local distribution utility. The results showed that the proposed algorithm, aside from optimizing the objective functions, allocated more network resources to nodes located in feeders with higher outage indexes, as intended. Furthermore, it was shown that the performance of the proposed algorithm was superior in comparison to first-order gradient search, both in terms of quality (evaluated by dominance and hypervolume indicator) of the obtained solution fronts and computation time per solution.
It was shown that the proposed approach has a computational complexity that is polynomial in the number of network nodes. Hence, we believe the proposed method scales well to problems with much larger numbers of nodes.
The main contribution of this work consists of the proposal and validation of a novel procedure for optimizing the allocation of communication links associated with automated power grid breakers. The presented multi-objective approach, coupled with the proposed heuristic, enables the optimization of network topology in terms of cost and reliability while prioritizing reliability gains of breakers associated with high-outage feeders. Furthermore, the output of a Pareto front is useful in aiding the decision-making process by the stakeholders in real applications.
We believe the proposed optimization method is of interest because it simultaneously considers power grid contingency indexes and network cost for optimizing the link allocation. To the best of our knowledge, no previous work has addressed the optimization of communication links between elements in the power grid from this point of view.

Author Contributions

Writing: H.P.C.; revision and editing: F.H.T.V. and S.G.d.A.; software: R.R.d.C.V.; theoretical development: H.P.C. and R.R.d.C.V.; case study acquisition: S.G.d.A.

Funding

This research was funded by CELG-D and FUNAPE under Grant Number 08.161.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

P and P_T: Probabilities of successful transmission and of transmission attempt
R_i: Node i
C_ij: Two-terminal reliability between nodes i and j
β_ij: Number of different paths between nodes i and j
P_{L_k} and C_{L_k}: Probability of transmission failure and success for path k
c_ij and p_ij: Reliability and failure probability of the link between nodes i and j
c_s and c_p: Equivalent reliabilities of series and parallel links
c_i: Reliability of a link that belongs to a series or parallel association
C: Global network reliability
ΔC(R_i, R_j): Reliability contribution of the node pair
A_{n,m}: Number of permutations of n elements taken in groups of m
C_i: Transmission reliability of node i
K: Maximum number of paths in Yen's algorithm
Γ and Γ_reduced: List and reduced list of possible links to be allocated
E: Expected value operator
A and a_ij: Adjacency matrix and its elements
t and Δt: Time instant and time interval
p_i: Measure of failure probability associated with node i
d_ij: Distance between nodes i and j
Q: Network cost
d_i: Constraint associated with node i
n_i: Number of connections of node i
n_e and n_f: Upper and lower limits of node connections
u: Step function
n: Number of network nodes
E_C: Objective function
a: Vector of variables
N_s: Number of feasible solutions
G and N: Number of generations and of individuals per generation
p_c and p_m: Crossover and mutation probabilities
r_Q and r_C: Cost and reliability weights
C_ring and Q_ring: Reliability and cost of the ring topology
C^(i): Normalized reliability value of the ith sample topology
C^(avg): Average normalized value of the sample topologies
Q_max: Cost of the maximum-cost topology
N_p: Number of solutions of the non-dominated front
R(r_1, r_2): Reference point for hypervolume computation

References

  1. National Agency of Electrical Energy (ANEEL). Module 8 of the Procedures for Distribution of Electrical Energy in the National Electric System (PRODIST), 8th ed.; National Agency of Electrical Energy (ANEEL): Sao Paulo, Brazil, 2018.
  2. Billinton, R.; Allan, R. Reliability Evaluation of Engineering Systems: Concepts and Techniques; Springer: New York, NY, USA, 2013.
  3. Blackburn, J.; Domin, T. Protective Relaying: Principles and Applications, 4th ed.; Power Engineering (Willis) Series; Taylor & Francis: New York, NY, USA, 2014.
  4. Schacht, D.; Lehmann, D.; Kalisch, L.; Vennegeerts, H.; Krahl, S.; Moser, A. Effects of configuration options on reliability in smart grids. CIRED-Open Access Proc. J. 2017, 2017, 2250–2254.
  5. Mahmoudi, M.; Fatehi, A.; Jafari, H.; Karimi, E. Multi-objective micro-grid design by NSGA-II considering both islanded and grid-connected modes. In Proceedings of the 2018 IEEE Texas Power and Energy Conference (TPEC), College Station, TX, USA, 8–9 February 2018; pp. 1–6.
  6. Yang, J.; Zhou, C.; Sun, J.; Xu, J.; Qi, J. The NSGA-II based computation for the multi-objective reconfiguration problem considering the power supply reliability. In Proceedings of the 2012 China International Conference on Electricity Distribution, Shanghai, China, 10–14 September 2012; pp. 1–4.
  7. Pombo, A.V.; Murta-Pina, J.; Pires, V.F. Distributed energy resources network connection considering reliability optimization using a NSGA-II algorithm. In Proceedings of the 2017 11th IEEE International Conference on Compatibility, Power Electronics and Power Engineering (CPE-POWERENG), Cadiz, Spain, 4–6 April 2017; pp. 28–33.
  8. Li, Y.; Wang, J.; Zhao, D.; Li, G.; Chen, C. A two-stage approach for combined heat and power economic emission dispatch: Combining multi-objective optimization with integrated decision making. Energy 2018, 162, 237–254.
  9. Li, Y.; Li, Y.; Li, G.; Zhao, D.; Chen, C. Two-stage multi-objective OPF for AC/DC grids with VSC-HVDC: Incorporating decisions analysis into optimization process. Energy 2018, 147, 286–296.
  10. Li, Y.; Feng, B.; Li, G.; Qi, J.; Zhao, D.; Mu, Y. Optimal distributed generation planning in active distribution networks considering integration of energy storage. Appl. Energy 2018, 210, 1073–1081.
  11. Khorram, E.; Jelodar, M.T. PMU placement considering various arrangements of lines connections at complex buses. Int. J. Electr. Power Energy Syst. 2018, 94, 97–103.
  12. Chen, X.; Sun, L.; Chen, T.; Sun, Y.; Rusli; Tseng, K.J.; Ling, K.V.; Ho, W.K.; Amaratunga, G.A.J. Full Coverage of Optimal Phasor Measurement Unit Placement Solutions in Distribution Systems Using Integer Linear Programming. Energies 2019, 12, 1552.
  13. Shafiullah, M.; Abido, M.A.; Hossain, M.I.; Mantawy, A.H. An Improved OPP Problem Formulation for Distribution Grid Observability. Energies 2018, 11, 3069.
  14. Wu, Z.; Du, X.; Gu, W.; Ling, P.; Liu, J.; Fang, C. Optimal Micro-PMU Placement Using Mutual Information Theory in Distribution Networks. Energies 2018, 11, 1917.
  15. Cruz, M.A.; Rocha, H.R.; Paiva, M.H.; Segatto, M.E.; Camby, E.; Caporossi, G. An algorithm for cost optimization of PMU and communication infrastructure in WAMS. Int. J. Electr. Power Energy Syst. 2019, 106, 96–104.
  16. Vigliassi, M.P.; Massignan, J.A.; Delbem, A.C.B.; London, J.B.A. Multi-objective evolutionary algorithm in tables for placement of SCADA and PMU considering the concept of Pareto Frontier. Int. J. Electr. Power Energy Syst. 2019, 106, 373–382.
  17. Peng, C.; Sun, H.; Guo, J. Multi-objective optimal PMU placement using a non-dominated sorting differential evolution algorithm. Int. J. Electr. Power Energy Syst. 2010, 32, 886–892.
  18. Shuvro, R.A.; Wang, Z.; Das, P.; Naeini, M.R.; Hayat, M.M. Modeling impact of communication network failures on power grid reliability. In Proceedings of the 2017 North American Power Symposium (NAPS), Morgantown, WV, USA, 17–19 September 2017; pp. 1–6.
  19. Armendariz, M.; Gonzalez, R.; Korman, M.; Nordström, L. Method for reliability analysis of distribution grid communications using PRMs-Monte Carlo methods. In Proceedings of the 2017 IEEE Power Energy Society General Meeting, Chicago, IL, USA, 16–20 July 2017; pp. 1–5.
  20. Xu, S.; Qian, Y.; Hu, R.Q. Reliable and resilient access network design for advanced metering infrastructures in smart grid. IET Smart Grid 2018, 1, 24–30.
  21. Marseguerra, M.; Zio, E.; Podofillini, L.; Coit, D.W. Optimal design of reliable network systems in presence of uncertainty. IEEE Trans. Reliab. 2005, 54, 243–253.
  22. Kim, Y.H.; Case, K.E.; Ghare, P.M. A Method for Computing Complex System Reliability. IEEE Trans. Reliab. 1972, 21, 215–219.
  23. Melsa, J.; Sage, A. An Introduction to Probability and Stochastic Processes; Dover Books on Mathematics; Dover Publications, Incorporated: New York, NY, USA, 2013.
  24. Yen, J.Y. An algorithm for finding shortest routes from all source nodes to a given destination in general networks. Q. Appl. Math. 1970, 27, 526–530.
  25. Montoya, O.D.; Grajales, A.; Hincapié, R.A.; Granada, M.; Gallego, R.A. Methodology for optimal distribution system planning considering automatic reclosers to improve reliability indices. In Proceedings of the 2014 IEEE PES Transmission Distribution Conference and Exposition-Latin America (PES T&D-LA), Medellin, Colombia, 10–13 September 2014; pp. 1–6.
  26. Keiser, G. Optical Fiber Communications; McGraw-Hill Education: Berkshire, UK, 2010.
  27. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  28. Jain, H.; Deb, K. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach. IEEE Trans. Evol. Comput. 2014, 18, 602–622.
  29. Angelova, M.; Pencheva, T. Tuning Genetic Algorithm Parameters to Improve Convergence Time. Int. J. Chem. Eng. 2011, 2011, 1–7.
  30. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; Wiley Interscience Series in Systems and Optimization; Wiley: New York, NY, USA, 2001.
  31. Bouillet, E.; Ellinas, G.; Labourdette, J.; Ramamurthy, R. Path Routing in Mesh Optical Networks; Wiley: New York, NY, USA, 2007.
  32. Nocedal, J.; Wright, S. Numerical Optimization; Springer Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2006.
  33. Ishibuchi, H.; Imada, R.; Setoguchi, Y.; Nojima, Y. How to Specify a Reference Point in Hypervolume Calculation for Fair Performance Comparison. Evol. Comput. 2018, 26, 411–440.
Figure 1. Breaker coordinates and reference ring topology.
Figure 2. Pareto front obtained for weightless optimization.
Figure 3. Solution S_1 with high reliability in Q-C space.
Figure 4. Solution S_2 with reliability and cost balance in Q-C space.
Figure 5. Solution S_3 with low cost in Q-C space.
Figure 6. Solution P_1 obtained for r_C = 0.8 and r_Q = 0.2.
Figure 7. Solution P_2 obtained for r_C = 0.5 and r_Q = 0.5.
Figure 8. Solution P_3 obtained for r_C = 0.2 and r_Q = 0.8.
Figure 9. Node reliability gains plotted together with respective DEC values.
Figure 10. Normalized reliability values and increments for varying K.
Figure 11. Comparison of the obtained Pareto fronts.
Figure 12. Ratio of average solution evaluation times.
Figure 13. Boxplot of the hypervolume statistics.
Figure 14. Boxplot of the execution time statistics.
Table 1. Coordinates and p_i of network nodes.

Node | Latitude (x) | Longitude (y) | p_i = max_{F_i} DEC(F_i)
1 | −49.269734 | −16.690475 | 0.030
2 | −49.269209 | −16.691483 | 0.135
3 | −49.269359 | −16.686351 | 0.009
4 | −49.256929 | −16.690250 | 0.017
5 | −49.262528 | −16.690768 | 0.278
6 | −49.267333 | −16.684356 | 0.009
7 | −49.263707 | −16.682012 | 0.038
8 | −49.259385 | −16.682414 | 0.135
9 | −49.262919 | −16.677848 | 0.005
10 | −49.265832 | −16.681361 | 0.007
11 | −49.256081 | −16.687550 | 0.002
12 | −49.264181 | −16.686651 | 0.278
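The nomenclature defines d_{ij} as the distance between nodes i and j, and Table 1 supplies the node coordinates. As an illustrative sketch only (not the authors' implementation), such distances can be estimated with the haversine formula. Note that, geographically, the values near −16.69 are latitudes for Goiânia and those near −49.26 are longitudes, so the sketch passes them in that order:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Nodes 1 and 4 from Table 1 (latitude ≈ -16.69, longitude ≈ -49.26)
d_14 = haversine_km(-16.690475, -49.269734, -16.690250, -49.256929)
```

At these inter-node separations (on the order of a kilometer), the haversine estimate is effectively exact for link-length and cost purposes.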
Table 2. Reliability gains (weightless optimization).

Node | C_i/C_i^(ring) (S_3) | C_i/C_i^(ring) (S_2) | C_i/C_i^(ring) (S_1)
1 | 1.0941 | 1.1378 | 1.1599
2 | 1.1311 | 1.1925 | 1.2279
3 | 1.0866 | 1.1280 | 1.1480
4 | 1.1089 | 1.2008 | 1.2469
5 | 1.1744 | 1.2844 | 1.3511
6 | 1.0692 | 1.1171 | 1.1460
7 | 1.0721 | 1.1259 | 1.1497
8 | 1.0887 | 1.1668 | 1.2010
9 | 1.0748 | 1.1341 | 1.1593
10 | 1.0709 | 1.1375 | 1.1467
11 | 1.1061 | 1.1963 | 1.2407
12 | 1.2123 | 1.2970 | 1.3439
Overall | 1.1053 | 1.1733 | 1.2061
Table 3. Chosen weight pairs.

Pair | r_C | r_Q
P_1 | 0.8 | 0.2
P_2 | 0.5 | 0.5
P_3 | 0.2 | 0.8
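The weight pairs in Table 3 scalarize the two objectives, trading cost against reliability. As a hypothetical sketch of how such weights steer the search (the paper's exact normalization and objective form are not reproduced here), one can minimize r_Q·Q_norm − r_C·C_norm over normalized cost and reliability:

```python
def weighted_objective(q_norm, c_norm, r_c, r_q):
    """Scalarized objective, lower is better: penalize normalized cost
    q_norm, reward normalized reliability c_norm. Hypothetical form;
    the weight pairs follow Table 3."""
    return r_q * q_norm - r_c * c_norm

# Weight pairs (r_C, r_Q) from Table 3
P1, P2, P3 = (0.8, 0.2), (0.5, 0.5), (0.2, 0.8)

cheap = (0.2, 0.4)      # (Q_norm, C_norm): low cost, modest reliability
reliable = (0.9, 0.95)  # high cost, high reliability

f_cheap_P3 = weighted_objective(*cheap, *P3)     # 0.8*0.2 - 0.2*0.4 = 0.08
f_rel_P3 = weighted_objective(*reliable, *P3)    # 0.8*0.9 - 0.2*0.95 = 0.53
f_cheap_P1 = weighted_objective(*cheap, *P1)     # 0.2*0.2 - 0.8*0.4 = -0.28
f_rel_P1 = weighted_objective(*reliable, *P1)    # 0.2*0.9 - 0.8*0.95 = -0.58
```

Under the cost-heavy pair P_3 the cheap topology scores better, while under the reliability-heavy pair P_1 the reliable topology wins, which is consistent with the solutions shown in Figures 6–8.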
Table 4. Relative gains (weight optimization).

Node | C_i/C_i^(ring) (P_3) | C_i/C_i^(ring) (P_2) | C_i/C_i^(ring) (P_1)
1 | 1.0000 | 1.1388 | 1.1541
2 | 1.0000 | 1.1924 | 1.2214
3 | 1.0000 | 1.1295 | 1.1453
4 | 1.0000 | 1.2071 | 1.2370
5 | 1.0000 | 1.2858 | 1.3380
6 | 1.0000 | 1.1197 | 1.1382
7 | 1.0000 | 1.1425 | 1.1425
8 | 1.0000 | 1.1735 | 1.1946
9 | 1.0000 | 1.1379 | 1.1565
10 | 1.0000 | 1.1216 | 1.1392
11 | 1.0000 | 1.2027 | 1.2312
12 | 1.0000 | 1.2958 | 1.3333
Overall | 1.0000 | 1.1757 | 1.1987
Table 5. Statistics of the normalized hypervolume.

Parameter | NSGA-II | Steepest Descent
Average | 0.6617 | 0.4795
Standard Deviation | 0.0156 | 0.0267
Maximum Value | 0.6817 | 0.5197
Minimum Value | 0.6363 | 0.4156
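The hypervolume values compared in Table 5 measure how much of the objective space each Pareto front dominates relative to the reference point R(r_1, r_2). For a two-objective front, the computation reduces to summing the rectangles between consecutive sorted points. A minimal sketch, assuming both objectives are cast as minimization (e.g. cost and 1 minus normalized reliability); the paper's exact normalization is not reproduced:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D non-dominated front under minimization,
    with reference point ref = (r1, r2) dominated by every front point."""
    r1, r2 = ref
    pts = sorted(front)  # ascending in f1 -> descending in f2 on a true front
    hv = 0.0
    for i, (x, y) in enumerate(pts):
        # Width of the slab dominated only down to this point's f2 level
        next_x = pts[i + 1][0] if i + 1 < len(pts) else r1
        hv += (next_x - x) * (r2 - y)
    return hv

# Two points (1,3) and (2,1) against reference (4,4): union of the two
# dominated rectangles has area 3 + 6 - 2 = 7
hv = hypervolume_2d([(1.0, 3.0), (2.0, 1.0)], ref=(4.0, 4.0))
```

As [33] discusses, the choice of reference point affects the fairness of such comparisons, which is why Table 5 reports hypervolumes normalized against a common reference.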
Table 6. Statistics of execution time per solution (in seconds).

Parameter | NSGA-II | Steepest Descent
Average | 3.5389 | 183.7337
Standard Deviation | 2.8003 | 6.4778
Maximum Value | 10.6169 | 194.4681
Minimum Value | 1.3776 | 159.0220
