Article

LQR-Based Sparsification Algorithms of Consensus Networks

Janghoon Yang and Yungho Choi
1 AI Software Engineering, Seoul Media Institute of Technology, Seoul 07590, Korea
2 Department of Electrical Engineering, Konkuk University, Seoul 05029, Korea
* Author to whom correspondence should be addressed.
Electronics 2021, 10(9), 1082; https://doi.org/10.3390/electronics10091082
Submission received: 19 March 2021 / Revised: 28 April 2021 / Accepted: 29 April 2021 / Published: 3 May 2021
(This article belongs to the Section Networks)

Abstract

The performance of multiagent systems depends heavily on information flow. As agents are populated more densely, some of the information flow becomes redundant, so there is a tradeoff between communication overhead and control performance. To address this issue, the optimization of the communication topology of the consensus network has been studied. Since the optimal solution requires a brute-force search with exponential complexity, this study proposes three suboptimal topology algorithms that minimize the linear quadratic regulator (LQR) cost with a communication penalty. The first two algorithms minimize the maximum eigenvalue of the Riccati matrix for the LQR, while the third removes edges sequentially in a greedy manner by evaluating the LQR cost directly. The first and second algorithms differ in that the active edges of the consensus network are determined at the end of the iterations in the first, and sequentially in the second. Numerical evaluations show that the proposed algorithms reduce the LQR cost significantly by optimizing the communication topology, and that they can achieve optimal performance for a small consensus network when the penalty parameter is chosen properly. While the three algorithms show similar performance as the number of agents increases, the quantized terminal cost matrix optimization (QTCMO) algorithm requires significantly less complexity, on the order of several tenths of that of the other two algorithms.

1. Introduction

With advances in networking technology, communication speed and delay have been greatly improved. While control systems over networks used to focus on controlling a single device remotely, a large number of devices now operate cooperatively, which is often referred to as a multiagent system [1]. Multiagent systems are employed in many different industries, such as power networks [2], smart factories [3], and smart cities [4]. The control of this type of system can be classified as centralized, decentralized, or distributed. Even though centralized control can provide the best performance under ideal conditions, its operation is often limited by computational complexity and communication overhead. While decentralized control is free from these limitations, its performance is possibly limited by the lack of sufficient information. Thus, distributed control, which uses partial information, has garnered attention recently [5].
Distributed control algorithms are known to exhibit a tradeoff between performance and communication overhead. Some known benefits of distributed algorithms are enhanced security, robustness to the failure of individual agents, and distributed parallel computation [6]. Many different types of distributed algorithms have been developed from constrained optimization theory using Lagrangian decomposition or the Karush–Kuhn–Tucker (KKT) necessary conditions [6]. One particularly interesting field of distributed control is consensus control, where each agent determines its action based on information from its neighbors so that the agents collectively achieve a common goal.
Consensus control has been extensively applied to the cooperative coordination of multiagent systems and to collective distributed estimation over multiple sites [7]. Linear quadratic regulator (LQR) control has often been studied to find optimal or suboptimal consensus algorithms. The optimal LQR control for consensus problems with single integrator dynamics and a single leader was shown to be given by the Laplacian matrix associated with a complete directed graph [8]. A suboptimal consensus LQR control for a hierarchical leader–follower network, where the leader sends information to the followers unidirectionally, was developed with state feedback and output feedback; the proposed algorithm was shown to synchronize nine rolls of a paper converting machine, where the first roll was set as the leader [9]. To further save communication costs, event-triggered control was also introduced for the cyber-physical security problem [10].
The consensus problem has also been applied to the development of distributed estimation. Optimal and suboptimal Kalman Consensus Filters (KCFs) were developed in the form of a distributed Kalman filter to reach a consensus on a state estimate [11]. To overcome the limitations of the KCF due to the lack of information on the cross covariance and its disregard of naive nodes, which do not have valid observations, a generalized KCF (GKCF) was proposed in the form of a weighted average consensus [12]. Surveys on consensus problems can be found in [13] for the static consensus problem, [14] for the dynamic consensus problem, and [15] for optimal consensus protocols.
Previous research has focused on optimizing the network topology either to improve the convergence speed of the consensus or to sparsify the current topology in order to reduce communication overhead or improve the control characteristics, as summarized in Table 1.
One of the major control problems associated with multiagent systems is to optimize the topology to improve control performance and/or communication overhead. Consensus algorithms have been developed and tailored to specific topologies, such as switching topologies [16] and general directed topologies [17]. The optimal topology for the consensus problem of multiagent systems with a single stationary leader was shown to be the star topology for first- and second-order dynamics [18]. Control issues associated with the interaction topology of the agents were surveyed for formation control schemes [19]. Sufficient conditions for mean square consensusability were formulated as mean square stabilization problems in consideration of delay, packet erasure, and network topology [20]. A sufficient condition for the asymptotic stability of a sparse networked control system with distributed observer-based controllers was given as a feasibility problem of a mixed-binary convex program [21]. A delay-aware controller was also proposed to deal with communication delays and communication overhead, such that it promotes sparsity in information transmission and sets the elements of the control gain associated with information that has not arrived due to delay to zero [22]. A novel approach to improving the convergence speed was proposed by exploiting community decomposition, such that topology optimization can be performed through identification of the hub nodes of the decomposed topology [23].
This previous research optimizes the network topology to improve the convergence speed of the consensus, or simply to reduce the communication overhead, without considering the control cost. However, the control cost is one of the most important design parameters to be considered in the design of a controller. Thus, we present three algorithms that optimize the network topology of a consensus problem by removing existing neighbor relations while taking the control cost into account. These algorithms reduce the number of communication links for information flow without creating a completely new topology. To this end, the proposed algorithms remove edges in consideration of the LQR cost, which is likely to provide a good tradeoff between the control cost and the communication overhead. This is critical when a large number of agents are densely located, since redundant information may be exchanged between neighbors, which wastes communication and computing resources. The proposed algorithms optimize the tradeoff between control cost and communication overhead by minimizing the LQR cost; this is the main original contribution of this work.
This paper is organized as follows. In Section 2, a multiagent system with single integrator dynamics and the problem formulation with the LQR cost are presented. Three topology sparsification algorithms are proposed in Section 3. Numerical evaluations of the proposed algorithms are given in Section 4. Finally, concluding remarks are made in Section 5.

2. System Model and Problem Formulation

We consider a distributed control system consisting of $N$ agents, where each agent can communicate with its neighbors only. It is assumed that the dynamics of each agent follow a single integrator model. The corresponding discrete-time system model of the $i$th agent can be given as
$$x_{k+1}^{i} = x_{k}^{i} + T_{s} u_{k}^{i} \qquad (1)$$
where $T_{s}$ is the sampling period and $u_{k}^{i}$ is the control signal of the $i$th agent. When $u_{k}^{i}$ can use only the state information of the neighbors, it can be represented as
$$u_{k}^{i} = -\sum_{j=1}^{N} a_{ij} h_{ij} \left( x_{k}^{i} - x_{k}^{j} \right) \qquad (2)$$
where $a_{ij}$ has a nonzero value if $j$ belongs to the set of neighbors of agent $i$ and is 0 otherwise, and $h_{ij}$ is a binary variable which determines the structure of the information flow. Stacking the control signals of all agents into a single vector, Equation (2) can be collectively represented as
$$u_{k} = -\tilde{L} x_{k} \qquad (3)$$
where $[\tilde{L}]_{i,i} = \sum_{j=1, j \neq i}^{N} a_{ij} h_{ij}$, $[\tilde{L}]_{i,j \neq i} = -a_{ij} h_{ij}$, and $[\,\cdot\,]_{i,j}$ denotes the element in the $i$th row and the $j$th column of the matrix in the bracket. Similarly, the state vector in stacked form can be found from Equations (1) and (3) as
$$x_{k+1} = \left( I - T_{s} \tilde{L} \right) x_{k} \qquad (4)$$
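As a quick illustration of the stacked dynamics in Equations (3) and (4), the following minimal sketch simulates the consensus protocol for scalar agent states; the ring topology, unit weights $a_{ij}$, full edge set $h_{ij} = 1$, and sampling period used here are illustrative choices rather than values from the paper.

```python
import numpy as np

def consensus_step(x, a, h, Ts):
    """One step of Equation (4): x_{k+1} = (I - Ts * L_tilde) x_k."""
    w = a * h                                  # weights of the surviving edges
    L_tilde = np.diag(w.sum(axis=1)) - w       # graph Laplacian of the kept edges
    return (np.eye(len(x)) - Ts * L_tilde) @ x

# Example: 5 agents on a ring with every edge kept (h_ij = 1).
N = 5
a = np.zeros((N, N))
for i in range(N):
    a[i, (i + 1) % N] = a[(i + 1) % N, i] = 1.0
h = np.ones((N, N)) - np.eye(N)
x = np.random.randn(N)
for _ in range(200):
    x = consensus_step(x, a, h, Ts=0.1)
print(np.allclose(x, x.mean()))                # True: agents agree on the average
```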
To determine $h_{ij}$, we consider the LQR control cost $J_{k_T}$, which is defined as
$$J_{k_T} = \sum_{k=0}^{k_T} \sum_{i=1}^{N} \sum_{j=1}^{i-1} \left( x_{k}^{i} - x_{k}^{j} \right)^{T} Q_{ij} \left( x_{k}^{i} - x_{k}^{j} \right) + \sum_{k=0}^{k_T} \sum_{i=1}^{N} \left( u_{k}^{i} \right)^{T} R_{i}\, u_{k}^{i} + \lambda \sum_{i=1}^{N} \sum_{j=1}^{N} h_{ij} \qquad (5)$$
where $k_T$ is the terminal time and $\lambda$ is a parameter which controls the penalty on the number of neighbors.
It is noted that the first term in Equation (5) will be 0 when every agent reaches consensus. Thus, this term represents the quadratic difference in states between all possible pairs of agents. Since adding a constant term for a given $\lambda$ does not change the optimal solution, we consider the following constrained LQR cost:
$$J_{k_T} = \sum_{k=0}^{k_T} \sum_{i=1}^{N} \sum_{j=1}^{i-1} \left( x_{k}^{i} - x_{k}^{j} \right)^{T} Q_{ij} \left( x_{k}^{i} - x_{k}^{j} \right) + \sum_{k=0}^{k_T} \sum_{i=1}^{N} \left( u_{k}^{i} \right)^{T} R_{i}\, u_{k}^{i} + \lambda \left( \sum_{i=1}^{N} \sum_{j=1}^{N} h_{ij} + \sum_{i=1}^{N} d_{i} \right) \qquad (6)$$
$J_{k_T}$ can be rewritten in vector form as
$$J_{k_T} = \sum_{k=0}^{k_T} x_{k}^{T} Q x_{k} + \sum_{k=0}^{k_T} u_{k}^{T} R u_{k} + \lambda \mathbf{1}^{T} \left( D + H \right) \mathbf{1} \qquad (7)$$
where $[H]_{i,j} = h_{ij}$, $[D]_{i,i} = d_{i} > 0$, $[D]_{i,j \neq i} = 0$, $[Q]_{i,i} = \sum_{l=i+1}^{N} Q_{l,i} + \sum_{l=1}^{i-1} Q_{i,l}$, $[Q]_{i,j<i} = -Q_{i,j}$, $[Q]_{i,j>i} = -Q_{j,i}$, $[R]_{i,i} = R_{i}$, $[R]_{i,j \neq i} = 0$, and $\mathbf{1}$ is a column vector of length $N$ with all elements equal to 1.
Since $H$ has binary elements at the nondiagonal positions and 0 at the diagonal positions, its smallest eigenvalue is larger than $-N$. Thus, as long as $d_{i} \geq N$ for all $i$, $D + H$ will be a positive definite matrix. Even though $H$ appears only in the last term of Equation (6), it also affects the control signal $u_{k}$, which is evident from Equation (2). Thus, an algorithm to minimize the LQR cost efficiently is developed in the next section.
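The following sketch, assuming scalar agent states and scalar pairwise weights $Q_{ij}$, builds the stacked matrices of Equation (7) and checks that $x^{T} Q x$ reproduces the pairwise term of Equation (5); the choice $d_i = N$ follows the positive definiteness argument above, and the random weights are illustrative only.

```python
import numpy as np

def stacked_cost_matrices(Qp, Rdiag, h):
    """Qp[i, j] = Q_ij for i > j, Rdiag[i] = R_i, h = binary edge matrix H."""
    N = len(Rdiag)
    Q = np.zeros((N, N))
    for i in range(N):
        for j in range(i):
            q = Qp[i, j]
            Q[i, i] += q                    # diagonal accumulates the pairwise weights
            Q[j, j] += q
            Q[i, j] -= q                    # off-diagonal entries are -Q_ij
            Q[j, i] -= q
    R = np.diag(Rdiag)
    D = N * np.eye(N)                       # d_i = N guarantees D + H > 0
    return Q, R, D + h

# Quick check: x^T Q x equals the sum of the pairwise difference terms.
N = 4
Qp = np.tril(np.random.rand(N, N), k=-1)
h = np.ones((N, N)) - np.eye(N)
Q, R, DH = stacked_cost_matrices(Qp, np.ones(N), h)
x = np.random.randn(N)
pairwise = sum(Qp[i, j] * (x[i] - x[j]) ** 2 for i in range(N) for j in range(i))
print(np.isclose(x @ Q @ x, pairwise), np.all(np.linalg.eigvalsh(DH) > 0))
```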

3. Topology Optimization Algorithms

In this section, we derive a neighbor survival algorithm which keeps some of the neighbors as they are while discarding others. To this end, the finite-horizon LQR cost can be represented through a value function for dynamic programming, which can be posed as
$$V_{k} = x_{k}^{T} Q x_{k} + \min_{u_{k}} \left( u_{k}^{T} R u_{k} + V_{k+1} \right) \qquad (8)$$
For the optimal $u_{k}$, $V_{k}$ is a quadratic function of the state $x_{k}$, which is given by
$$V_{k} = x_{k}^{T} P_{k} x_{k} \qquad (9)$$
where $P_{k}$ is a nonnegative definite matrix for all $k$. From Equations (8) and (9), the LQR Riccati equation can be given as
$$P_{k} = Q + \bar{A}^{T} P_{k+1} \bar{A} - \bar{A}^{T} P_{k+1} \bar{B} \left( R + \bar{B}^{T} P_{k+1} \bar{B} \right)^{-1} \bar{B}^{T} P_{k+1} \bar{A} \qquad (10)$$
where $\bar{A} = I$ and $\bar{B} = T_{s} I$ follow from the stacked dynamics in Equation (4).
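A minimal sketch of the backward recursion in Equation (10) is given below, assuming $\bar{A} = I$ and $\bar{B} = T_s I$ as noted above; the function name and signature are illustrative.

```python
import numpy as np

def lqr_riccati(Q, R, Ts, P_terminal, k_T):
    """Iterate Equation (10) backwards from the terminal condition P_{k_T}."""
    N = Q.shape[0]
    A = np.eye(N)
    B = Ts * np.eye(N)
    P = P_terminal
    for _ in range(k_T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain
        P = Q + A.T @ P @ A - A.T @ P @ B @ K               # Riccati update
    return P
```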
In Equation (7), the last term can be considered as the terminal cost without loss of generality, under the assumption that the average consensus is achieved. When the assumption holds, $x_{k_0+1} = \mu \mathbf{1}$, which results in $\lambda \mathbf{1}^{T} (D + H) \mathbf{1} = \frac{\lambda}{\mu^{2}} x_{k_0+1}^{T} (D + H) x_{k_0+1}$, where $k_0$ denotes the terminal step. Then, the LQR Riccati equation (10) at the terminal step can be written as
$$P_{k_0} = Q + \tilde{\lambda} \bar{A}^{T} Y \bar{A} - \tilde{\lambda} \bar{A}^{T} Y \bar{B} \left( R + \tilde{\lambda} \bar{B}^{T} Y \bar{B} \right)^{-1} \tilde{\lambda} \bar{B}^{T} Y \bar{A} \qquad (11)$$
where $\tilde{\lambda} = \lambda / \mu^{2}$ and $Y > 0$. In addition, an upper bound of $V_{k}$ can easily be derived from the trace inequality $\mathrm{tr}(XY) \leq \mathrm{tr}(X)\,\mathrm{tr}(Y)$ for $X \geq 0$ and $Y \geq 0$ as
$$V_{k} \leq N \rho\!\left( P_{k} \right) \mathrm{tr}\!\left( x_{k} x_{k}^{T} \right) \qquad (12)$$
where $\rho(\cdot)$ denotes the spectral radius of the matrix in the bracket.
Furthermore, the representation of the Riccati operator for the LQR problem in [24] is exploited:
$$\psi(F, X) = \left( \bar{A} - \bar{B} F^{T} \right)^{T} X \left( \bar{A} - \bar{B} F^{T} \right) + Q + F R F^{T} \qquad (13)$$
When $F = \bar{A}^{T} X \bar{B} \left( R + \bar{B}^{T} X \bar{B} \right)^{-1}$ and $X = P_{k_0}$, this corresponds to the Riccati recursion for the LQR problem at the step preceding the terminal one. For a given $X$, the gain $F_{X}$ minimizing $\psi(F, X)$ can easily be calculated from the necessary condition for the quadratic form as
$$F_{X} = \bar{A}^{T} X \bar{B} \left( R + \bar{B}^{T} X \bar{B} \right)^{-1} \qquad (14)$$
It is noted that $\psi(F, X)$ is a monotonically increasing function of $X$ for a fixed $F$, such that $\psi(F, X) \leq \psi(F, Y)$ for $X \leq Y$. From this property, together with the optimality of $F_{X}$, we have the following inequality:
$$\psi\!\left( F_{X}, X \right) \leq \psi\!\left( F_{Y}, X \right) \leq \psi\!\left( F_{Y}, Y \right) \qquad (15)$$
This implies that $\min_{F} \psi(F, X)$ is a monotonically increasing function of $X$ in the following sense [24]:
$$\psi\!\left( F_{X}, X \right) \leq \psi\!\left( F_{Y}, Y \right) \qquad (16)$$
The upper bound of $J_{k_T}$ is given by the upper bound of $V_{0}$ from Equations (12) and (16) as
$$J_{k_T} \leq N \rho\!\left( P_{0} \right) \mathrm{tr}\!\left( x_{0} x_{0}^{T} \right) \leq N \rho\!\left( P_{k_0} \right) \mathrm{tr}\!\left( x_{0} x_{0}^{T} \right) \qquad (17)$$
Thus, we are interested in minimizing the upper bound of $J_{k_T}$ in Equation (17), rather than minimizing $J_{k_T}$ directly, since the original minimization problem involves a set of matrix functions over time. The corresponding optimization problem can be posed as
$$\begin{aligned} \min_{Y} \;\; & \alpha \\ \text{s.t.} \;\; & Q + \tilde{\lambda} \bar{A}^{T} Y \bar{A} - \tilde{\lambda} \bar{A}^{T} Y \bar{B} \left( R + \tilde{\lambda} \bar{B}^{T} Y \bar{B} \right)^{-1} \tilde{\lambda} \bar{B}^{T} Y \bar{A} \preceq \alpha I, \\ & Y = D + H, \quad [H]_{ij} \in \{0, 1\}, \;\; \forall\, i, j \in \{1, 2, \ldots, N\} \end{aligned} \qquad (18)$$
One direct method to find the solution is to try every possible $H$, which results in excessive complexity as the number of agents increases. Thus, we propose an algorithm which iteratively updates the matrix and applies a binary decision to it at the end of the iterations. The proposed algorithm, which we call the quantized terminal cost matrix optimization (QTCMO) algorithm, is summarized in Figure 1. It starts by initializing a nonnegative definite matrix $Y$ as $D$. It also defines a set $S$ that contains all index pairs corresponding to edges between neighboring nodes; that is, $S$ represents the set of indices of the elements of $Y$ that are the optimizing variables. For a given $Y$, $F$ is selected to minimize $\psi(F, Y)$ in step 1. For a given $F$, $Y$ is determined to minimize $\psi(F, Y)$ further in step 2.
In step 2, several constraints are imposed so that the performance of the proposed algorithm can be improved by limiting the feasible region. The constraint $\sum_{l_2=1}^{N} [H]_{l_1, l_2} \geq 1$ is imposed so that each node has at least one neighbor, and $0 \leq [H]_{l_1, l_2} \leq 1$ for $(l_1, l_2) \in S$ is imposed to reduce the quantization error in later stages. Repeating steps 1 and 2 monotonically decreases $\psi(F, Y)$. When it converges, an element-wise rounding operation denoted as $\mathrm{round}(\cdot)$ is applied so that each resulting element of $H^{*}$ is 0 or 1. The condition that each node has at least one edge does not guarantee the connectivity of $H^{*}$. Thus, edges are added sequentially. To this end, $H_{S} = \sum_{l=1}^{N-1} (H^{*})^{l}$ is computed; the nonzero elements of $H_{S}$ represent connectivity through direct or indirect neighbors. The column of $H_{S}$ with the largest number of zeros and a row having 0 at the intersection with that column are selected, and the corresponding position of $H^{*}$ is set to 1. This procedure continues until $H^{*}$ belongs to the set of connected subgraphs of $A$.
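A minimal sketch of one pass of steps 1 and 2 is given below, together with a compact connectivity test in the spirit of $H_S$. It assumes scalar agent states with $\bar{A} = I$ and $\bar{B} = T_s I$, relaxes the binary entries of $H$ to $[0, 1]$, and solves the step-2 subproblem with cvxpy (for a fixed $F$, $\psi(F, D + H)$ is affine in $H$, so minimizing its largest eigenvalue is convex); all names, and the use of cvxpy itself, are illustrative choices rather than the paper's implementation.

```python
import numpy as np
import cvxpy as cp

def is_connected(H_star):
    """(I + H*)^(N-1) has no zero entry iff the graph of H* is connected
    (a compact variant of the reachability matrix H_S used in the text)."""
    N = H_star.shape[0]
    reach = np.linalg.matrix_power(np.eye(N) + H_star, N - 1)
    return bool(np.all(reach > 0))

def qtcmo_pass(Y, Ts, Q, R, D, mask, lam_tilde):
    """One pass of step 1 (F update, Equation (14)) and step 2 (relaxed Y update).
    mask[i, j] = 1 for index pairs in S (existing edges), 0 elsewhere (incl. diagonal)."""
    N = Y.shape[0]
    A, B = np.eye(N), Ts * np.eye(N)
    # Step 1: gain minimizing psi(F, .) for the current scaled terminal matrix.
    F = A.T @ (lam_tilde * Y) @ B @ np.linalg.inv(R + lam_tilde * B.T @ Y @ B)
    # Step 2: with F fixed, minimize the largest eigenvalue of psi(F, D + H).
    H = cp.Variable((N, N), symmetric=True)
    Acl = cp.Constant(A - B @ F.T)                     # closed-loop matrix of Eq. (13)
    psi = Acl.T @ (lam_tilde * (H + D)) @ Acl + cp.Constant(Q + F @ R @ F.T)
    Z = cp.Variable((N, N), symmetric=True)            # symmetric copy for lambda_max
    constraints = [Z == psi,
                   H >= 0, H <= 1,                     # relaxed binary entries
                   cp.multiply(H, 1 - mask) == 0,      # only indices in S are free
                   cp.sum(H, axis=1) >= 1]             # at least one neighbor per node
    cp.Problem(cp.Minimize(cp.lambda_max(Z)), constraints).solve()
    return F, D + H.value
```

Repeating qtcmo_pass until the objective stops decreasing, rounding the relaxed entries of $H$ element-wise, and adding edges until is_connected returns True mirrors the remaining steps described above.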
Proposition 1. 
The Algorithm in Figure 1 Converges Locally.
Proof of Proposition 1. 
$\psi(F, Y)$ monotonically decreases as steps 1 and 2 are repeated, due to Equation (15). Moreover, $\psi(F, Y)$ is lower bounded by $\psi(F_{Y^{*}}, Y^{*})$, where $Y^{*}$ is the optimal solution of Equation (18). Thus, local convergence is guaranteed. □
Even though the proposed algorithm is guaranteed to converge, the quantization process may degrade the performance significantly when the resulting elements of $H^{(i+1)}$ at the end of the iterations are distributed uniformly. To deal with this issue, a greedy terminal cost matrix optimization (GTCMO) algorithm is proposed in Figure 2. It initializes and updates $F$ and $H^{(i+1)}$ in exactly the same way as the QTCMO algorithm. Then, the element of $H^{(i+1)}$ with the largest magnitude and its symmetric counterpart are set to 1 in step 3, and step 4 forces the elements of $H^{(i+1)}$ whose values are less than $\tilde{\varepsilon}$ to 0. Thus, iterating these steps reduces the number of optimizing variables in a greedy way, and the algorithm stops when the number of optimizing variables becomes 0. Even though this algorithm is guaranteed to stop after a finite number of iterations, there is no guarantee of the monotonicity of $Y^{*}$ due to the nonlinearity associated with steps 3 and 4. While the QTCMO algorithm applies quantization once at the end of the iterations, the GTCMO algorithm applies quantization sequentially, so that each quantization decision adapts to the previous ones. The same procedure as in the QTCMO algorithm is applied to add edges until $H^{*}$ is a connected graph.
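A minimal sketch of the greedy quantization in steps 3 and 4 is shown below, operating on the relaxed edge matrix returned by a step-2 update such as the one sketched above; the threshold value and the function name are illustrative, not from the paper.

```python
import numpy as np

def gtcmo_quantize(H_relaxed, free_mask, eps_t=1e-2):
    """Fix the largest free entry (and its symmetric counterpart) to 1, prune small
    free entries to 0, and return the updated matrix plus the remaining free mask."""
    H = H_relaxed.copy()
    free = free_mask.copy()
    mag = np.where(free, np.abs(H), -np.inf)
    i, j = np.unravel_index(np.argmax(mag), H.shape)
    H[i, j] = H[j, i] = 1.0                    # step 3: keep the strongest edge
    free[i, j] = free[j, i] = False
    small = free & (np.abs(H) < eps_t)
    H[small] = 0.0                             # step 4: prune weak edges
    free[small] = False
    return H, free                             # iterate until no free entries remain
```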
One may also optimize the structure directly at the expense of increased complexity. A sequential edge removal (SER) algorithm is summarized in Figure 3. It starts by initializing the set of edges for a given $A$, and an optimizing adjacency matrix $\tilde{A}$ is initialized as $A$. The LQR cost with one edge removed is calculated for each edge whose removal preserves the connectivity of the network. Let the set of connected subgraphs of $\tilde{A}$ be denoted by $C(\tilde{A})$. In step 1, the LQR cost $J(\tilde{A}_{i,j})$ is evaluated, where $\tilde{A}_{i,j}$ is the same as $\tilde{A}$ except that the edge connecting node $i$ and node $j$ is removed. The edge whose removal results in the largest reduction of the LQR cost is selected for removal. This process repeats as long as the sequentially selected edge removal reduces the LQR cost. The maximum number of iterations is guaranteed to be less than $N^{2}$. The complexity associated with checking the connectivity after each edge removal and evaluating the LQR cost is likely to be significant.
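The following sketch captures the SER loop under the assumption that the helpers lqr_cost(adjacency) and is_connected(adjacency) are supplied by the caller (both are placeholders; the connectivity test sketched earlier could serve as the latter): the edge whose removal lowers the LQR cost the most is removed, as long as connectivity is preserved and the cost keeps decreasing.

```python
import numpy as np

def sequential_edge_removal(A_adj, lqr_cost, is_connected):
    A_tilde = A_adj.copy()
    best_cost = lqr_cost(A_tilde)
    while True:
        best_edge, best_trial = None, best_cost
        for i, j in zip(*np.triu_indices_from(A_tilde, k=1)):
            if A_tilde[i, j] == 0:
                continue
            trial = A_tilde.copy()
            trial[i, j] = trial[j, i] = 0          # tentatively remove edge (i, j)
            if not is_connected(trial):
                continue                           # removal would disconnect the graph
            cost = lqr_cost(trial)
            if cost < best_trial:
                best_edge, best_trial = (i, j), cost
        if best_edge is None:                      # no removal reduces the cost further
            return A_tilde
        i, j = best_edge
        A_tilde[i, j] = A_tilde[j, i] = 0
        best_cost = best_trial
```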
The objectives and implementation procedures of the three proposed algorithms are summarized in Table 2.

4. Evaluations of the Algorithms

In this section, the proposed algorithms are evaluated through numerical simulations. Two sets of numerical experiments are considered. The first set characterizes the proposed algorithms with the simple network configuration shown in Figure 4. The second set characterizes the average performance over randomly generated network configurations. The LQR cost, the total number of edges, and the computational complexity are evaluated.
Figure 4 shows a network consisting of five nodes and eight edges. Even though it is not unique, the simplest connected subgraph has four edges. Thus, some edges can be redundant from the perspective of the LQR cost. To provide a baseline for the proposed algorithms, the optimal brute-force search method is considered. This method finds the connected subgraph with the minimum LQR cost by evaluating all possible connected subgraphs. The total numbers of edges obtained by the proposed algorithms are provided in Table 3. Both the optimal method and the GTCMO always find the subgraph with the smallest number of edges regardless of λ. They also provide the smallest LQR cost, even though they are not discernible in Figure 5. The QTCMO algorithm works in a coarse way in the sense that its total number of edges equals that of the optimal method for a large λ, while it keeps all edges for a small λ. The SER method is found to be more responsive to λ. Even though the QTCMO and SER fail to find the optimal sparse network configuration for a small λ, all the proposed algorithms provide optimal performance for a large λ. Figure 5 shows that all methods provide almost the same cost. Since the QTCMO fails to find the optimal number of edges for a small λ, its cost is larger than the optimal cost by up to 24.1%, which occurs when λ = 1.
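For reference, the brute-force baseline can be sketched as an enumeration of all subsets of the existing edges, keeping only the connected configurations and returning the one with the smallest penalized cost; lqr_cost and is_connected are placeholder callables as before, and the loop makes the exponential complexity in the number of edges explicit.

```python
import itertools
import numpy as np

def brute_force_topology(A_adj, lqr_cost, is_connected, lam):
    edges = [(i, j) for i, j in zip(*np.triu_indices_from(A_adj, k=1)) if A_adj[i, j] != 0]
    best, best_cost = None, np.inf
    for r in range(len(edges) + 1):                      # 2^|E| subsets in total
        for subset in itertools.combinations(edges, r):
            trial = np.zeros_like(A_adj)
            for i, j in subset:
                trial[i, j] = trial[j, i] = A_adj[i, j]
            if not is_connected(trial):
                continue
            cost = lqr_cost(trial) + lam * 2 * len(subset)   # edge penalty of Eq. (7)
            if cost < best_cost:
                best, best_cost = trial, cost
    return best, best_cost
```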
To examine the average characteristics of the proposed algorithms, nodes were generated such that their locations were uniformly distributed over a unit square, and nodes within a squared distance of 0.5 were set as neighbors. Two hundred realizations were generated so that the average number of edges, average cost, and average processing time could be calculated, with λ set to 1. Since the complexity of the optimal method is prohibitive, only the proposed methods are compared.
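A minimal sketch of this random network generation is given below; the seed and the helper name are illustrative.

```python
import numpy as np

def random_geometric_adjacency(N, sq_dist_threshold=0.5, seed=0):
    """N nodes uniform on the unit square; neighbors if squared distance < threshold."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, size=(N, 2))
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
    A = (d2 < sq_dist_threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    return A
```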
Figure 6 shows the average number of edges resulting from applying the proposed algorithms. All the proposed algorithms are found to sparsify the network by almost the same amount, even though the resulting networks are often different. The ratio of reduction increases from 28.3% to 78.7% as N increases, since the number of neighbors of a given node tends to increase with N.
The average cost resulting from each algorithm is compared in Figure 7. The larger the number of edges, the larger the resulting cost. Since all the proposed algorithms provide almost the same average number of edges, the costs associated with them are almost the same. Figure 8 shows the computational complexity of the proposed algorithms in terms of average processing time. The average processing time of the QTCMO is the lowest; its number of iterations does not increase much with N, so the increase in complexity depends mainly on the complexity of the optimization with a linear matrix inequality. On the contrary, the complexity of the GTCMO is the largest. The number of iterations associated with its greedy procedure depends on the number of edges, which is proportional to N, whereas the complexity per iteration is larger than that of the QTCMO due to the additional steps. Even though the complexity of the SER algorithm lies in the middle, it depends heavily on the number of control steps; as the number of control steps increases, its complexity is likely to increase proportionally.
The two numerical experiments show that all the proposed algorithms provide similar performance as long as λ is chosen properly. As long as the number of control steps is not too large, the QTCMO algorithm performs well with the least complexity.

5. Conclusions

In this paper, three different algorithms for sparsifying the communication network of the average consensus problem were proposed. The proposed algorithms were designed to minimize the LQR cost either by minimizing the Riccati matrix at the terminal step or by removing edges in a greedy manner. An experiment on a simple hypothetical network showed that all the proposed algorithms achieve near-optimal LQR performance. Another experiment showed that all the proposed algorithms provide almost identical performance, while the QTCMO algorithm can be implemented with the least complexity.
Even though a simple integrator model was considered in this paper, an extension to a conventional linear system model can be easily developed. In addition, by considering disturbance and measurement noise using LQG control, this work can be extended to dynamic consensus problems, such as vehicle formations.
One interesting way to address the complexity of the topology optimization of consensus networks is to exploit state-of-the-art genetic algorithm (GA) methods, which have been found to provide efficient solutions to complex problems such as the traveling salesman problem [25]. An articulated topology optimization may be developed by exploiting the selection and mutation schemes in [26] or Mendel's rule of heredity in [27], which is a possible direction for future research.

Author Contributions

Conceptualization, J.Y. and Y.C.; methodology, J.Y.; software, Y.C.; formal analysis, J.Y.; writing—original draft preparation, J.Y.; writing—review and editing, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea by the Ministry of Science and ICT under Grant NRF-2017R1A2B4007398.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Deloach, S.A.; Wood, M.F.; Sparkman, C.H. Multiagent Systems Engineering. Int. J. Softw. Eng. Knowl. Eng. 2001, 11, 231–258. [Google Scholar] [CrossRef]
  2. McArthur, S.D.J.; Davidson, E.M.; Catterson, V.M.; Dimeas, A.L.; Hatziargyriou, N.D.; Ponci, F.; Funabashi, T. Multi-Agent Systems for Power Engineering Applications—Part I: Concepts, Approaches, and Technical Challenges. IEEE Trans. Power Syst. 2007, 22, 1743–1752. [Google Scholar] [CrossRef] [Green Version]
  3. Wang, S.; Wan, J.; Zhang, D.; Li, D.; Zhang, C. Towards smart factory for industry 4.0: A self-organized multi-agent system with big data based feedback and coordination. Comput. Netw. 2016, 101, 158–168. [Google Scholar] [CrossRef] [Green Version]
  4. Roscia, M.; Longo, M.; Lazaro, C.G. Smart City by multi-agent systems. In Proceedings of the International Conference on Renewable Energy Research and Applications (ICRERA), Madrid, Spain, 26–29 September 2013.
  5. Zhu, Y.; Yang, F.; Li, C.; Zhang, Y. Simultaneous Stability of Large-scale Systems via Distributed Control Network with Partial Information Exchange. Int. J. Control. Autom. Syst. 2018, 16, 1502–1511. [Google Scholar] [CrossRef]
  6. Molzahn, D.K.; Dorfler, F.; Sandberg, H.; Low, S.H.; Chakrabarti, S.; Baldick, R.; Lavaei, J. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems. IEEE Trans. Smart Grid 2017, 8, 2941–2962. [Google Scholar] [CrossRef]
  7. Garin, F.; Schenato, L. A Survey on Distributed Estimation and Control Applications Using Linear Consensus Algorithms. Sens. Control Auton. Veh. 2010, 406, 75–107. [Google Scholar]
  8. Cao, Y.; Ren, W. Optimal Linear-Consensus Algorithms: An LQR Perspective. IEEE Trans. Syst. Man Cybern. Part. B (Cybern.) 2010, 40, 819–830. [Google Scholar]
  9. Nguyen, D.H. A sub-optimal consensus design for multi-agent systems based on hierarchical LQR. Automatica 2015, 55, 88–94. [Google Scholar] [CrossRef]
  10. Li, X.; Zhou, Q.; Li, P.; Li, H.; Lu, R. Event-Triggered Consensus Control for Multi-Agent Systems Against False Data-Injection Attacks. IEEE Trans. Cybern. 2019, 50, 1856–1866. [Google Scholar] [CrossRef] [PubMed]
  11. Olfati-Saber, R. Kalman-Consensus Filter: Optimality, Stability, and Performance. In Proceedings of the Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009. [Google Scholar]
  12. Kamal, A.T.; Ding, C.; Song, B.; Farrell, J.A.; Roy-Chowdhury, A.K. A Generalized Kalman Consensus Filter for Wide-Area Video Networks. In Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011. [Google Scholar]
  13. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and Cooperation in Networked Multi-Agent Systems. Proc. IEEE 2007, 95, 215–233. [Google Scholar] [CrossRef] [Green Version]
  14. Kia, S.S.; Van Scoy, B.; Cortes, J.; Freeman, R.A.; Lynch, K.M.; Martinez, S. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control. Syst. 2019, 39, 40–72. [Google Scholar] [CrossRef] [Green Version]
  15. Sun, H.; Liu, Y.; Li, F.; Niu, X. A survey on optimal consensus of multi-agent systems. In Proceedings of the Chinese Automation Congress (CAC), Jinan, China, 22 October 2017. [Google Scholar]
  16. Zhang, H.; Yang, R.; Yan, H.; Yang, F. H∞ consensus of event-based multi-agent systems with switching topology. Inf. Sci. 2016, 370–371, 623–635. [Google Scholar] [CrossRef] [Green Version]
  17. Wang, H.; Yu, W.; Wen, G.; Chen, G. Fixed-Time Consensus of Nonlinear Multi-Agent Systems with General Directed Topologies. IEEE Trans. Circuits Syst. II Express Briefs 2019, 66, 1587–1591. [Google Scholar] [CrossRef]
  18. Ma, J.; Zheng, Y.; Wang, L. LQR-based optimal topology of leader-following consensus. Int. J. Robust Nonlinear Control 2014, 25, 3404–3421. [Google Scholar] [CrossRef]
  19. Oh, K.-K.; Park, M.-C.; Ahn, H.-S. A survey of multi-agent formation control. Automatica 2015, 53, 424–440. [Google Scholar] [CrossRef]
  20. Zheng, J.; Xu, L.; Xie, L.; You, K. Consensusability of Discrete-Time Multiagent Systems with Communication Delay and Packet Dropouts. IEEE Trans. Autom. Control. 2019, 64, 1185–1192. [Google Scholar] [CrossRef]
  21. Razeghi-Jahromi, M.; Seyedi, A. Stabilization of Networked Control Systems with Sparse Observer-Controller Networks. IEEE Trans. Autom. Control. 2015, 60, 1686–1691. [Google Scholar] [CrossRef] [Green Version]
  22. Dibaji, S.M.; Annaswamy, A.; Chakrabortty, A.; Hussain, A. Sparse and Distributed Control of Wide-Area Power Systems with Large Communication Delays. In Proceedings of the Annual American Control Conference, Wisconsin Center, Milwaukee, WI, USA, 27–29 June 2018. [Google Scholar]
  23. Zhang, M.M.; Yun, H.Q.; Ju, W. The Topology Optimization Rule for Multi-Agent System Fast Consensus. In Proceedings of the 2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC), Xiamen, China, 10–12 August 2018. [Google Scholar]
  24. Gupta, V.; Murray, R.M.; Shi, L.; Sinopoli, B. Networked Sensing, Estimation and Control Systems; Technical Report; Department of Control and Dynamical Systems, California Institute of Technology: Pasadena, CA, USA, 2009. [Google Scholar]
  25. Singh, G.; Gupta, N.; Khosravy, M. New crossover operators for real coded genetic algorithm (RCGA). In Proceedings of the 2015 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 28–30 November 2015; pp. 135–140. [Google Scholar]
  26. Gupta, N.; Patel, N.; Tiwari, B.N.; Khosravy, M. Genetic Algorithm Based on Enhanced Selection and Log-Scaled Mutation Technique. In Proceedings of the Future Technologies Conference, Vancouver, BC, Canada, 15–16 November 2018. [Google Scholar]
  27. Gupta, N.; Khosravy, M.; Patel, N.; Dey, N.; Mahela, O.P. Mendelian evolutionary theory optimization algorithm. Soft Comput. 2020, 24, 1–46. [Google Scholar] [CrossRef]
Figure 1. A quantized terminal cost matrix optimization algorithm.
Figure 2. A greedy terminal cost matrix optimization algorithm.
Figure 3. Sequential edge removal algorithm.
Figure 4. A network consisting of 5 nodes and 8 edges.
Figure 5. The LQR cost of the proposed algorithm for λ.
Figure 6. Average number of edges for different N values.
Figure 7. The LQR cost of the proposed algorithm for different N values.
Figure 8. Average processing time for different N values.
Table 1. The objective and the criterion in existing literature.

References | Objective | Criterion
[18] | Topology optimization | Minimization of the sum of weighted squared consensus error and weighted squared control input
[20] | Feasibility condition of topology | Mean-square consensusability
[21] | Sparsest topology for a distributed observer-controller network | Minimization of the number of edges with constraints on the control gain and stabilization conditions
[22] | Sparsest topology for damping the power oscillations of wide-area power systems | Enhancement of the closed-loop damping factor in consideration of the participation factor
[23] | Sparsification of the existing topology | Acceleration of the convergence speed of the consensus
Proposed algorithms | Sparsification of the existing topology | Minimization of the sum of the LQR cost and the number of edges
Table 2. The summary of the three proposed algorithms.

Algorithms | Objective | Summary of Procedure
QTCMO | $\min_{F,Y} \psi(F, Y)$ | Update $F$ and $Y$ iteratively until convergence; quantize the elements of $H$ as 1 or 0; if the network topology is not connected, add edges
GTCMO | $\min_{F,Y} \psi(F, Y)$ | Update $F$ and $Y$; quantize the maximal element in $H$ as 1; quantize the small elements in $H$ as 0; repeat until convergence; if the network topology is not connected, add edges
SER | $\min_{H} J_{k_T}$ | Find an edge for exclusion; repeat until convergence
Table 3. The resulting total number of edges with the proposed sparsifying algorithms for λ.

λ | Opt. | QTCMO | GTCMO | SER
10^−4 | 4 | 8 | 4 | 8
10^−3 | 4 | 8 | 4 | 8
10^−2 | 4 | 8 | 4 | 6
10^−1 | 4 | 8 | 4 | 4
10^0 | 4 | 8 | 4 | 4
10^1 | 4 | 4 | 4 | 4
10^2 | 4 | 4 | 4 | 4
10^3 | 4 | 4 | 4 | 4