Individual Security and Network Design with Malicious Nodes

1 Institute of Informatics, University of Warsaw, Banacha 2, 02-097 Warsaw, Poland
2 CMAP, École Polytechnique, CNRS and INRIA, Route de Saclay, 91128 Palaiseau CEDEX, France
* Author to whom correspondence should be addressed.
Information 2018, 9(9), 214; https://doi.org/10.3390/info9090214
Submission received: 15 August 2018 / Revised: 21 August 2018 / Accepted: 23 August 2018 / Published: 25 August 2018
(This article belongs to the Section Artificial Intelligence)

Abstract

Networks are beneficial to those being connected but can also be used as carriers of contagious hostile attacks. These attacks are often facilitated by exploiting corrupt network users. To protect against the attacks, users can resort to costly defense. The decentralized nature of such protection is known to be inefficient, but the inefficiencies can be mitigated by a careful network design. Is network design still effective when not all users can be trusted? We propose a model of network design and defense with byzantine nodes to address this question. We study the optimal defended networks in the case of centralized defense and, for the case of decentralized defense, we show that the inefficiencies due to decentralization can be mitigated arbitrarily well when the number of nodes in the network is sufficiently large, despite the presence of the byzantine nodes.

1. Introduction

Game theoretic models of interdependent security have been used to study security of complex information and physical systems for more than a decade [1]. One of the key findings is that the externalities resulting from security decisions made by selfish agents lead to potentially significant inefficiencies. This motivates research on methods for improving information security, such as insurance [2] and network design [3,4]. We study the problem of network design for interdependent security in the setup where a strategic adversary collaborates with some nodes in order to disrupt the network.

1.1. The Motivation

Our main motivation is computer network security in the face of contagious attack by a strategic adversary. Examples of contagious attacks are stealth worms and viruses that gradually spread over the network, infecting subsequent unprotected nodes. Such attacks are considered among the main threats to cyber security [5]. Moreover, the study of the data from actual attacks demonstrates that the attackers spend time and resources to study the networks and choose the best place to attack [5]. Zou et al. and Chen et al. [6,7] developed models of the spread of internet worms, and Dainotti et al. [8] developed an approach to analyse the network traffic generated by internet worms.
Direct and indirect infection can be prevented by taking security measures that are costly and effective (i.e., provide sufficiently high safety to be considered perfect). Examples include using the right equipment (such as dedicated high-quality routers), software (antivirus software, a firewall), and following safety practices. All of these measures are costly. In particular, while acquiring antivirus software is cheap, using it properly is costly, and safety practices may require staff training, staying up to date with possible threats, creating backups, updating software, and hiring specialized, well-paid staff. The security decisions are made individually by selfish nodes. Each node derives benefits from the nodes it is connected to (directly or indirectly) in the network. An example is Metcalfe’s law (attributed to Robert Metcalfe [9], a co-inventor of Ethernet), which states that each node’s benefit from the network is equal to the number of nodes it can reach in the network, so that the value of a connected network is equal to the square of the number of its nodes. An additional threat faced by the nodes in the network is the existence of malicious nodes whose objectives are aligned with those of the adversary: they aim to disrupt the network [10,11].

1.2. Contribution

We study the effectiveness of network design for improving system security with malicious (or byzantine) players and strategic adversary. To this end, we propose and study a three stage game played by three classes of players: the designer, the adversary, and the nodes. Some of the nodes are malicious and cooperate with the adversary. The identity of the nodes is their private information, known to them and to the adversary only. The designer moves first, choosing the network of links between nodes. Then, costly protection is assigned to the nodes. We consider two methods of protection assignments: the centralized one, where the designer chooses the nodes to receive protection, and the decentralized one, where each node decides individually and independently whether to protect or not. Lastly, the adversary observes the protected network and chooses a single node to infect. The protection is perfect and each non-byzantine node can be infected only if she is unprotected. The byzantine nodes only pretend to use the protection and can be infected regardless of whether they are protected or not. After the initial node is infected, the infection spreads to all the nodes reachable from the origin of infection via a path containing unprotected or byzantine nodes. We show that if the protection decisions are centralized, so that the designer chooses both the network and the protection assignment, then either choosing a disconnected network with unprotected components of equal size or a generalized star with protected core is optimal. When protection decisions are decentralized, then, for a sufficiently large number of nodes, the designer can resort to choosing the generalized star as well. In the case of sufficiently well-behaved returns from the network (including, for example, Metcalfe’s law), the protection chosen by the nodes in equilibrium guarantees outcomes that are asymptotically close to the optimum. Hence, in such cases, the inefficiencies due to defense decentralization can be fully mitigated even in the presence of byzantine nodes.

1.3. Related Work

This work falls into the broad research on network security. The majority of research in this area focuses on developing technologies for protecting against, detecting, and mitigating malicious attacks on network components that threaten to compromise the whole network. Recent examples of such technologies are the security solutions for cloud storage and cloud computing systems ([12,13]) and vehicular ad hoc networks ([14,15]). For example, Pooranian et al. [13] address the security of data deduplication in cloud storage. The authors propose a random response approach to deduplication that maintains a reasonable level of gains from deduplication and, at the same time, provides higher privacy by avoiding deterministic responses. Another example is [15], where the authors address the problem of security of vehicular ad hoc networks, considering the scenario of platooning, where a number of vehicles move in a column. They propose methods for mitigating the effects of a malicious attack on a vehicle in the platoon. Yet another example is [16], which studies an approach to the detection of anomalous network events, particularly denial of service attacks.
A suitable application related to our study is wireless sensor networks. This is due to the inherent power and memory limitations of such networks, which motivate economizing on the deployment of security solutions, as well as to the fact that they are often located in hostile environments with intelligent adversaries (see [17] for a survey of wireless sensor network security, [18] for a classification of security threats to wireless sensor networks, and [19,20] for recent contributions addressing the security of such networks when the set of nodes changes dynamically). In line with [18], our work concerns availability, as the primary security goal, and secure localization, as the secondary security goal. Relevant attack types, based on [17], are active attacks that lead to nodes becoming unavailable, which may also prevent communication with other nodes in the network. This includes node outage, node malfunction and worm attacks. Relevant security mechanisms that, in our model, are abstracted as “protection”, are intrusion detection and resilience to node capture. There are two main approaches to providing protection against intrusion and undesirable behaviour of nodes in wireless ad hoc networks: detection and exclusion [21,22,23,24,25], and incentivising the right behaviour. Important examples of the first approach are solutions based on watchdogs ([21,25]) that allow for discovering malicious nodes by equipping network nodes with watchdogs that overhear the traffic of their neighbours. Malicious nodes are then excluded from the network. Kargl et al. [24] propose improved detection methods based on a larger suite of sensors to enhance detection effectiveness. In [22], an exclusion method based on nodes’ reputation is proposed. An approach closer in spirit to the one we consider in this paper is based on creating incentives for the nodes to behave in a desirable manner. The strength of wireless ad hoc networks comes from nodes sharing their computational power to transfer traffic. This, however, is costly and nodes may be selfish, withholding their resources while free riding on the services of other nodes. Buttyán et al. and Zhong et al. [26,27] propose incentivising mechanisms based on virtual currencies or credits.
Our work addresses a similar problem, where nodes in the network contribute to the security of the overall system by making individual protection decisions. Since protection is costly, this creates a danger of selfish behaviour, where some nodes choose no protection and rely on other nodes choosing protection. In contrast to the network traffic problem, described above, the nodes share a common goal in that they all benefit from the network being secure. This allows for creating an incentivising mechanism without money, by choosing the right network topology that makes some of the nodes exposed to attacks in case they do not protect.
There are two, overlapping strands of literature that our work is most related to: the interdependent security games [1] and multidefender security games [28,29,30]. Early research on interdependent security games assumed that the players only care about their own survival and that there are no benefits from being connected [31,32,33,34,35,36,37]. In particular, the authors of [33] study a setting in which the network is fixed beforehand, nodes only care about their own survival, attack is random, protection is perfect, and contagion is perfect: infection spreads between unprotected nodes with probability 1. The focus is on computing Nash equilibria of the game and estimating the inefficiencies caused by defense decentralization. They show that finding one Nash equilibrium is doable in polynomial time, but finding the least or most expensive one is NP-hard. They also point out the high inefficiency of decentralized protection, by showing unboundedness of the price of anarchy. In [34,35], techniques based on local mean field analysis are used to study the problem of incentives and externalities in network security on random networks. In a more recent publication [37], individual investments in protection are considered. The focus is on the strategic structure of the security decisions across individuals and how the network shapes the choices under random versus targeted attacks. The authors show that both under- and overinvestment may be present when protection decisions are decentralized. Slightly different, but related, models are considered in [38,39,40,41,42]. In these models, the defender chooses a spanning tree of a network, while the attacker chooses a link to remove. The defender and the adversary move simultaneously. The attack is successful if the chosen link belongs to the chosen spanning tree. Polynomial time algorithms for computing optimal attack and defense strategies are provided for several variants of this game. For a comprehensive review of interdependent security games, see an excellent survey [1].
Multidefender security games are models of security where two or more defenders make security decisions with regard to nodes, connected in a network, and prior to an attack by a strategic adversary. Each of the defenders is responsible for his own subset of nodes and the responsibilities of different defenders are non-overlapping. The underlying network creates interdependencies between the defenders’ objectives, which result in externalities, like in the interdependent security games. The distinctive feature of multidefender security models is the adopted solution concept: the average case Stackelberg equilibrium. The model is two stage. In the first stage, the defenders commit to mixed strategies assigning different types of security configurations across the nodes. In the second stage, the adversary observes the network and chooses an attack. The research focuses on equilibrium computation and quantification of inefficiencies due to distributed protection decisions.
Papers most related to our work are [3,4,11,43]. The authors of [10] introduce malicious nodes to the model of [33]. The key finding in that paper is that the presence of malicious nodes creates a “fear factor” that reduces the problem of underprotection due to defense decentralization. Inspired by [10,11], we also consider malicious nodes in the context of network defense. We provide a formal model of the game with such nodes as a game with incomplete information. Our contribution, in comparison to [11], lies in placing the players in a richer setup, where nodes care about their connectivity as well as their survival, and where both underprotection (i.e., insufficiently many nodes protect as compared to an optimum) and overprotection (excessively many nodes protect as compared to an optimum) problems are present. This leads to a much more complicated incentives structure. In particular, the presence of malicious nodes may lead to underprotection, as nodes may be unable to secure sufficient returns from choosing protection on their own.
Cerdeiro et al. [3,4] consider the problem of network design and defense prior to the attack by a strategic adversary. In a setting where the nodes care about both their connectivity and their survival, the authors study the inefficiencies caused by defense decentralization and how they can be mitigated by network design. The authors show that both underprotection as well as overprotection may appear, depending on the costs of protection and network topology. Both inefficiencies can be mitigated by network design. In particular, the underprotection problem can be fully mitigated by designing a network that creates a cascade of incentives to protect. Our work builds on [3,4] by introducing malicious nodes to the model. We show how the designer can address the problem of uncertainty about the types of nodes and, at the same time, mitigate the inefficiencies due to defense decentralization. Lastly, in [43], a model of decentralized network formation and defense prior to the attack by adversaries of different profiles is considered. The authors show, in particular, that, despite the decentralized protocol of network formation, the inefficiencies caused by defense decentralization are relatively low.
The rest of the paper is structured as follows. In Section 2, we define the model of the game, which we then analyze in Section 3. In Section 4, we discuss possible modifications of our model. We provide concluding remarks in Section 5. Appendix A contains the proofs of the most technical results.

2. The Model

There are $(n+2)$ players: the designer (D), the nodes (V), and the adversary (A). In addition, each of the nodes is of one of two types: a genuine node (type 1) or a byzantine node (type 0). We assume that there are at least $n \geq 3$ nodes and that there is a fixed number $n_B \geq 1$ of byzantine nodes. The byzantine nodes cooperate with the adversary and their identity is known to A. All the nodes know their own type only. On the other hand, the adversary has complete information about the game. We suppose that he infects a subset of $n_A \geq 1$ nodes. A network over a set of nodes V is a pair $G = (V, E)$, where $E \subseteq \{ij : i, j \in V\}$ is the set of undirected links of G. Given a set of nodes V, $\mathcal{G}(V)$ denotes the set of all networks over V and $\mathcal{G} = \bigcup_{U \subseteq V} \mathcal{G}(U)$ is the set of all networks that can be formed over V or any of its subsets. The game proceeds in four rounds (the numbers $n \geq 3$, $n_B \geq 1$, $n_A \geq 1$ are fixed before the game):
  • The types of the nodes are realized.
  • D chooses a network $G \in \mathcal{G}(V)$, where $\mathcal{G}(V)$ is the set of all undirected networks over V.
  • Nodes from V observe G and choose, simultaneously and independently, whether to protect (what we denote by 1) or not (denoted by 0). This determines the set of protected nodes Δ . The protection of the byzantine nodes is fake and, when attacked, such a node gets infected and transmits the infection to all her neighbors.
  • A observes the protected network $(G, \Delta)$ and chooses a subset $I \subseteq V$ consisting of $|I| = n_A \geq 1$ nodes to infect. The infection spreads and eliminates all unprotected or byzantine nodes reachable from I in G via a path that does not contain a genuine protected node from $\Delta$. This leads to the residual network obtained from G by removing all the infected nodes.
Payoffs to the players are based on the residual network and costs of defense. The returns from a network are measured by a network value function $\Phi : \bigcup_{U \subseteq V} \mathcal{G}(U) \to \mathbb{R}$ that assigns a numerical value to each network that can be formed over a subset U of nodes from V.
A path in G between nodes $i, j \in V$ is a sequence of nodes $i_0, \ldots, i_m \in V$ such that $i = i_0$, $j = i_m$, $m \geq 1$, and $i_{k-1} i_k \in E$ for all $k = 1, \ldots, m$. Node j is reachable from node i in G if $i = j$ or there is a path between them in G. A component of a network G is a maximal set of nodes $C \subseteq V$ such that for all $i, j \in C$, $i \neq j$, i and j are reachable in G. The set of components of G is denoted by $\mathcal{C}(G)$. Given a network G and a node $i \in V$, $C_i(G)$ denotes the component $C \in \mathcal{C}(G)$ such that $i \in C$. Network G is connected if $|\mathcal{C}(G)| = 1$.
We consider the following family of network value functions:
$\Phi(G) = \sum_{C \in \mathcal{C}(G)} f(|C|),$
where the function $f : \mathbb{R}_{\geq 0} \to \mathbb{R}$ is increasing, strictly convex, satisfies $f(0) = 0$, and, for all $x \geq 1$, satisfies the inequalities
$f(3x) \geq 2 f(2x), \qquad f(3x+2) \geq f(2x+2) + f(2x+1).$
In other words, the value of a connected network is an increasing and strictly convex function of its size. The value of a disconnected network is equal to the sum of the values of its components. These assumptions reflect the idea that each node derives additional utility from every node she can reach in the network. In the last property, we assume that these returns are sufficiently large: the returns from increasing the size of a component by $50\%$ are higher than the returns from adding an additional, separate component of the same size to the network. Such a form of network value function is in line with Metcalfe’s law, where the value of a connected network over x nodes is given by $f(x) = x^2$, as well as with Reed’s law, where the value of a connected network is of exponential order with respect to the number of nodes (e.g., $f(x) = 2^x - 1$).
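To make this concrete, here is a minimal Python sketch (ours, not from the paper) that evaluates $\Phi$ under Metcalfe's law $f(x) = x^2$ and checks the two inequalities above for small x.

# Illustrative sketch (not from the paper): network value under Metcalfe's law.
def f(x):
    # Metcalfe's law: value of a connected network of size x.
    return x * x

def network_value(component_sizes):
    # Phi(G) = sum of f(|C|) over the components C of G.
    return sum(f(s) for s in component_sizes)

# The two inequalities assumed for f, checked for small x.
assert all(f(3 * x) >= 2 * f(2 * x) for x in range(1, 100))
assert all(f(3 * x + 2) >= f(2 * x + 2) + f(2 * x + 1) for x in range(1, 100))

# Example: a connected network on 10 nodes versus two components of size 5.
print(network_value([10]), network_value([5, 5]))  # 100 vs 50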
Before defining the payoff to a node from a given network, defense, and attack, we formally define the residual network. Given a network $G = (V, E)$ and a set of nodes $Z \subseteq V$, let $G \setminus Z$ denote the network obtained from G by removing the nodes from Z and their connections from G. Thus, $G \setminus Z = (V \setminus Z, E[V \setminus Z])$, where $E[V \setminus Z] = \{ij \in E : i, j \in V \setminus Z\}$. Given defense $\Delta$ and the set of byzantine nodes B, the graph $A(G, \Delta, B) = G \setminus (\Delta \setminus B)$ is called the attack graph. By infecting a node $i \in V$, the adversary eliminates the component of i in the attack graph, $C_i(A(G, \Delta, B))$ (we define $C_i(A(G, \Delta, B)) = \emptyset$ for every $i \in \Delta \setminus B$). Hence, if the adversary infects a subset $I \subseteq V$ of nodes, then the residual network (i.e., the network that remains) after such an attack is $R(G, \Delta, B, I) = G \setminus \bigcup_{i \in I} C_i(A(G, \Delta, B))$.
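The attack graph and the residual network are straightforward to compute. The following sketch (an illustration we add; it assumes the networkx library, which is not mentioned in the paper) follows the definitions above.

# Illustrative sketch: attack graph and residual network (assumes networkx).
import networkx as nx

def attack_graph(G, protected, byzantine):
    # A(G, Delta, B) = G minus the genuinely protected nodes (Delta \ B).
    return G.subgraph(set(G.nodes) - (set(protected) - set(byzantine))).copy()

def residual_network(G, protected, byzantine, attacked):
    A = attack_graph(G, protected, byzantine)
    infected = set()
    for i in attacked:
        if i in A:  # C_i is empty for genuinely protected nodes
            infected |= nx.node_connected_component(A, i)
    return G.subgraph(set(G.nodes) - infected).copy()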
Nodes’ information about whether they are genuine or byzantine is private. Similarly, the adversary’s information about the identity of the byzantine nodes is private. As usual in games with incomplete information, private information of the players is represented by their types. The type of a node $i \in V$ is represented by $\theta_i \in \{0, 1\}$ ($\theta_i = 1$ means that i is genuine and $\theta_i = 0$ means that i is byzantine) and the type of the adversary is represented by $\theta_A \in \binom{V}{n_B}$ (given a finite set X, $\binom{X}{t}$ denotes the set of subsets of X of cardinality t). A vector $\theta = (\theta_1, \ldots, \theta_n, \theta_A)$ of players’ types is called a type profile. The type profile must be consistent so that the byzantine nodes are really known to the adversary. The set of consistent type profiles is
$\Theta = \{(\theta_1, \ldots, \theta_n, \theta_A) : \theta_A = \{i \in V : \theta_i = 0\}, |\theta_A| = n_B\}.$
Remark 1.
We point out that $B \subseteq V$ is the set of byzantine nodes (i.e., the true state of the world) while $\theta_A$ denotes the beliefs of the adversary. The consistency assumption implies that the beliefs of the adversary are correct and $\theta_A = B$.
The adversary aims to minimize the gross welfare (i.e., the sum of the nodes’ gross payoffs), which is equal to the value of the residual network. Given a network G, the set of protected nodes $\Delta$, and the type profile $\theta \in \Theta$, the payoff to the adversary from infecting the set of nodes I is
$u_A(G, \Delta, I \mid \theta) = -\Phi(R(G, \Delta, B, I)) = -\sum_{C \in \mathcal{C}(R(G, \Delta, B, I))} f(|C|).$
The designer aims to maximize the value of the residual network minus the cost of defense. Notice that this cost includes the cost of defense of the byzantine nodes. Formally, the designer’s payoff from network G under defense $\Delta$, the set of infected nodes I, and the type profile $\theta$ is equal to
$u_D(G, \Delta, I \mid \theta) = \Phi(R(G, \Delta, B, I)) - |\Delta| c = \sum_{C \in \mathcal{C}(R(G, \Delta, B, I))} f(|C|) - |\Delta| c.$
The gross payoff to a genuine (i.e., not byzantine) node $j \in V$ in a network G is equal to $f(|C_j(G)|)/|C_j(G)|$. In other words, each genuine node gets an equal share of the value of her component. The net payoff of a node is equal to the gross payoff minus the cost of protection. A genuine node gets payoff 0 when removed. Defense has cost $c \in \mathbb{R}_{>0}$. The byzantine nodes have the same objectives as the adversary and their payoff is the same as that of A. Formally, the payoff to node $j \in V$ given network G with defended nodes $\Delta$, set of infected nodes I, and type profile $\theta \in \Theta$ is equal to
$u_j(G, \Delta, I \mid \theta) = \begin{cases} u_A(G, \Delta, I \mid \theta) & \text{if } \theta_j = 0, \\ \dfrac{f(|C_j(R(G,\Delta,B,I))|)}{|C_j(R(G,\Delta,B,I))|} & \text{if } \theta_j = 1,\ j \notin \Delta,\ \text{and } j \notin \bigcup_{i \in I} C_i(A(G,\Delta,B)), \\ \dfrac{f(|C_j(R(G,\Delta,B,I))|)}{|C_j(R(G,\Delta,B,I))|} - c & \text{if } \theta_j = 1 \text{ and } j \in \Delta, \\ 0 & \text{if } \theta_j = 1,\ j \notin \Delta,\ \text{and } j \in \bigcup_{i \in I} C_i(A(G,\Delta,B)). \end{cases}$
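For illustration only, the sketch below evaluates these payoffs given a residual network R; the helper names and the use of Metcalfe's law are our assumptions, not part of the paper.

# Illustrative sketch: payoffs given a residual network R (assumes networkx).
import networkx as nx

def f(x):                       # Metcalfe's law, as in the example above
    return x * x

def payoffs(G, R, protected, byzantine, c):
    # node -> net payoff; byzantine nodes share the adversary's objective.
    welfare = sum(f(len(C)) for C in nx.connected_components(R))
    pay = {}
    for j in G.nodes:
        if j in byzantine:
            pay[j] = -welfare                       # u_A = -Phi(R)
        elif j in R:
            comp = nx.node_connected_component(R, j)
            pay[j] = f(len(comp)) / len(comp) - (c if j in protected else 0.0)
        else:
            pay[j] = 0.0                            # genuine node removed by the attack
    return pay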
The adversary and the byzantine nodes make choices that maximize their utility. The designer and the nodes have incomplete information about the game and we assume that they are pessimistic, making choices that maximize their utility under the worst possible type realization [44]. Formally, the pessimistic utility of a genuine (i.e., of type $\theta_j = 1$) node j from network G, set of protected nodes $\Delta$, and set of infected nodes I, is
$\widehat{U}_j(G, \Delta, I) = \inf_{(\theta_{-j}, 1) \in \Theta} u_j(G, \Delta, I \mid (\theta_{-j}, 1)).$
Similarly, the pessimistic utility of the designer from network G, a set of protected nodes $\Delta$, and a set of infected nodes I, is
$\widehat{U}_D(G, \Delta, I) = \inf_{\theta \in \Theta} u_D(G, \Delta, I \mid \theta).$
To summarize, the set of players is $P = V \cup \{D, A\}$. The set of strategies of player D is $S_D = \mathcal{G}(V)$. A strategy of each node j is a function $\delta_j : \mathcal{G}(V) \times \{0, 1\} \to \{0, 1\}$ that, given network $G \in \mathcal{G}(V)$ and the node’s type $\theta_j \in \{0, 1\}$, provides the defense decision $\delta_j(G, \theta_j)$ of the node. The individual strategies of the nodes determine a function $\Delta : \mathcal{G}(V) \times \{0, 1\}^V \to 2^V$ providing, given network $G \in \mathcal{G}(V)$ and the nodes’ type profile $\theta_{-A} \in \{0, 1\}^V$, the set of defended nodes $\Delta(G \mid \theta_{-A}) = \{j \in V : \delta_j(G, \theta_j) = 1\}$. The set of strategies of each node $j \in V$ is $S_j = \{0, 1\}^{\mathcal{G}(V) \times \{0, 1\}}$. A strategy of player A is a function $x : \mathcal{G}(V) \times 2^V \times \binom{V}{n_B} \to \binom{V}{n_A}$ that, given network $G \in \mathcal{G}(V)$, set of protected nodes $\Delta \subseteq V$, and the adversary’s type $\theta_A \in \binom{V}{n_B}$, provides the set of nodes to infect $x(G, \Delta, \theta_A)$. The set of strategies of player A is $S_A = \binom{V}{n_A}^{\mathcal{G}(V) \times 2^V \times \binom{V}{n_B}}$.
Abusing the notation slightly, we use the same notation for the utilities of the players from the strategy profiles in the game. Thus, given strategy profile $(G, \Delta, x)$ and type profile $\theta$, the payoff to player $j \in V \cup \{D, A\}$ is $u_j(G, \Delta, x \mid \theta) = u_j(G, \Delta(G \mid \theta_{-A}), x(G, \Delta(G \mid \theta_{-A}), \theta_A) \mid \theta)$, the pessimistic payoff to player $j \in V \setminus B$ is
$\widehat{U}_j(G, \Delta, x) = \inf_{(\theta_{-j}, 1) \in \Theta} u_j(G, \Delta(G \mid \theta_{-A}), x(G, \Delta(G \mid \theta_{-A}), \theta_A) \mid (\theta_{-j}, 1)),$
and the pessimistic payoff to the designer is given by
$\widehat{U}_D(G, \Delta, x) = \inf_{\theta \in \Theta} u_D(G, \Delta(G \mid \theta_{-A}), x(G, \Delta(G \mid \theta_{-A}), \theta_A) \mid \theta).$
By convention, we say that the pessimistic payoff of a byzantine node is the same as her payoff. We are interested in subgame perfect mixed strategy equilibria of the game with the preferences of the players defined by the pessimistic payoffs. We call them the equilibria, for short. We make the usual assumption that, when evaluating a mixed strategy profile, the players consider the expected value of their payoffs from the pure strategies. In the case of the designer and the genuine nodes, these are expected pessimistic payoffs.
Throughout the paper, we will also refer to the subgames ensuing after network G is chosen. We will denote such subgames by $\Gamma(G)$ and call them network subgames. We will abuse the notation by using the same letters to denote the strategies in $\Gamma(G)$ and in $\Gamma$. The set of strategies of each node $i \in V$ in game $\Gamma(G)$ is $\{0, 1\}^{\{0, 1\}}$. Given type profile $\theta_{-A} \in \{0, 1\}^V$, the individual strategies of the nodes determine a function $\Delta : \{0, 1\}^V \to 2^V$ that provides the set of defended nodes $\Delta(\theta_{-A}) = \{j \in V : \delta_j(\theta_j) = 1\}$. The set of strategies of the adversary in $\Gamma(G)$ is $\binom{V}{n_A}^{2^V \times \binom{V}{n_B}}$.
All the key notations are summarized in Table 1.

2.1. Remarks on the Model

We make a number of assumptions that, although common for interdependent security games, are worth commenting on. Firstly, we assume that protection is perfect. This assumption is reasonable when the available means of protection are considered sufficiently reliable and, in particular, deter the adversary towards the unprotected nodes. Arguably, this is the case for the protection means used in cybersecurity. Secondly, we assume that the designer and the genuine nodes are pessimistic and maximize their worst-case payoff. Such an approach is common in computer science and is in line with trying to provide worst-case guarantees on system performance. One can also take a probabilistic approach (by supposing that the distribution of the byzantine nodes is given by a random variable). In Section 4, we discuss how our results carry over to such a model.

3. The Analysis

We start the analysis by characterizing the centralized defense model, where the designer chooses both the network and the defense assignment to the nodes. After that, the adversary observes the protected network and the nodes’ types and chooses the nodes to infect. We focus on the first nontrivial case $n_B = n_A = 1$. In this case, we are able to characterize the networks that are optimal for the designer. The topology of these networks is based on generalized k-stars. We then turn to the decentralized defense and study the cost of decentralization. It turns out that the topology of the k-star gives an asymptotically low cost of decentralization not only for the simple case studied earlier but for all possible values of the parameters $n_B$ and $n_A$. This is enough to prove our main result, Theorem 1, providing bounds on the price of anarchy.

3.1. Centralized Defense

Fix the parameters $n_B$ and $n_A$ and suppose that the designer chooses both the network and the protection assignment. This leads to a two-stage game where, in the first round, the designer chooses a protected network $(G, \Delta)$ and, in the second round, the adversary observes the protected network and the nodes’ types (recognizing the byzantine nodes) and chooses the nodes to attack. Payoffs to the designer and to the adversary are as described in Section 2 and we are interested in subgame perfect mixed strategy equilibria of the game with pessimistic preferences of the designer. We call them equilibria, for short. Notice that, since the decisions are made sequentially, there is always a pure strategy equilibrium of this game. In this section, we focus only on such equilibria. Furthermore, the equilibrium payoff to the designer is the same for all equilibria. We denote this payoff by $\widehat{U}_D(n, c)$.
In the rest of this subsection, we focus on the case $n_B = n_A = 1$. In this case, when the protection is chosen by the designer, two types of protected networks can be chosen in an equilibrium (depending on the value function and the cost of defense): a disconnected network with no defense or a generalized star with protected core and, possibly, one or two unprotected components. Before stating the result characterizing equilibrium defended networks and equilibrium payoffs to the designer, we need to define the key concept of a generalized star and some auxiliary quantities. We start with the definition of a generalized star. If $G = (V, E)$ is a network and $V' \subseteq V$ is a subset of nodes, then we denote by $G[V']$ the subnetwork of G induced by $V'$, i.e., the network $G[V'] = (V', \{ij \in E : i, j \in V'\})$.
Definition 1 
(Generalized k-star). Given $k \geq 1$ and a set of nodes, V, a generalized k-star over V is a network $G = (V, E)$ such that the set of nodes V can be partitioned into two sets, C (the core) of size $|C| = k$ and P (the periphery), in such a way that $G[C]$ is a clique, every node in P is connected to exactly one node in C, and every node in C is connected to $\lfloor n/k \rfloor - 1$ or $\lceil n/k \rceil - 1$ nodes in P.
Roughly speaking, a generalized k-star is a core-periphery network with the core consisting of k nodes and the periphery consisting of the remaining $n - k$ nodes. The core is a clique, each periphery node is connected to exactly one core node, and the periphery nodes are distributed evenly across the core nodes. An example of a generalized star is depicted in Figure 1.
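A generalized k-star is easy to construct programmatically. The sketch below (ours; it assumes networkx) spreads the periphery as evenly as possible over the core, as required by Definition 1.

# Illustrative sketch: build a generalized k-star on n nodes (assumes networkx).
import networkx as nx
from itertools import combinations

def generalized_k_star(n, k):
    G = nx.Graph()
    G.add_nodes_from(range(n))
    core = list(range(k))
    G.add_edges_from(combinations(core, 2))         # the core is a clique
    for idx, p in enumerate(range(k, n)):            # periphery nodes k, ..., n-1
        G.add_edge(p, core[idx % k])                 # spread them evenly over the core
    return G, set(core)

G, core = generalized_k_star(12, 4)
# Each core node has floor(12/4) - 1 = 2 periphery neighbours here.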
Now, we turn to defining some auxiliary quantities. For any $n \geq 3$ such that $n \bmod 6 \neq 3$, we define
$w_0(n) = w_1(n) = f\left(\left\lfloor \tfrac{n}{2} \right\rfloor\right) + f(1) \cdot \mathbf{1}\{n \bmod 2 = 1\},$
and, for every n such that $n \bmod 6 = 3$, we define
$w_0(n) = w_1(n) = \max\left\{ 2 f\left(\tfrac{n}{3}\right),\ f\left(\tfrac{n-1}{2}\right) + f(1) \right\}.$
Given n nodes, w 0 ( n ) is the maximal network value the designer can secure against a strategic adversary by choosing an unprotected network composed of three components of equal size or two components of equal size and possibly one disconnected node. This is also the maximal network value the designer can secure by choosing such a network with one protected node because, in the worst case scenario, the protected node is byzantine and may be infected.
For every $k \in \{3, \ldots, n\}$, let
$w_k(n) = \begin{cases} f\left(n - 1 - \tfrac{n-1}{k}\right) + f(1) & \text{if } n \bmod k = 1, \\ f\left(n - \left\lceil \tfrac{n}{k} \right\rceil\right) & \text{otherwise}. \end{cases}$
Given n nodes and $k \geq 3$, $w_k(n)$ is the network value that the designer can secure by choosing a generalized k-star, with one node disconnected in the case of k dividing $n - 1$, having all core nodes protected and all periphery nodes unprotected.
We also define the following quantities:
$A_q = \min\left\{ f(n - q),\ f\left(\left\lfloor \tfrac{n-q}{2} \right\rfloor\right) + f(q) \right\}, \qquad B_q = \min\left\{ f(n - q - 1),\ f\left(\left\lfloor \tfrac{n-q-1}{2} \right\rfloor\right) + f(q) \right\},$
$h_q(n) = \max\left\{ A_q,\ B_q + f(1) \right\},$
$w_2(n) = \max_{q \in \{0, \ldots, n-2\}} h_q(n).$
Given n nodes, $w_2(n)$ is the network value that the designer can secure by choosing a network composed of a generalized 2-star with a protected core and unprotected periphery, an unprotected component (of size $q \in \{0, \ldots, n-2\}$), and possibly one node disconnected from both of these components.
Finally, we define
$K^*(n, c) = \operatorname*{arg\,max}_{k \in \{0, \ldots, n\}} \left( w_k(n) - kc \right).$
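To make these quantities concrete, the following sketch (ours, assuming Metcalfe's law $f(x) = x^2$ and the formulas as reconstructed above) computes $w_k(n)$ for every k and the maximizer set $K^*(n, c)$; sweeping over c in this way produces tables in the spirit of Example 1 below.

# Illustrative sketch: w_k(n) and K*(n, c) for f(x) = x^2 (formulas as stated above).
from math import ceil

def f(x):
    return x * x

def w(n, k):
    if k in (0, 1):                                  # no (useful) protection
        if n % 6 != 3:
            return f(n // 2) + (f(1) if n % 2 == 1 else 0)
        return max(2 * f(n / 3), f((n - 1) / 2) + f(1))
    if k == 2:                                       # protected 2-star plus a free component
        def h(q):
            A = min(f(n - q), f((n - q) // 2) + f(q))
            B = min(f(n - q - 1), f((n - q - 1) // 2) + f(q))
            return max(A, B + f(1))
        return max(h(q) for q in range(0, n - 1))
    # k >= 3: generalized k-star with fully protected core
    if n % k == 1:
        return f(n - 1 - (n - 1) // k) + f(1)
    return f(n - ceil(n / k))

def K_star(n, c):
    values = {k: w(n, k) - k * c for k in range(0, n + 1)}
    best = max(values.values())
    return {k for k, v in values.items() if v == best}, best

print(K_star(12, 5.0))   # optimal number of protected nodes and designer payoff for n = 12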
We point out that $K^*(n, c)$ never contains 1 (because $c > 0$). We are now ready to state the result characterizing the equilibrium defended networks and the pessimistic equilibrium payoff to the designer.
Proposition 1.
Let $n_B = n_A = 1$, $n \geq 3$, $c > 0$, and $k \in K^*(n, c)$. Then, the pessimistic equilibrium payoff to the designer is equal to $\widehat{U}_D(n, c) = w_k(n) - kc$. Moreover, there exists an equilibrium network $(G, \Delta)$ that has $|\Delta| = k$ protected nodes and the following structure:
(i) 
G has at most three connected components.
(ii) 
If $k \geq 3$ and $n \bmod k \neq 1$, then G is a generalized k-star with protected core and unprotected periphery.
(iii) 
If $k \geq 3$ and $n \bmod k = 1$, then G is composed of a generalized k-star of size $(n - 1)$ with protected core and unprotected periphery and a single unprotected node.
(iv) 
If $k = 0$ and $n \bmod 6 \neq 3$, then G has two connected components of size $\lfloor n/2 \rfloor$ and, if $n \bmod 2 = 1$, a single unprotected node.
(v) 
If $k = 0$ and $n \bmod 6 = 3$, then G either has the structure described in point (iv) or G is composed of three components of size $n/3$, depending on the term achieving the maximum in Equation (4).
(vi) 
If $k = 2$, then G is composed of a generalized 2-star with protected core and unprotected periphery, an unprotected component of size $q \in \{0, \ldots, n-2\}$ and, possibly, a single unprotected node. The size q is the number achieving the maximum in Equation (6). The existence of a single unprotected node depends on the term achieving the maximum in Equation (5).
The intuition behind this result is as follows. When the cost of defense is high, the designer is better off by not using any defense and partitioning the network into several components. Since the strategic adversary will always eliminate a maximal such component, the designer has to make sure that all the components are equally large. Due to the divisibility problems, one component may be of lower size. Thanks to our assumptions on the component value function f, the number of such components is at most three. Moreover, if there are exactly three components, then they are of equal size or the smallest one has size 1.
When the cost of defense is sufficiently low, it is profitable for the designer to protect some nodes. If the number of protected nodes is not smaller than 3 then, by choosing a generalized k-star with a fully protected core (of optimal size $k \geq 3$ depending on the cost) and unprotected periphery, the designer knows that the strategic adversary is going to attack either the byzantine node (if she is among the core nodes) or any unprotected node (otherwise). An attack on the byzantine core node destroys that node and all periphery nodes attached to her. Thus, in the worst case, a core node with the largest number of periphery nodes connected to her is byzantine. By distributing the periphery nodes evenly across the core, the designer minimizes the impact of this worst case scenario. Due to the divisibility problems, it may happen that some of the core nodes are connected to a higher number of periphery nodes. If this is the case for one core node only, then it is better for the designer to disconnect one periphery node of that core node from the generalized star. By doing so, the designer spares this node from destruction.
The case when there are exactly two protected nodes is special. Indeed, in this case, choosing a generalized 2-star with a protected core is not better than using no protection at all. This is because, in the worst case, the byzantine node is among the two protected ones. Therefore, it would be better for the designer to split the network into two unprotected components: this would result in the same network value after the attack without the need to pay the cost of protection. On the other hand, if the network consists of a generalized 2-star with protected core and an unprotected component, then the argument above ceases to be valid: even if the byzantine node is among the protected ones, attacking it destroys only her part of the star, and the adversary may instead prefer to destroy the unprotected component. Therefore, a protection of two nodes may be used as a resource that ensures that one component survives the attack.
It is interesting to compare this result to an analogous result obtained in [3,4] for a model without byzantine nodes. There, depending on the cost of protection, three equilibrium protected networks are possible: an unprotected disconnected network (like in the case with a byzantine node), a centrally protected star, and a fully protected connected network. The existence of a byzantine node leads to a range of core-protected networks between the centrally protected star and the fully protected clique (which is a generalized n-star). Notice that a pessimistic attitude towards incomplete information results in the star network never being optimal: if only one node is protected then, in the worst case, the designer expects this node to be byzantine, which leads to losing all nodes after the attack by the adversary. Therefore, at least two nodes must be protected if protection is used in an equilibrium. Proof of Proposition 1 is given in Appendix A.
Example 1.
Fix a network value function f and suppose that $n_B = n_A = 1$. Then, for a given $(n, c)$, Proposition 1 leads to a pseudopolynomial-time algorithm for finding an equilibrium network $(G, \Delta)$. More precisely, one can simply enumerate all possible networks discussed in Proposition 1 and choose the one with the maximal payoff to the designer. Table 2 presents how the optimal network changes for different cost values when $f(x) = x^2$ and $n \in \{12, 30, 50\}$. For these values of n, it is never optimal to have one node that is disconnected from the rest of the network. Moreover, as we can see, for a given number n of nodes, not all possible generalized k-stars arise as optima. It is interesting to note that 3-stars have never appeared in our experiments as optimal networks for the value function $f(x) = x^2$. Similarly, we have not found an example where it is optimal to defend exactly 2 nodes. The case where there is no defense but the network is split into 3 equal parts arises when $n = 9$ and the cost is high enough (i.e., $c > 6.2$), as already established in [3].
Remark 2.
In this section, we have characterized the optimal networks for the case $n_B = n_A = 1$. Nevertheless, we have not found a network that has a substantially different structure than the ones described here and performs better for general values of $n_B$ and $n_A$. We therefore suspect that the characterization for the general case is similar to the case $n_B = n_A = 1$.

3.2. Decentralized Defense

Now, we turn attention to the variant of the model where defense decisions are decentralized. Our goal is to characterize the inefficiencies caused by decentralized protection decisions for general values of $n_B$ and $n_A$. To this end, we need to compare the equilibrium payoffs to the designer under centralized and decentralized defense. We start by establishing two results about the existence of equilibria in the decentralized defense game.
Firstly, since the game is finite, we get equilibrium existence by the Nash theorem. Notice that our use of the pessimistic aggregation of the incomplete information about types of nodes determines a game where the utilities of the nodes and the designer are defined by the corresponding pessimistic utilities. This game is finite and, by Nash theorem, it has a Nash equilibrium in mixed strategies. This leads to the following existence result.
Proposition 2.
There exists an equilibrium of Γ.
Proof. 
It can be shown that a stronger statement holds. More precisely, one can prove that, for any $n, c$, there exists an equilibrium e such that the strategies of the nodes do not depend on their types. Let us sketch the proof. We consider a modified model in which the nodes do not know their types (i.e., every node thinks that she is genuine, but some of them are byzantine). In this model, the (mixed) strategies of nodes are functions $\tilde{\delta}_j : \mathcal{G}(V) \to \Sigma(\{0, 1\})$ (if X is a finite set, then by $\Sigma(X)$ we denote the set of all probability measures on X), and every node receives the pessimistic utility of a genuine node, as defined in Equation (2). The strategies and payoffs to the adversary and the designer are as in the original model. Let $\bar{x} : \mathcal{G}(V) \times 2^V \times \binom{V}{n_B} \to \binom{V}{n_A}$ denote any optimal strategy of the adversary (i.e., a function that, given a defended network and the position of the byzantine nodes B, returns a subset of nodes that is optimal to infect in this situation). If we fix $\bar{x}$, then the game turns into a two-stage game (the designer makes his action first and then the nodes make their actions) with complete information. Therefore, this game has a subgame perfect equilibrium in mixed strategies. This equilibrium, together with $\bar{x}$, forms an equilibrium e in the original model because, in the original model, a byzantine node cannot improve her payoff by a unilateral deviation. ☐
Fix the parameters $n_B, n_A$ and let $\varepsilon(n, c)$ denote the set of all equilibria of $\Gamma$ with n nodes and the cost of protection $c > 0$. Let $\widehat{U}_D(n, c)$ denote the best payoff the designer can obtain in the centralized defense game (as discussed in Section 3.1). The price of anarchy is the fraction of this payoff over the minimal payoff to the designer that can be attained in an equilibrium of $\Gamma$ (for the given cost of protection c),
$\mathrm{PoA}(n, c) = \frac{\widehat{U}_D(n, c)}{\min_{e \in \varepsilon(n, c)} \mathbb{E}\, \widehat{U}_D(e)}.$
Although pure strategy equilibria may not exist for some networks, they always exist on generalized stars. Moreover, when these stars are large enough, by choosing such a star, the designer can ensure that all genuine core nodes are protected. This is enough to characterize the price of anarchy as n goes to infinity (with a fixed cost c). The next proposition characterizes equilibria on generalized stars.
Proposition 3.
Let $e \in \varepsilon$ be any equilibrium of Γ. Let $G = (V, E)$ be a generalized k-star. Denote $|V| = n$, $x = \lceil n/k \rceil - n_A + 1$, and $y = n - n_B \lceil n/k \rceil$. Furthermore, suppose that $n \geq k \geq n_B + 1$ and $x \geq 2$. If the cost value c belongs to one of the intervals $(0, f(1))$, $(f(1), f(x)/x)$, $(f(y)/y, +\infty)$, then the following statements about e restricted to $\Gamma(G)$ hold:
  • all genuine nodes use pure strategies,
  • if $c < f(1)$, then all genuine nodes are protected,
  • if $f(1) < c < f(x)/x$, then all genuine core nodes are protected and all genuine periphery nodes are not protected,
  • if $f(y)/y < c$, then all genuine nodes are not protected.
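As a reading aid only (ours, and dependent on the reconstruction of x and y in the statement above), the following sketch reports which of the three cost regimes a given generalized k-star falls into.

# Illustrative sketch: cost regimes of Proposition 3 (x and y as reconstructed above).
from math import ceil

def f(x):
    return x * x            # Metcalfe's law, used here as an example

def regime(n, k, n_A, n_B, c):
    x = ceil(n / k) - n_A + 1
    y = n - n_B * ceil(n / k)
    assert k >= n_B + 1 and x >= 2, "hypotheses of Proposition 3 not met"
    if c < f(1):
        return "all genuine nodes protect"
    if f(1) < c < f(x) / x:
        return "genuine core nodes protect, genuine periphery does not"
    if c > f(y) / y:
        return "no genuine node protects"
    return "cost falls outside the three intervals covered by the proposition"

print(regime(n=50, k=10, n_A=1, n_B=1, c=3.0))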
Our main result estimates the price of anarchy using Proposition 3.
Theorem 1.
Suppose that, for all $t \geq 0$, the function f satisfies
$\lim_{n \to +\infty} f(n) / f(n - t) = 1.$
Then, for any cost level $c > 0$ and any fixed parameters $n_B \geq 1$, $n_A \geq 1$, we have
$\lim_{n \to +\infty} \mathrm{PoA}(n, c) = 1.$
Proof. 
We start by noting that the function f is superadditive (cf. Lemma A1 in Appendix A). By superadditivity of f, the pessimistic payoff to the designer can be trivially bounded by $\widehat{U}_D(n, c) \leq f(n)$. We now want to give a lower bound on the quantity $\min_{e \in \varepsilon(n, c)} \mathbb{E}\, \widehat{U}_D(e)$. As we show in Appendix B (Lemma A5), $\lim_{x \to +\infty} f(x)/x = +\infty$. Let $N \geq 1 + n_A$ be a natural number such that $f(x)/x > c$ for all $x \geq N - n_A + 1$. For any $n \geq (n_B + 1)(N + 1)$, we define $k = \lfloor n/(N+1) \rfloor \geq n_B + 1$. Observe that, if we denote $x = \lceil n/k \rceil - n_A + 1$, then we have $x \geq n/k - n_A \geq N - n_A + 1$. Hence, if the designer chooses a generalized k-star, then Proposition 3 shows that all genuine core nodes are protected in any equilibrium. In particular, we have $\min_{e \in \varepsilon(n, c)} \mathbb{E}\, \widehat{U}_D(e) \geq f\left(n - n_B \lceil n/k \rceil - n_A + 1\right) - nc$. Moreover, we can estimate
$\left\lceil \frac{n}{k} \right\rceil \leq \frac{n}{k} + 1 \leq \frac{n}{\frac{n}{N+1} - 1} + 1 \leq \frac{n}{\frac{n}{2(N+1)}} + 1 = 2N + 3.$
Hence, using Lemma A5, we get
$\lim_{n \to +\infty} \mathrm{PoA}(n, c) \leq \lim_{n \to +\infty} \frac{f(n)}{f(n - n_B(2N+3) - n_A + 1) - nc} = \lim_{n \to +\infty} \frac{1}{\frac{f(n - n_B(2N+3) - n_A + 1)}{f(n)} - \frac{nc}{f(n)}} = 1.$
 ☐
Remark 3.
Notice that the condition of Theorem 1 is verified for $f(x) = x^a$ with $a \geq 2$. Hence, in the case of such functions f, the price of anarchy tends to 1, so the inefficiencies due to decentralization are fully mitigated by the network design. This is true, in particular, for Metcalfe’s law.
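As a numerical illustration (ours), the sketch below evaluates the upper bound on the price of anarchy used in the proof of Theorem 1 for Metcalfe's law, with N chosen as a smallest admissible constant; the bound visibly approaches 1 as n grows.

# Illustrative sketch: the PoA upper bound f(n) / (f(n - n_B*(2N+3) - n_A + 1) - n*c)
# from the proof of Theorem 1, evaluated for f(x) = x^2.
def f(x):
    return x * x

def poa_upper_bound(n, c, n_A=1, n_B=1):
    # Smallest N >= 1 + n_A with f(x)/x > c for all x >= N - n_A + 1.
    # For convex f with f(0) = 0, f(x)/x is nondecreasing, so checking the smallest x suffices.
    N = max(1 + n_A, int(c) + n_A)
    while f(N - n_A + 1) / (N - n_A + 1) <= c:
        N += 1
    surviving = n - n_B * (2 * N + 3) - n_A + 1
    # Meaningful once n is large enough for the denominator to be positive.
    return f(n) / (f(surviving) - n * c)

for n in (100, 1000, 10000, 100000):
    print(n, round(poa_upper_bound(n, c=5.0), 4))  # tends to 1 as n grows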

4. Extensions of the Model

In the previous section, we have shown that the topology of the generalized k-star mitigates the costs of decentralization in our model. Nevertheless, our approach can be used to show similar results in a number of modified models. For instance, one could consider a probabilistic model, in which $n_B$ byzantine nodes are randomly picked from the set of nodes V (and the distribution of this random variable is known to all players). Then, the designer and the nodes optimize their expected utilities, not the pessimistic ones (where the expectation is taken over the possible positions of the byzantine nodes). In this case, we still can give a partial characterization of Nash equilibria on generalized k-stars. More precisely, one can show that, if the assumptions of Proposition 3 are fulfilled and $f(1) < c < f(x)/x$, then all genuine core nodes are protected. This is exactly what we need in the proof of Theorem 1. Therefore, the price of anarchy in the probabilistic model also converges to 1 as the size of the network increases.

5. Conclusions

We studied a model of network defense and design in the presence of an intelligent adversary and byzantine nodes that cooperate with the adversary. We characterized optimal defended networks in the case where defense decisions are centralized, assuming that the number of byzantine nodes and the number of attacked nodes are equal to one. We have also shown (for any number of byzantine and attacked nodes) that, in the case of sufficiently well-behaved functions f (including f in line with Metcalfe’s law), careful network design allows for mitigating the inefficiencies due to decentralized protection decisions arbitrarily well when the number of nodes in the network is sufficiently large, despite the presence of the byzantine nodes. In terms of network design, we showed that a generalized star is a topology that can be used to achieve this goal. This topology creates incentives for protection by two means. Firstly, it is sufficiently redundant, so that the protected nodes are connected to several other protected nodes. This secures adequate network value even if some of these nodes are malicious. Secondly, it gives sufficient exposure to the nodes, encouraging the nodes that would benefit from protection to choose to protect through fear of being infected (either directly or indirectly). These results could be valuable, in particular, to policy-makers and regulators, showing that such regulations can have a strong effect and provide hints for which network structures are better and why.
An interesting avenue for future research is to consider a setup where not only the identities but also the number of byzantine nodes is unknown. How would the optimal networks look if the protection decisions are centralized? Can we still mitigate the inefficiencies caused by decentralization? Another interesting problem is to characterize the optimal networks under centralized protection when the number of byzantine nodes or the budget of the adversary is greater than 1. Based on our experiments, we suspect that the topology of these networks is very similar to the case considered here. Nevertheless, a formal result remains elusive.

Author Contributions

T.J. provided all the formal results together with proofs, figures, and numerical solutions. M.S. extended the analysis and simplified some proofs. The first two authors wrote Section 4 and Appendix A. M.D. proposed the problem of network design and defense with byzantine nodes, contributed to model formalization, checked the proofs, and wrote Section 1 and Section 2.

Funding

This work was supported by Polish National Science Centre through Grant No. 2014/13/B/ST6/01807.

Acknowledgments

Mateusz Skomra is supported by a grant from Région Ile-de-France.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Characterization of Equilibria in the Centralized Defense Model

In this section, we prove the characterization of equilibria given in Proposition 1. We start with some auxiliary lemmas.
Lemma A1.
For all $x, y > 0$, we have $f(x + y) > f(x) + f(y)$.
Proof. 
From the strict convexity of f, we have $f(x) = f\left(\frac{x}{x+y}(x+y) + \frac{y}{x+y} \cdot 0\right) < \frac{x}{x+y} f(x+y)$. Analogously, $f(y) < \frac{y}{x+y} f(x+y)$. Hence, $f(x+y) > f(x) + f(y)$. ☐
Lemma A2.
For every $t > 0$, the function $\hat{g}_t : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ defined as $\hat{g}_t(x) = f(x + t) - f(x)$ is strictly increasing.
Proof. 
Let $0 < x < y$. First, use the left inequality from Equation (A4) on the tuple $(x, x+t, y+t)$ and then use the right inequality on the tuple $(x, y, y+t)$. ☐
Corollary A1.
For all $y \geq 1$ and $x \geq 2y$, we have $f(x + y) \geq f(x) + f(2y)$. For all $y \geq 1$ and $x \geq 2y + 2$, we have $f(x + y) \geq f(x) + f(2y + 1)$.
Proof. 
Both claims follow from Lemma A2 applied to $\hat{g}_y$ and Equation (1). ☐
Lemma A3.
Suppose that $(G, \Delta)$ is an equilibrium network and that $|\Delta| = k \geq 2$. Then, there is a network $(G', \Delta')$ such that $(G', \Delta')$ is also an equilibrium network, $|\Delta'| = k$, all nodes from $\Delta'$ belong to the same connected component of $G'$, this component is a generalized k-star, and $\Delta'$ is the core of this star. Furthermore, the component of $G'$ that contains $\Delta'$ is strictly larger than the other components of $G'$.
Proof. 
We will show how to transform $(G, \Delta)$ into $(G', \Delta')$ without diminishing the pessimistic payoff to the designer. First, if two nodes $i, j \in \Delta$ are protected, then we add an edge between them. This does not decrease the designer’s payoff because there is only one byzantine node; hence, any attack infects at most one of the nodes i, j and the residual network after the attack is not smaller than before the addition of the edge. Therefore, we can suppose that the subnetwork $G[\Delta]$ is a clique. We focus on the connected component C of G that contains this clique. We will show that the remaining nodes of C can be distributed in such a way that they form the periphery of a generalized k-star.
Let $G = (V, E)$. For any $i \in V$, let $V_i \subseteq V$ denote the set of nodes that get infected if i is byzantine and gets infected. In other words, $V_i$ contains i and all unprotected nodes $j \in V \setminus \Delta$ such that there is a path from i to j that passes only through unprotected nodes. We refer to Figure A1 for an example. Observe that any optimal attack of the adversary that infects a node from G infects in fact a set of nodes $V_j$. Indeed, if this attack infects the byzantine node $\theta_A$, then the set of infected nodes is equal to $V_{\theta_A}$. If, instead, this attack infects an unprotected genuine node i, then the set of infected nodes is equal to $V_i$.
We do the following operation. We fix $i \in \Delta$, we take all unprotected nodes that belong to $V_i$, we delete all of their edges and, for every such node j, add the edge ij. An example of this operation is depicted in Figure A1. We will show that this operation does not decrease the pessimistic payoff to the designer. Denote the new network by $\widetilde{G} = (V, \widetilde{E})$, and the corresponding sets by $\widetilde{V}_\ell$ for $\ell \in V$. By the discussion in the preceding paragraph, it is enough to prove that, for every $\ell \in V$, the connected components of the network $G \setminus V_\ell$ do not get smaller after our operation. Suppose that $j_0 j_1 \in E$ is an edge in $G \setminus V_\ell$ for some $\ell \in V$. We will prove that the node $j_1$ is still reachable from $j_0$ in the network $\widetilde{G} \setminus \widetilde{V}_\ell$. First, we need to prove that $j_0, j_1$ do not belong to $\widetilde{V}_\ell$. Indeed, if $\ell = i$, then the claim is obvious because $\widetilde{V}_i = V_i$. Otherwise, a path from $\ell$ to $j_p$ (for $p \in \{0, 1\}$) that goes through unprotected nodes in $\widetilde{G}$ cannot contain a node from $\widetilde{V}_i$, because the unprotected nodes in $\widetilde{V}_i$ have degree 1 and are connected to the protected node i. Thus, any such path does not contain a node from $V_i$, and hence it is also a path in G. Therefore, $j_0, j_1 \notin \widetilde{V}_\ell$. We can now prove that $j_1$ is reachable from $j_0$ in $\widetilde{G} \setminus \widetilde{V}_\ell$. If $j_0, j_1 \notin V_i$, then $j_0 j_1$ is an edge in $\widetilde{E}$ and the claim is true. Otherwise, we have two possibilities. If both nodes $j_0, j_1$ belong to $V_i$, then $j_0 i j_1$ is a path in $\widetilde{G}$. If only one of them belongs to $V_i$, then the second one must belong to $\Delta$, and hence $j_0 i j_1$ is still a path in $\widetilde{G}$ (because protected nodes form a clique in $\widetilde{G}$). Moreover, the node i does not belong to $V_\ell$ because $i \in \Delta$ and $\ell \neq i$. Therefore, the path $j_0 i j_1$ belongs to $\widetilde{G} \setminus \widetilde{V}_\ell$. We can repeat this reasoning for every edge in $G \setminus V_\ell$. As a consequence, if two nodes $j, j' \in V$ are connected by a path in $G \setminus V_\ell$, then they are still connected by a path in $\widetilde{G} \setminus \widetilde{V}_\ell$. Therefore, our operation does not decrease the pessimistic payoff of the designer.
We can repeat the operation presented above for every protected node $i \in \Delta$. As a result, we get a network $(G', \Delta')$ such that $G'[\Delta']$ is a clique and every unprotected node that belongs to the component C containing this clique has degree 1. It remains to prove that these nodes can be distributed evenly among the core protected nodes. Suppose that there are two protected nodes $i, j \in \Delta$ such that $|V_i| \geq |V_j| + 2$ (where the sets $V_\ell$ are defined as previously). We take an unprotected node $\ell \in V_i$, delete the edge $i\ell$ and add the edge $j\ell$. This operation does not decrease the pessimistic payoff to the designer. Indeed, if the adversary infects a node in a component different than C, then the payoff to the designer does not change. Otherwise, the pessimistic utility to the designer is achieved when the adversary infects a byzantine node $i^* \in \Delta$ such that the set $V_{i^*}$ has maximal cardinality. Hence, this payoff does not decrease after our operation.
Finally, if the component of $G'$ that contains $\Delta'$ is smaller than or equal to a component that does not contain any protected node, then it is more profitable for the adversary to infect this unprotected component. Hence, the designer could strictly improve his payoff by not using any protection at all, $\Delta = \emptyset$, which gives a contradiction to our assumptions. ☐
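The star-forming operation used in this proof can be sketched in a few lines of code (ours; it assumes networkx): compute the set $V_i$ and reattach its unprotected nodes directly to the protected node i.

# Illustrative sketch: the rewiring step from the proof of Lemma A3 (assumes networkx).
import networkx as nx

def infected_set(G, protected, i):
    # V_i: node i plus all unprotected nodes reachable from i through unprotected nodes.
    H = G.subgraph((set(G.nodes) - set(protected)) | {i})
    return nx.node_connected_component(H, i)

def rewire_onto(G, protected, i):
    # Attach every unprotected node of V_i directly (and only) to the protected node i.
    Vi = infected_set(G, protected, i)
    H = G.copy()
    for j in Vi - {i}:
        H.remove_edges_from(list(H.edges(j)))
        H.add_edge(i, j)
    return H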
Figure A1. Transforming a network into a generalized star. Protected nodes are depicted in bold. In the left picture, we have $V_1 = \{1, 7, 8, 9, 10, 11, 12\}$, $V_3 = \{3, 5, 6, 7, 8, 9, 10, 11, 12\}$, and $V_5 = \{5\}$. The node 7 is reachable from 4 in $G \setminus V_5$. In the right picture, the nodes $\{7, 8, 9, 10, 11, 12\}$ have degree 1 and are connected to node 1. The node 7 is still reachable from 4 in $\widetilde{G} \setminus \widetilde{V}_5$.
Proof of Proposition 1. 
Let ( G , Δ ) be an equilibrium network. Let C 1 , C 2 , , C m denote its connected components, s i = | C i | for all i 1 , and assume that s 1 s 2 s m . We will show how to transform ( G , Δ ) into an equilibrium network with the topology described in the claim. Let k = | Δ | and observe that k 1 . Indeed, if | Δ | = 1 , then the pessimistic payoff to the designer is strictly larger if he stops protecting the one protected node (because, in the worst case scenario, this node is byzantine). First, suppose that k 3 . By Lemma A3, we can assume that C 1 is a generalized k-star with protected core and that s 1 > s 2 . We want to find an equilibrium in which s 2 { 0 , 1 } . If s 2 2 , then we consider the following transformation of the network ( G , Δ ) : we take the unprotected component C 2 and move all of its nodes to C 1 , spreading them in a regular fashion as depicted in Figure A2. More formally, we consider a network ( G , Δ ) such that G consists of a connected component C 1 having a topology of a generalized k-star of size s 1 + s 2 with protected core and unprotected components C 3 , C 4 , , C m . We will show that the network ( G , Δ ) gives a payoff to the designer that is no smaller than the payoff obtained from choosing ( G , Δ ) . First, observe that if the adversary infects the component C 2 in ( G , Δ ) , then the pessimistic payoff to the designer is equal to P 1 = f ( s 1 ) + f ( s 3 ) + f ( s 4 ) + + f ( s m ) . Moreover, if the adversary infects the component C 1 in ( G , Δ ) , then the pessimistic payoff to the designer is equal to
P_2 = f(s_1 − ⌈s_1/k⌉) + f(s_2) + f(s_3) + ⋯ + f(s_m).
Hence, the pessimistic payoff to the designer from playing (G, Δ) is equal to min{P_1, P_2}. On the other hand, his payoff from playing (G′, Δ) is equal to min{T_1, T_2}, where T_1 = f(s_1 + s_2) + f(s_4) + ⋯ + f(s_m) and
T_2 = f(s_1 + s_2 − ⌈(s_1 + s_2)/k⌉) + f(s_3) + ⋯ + f(s_m).
Therefore, it is enough to show that min{T_1, T_2} ≥ min{P_1, P_2}. By Lemma A1, we get f(s_1 + s_2) ≥ f(s_1) + f(s_2) ≥ f(s_1) + f(s_3). This shows that T_1 ≥ P_1. To prove that T_2 ≥ min{P_1, P_2}, we consider multiple cases, depending on the relative sizes of C_1 and C_2.
Case I:
Suppose that s_2 ≥ 2⌈s_1/k⌉. We then have T_2 ≥ P_1 by the inequality
f(s_1 + s_2 − ⌈(s_1 + s_2)/k⌉) ≥ f(s_1 + s_2 − ⌈s_1/k⌉ − ⌈s_2/k⌉) ≥ f(s_1 − ⌈s_1/k⌉ + s_2/2) ≥ f(s_1). (A1)
Case II:
Suppose that 2 ≤ s_2 ≤ s_1 − ⌈s_1/k⌉. In this case, by Corollary A1,
f(s_1 − ⌈s_1/k⌉ + s_2/2) ≥ f(s_1 − ⌈s_1/k⌉) + f(s_2). (A2)
Thus, we have T_2 ≥ P_2 by combining (A2) and the first two inequalities of (A1).
Case III:
Suppose that s_1 − ⌈s_1/k⌉ < s_2 < 2⌈s_1/k⌉ and s_2 ≥ 2. Let s_1 = kl + r, where 0 ≤ r < k and l ≥ 1. If r = 0, then we have kl − l < 2l, which is impossible for k ≥ 3. Hence, r ≥ 1 and we have kl + r − l − 1 < s_2 < 2l + 2. Note that the open interval (kl + r − l − 1, 2l + 2) contains an integer if and only if (2l + 2) − (kl + r − l − 1) ≥ 2, that is, 3l + 1 ≥ kl + r. This condition is satisfied only for k = 3 and r = 1. Hence, we have s_1 = 3l + 1 and s_2 = 2l + 1 for some l ≥ 1. We want to prove that T_2 ≥ P_1 or, equivalently,
f(5l + 2 − ⌈(5l + 2)/3⌉) ≥ f(3l + 1).
If l = 1, then this inequality takes the form f(4) ≥ f(4). If l ≥ 2, then we have 5l + 2 ≤ 6l and hence f(5l + 2 − ⌈(5l + 2)/3⌉) ≥ f(3l + 2) ≥ f(3l + 1).
Therefore, there is an equilibrium network such that s_2 ∈ {0, 1}. If s_2 = 1 and m ≥ 3, then Lemma A1 and Corollary A1 give f(s_1 − 1) + 2f(1) < f(s_1 − 1) + f(2) ≤ f(s_1). Therefore, it is more profitable to the adversary to infect one node from C_1 than to infect a component composed of two nodes. Thus, it would be strictly profitable to the designer to merge C_2 and C_3, which gives a contradiction. Hence, we have s_2 = 0, or s_2 = 1 and m = 2. It is easy to see that the first case is more profitable to the designer if n mod k ≠ 1, while the second case is more profitable if n mod k = 1.
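As an illustration of this last comparison (our own worked example, not taken from the paper), take f(x) = x², n = 13, and k = 3, so that n mod k = 1. A single generalized 3-star on 13 nodes has periphery sizes (4, 3, 3), and the worst case, the infection of the byzantine core node with 4 periphery nodes, leaves the designer f(13 − ⌈13/3⌉) = f(8) = 64. A 12-node 3-star plus an isolated node has periphery sizes (3, 3, 3) and leaves, in the worst case, f(8) + f(1) = 65 > 64, so the second topology is preferable. For n = 12 and k = 3 (so n mod k = 0), the comparison is f(12 − 4) = 64 against f(7) + f(1) = 50, and the single star wins. The protection cost 3c is identical in both options, so it cancels from the comparison.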
The proofs for the cases k = 0 and k = 2 are less involved than the one above, so we just sketch them. For k = 0, the pessimistic payoff to the designer is equal to P = f(s_2) + ⋯ + f(s_m). We perform the following transformations on the network: if s_i = 2l for some i ≥ 3 and l ≥ 1, then we spread half of C_i into C_1 and the other half into C_2. By Corollary A1, we have f(s_2 + l) ≥ f(s_2) + f(s_i), and hence this change is profitable to the designer. If s_i is odd for all i ≥ 3 and m ≥ 4, then we take all the nodes belonging to the union of C_3 and C_4 and spread half of them into C_1 and the other half into C_2. This improves the designer's payoff by the inequality f((s_2 + s_3/2) + s_4/2) ≥ f(s_2 + s_3/2) + f(s_4) ≥ f(s_2) + f(s_3) + f(s_4). Finally, if m = 3 and s_3 = 2l + 1 is odd, greater than 1, and strictly smaller than s_2, then we spread l nodes from C_3 into C_1 and l + 1 nodes into C_2. By Corollary A1, we have min{f(s_1 + l), f(s_2 + l + 1)} ≥ f(s_2 + l) ≥ f(s_2) + f(s_3), and this change is profitable to the designer.
For k = 2, we can suppose (as in the case k ≥ 3) that C_1 is a generalized 2-star with a protected core and that s_1 > s_2. The pessimistic payoff to the designer is equal to min{P_1, P_2}, where
P_1 = f(s_1) + f(s_3) + f(s_4) + ⋯ + f(s_m),   P_2 = f(s_1 − ⌈s_1/2⌉) + f(s_2) + f(s_3) + ⋯ + f(s_m).
We perform the following transformation on the network: if s_3 = 2l, then we spread half of its nodes into C_2 and the other half into C_1 (so that C_1 becomes a generalized 2-star with s_1 + l nodes). By Corollary A1, we have f(s_1 + l) ≥ f(s_1) + f(s_3) and f(s_2 + l) ≥ f(s_2) + f(s_3). Moreover, we have
f(s_1 + l − ⌈(s_1 + l)/2⌉) ≥ f(s_1 − ⌈s_1/2⌉ + l − ⌈l/2⌉) ≥ f(s_1 − ⌈s_1/2⌉). (A3)
Therefore, this change is profitable to the designer. If s_3 = 2l + 1 is odd and greater than 1, then we perform the following transformation: we spread l nodes into C_1 (so that C_1 becomes a generalized 2-star with s_1 + l nodes) and l + 1 nodes into C_2. Equation (A3) still holds. Moreover, since s_1 > s_2 ≥ s_3, Corollary A1 shows that f(s_2 + l + 1) ≥ f(s_2 + 1) + f(s_3) ≥ f(s_2) + f(s_3) and f(s_1 + l) ≥ f(s_1) + f(s_3). As before, this change is profitable to the designer. Finally, if s_3 = 1 and m ≥ 4, then we have two cases. If s_1 ≥ 3, then, by the same reasoning as in the case k ≥ 3, merging C_3 and C_4 is profitable to the designer. Otherwise, we have s_1 = 2, s_2 = 1, and s_i = 1 for all i ≥ 3. In this case, we have f(s_1) = f(2) > 2f(1) = f(s_1 − 1) + f(1). Therefore, the adversary's optimal attack targets a protected node. It is thus profitable to the designer to split the nodes forming C_1 and not to use the protection.
To finish the proof, we observe that the quantities w_k(n) correspond to the pessimistic payoffs of the designer achieved by choosing an equilibrium network with k protected nodes and the topology described in the claim. ☐
Figure A2. Spreading the nodes from an unprotected component to a core-protected generalized star.

Appendix B. Characterization of Equilibria in the Decentralized Defense Model

In this section, we prove the characterization of equilibria given in Proposition 3 and also state and prove an auxiliary lemma used in the proof of Theorem 1. The proof of Proposition 3 requires the following lemma.
Lemma A4.
Let e be any equilibrium of Γ and let x̄ : 𝒢(V) × 2^V × V^{n_B} → Σ(V^{n_A}) denote the (possibly mixed) strategy of the adversary in this equilibrium. Let (G, Δ) be a network such that G is a generalized k-star. Furthermore, suppose that n/k ≥ 2, n ≥ 3, and that the set of byzantine nodes B contains a core node. Then, x̄(G, Δ, θ_A) infects this node with probability one.
Proof. 
Since e is an equilibrium and the adversary has complete information about the network before making his decision, his strategy x̄(G, Δ, θ_A) is a probability distribution over the set of subsets of nodes that are optimal to attack. Let b ∈ B denote any byzantine node that is also a core node. We will show that any optimal attack infects b.
To do so, fix any set of attacked nodes I ∈ V^{n_A} and suppose that attacking I does not infect b. Given the structure of a generalized k-star, we see that I consists of genuine protected nodes and of periphery nodes that are connected to genuine protected core nodes. To finish the proof, fix any node j ∈ I and observe that it is strictly better for the adversary to attack the set (I ∖ {j}) ∪ {b}. Indeed, if j is a genuine protected node, then attacking it does nothing, while attacking b destroys at least one more node. Moreover, if j is a periphery node connected to a genuine protected core node, then attacking b not only destroys one node but also disconnects the network (b is connected to at least one periphery node because n/k − 1 ≥ 1). ☐
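Lemma A4 can be checked by brute force on a small instance. The sketch below is our own illustration, not code from the paper: it assumes a specific contagion rule (infection spreads along edges and is blocked only by genuinely protected, non-byzantine, nodes; infected nodes are destroyed) and uses hypothetical helper names (build_star, infected, residual_value). For n = 12, k = 3, n_A = 1, and byzantine core node 0, every attack that minimizes the designer's residual value ends up infecting node 0, as the lemma asserts.

```python
from itertools import combinations

def build_star(n, k):
    """Generalized k-star: core nodes 0..k-1 form a clique, periphery spread evenly."""
    edges = {i: set() for i in range(n)}
    for i in range(k):
        for j in range(i + 1, k):
            edges[i].add(j)
            edges[j].add(i)
    for p in range(k, n):
        core = (p - k) % k                     # attach periphery node p to core node (p - k) mod k
        edges[p].add(core)
        edges[core].add(p)
    return edges

def infected(edges, protected, byzantine, seeds):
    """Closure of the seed set: genuinely protected nodes neither get infected nor pass it on."""
    genuine = protected - byzantine
    frontier = [s for s in seeds if s not in genuine]
    reached = set(frontier)
    while frontier:
        v = frontier.pop()
        for w in edges[v]:
            if w not in reached and w not in genuine:
                reached.add(w)
                frontier.append(w)
    return reached

def residual_value(edges, dead, f=lambda x: x * x):
    """Designer's payoff: sum of f(component size) over the surviving network."""
    alive, seen, total = set(edges) - dead, set(), 0
    for v in alive:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in edges[u] if w in alive and w not in comp)
        seen |= comp
        total += f(len(comp))
    return total

n, k, n_A = 12, 3, 1
edges = build_star(n, k)
protected, byzantine = set(range(k)), {0}      # whole core protected, core node 0 is byzantine
attacks = list(combinations(range(n), n_A))
values = {I: residual_value(edges, infected(edges, protected, byzantine, I)) for I in attacks}
best = min(values.values())
optimal = [I for I in attacks if values[I] == best]
print(all(0 in infected(edges, protected, byzantine, I) for I in optimal))  # expected: True
```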
We are now ready to present the proof of Proposition 3.
Proof of Proposition 3.
Let x̄ : 𝒢(V) × 2^V × V^{n_B} → Σ(V^{n_A}) denote the strategy of the adversary in e and let Δ ⊆ V be any choice of protected nodes on G. Let j ∈ V be a genuine node.
First, suppose that j ∉ Δ. We will show that the pessimistic payoff of j is equal to 0. On the one hand, this payoff is nonnegative for every possible choice of the infected node. On the other hand, we can bound it from above by supposing that there exists a byzantine node b ∈ B that is a core node and a neighbor of j. Then, Lemma A4 shows that x̄ infects b, and the pessimistic payoff of j is not greater than 0.
Second, suppose that j ∈ Δ. Then, we have two possibilities. If j is a periphery node, then the same argument as above shows that the pessimistic payoff of j is equal to f(1) − c. If j is a core node, then her payoff is bounded from below by f(x)/x − c (where x = ⌊n/k⌋ − n_A + 1) for every possible choice of the set of infected nodes. Moreover, by supposing that every byzantine node is a core node, we see that the pessimistic payoff of j is bounded from above by f(y)/y − c (where y = n − n_B⌈n/k⌉).
Since the estimates presented above are valid for any choice of Δ , we get the desired characterization of equilibria. ☐
The proof of Theorem 1 requires an auxiliary lemma concerning the asymptotic properties of the function f.
Lemma A5.
We have lim_{x→+∞} f(x)/x = +∞.
Proof. 
Since f is strictly convex, for any 0 < x < y < z , we have ([45], Sect. I.1.1)
(f(y) − f(x))/(y − x) < (f(z) − f(x))/(z − x) < (f(z) − f(y))/(z − y). (A4)
As a result, the function g_t(x) = (f(x + t) − f(t))/x is strictly increasing for all t > 0 (to see this, let 0 < x < y and use the left inequality of Equation (A4) on the triple (t, x + t, y + t)). Since f is convex and increasing, it is also continuous on [0, +∞) ([45], Sect. I.3.1). By fixing x and taking t → 0, we get that the function x ↦ f(x)/x is nondecreasing. Suppose that lim_{x→+∞} f(x)/x = η < +∞. Then, by the assumption that f(3x) ≥ 2f(2x) for all x ≥ 1, we have
η = lim_{x→+∞} f(x)/x = lim_{x→+∞} f(3x)/(3x) ≥ lim_{x→+∞} 2f(2x)/((3/2)·2x) = (4/3)η.
Hence, η ≤ 0 and f(x) = 0 for all x ≥ 0, which contradicts the assumption that f is strictly convex. ☐
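As a concrete instance (our own illustration, not part of the proof), the quadratic component value function f(x) = x² is strictly convex, satisfies f(3x) = 9x² ≥ 8x² = 2f(2x) for all x ≥ 1, and indeed exhibits the superlinear growth asserted by Lemma A5, since f(x)/x = x → +∞.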

References

  1. Laszka, A.; Felegyhazi, M.; Buttyán, L. A survey of interdependent information security games. ACM Comput. Surv. 2015, 47, 23.
  2. Böhme, R.; Schwartz, G. Modeling cyber-insurance: Towards a unifying framework. In Proceedings of the 9th Workshop on the Economics of Information Security (WEIS 2010), Cambridge, MA, USA, 7–8 June 2010.
  3. Cerdeiro, D.; Dziubiński, M.; Goyal, S. Individual security and network design. In Proceedings of the 15th ACM Conference on Economics and Computation (EC’14), Stanford, CA, USA, 9–12 June 2014; pp. 205–206.
  4. Cerdeiro, D.; Dziubiński, M.; Goyal, S. Individual security, contagion, and network design. J. Econom. Theory 2017, 170, 182–226.
  5. Weaver, N.; Paxson, V.; Staniford, S.; Cunningham, R. A taxonomy of computer worms. In Proceedings of the Tenth ACM Conference on Computer and Communications Security, Washington, DC, USA, 27–30 October 2003; pp. 11–18.
  6. Zou, C.; Gong, W.; Towsley, D. Code red worm propagation modeling and analysis. In Proceedings of the 9th ACM Conference on Computer and Communications Security, Washington, DC, USA, 18–22 November 2002; pp. 138–147.
  7. Chen, Z.; Gao, L.; Kwiat, K. Modeling the spread of active worms. In Proceedings of the Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE INFOCOM’03), San Francisco, CA, USA, 30 March–3 April 2003; pp. 1890–1900.
  8. Dainotti, A.; Pescapé, A.; Ventre, G. Worm traffic analysis and characterization. In Proceedings of the 2007 IEEE International Conference on Communications, Glasgow, Scotland, 24–28 June 2007; pp. 1435–1442.
  9. Shapiro, C.; Varian, H. Information Rules: A Strategic Guide to the Network Economy; Harvard Business School Press: Boston, MA, USA, 2000.
  10. Moscibroda, T.; Schmid, S.; Wattenhofer, R. When selfish meets evil: Byzantine players in a virus inoculation game. In Proceedings of the 25th ACM Symposium on Principles of Distributed Computing (PODC 2006), Denver, CO, USA, 23–26 July 2006; pp. 35–44.
  11. Moscibroda, T.; Schmid, S.; Wattenhofer, R. The price of malice: A game-theoretic framework for malicious behavior. Internet Math. 2009, 6, 125–156.
  12. Baccarelli, E.; Naranjo, P.; Shojafar, M.; Scarpiniti, M. Q*: Energy and delay-efficient dynamic queue management in TCP/IP virtualized data centers. Comput. Commun. 2017, 102, 89–106.
  13. Pooranian, Z.; Chen, K.; Yu, C.; Conti, M. RARE: Defeating side channels based on data-deduplication in cloud storage. In Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Honolulu, HI, USA, 15–19 April 2018; pp. 444–449.
  14. Isaac, J.; Zeadally, S.; Cámara, J. Security attacks and solutions for vehicular ad hoc networks. IET Commun. 2010, 4, 894–903.
  15. Petrillo, A.; Pescapé, A.; Santini, S. A collaborative approach for improving the security of vehicular scenarios: The case of platooning. Comput. Commun. 2018, 122, 59–75.
  16. Dainotti, A.; Pescapé, A.; Ventre, G. A cascade architecture for DoS attacks detection based on the wavelet transform. J. Comput. Secur. 2009, 17, 945–968.
  17. Chen, X.; Makki, K.; Yen, K.; Pissinou, N. Sensor network security: A survey. IEEE Commun. Surv. Tutor. 2009, 11, 52–73.
  18. Padmavathi, G.; Shanmugapriya, D. A survey of attacks, security mechanisms and challenges in wireless sensor networks. arXiv 2009, arXiv:0909.0576.
  19. Castiglione, A.; D’Arco, P.; De Santis, A.; Russo, R. Secure group communication schemes for dynamic heterogeneous distributed computing. Future Gener. Comput. Syst. 2017, 74, 313–324.
  20. Kiskani, M.; Sadjadpour, H. A secure approach for caching contents in wireless ad hoc networks. IEEE Trans. Veh. Technol. 2017, 66, 10249–10258.
  21. Marti, S.; Giuli, T.; Lai, K.; Baker, M. Mitigating routing misbehavior in mobile ad hoc networks. In Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, Boston, MA, USA, 6–11 August 2000; pp. 255–265.
  22. Michiardi, P.; Molva, R. Core: A collaborative reputation mechanism to enforce node cooperation in mobile ad hoc networks. In Advanced Communications and Multimedia Security: IFIP TC6/TC11 Sixth Joint Working Conference on Communications and Multimedia Security; Jerman-Blažič, B., Klobučar, T., Eds.; Springer: Boston, MA, USA, 2002; pp. 107–121.
  23. Zhang, Y.; Lee, W.; Huang, Y.A. Intrusion detection techniques for mobile wireless networks. Wirel. Netw. 2003, 9, 545–556.
  24. Kargl, F.; Klenk, A.; Schlott, S.; Weber, M. Advanced detection of selfish or malicious nodes in ad hoc networks. In Security in Ad-hoc and Sensor Networks; Castelluccia, C., Hartenstein, H., Paar, C., Westhoff, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 152–165.
  25. Nasser, N.; Chen, Y. Enhanced intrusion detection system for discovering malicious nodes in mobile ad hoc networks. In Proceedings of the 2007 IEEE International Conference on Communications, Glasgow, Scotland, 24–28 June 2007; pp. 1154–1159.
  26. Buttyán, L.; Hubaux, J.P. Stimulating cooperation in self-organizing mobile ad hoc networks. Mob. Netw. Appl. 2003, 8, 579–592.
  27. Zhong, S.; Chen, J.; Yang, Y. Sprite: A simple, cheat-proof, credit-based system for mobile ad-hoc networks. In Proceedings of the 22nd Annual Joint Conference of the IEEE Computer and Communications Societies, San Francisco, CA, USA, 30 March–3 April 2003; pp. 1987–1997.
  28. Smith, A.; Vorobeychik, Y.; Letchford, J. Multidefender security games on networks. ACM SIGMETRICS Perform. Eval. Rev. 2014, 41, 4–7.
  29. Lou, J.; Vorobeychik, Y. Equilibrium analysis of multidefender security games. In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI’15), Buenos Aires, Argentina, 25–31 July 2015; pp. 596–602.
  30. Lou, J.; Smith, A.; Vorobeychik, Y. Multidefender security games. IEEE Intell. Syst. 2017, 32, 50–60.
  31. Kunreuther, H.; Heal, G. Interdependent security. J. Risk Uncertain. 2003, 26, 231–249.
  32. Varian, H. System reliability and free riding. In Economics of Information Security; Camp, L.J., Lewis, S., Eds.; Springer: Boston, MA, USA, 2004; pp. 1–15.
  33. Aspnes, J.; Chang, K.; Yampolskiy, A. Inoculation strategies for victims of viruses and the sum-of-squares partition problem. J. Comput. Syst. Sci. 2006, 72, 1077–1093.
  34. Lelarge, M.; Bolot, J. Network externalities and the deployment of security features and protocols in the Internet. ACM SIGMETRICS Perform. Eval. Rev. 2008, 36, 37–48.
  35. Lelarge, M.; Bolot, J. A local mean field analysis of security investments in networks. In Proceedings of the 3rd International Workshop on Economics of Networked Systems (NetEcon’08), Seattle, WA, USA, 17–22 August 2008; pp. 25–30.
  36. Chan, H.; Ceyko, M.; Ortiz, L. Interdependent defense games: Modeling interdependent security under deliberate attacks. arXiv 2012, arXiv:1210.4838.
  37. Acemoglu, D.; Malekian, A.; Ozdaglar, A. Network security and contagion. J. Econom. Theory 2016, 166, 536–585.
  38. Gueye, A.; Walrand, J.; Anantharam, V. Design of network topology in an adversarial environment. In Proceedings of the 2010 Conference on Decision and Game Theory for Security (GameSec 2010), Berlin, Germany, 22–23 November 2010; pp. 1–20.
  39. Gueye, A.; Walrand, J.C.; Anantharam, V. How to choose communication links in an adversarial environment? In Proceedings of the 2nd International Conference on Game Theory for Networks (GameNets 2011), College Park, MD, USA, 14–15 November 2011; pp. 233–248.
  40. Gueye, A.; Marbukh, V.; Walrand, J. Towards a metric for communication network vulnerability to attacks: A game theoretic approach. In Proceedings of the 3rd International Conference on Game Theory for Networks (GameNets 2012), Budapest, Hungary, 5–6 November 2012; pp. 259–274.
  41. Laszka, A.; Szeszlér, D.; Buttyán, L. Game-theoretic robustness of many-to-one networks. In Proceedings of the 3rd International Conference on Game Theory for Networks (GameNets 2012), Budapest, Hungary, 5–6 November 2012; pp. 88–98.
  42. Laszka, A.; Szeszlér, D.; Buttyán, L. Linear loss function for the network blocking game: An efficient model for measuring network robustness and link criticality. In Proceedings of the 3rd International Conference on Game Theory for Networks (GameNets 2012), Budapest, Hungary, 5–6 November 2012; pp. 152–170.
  43. Goyal, S.; Jabbari, S.; Kearns, M.; Khanna, S.; Morgenstern, J. Strategic network formation with attack and immunization. In Proceedings of the 12th Conference on Web and Internet Economics (WINE 2016), Montreal, QC, Canada, 11–14 December 2016; pp. 429–443.
  44. Aghassi, M.; Bertsimas, D. Robust game theory. Math. Program. 2006, 107, 231–273.
  45. Hiriart-Urruty, J.B.; Lemaréchal, C. Convex Analysis and Minimization Algorithms I: Fundamentals; Springer: Berlin/Heidelberg, Germany, 1993.
Figure 1. A generalized star with 12 nodes and core of size 5.
Table 1. Summary of the notation.
n: number of nodes
n_B: number of byzantine nodes
n_A: number of nodes infected by the adversary
f: component value function
G: network
Δ: set of protected nodes
u_D, u_A, u_j: payoff to the designer, the adversary, and a node
Û_D, Û_j: pessimistic payoff to the designer and a node
Table 2. Optimal networks for n ∈ {12, 30, 50}.
n = 12: 12-star for c < 3.50; 6-star for c ∈ (3.50, 9.50); 4-star for c ∈ (9.50, 11.25); two disconnected components of equal size for c > 11.25.
n = 30: 30-star for c < 3.80; 15-star for c ∈ (3.80, 11); 10-star for c ∈ (11, 26); 6-star for c ∈ (26, 49); 5-star for c ∈ (49, 70.20); two disconnected components of equal size for c > 70.20.
n = 50: 50-star for c < 3.88; 25-star for c ∈ (3.88, 11.875); 17-star for c ∈ (11.875, 23.25); 13-star for c ∈ (23.25, 30.(3)); 10-star for c ∈ (30.(3), 85); 5-star for c ∈ (85, 195); two disconnected components of equal size for c > 195.
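The thresholds in Table 2 can be reproduced numerically. The sketch below is our own code, not the authors'; it assumes Metcalfe-type benefits f(x) = x², reads Proposition 1 as giving the designer the pessimistic payoff w_k(n) = f(n − ⌈n/k⌉) − kc for a generalized k-star with k ≥ 2 protected core nodes (the worst case being the infection of a byzantine core node with the largest periphery) and w_0(n) = f(⌊n/2⌋) for two unprotected components of equal size, and ignores the "plus one isolated node" refinement used when n mod k = 1.

```python
from math import ceil

def w(n, k, c, f=lambda x: x * x):
    """Designer's pessimistic payoff under the assumptions stated above."""
    if k == 0:
        return f(n // 2)                   # two unprotected components; the adversary destroys one
    return f(n - ceil(n / k)) - k * c      # lose the byzantine core node with the largest periphery

def best_design(n, c):
    """Optimal number of protected core nodes; k = 1 is never optimal (see the proof)."""
    return max([0] + list(range(2, n + 1)), key=lambda k: w(n, k, c))

for n, costs in [(12, [2, 5, 10, 20]), (30, [2, 8, 20, 40, 60, 100]),
                 (50, [2, 8, 20, 28, 60, 150, 250])]:
    print(n, {c: best_design(n, c) for c in costs})
# Expected, matching Table 2: 12 {2: 12, 5: 6, 10: 4, 20: 0}
#                             30 {2: 30, 8: 15, 20: 10, 40: 6, 60: 5, 100: 0}
#                             50 {2: 50, 8: 25, 20: 17, 28: 13, 60: 10, 150: 5, 250: 0}
```

Solving w_k(n, c) = w_{k'}(n, c) for consecutive optimal values of k recovers the cutoff costs listed in the table.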
