Article

Interdependent Defense Games with Applications to Internet Security at the Level of Autonomous Systems

1 Department of Computer Science, Trinity University, San Antonio, TX 78212, USA
2 Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
3 Tumblr, 35 E 21 Street, Ground Floor, New York, NY 10010, USA
4 Department of Computer and Information Science, University of Michigan-Dearborn, Dearborn, MI 48128, USA
* Author to whom correspondence should be addressed.
Games 2017, 8(1), 13; https://doi.org/10.3390/g8010013
Submission received: 1 October 2016 / Accepted: 28 December 2016 / Published: 16 February 2017
(This article belongs to the Special Issue Decision Making for Network Security and Privacy)

Abstract:
We propose interdependent defense (IDD) games, a computational game-theoretic framework to study aspects of the interdependence of risk and security in multi-agent systems under deliberate external attacks. Our model builds upon interdependent security (IDS) games, a model by Heal and Kunreuther that considers the source of the risk to be the result of a fixed randomized strategy. We adapt IDS games to model the attacker’s deliberate behavior. We define the attacker’s pure-strategy space and utility function and derive appropriate cost functions for the defenders. We provide a complete characterization of mixed-strategy Nash equilibria (MSNE), and design a simple polynomial-time algorithm for computing all of them for an important subclass of IDD games. We also show that an efficient algorithm to determine whether some attacker’s strategy can be a part of an MSNE in an instance of IDD games is unlikely to exist. Yet, we provide a dynamic programming (DP) algorithm to compute an approximate MSNE when the graph/network structure of the game is a directed tree with a single source. We also show that the DP algorithm is a fully polynomial-time approximation scheme. In addition, we propose a generator of random instances of IDD games based on the real-world Internet-derived graph at the level of autonomous systems (≈27 K nodes and ≈100 K edges as measured in March 2010 by the DIMES project). We call such games Internet games. We introduce and empirically evaluate two heuristics from the literature on learning-in-games, best-response gradient dynamics (BRGD) and smooth best-response dynamics (SBRD), to compute an approximate MSNE in IDD games with arbitrary graph structures, such as randomly-generated instances of Internet games. In general, preliminary experiments applying our proposed heuristics are promising. Our experiments show that, while BRGD is a useful technique for the case of Internet games up to a certain approximation level, SBRD is more efficient and provides better approximations than BRGD. Finally, we discuss several extensions, future work, and open problems.

1. Introduction

Attacks carried out by hackers and terrorists in recent decades have led to increased efforts by both government and the private sector to create and adopt mechanisms to prevent future attacks. This effort has yielded a more focused research attention to models, computational and otherwise, that facilitate and help to improve (homeland) security for both physical infrastructure and cyberspace. In particular, there has been quite a bit of recent research activity in the general area of game-theoretic models for terrorism settings (see, e.g., Bier and Azaiez [1] and Cárceles-Poveda and Tauman [2]).
Interdependent security (IDS) games are one of the earliest models resulting from a game-theoretic approach to model security in non-cooperative environments composed of free-will self-interested individual decision-makers. Originally introduced and studied by economists Kunreuther and Heal [3], IDS games model general abstract security problems. In those problems, an individual within a population considers whether to voluntarily invest in some protection mechanisms or security against a risk they may face. Individuals do so knowing that the cost-effectiveness of the decision depends on the investment decisions of others in the population. This is because of transfer risks; that is, the “bad event” may be transferable from a compromised individual to another.
In their work, Kunreuther and Heal [3] provided several examples based on their economics, finance, and risk management expertise. We refer the reader to their paper for more detailed descriptions. As a canonical example of the real-world relevance of IDS settings and the applicability of IDS games, Heal and Kunreuther [4] used this model to describe problems such as airline baggage security. In their setting, individual airlines may choose to invest in additional complementary equipment to screen passengers’ bags and check for hazards such as bombs that could cause damage to their passengers, planes, buildings, or even reputations. However, mainly because of the large traffic volume, it is impractical for an airline to go beyond screening the bags checked in directly by its own passengers and also screen baggage or cargo transferred from other airlines. On the other hand, even if an airline invests in security, it can still experience a bad event if a bag was transferred from an airline that does not screen incoming bags, rendering its investment useless. 1 Thus, we can see how the cost-effectiveness of an investment can be highly dependent on others’ investment decisions. Another recent application of the IDS model is to container shipping transportation [6], in which the objective is to study the effect that investment decisions about container screening at some ports may have on neighboring ports.
Some security-related problems in cyberspace are similar to, but slightly different in nature from, the airline scenario just described. Consider a network where all computers fully trust all other computers and freely exchange information. Each user has complete control over his own computer and can decide whether to protect it from hackers, by installing a firewall, for example. However, an individual user cannot directly control or compel others in the network to protect themselves too. Thus, in order for an individual to feel secure about storing his information on the network, that individual user not only has to think about the security of his own computer, but also about the security of other computers on the internal network. This is because any other computer may access that individual user’s computer as well. If any computer were hacked, that individual user’s information would potentially be exposed to the outside world.
Two potential outcomes immediately arise out of the cyber-security scenario. If one does not think enough people have invested in security, then one will not invest either, because any investment will contribute negligibly to the overall protection of one’s data. Also, and this is the aspect that perhaps differentiates cyberspace from the airline security scenario, if nearly everyone has invested in security, one may no longer feel the need to protect oneself. This is because the network is already mostly secure and the amount of work required to protect oneself outweighs the minimal change in overall security. Thus, as many invest, fewer may want to invest.
In this work, we build on the literature in IDS games. In particular, we adapt the model to situations in which the abstract “bad event” results from the deliberate action of an attacker. The “internal agents” (e.g., airlines and computer network users or administrators), whom we also often refer to as “defenders” or “sites,” have the voluntary choice to individually invest in security to defend themselves against a direct or indirect offensive attack, modulo, of course, the cost-effectiveness to do so. As a result, we formally define a new model class: interdependent defense (IDD) games.
In our model, both the attacker(s) 2 and the defenders, modeled as players in a game, make decisions based on cost-benefit analysis. The attacker wants to find the most cost-effective way to attack nodes in the network, and does so by maximizing explicit personal preferences. Similarly, each defender (node in a network) takes into account the costs, as well as potential losses, risks, and actions not only of the attacker, but also of other nodes in the network, when making security investment decisions.
A side benefit of explicitly modeling the attacker, as we do in our model, is that the probability of an attack results directly from the equilibrium analysis. Building IDS games can be hard because it requires a priori knowledge of the likelihood of an attack. Attacks of this kind are considered rare events and thus notoriously difficult to statistically estimate in general.

1.1. Related Work

Johnson et al. [7] and Fultz and Grossklags [8] independently developed non-cooperative game models similar to ours. Johnson et al. [7] extend IDS games by modeling uncertainty about the source of the risk (i.e., the attacker) using a Bayesian game over risk parameters. Fultz and Grossklags [8] propose and study a non-graphical game-theoretic model for the interactions between attackers and nodes in a network. In their model, each node in the network can decide whether to contribute (by investment) to the overall safety of the network or to individual safety. The attackers can attack any number of nodes, but with each attack there is an increased probability that the attacker might get caught and suffer penalties or fines. Hence, while their game has IDS characteristics, it is technically not within the standard IDS game framework introduced by Heal and Kunreuther.
Most of the previous related work explores the realm of information security and is application/network specific (see Roy et al. [9] for a survey on game theory application to network security). Syverson [10] suggests the use of game-theoretic models (non-cooperative or cooperative) to model the relationship between the attacker and the nodes in the network. Past literature has largely focused on two-person (an attacker and a defender) games where the nodes in the network are regarded as a single entity (or a central defender). For example, Lye and Wing [11] look at the interactions between an attacker and the (system) administrator using a two-player stochastic game. Recent work uses a Stackelberg game model in which the defender (or leader) commits to a mixed strategy to allocate resources to defend a set of nodes in the network, and the follower (or attacker) optimally allocates resources to attack a set of “targets” in the network given the leader’s commitment [12,13,14,15,16].
Very recent work by Smith et al. [17] and Lou and Vorobeychik [18] extends the traditional Stackelberg settings to multiple leaders and uses different equilibrium concepts than MSNE. Laszka et al. [19] have used their model to study the interaction among an attacker and multiple defenders in spear-phishing attack settings. The main distinctions between their work and ours are that the defenders are not interconnected, the attacker has some fixed number of attacks (more than one), and the equilibrium concepts studied are different.
Other work strives to understand the motivation of the attackers. For example, Liu [20] focuses on understanding the attacker’s intent, objectives, and strategies, and derives a (two-player) game-theoretic model based on those factors. As another example, Cremonini and Nizovtsev [21] use cost-benefit analysis (of attackers) to address the issue of the optimal amount of security (of the nodes in the network).

1.2. Brief Overview of the Article and the Significance of Our Contributions

We adapt the standard non-cooperative framework of IDS games, which we present and briefly discuss in Section 2, to settings in which the source of the risk is the result of a deliberate, strategic decision by an external attacker. In particular, we design and propose interdependent defense (IDD) games, a new class of games that, in contrast to standard IDS games, model the attacker explicitly, while maintaining a core component of IDS systems: the potential transferability of the risk resulting from an attack. We note that the explicit modeling of risk transfer is an aspect of our model that has not been a focus of previous game-theoretic attacker-defender models of security discussed earlier in Section 1.1.
We formally define and study IDD games in depth in Section 3. There, we also present some characterizations of their MSNE which have immediate computational and algorithmic implications.
In Section 4, we study several computational questions about IDD games. We first provide a polynomial-time algorithm to compute all MSNE for an important subclass of IDD games (Section 4.1). In that subclass, there is only one attack, the defender nodes are fully transfer-vulnerable (i.e., investing in security does nothing to reduce their external/transfer risk), and transfers are one-hop. 3 We describe this subclass in more detail in Section 3.4.
Before continuing, we would like to address two aspects of IDD games brought up in the last paragraph. Note that considering a single attacker is a typical assumption in security settings (see previous work discussed earlier in Section 1.1). It is also reasonable because we can view many attackers as a single attacker. Allowing at most one attack prevents immediate representational and computational intractability problems because, as we state at the beginning of Section 3.2, the number of the attacker’s pure strategies grows exponentially with the number of attacks. Finally, because the attacker has no fixed target, it seems practically ineffective for the attacker to consider or go beyond plans of attacks involving multiple (>2) transfers: such plans are complex, time consuming, and costly. Having said that, there has been increased interest over the last few years in the network security community in explicitly modeling multiple attackers [22]. For instance, Merlevede and Holvoet [22] view the multi-attacker setting as a very important future direction. Here, we focus most of our technical results and experiments on the single-attacker setting. Yet, we present multi-attacker extensions of our proposed model in Appendix E, where we also discuss some ideas and very preliminary technical results for the multi-attacker setting.
In Section 4.2, we formally prove that our result for computing all MSNE of a subclass of IDD games in polynomial time is unlikely to extend to arbitrary IDD games. Given that, we move on to explore approximate MSNE in IDD games in Section 4.3 and provide a fully polynomial-time approximation scheme (FPTAS) for the case in which the graph over the sites is a directed tree-like network. We note that the attacker is still connected to every site in the network. To place the significance of this result in context, we note that despite the apparent simplicity of the subgraph over the sites, there may be very important real-world applications in supply chains (e.g., see [23]) and the power grid [24], for example.
Our computational results in Section 4 are significant, and some were initially surprising to us, within the context of the state of the art in computational and algorithmic game theory. Computing all MSNE in graphical IDS games is hard in general. For instance, the so-called Nash-extension computational problem in general IDS games is NP-complete [25]. To place our computational contributions in an even broader context, note that deciding whether an arbitrary graphical game has a pure-strategy Nash equilibrium (PSNE) is in general NP-complete [26]. In addition, many problems related to computing MSNE with particular properties, even in normal-form games, are NP-complete [27]. Indeed, computing an MSNE is PPAD-complete, even in two-player games [28], and is thus considered computationally intractable in general. Some alternative proofs of PPAD-completeness for two-player games are polynomial-time reductions from graphical games, even if each player in the graphical game has at most 3 neighbors [29]. We refer the reader to Papadimitriou [30] for more information on the complexity class PPAD and to Daskalakis et al. [31] for high-level information about the most recent computational results on the complexity of computing MSNE in normal-form games, and indirectly graphical games too. Also, computing all MSNE is rarely achieved and counting-related problems are often #P-complete. We refer the reader to, for example, Conitzer and Sandholm [32], and the references therein, for additional information. We do not know of any other non-trivial game for which there exists a polynomial-time algorithm to compute all MSNE except the one we provide in Section 4.1 here and the algorithm for uniform-transfer IDS games of Kearns and Ortiz [25]. Finally, while our hardness results build on those for graphical IDS games [25], there does not exist any result analogous to our FPTAS for graphical IDS games. It seems that an FPTAS in the IDS setting may be possible by modifying the one we design here, and its proof, to that setting.
We provide experimental results in Section 5. In our experiments, we study the application of learning-in-games heuristics to compute approximate MSNE to both fixed and randomly-generated instances of IDD games. We focus on the class of games with at most one simultaneous attack and one-hop transfers. Our particular object of study is a very large Internet-derived graph at the level of autonomous systems (AS) (≈27 K nodes and ≈100 K edges) obtained from DIMES [33,34] for March 2010, the last network graph available to us. Cybersecurity scenarios motivate this study. We refer the reader to Roy et al. [9] for some examples. Here, we propose a generative model of single-attack IDD games based on the aforementioned Internet graph. For simplicity, we refer to the models that a simulator that we built based on the generative model outputs as Internet games (IGs). In our experiments, we employ simple best-response heuristics from learning in games [35] to compute (approximate) MSNE in IGs. In particular, we perform a series of experiments to both show the large-scale feasibility and scalability of the model and approach. We also explore the behavior of the internal players and the attacker in the resulting equilibria, and the properties of the network-structure induced at an equilibrium.
In Section 6, we provide a discussion of future work, some open problems, and a summary of our contributions.

2. Interdependent Security Games, and a Generalization

Each player i in a finite set $[n] \equiv \{1, 2, \ldots, n\}$ of n players of an IDS game has a binary choice, to invest ($a_i = 1$) or not to invest ($a_i = 0$) in security mechanisms to protect themselves from a potential bad event. For each player i, the parameters $C_i$ and $L_i$ correspond to the cost of investment and loss induced by the bad event, respectively. We define the ratio of the two parameters, the player’s “cost-to-loss” ratio, as $R_i \equiv C_i / L_i$. Bad events can occur through both direct and indirect means. The direct risk, or internal risk, parameter $p_i$, is the probability that player i will experience a bad event because of direct contamination. The standard IDS model assumes that investing will completely protect the player from direct contamination; hence, internal risk is only possible when $a_i = 0$. The indirect-risk parameter $q_{ji}$ is the probability that player j is directly “contaminated,” does not experience the bad event, but transfers it to player i who ends up experiencing the bad event. This discussion leads us to the following definition of standard IDS games.
Definition 1.
(Standard IDS Games) An IDS game is defined by a tuple $(n, \mathbf{C}, \mathbf{L}, \mathbf{p}, Q)$, where $\mathbf{C} \equiv (C_i)_{i \in [n]}$, $\mathbf{L} \equiv (L_i)_{i \in [n]}$, $\mathbf{p} \equiv (p_i)_{i \in [n]}$, and $Q$ is a matrix representation of the $q_{ij}$’s, where $(i, j) \in [n]^2$. Implicit in Definition 1 is that $q_{ii} = 0$ for all i.
Before we provide a formal definition of the model semantics in the upcoming paragraphs in this section, as a preview, we want to highlight that the standard IDS model assumes that the interactions between players are unaffected by investment in security. Said differently, each individual player’s transferred risk is the same regardless of whether the player invests in security, or not.
We now formally define a (directed) graphical-games [36,37] version of IDS games, as first introduced by Kearns and Ortiz [25]. The reason we introduce graphical IDS games here is to emphasize that the matrix Q is often sparse and its non-zero entries lead to a network structure captured by an induced graph $G = ([n], E)$. That way, the representation size of a graphical IDS game is not $n^2$, as in standard IDS games, but essentially the number of directed edges $|E|$ of G, which could potentially be significantly smaller than $n^2$. In addition, we can exploit the sparse representation and the network structure to provide provably tractable computational solutions in some cases.
Definition 2.
(Graphical IDS Games) The parameters $q_{ij}$’s induce a directed graph $G = ([n], E)$ such that $E \equiv \{(i, j) \mid q_{ij} > 0\}$. Indeed, we assume that Q has a sparse-matrix representation as a list of non-zero $q_{ij}$ values for each edge $(i, j) \in E$. Thus, the representation size of Q is $O(|E|)$. A graphical IDS game is defined by the tuple $(n, G, \mathbf{C}, \mathbf{L}, \mathbf{p}, Q)$.
We now discuss the game-theoretic semantics of (graphical) IDS games. For each player $i \in [n]$, let $\mathrm{Pa}(i) \equiv \{j \in [n] \mid q_{ji} > 0\}$ be the set of players that are parents of player i in G (i.e., the set of players that player i is exposed to via transfers); and $\mathrm{PF}(i) \equiv \mathrm{Pa}(i) \cup \{i\}$ be the parent family of player i, which includes i. Denote by $k_i \equiv |\mathrm{PF}(i)|$ the size of the parent family of player i. Similarly, let $\mathrm{Ch}(i) \equiv \{j \in [n] \mid q_{ij} > 0\}$ be the set of players that are children of player i (i.e., the set of players to whom player i can present a risk via transfer) and $\mathrm{CF}(i) \equiv \mathrm{Ch}(i) \cup \{i\}$ the (children) family of player i, which includes i. The probability that player i is safe from player j, as a function of player j’s decision, is
$e_{ij}(a_j) \equiv a_j + (1 - a_j)(1 - q_{ji}) = (1 - q_{ji})^{1 - a_j}.$ (1)
Equation (1) results from noting that if j invests, then it is impossible for j to transfer the bad event, while if j does not invest, then j either experiences the bad event or transfers it to another player, but never both. 4 The player receiving the transfer still has the chance of not experiencing the bad event. However, without some form of screening of transfers, this chance is usually very low.
Denote by $\mathbf{a} \equiv (a_1, \ldots, a_n) \in \{0,1\}^n$ the joint action of all n players. Also denote by $\mathbf{a}_{-i}$ the joint action of all players except i, and for any subset $I \subseteq [n]$ of players, denote by $\mathbf{a}_I$ the sub-component of the joint action corresponding to those players in I only. We define i’s overall safety from all other players as
$s_i(\mathbf{a}_{\mathrm{Pa}(i)}) \equiv \prod_{j \in \mathrm{Pa}(i)} e_{ij}(a_j),$ (2)
and equivalently the overall risk from some other players as
$r_i(\mathbf{a}_{\mathrm{Pa}(i)}) \equiv 1 - s_i(\mathbf{a}_{\mathrm{Pa}(i)}).$ (3)
Note that, as given in Equation (2) (and Equation (3)), each player’s external safety (and risk) is a direct function of its parents only, not all other players. Note also that from the perspective of each player’s preferences, as quantified by their cost functions presented next (Equation (4)), the source of each player’s transfer risk is independent over the player’s parents in the graph. This independence assumption comes from the original definition of standard IDS games, but there the graph is fully connected (i.e., $\mathrm{Pa}(i) = [n] \setminus \{i\}$) [3]. From these definitions, we obtain player i’s overall cost function: the cost of joint action $\mathbf{a} \in \{0,1\}^n$, corresponding to the (binary) investment decision of all players, is
$M_i(a_i, \mathbf{a}_{\mathrm{Pa}(i)}) \equiv a_i \left[ C_i + r_i(\mathbf{a}_{\mathrm{Pa}(i)}) L_i \right] + (1 - a_i) \left[ p_i + (1 - p_i) r_i(\mathbf{a}_{\mathrm{Pa}(i)}) \right] L_i.$ (4)
Whether players invest depends solely on what they can gain or lose by investing. If the overall cost of investing is less than the overall cost of not investing, the player will invest. Applying this logic to cost function $M_i$ in Equation (4), player i will invest if
$C_i + r_i(\mathbf{a}_{\mathrm{Pa}(i)}) L_i < \left[ p_i + (1 - p_i) r_i(\mathbf{a}_{\mathrm{Pa}(i)}) \right] L_i,$ (5)
so that the investment cost and the losses due to a transferred event do not outweigh the losses from an internal or transferred bad event. Similarly, if the inequality in the last expression is reversed or is replaced by equality, player i will not invest or would be indifferent, respectively. Rearranging the expression for the best-response condition for strictly playing $a_i = 1$, given in the last equation (Equation (5)), and letting $\Delta_i \equiv R_i / p_i = \frac{C_i}{p_i L_i}$ be the cost-to-expected-loss ratio of player i, we get the following best-response correspondence $\mathrm{BR}_i : \{0,1\}^{k_i - 1} \to 2^{\{0,1\}}$ for player i: 5 for all $\mathbf{a}_{\mathrm{Pa}(i)} \in \{0,1\}^{k_i - 1}$,
$\mathrm{BR}_i(\mathbf{a}_{\mathrm{Pa}(i)}) \equiv \begin{cases} \{1\}, & \text{if } s_i(\mathbf{a}_{\mathrm{Pa}(i)}) > \Delta_i, \\ \{0\}, & \text{if } s_i(\mathbf{a}_{\mathrm{Pa}(i)}) < \Delta_i, \\ \{0,1\}, & \text{if } s_i(\mathbf{a}_{\mathrm{Pa}(i)}) = \Delta_i. \end{cases}$ (6)
In other words, whether it is cost-effective for player i to invest or not depends on a simple threshold condition on the player’s safety: Does the player feel safe enough from others?
Definition 3.
A joint-action $\mathbf{a}^* \in \{0,1\}^n$ is a pure-strategy Nash equilibrium (PSNE) of a graphical IDS game $(n, G, \mathbf{C}, \mathbf{L}, \mathbf{p}, Q)$ (see Definition 2) if $a_i^* \in \mathrm{BR}_i(\mathbf{a}_{\mathrm{Pa}(i)}^*)$ for all players i (i.e., $\mathbf{a}^*$ is a mutual best-response, as defined in Equation (6)).
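To make the preceding definitions concrete, the following minimal Python sketch (ours, not part of the original article; all parameter values are made up for illustration) evaluates the cost function of Equation (4) and the best-response sets of Equation (6) on a small, hypothetical 3-player graphical IDS game, and enumerates its PSNE by brute force.

```python
from itertools import product

# Hypothetical 3-player graphical IDS game (illustrative parameters only).
n = 3
C = [1.0, 2.0, 1.5]          # investment costs C_i
L = [10.0, 20.0, 15.0]       # losses L_i
p = [0.3, 0.2, 0.25]         # internal (direct) risks p_i
q = {(0, 1): 0.1, (1, 2): 0.2, (2, 0): 0.05}   # transfer probabilities q_ij > 0

parents = {i: [j for j in range(n) if (j, i) in q] for i in range(n)}

def safety(i, a):
    """Overall safety s_i of player i from its parents (Equation (2))."""
    s = 1.0
    for j in parents[i]:
        s *= a[j] + (1 - a[j]) * (1 - q[(j, i)])   # e_ij(a_j), Equation (1)
    return s

def cost(i, a):
    """Cost M_i of joint action a to player i (Equation (4))."""
    r = 1.0 - safety(i, a)                          # risk r_i, Equation (3)
    if a[i] == 1:
        return C[i] + r * L[i]
    return (p[i] + (1 - p[i]) * r) * L[i]

def best_responses(i, a):
    """Best-response set of player i (Equation (6)): a threshold test on s_i."""
    delta = C[i] / (p[i] * L[i])                    # Delta_i = R_i / p_i
    s = safety(i, a)
    if s > delta:
        return {1}
    if s < delta:
        return {0}
    return {0, 1}

# Enumerate all joint actions and report the pure-strategy Nash equilibria.
for a in product([0, 1], repeat=n):
    if all(a[i] in best_responses(i, a) for i in range(n)):
        print("PSNE:", a, "costs:", [round(cost(i, a), 3) for i in range(n)])
```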

2.1. Generalized IDS Games

In the standard IDS game model, investment in security does not reduce transfer risks. However, in some IDS settings (e.g., vaccination and cyber-security), it is reasonable to expect that security investments would include mechanisms to reduce transfer risks. This motivates our first modification to the traditional IDS games: allowing the investment in protection to not only make the investing player safe from a direct attack, but also to partially reduce (or even eliminate) that player’s transfer risk. Thus, we introduce a new real-valued parameter $\alpha_i \in [0,1]$ representing the probability that a transfer of a potentially bad event will go unblocked by i’s security, assuming i has invested. Thus, we redefine player i’s overall cost as 6
$M_i(a_i, \mathbf{a}_{\mathrm{Pa}(i)}) \equiv a_i \left[ C_i + \alpha_i r_i(\mathbf{a}_{\mathrm{Pa}(i)}) L_i \right] + (1 - a_i) \left[ p_i + (1 - p_i) r_i(\mathbf{a}_{\mathrm{Pa}(i)}) \right] L_i.$ (7)
We call the generalization α-IDS games, where $\boldsymbol{\alpha} \equiv (\alpha_1, \alpha_2, \ldots, \alpha_n)$ corresponds to the vector composed of the parameter values of each player. This discussion leads to the following definition.
Definition 4.
A graphical α-IDS game, or simply α-IDS game, is given by a tuple $(n, G, \mathbf{C}, \mathbf{L}, \mathbf{p}, Q, \boldsymbol{\alpha})$, where each tuple-entry is as defined in the discussion above, and the semantics of the cost functions $M_i$’s of the players is as defined in Equation (7).
From a game-theoretic perspective, the key aspect of the α parameters is that they determine the characteristics of the best-response behavior of each player. That is, they allow us to model players whose behavior ranges from strategic complementarity (e.g., the airline setting, where $\alpha_i = 1$) all the way to strategic substitutability (e.g., the vaccination setting, where $\alpha_i = 0$), based on the relationship between $\alpha_i$ and $1 - p_i$.
The corresponding definition of PSNE for α-IDS games is analogous to that given in Definition 3. Hence, we do not formally re-state it here.

3. Interdependent Defense Games

Building from generalized IDS games, in this section we introduce interdependent defense (IDD) games. We begin by introducing an additional player, the attacker, who deliberately initiates bad events: now bad events are no longer “chance occurrences” without any strategic deliberation. 7 The attacker has a target decision for each player: a choice to attack ($b_i = 1$) or not attack ($b_i = 0$) player i. Hence, the attacker’s pure strategy is denoted by the vector $\mathbf{b} \in \{0,1\}^n$. (We discuss an extension of the model to multiple attackers in Appendix E.)
Changing from “random” non-strategic attacks, whose probability of occurrence is determined independently of the actions of the internal players, to intentional attacks, in which actions are deliberately carried out by an external actor, leads us to alter $p_i$ and $q_{ij}$. This is because their original definitions carry extra meaning with respect to the new aggressor.
The game parameter $p_i$ implicitly “encodes” $b_i$ because $b_i = 0$ implies $p_i = 0$. Thus, we redefine
$p_i \equiv p_i(b_i) \equiv b_i \hat{p}_i$ (8)
so that player i has intrinsic risk $\hat{p}_i$, and only has internal risk if targeted (i.e., $b_i = 1$). The new parameter $\hat{p}_i$ represents the (conditional) probability that an attack is successful at site i given that site i was directly targeted and did not invest in protection.
Similarly, the game parameter $q_{ij}$ “encodes” $b_i = 1$, because a prerequisite is that i is targeted before it can transfer the bad event to j. We redefine
$q_{ij} \equiv q_{ij}(b_i) \equiv b_i \hat{q}_{ij}$ (9)
so that $\hat{q}_{ij}$ is the intrinsic transfer probability from player i to player j, independent of $b_i$. The new parameter $\hat{q}_{ij}$ represents the (conditional) probability that an attack is successful at player j given that it originated at player i, did not occur at i, but was transferred undetected to j.
Because the $p_i$’s and $q_{ij}$’s depend on the attacker’s action $\mathbf{b}$, so do the safety and risk functions. In particular, we now have
$e_{ij}(a_j, b_j) \equiv a_j + (1 - a_j)(1 - b_j \hat{q}_{ji}) = (1 - \hat{q}_{ji})^{b_j (1 - a_j)}, \quad \text{and}$ (10)
$s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) \equiv \prod_{j \in \mathrm{Pa}(i)} e_{ij}(a_j, b_j) \equiv 1 - r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}),$ (11)
where $\hat{p}_i$ and $\hat{q}_{ji}$ are as defined in Equations (8) and (9), respectively. Hence, for each site i, the cost function becomes
$M_i(a_i, \mathbf{a}_{\mathrm{Pa}(i)}, b_i, \mathbf{b}_{\mathrm{Pa}(i)}) \equiv a_i \left[ C_i + \alpha_i r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) L_i \right] + (1 - a_i) \left[ b_i \hat{p}_i + (1 - b_i \hat{p}_i) r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) \right] L_i,$ (12)
where $\hat{p}_i$ and $r_i$ are as defined in Equations (8) and (11), respectively. Let
$\hat{\Delta}_i \equiv R_i / \hat{p}_i \equiv \frac{C_i}{\hat{p}_i L_i}$ (13)
and
$\hat{s}_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{PF}(i)}) \equiv b_i\, s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) + \frac{1 - \alpha_i}{\hat{p}_i}\, r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}),$ (14)
where $s_i$ is as defined in Equation (11). The pure-strategy best-response correspondence of each site i is
$\mathrm{BR}_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{PF}(i)}) \equiv \begin{cases} \{1\}, & \text{if } \hat{s}_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{PF}(i)}) > \hat{\Delta}_i, \\ \{0\}, & \text{if } \hat{s}_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{PF}(i)}) < \hat{\Delta}_i, \\ \{0,1\}, & \text{if } \hat{s}_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{PF}(i)}) = \hat{\Delta}_i, \end{cases}$ (15)
where $\hat{\Delta}_i$ and $\hat{s}_i$ are as defined in Equations (13) and (14), respectively.
We assume that the attacker wants to cause as much damage as possible. Here, we define the utility/payoff function U quantifying the objective of the attacker as
$U(\mathbf{a}, \mathbf{b}) \equiv \sum_{i=1}^{n} \left[ M_i(\mathbf{a}_{\mathrm{PF}(i)}, \mathbf{b}_{\mathrm{PF}(i)}) - a_i C_i - b_i C_i^0 \right],$ (16)
where $C_i^0$ is the attacker’s own “cost” to target player i. 8
Of course, many other utility functions of varied complexity are also possible. Indeed, one can consider increasingly complex and sophisticated utility functions that may explicitly parse out the involved costs and induced losses in finer-grain and painstaking detail. For instance, we could decompose the cost to the attacker to target a specific site into different components such as, perhaps, planning and setup costs, carry-out costs, and the costs of getting caught or retaliated against, to name a few. We leave those more complex variants for future work.
Definition 5.
A single-attacker graphical IDD game, or simply IDD game, is given by the tuple $(n, G, \mathbf{C}, \mathbf{L}, \mathbf{C}^0, \hat{\mathbf{p}}, \hat{Q}, \boldsymbol{\alpha})$, where the tuple’s entries, as well as the model semantics, are as defined in the preceding discussion (Equations (8), (9), (12) and (16)), and the matrix $\hat{Q}$ is analogous to the matrix Q described in Definition 2.
The attacker’s pure-strategy best-response correspondence $\mathrm{BR}_0 : \{0,1\}^n \to 2^{\{0,1\}^n}$ is: 9
$\mathrm{BR}_0(\mathbf{a}) \equiv \arg\max_{\mathbf{b} \in \{0,1\}^n} U(\mathbf{a}, \mathbf{b}),$ (17)
where U is as defined in Equation (16).
Definition 6.
A pure-strategy profile $(\mathbf{a}^*, \mathbf{b}^*) \in \{0,1\}^{2n}$ is a PSNE of an IDD game if, for each player i, $a_i^* \in \mathrm{BR}_i(\mathbf{a}_{\mathrm{Pa}(i)}^*, \mathbf{b}_{\mathrm{PF}(i)}^*)$, and for the attacker, $\mathbf{b}^* \in \mathrm{BR}_0(\mathbf{a}^*)$, where $\mathrm{BR}_i$ and $\mathrm{BR}_0$ are as defined in Equations (15) and (17), respectively.

3.1. Conditions on Model Parameters

We now introduce the following reasonable restrictions on the game parameters. We employ these conditions without loss of generality from a mathematical and computational standpoint, as we now discuss.
The first condition states that every site’s investment cost is positive and (strictly) smaller than the conditional expected direct loss if the site were to be attacked directly ($b_i = 1$). That is, if a site knows that an attack is directed against it, the site will prefer to invest in security, unless the external risk is too high. This condition is reasonable because otherwise the player will never invest regardless of what other players do (i.e., “not investing” would be a dominant strategy).
Assumption 1.
For all sites $i \in [n]$, $0 < C_i < \hat{p}_i L_i$.
The second condition states that, for all sites i, the attacker’s cost to attack i is positive and (strictly) smaller than the expected loss achieved (i.e., gains from the perspective of the attacker) if an attack initiated at site i is successful, either directly at i or at one of its children (after transfer). That is, if an attacker knows that an attack is rewarding (or able to obtain a positive utility), it will prefer to attack some nodes in the network. This assumption is also reasonable; otherwise the attacker will never attack regardless of what other players do (i.e., not attacking would be a dominant strategy, leading to an easy problem to solve).
Assumption 2.
For all sites $i \in [n]$, $0 < C_i^0 < \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} \alpha_j L_j$.
In what follows, we study the problem of finding and computing MSNE in IDD games under Assumptions 1 and 2.

3.2. There Is No PSNE in Any IDD Game with at Most One (Simultaneous) Attack

Note that the attacker has, in principle, an exponential number of pure strategies: for IDD games, the number of possible combinations of simultaneous attacks is $2^n$. This presents several challenges.
One is the question of how one would compactly represent an attacker’s strategy (or policy) over $2^n$ events, when the representation size of the model is $N = O(|E| + n)$, which is $O(n^2)$ in the worst case. Thus, one would need a representation of the attacker’s policy that is polynomial in N.
Another related question is how realistic it is for the attacker to have such a huge amount of power.
One way to deal with the compact representation, while at the same time realistically constraining the attacker’s power, is to limit the number of simultaneous attack sites to some small finite number $K \ll n$. Even then, the number of pure strategies grows exponentially in the number of potential attacks K, which still renders the attacker’s pure-strategy space unrealistic, especially on a very large network with about 30 K nodes and 100 K edges, like the one we study in our experiments (Section 5). In the worst case, we would need to consider up to $2^n$ pure strategies as K approaches n. The simplest version of this constraint is to allow at most a single (simultaneous) attack (i.e., $K = 1$).
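For concreteness, in a network with only $n = 30$ sites an unrestricted attacker already has $2^{30} \approx 10^9$ pure strategies, whereas under the single-attack restriction introduced next (Assumption 3) the attacker has just $n + 1 = 31$: one per site, plus not attacking at all.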
Assumption 3.
The set of pure strategies of the attacker is
$\mathcal{B} = \left\{ \mathbf{b} \in \{0,1\}^n \;\middle|\; \sum_{i=1}^{n} b_i \le 1 \right\}.$
We emphasize that Assumption 3 does not explicitly preclude multiple attacks, just that they cannot occur simultaneously.
For convenience, we introduce the following definition.
Definition 7.
We say an IDD game is a game with a single simultaneous attack, or simply a single-attack game for brevity, if Assumption 3 holds (i.e., at most one simultaneous attack is possible).
The following technical results on single-attack IDD games will also be convenient.
Lemma 1.
The following holds in any single-attack IDD game $([n], G, \mathbf{C}, \mathbf{L}, \hat{\mathbf{p}}, \hat{Q}, \boldsymbol{\alpha})$: for all players $i \in [n]$, for all $\mathbf{a}_{\mathrm{Pa}(i)}$ and $\mathbf{b}_{\mathrm{PF}(i)}$,
$r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) = \sum_{j \in \mathrm{Pa}(i)} b_j (1 - a_j)\, \hat{q}_{ji}, \qquad s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) = 1 - r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}), \quad \text{and}$ (18)
$b_i\, s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) = b_i.$ (19)
The proof is in Appendix B.1.
Proposition 1.
In any single-attack IDD game $([n], G, \mathbf{C}, \mathbf{L}, \hat{\mathbf{p}}, \hat{Q}, \boldsymbol{\alpha})$, we have that for all players $i \in [n]$, for all $\mathbf{a}$ and $\mathbf{b}$,
$U(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{n} b_i\, U_i(\mathbf{a}),$ (20)
where for all $i = 1, \ldots, n$,
$U_i(\mathbf{a}) \equiv (1 - a_i) \left[ \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} \left( a_j \alpha_j + (1 - a_j) \right) L_j \right] - C_i^0.$ (21)
The proof is in Appendix B.2.
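As an illustration of Proposition 1, the following Python sketch (with hypothetical parameters of our own choosing) evaluates $U_i(\mathbf{a})$ from Equation (21) for each site and then picks the attacker’s best pure strategy within the restricted set $\mathcal{B}$ of Assumption 3, i.e., the best single target or no attack at all.

```python
# Hypothetical single-attack IDD game over 3 sites (illustrative parameters only).
n = 3
p_hat = [0.3, 0.2, 0.25]                    # direct-attack success probabilities
L = [10.0, 20.0, 15.0]                      # site losses L_i
C0 = [1.0, 1.5, 1.2]                        # attacker's cost C_i^0 to target each site
alpha = [1.0, 1.0, 1.0]                     # fully transfer-vulnerable sites
q_hat = {(0, 1): 0.1, (1, 2): 0.2, (2, 0): 0.05}   # intrinsic transfer probabilities

children = {i: [j for j in range(n) if (i, j) in q_hat] for i in range(n)}

def U_i(i, a):
    """Attacker's utility for targeting site i under joint investment a (Equation (21))."""
    gross = p_hat[i] * L[i] + sum(q_hat[(i, j)] * (a[j] * alpha[j] + (1 - a[j])) * L[j]
                                  for j in children[i])
    return (1 - a[i]) * gross - C0[i]

def attacker_best_response(a):
    """Best pure strategy in B = {no attack, attack exactly one site} (cf. Equation (17))."""
    options = [("no attack", 0.0)] + [(f"attack site {i}", U_i(i, a)) for i in range(n)]
    return max(options, key=lambda option: option[1])

print(attacker_best_response(a=[0, 0, 0]))   # no site invests
print(attacker_best_response(a=[1, 1, 1]))   # every site invests
```

Note how the attacker’s best response flips from attacking to not attacking as the sites’ investments change; this is the kind of cycling behind the non-existence of PSNE stated next in Proposition 2.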
We note that single-attack IDD games are instances of graphical multi-hypermatrix games [40], in which the local hypergraph of each site $i \in [n]$ has vertices $\mathrm{PF}(i) \cup \{0\}$, where 0 denotes the attacker node, and hyperedges $\{\{i\}, \{0, i\}\} \cup \bigcup_{j \in \mathrm{Pa}(i)} \{\{0, i, j\}\}$, while the local hypergraph of the attacker has $n + 1$ vertices $[n] \cup \{0\}$ and hyperedges $\{\{0\}\} \cup \bigcup_{i \in [n]} \{\{0, i\}\} \cup \bigcup_{i \in [n]} \bigcup_{j \in \mathrm{Ch}(i)} \{\{0, i, j\}\}$. This is implicit in the expression of the attacker’s payoff function (Equations (20) and (21) in Proposition 1) and the resulting expression for the sites’ cost functions (Equation (12)) after substituting the corresponding expressions in Equation (18). Thus, as a standard graphical game, the game graph of a single-attack IDD game has the attacker connected to each of the n sites. Yet, looking at it from the perspective of the subgame over the sites only, given a fixed $\mathbf{y}$, the resulting subgame among the sites only is a graphical polymatrix game in parametric form, which for 2-action games is strategically equivalent to an influence game [41]. This view of the subgame over the sites only will be useful for our hardness results (Section 4.2) and the discussion within the concluding section (Section 6).
It turns out that when one combines Assumptions 1 and 2 on the parameters with Assumption 3, no PSNE is possible, as we formally state in the next proposition (Proposition 2). This is typical of attacker-defender settings. This technical result eliminates PSNE as a universal solution concept for natural IDD games in which at most one simultaneous attack is possible. The main significance of this result is that it allows us to concentrate our efforts on the much harder problem of computing MSNE.
Proposition 2.
No single-attack IDD game in which Assumptions 1 and 2 hold has a PSNE.
The proof is in Appendix B.3.

3.3. Mixed Strategies in IDD Games

We do not impose Assumption 3 in this entire subsection. We do define some notation that will become useful when dealing with single-attack IDD games later in the manuscript (Section 3.4).
For each player i, denote by $x_i$ the mixed strategy of player i: the probability that player i invests. Let $\mathbf{x} \equiv (x_i)_{i \in [n]}$ be the joint mixed strategy. Consider any subset $I \subseteq [n]$ of the internal players. Denote by $\mathcal{P}_{\{0,1\}^{|I|}}$ the set of all joint-subset marginal probability mass functions (PMFs) over $\{0,1\}^{|I|}$. For instance, $\mathcal{P}_{\{0,1\}^n}$ is the set of joint PMFs over the joint pure-strategy space of the attacker $\{0,1\}^n$, which is by definition the set of all possible mixed strategies of the attacker. Denote by $P_{\mathbf{B}} \in \mathcal{P}_{\{0,1\}^n}$ the joint PMF over $\{0,1\}^n$ corresponding to the attacker’s mixed strategy so that for all $\mathbf{b} \in \{0,1\}^n$,
$P_{\mathbf{B}}(\mathbf{b}) \equiv P(\mathbf{B} = \mathbf{b})$ (22)
is the probability that the attacker executes joint-attack vector $\mathbf{b}$. Denote the (subset) marginal PMF over a subset $I \subseteq [n]$ of the internal players by $P_{\mathbf{B}_I} \in \mathcal{P}_{\{0,1\}^{|I|}}$, such that for all $\mathbf{b}_I \in \{0,1\}^{|I|}$, $P_{\mathbf{B}_I}(\mathbf{b}_I) \equiv \sum_{\mathbf{b}_{-I}} P_{\mathbf{B}}(\mathbf{b}_I, \mathbf{b}_{-I})$ is the (joint marginal) probability that the attacker chooses a joint-attack vector in which the sub-component decisions corresponding to players in I are as in $\mathbf{b}_I$. For simplicity of presentation, it will be convenient to let $\mathcal{P} \equiv \mathcal{P}_{\{0,1\}^n}$, $P \equiv P_{\mathbf{B}}$, $P_I \equiv P_{\mathbf{B}_I}$, and
$y_i \equiv P_i(1) \equiv P_{\mathbf{B}_i}(1) = P_{\mathbf{B}_{\{i\}}}(1),$ (23)
the marginal probability that the attacker chooses an attack vector in which player i is directly targeted.
Slightly abusing notation, we redefine the functions $e_{ij}$ (i.e., how safe i is from j), $s_i$ and $r_i$ (i.e., the overall transfer safety and risk, respectively), originally defined in Equations (10) and (11), as
$e_{ij}(x_j, b_j) \equiv x_j + (1 - x_j)(1 - b_j \hat{q}_{ji}), \qquad s_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) \equiv \prod_{j \in \mathrm{Pa}(i)} e_{ij}(x_j, b_j),$ (24)
$s_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{Pa}(i)}) \equiv \sum_{\mathbf{b}_{\mathrm{Pa}(i)} \in \{0,1\}^{|\mathrm{Pa}(i)|}} P_{\mathrm{Pa}(i)}(\mathbf{b}_{\mathrm{Pa}(i)})\, s_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}), \quad \text{and}$
$r_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{Pa}(i)}) \equiv 1 - s_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{Pa}(i)}),$ (25)
where $\hat{q}_{ji}$ is as defined in Equation (9).
In general, we can express the expected cost of protection to site i, with respect to a mixed-strategy profile $(\mathbf{x}, P)$, as
$M_i(x_i, \mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) \equiv x_i \left[ C_i + \alpha_i r_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{Pa}(i)}) L_i \right] + (1 - x_i) \left[ \hat{p}_i f_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) + r_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) \right] L_i,$ (26)
where $\hat{p}_i$ and $r_i$ are as in Equations (8) and (25), respectively,
$f_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) \equiv \mathbb{E}_{\mathbf{b}_{\mathrm{PF}(i)} \sim P_{\mathrm{PF}(i)}} \left[ b_i\, s_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) \right] = \sum_{\mathbf{b}_{\mathrm{PF}(i)} \in \{0,1\}^{|\mathrm{PF}(i)|}} P_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)})\, b_i\, s_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}),$ (27)
and $s_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)})$ is as defined in Equation (24).
The expected payoff of the attacker is
$U(\mathbf{x}, P) \equiv \sum_{i=1}^{n} \left[ M_i(x_i, \mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) - x_i C_i - y_i C_i^0 \right],$ (28)
where $M_i$ is as defined in Equation (26). Let
$\hat{s}_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) \equiv f_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) + \frac{1 - \alpha_i}{\hat{p}_i}\, r_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{Pa}(i)}),$ (29)
where $\hat{p}_i$, $r_i$, and $f_i$ are as defined in Equations (8), (25), and (27), respectively. The mixed-strategy best-response correspondence of defender i is then
$\mathrm{BR}_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) \equiv \begin{cases} \{1\}, & \text{if } \hat{s}_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) > \hat{\Delta}_i, \\ \{0\}, & \text{if } \hat{s}_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) < \hat{\Delta}_i, \\ [0,1], & \text{if } \hat{s}_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) = \hat{\Delta}_i, \end{cases}$ (30)
where $\hat{\Delta}_i$ and $\hat{s}_i$ are as defined in Equations (13) and (29), respectively.
The best-response correspondence for the attacker is simply
$\mathrm{BR}_0(\mathbf{x}) \equiv \arg\max_{P \in \mathcal{P}_{\{0,1\}^n}} U(\mathbf{x}, P),$ (31)
where U is as defined in Equation (28).
Definition 8.
A mixed-strategy profile $(\mathbf{x}^*, P^*)$ is an MSNE of an IDD game if (1) for all $i \in [n]$, $x_i^* \in \mathrm{BR}_i(\mathbf{x}_{\mathrm{Pa}(i)}^*, P_{\mathrm{PF}(i)}^*)$ and (2) $P^* \in \mathrm{BR}_0(\mathbf{x}^*)$, where $\mathrm{BR}_i$ and $\mathrm{BR}_0$ are as defined in Equations (30) and (31), respectively.

3.3.1. A Characterization of the MSNE: Compact Representation of Attacker’s Mixed Strategies

Recall that the space of pure strategies of the aggressor is, in its most general form, exponential in the number of internal players. This is an obstacle to tractable computational representations in large-population games. In the following result, we establish an equivalence class over the aggressor’s mixed strategy that allows us to only consider “simpler” mixed strategies in terms of their probabilistic structure.
Proposition 3.
For any mixed strategy $(\mathbf{x}^*, P^*)$ of an IDD game, there exists another mixed strategy $(\mathbf{x}^*, \tilde{P})$, such that
1. 
the joint PMF $\tilde{P}$ decomposes as 10
$\tilde{P}(\mathbf{b}) \propto \prod_{i=1}^{n} \Phi_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)})$
for some non-negative functions $\Phi_{\mathrm{PF}(i)} : \{0,1\}^{k_i} \to [0, \infty)$, and all $\mathbf{b} \in \{0,1\}^n$;
2. 
for all $i \in [n]$, the parent-family marginal PMFs agree: $\tilde{P}_{\mathrm{PF}(i)} = P_{\mathrm{PF}(i)}^*$; and
3. 
the sites and the aggressor achieve the same expected cost and utility, respectively, in $(\mathbf{x}^*, \tilde{P})$ as in $(\mathbf{x}^*, P^*)$: for all $i \in [n]$,
$M_i(\mathbf{x}_{\mathrm{PF}(i)}^*, \tilde{P}_{\mathrm{PF}(i)}) = M_i(\mathbf{x}_{\mathrm{PF}(i)}^*, P_{\mathrm{PF}(i)}^*),$
and
$U(\mathbf{x}^*, \tilde{P}) = U(\mathbf{x}^*, P^*).$
Proof. 
(Sketch) The proof of the proposition follows closely a similar argument used by Kakade et al. [43] to characterize the probabilistic structure of correlated equilibria [44,45] in arbitrary graphical games. The core of the argument is to realize that the maximum-entropy (MaxEnt) distribution [46], over the aggressor’s pure strategies, with the same parent-family marginals as those of $P^*$ satisfies all the conditions above. We refer the reader to Appendix B.4 for the formal proof. ☐
Thus, in the sense given by Proposition 3, even though in principle there may be equilibria in which the aggressor’s mixed strategy is an arbitrarily complex distribution, we can restrict our attention to aggressor mixed strategies that respect the decomposition in terms of functions over the parent-families of each site only. The proposition has important implications for the representation of the mixed strategies of the aggressor, and hence, the representation size of any MSNE, modulo expected-payoff equivalence. In particular, within the equivalence class of mixed strategies achieving the same expected payoff, the representation needed for a mixed strategy in an IDD game is exponential only in the size of the largest parent-family, $k_{\max} \equiv \max_{i \in [n]} k_i$, not in the number of sites n. If $k_{\max}$ is bounded, then the representation is polynomial in the representation size N of the game. Otherwise, because $N = O(n + |E|)$ is linear in the size of the game graph $G = ([n], E)$, a representation exponential in $k_{\max}$, while in general an exponential reduction from being exponential in n, is still technically intractable with respect to N. The following corollary summarizes the discussion.
Corollary 1.
For any IDD game, let $k_{\max} \equiv \max_{i \in [n]} k_i$ be the size of the largest parent-family in the game graph. The representation size of any mixed strategy of the aggressor in the game is $O(2^{k_{\max}})$, modulo expected-payoff equivalence.
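For example, if $k_{\max} = 10$, each parent-family factor $\Phi_{\mathrm{PF}(i)}$ in the decomposition of Proposition 3 is a table of at most $2^{10} = 1024$ values, so a payoff-equivalent mixed strategy for the attacker can be stored using space that grows only linearly with n, instead of the $2^n$ entries required by an arbitrary joint PMF.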
Theoretically establishing the existence of such compact representations for the attacker’s mixed strategy in general may also have computational and algorithmic implications. This is because those results suggest that computing MSNE in arbitrary IDD games with arbitrary attacker’s mixed strategies (i.e., arbitrary multiple simultaneous attacks) may be at least feasible in terms of the representation of the output MSNE itself. This is despite the fact that the attacker’s mixed strategy in the MSNE is over an exponentially-sized set (i.e., { 0 , 1 } n ), as we discussed briefly at the beginning of this subsection, and Section 1.2 and Section 3.2.

3.4. MSNE of IDD Games with at Most One Simultaneous Attack and Full Transfer Vulnerability

In this subsection, we impose Assumption 3. We first consider IDD games in which the players’ investments cannot reduce the transfer risk (i.e., $\alpha_i = 1$). This is the same setting used in the original IDS games (see Definitions 1 and 2, and the discussion on model semantics, in Section 2).
Assumption 4.
For all internal players $i \in [n]$, the probability that player i’s investment in security does not protect the player from transfers, $\alpha_i$, is 1.
For convenience, we introduce the following definition.
Definition 9.
We say an IDD game is fully transfer-vulnerable if Assumption 4 holds.
Before continuing, we remind the reader that Definition 7 does not explicitly preclude multiple attacks, just that they cannot occur simultaneously. In terms of the attacker’s mixed strategy P, as defined in Equation (22), Definition 7, via Assumption 3, implies that $\sum_{i=0}^{n} y_i = 1$, where each $y_i$ is as defined in Equation (23) and $y_0$ is the probability of no attack:
$y_0 \equiv P(\mathbf{0}) \equiv P(\mathbf{B} = \mathbf{0}) = 1 - \sum_{i=1}^{n} y_i.$ (32)
Thus, in what follows, when dealing with single-attack IDD games, we denote the joint mixed strategy $(\mathbf{x}, P)$ simply as $(\mathbf{x}, \mathbf{y})$ when P is compactly represented by $\mathbf{y}$ as defined in Equations (23) and (32); hence, for such games, we denote any MSNE $(\mathbf{x}^*, \mathbf{y}^*) \equiv (\mathbf{x}^*, P^*)$.
In addition, the sites’ cost functions and the attacker’s utility of fully transfer-vulnerable single-attack IDD games are simpler than their most general versions for arbitrary IDD games. This is because Assumptions 3 and 4 greatly simplify the condition of the best-response correspondence of the internal players.
The following technical results extend Lemma 1 and Proposition 1 for single-attack IDD games from pure strategies to mixed strategies.
Lemma 2.
The following holds in any single-attack IDD game $([n], G, \mathbf{C}, \mathbf{L}, \hat{\mathbf{p}}, \hat{Q}, \boldsymbol{\alpha})$: given a joint mixed-strategy $(\mathbf{x}, \mathbf{y}) \equiv (\mathbf{x}, P)$, with P represented using $\mathbf{y}$ as defined in Equations (23) and (32), for all players $i \in [n]$, for all $\mathbf{b}_{\mathrm{PF}(i)}$,
$r_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) = \sum_{j \in \mathrm{Pa}(i)} b_j (1 - x_j)\, \hat{q}_{ji},$ (33)
which implies
$r_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{Pa}(i)}) \equiv r_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{Pa}(i)}) = \sum_{j \in \mathrm{Pa}(i)} y_j (1 - x_j)\, \hat{q}_{ji},$ (34)
$s_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{Pa}(i)}) \equiv s_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{Pa}(i)}) = 1 - r_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{Pa}(i)}),$ (35)
$f_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{PF}(i)}) \equiv f_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) = y_i, \quad \text{and}$ (36)
$\hat{s}_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{PF}(i)}) \equiv y_i + \frac{1 - \alpha_i}{\hat{p}_i} \sum_{j \in \mathrm{Pa}(i)} y_j (1 - x_j)\, \hat{q}_{ji}.$ (37)
The proof is in Appendix B.5.
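The quantities in Lemma 2 are straightforward to evaluate. The small Python sketch below (hypothetical parameters, not from the article) computes the transfer risk of Equation (34) and the threshold quantity $\hat{s}_i$ of Equation (37) for a mixed-strategy profile $(\mathbf{x}, \mathbf{y})$, and reports each site’s best response by comparing $\hat{s}_i$ against $\hat{\Delta}_i$, as in Equation (30).

```python
# Illustrative single-attack IDD game fragment (made-up parameters).
n = 3
C = [1.0, 2.0, 1.5]
L = [10.0, 20.0, 15.0]
p_hat = [0.3, 0.2, 0.25]
alpha = [0.5, 1.0, 0.8]
q_hat = {(0, 1): 0.1, (1, 2): 0.2, (2, 0): 0.05}

parents = {i: [j for j in range(n) if (j, i) in q_hat] for i in range(n)}

x = [0.4, 0.7, 0.2]          # sites' investment probabilities
y = [0.3, 0.5, 0.1]          # attacker's marginal attack probabilities (sum <= 1)

def transfer_risk(i):
    """r_i(x_Pa(i), y_Pa(i)), Equation (34)."""
    return sum(y[j] * (1 - x[j]) * q_hat[(j, i)] for j in parents[i])

def s_hat(i):
    """hat{s}_i(x_Pa(i), y_PF(i)), Equation (37)."""
    return y[i] + ((1 - alpha[i]) / p_hat[i]) * transfer_risk(i)

for i in range(n):
    delta_hat = C[i] / (p_hat[i] * L[i])          # hat{Delta}_i, Equation (13)
    if s_hat(i) > delta_hat:
        br = "invest"
    elif s_hat(i) < delta_hat:
        br = "do not invest"
    else:
        br = "indifferent"
    print(f"site {i}: s_hat = {s_hat(i):.4f}, Delta_hat = {delta_hat:.4f} -> {br}")
```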
Proposition 4.
In any single-attack IDD game, we have, for all players $i \in [n]$,
$M_i(x_i, \mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{PF}(i)}) \equiv M_i(x_i, \mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}),$ (38)
where $M_i(x_i, \mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)})$ results from the corresponding substitution of the expressions given in Equations (34) and (36) into Equation (26); and
$U(\mathbf{x}, \mathbf{y}) \equiv U(\mathbf{x}, P) = \sum_{i=1}^{n} y_i\, U_i(\mathbf{x}),$ (39)
where for all $i = 1, \ldots, n$,
$U_i(\mathbf{x}) \equiv (1 - x_i) \left[ \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} \left( x_j \alpha_j + (1 - x_j) \right) L_j \right] - C_i^0.$ (40)
The proof is in Appendix B.6.
The expressions for the sites’ costs and the attacker’s payoff in fully transfer-vulnerable single-attack IDD games are even simpler, as we discuss in detail in the remainder of this section.
To start, from Equation (37), for this class of games we have $\hat{s}_i(\mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{PF}(i)}) = y_i$.
Let $L_i^0(x_i) \equiv (1 - x_i) \left( \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} L_j \right)$. It will also be convenient to denote by
$\bar{L}_i^0 \equiv L_i^0(0) = \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} L_j,$ (41)
so that we can express $L_i^0(x_i) = (1 - x_i)\bar{L}_i^0$, to highlight that $L_i^0$ is a linear function of $x_i$.
Similarly, it will also be convenient to let $M_i^0(x_i) \equiv L_i^0(x_i) - C_i^0$, and denote by
$\bar{M}_i^0 \equiv M_i^0(0) = \bar{L}_i^0 - C_i^0.$ (42)
Let
$\eta_i^0 \equiv C_i^0 / \bar{L}_i^0.$ (43)
The best-response condition of the attacker also simplifies under the same assumptions because now
$U(\mathbf{x}, \mathbf{y}) \equiv U(\mathbf{x}, P) = \sum_{i=1}^{n} y_i\, M_i^0(x_i).$ (44)
Assumption 2 is reasonable in our new context because, under Assumption 4, if there were a player i with $\eta_i^0 > 1$, the attacker would never attack i, and as a result player i would never invest. In that case, we can safely remove player i from the game, without any loss of generality.
As graphical multi-hypermatrix games [40], fully transfer-vulnerable single-attack IDD games have a considerably simpler graph structure than arbitrary single-attack IDD games. In particular, from Equation (38) in Proposition 4 (via Equations (30) and (37) with $\alpha_i = 1$ for all i), the local hypergraph of each site has a single hyperedge, $\{0, i\}$, while, from Equation (44), the local hypergraph of the attacker has n hyperedges, one for each site, of size 2; the set of local hyperedges for the attacker (i.e., player 0) equals $\bigcup_{i \in [n]} \{\{0, i\}\} = \{\{0, 1\}, \{0, 2\}, \ldots, \{0, n\}\}$. Recall that in single-attack IDD games, given a fixed $\mathbf{y}$, only the subgame over just the sites is a graphical polymatrix game, not the whole game. Hence, adding full transfer vulnerability makes the whole game a graphical polymatrix game with a simple star graph in which the attacker is the center node (i.e., the single internal node), and each site node is a leaf. We exploit this property in the next subsection to provide a characterization of all MSNE in any such game.

3.4.1. Characterizing the MSNE of Fully Transfer-Vulnerable Single-Attack IDD Games

We now characterize the space of MSNE in fully transfer-vulnerable single-attack IDD games under Assumptions 1 and 2. Our characterization will immediately lead to a polynomial-time algorithm for computing all MSNE in that subclass of games (Section 4.1).
The characterization starts by partitioning the space of games into three cases, based on whether $\sum_{i=1}^{n} \hat{\Delta}_i$ is (1) less than, (2) equal to, or (3) greater than 1, where $\hat{\Delta}_i$ is as defined in Equation (13). The rationale behind this is that now the players are indifferent between investing or not investing when $y_i = \hat{\Delta}_i$, where $y_i$ is as defined in Equation (23), by the resulting best-response correspondence for the attacker’s mixed strategy in this case. The following result completely characterizes the set of MSNE in fully transfer-vulnerable single-attack IDD games.
Proposition 5.
Consider any fully transfer-vulnerable single-attack IDD game $\mathcal{G} \equiv (n, G, \mathbf{C}, \mathbf{L}, \mathbf{C}^0, \hat{\mathbf{p}}, \hat{Q}, \mathbf{1})$, whose parameters satisfy Assumptions 1 and 2. Let $\hat{\Delta}_i$, $\bar{L}_i^0$, $\bar{M}_i^0$, and $\eta_i^0$ be as defined in Equations (13) and (41)–(43), respectively. The mixed-strategy profile $(\mathbf{x}^*, \mathbf{y}^*)$ is an MSNE of $\mathcal{G}$ in which
1. $\sum_{i=1}^{n} \hat{\Delta}_i < 1$ if and only if
(a) $1 > y_0^* = 1 - \sum_{i=1}^{n} \hat{\Delta}_i > 0$, and
(b) for all i, $y_i^* = \hat{\Delta}_i > 0$ and $0 < x_i^* = 1 - \eta_i^0 < 1$.
2. $\sum_{i=1}^{n} \hat{\Delta}_i = 1$ if and only if
(a) $y_0^* = 0$, and
(b) for all i, $y_i^* = \hat{\Delta}_i > 0$ and $x_i^* = 1 - \frac{v + C_i^0}{\bar{L}_i^0}$ with $0 \le v \le \min_{i \in [n]} \bar{M}_i^0$.
3. $\sum_{i=1}^{n} \hat{\Delta}_i > 1$ if and only if
(a) $y_0^* = 0$, and
(b) there exists a non-singleton, non-empty subset $I \subseteq [n]$, such that $\min_{i \in I} \bar{M}_i^0 \ge \max_{k \notin I} \bar{M}_k^0$ if $I \ne [n]$, and the following holds:
i. for all $k \notin I$, $x_k^* = 0$ and $y_k^* = 0$;
ii. for all $i \in J \equiv \arg\min_{i \in I} \bar{M}_i^0$, $x_i^* = 0$ and $0 \le y_i^* \le \hat{\Delta}_i$, and in addition, $\sum_{i \in J} y_i^* = 1 - \sum_{t \in I \setminus J} \hat{\Delta}_t$; and
iii. for all $i \in I \setminus J$, $y_i^* = \hat{\Delta}_i$ and $0 < x_i^* = 1 - \frac{\min_{t \in I} \bar{M}_t^0 + C_i^0}{\bar{L}_i^0} < 1$.
The proof of Proposition 5 is in Appendix B.7. As a proof sketch, we briefly state that the proposition follows from the restrictions imposed by the model parameters and their implications for the indifference and monotonicity conditions. We also mention that the third case in the proposition implies that if the $\bar{M}_l^0$’s form a complete order, then the last condition stated in that case allows us to search for an MSNE by exploring only $n - 2$ sets, vs. $2^n - 2$ if done naively.
It turns out that a complete order is unnecessary. The following claim allows us to safely move, as a group, all the internal players with the same value of $\bar{M}_i^0$ inside or outside I. This technical result is important because of its algorithmic implications, as we discuss in Section 4.1.
Claim 1.
Let $I, I' \subseteq [n]$ be such that $I \subset I'$ and $|I| < |I'| < n - 1$. Suppose we find an MSNE $(\mathbf{x}, \mathbf{y})$ such that $I = \{i \mid y_i > 0\}$, with the property that $\min_{l \in I} \bar{M}_l^0 = \max_{k \notin I} \bar{M}_k^0$. In addition, suppose $I'$ satisfies $\min_{l \in I'} \bar{M}_l^0 = \min_{l \in I} \bar{M}_l^0 \ge \max_{k \notin I'} \bar{M}_k^0$. Then, we can also find $(\mathbf{x}, \mathbf{y})$ using the partition imposed by $I'$.
The proof of the claim is in Appendix B.8.

3.4.2. Some Remarks on the MSNE of Fully Transfer-Vulnerable Single-Attack IDD Games

We begin by pointing out that, under the conditions of Proposition 5, for almost every setting of the free parameters of the system, subject to their respective constraints, every IDD game has a corresponding unique MSNE, which we denote by $(\mathbf{x}^*, \mathbf{y}^*)$. We consider that MSNE to be the equilibrium or stable outcome of the system.

Security Investment Characteristics

At equilibrium, if $x_i^* > 0$, the probability of not investing is proportional to $C_i^0$ and inversely proportional to $\hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} L_j$. It is reassuring that, at an equilibrium, the probability of investing increases with the potential loss a site’s non-investment decision could cause to the system. Hence, behavior in a stable system implicitly “forces” all sites to indirectly account for or take care of their own children. This may sound a bit paradoxical at first given that we are working within a “noncooperative” setting and each site’s cost function is only dependent on the investment decisions of the player’s parents, in general. However, in the case of fully transfer-vulnerable single-attack IDD games, the site’s cost is only a function of its mixed strategy $x_i$ and the probability $y_i$ that the attacker will directly target site i. Said differently, any site’s best response is independent of its parents, the source of transfer risk, if investment in security does nothing to protect that player from transfers (i.e., $\alpha_i = 1$). Interestingly, even in such circumstances, the existence of the attacker in the system induces an (almost-surely) unique stable outcome in which an implicit form of “cooperation” occurs. In retrospect, this makes sense because no site can control the transfer risk. Said differently, there is nothing any site can do to prevent the transfer, even though the original potential for transfers does depend on the parents’ investment strategies. In short, rational/optimal noncooperative behavior for each site is not only to protect against the player’s own losses but also to “cooperate” to protect the player’s children.

Relation to Network Structure

How does the network structure of the given input game model relate to its equilibrium? As seen above, the value of each player's equilibrium strategy depends on information from the attacker, the player, and the player's children. From the discussion in the last paragraph, within the context of the given input model, a player's probability of investing at the equilibrium increases with the expected loss that a “bad event,” occurring as a result of a transfer from the player to the player's children, would cause.
Let us explore this last point further by considering the case of uniform-transfer probabilities (also studied by Kunreuther and Heal [3] and Kearns and Ortiz [47]). In that case, transfer probabilities are only a function of the source, not the destination: $\hat{q}_{ij} \equiv \hat{\delta}_i$. The expression for the equilibrium probabilities of those players who have a positive probability of investing simplifies to $x_i^* = 1 - \frac{v + C_i^0}{\hat{p}_i L_i + \hat{\delta}_i \sum_{j \in \mathrm{Ch}(i)} L_j}$, for some constant $v$. The last expression suggests that $\sum_{j \in \mathrm{Ch}(i)} L_j$ is what differentiates the probability of investing between sites: the larger the number of children in the given input game model, the larger the probability of investing at the corresponding equilibrium. A scenario that further supports that conclusion is the homogeneous system first studied in the original IDS paper [3]: $L_i \equiv L$, $\hat{p}_i \equiv \hat{p}$, $\hat{\delta}_i \equiv \hat{\delta}$, and $C_i^0 \equiv C^0$ 11 for all players. Then, we get $x_i^* = 1 - \frac{v + C^0}{L\,(\hat{p} + \hat{\delta}\,|\mathrm{Ch}(i)|)}$. Thus, at equilibrium, the probability of not investing, $1 - x_i^*$, decreases with the number of children player $i$ has, which is implicit in the directed graph over sites in the model given as input.
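To make the homogeneous case concrete, here is a tiny numerical illustration; the parameter values are invented purely for illustration and are not taken from the paper:

```python
# Homogeneous, uniform-transfer example: x_i* = 1 - (v + C0) / (L * (p_hat + delta * |Ch(i)|)).
# All numbers below are purely illustrative.
L, p_hat, delta, C0, v = 100.0, 0.5, 0.1, 5.0, 0.0
for num_children in (0, 2, 10):
    x_star = 1.0 - (v + C0) / (L * (p_hat + delta * num_children))
    print(num_children, round(x_star, 3))   # 0 -> 0.9, 2 -> 0.929, 10 -> 0.967
```

As the equilibrium expression predicts, the probability of investing grows with the number of children.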

On the Attacker’s Equilibrium Strategy

The support of the attacker's strategy at equilibrium, $I^* \equiv \{i \mid y_i^* > 0\}$, has the following properties:
  • players for which the attacker’s cost-to-expected-loss is higher are “selected” first in the algorithm;
  • if the size of that set is $t$, there is a lower bound $\hat{\Delta}$ on the ratios (i.e., $\hat{\Delta}_i > \hat{\Delta}$ for all $i$), and $\sum_{i=1}^n \hat{\Delta}_i > 1$, then $t < 1/\hat{\Delta}$ is an upper bound on the number of players that could potentially be attacked;
  • if we have a game with homogeneous parameters, then the probability of an attack will be uniform over that set I * ; and
  • all but one of the players in that set I * invest in security with some non-zero probability, for almost every parameter setting for IDD games satisfying the conditions of Proposition 5.

4. On the Complexity of Computing an MSNE in Single-Attack IDD Games

Here, we consider the computational complexity of computing MSNE in single-attack IDD games.

4.1. Computing All MSNE of Fully Transfer-Vulnerable Single-Attack IDD Games in Polynomial Time

We now describe an algorithm to compute all MSNE of a fully transfer-vulnerable single-attack IDD game that falls under Proposition 5. We begin by noting that the equilibrium in the case of IDD games with $\sum_{i=1}^n \hat{\Delta}_i \leq 1$, corresponding to cases 1 and 2 of the proposition, has essentially an analytic closed form. Hence, we concentrate on the remaining and most realistic case in large-population games, for which we expect $\sum_{i=1}^n \hat{\Delta}_i > 1$. We start by sorting the indices of the internal players in descending order based on the $\bar{M}_i^0$'s. Let $\mathrm{Val}(l)$ and $\mathrm{Idx}(l)$ be the $l$th value and index in the resulting sorted list, respectively. Find $t$ such that $1 - \hat{\Delta}_{\mathrm{Idx}(t)} \leq \sum_{l=1}^{t-1} \hat{\Delta}_{\mathrm{Idx}(l)} < 1$. Let $k = \max\{l \mid l \geq t \text{ and } \mathrm{Val}(l) = \mathrm{Val}(t)\}$ (i.e., continue down the sorted list of values until a change occurs). For $i = 1, \ldots, t-1$, let $l = \mathrm{Idx}(i)$ and set $x_l^* = 1 - \frac{\mathrm{Val}(t) + C_l^0}{\bar{L}_l^0}$ and $y_l^* = \hat{\Delta}_l$. For $i = k+1, \ldots, n$, let $l = \mathrm{Idx}(i)$ and set $x_l^* = 0$ and $y_l^* = 0$. For $i = t, \ldots, k$, let $l = \mathrm{Idx}(i)$ and set $x_l^* = 0$. Finally, represent the simplex defined by the following constraints: for $i = t, \ldots, k$, let $l = \mathrm{Idx}(i)$ and impose $0 \leq y_l^* \leq \hat{\Delta}_l$ and $\sum_{i=t}^{k} y_{\mathrm{Idx}(i)}^* = 1 - \sum_{i=1}^{t-1} \hat{\Delta}_{\mathrm{Idx}(i)}$. The running time of the algorithm is $O(n \log n)$ (because of sorting).
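The following is a minimal Python sketch of the procedure just described for the case $\sum_{i=1}^n \hat{\Delta}_i > 1$; the function and argument names are ours, and the attack probabilities of the sites in positions $t, \ldots, k$ are deliberately left unresolved, since they are only constrained to the simplex described above:

```python
import numpy as np

def all_msne_fully_transfer_vulnerable(Delta_hat, C0, L0_bar):
    """Sorting-based computation of the MSNE characterized by Proposition 5
    (a sketch; assumes sum(Delta_hat) > 1 and numpy-array-like inputs).

    Delta_hat[i] : cost-to-conditional-expected-loss ratio of site i
    C0[i]        : attacker's cost of directly targeting site i
    L0_bar[i]    : attacker's expected gain from targeting a non-investing site i
    """
    Delta_hat, C0, L0_bar = map(np.asarray, (Delta_hat, C0, L0_bar))
    n = len(Delta_hat)
    M0_bar = L0_bar - C0                      # attacker's net gain from a non-investing site
    idx = np.argsort(-M0_bar)                 # descending order of M0_bar
    val = M0_bar[idx]

    # find t: first sorted position at which the cumulative Delta_hat reaches 1
    csum, t = 0.0, n - 1
    for pos in range(n):
        if csum + Delta_hat[idx[pos]] >= 1.0:
            t = pos
            break
        csum += Delta_hat[idx[pos]]

    # k: last sorted position with the same value as position t
    k = t
    while k + 1 < n and val[k + 1] == val[t]:
        k += 1

    x, y = np.zeros(n), np.zeros(n)
    for pos in range(t):                      # sites attacked with probability Delta_hat
        i = idx[pos]
        x[i] = 1.0 - (val[t] + C0[i]) / L0_bar[i]
        y[i] = Delta_hat[i]
    # positions t..k: x = 0 and the y's form a simplex of total mass 1 - csum,
    # with 0 <= y_i <= Delta_hat[i]; positions k+1..n-1: x = y = 0.
    return x, y, idx[t:k + 1], 1.0 - csum
```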
Theorem 1.
There exists a polynomial-time algorithm to compute all MSNE of any fully transfer-vulnerable single-attack IDD game with parameters that satisfy Assumptions 1 and 2.
In cases in which the equilibrium is not unique, individual equilibria can be generated via simple sampling of either a simple linear system or a simplex. In either case, one can compute a single MSNE from that infinite set in polynomial time [48].
Let us revisit the types of games that may have an infinite MSNE set. Note that the case in which $\sum_{i=1}^n \hat{\Delta}_i = 1$ has (Borel) measure zero and is quite brittle (i.e., adding or removing a player breaks the equality). For the case in which $\sum_{i=1}^n \hat{\Delta}_i > 1$, if the values of the $\bar{M}_i^0$'s are distinct, 12 then there is a unique MSNE. Algorithm A1 in Appendix C provides pseudocode for the exact algorithm just described in this subsection.
In what remains of this section, we will continue to study the problem of computing MSNE in single-attack IDD games under Assumptions 1 and 2 as stated and discussed in Section 3.1.

4.2. Hardness Results on Computing MSNE in General Single-Attack IDD Games

For the purpose of studying the computational complexity of single-attack IDD games, it is natural to view the computation of an MSNE as a two-part process. Given an attacker's strategy, we need to determine the MSNE of the underlying game of the sites, or sites-game for short. The sites-game could have many MSNE, and each MSNE could yield a different utility for the attacker (and the sites). Naively, the attacker can verify whether each of the MSNE is in the attacker's best response. Clearly, doing so depends on whether we can efficiently compute all MSNE in the sites-game, which of course depends on the given attacker's strategy. For example, if $\sum_{i=1}^n y_i = 0$, then the sites-game would have “none invest” as its only outcome, because of Assumption 1 in Section 3.1.
Our goal in this subsection is to formally prove that there is an instance of a single-attack IDD game, and an attacker’s strategy in that instance, with the property that, if we fix that attacker’s strategy, we cannot compute all of the MSNE efficiently in the underlying sites-game, unless P = N P . The implication is that the existence of an efficient algorithm to compute an MSNE of IDD games based on the natural two-part process described in the previous paragraph, (i.e., checking whether each attacker’s strategy can be part of an MSNE), is unlikely.
To formally prove that it is unlikely that we can always tractably compute all of the MSNE in an instance of the sites-games, as induced by an IDD game and an attacker's strategy, we consider the Pure-Nash-Extension computational problem [47] for binary-action $n$-player games, which is NP-complete for graphical IDS games (see Definition 2). In the Pure-Nash-Extension problem, the input is given by a description of the game and a partial assignment to a joint pure strategy $\mathbf{a} \in \{0, 1, *\}^n$, where ‘*’ is our way of indicating that some components of the joint pure strategy $\mathbf{a}$ do not have assignments yet. In fact, the computational problem is precisely to determine whether there exists a complete assignment, i.e., a joint pure strategy $\mathbf{a}' \in \{0, 1\}^n$ consistent with $\mathbf{a}$ in the following sense: for all $i$, if $a_i \in \{0,1\}$, then we must have $a'_i = a_i$, but if $a_i = *$, then we are free to assign any value to $a'_i \in \{0,1\}$, as long as the resulting joint pure strategy $\mathbf{a}'$ is a PSNE of the given input game model. 13
Theorem 2.
Consider a variant of single-attack IDD games in which Assumptions 1 and 2 hold, and $\sum_{i=1}^n \hat{\Delta}_i \leq 1$. There is an attacker's strategy $P$ defined by $\mathbf{y}$ such that, if we fix $\mathbf{y}$, then the Pure-Nash-Extension problem for the induced $n$-player sites-game is NP-complete.
The proof of the theorem is in Appendix B.9. In the worst case, we need to consider the y just described, should other strategies fail to be a part of any MSNE. Another challenge is that even if we can compute all exact MSNE, there could be exponentially many of them to check. In the next section, we provide provably efficient algorithms to compute an approximate MSNE in tree-like subgraph structures over the sites only.

4.3. FPTAS to Compute Approximate MSNE of Tree-Like Single-Attack IDD Games

In this section, we focus on the question of computing approximate MSNE, a concept which the upcoming Definition 10 formalizes in our context, in a subclass of IDD games. In particular, we focus our study on the case of single-attack IDD games in which the game subgraph composed of the sites is a directed tree, although our technical result holds for slightly more general tree-like graph structures among the sites. Despite the apparent simplicity of the subgraph over the sites, one can envision very important real-world applications, such as the protection of supply chains and other hierarchical structures (e.g., see Agiwal and Mohtadi [23]). Recent work on optimization problems related to the power grid uses a directed-tree graphical model as the underlying structure of the electricity distribution network [24]. We note that the attacker is connected to all of the sites even if we do not point it out explicitly.
Given that, under reasonable conditions, no IDD game has a PSNE (Proposition 2), we shift our focus to computing an MSNE. In Section 4.1, we provided an algorithm to compute all exact MSNE in instances of IDD games in which $\alpha_i = 1$ for all sites $i$ (i.e., investment cannot protect the sites from indirect risk). The result we present in this subsection is for computing approximate MSNE, but it holds for general $\alpha$. We now formally define approximate MSNE in the context of this subsection.
Definition 10.
A mixed strategy ( x * , y * ) is an ϵ-MSNE of a single-attack IDD game if
1. 
for all $i \in [n]$, $M_i(x_i^*, \mathbf{x}_{\mathrm{Pa}(i)}^*, \mathbf{y}_{\mathrm{PF}(i)}^*) \leq \min_{x_i} M_i(x_i, \mathbf{x}_{\mathrm{Pa}(i)}^*, \mathbf{y}_{\mathrm{PF}(i)}^*) + \epsilon$, and
2. 
$U(\mathbf{x}^*, \mathbf{y}^*) \geq \max_{\mathbf{y}} U(\mathbf{x}^*, \mathbf{y}) - \epsilon = \max\left(\max_{i \in [n]} U_i(\mathbf{x}^*),\, 0\right) - \epsilon$,
where the M i ’s, U, and U i ’s are as defined in Equations (38)–(40) in Proposition 4, respectively.
An exact MSNE is equivalent to a 0-MSNE. Moreover, we assume that all of the cost and utility functions are individually normalized to $[0,1]$ and $\epsilon \in [0,1]$; otherwise, $\epsilon$ is not truly well-defined.
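For concreteness, the two conditions of Definition 10 can be checked numerically as in the sketch below; `M`, `U`, and `U_site` are assumed callables implementing the normalized Equations (38)-(40), which we do not reproduce here:

```python
def is_eps_msne(x, y, M, U, U_site, eps):
    """Numerical check of Definition 10 (a sketch with assumed callables).

    M(i, xi, x, y) : expected cost of site i when it plays xi against (x, y)
    U(x, y)        : attacker's expected payoff
    U_site(i, x)   : attacker's expected payoff from attacking site i with probability 1
    """
    n = len(x)
    # Condition 1: no site can lower its expected cost by more than eps.
    for i in range(n):
        best = min(M(i, 0.0, x, y), M(i, 1.0, x, y))  # cost is linear in x_i, so endpoints suffice
        if M(i, x[i], x, y) > best + eps:
            return False
    # Condition 2: the attacker cannot gain more than eps by deviating;
    # its best deviation is either a pure attack on some site or no attack at all.
    best_attack = max(max(U_site(i, x) for i in range(n)), 0.0)
    return U(x, y) >= best_attack - eps
```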
The following is one of our main technical results about computing approximate MSNE in IDD games with arbitrary α with a directed tree network structure over the sites.
Theorem 3.
There exists an FPTAS to compute an ϵ-MSNE in single-attack IDD games with directed tree-like graphs G over the sites, under Assumptions 1 and 2 on the game parameters.
The proof is in Appendix D.
Note that Theorem 3 is nontrivial within the context of the state of the art in computational game theory. We are working with a graph structure in which there is one node (the attacker) connected to all the nodes of the tree-like graph $G$ (the sites). Naively applying the traditional, well-known dynamic programming (DP) algorithms of [37] and [49] to our problem would not give us an FPTAS. In fact, the game-representation size those algorithms assume is exponential in the number of neighbors, in contrast to our linear representation size. Moreover, finding an ϵ-MSNE in general degree-3 graphical games is PPAD-hard [49], even if the payoffs are additive [29] or, more generally, the game is a graphical polymatrix game in parametric form [50], as is the case for our model under at most one simultaneous attack (Assumption 3).

5. Experiments

In the previous section (Section 4), we established the theoretical characteristics and computational tractability of single simultaneous attack IDD games with the highest transfer vulnerability parameter: α i = 1 . We also established hardness results related to computing all MSNE in single-attack IDD games with arbitrary graph structures and α values. We then considered approximate MSNE in the same class of games and provided an FPTAS for cases in which the game subgraph over the sites is a directed tree-like graph. In this section, partly motivated by security problems in cyberspace, we concentrate instead on empirically evaluating the other extreme of transfer vulnerability: games with low α i values (i.e., near 0), so that investing in security considerably reduces the transfer risk. We also consider a complex graph structure found in the real-world Internet corresponding to the AS-level network, as measured in March 2010 by DIMES.
Our main objectives for the experiments presented here are
  • to demonstrate that a simple heuristic, best-response-gradient dynamics (BRGD), is practically effective in computing an ϵ-MSNE, up to $\epsilon = \Omega(10^{-3})$, in a very large class of IDD games with realistic Internet-scale network graphs in a reasonable amount of time, for cases in which the transfer vulnerabilities $\alpha_i$ are low;
  • to explore the general structural and computational characteristics of (approximate) MSNE in such IDD games, including their dependence on the underlying network structure of the game (and approximation quality); and
  • to evaluate and illustrate the effectiveness of an improved version of the simple heuristic, which uses the concept of smooth best-response dynamics (SBRD) for the attacker, for computing ϵ-MSNE for ϵ values that are an order of magnitude lower (i.e., $\epsilon = \Omega(10^{-4})$).
BRGD is a well-known technique from the literature on learning in games [35]. We refer the reader to Singh et al. [51] for more information on properties of BRGD, and to Kearns and Ortiz [47], Heal and Kunreuther [4], and Kearns [52] for examples of its application within an IDS context. Here, we use BRGD as a tool to compute an ϵ-approximate MSNE (Definition 10), as was the case for the other previous applications of the technique in IDS contexts. Our particular implementation of BRGD begins by initializing $x_i$ and $y_i$ in $[0,1]$ for all sites $i$, such that $\sum_{i=1}^n y_i \leq 1$. At each round, BRGD updates, simultaneously for each $i$,
$$x_i \leftarrow x_i - 10 \times \left(M_i(1, \mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{PF}(i)}) - M_i(0, \mathbf{x}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{PF}(i)})\right) \quad \text{and} \quad y_i \leftarrow y_i + 10 \times \left(U_i(\mathbf{x}) - U(\mathbf{x}, \mathbf{y})\right),$$
where the U i ’s are as defined in Equation (40); the M i ’s (Equation (38)) and U (Equation (39)) functions are normalized to [ 0 , 1 ] ; and the constant value 10 is the learning-rate/step-size in our case.
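A minimal sketch of one synchronous BRGD round under the same conventions (the callables `M`, `U`, and `U_site` again stand in for the normalized Equations (38)-(40)); the projection of the iterates back onto the feasible sets at the end is our own addition, since the text only states the raw updates:

```python
import numpy as np

def brgd_step(x, y, M, U, U_site, step=10.0):
    """One synchronous best-response-gradient update for all players (a sketch)."""
    n = len(x)
    new_x = np.array([x[i] - step * (M(i, 1.0, x, y) - M(i, 0.0, x, y)) for i in range(n)])
    new_y = np.array([y[i] + step * (U_site(i, x) - U(x, y)) for i in range(n)])
    # keep the iterates feasible: x_i in [0, 1], y nonnegative with total mass at most 1
    # (this projection is an assumption on our part, not stated in the text)
    new_x = np.clip(new_x, 0.0, 1.0)
    new_y = np.clip(new_y, 0.0, None)
    if new_y.sum() > 1.0:
        new_y /= new_y.sum()
    return new_x, new_y
```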
We obtained the latest version (March 2010 at the time) of the real structure and topology of the AS-level Internet from DIMES (netdimes.org) [34]. The AS-level network has 27,106 nodes (683 isolated) and 100,402 directed edges; the graph length (diameter) is 6253, the density (number of edges divided by number of possible edges) is $1.9920 \times 10^{-5}$, and the average (in and out) degree is 3.70, with ≈76.93% and 2.59% of the nodes having zero indegree and outdegree, respectively. Figure 1 shows the indegree and outdegree distributions, and Figure 2 shows a scatter plot of the indegree and outdegree of the graph.
All the IDD games in the experiments presented in this section have this network structure.
For simplicity, we call Internet games the class of IDD games with the AS-level network graph and low $\alpha_i$ values. We considered various settings for the model parameters of Internet games: a single instance with specific fixed values, and several instances generated at random (see Table 1 for details). The attacker's cost-to-attack parameter for each node $i$ is always held constant: $C_i^0 = 10^{-6}$. For each run of each experiment, we ran BRGD with randomly-generated initial conditions (i.e., random initializations of the players' mixed strategies): $x_i \sim \mathrm{Uniform}([0,1])$, i.i.d. for all $i$, and $\mathbf{y}$ is a probability distribution generated uniformly at random, and independently of $\mathbf{x}$, from the set of all probability mass functions over $n+1$ events. 14 The initialization of the transfer-probability parameters of a node essentially gives higher transfer probability to children with high (total) degree (because they are potentially “more popular”). The initialization also enforces $\hat{p}_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} = 0.9$. Other initializations are possible, but we did not explore them here.
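The sketch below illustrates one way to implement the random initialization just described; the degree-weighted split of the total mass 0.9 between $\hat{p}_i$ and the $\hat{q}_{ij}$'s is one plausible reading of the text rather than the paper's exact generator, and the helper assumes a directed graph object with nodes labeled $0, \ldots, n-1$ that exposes `successors()` and `degree()` (e.g., a networkx DiGraph):

```python
import numpy as np

def random_internet_game_init(graph, seed=None):
    """Random initial strategies and transfer probabilities for an IG instance (a sketch)."""
    rng = np.random.default_rng(seed)
    n = graph.number_of_nodes()
    x = rng.uniform(0.0, 1.0, size=n)              # sites' initial mixed strategies, i.i.d. Uniform[0,1]
    y_full = rng.dirichlet(np.ones(n + 1))         # uniform over the (n+1)-event simplex
    y = y_full[1:]                                 # y_full[0] plays the role of the no-attack mass

    p_hat, q_hat = np.zeros(n), {}
    for i in graph.nodes():
        children = list(graph.successors(i))
        weights = np.array([1.0 + graph.degree(j) for j in children], dtype=float)
        # split the total mass 0.9 between a direct hit on i and transfers to its children,
        # giving higher-degree ("more popular") children a larger share
        shares = 0.9 * np.append(1.0, weights) / (1.0 + weights.sum())
        p_hat[i] = shares[0]
        for j, q in zip(children, shares[1:]):
            q_hat[(i, j)] = q                      # enforces p_hat[i] + sum_j q_hat[(i, j)] = 0.9
    return x, y, p_hat, q_hat
```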

5.1. Computing an ϵ-MSNE Using BRGD

Given the lack of theoretical guarantees on the convergence rate of BRGD, we began our empirical study by evaluating the convergence and running-time behavior of BRGD on Internet games. We ran ten simulations for each ϵ value and recorded the number of iterations until convergence (up to 2000 iterations). Figure 3 presents the number of iterations taken by BRGD to compute an ϵ-MSNE as a function of ϵ. All simulations in this experiment converged (except for ϵ = 0.001, for which two of the runs on the single instance and all of the runs on randomly-generated instances did not converge). Each iteration took roughly 1–2 s (wall clock). Hence, we can use BRGD to consistently compute an ϵ-MSNE of a 27K-player Internet game in a few seconds.
We now concentrate on the empirical study of the structural characteristics of the ϵ-MSNE found by BRGD. We experimented on both the single and randomly-generated Internet game instances. We discuss the typical behavior of the attacker and the sites in an ϵ-MSNE, and the typical relationship between ϵ-MSNE and network structure.

5.1.1. A Single Internet Game

We first studied the characteristics of the ϵ-MSNE of a single Internet game instance. The only source of randomness in these experiments comes from BRGD’s initial conditions (i.e., the initialization of the mixed strategies x and y ). BRGD consistently found exact MSNE (i.e., ϵ = 0 ) in all runs.

Players’ Equilibrium Behavior

In fact, we consistently found that the attacker displays only two types of “extreme” equilibrium behavior, corresponding to the two kinds of MSNE BRGD found for the single Internet game: it places positive probability of a direct attack on either almost all nodes (Strategy A) or a small subset of nodes (Strategy B). Figure 4 shows a plot of the typical probability of direct attack for those two equilibrium strategies for the attacker when BRGD stops. In both strategies, a relatively small number of nodes (about 1K out of 27K) have a reasonably high (and near uniform) probability of direct attack. In Strategy A, however, every node has a positive probability of being the target of a direct attack, albeit relatively very low for most; this is in contrast to Strategy B, where most nodes are fully immune from a direct attack. Interestingly, none of the nodes invests in either MSNE: $x_i^* = 0$ for all nodes $i$. Thus, in this particular Internet game instance, all site nodes are willing to risk an attack.

Relation to Network Structure

We found that the nodes with (relatively) high probability of direct attack are at the “fringe” of the graph (i.e., they have low or no degree). In Strategy A, fringe nodes (with mostly 0 or 1 outdegree) have a relatively higher probability of direct attack than nodes with higher outdegree. Similarly, in Strategy B, the small subset of nodes that are potential targets of a direct attack have relatively low outdegree (mostly 0, and 0.0067 on average; this is in contrast to the average outdegree of 3.9639 for the nodes immune from direct attack). Figure 5 shows the relation between the probability of attack and the outdegree, and the relation between the indegree and the outdegree, of typical simulation runs for Strategy A and Strategy B, respectively. We emphasize that these observations are consistent throughout all runs of the experiment. In short, we consistently found that the nodes with low outdegree are more likely to be attacked directly in the single Internet game instance.

5.1.2. Randomly-Generated Internet Games

We now present results from experiments on ten randomly-generated Internet game instances, a single instance for each $\epsilon \in \{0.001, 0.002, \ldots, 0.009\}$. For simplicity, we present the result of a single BRGD run on each instance. 15

Behavior of the Players

Figure 6 shows plots of the attacker's probability of direct attack and histograms of the nodes' probability of investment in a typical run of BRGD on each randomly-generated Internet game instance for each ϵ value.
The plots suggest that approximate MSNE found by BRGD is quite sensitive to the ϵ value: as ϵ decreases, the attacker tends to “spread the risk” by selecting a larger set of nodes as potential targets for a direct attack, thus lowering the probability of a direct attack on any individual node; the nodes, on the other hand, tend to deviate from (almost) fully investing and (almost) not investing to a more uniform mixed strategy (i.e., investing or not investing with roughly equal probability).
A more thorough study confirms the above observation about the attacker, as Figure 7 illustrates. Figure 7 shows: (a) the number of iterations BRGD takes to converge to an ϵ-MSNE (top left); (b) the number of nodes that are being targeted (top right); (c) the highest probability of attack (bottom left); and (d) a scatter plot of the number of nodes being targeted against the highest probability of attack (bottom right), for each of the ten simulations. From this figure, we observe that, as ϵ decreases, (1) the number of iterations it takes to converge to an ϵ-MSNE increases (top left); (2) the number of nodes that are being targeted increases (top right); and (3) the highest probability of attack decreases (bottom left). From the bottom-right graph of Figure 7, we observe that there is a negative correlation between the number of nodes being targeted and the highest probability of an attack: as the highest probability of an attack increases, the number of nodes being targeted decreases.
A possible reason to explain the behavior of the sites is that as more nodes become potential targets of a direct attack, more nodes with initial mixed strategies close to the “extreme” (i.e., very high or very low probabilities of investing) will move closer to a more uniform (and thus less “predictable”) investment distribution.

Relation to Network Structure

Figure 8 presents some experimental results on the relationship between network structure and the attacker's equilibrium behavior. The graphs show, for each ϵ value, the average indegree and outdegree of those nodes that are potential targets of a direct attack at an ϵ-MSNE, across the BRGD runs of the ten randomly-generated IG instances. In general, both the average indegree and outdegree of the nodes that are potential targets of a direct attack tend to increase as ϵ decreases. One possible reason for this finding could be the fact that the values of $\alpha_i$ generated for each player are relatively low (i.e., uniformly distributed over $[0, \frac{1}{40}]$); yet, interestingly, this pattern is the exact opposite of what the theory predicts for the case $\alpha_i = 1$.

5.1.3. Case Study: A Randomly-Generated Instance of an IG at 0.005 -MSNE

In this subsection, we provide a detailed topological study of a randomly-generated IG instance at 0.005 -MSNE.

Topological Structure of an Attack to the Internet

In Figure 9, we plot the topological structure of the top sites (in this case 402) with the highest y i and their immediate neighbors at 0.005 -MSNE.
Notice that there are a few isolated nodes and a few small “node-parent-children” networks, but in general, the largest network component tends to have a cluster-like structure. Figure 9 also shows the number of connected components of the network for the subgraph of the nodes most likely to be attacked (and their neighbors), as well as those of the network for the subgraph of the nodes with the highest probability of investing, along with some additional properties of the graphs.
Figure 10 and Figure 11 show the indegree and outdegree of the 402 nodes with non-zero $y_i$ and of the remaining 26,704 nodes with zero $y_i$, respectively. We did not observe in our experiments any strong relationship between the $y_i$'s or $x_i$'s in the ϵ-MSNE we found and the corresponding indegree or outdegree of node $i$. However, we observed that, among the nodes with non-zero probability of attack, there was a slight tendency for the nodes with the lowest probability of attack to also have low outdegree, and for the nodes with the highest probability of investing to also have low outdegree, but that tendency did not seem significant.
As mentioned earlier (Section 5.1.2), the behavior of the site players is quite sensitive to the ϵ value. Therefore, this could be one of the reasons that these nodes (with the highest y i ) have low outdegree.

5.2. A Heuristic to Compute ϵ-MSNE Based on Smooth Best-Response for the Attacker

In this subsection, we introduce an improved, simple heuristic to compute ϵ-MSNE on arbitrary graphs for lower ϵ values than those considered in the previous subsection (Section 5.1). Here, we evaluate the proposed hybrid BRGD-SBRD heuristic for computing ϵ-MSNE in IDD games using IGs randomly generated as described and used previously in this section.
We first look at the attacker's behavior at an ϵ-MSNE we obtain using BRGD in IGs. We generate a few IG instances and run BRGD until it converges to an ϵ-MSNE for $\epsilon \in \{0.001, 0.002, \ldots, 0.009\}$. We observe that, in a 0.001-MSNE $(\mathbf{x}^*, \mathbf{y}^*)$, (1) there is a positive, almost-deterministic correlation between the probability of an attack $y_i^*$ and each component $y_i^* U_i(\mathbf{x}^*)$ of the expected utility the attacker obtains from each site $i$, where $U_i(\mathbf{x}^*)$ is as defined in Equation (40), and (2) the attacker always targets the sites with the highest potential utility $\max_i U_i(\mathbf{x}^*)$ (i.e., the maximum utility the attacker can get by attacking any site with probability 1). This observation is consistent across IGs and holds across the different ϵ-MSNE for the various ϵ values. Figure 12 shows evidence of this behavior. Indeed, the main takeaway is that the attacker tends to favor (or target) the sites with the highest expected utility, and the attack probabilities appear to follow a smooth distributional form. Note that, while one might expect this behavior given the way BRGD works, there is no theoretical guarantee that such behavior occurs at an approximate MSNE. This is because, in principle, any player may achieve a given approximation level without necessarily assigning probability to each pure strategy in a way that is monotonically related to the expected payoff of executing that pure strategy deterministically, let alone the linear relationship we observe in the left-hand-side plot (a) of Figure 12. Similarly, given the last two statements, it is reassuring that the attacker actually places positive probability of attack only on those sites for which it would obtain the highest maximum expected payoff, as the right-hand-side plot (b) of Figure 12 shows.
In what follows, we assume that the attacker uses smoothed best-response [35] and that the attack distribution is a quantal-response mixed strategy [53] (i.e., it has the form of a Gibbs-Boltzmann distribution):
$$y_i^{\lambda}(\mathbf{x}) \equiv \frac{\exp\left(U_i(x_i, \mathbf{x}_{\mathrm{Ch}(i)})/\lambda\right)}{\sum_{j=0}^{n} \exp\left(U_j(x_j, \mathbf{x}_{\mathrm{Ch}(j)})/\lambda\right)},$$
where the $j = 0$ term corresponds to no attack, i.e., $U_0$ is the normalized version of $U(\mathbf{x}, \mathbf{0}) = 0$ (after normalizing $U$ to $[0,1]$), and $\lambda$ is a positive constant. Thus, the attacker's best-response correspondence is always a singleton, $\mathrm{BR}_0(\mathbf{x}) \equiv \{\mathbf{y}^{\lambda}(\mathbf{x})\}$, and therefore essentially a strictly positive, vector-valued function of $\mathbf{x}$, for any given $\lambda$. This update results from a slight modification of the attacker's payoff function that adds a “controlled” entropic term to favor “diverse” attack probabilities, where $\lambda$ controls the diversity (i.e., the level of non-determinism of the attacker's best response): formally, given a positive real value $\lambda$, the attacker's payoff is now $U(\mathbf{x}, \mathbf{y}) + \lambda H(\mathbf{y})$, where $H(\mathbf{y}) \equiv \sum_{i=0}^{n} y_i \ln \frac{1}{y_i}$ is the standard (Shannon) entropy function. The parameter $\lambda$ is really the precision or temperature parameter of the Gibbs-Boltzmann distribution: increasing $\lambda$ leads toward the uniform distribution, while decreasing $\lambda$ restricts the effect of the entropic term and thus produces ϵ-MSNE with lower ϵ. In fact, at temperature $\lambda = 0$, we recover the attacker's original best response.
The form of the attacker's mixed strategy given in Equation (45) has several natural, attractive properties: (1) sites with higher utility to the attacker have a higher probability of an attack, and (2) the respective expected utility and the probability of an attack are positively correlated (a higher probability of attack implies a higher expected utility gain). We observe these characteristics in our experiments (Figure 12).
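A small sketch of the smooth best-response computation in Equation (45); the function name is ours, and the max-subtraction is included only for numerical stability:

```python
import numpy as np

def smooth_best_response(U_values, lam):
    """Gibbs-Boltzmann (quantal-response) attack distribution (a sketch).

    U_values : length-(n+1) array of normalized expected payoffs, with index 0
               corresponding to "no attack" (payoff 0 after normalization)
    lam      : temperature/precision parameter (smaller values approach the exact best response)
    """
    u = np.asarray(U_values, dtype=float)
    z = (u - u.max()) / lam          # subtract the max before exponentiating, for stability
    w = np.exp(z)
    return w / w.sum()               # index 0: probability of no attack; index i: attack on site i
```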
Based on the previous discussion, we propose the following heuristic, which we refer to as the hybrid BRGD-SBRD heuristic, to compute ϵ-MSNE. The heuristic starts by initializing every site's investment level $x_i$ to 0. It then updates the probability of attack for each site and increments the investment level by a small amount (currently 0.001) for every site that does not satisfy the following condition: $R_i \geq y_i \hat{p}_i + (1 - \alpha_i) \sum_{j \in \mathrm{Pa}(i)} y_j (1 - x_j) \hat{q}_{ji}$. The algorithm terminates either when all of the sites satisfy the condition or when it reaches the maximum number of iterations. The condition $R_i \geq y_i \hat{p}_i + (1 - \alpha_i) \sum_{j \in \mathrm{Pa}(i)} y_j (1 - x_j) \hat{q}_{ji}$ for site $i$ is the threshold for $i$ not to invest. A nice property is that, given the attacker's Gibbs-Boltzmann distribution, for any site $i$ and given the strategies of the others, the probability of attack on $i$ decreases monotonically with $x_i$. As a result, no site has an incentive to unilaterally increase its investment in a way that violates the constraint above. Consequently, to justify the use of the condition in the hybrid BRGD-SBRD heuristic in IGs, we observe that, in all of the IGs we generated, the percentage of sites that satisfy the above condition at the 0.001-MSNE we obtained is ≥98%. The quality of an ϵ-MSNE obtained by the hybrid BRGD-SBRD heuristic depends on the percentage of sites that satisfy the condition at an ϵ-MSNE. Note that, if a high percentage of the sites does not satisfy the condition at the ϵ-MSNE, we can reverse the heuristic by initializing every site's investment level $x_i$ to 1 and lowering the $x_i$'s until all sites satisfy the opposite constraint.
Algorithm 1 provides pseudocode for the resulting hybrid BRGD-SBRD heuristic, in which the attacker uses smooth best-responses while the sites use best-response gradient, to compute an approximate MSNE in arbitrary IDD games as discussed.
Algorithm 1: Heuristic Based on Hybrid of BRGD and Smooth Best-Response Dynamics (SBRD) to Compute an ϵ-MSNE in Single-Attack IDD Games
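The following is a minimal Python sketch of the loop described above; it is our own illustration rather than a transcription of Algorithm 1, and the threshold array `R`, the callable `U_site`, and the remaining helper names are assumptions:

```python
import numpy as np

def hybrid_brgd_sbrd(p_hat, q_hat, alpha, R, parents, U_site, lam=0.001,
                     step=0.001, max_iters=10_000):
    """Hybrid heuristic sketch: smooth best response for the attacker,
    small investment increments for the sites violating the no-invest condition.

    R[i]       : site i's "do not invest" threshold, as used in the condition in the text
    parents[i] : list of parent sites of i
    U_site     : callable returning the attacker's normalized expected payoff U_i(x)
    """
    n = len(R)
    x = np.zeros(n)                                   # start with no site investing
    y = np.zeros(n)
    for _ in range(max_iters):
        # attacker: Gibbs-Boltzmann smooth best response over "no attack" plus the n sites
        u = np.concatenate(([0.0], [U_site(i, x) for i in range(n)]))
        w = np.exp((u - u.max()) / lam)
        y = (w / w.sum())[1:]                         # drop the no-attack component

        # sites: raise investment slightly wherever the "do not invest" condition fails
        violated = [i for i in range(n)
                    if R[i] < y[i] * p_hat[i] + (1.0 - alpha[i]) * sum(
                        y[j] * (1.0 - x[j]) * q_hat[(j, i)] for j in parents[i])]
        if not violated:
            break
        x[violated] = np.minimum(x[violated] + step, 1.0)
    return x, y
```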

5.3. Evaluation of the Hybrid BRGD-SBRD Heuristic on Internet Games

To evaluate the hybrid BRGD-SBRD heuristic, we randomly generated ten IGs and compared the results to those obtained using BRGD exclusively.
The first question we address is: what is the relation between the constant $\lambda$ and the actual approximation quality ϵ achieved in practice? Table 2 shows the impact $\lambda$ has on ϵ, for the smallest ϵ-MSNE we can obtain on one IG instance (the others are similar). The take-home message is that ϵ decreases with $\lambda$, as expected. For the remainder of this section, we fix $\lambda = 0.001$ when comparing to BRGD, because BRGD cannot find an ϵ-MSNE with ϵ below 0.0009 within 10 K iterations (≈1 s per iteration).

5.3.1. Comparing Running Time of BRGD and the Proposed Hybrid BRGD-SBRD Heuristic

Next, we study the convergence time to an ϵ-MSNE on the ten IG instances when using (1) BRGD exclusively and (2) the hybrid heuristic. We consider the running time in terms of the number of iterations that the algorithm takes to achieve a particular ϵ-MSNE. Each iteration takes roughly 1 s for both BRGD and the hybrid heuristic. Figure 13a shows that the running time of the hybrid heuristic is considerably lower than that of BRGD. Moreover, the number of iterations the hybrid heuristic needs grows very slowly as ϵ decreases (it is almost constant), in contrast to BRGD. Not only is the hybrid heuristic faster than BRGD, but it can also find ϵ-MSNE with lower ϵ values.
As an application, we can run our heuristic until it reaches an ϵ-MSNE or converges, and then use the resulting ϵ-MSNE to initialize BRGD. Figure 14 shows the relative improvement over the hybrid heuristic on some IGs: it improves our 0.001/0.0009-MSNE to a 0.0006-MSNE.

5.3.2. Attacker and Sites’ Equilibrium Behavior

We study whether the equilibrium behavior of the attacker and the sites that we observed and discussed in Section 5.1 remains the same, or similar, as we lower ϵ. The following results are a direct output of our heuristic. Figure 13b shows the attack distribution (left) and the investment distribution (right) at an ϵ-MSNE, for different ϵ values, on an IG instance. Our results are consistent with those in Section 5.1, and they persist for lower ϵ values. We see that, as ϵ decreases, the attacker targets more sites while lowering the probability of a direct attack on each, and more sites move from not investing to partially investing.

5.3.3. Network Structure of an Attack

Next, we present experimental results on the average indegree and outdegree of the targeted sites at an ϵ-MSNE to understand the “network structure of the attack,” as we did in Section 5.1. Figure 15 shows exactly this. To summarize our experimental results, we clearly observe that, as ϵ decreases, both the average indegree and outdegree increase. The results for lower ϵ values indicate that the average indegree and outdegree stabilize and converge as ϵ decreases. This is also consistent with the observations made in Section 5.1, which adds evidence for the effectiveness of our proposed heuristic at very low ϵ values.

6. Future Work, Open Problems, and a Summary of Our Contributions

We end by discussing future work and some open problems, and providing a brief summary of our contributions.

6.1. Future Work

In this subsection, we discuss potential avenues of future work.

6.1.1. Attackers Can Affect Transfer Probabilities

We could extend the strategy space of the attacker by allowing the attacker to affect transfers. One particular instantiation of this idea is to have the network graph's edges, as opposed to just its nodes, represent the attacker's targets. The attacker's pure strategies would now be based on the edges $(i, j)$, such that a binary action variable $b_{ij}$ would represent the attack, taking a value of one if the attacker wants to attack $j$, but only via a transfer from $i$.

6.1.2. Multiple Attackers with Multiple Attacks

While dealing with multiple attackers is outside the scope of this paper, we have in fact extended the model in a natural way in that direction. We refer the reader to Appendix E for a discussion of this setting. In short, we were able to extend the representation results, but not the characterization or computational/algorithmic results. We leave that endeavor for future work. In principle, we can use BRGD and the hybrid BRGD-SBRD as heuristics in the case of multiple attackers.

6.1.3. Learning IDD Games

Another interesting research direction is the adaptation of machine-learning techniques to infer IDD games from data for the purpose of compactly representing stable outcomes and performing strategic inference.

6.1.4. Other Open Problems

A thorough characterization of the equilibria of interdependent defense games is lacking, especially for the case of multiple potential attacks by multiple aggressors. We also need a better understanding of how the network structure of the game, and restrictions on the aggressors' available strategies, affect the equilibria of the game.
Many computational problems in the context of interdependent defense games remain open.
  • What is the computational complexity of the problem of computing equilibria of single-attack IDD games with arbitrary transfer vulnerability? (e.g., a single, multiple or all MSNE? MSNE with particular properties?)
  • What is the computational complexity of the problem of identifying “influential” agents, in the sense of Irfan and Ortiz [41] (see also, Kleinberg [54] and the references therein)?
  • How is the complexity affected by network structure or restrictions on the aggressors’ available strategies? While we provide FPTAS for approximate MSNE in single-attack IDD games with arbitrary transfer vulnerability values and directed tree graphs over the sites, it is fair to say that there is more work to do in that direction. A particularly interesting question is whether we can establish PPAD-completeness results for arbitrary single-attack IDD games. That would strengthen the hardness (NP-complete) result we present in Section 4.2. The relationship between single-attack IDD games and graphical polymatrix games seems particularly close. Perhaps one may be able to apply existing results on the PPAD-completeness of certain classes of graphical polymatrix games to establish PPAD-completeness in our context. A promising direction is to pursue potential reductions using results such as those of (Cai and Daskalakis [50], Theorem 1.2) and (Daskalakis et al. [29], Lemma 6.3) on (graphical) polymatrix games with “strictly competitive games on the edges” and on 3-additive graphical Nash, respectively.

6.2. Summary of Contributions

In this paper, we propose IDD games, an adaptation of IDS games to settings in which the attack is deliberate and the attacker is explicitly modeled. We consider the special case of a single-attack scenario as a way to limit the attacker's power, and prove that no PSNE exists in this subclass of games. We then consider randomized strategies and derive the appropriate expressions for the expected costs of the internal players and the expected payoff of the attacker, and consequently their respective best-response correspondences.
We study in depth the case in which only one attack is possible and investment in security does nothing to protect the players from the transfer risk (which is the same implicit assumption made in the original IDS work). We completely characterize the MSNE of such a subclass of IDD games. We prove that almost every game in that subclass has a unique MSNE, which can be almost-fully determined analytically.
That result immediately leads to a simple algorithm for computing the equilibrium that only requires sorting the players by the attacker's cost-to-expected-loss gain. Hence, the algorithm runs in $O(n \log n)$ time, where $n$ is the number of internal players.
We then discuss some corollaries of the characterization and highlight the connection between the network structure and the players' investments at equilibrium. In particular, we show how investment probabilities at equilibrium essentially reflect some degree of “cooperation” (in a fully noncooperative setting). It turns out that, at an equilibrium, players want to protect their own children in the network graph; that is, each player $i$ wants to protect the set of players to which player $i$ can transfer the “bad event.” Yet, somewhat counterintuitively at first, each player $i$'s security investment level has no direct dependence on the player's parents, who are the true source of the risk to the player. In particular, we show how the probability of investment increases with the fixed, but arbitrary, number of children in the given directed network.
We also provided a hardness result for computing exact MSNE in arbitrary single-attack IDD games, and we designed an FPTAS to compute an ϵ-MSNE in games with directed trees over the sites, despite the fact that the attacker's payoff is a function of all the sites.
Finally, inspired by problems in cyber and network security, we designed a generator of random instances of IDD games based on a real-world instance of the AS-level Internet graph. We call the resulting instances Internet games (IGs). We studied simple heuristics, based on BRGD, SBRD, and a hybrid of the two, for computing approximate MSNE in IGs. We evaluated the running times of the different heuristics and found that BRGD, when run exclusively, is effective at computing ϵ-MSNE for ϵ values as low as 0.001. We studied the characteristics of the approximate MSNE that BRGD produces, in terms of both the sites' and the attacker's behavior, and their relationship to the IGs' network structure. We then proposed and studied a hybrid BRGD-SBRD heuristic in which the attacker uses SBRD while the sites use BRGD. It allows us to compute, very quickly in IG instances, ϵ-MSNE with ϵ values an order of magnitude lower. Using the hybrid heuristic, we verified some of the conclusions we had obtained using BRGD alone, but now for considerably lower ϵ values. We also showed how we can improve both the speed and the quality of BRGD alone by first running the hybrid heuristic and then starting BRGD from the ϵ-MSNE that the hybrid heuristic found.
We view our work as an initial illustration of the potential for the type of quantitative analysis that may be possible in network security settings, even at the real-world Internet scale.

Acknowledgments

This work was funded in part by a National Science Foundation (NSF) Graduate Research Fellowship, an Office of Naval Research’s MINERVA Grant, and an NSF Faculty Early Career Development Program (CAREER) Award IIS-1643006 (transferred from IIS-1054541).

Author Contributions

H.C. and L.O. contributed equally to all aspects of this work, including the writing of the paper, model design and development, theoretical/mathematical analysis, algorithm design, conception, design, and execution of experiments, and evaluation and presentation of empirical results. M.C. helped to initiate work on the IDD model, to design the random generator of Internet Games using the DIMES AS-level Internet graph data, to implement the BRGD heuristics, and to run and collect preliminary data based on that heuristic. He did that as part of his Undergraduate Honor’s Project, under the guidance of L.O. Even though M.C. did not write this submission, there are several parts of his original project report still embedded within this paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AS: autonomous systems
BRGD: best-response gradient dynamics
CC: connected components
DP: dynamic programming (or program)
FPTAS: fully polynomial-time approximation scheme
IDD: interdependent defense (games)
IDS: interdependent security
IG: Internet game
MSE: mean-squared error
MSNE: mixed-strategy Nash equilibrium (or equilibria)
PSNE: pure-strategy Nash equilibrium (or equilibria)
SBRD: smooth best-response dynamics

Appendix A

For the reader’s convenience, Table A1 provides a summary of the most relevant notation used throughout the Appendices.
Table A1. Notation Legend.
Symbol: Semantics
$n$: number of sites
$[n]$: $\{1, 2, \ldots, n\}$
$a_i$: action (pure strategy) of site $i$: $a_i = 1$ (“invest”) or $a_i = 0$ (“not invest”)
$\mathbf{a}$: joint action (pure strategy) of all sites: $(a_i)_{i \in [n]}$
$\mathbf{b}$: pure strategy of the attacker
$b_i$: component of the attacker's pure strategy corresponding to site $i$ (i.e., $\mathbf{b} \equiv (b_i)_{i \in [n]}$): $b_i = 1$ if the attacker directly targets site $i$; $b_i = 0$ otherwise
$\mathcal{B}$: $\{\mathbf{b} \in \{0,1\}^n \mid \sum_{i=1}^n b_i \leq 1\}$ (Assumption 3)
$C_i$: cost to site $i$ of investing in security
$\mathbf{C}$: $(C_i)_{i \in [n]}$ (Definition 1)
$C_i^0$: cost to the attacker of directly targeting site $i$
$L_i$: loss to site $i$ should it experience the “bad event”
$\mathbf{L}$: $(L_i)_{i \in [n]}$ (Definition 1)
$\alpha_i$: probability that a transfer of the “bad event” will not be caught, given that site $i$ invests in security (i.e., $a_i = 1$)
$\boldsymbol{\alpha}$: $(\alpha_i)_{i \in [n]}$
$\hat{p}_i$: conditional probability that site $i$ experiences the “bad event,” given that site $i$ was a direct target (Equation (8))
$\hat{\mathbf{p}}$: $(\hat{p}_i)_{i \in [n]}$ (Definition 5)
$\hat{q}_{ij}$: conditional probability that site $j$ experiences the “bad event” as a result of a transfer from site $i$ (Equation (9))
$\hat{Q}$: matrix composed of the $\hat{q}_{ij}$'s (Definition 5)
$\hat{\Delta}_i$: ratio of cost to conditional expected loss (Equation (13)): $\frac{C_i}{\hat{p}_i L_i}$
$G$: directed network graph of sites: $([n], E)$
$\mathrm{Pa}(i)$: set of sites that are parents of site $i$ in $G$
$\mathrm{PF}(i)$: site $i$'s parent family: $\mathrm{Pa}(i) \cup \{i\}$
$k_i$: $|\mathrm{PF}(i)|$
$k_{\max}$: $\max_{i \in [n]} k_i$
$\mathrm{Ch}(i)$: set of sites that are children of site $i$ in $G$
$\mathrm{CF}(i)$: site $i$'s children family: $\mathrm{Ch}(i) \cup \{i\}$
$e_{ij}(a_j, b_j)$: probability that site $i$ is safe from $j$ (Equation (10)): $a_j + (1 - a_j)(1 - b_j \hat{q}_{ji})$
$s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)})$: external overall safety of site $i$ (Equation (11)): $\prod_{j \in \mathrm{Pa}(i)} e_{ij}(a_j, b_j)$
$r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)})$: external overall risk of site $i$ (Equation (11)): $1 - s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)})$
$M_i(\mathbf{a}, \mathbf{b})$: cost function of site $i$ (Equation (12)): $M_i(\mathbf{a}_{\mathrm{PF}(i)}, \mathbf{b}_{\mathrm{PF}(i)})$
$U(\mathbf{a}, \mathbf{b})$: payoff function of the attacker (Equation (16))
$x_i$: site $i$'s individual mixed strategy: probability of investing (i.e., probability assigned to $a_i = 1$)
$\mathbf{x}$: joint mixed strategy of all sites: $(x_i)_{i \in [n]}$
$P$: mixed strategy of the attacker
$y_i$: probability that the attacker directly targets site $i$: $P(b_i = 1)$
$y_0$: probability of no attack: $P(\mathbf{b} = \mathbf{0})$; under Assumption 3, $y_0 = 1 - \sum_{i=1}^n y_i$
$\mathbf{y}$: compact representation of the attacker's mixed strategy under Assumption 3: $(y_i)_{i \in [n]}$, where $y_i = P(b_i = 1) = P(b_i = 1, \mathbf{b}_{-i} = \mathbf{0})$ and $\sum_{\mathbf{b}} P(\mathbf{b}) = \sum_{i=0}^n y_i = 1$
$L_i^0(x_i)$: $(1 - x_i)\left(\hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} L_j\right)$
$\bar{L}_i^0$: $L_i^0(0) = \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij} L_j$ (Equation (41))
$M_i^0(x_i)$: $L_i^0(x_i) - C_i^0$
$\bar{M}_i^0$: $M_i^0(0) = \bar{L}_i^0 - C_i^0$ (Equation (42))
$\eta_i^0$: $C_i^0 / \bar{L}_i^0$ (Equation (43))
$U(\mathbf{x}, \mathbf{y})$: expected payoff of the attacker under Assumption 3 (Equation (44)): $\sum_{i=1}^n y_i M_i^0(x_i)$
$r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{y}_{\mathrm{Pa}(i)})$: external overall risk of site $i$ when the sites in $\mathrm{Pa}(i)$ use joint action $\mathbf{a}_{\mathrm{Pa}(i)}$ (Equation (34)): $\sum_{j \in \mathrm{Pa}(i)} y_j (1 - a_j) \hat{q}_{ji}$

Appendix B. Proofs Missing from the Main Body of the Article

Here, we present all the proofs that we moved out of the main body of the article in order to maintain a fluid presentation.

Appendix B.1. Proof of Lemma 1

By the definition of single-attack IDD games (Definition 7), any attacker pure strategy $\mathbf{b}$ in $\mathcal{B}$, as defined in Assumption 3, is either the all-zeros vector or has exactly one component equal to 1. Because $e_{ij}(a_j, 0) = 1$ (Equation (10)), by the definition of $s_i$ (Equation (11)), we have
$$s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) = \prod_{j \in \mathrm{Pa}(i)} e_{ij}(a_j, b_j) = \prod_{j \in \mathrm{Pa}(i)} \left[e_{ij}(a_j, 1)\right]^{b_j} = \begin{cases} \sum_{j \in \mathrm{Pa}(i)} b_j\, e_{ij}(a_j, 1), & \text{if } b_k = 1 \text{ for some } k \in \mathrm{Pa}(i),\\ 1, & \text{if } b_k = 0 \text{ for all } k \in \mathrm{Pa}(i), \end{cases} \;=\; 1 - \sum_{j \in \mathrm{Pa}(i)} b_j (1 - a_j)\hat{q}_{ji},$$
so that
$$r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) = \sum_{j \in \mathrm{Pa}(i)} b_j (1 - a_j)\hat{q}_{ji},$$
and, because $b_i = 1$ implies $b_j = 0$ for all $j \in \mathrm{Pa}(i)$ (so that $s_i = 1$ in that case),
$$b_i\, s_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) = b_i.$$

Appendix B.2. Proof of Proposition 1

From Lemma 1 and the definition of $U$ (Equation (16)), we obtain
$$\begin{aligned} U(\mathbf{a}, \mathbf{b}) &= \sum_{i=1}^{n} \left[ M_i(\mathbf{a}, \mathbf{b}) - a_i C_i - b_i C_i^0 \right] \\ &= \sum_{i=1}^{n} \left[ \left( a_i \alpha_i\, r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) + (1 - a_i)\left( b_i \hat{p}_i + (1 - b_i \hat{p}_i)\, r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)}) \right) \right) L_i - b_i C_i^0 \right] \\ &= \sum_{i=1}^{n} \left[ b_i (1 - a_i) \hat{p}_i L_i + \left( a_i \alpha_i + (1 - a_i)(1 - b_i \hat{p}_i) \right) r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)})\, L_i - b_i C_i^0 \right]. \end{aligned}$$
After distributing the sum over each term, for the second term inside the sum, by the expression for the $r_i$'s (Equation (18)), we have
$$\begin{aligned} \sum_{i=1}^{n} &\left( a_i \alpha_i + (1 - a_i)(1 - b_i \hat{p}_i) \right) r_i(\mathbf{a}_{\mathrm{Pa}(i)}, \mathbf{b}_{\mathrm{Pa}(i)})\, L_i \\ &= \sum_{i=1}^{n} \left( a_i \alpha_i + (1 - a_i)(1 - b_i \hat{p}_i) \right) \sum_{j \in \mathrm{Pa}(i)} b_j (1 - a_j) \hat{q}_{ji}\, L_i \\ &= \sum_{i=1}^{n} \sum_{j=1}^{n} \mathbf{1}[\, j \in \mathrm{Pa}(i)\, ] \left( a_i \alpha_i + (1 - a_i)(1 - b_i \hat{p}_i) \right) b_j (1 - a_j) \hat{q}_{ji}\, L_i \\ &= \sum_{j=1}^{n} \sum_{i=1}^{n} \mathbf{1}[\, i \in \mathrm{Ch}(j)\, ] \left( a_i \alpha_i + (1 - a_i)(1 - b_i \hat{p}_i) \right) b_j (1 - a_j) \hat{q}_{ji}\, L_i \\ &= \sum_{j=1}^{n} \sum_{i \in \mathrm{Ch}(j)} \left( b_j a_i \alpha_i + (1 - a_i)(b_j - b_j b_i \hat{p}_i) \right) (1 - a_j) \hat{q}_{ji}\, L_i \\ &= \sum_{j=1}^{n} \sum_{i \in \mathrm{Ch}(j)} \left( b_j a_i \alpha_i + (1 - a_i)(b_j - 0) \right) (1 - a_j) \hat{q}_{ji}\, L_i \\ &= \sum_{j=1}^{n} b_j (1 - a_j) \sum_{i \in \mathrm{Ch}(j)} \left( a_i \alpha_i + (1 - a_i) \right) \hat{q}_{ji}\, L_i \\ &\equiv \sum_{i=1}^{n} b_i (1 - a_i) \sum_{j \in \mathrm{Ch}(i)} \left( a_j \alpha_j + (1 - a_j) \right) \hat{q}_{ij}\, L_j, \end{aligned}$$
where we used the fact that, under Assumption 3, $b_i b_j = 0$ whenever $i \neq j$, and the last step simply relabels the indices.
The result follows by simple substitutions and some algebra.
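To make the “simple substitutions and some algebra” concrete, the following script numerically verifies the identity above on a small random instance with invented parameters (no self-loops, all joint pure strategies of the sites, and all single-attack vectors $\mathbf{b}$ allowed by Assumption 3):

```python
import itertools, random

def transfer_risk(i, a, b, parents, q_hat):
    return sum(b[j] * (1 - a[j]) * q_hat[(j, i)] for j in parents[i])

def payoff_via_risks(a, b, parents, p_hat, q_hat, alpha, L, C0):
    # the attacker's payoff written in terms of the sites' risks (second line of the chain above)
    total = 0.0
    for i in range(len(a)):
        r = transfer_risk(i, a, b, parents, q_hat)
        total += (a[i] * alpha[i] * r
                  + (1 - a[i]) * (b[i] * p_hat[i] + (1 - b[i] * p_hat[i]) * r)) * L[i]
        total -= b[i] * C0[i]
    return total

def payoff_closed_form(a, b, children, p_hat, q_hat, alpha, L, C0):
    # the closed form obtained after the manipulation above
    return sum(b[i] * ((1 - a[i]) * (p_hat[i] * L[i]
               + sum(q_hat[(i, j)] * (a[j] * alpha[j] + (1 - a[j])) * L[j]
                     for j in children[i])) - C0[i]) for i in range(len(a)))

random.seed(0)
n = 5
edges = [(i, j) for i in range(n) for j in range(n) if i != j and random.random() < 0.4]
parents = {i: [s for s, t in edges if t == i] for i in range(n)}
children = {i: [t for s, t in edges if s == i] for i in range(n)}
p_hat = [random.random() for _ in range(n)]
q_hat = {e: random.random() for e in edges}
alpha = [random.random() for _ in range(n)]
L = [random.uniform(1, 10) for _ in range(n)]
C0 = [random.uniform(0, 1) for _ in range(n)]
single_attacks = [(0,) * n] + [tuple(int(t == i) for t in range(n)) for i in range(n)]
for a in itertools.product([0, 1], repeat=n):
    for b in single_attacks:
        lhs = payoff_via_risks(a, b, parents, p_hat, q_hat, alpha, L, C0)
        rhs = payoff_closed_form(a, b, children, p_hat, q_hat, alpha, L, C0)
        assert abs(lhs - rhs) < 1e-9
```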

Appendix B.3. Proof of Proposition 2

If the single-attack IDD game has a PSNE $(\mathbf{a}^*, \mathbf{b}^*)$, then, by Proposition 1, we can express the attacker's payoff in it as
$$U(\mathbf{a}^*, \mathbf{b}^*) = \max_{i \in [n]} \left[ (1 - a_i^*)\left( \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij}\left( a_j^* \alpha_j + (1 - a_j^*) \right) L_j \right) - C_i^0 \right]_+,$$
where, for any real number $z \in \mathbb{R}$, the operator $[z]_+ \equiv \max(z, 0)$. In addition, if $b_l^* = 1$ for some $l \in [n]$, then
$$(1 - a_l^*)\left( \hat{p}_l L_l + \sum_{j \in \mathrm{Ch}(l)} \hat{q}_{lj}\left( a_j^* \alpha_j + (1 - a_j^*) \right) L_j \right) - C_l^0 \geq \max_{i \in [n]} \left[ (1 - a_i^*)\left( \hat{p}_i L_i + \sum_{j \in \mathrm{Ch}(i)} \hat{q}_{ij}\left( a_j^* \alpha_j + (1 - a_j^*) \right) L_j \right) - C_i^0 \right]_+. \quad (46)$$
The proof of the proposition is by contradiction. Consider an IDD game that satisfies the conditions of the proposition. Let ( a * , b * ) be a PSNE of the game. We need to consider two cases at the PSNE: (1) there is some attack and (2) there is no attack.
  • If there is some attack, then $b_l^* = 1$ for some site $l \in [n]$ and $b_i^* = 0$ for all $i \neq l$. In addition, because $\mathbf{b}^*$ is consistent with the aggressor's best response to $\mathbf{a}^*$, we have, using the condition given in Equation (46) above,
    $$(1 - a_l^*)\left( \hat{p}_l L_l + \sum_{j \in \mathrm{Ch}(l)} \hat{q}_{lj}\left( a_j^* \alpha_j + (1 - a_j^*) \right) L_j \right) - C_l^0 > 0. \quad (47)$$
    The last condition (Equation (47)) and Assumption 2 imply $a_l^* = 0$. Hence, by the best-response condition of site $l$, we have
    $$C_l + \alpha_l\, r_l(\mathbf{a}_{\mathrm{Pa}(l)}^*, \mathbf{b}_{\mathrm{Pa}(l)}^*)\, L_l \geq \hat{p}_l L_l + (1 - \hat{p}_l)\, r_l(\mathbf{a}_{\mathrm{Pa}(l)}^*, \mathbf{b}_{\mathrm{Pa}(l)}^*)\, L_l.$$
    Because the attack occurs at $l$, the transfer risk $r_l(\mathbf{a}_{\mathrm{Pa}(l)}^*, \mathbf{b}_{\mathrm{Pa}(l)}^*) = r_l(\mathbf{a}_{\mathrm{Pa}(l)}^*, \mathbf{0}) = 0$ at the PSNE. Therefore, the last condition simplifies to
    $$C_l \geq \hat{p}_l L_l,$$
    which contradicts Assumption 1.
  • If there is no attack, then $\mathbf{b}^* = \mathbf{0}$. In this case, the sites' best-response conditions imply $\mathbf{a}^* = \mathbf{0}$. From the attacker's best-response condition, we obtain
    $$\hat{p}_l L_l + \sum_{j \in \mathrm{Ch}(l)} \hat{q}_{lj} L_j \leq C_l^0,$$
    which contradicts Assumption 2.

Appendix B.4. Proof of Proposition 3

Let $H(P) \equiv \sum_{\mathbf{b} \in \{0,1\}^n} P(\mathbf{b}) \ln \frac{1}{P(\mathbf{b})}$ be Shannon's entropy function, corresponding to a set of Bernoulli random variables with joint PMF $P$ [55]. Consider the maximum-entropy (MaxEnt) distribution [46] resulting from the following optimization with respect to an MSNE $(\mathbf{x}^*, P^*)$ of an arbitrary IDD game:
$$\arg\max_{P \in \mathcal{P}}\; H(P)$$
such that, for all $i \in [n]$ and all $\mathbf{b}_{\mathrm{PF}(i)} \in \{0,1\}^{k_i}$ with $P^*_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)}) > 0$,
$$P_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)}) = P^*_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)}).$$
Note that the optimization problem is feasible because $P^* \in \mathcal{P}$. A standard duality argument [56] shows that we can express the solution to the optimization problem as
$$P_{\lambda^*}(\mathbf{b}) \propto \exp\left( \sum_{i=1}^{n} \lambda^*_{i, \mathbf{b}_{\mathrm{PF}(i)}} \right) = \prod_{i=1}^{n} \exp\left( \lambda^*_{i, \mathbf{b}_{\mathrm{PF}(i)}} \right),$$
with one component of $\lambda^*$ for each $i \in [n]$ and each $\mathbf{b}_{\mathrm{PF}(i)} \in \{0,1\}^{k_i}$ such that $P^*_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)}) > 0$, for some vector of real-valued parameters $\lambda^*$ corresponding to the dual variables (also known as the Lagrange multipliers). We can now define the PMF $\tilde{P}$ over $\{0,1\}^n$ as
$$\tilde{P}(\mathbf{b}) \propto \prod_{i=1}^{n} \mathbf{1}\!\left[ P^*_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)}) > 0 \right] \exp\left( \lambda^*_{i, \mathbf{b}_{\mathrm{PF}(i)}} \right).$$
The optimization constraints, and the fact that $P^*(\mathbf{b}) = 0$ if and only if $\mathbf{b}$ is such that $P^*_{\mathrm{PF}(i)}(\mathbf{b}_{\mathrm{PF}(i)}) = 0$ for some $i$, yield Parts 1 and 2 of the proposition. To see that Part 3 also holds, note that, by the structure of the sites' costs, and thus the attacker's payoff, their expected values depend only on the joint marginal PMFs $P_{\mathrm{PF}(i)}$ for all $i$. Thus, Part 3 actually follows immediately from Part 2.

Appendix B.5. Proof of Lemma 2

Equation (33) follows from Lemma 1 (Equation (18)), and linearity of expectation. Equation (34) follows from Equation (33) and linearity of expectation. Equation (35) follows from Equation (34) and linearity of expectation. Equation (36) follows from Lemma 1 (Equation (19)) and the definition of expected value. Equation (37) follows from Equations (34) and (36).

Appendix B.6. Proof of Proposition 4

Equation (38) follows from Lemma 2 and from applying Equations (34) and (36) to Equation (26). Equation (40) follows from Lemma 1 (Equation (21)), linearity of expectation, the definition of a joint mixed strategy (i.e., a product distribution), and the fact that each term in the sum is the product of a linear function of $a_i$ and either a constant or a linear function of $a_j$ for $j \in \mathrm{Ch}(i)$. Equation (39) follows from Equation (40) and linearity of expectation.

Appendix B.7. Proof of Proposition 5

Throughout this proof, by the hypothesis of the proposition, we assume we are dealing with fully transfer-vulnerable single-attack IDD games. We also use the same notation as that introduced before the statement of the proposition in the main body of the manuscript. The same holds for the proof of Claim 1 in the next subsection of this appendix (Appendix B.8). We remind the reader that Table A1 provides a summary of the most relevant notation used throughout the Appendices.
We begin the proof by noting that Proposition 4 implies that the best response $\mathrm{BR}_i$ of defender $i$ directly depends only on $y_i$. Said differently, $\mathrm{BR}_i$ is conditionally independent of the mixed strategies $\mathbf{x}_{\mathrm{Pa}(i)}$ of the parent nodes $\mathrm{Pa}(i)$ of defender node $i$ in the network, given the probability $y_i$ that the attacker's mixed strategy assigns to a direct attack on $i$. Thus, in what follows, we abuse notation and define
$$\mathrm{BR}_i(y_i) \equiv \mathrm{BR}_i(\mathbf{x}_{\mathrm{Pa}(i)}, P_{\mathrm{PF}(i)}) = \begin{cases} \{1\}, & \text{if } y_i > \hat{\Delta}_i,\\ \{0\}, & \text{if } y_i < \hat{\Delta}_i,\\ [0,1], & \text{if } y_i = \hat{\Delta}_i. \end{cases}$$
Next, we prove some useful properties of the MSNE. 16
Claim 2.
In every MSNE $(\mathbf{x}, \mathbf{y})$, for all $i \in [n]$, if the probability of a direct attack on defender $i$ is $y_i = 0$, then the probability of investment of defender $i$ is $x_i = 0$. In addition, if $y_i = 0$ for some $i \in [n]$, then the probability of no attack is $y_0 = 0$.
Proof. 
By $\mathrm{BR}_i$, $y_i = 0 < \hat{\Delta}_i$ implies $x_i = 0$. For the second part, if $y_i = 0$ for some defender $i \in [n]$, then, by $\mathrm{BR}_0$, we have
$$\max_t M_t^0(x_t) \geq M_i^0(x_i) = \bar{M}_i^0 > 0,$$
where the equality uses $x_i = 0$ from the first part, and thus $y_0 = 0$. ☐
Proposition 6.
In every MSNE ( x , y ) , an attack is always possible: y 0 < 1 .
Proof. 
The proof is by contradiction. Let $(\mathbf{x}, \mathbf{y})$ be an MSNE. Suppose there is no attack: $y_0 = 1$. Then, $\sum_{i=1}^n y_i = 1 - y_0 = 0$, so that $y_i = 0$ for all $i \in [n]$. Because $y_i = 0$ for some $i \in [n]$, Claim 2 yields $y_0 = 0$, a contradiction. ☐
Lemma 3.
In every MSNE $(\mathbf{x}, \mathbf{y})$, the probability $y_i$ of a direct attack on defender $i$ is no larger than $\hat{\Delta}_i < 1$.
Proof. 
The proof is by contradiction. Suppose there is some MSNE in which $y_i > \hat{\Delta}_i$ for some $i \in [n]$. Then, $x_i = 1$ and, in turn, $M_i^0(1) = -C_i^0 < 0$. Because the attacker can always achieve expected payoff 0 by not attacking anyone, the last condition implies $y_i = 0$, a contradiction. ☐
Claim 3.
Let $\mathbf{y}$ be the compact representation of the mixed strategy of the attacker in some MSNE. If the probability of no attack is $y_0 > 0$, then the probability of a direct attack on defender $i$ equals defender $i$'s cost-to-conditional expected-loss: $y_i = \hat{\Delta}_i$ for all $i \in [n]$.
Proof. 
The proof is by contradiction. By Lemma 3, $y_i \leq \hat{\Delta}_i$ for all $i \in [n]$. Suppose $y_i < \hat{\Delta}_i$ for some $i$. Then, by $\mathrm{BR}_i$, we have $x_i = 0$, and, by $\mathrm{BR}_0$, we have $0 \geq \bar{M}_i^0 > 0$, a contradiction. ☐
Lemma 4.
In every MSNE $(\mathbf{x}, \mathbf{y})$ of an IDD game in which the total cost-to-conditional expected-loss of all defenders satisfies $\sum_{i=1}^n \hat{\Delta}_i < 1$, there may not be an attack: $y_0 > 0$.
Proof. 
By Lemma 3, $y_i \leq \hat{\Delta}_i$ for all $i \in [n]$. Using the last statement, note that
$$1 - y_0 = \sum_{i=1}^{n} y_i \leq \sum_{i=1}^{n} \hat{\Delta}_i < 1,$$
from which the lemma immediately follows. ☐
As stated in the main text, we partition the class of IDD games into three subclasses, based on whether $\sum_{i=1}^n \hat{\Delta}_i$ is (1) less than, (2) equal to, or (3) greater than 1. We consider each subclass in turn.
Proposition 7.
The joint mixed strategy $(\mathbf{x}, \mathbf{y})$ is an MSNE of an IDD game in which the total cost-to-conditional expected-loss of all defenders satisfies $\sum_{i=1}^n \hat{\Delta}_i < 1$ if and only if it satisfies the following properties.
1. 
There is a positive probability of no attack, equal to one minus the total cost-to-conditional expected-loss of all defenders: 1 > y_0 = 1 − ∑_{i=1}^n Δ̂_i > 0.
2. 
Every defender has a non-zero chance of being attacked directly, and this probability equals the respective defender's cost-to-conditional expected-loss ratio: for all defenders i ∈ [n], y_i = Δ̂_i > 0.
3. 
Every defender invests with some probability, but none invests fully; in particular, the probability that a defender does not invest equals the attacker's respective cost-to-expected-loss ratio: for all defenders i ∈ [n], 0 < x_i = 1 − η_i^0 < 1.
Proof. 
Suppose the joint mixed-strategy ( x , y ) satisfies the properties above. Then, every defender is indifferent (i.e., for all i [ n ] , BR i ( y i ) = [ 0 , 1 ] , because y i = Δ ^ i ), as is also the attacker (i.e., BR 0 ( x ) equals the set of all probability distributions over n + 1 events because M i 0 ( x i ) = 0 for all i [ n ] ). Hence, ( x , y ) is an MSNE.
Now suppose ( x , y ) is an MSNE of the game. By Lemma 4, y 0 > 0 . Hence, for all i [ n ] , we have y i = Δ ^ i > 0 by Claim 3. Both of the previous sentences together imply M i 0 ( x i ) = 0 for all i [ n ] , because of BR 0 . Simple algebra yields that x i = 1 η i 0 . Finally, because y 0 + i = 1 n y i = 1 , we have y 0 = 1 i = 1 n Δ ^ i . ☐
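As an illustration only, the closed form derived in the proof above can be written as a short Python sketch; the names delta_hat and eta are hypothetical placeholders for the vectors (Δ̂_i)_i and (η_i^0)_i = (C_i^0/L̄_i^0)_i used in the proof.

```python
def msne_total_below_one(delta_hat, eta):
    """Sketch of the unique MSNE of Proposition 7, assuming sum(delta_hat) < 1
    and 0 < eta[i] < 1 for every defender i."""
    assert sum(delta_hat) < 1.0
    x = [1.0 - eta_i for eta_i in eta]      # each defender invests with probability 1 - eta_i
    y = list(delta_hat)                     # direct-attack probabilities y_i = Delta-hat_i
    y0 = 1.0 - sum(y)                       # probability of no attack
    return x, y0, y
```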
Proposition 8.
The joint mixed-strategy ( x , y ) is an MSNE of an IDD game in which i = 1 n Δ ^ i = 1 if and only if it satisfies the following properties.
1. 
There is always an attack: y 0 = 0 .
2. 
Every defender has a non-zero chance of being attacked directly, and this probability equals the respective defender's cost-to-conditional expected-loss ratio: for all defenders i ∈ [n], y_i = Δ̂_i > 0.
3. 
No defender invests fully, and the possible investment probabilities lie on a one-dimensional line segment in R^n:
x_i = 1 − (v + C_i^0)/L̄_i^0 for all i ∈ [n],
with 0 ≤ v ≤ min_{i∈[n]} M̄_i^0.
Proof. 
Suppose the joint mixed-strategy (x, y) satisfies the properties above. Then, every defender is indifferent: for all i ∈ [n], BR_i(y_i) = [0,1], because y_i = Δ̂_i. To verify that y ∈ BR_0(x), note that 0 ≤ (1 − x_i)L̄_i^0 − C_i^0 = M_i^0(x_i) = max_{t∈[n]} M_t^0(x_t) for all i ∈ [n], and
∑_{i=1}^n y_i M_i^0(x_i) = ∑_{i=1}^n y_i max_{t∈[n]} M_t^0(x_t) = (∑_{i=1}^n y_i) max_{t∈[n]} M_t^0(x_t) = max_{t∈[n]} M_t^0(x_t),
because ∑_{i=1}^n y_i = 1 (recall y_0 = 0). Hence, y ∈ BR_0(x), and (x, y) is an MSNE.
Now let the joint mixed-strategy (x, y) be an MSNE of the game. Let I ≡ I(y) ≡ {i ∈ [n] : y_i > 0}. Note that y_k = 0 for all k ∉ I. We first prove the following lemma.
Lemma 5.
I = [ n ] .
Proof. 
The proof is by contradiction. Suppose I ≠ [n]. By Proposition 6, y_0 < 1 = y_0 + ∑_{i=1}^n y_i, so that y_i > 0 for some i ∈ [n], and therefore I ≠ ∅. Also, there exists some k ∈ [n] ∖ I for which y_k = 0. By Claim 2, we then have x_k = 0 for all k ∉ I. By BR_0 and Assumption 2, for all i, t ∈ I and k ∉ I,
M_i^0(x_i) = M_t^0(x_t) ≥ M̄_k^0.
The condition above yields the following upper bound on the mixed strategies of the defenders in I, after applying simple algebraic manipulations: for all i ∈ I and k ∉ I,
x_i ≤ 1 − (M̄_k^0 + C_i^0)/L̄_i^0 < 1.
By BR_i, this implies that y_i ≤ Δ̂_i for all i ∈ I. Putting all of the above together, and noting that y_0 = 0 (by the second part of Claim 2, because y_k = 0 for k ∉ I), we have
1 = ∑_{i=0}^n y_i = ∑_{i=1}^n y_i = ∑_{i∈I} y_i ≤ ∑_{i∈I} Δ̂_i ≤ ∑_{i=1}^n Δ̂_i = 1.
Now, because I ≠ [n] (by the hypothesis assumed to obtain a contradiction), we have ∑_{k∉I} Δ̂_k > 0, and
∑_{i∈I} y_i = ∑_{i=1}^n Δ̂_i = ∑_{i∈I} Δ̂_i + ∑_{k∉I} Δ̂_k > ∑_{i∈I} Δ̂_i ≥ ∑_{i∈I} y_i,
a contradiction. ☐
By the last lemma and BR_0, we have
(1 − x_1)L̄_1^0 − C_1^0 = ⋯ = (1 − x_n)L̄_n^0 − C_n^0.
Let v ≡ (1 − x_1)L̄_1^0 − C_1^0. Then, 1 − x_i = (v + C_i^0)/L̄_i^0 > 0. If v > 0, then y_0 = 0. Because x_i < 1, we have y_i ≤ Δ̂_i for all i ∈ [n]. Thus, we have y_i = Δ̂_i for all i ∈ [n], because otherwise, if y_t < Δ̂_t for some t ∈ [n], then 1 = y_0 + y_t + ∑_{i=1, i≠t}^n y_i < ∑_{i=1}^n Δ̂_i = 1, a contradiction. If, instead, v = 0, then for all i we have x_i = 1 − η_i^0 > 0, which implies y_i = Δ̂_i. Therefore, y_0 = 1 − ∑_{i=1}^n y_i = 1 − ∑_{i=1}^n Δ̂_i = 0. ☐
Lemma 6.
In every MSNE ( x , y ) of an IDD game in which i = 1 n Δ ^ i > 1 , the probability of no attack y 0 = 0 .
Proof. 
The proof is by contradiction. Suppose y_0 > 0. Then, by Claim 3, we have y_i = Δ̂_i for all i ∈ [n], and 1 = ∑_{i=0}^n y_i ≥ ∑_{i=1}^n y_i = ∑_{i=1}^n Δ̂_i > 1, a contradiction. ☐
Proposition 9.
In every MSNE ( x , y ) of an IDD game, the probability of no attack y 0 > 0 if and only if the game has the property i = 1 n Δ ^ i < 1 .
Proof. 
The “if” part is Lemma 4. For the “only if” part, the case in which i = 1 n Δ ^ i = 1 follows from Proposition 8; the case in which i = 1 n Δ ^ i > 1 follows from Lemma 6. ☐
Proposition 10.
In every MSNE ( x , y ) of an IDD game in which i = 1 n Δ ^ i > 1 , no defender is fully investing and some defender is not investing at all (i.e., x i = 0 for some i [ n ] ).
Proof. 
The proof is by contradiction. Proposition 9 yields y_0 = 0. Suppose x_i = 1 for some i ∈ [n]. Then, by BR_i, y_i ≥ Δ̂_i > 0, and M_i^0(x_i) = M_i^0(1) = −C_i^0 < 0. Because the attacker can always achieve expected payoff 0 by not attacking anyone, this implies y_i = 0, a contradiction.
Now suppose 0 < x i < 1 for all i [ n ] . Then, by BR i , we have y i = Δ ^ i for all i [ n ] . Thus we have 1 = i = 1 n y i = i = 1 n Δ ^ i > 1 , a contradiction. ☐
Proposition 11.
The joint mixed-strategy ( x , y ) is an MSNE of an IDD game in which i = 1 n Δ ^ i > 1 if and only if it satisfies the following properties.
1. 
There is always an attack: y 0 = 0 .
2. 
There exists a non-singleton, non-empty subset I ⊆ [n], such that min_{i∈I} M̄_i^0 ≥ max_{k∉I} M̄_k^0 if I ≠ [n], and the following holds.
(a) 
No defender outside I invests or is attacked directly: x_k = 0 and y_k = 0 for all k ∉ I.
(b) 
Let J ≡ arg min_{i∈I} M̄_i^0. No defender in J invests, and the probability of such a defender being attacked directly is at most the defender's cost-to-expected-loss ratio: for all i ∈ J, x_i = 0 and 0 ≤ y_i ≤ Δ̂_i; in addition, ∑_{i∈J} y_i = 1 − ∑_{i∈I∖J} Δ̂_i.
(c) 
Every defender in I ∖ J partially invests and has a positive probability of being attacked directly equal to the defender's cost-to-expected-loss ratio: for all i ∈ I ∖ J, y_i = Δ̂_i and
0 < x_i = 1 − (min_{t∈I} M̄_t^0 + C_i^0)/L̄_i^0 < 1.
Proof. 
For the "if" part, we need to show that (x, y) forms mutual best-responses. For all k ∉ I, x_k = 0 ∈ BR_k(y) because y_k = 0 < Δ̂_k. For all j ∈ J, x_j = 0 ∈ BR_j(y) because y_j ≤ Δ̂_j. Finally, for all i ∈ I ∖ J, x_i ∈ BR_i(y_i) = [0,1] because y_i = Δ̂_i. Hence, we have x_i ∈ BR_i(y_i) for all i ∈ [n]. For the attacker, let v ≡ v(I) ≡ min_{i∈I} M̄_i^0. We have, for all k ∉ I, M_k^0(x_k) = M̄_k^0 ≤ max_{l∉I} M̄_l^0 ≤ min_{i∈I} M̄_i^0 = v, where the first equality holds because x_k = 0 and the second inequality holds by the properties of I. We also have, for all j ∈ J, M_j^0(x_j) = M̄_j^0 = min_{i∈I} M̄_i^0 = v, where the first equality holds because x_j = 0 and the second follows from the definition of J. Finally, using simple algebra, we also have, for all i ∈ I ∖ J,
M_i^0(x_i) = (1 − x_i)L̄_i^0 − C_i^0 = ((min_{t∈I} M̄_t^0 + C_i^0)/L̄_i^0)·L̄_i^0 − C_i^0 = min_{t∈I} M̄_t^0 + C_i^0 − C_i^0 = min_{t∈I} M̄_t^0 = v.
Hence, we have M_i^0(x_i) ≤ v for all i ∈ [n]. The expected payoff of the attacker under the given joint mixed-strategy is
∑_{i=1}^n y_i M_i^0(x_i) = ∑_{j∈J} y_j M_j^0(x_j) + ∑_{i∈I∖J} y_i M_i^0(x_i) = ∑_{j∈J} y_j v + ∑_{i∈I∖J} y_i v = v (∑_{j∈J} y_j + ∑_{i∈I∖J} y_i) = v ∑_{i=1}^n y_i = v ≥ M_i^0(x_i)
for all i ∈ [n]. Hence, we also have y ∈ BR_0(x), and the joint mixed-strategy (x, y) is an MSNE.
We now consider the "only if" part of the proposition. Let (x, y) be an MSNE and let I ≡ I(y) ≡ {i ∈ [n] : y_i > 0} be the support of the aggressor's mixed strategy. We now show that I is a non-empty, non-singleton subset of [n].
Claim 4.
1 < |I| ≤ n.
Proof. 
From Proposition 6, we have I ≠ ∅. That I is not a singleton follows from Lemma 3: if I were a singleton {i}, then, because y_0 = 0 (Proposition 9), we would have y_i = 1 > Δ̂_i, contradicting Lemma 3. ☐
By Proposition 9, we have y_0 = 0. Applying Proposition 10, let t ∈ [n] be such that x_t = 0. Also by Proposition 10 and BR_0, the aggressor achieves a positive expected payoff: ∑_{i=1}^n y_i M_i^0(x_i) = max_{l∈[n]} M_l^0(x_l) ≥ M_t^0(x_t) = M̄_t^0 > 0. For all k ∉ I, because y_k = 0, Claim 2 implies x_k = 0.
By BR_0, if I is a strict, non-empty and non-singleton subset of [n], we have, for all i ∈ I and k ∉ I,
M̄_i^0 ≥ M_i^0(x_i) = max_{l∈I} M_l^0(x_l) ≥ M̄_k^0 > 0;
otherwise, if I = [n], we have, for all i ∈ [n],
M_i^0(x_i) = max_{l∈[n]} M_l^0(x_l) = M_t^0(x_t) = M̄_t^0 > 0.
Let v ≡ v(I) ≡ max_{l∈I} M_l^0(x_l). Then, the above expressions imply that, for all i ∈ I, we have
0 ≤ x_i = 1 − (v + C_i^0)/L̄_i^0 < 1.
In addition, if I is a strict, non-empty and non-singleton subset of [n], we have
v = M̄_t^0 ≥ min_{i∈I} M̄_i^0 ≥ v ≥ max_{k∉I} M̄_k^0;
and if, instead, I = [n], then
v = M̄_t^0 = min_{i∈[n]} M̄_i^0.
Hence, we have v = min_{i∈I} M̄_i^0.
Let J ≡ J(I) ≡ arg min_{i∈I} M̄_i^0. For all i ∈ J, we have M̄_i^0 = v, and thus
x_i = 1 − (v + C_i^0)/L̄_i^0 = 1 − (M̄_i^0 + C_i^0)/L̄_i^0 = 1 − (L̄_i^0 − C_i^0 + C_i^0)/L̄_i^0 = 0,
and by BR_i, we have 0 ≤ y_i ≤ Δ̂_i.
For all i ∈ I ∖ J, we have M̄_i^0 > v, and thus
0 = 1 − (M̄_i^0 + C_i^0)/L̄_i^0 < x_i = 1 − (v + C_i^0)/L̄_i^0 < 1,
and by BR i , we have y i = Δ ^ i .
Finally, we have ∑_{i∈J} y_i = 1 − ∑_{i∈I∖J} Δ̂_i, because y is a mixed strategy (i.e., a probability distribution). ☐
Proposition 5 stated in the main text follows by combining Propositions 7, 8 and 11.

Appendix B.8. Proof of Claim 1

To simplify the notation, let v min l I M ¯ l 0 = min l I M ¯ l 0 , J arg min l I M ¯ l 0 and J arg min i I M ¯ i 0 . The hypothesis implies that ( x , y ) satisfies the following properties.
for all i I : x i = y i = 0 for all i J : x i = 0 and 0 y i Δ ^ i ; also i J y i = 1 i I J Δ ^ i for all i I J : x i = 1 v + C i 0 L ¯ i 0 and y i = Δ ^ i
We now show that ( x , y ) also satisfies the constraints when using I with the properties stated in the claim. For that, it needs to satisfy the same expressions as above, but with I and J replaced by I and J, respectively.
The first condition is satisfied because I I . The second condition is satisfied for all i J I , because i I satisfies x i = 0 and 0 y i = 0 Δ ^ i . It is also satisfied for all i J I because i J implies M ¯ i 0 = v and, because i I , i J . For the third condition, note that I J I J because i I J implies the inequality M ¯ i 0 > v = max k I M ¯ k 0 ; hence, the first inequality in the last expression implies i J , while the equality implies i I .

Appendix B.9. Proof of Theorem 2

First, we construct a graph structure and set the values of the parameters to define the IDD game based on an NP-complete problem. Next, we show that if such a y exists, then the induced sites-game solves the NP-complete problem. Finally, we show that such a y exists.
We first define the notation that will be used in the proof. In particular, we consider the problem of determining whether there is a PSNE in the sites-game while fixing the strategies of some players. More specifically, we denote the set of instances with a PSNE as
sites-game = { ((n, G, C, L, p̂, Q̂, α), a_S ∈ {0,1}^{|S|}, y ∈ [0,1]^{n+1}) : there exists a PSNE in G with the players in S playing according to a_S and the attacker playing y such that ∑_{i=0}^n y_i = 1 }.
We will reduce our problem from Monotone 1-in-3-SAT, where each clause of the 3-SAT formula has exactly three (un-negated) variables. We use the term variable(s) by default for un-negated variable(s), unless stated otherwise. The goal in Monotone 1-in-3-SAT is to find a satisfying assignment such that exactly one variable is true in each clause. Monotone 1-in-3-SAT is known to be NP-complete [57]. We denote the set of instances with satisfying solutions as
Monotone 1-in-3-SAT = { ((v_i)_{i∈[m]}, ∧_{i=1}^c F_i, F_i = (∨_{j=1}^3 v_{i_j})) : there exists a satisfying assignment with exactly one variable true in each clause },
where there are m variables, c clauses, and each clause has three (un-negated) variables. A satisfying assignment is an assignment of each variable i to zero or one, v_i ∈ {0,1}, such that the boolean formula ∧_{i=1}^c F_i is satisfied (i.e., each clause F_i is satisfied and has exactly one true variable).
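As a small illustration of the decision problem we reduce from (under a hypothetical representation of clauses as triples of variable indices), the following Python snippet checks whether an assignment satisfies the exactly-one-true condition in every clause:

```python
def is_one_in_three(clauses, assignment):
    """clauses: list of 3-tuples of variable indices; assignment: dict mapping index to 0/1.
    Returns True iff every clause has exactly one true (un-negated) variable."""
    return all(sum(assignment[v] for v in clause) == 1 for clause in clauses)

# Example: (v1 or v2 or v3) and (v2 or v3 or v4) with v1 = v4 = 1 and v2 = v3 = 0.
print(is_one_in_three([(1, 2, 3), (2, 3, 4)], {1: 1, 2: 0, 3: 0, 4: 1}))  # True
```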
Below, given an instance of Monotone 1-in-3-SAT
γ ≡ ((v_i)_{i∈[m]}, ∧_{i=1}^c F_i, F_i = (∨_{j=1}^3 v_{i_j})),
we are going to construct an instance of the sites-game with partial assignments
β ≡ ((n, G, C, L, p̂, Q̂, α), a_S ∈ {0,1}^{|S|}, y ∈ [0,1]^{n+1} such that ∑_{i=0}^n y_i = 1)
that corresponds to γ.
  • There are n = 2 c + m players: two players for each clause and a player for each variable. The clause players and the variable players are indexed from 1 to 2 c and 2 c + 1 to 2 c + m , respectively.
  • First, we find 1 > L > C > 0 and 1 > p̂ > C/L such that 0 < C/(L p̂) < 1. Next, we find q̂ ∈ [0,1] such that 0 < q̂ < min{L p̂/(3C), 1}. For completeness, we find 1 > α > 0. For each variable player i ∈ {2c+1, …, 2c+m}, let C_i = C, α_i = α, L_i = L, p̂_i = p̂, and y_i = C_i/(L_i p̂_i).
    The variable players are then indifferent between playing the action "invest" and "not invest."
  • Next, using the values of the parameters defined above, we find 0 < C < L < 1, 1 > p̂ > C/L > 0, 0 < y < C/(L p̂), and 1 > α > 0 such that 3Cq̂/(L p̂) > (1/(1−α))·(C/L − y p̂) > 2Cq̂/(L p̂). Indeed, such values are always possible because we can make α and y arbitrarily small, so that (1/(1−α))·(C/L − y p̂) ≈ C/L.
    For each clause player i ∈ [c] such that F_i = (∨_{j=1}^3 v_{i_j}), set q_{(i_j+2c) i} = q̂ for all j. To set the remaining parameters, for each clause player i ∈ [c], set C_i = C, L_i = L, α_i = α, p̂_i = p̂, and y_i = y.
  • Then, using the same values of the parameters defined for the variable players, we find 0 < C < L < 1, 1 > p̂ > C/L > 0, 0 < y < C/(L p̂), and 1 > α > 0 such that 2Cq̂/(L p̂) > (1/(1−α))·(C/L − y p̂) > Cq̂/(L p̂).
    For each clause player i ∈ {c+1, …, 2c} such that F_{i−c} = ∨_{j=1}^3 v_{(i−c)_j}, set q_{((i−c)_j+2c) i} = q̂ for all j. To set the remaining parameters, for each clause player i ∈ {c+1, …, 2c}, set C_i = C, L_i = L, α_i = α, p̂_i = p̂, and y_i = y.
  • Here, we construct a partial action profile for some of the players. In particular, for each clause player i [ c ] , a i = 0 and a i + c = 1 . Thus, we are giving a partial action profile of all clause players. For completeness, let y 0 = 1 i = 1 n y i .
Lemma 7.
γ ∈ Monotone 1-in-3-SAT ⟹ β ∈ sites-game.
Proof. 
Given a satisfying assignment for γ, we show how to construct a PSNE for β. Let I(1) ≡ {i ∈ [m] : v_i = 1} be the indices of the variables that are assigned a value of one in the satisfying assignment. For consistency, we denote by a_i the action of any player i ∈ [n] and construct a PSNE as follows. For each variable player i ∈ {2c+1, …, 2c+m}, a_i = 1 if (i − 2c) ∈ I(1) and a_i = 0 otherwise. Together with the partial action profile of the clause players, we call this constructed pure-strategy profile a = (a_1, …, a_n).
To show that a is a PSNE, we argue that each player is playing its best-response. First, we consider the clause players. Recall that the best-response correspondence of a clause player i ∈ [c] is
BR_i(a_Pa(i), y_PF(i)) ≡ {1}, if ŝ_i(a_Pa(i), y_PF(i)) > Δ̂_i;  {0}, if ŝ_i(a_Pa(i), y_PF(i)) < Δ̂_i;  [0,1], if ŝ_i(a_Pa(i), y_PF(i)) = Δ̂_i,
where Δ̂_i ≡ C_i/(L_i p̂_i) and ŝ_i(a_Pa(i), y_PF(i)) ≡ y_i + ((1−α_i)/p̂_i)·r_i(a_Pa(i), y_Pa(i)). Notice that, to determine the best-response strategy of player i, without loss of generality, we can equivalently compare the values of (1/(1−α_i))·(C_i/L_i − y_i p̂_i) and r_i(a_Pa(i), y_Pa(i)). By our construction, Pa(i) = {i_1, i_2, i_3} (which corresponds to variables v_{i_1}, v_{i_2}, v_{i_3} of clause i) and r_i(a_Pa(i), y_Pa(i)) = ∑_{j∈Pa(i)} (C/(L p̂))·(1 − a_j)·q̂.
Moreover, by the satisfying assignment, exactly one variable in Pa(i) is assigned the value 1, which corresponds to exactly one variable player playing action 1. Therefore, r_i(a_Pa(i), y_Pa(i)) = 2Cq̂/(L p̂). By our construction, 3Cq̂/(L p̂) > (1/(1−α_i))·(C_i/L_i − y_i p̂_i) > 2Cq̂/(L p̂). It follows that r_i(a_Pa(i), y_Pa(i)) < (1/(1−α_i))·(C_i/L_i − y_i p̂_i), and i's best-response is zero. This holds for all clause players i ∈ [c]. On the other hand, for each clause player i ∈ {c+1, …, 2c}, r_i(a_Pa(i), y_Pa(i)) = 2Cq̂/(L p̂) as well. By our construction, 2Cq̂/(L p̂) > (1/(1−α_i))·(C_i/L_i − y_i p̂_i) > Cq̂/(L p̂); it follows that (1/(1−α_i))·(C_i/L_i − y_i p̂_i) < r_i(a_Pa(i), y_Pa(i)), and a_i = 1 is the best-response.
Each variable player i ∈ {2c+1, …, 2c+m} has no parent, so i's overall transfer risk is 0. To determine whether i plays the action invest or not invest, we only need to compare the values of C_i/L_i and y_i p̂_i. Because, by construction, C_i/(L_i p̂_i) = y_i for all variable players i, the variable players are indifferent between playing 0 and 1. Hence, the pure-strategy profile a is a PSNE. ☐
Lemma 8.
β ∈ sites-game ⟹ γ ∈ Monotone 1-in-3-SAT.
Proof. 
Now we show how to construct a satisfiable assignment for γ given a PSNE of β. Let a = ( a 1 , . . . , a n ) be a PSNE of β. For each variable i [ m ] , if a 2 c + i = 1 then v i = 1 and if a 2 c + i = 0 then v i = 0 . To show that each clause, say i [ c ] , has exactly one variable that is true, we observe the best-response of clause players i and c + i that correspond to clause i. Given the fixed action of a i = 0 and a c + i = 1 at a PSNE, it implies that r i ( a Pa ( i ) , y Pa ( i ) ) < 1 1 α i C i L i y i p ^ i and r c + i ( a Pa ( c + i ) , y Pa ( c + i ) ) > 1 1 α c + i C c + i L c + i y c + i p ^ c + i . Because 3 C q ^ L p ^ > 1 1 α i C i L i y i p ^ i > 2 C q ^ L p ^ , 2 C q ^ L p ^ > 1 1 α c + i C c + i L c + i y c + i p ^ c + i > C q ^ L p ^ , Pa ( c + i ) = Pa ( i ) , | Pa ( i ) | = 3 , and the transfer risks are the same, we have r c + i ( a Pa ( c + i ) , y Pa ( c + i ) ) = 2 C q ^ L p ^ . This implies that exactly one of the variables is true. ☐
It is easy to see that given a (partial) pure-strategy profile, we can verify whether it is a PSNE of a sites-game in polynomial time. This fact, together with Lemma 7 and Lemma 8, yields our hardness result.

Appendix C. Pseudocode for Exact Algorithm to Compute All MSNE in Single-Attack Fully-Transfer-Vulnerable IDD Games

Algorithm A1 provides pseudocode of the exact algorithm described in Section 3.4. It uses several sub-routines provided in Algorithms A2, A3, and A4, corresponding to the different cases of the characterization of all MSNE for this class of games as also stated in Section 3.4.
Algorithm A1: Compute All MSNE of a Fully Transfer-Vulnerable Single-Attack IDD Game.
Algorithm A2: Subroutine to Compute the Unique MSNE of a Fully Transfer-Vulnerable Single-Attack IDD Game with i = 1 n Δ ^ i < 1 .
Algorithm A3: Subroutine to Compute (a Simple Linear Representation of) All MSNE of a Fully Transfer-Vulnerable Single-Attack IDD Game with i = 1 n Δ ^ i = 1 .
Input: n, Δ̂, L̄^0, C^0
Output: The set NE of all MSNE for this case
foreach i = 1 to n do
 | y_i ← Δ̂_i
end
y_0 ← 0
X_0 ← { x ∈ [0,1]^n : 0 ≤ (1 − x_1)L̄_1^0 − C_1^0 = ⋯ = (1 − x_n)L̄_n^0 − C_n^0 }
NE ← X_0 × {y}
return NE
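The linear representation returned by Algorithm A3 can also be described pointwise using the parametrization of Proposition 8. The following Python sketch (with hypothetical argument names; here M0bar_i = L̄_i^0 − C_i^0) returns, for a chosen parameter v, one MSNE on the segment of equilibria for this case:

```python
def msne_total_equal_one(delta_hat, C0, L0bar, v):
    """One MSNE of the case sum(delta_hat) = 1, per Proposition 8: y_i = Delta-hat_i,
    y_0 = 0, and x_i = 1 - (v + C0_i)/L0bar_i for a parameter 0 <= v <= min_i M0bar_i."""
    M0bar = [L - C for L, C in zip(L0bar, C0)]
    assert 0.0 <= v <= min(M0bar)
    x = [1.0 - (v + C) / L for C, L in zip(C0, L0bar)]
    y = list(delta_hat)
    return x, 0.0, y          # (defender strategies, y_0, direct-attack probabilities)
```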
Algorithm A4: Subroutine to Compute (a Simple Simplex Representation of) All MSNE of a Fully Transfer-Vulnerable Single-Attack IDD Game with i = 1 n Δ ^ i > 1 .

Appendix D. FPTAS for Computing an ϵ-MSNE in IDD Games with Directed-Tree Graphs

We will start off simple by considering a directed star (DS) graph structure. We refer to single-attack IDD games with such structure simply as DS-IDD games. We show that there is a fully polynomial-time approximation scheme (FPTAS) to compute an ϵ-MSNE in DS-IDD games. Roughly speaking, an FPTAS's running time is polynomial in the input size and 1/ϵ, for 0 < ϵ < 1 [58]. Then we generalize the result to directed trees (DT).

Appendix D.1. Directed Stars

Let the source node correspond to player n, and the remaining n−1 sink nodes correspond to players 1, …, n−1. The directed star (DS) is equivalent to a directed tree rooted at n with n−1 leaves and no internal nodes.
Since the domain of the variables (i.e., mixed strategies) is [ 0 , 1 ] , a direct optimization method to compute an MSNE would require solving a highly non-linear optimization problem: cubic objective function for the attacker with quartic constraints for the sites. An alternative is to discretize the continuous space of the x i ’s and y i ’s.
Given two integers ρ_x > 1 and ρ_y > 1, let X ≡ X(ρ_x) ≡ {0, τ_x, 2τ_x, …, (ρ_x − 2)τ_x, 1} and Y ≡ Y(ρ_y) ≡ {0, τ_y, 2τ_y, …, (ρ_y − 2)τ_y, 1} be the respective discretizations of the interval [0,1] for the sites and the attacker, where τ_x ≡ 1/(ρ_x − 1) and τ_y ≡ 1/(ρ_y − 1) are the respective spacings between points in the discretization grid of [0,1], and ρ_x and ρ_y are the respective discretization sizes. The discretization defines the domains of the x_i's and y_i's to be X and Y, respectively. Moreover, |X| = ρ_x and |Y| = ρ_y. Of course, there is an extra constraint on the y_i's in Y: ∑_{i=1}^n y_i ≤ 1 for y ∈ Y^n. We will determine the values of ρ_x and ρ_y needed to guarantee an ϵ-MSNE later in the section, but for now, assume they are given. A simple brute-force algorithm to compute an ϵ-MSNE is to check all possible discrete combinations; it would take O((ρ_x ρ_y)^n) time in the worst case.
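The grids themselves are straightforward to construct; a minimal Python sketch (with illustrative names) is:

```python
def make_grid(rho):
    """Uniform discretization {0, tau, 2*tau, ..., 1} of [0, 1] with rho points,
    where tau = 1/(rho - 1)."""
    assert rho > 1
    tau = 1.0 / (rho - 1)
    return [k * tau for k in range(rho)]

X = make_grid(5)     # site grid, e.g., [0.0, 0.25, 0.5, 0.75, 1.0]
Y = make_grid(11)    # attacker grid; a joint attack vector must also satisfy sum(y) <= 1
```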
Indeed, we can apply the principle of dynamic programming (DP) [59] and design an efficient algorithm to compute an ϵ-MSNE that is provably an FPTAS. The key idea is to realize that, given a strategy (x_n, y_n) of the root n, the leaves' decisions are independent of each other. However, there is a sum-less-than-or-equal-to-one constraint for the attacker (i.e., ∑_{i=1}^n y_i ≤ 1). Indeed, for each possible combination of (x_n, y_n), we can run a DP algorithm (to be presented in Appendices D.1.1 and D.1.2) based on some ordering of the nodes and obtain a (best) value for each (x_n, y_n). Clearly, the best (x_n^*, y_n^*), i.e., the pair that obtains the maximum value among all (x_n, y_n) pairs, yields the best possible strategy for the attacker. This guarantees that the attacker would not deviate to a different strategy. Moreover, the DP algorithm produces solutions that ensure each leaf player is ϵ-best-responding. More formally, we define the following mathematical expressions for the DP algorithm. This will give us an FPTAS for DS-IDD games.

Appendix D.1.1. Upstream Pass: Collection of Conditional ϵ-MSNE Computation

First, we impose an ordering on the leaves; that is, we process the leaves in increasing order of their indices. Let M̄_i(x_i, y_i, x_n, y_n) ≡ M_i(x_i, y_i, x_n, y_n) − x_i C_i − y_i C_i^0 be the attacker's utility for attacking i. For each leaf i = 1, …, n−1, we compute the following set of individual conditional tables (in this order):
T̄_{i,n}(x_n, y_n, v_i, x_i, y_i, v_{i−1}) ≡ M̄_i(x_i, y_i, x_n, y_n) + log 1[v_i = y_i + v_{i−1}] + log 1[x_i ∈ BR_{x_i}^ϵ(y_i, x_n, y_n)] + T_{i−1,n}(x_n, y_n, v_{i−1}),
T_{i,n}(x_n, y_n, v_i) ≡ max_{(x_i, y_i, v_{i−1})} T̄_{i,n}(x_n, y_n, v_i, x_i, y_i, v_{i−1}),
W_{i,n}(x_n, y_n, v_i) ≡ arg max_{(x_i, y_i, v_{i−1})} T̄_{i,n}(x_n, y_n, v_i, x_i, y_i, v_{i−1}),
where T_{0,n}(x_n, y_n, v_0) = 0 for all (x_n, y_n, v_0). Each T_{i,n} specifies the maximum possible utility the attacker can obtain from attacking the leaves up to i, given that the attacker attacks the root n with probability y_n, the root n invests with probability x_n, and the total probability of attack allocated to leaves 1 through i is v_i. The first and second log-indicator terms ensure, respectively, that the overall probability of attack does not exceed the allowable limit and that site player i is playing an ϵ-best-response. Computing each of the tables T and W above takes O(ρ_x² ρ_y⁴) time. For n, the root of the tree, we compute
R̄_0(s_0, x_n, y_n, s_n) ≡ M̄_n(x_n, y_n) + log 1[s_0 = s_n + y_n] + log 1[x_n ∈ BR_n^ϵ(y_n)] + R_n(x_n, y_n, s_n),
R_0(s_0) ≡ max_{(x_n, y_n, s_n)} R̄_0(s_0, x_n, y_n, s_n),
W_0(s_0) ≡ arg max_{(x_n, y_n, s_n)} R̄_0(s_0, x_n, y_n, s_n),
where, for the directed star, R_n(x_n, y_n, s_n) = T_{n−1,n}(x_n, y_n, s_n).
Clearly, computing R 0 and W 0 takes O ( ρ x ρ y 3 ) . As mentioned earlier, for each combination of ( x n , y n ) , we are going to compute the best value an attacker can obtain. The computation of R 0 does exactly this.

Appendix D.1.2. Downstream Pass: Assignment Phase

The assignment phase is essentially the backtracking phase of the DP algorithm, where we follow the "back pointers" to find the mixed strategies of the players and the attacker. For the "downstream" or assignment pass, we start with the root and find s_0^* ∈ arg max_{s_0} R_0(s_0). Because of the discretization result of Theorem 4, there always exists an ϵ-MSNE in the discretized space, and thus there is an s_0^* such that R_0(s_0^*) > −∞. We set the mixed strategy of the root to be some (x_n^*, y_n^*, s_n^*) ∈ W_0(s_0^*). Starting from the opposite order of the upstream pass (i.e., n−1, …, 1), we set the mixed strategies of the leaves according to v_{n−1}^* ← s_n^*, and, for i = n−1, …, 1,
(x_i^*, y_i^*, v_{i−1}^*) ∈ W_{i,n}(x_n^*, y_n^*, v_i^*).
By construction the resulting ( x * , y * ) is an ϵ-MSNE of the DS-IDD game.
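To make the two passes concrete, here is a structural Python sketch of the value computation (not the paper's exact pseudocode): it loops over the root's discretized strategies and, for each, runs the budget-constrained DP over the leaves. The helper functions attacker_payoff, site_eps_br, root_eps_br, and root_payoff are hypothetical placeholders to be instantiated from the game's parameters (playing the roles of M̄_i, BR^ϵ for a leaf, BR_n^ϵ, and M̄_n, respectively); a full implementation would additionally store witnesses for the assignment pass.

```python
import itertools

NEG_INF = float("-inf")

def dp_star_value(num_leaves, X, Y, attacker_payoff, site_eps_br, root_eps_br, root_payoff):
    """Best attacker value over the discretized grids for a directed star; budgets are
    tracked as indices into the uniform attacker grid Y so the sum constraint stays exact."""
    best = NEG_INF
    for x_root, y_root in itertools.product(X, Y):
        if not root_eps_br(x_root, y_root):
            continue                               # the root must be eps-best-responding
        # Upstream pass: T[v] = best payoff from the leaves processed so far when the
        # total attack probability allocated to them equals Y[v].
        T = [0.0] + [NEG_INF] * (len(Y) - 1)
        for i in range(1, num_leaves + 1):
            T_new = [NEG_INF] * len(Y)
            for v in range(len(Y)):
                for k in range(v + 1):             # leaf i attacked with probability Y[k]
                    if T[v - k] == NEG_INF:
                        continue
                    for x_i in X:
                        if site_eps_br(i, x_i, Y[k], x_root, y_root):
                            val = attacker_payoff(i, x_i, Y[k], x_root, y_root) + T[v - k]
                            T_new[v] = max(T_new[v], val)
            T = T_new
        # Root term: the total probability of attack must not exceed one.
        for v in range(len(Y)):
            if T[v] > NEG_INF and Y[v] + y_root <= 1.0 + 1e-12:
                best = max(best, root_payoff(x_root, y_root) + T[v])
    return best
```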
The key to show that this DP algorithm produces an ϵ-MSNE for the DS-IDD games is the discretization sizes. The question is, how small can we make ρ x and ρ y and still guarantee an ϵ-MSNE in the discretized space? A more general result about sparse discretization for graphical games [40] provides the answer. Below, we formally state the result of Ortiz [40] for graphical games, but tailored to our context.
Theorem 4.
[40] For any single-attack IDD game with graph G = ([n], E), and any ϵ > 0, an (individually-uniform) discretization of size
ρ_i = Θ(k_i/ϵ)
for each site i ∈ [n], and
ρ_0 = Θ(n |E| / ϵ)
for the attacker, is sufficient to guarantee that, for every exact MSNE of the game, its closest mixed-strategy profile in the induced discretized space in terms of distance is also an ϵ-MSNE of the game.
In other words, to get an ϵ-MSNE, we need to set the discretization sizes as specified above for each player in the game.
Lemma 9.
Let ρ_x = Θ(1/ϵ) and ρ_y = Θ(n²/ϵ). There is a DP algorithm that computes an ϵ-MSNE in DS-IDD games in time O(n (ρ_x ρ_y²)²) = O(n⁹/ϵ⁶).
Proof. 
From Theorem 4, we need to set the discretization sizes for the sites and the attacker as given in the statement of the lemma in order to guarantee the existence of an ϵ-MSNE in the discretized grid. Also, in the case of directed stars, we have k_max = 2 and |E| = n−1. Moreover, the DP algorithm uses at most O(ρ_x² ρ_y⁴) space and takes the same amount of time per leaf in the worst case, for a total of O(n ρ_x² ρ_y⁴) time. A simple substitution gives us the claimed running time. ☐
Our next corollary follows from the above lemma and the definition of FPTAS.
Corollary 2.
There is an FPTAS to compute an ϵ-MSNE in single-attack IDD games with a directed star graph over the sites.

Appendix D.2. Directed Trees

We refer to a single-attack IDD game with a directed tree graph over the sites as a DT-IDD game. Let n denote a site/node in the directed tree with a single source (i.e., the root of the tree). Let ( i 1 , , i l n ) be a sequence ordering the set of children of n, Ch ( n ) { i 1 , , i l n } , where l n | Ch ( n ) | . The following conditions provide the expressions corresponding to the “upstream pass” of the DP algorithm. For all n, except the root of the directed tree, we (recursively) define
R_n(x_n, y_n, s_n) ≡ T_{i_{l_n}, n}(x_n, y_n, s_n),
such that, for all j = 1 , , l n , we define
T_{i_j, n}(x_n, y_n, v_{i_j}) ≡ max_{(x_{i_j}, y_{i_j}, s_{i_j}, v_{i_{j−1}})} [ M̄_{i_j}(x_{i_j}, y_{i_j}, x_n, y_n) + log 1[v_{i_j} = s_{i_j} + y_{i_j} + v_{i_{j−1}}] + log 1[x_{i_j} ∈ BR_{x_{i_j}}^ϵ(y_{i_j}, x_n, y_n)] + R_{i_j}(x_{i_j}, y_{i_j}, s_{i_j}) + T_{i_{j−1}, n}(x_n, y_n, v_{i_{j−1}}) ],
W i j , n ( x n , y n , v i j ) is the arg max of the same optimization (i.e., the set of “witnesses” containing the values of ( x i j , y i j , s i j , v i j 1 ) that achieve the maximum values of the optimization given each ( x n , y n , v i j ) ), and, to simplify the presentation, we use the boundary conditions (1) T i 0 , n ( x n , y n , s i 0 ) = 0 for all ( x n , y n , s i 0 ) ; and (2) if i j is a leaf of the tree, then R i j ( x i j , y i j , s i j ) = 0 for all ( x i j , y i j , s i j ) . If n is the root of the tree, we compute
R_0(s_0) ≡ max_{(x_n, y_n, s_n)} [ M̄_n(x_n, y_n) + log 1[s_0 = s_n + y_n] + log 1[x_n ∈ BR_n^ϵ(y_n)] + R_n(x_n, y_n, s_n) ], and
W 0 ( s 0 ) is the arg max of the same optimization (i.e., the set of “witnesses” containing the values of ( x n , y n , s n ) that achieve the maximum values of the optimization given each s 0 in the discretized grid of probability values in [ 0 , 1 ] ).
For the "downstream" or assignment pass, first find s_0^* ∈ arg max_{s_0} R_0(s_0). Note that such an s_0^* with R_0(s_0^*) > −∞ exists because of the properties of the discretization (i.e., the existence of an ϵ-MSNE in the appropriately sized grid). Set the values of the root node, denoted by n, to some (x_n^*, y_n^*, s_n^*) ∈ W_0(s_0^*). Then (recursively) set the values of the children of n, in the reverse of the order in which the DP computes the maximizations: set v_{i_{l_n}}^* ← s_n^*, and, for j = l_n, …, 1,
(x_{i_j}^*, y_{i_j}^*, s_{i_j}^*, v_{i_{j−1}}^*) ∈ W_{i_j, n}(x_n^*, y_n^*, v_{i_j}^*).
We repeat the same assignment process for all of the nodes in the tree. Recall that there will always be at least one witness value during the assignment phase because of the properties of the discretization and the existence of MSNE. By construction (i.e., the properties of the DP algorithm and the discretization of Theorem 4), the resulting ( x * , y * ) is an ϵ-MSNE of the DT-IDD game. Just like for DS-IDD games, the “upstream phase” dominates the worst-case running time of the DP algorithm for DT-IDD games, which is now O ( n ρ x 2 ρ y 5 ) . Our result follows by applying the same analysis of Lemma 9, and noting that, just like for DS-IDD games, k max = 2 and | E | = n 1 .
Lemma 10.
Let ρ_x = Θ(1/ϵ) and ρ_y = Θ(n²/ϵ). There is a DP algorithm that computes an ϵ-MSNE in DT-IDD games in time O(n ρ_x² ρ_y⁵) = O(n¹¹/ϵ⁷).
The proof of Theorem 3, stated in the main body of the paper (Section 4.3), follows from Lemma 10.

Appendix E. Multiple Attackers

In this section, we extend the model further by considering the possibility of multiple attackers. While in our non-cooperative setting the attackers act independently, some degree of mutual "cooperation" is induced via the attackers' shared objectives, as encoded in their utility functions; for example, several attackers may all derive utility from attacking the same (set of) sites. We discuss other ways to induce cooperation at the end of Appendix E.3, where we briefly mention another solution concept: correlated equilibria (CE) [44,45].

Appendix E.1. Pure Strategies

First we consider the extension of the model in terms of pure strategies.
Let m be the number of attackers. Let S_l be the set of sites that can be a target of attacker l ∈ [m]. Let T_i be the set of attackers that can target site i ∈ [n], and T_I ≡ ∪_{i∈I} T_i the set of all attackers that can target any player in I ⊆ [n]. Denote by b^l ≡ (b_i^l : i ∈ S_l) ∈ {0,1}^{|S_l|} the action or pure strategy (i.e., the "vector of attack") of attacker l ∈ [m]. Extending our previous notation, denote by b_I^l ≡ (b_i^l : i ∈ S_l ∩ I) the actions of attacker l with respect to the sites in I ⊆ [n]. (Note that we ignore any player in I that is not in S_l.) Denote by b^D ≡ (b^l : l ∈ D) the joint actions or pure strategies (i.e., the joint "vector of attack") of all attackers l ∈ D ⊆ [m]. For I ⊆ [n] and D ⊆ [m], denote by b_I^D ≡ (b_I^l : l ∈ D). For simplicity, denote b_i ≡ b_i^{T_i} and b_I ≡ b_I^{T_I}. 17 Let
t_i ≡ t_i(b_i) ≡ 1[∑_{l∈T_i} b_i^l > 0] = 1 − ∏_{l∈T_i}(1 − b_i^l)
indicate whether i will be attacked.
A simple way to introduce t_i into the cost function of each site i is to replace b_i in the previous single-attacker formulation with t_i: 18
M_i ≡ M_i(a_PF(i), b_PF(i)) ≡ C_i + α_i r_i L_i, if a_i = 1 (player i invests); t_i p̂_i L_i + (1 − t_i p̂_i) r_i L_i, if a_i = 0 (player i does not invest); equivalently, M_i = a_i [C_i + α_i r_i L_i] + (1 − a_i)[t_i p̂_i L_i + (1 − t_i p̂_i) r_i L_i],
where
r_i ≡ r_i(a_Pa(i), b_Pa(i)) = 1 − s_i
and
s_i ≡ s_i(a_Pa(i), b_Pa(i)) = ∏_{j∈Pa(i)} e_ij(a_j, t_j(b_j)).
Then, similar to the discussion for a single attacker, the utility function of each attacker l, after performing the replacement applied to the sites, becomes
U^l ≡ U^l(a, b) ≡ ∑_{i∈S_l} (u_i(a, b) − b_i^l C_i^l),
where
u_i(a, b) ≡ u_i(a_PF(i), b_PF(i)) ≡ M_i − a_i C_i = w_i L_i
and
w_i ≡ w_i(a_PF(i), b_PF(i)) ≡ a_i α_i r_i + (1 − a_i)(t_i p̂_i + (1 − t_i p̂_i) r_i) = (1 − a_i) p̂_i t_i s_i + (a_i α_i + (1 − a_i)) r_i.
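The pure-strategy quantities above are easy to compute directly. The following Python sketch (with hypothetical names; the function e plays the role of the transfer term e_ij from the single-attacker model) evaluates t_i and site i's cost M_i for a joint pure-strategy profile:

```python
import math

def attack_indicator(b_i):
    """t_i: equals 1 iff at least one attacker targeting site i attacks it.
    b_i is the tuple (b_i^l : l in T_i) of attack bits aimed at site i."""
    return 1 - math.prod(1 - b_il for b_il in b_i)

def site_cost(i, a, b, parents, C, L, alpha, p_hat, e):
    """M_i for site i: a maps sites to 0/1 investment decisions, b maps sites to their
    incoming attack-bit tuples, parents[i] lists Pa(i), and e(i, j, a_j, t_j) is the
    transfer term e_ij evaluated at parent j's investment and attack status."""
    t_i = attack_indicator(b[i])
    s_i = math.prod(e(i, j, a[j], attack_indicator(b[j])) for j in parents[i])
    r_i = 1.0 - s_i                                  # overall transfer risk faced by site i
    if a[i] == 1:                                    # site i invests
        return C[i] + alpha[i] * r_i * L[i]
    return t_i * p_hat[i] * L[i] + (1 - t_i * p_hat[i]) * r_i * L[i]
```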

Appendix E.2. Mixed Strategies

Here, we consider the extension of the interdependent defense model in the context of mixed strategies. Let P^l ≡ P_{B_{S_l}^l} be the joint PMF corresponding to the mixed strategy of attacker l. Denote by P ≡ (P^1, …, P^m) the mixed strategies of all the attackers. The distribution of play induced by the joint mixed-strategy (x, P) satisfies
P(A = a, B = b) = ∏_{i=1}^n P(A_i = a_i) ∏_{l=1}^m P(B_{S_l}^l = b_{S_l}^l) = ∏_{i=1}^n x_i^{a_i}(1 − x_i)^{1−a_i} ∏_{l=1}^m P^l(b_{S_l}^l).
The cost of each site i is now a random function of the decisions of site i’s attackers as well as i’s parents and i’s parents’ attackers. The expected cost is
M_i(x_PF(i), P_PF(i)) ≡ E[M_i(a_PF(i), b_PF(i))] = x_i C_i + (x_i α_i + (1 − x_i)) E[r_i(a_Pa(i), b_Pa(i))] L_i + (1 − x_i) E[t_i(b_i) s_i(a_Pa(i), b_Pa(i))] p̂_i L_i,
where
E[r_i(a_Pa(i), b_Pa(i))] ≡ 1 − E[s_i(a_Pa(i), b_Pa(i))],
and
E[s_i(a_Pa(i), b_Pa(i))] ≡ E[∏_{j∈Pa(i)} e_ij(a_j, t_j(b_j))] = E[∏_{j∈Pa(i)} e_ij(x_j, t_j(b_j))] = ∑_{b_Pa(i)} P_Pa(i)(b_Pa(i)) ∏_{j∈Pa(i)} e_ij(x_j, t_j(b_j)) = ∑_{b_Pa(i)} ∏_{l∈T_Pa(i)} P_Pa(i)^l(b_Pa(i)^l) ∏_{j∈Pa(i): t_j(b_j)=1} e_ij(x_j, 1),
and
E[t_i(b_i) s_i(a_Pa(i), b_Pa(i))] ≡ ∑_{b_PF(i)} ∏_{l∈T_PF(i)} P_PF(i)^l(b_PF(i)^l) t_i(b_i) ∏_{j∈Pa(i): t_j(b_j)=1} e_ij(x_j, 1).
Similarly, for the attackers, the expected payoff of each attacker l becomes
U^l(x_{S_l}, y_{S_l}) ≡ E[U^l(a, b)] ≡ E[∑_{i∈S_l} (w_i(a_PF(i), b_PF(i)) L_i − b_i^l C_i^l)] = ∑_{i∈S_l} (E[w_i(a_PF(i), b_PF(i))] L_i − y_i^l C_i^l),
where
E[w_i(a_PF(i), b_PF(i))] = (1 − x_i) p̂_i E[t_i(b_i) s_i(a_Pa(i), b_Pa(i))] + (x_i α_i + (1 − x_i)) E[r_i(a_Pa(i), b_Pa(i))].
The following result extends Proposition 3 to the case of multiple attackers; and the proof is analogous.
Proposition 12.
For every mixed-strategy ( x * , P * ) of an IDD game with multiple attackers, and sites’ costs and attackers’ utilities defined as above, there exists a mixed strategy ( x * , P ˜ ) , such that for each attacker l, the following holds.
1. 
For all sites i [ n ] , the parent-family marginals P ˜ PF ( i ) l = P PF ( i ) l agree, and
2. 
the PMF P ˜ l decomposes as
P̃^l(b_{S_l}^l) ∝ ∏_{i∈S_l} Φ_PF(i)^l(b_PF(i)^l)
for some non-negative functions Φ_PF(i)^l : {0,1}^{k_i} → [0, ∞), and all b_{S_l} ∈ {0,1}^{|S_l|}.
In addition, all sites i [ n ] and attackers l [ m ] achieve the same expected cost and utility, respectively, in ( x * , P * ) as in ( x * , P ˜ ) :
M i ( x * , P ˜ ) = M i ( x * , P * )
and
U l ( x * , P ˜ ) = U l ( x * , P * ) .
Similarly to the case of a single attacker, we have the following as a corollary of the last proposition.
Corollary 3.
For any IDD game with multiple attackers, and for every attacker l [ m ] in the game, let k max ( l ) max i S l k i be the size of the largest parent-family in the game graph that attacker l can target. Then, for every attacker l in the game, the representation size of any of attacker l’s mixed strategies is O 2 k max ( l ) , modulo expected-payoff equivalence.

Appendix E.3. Attackers with Limited Mixed Strategies

In this section we consider attackers with possibly limited randomization capabilities for attacks.
One of the simplest restrictions we can impose on the attackers is to assume that every attacker's decision to attack one of its target sites is independent of its decisions for the other sites. Formally, for all attackers l, we would assume
P^l(b^l) = ∏_{i∈S_l} P_i^l(b_i^l) ≡ ∏_{i∈S_l} (y_i^l)^{b_i^l} (1 − y_i^l)^{1 − b_i^l}.
The universal existence of MSNE of this form is still open. Note that Nash's Existence Theorem [60] no longer applies, because we are not considering the nice (compact and convex) set of all possible probability distributions over each attacker's space of pure strategies, but only those that are product distributions. In the special case of a single attacker, given the structure of the attacker's mixed strategy as defined in the last equation (Equation (48)) and compactly represented by y, the subgame induced over the sites only is a graphical α-IDS game in which each p_i = y_i p̂_i and q_{ji} = y_j q̂_{ji} (see Definition 4 and Equation (7)). It is also important to note that, in general, Nash's Existence Theorem does imply that the subgame over the sites only, given the attackers' mixed strategies and regardless of their structure or restrictions, always has an MSNE. This is because, given the attackers' mixed strategies, the subgame over the sites only is a 2-action finite game in parametric form.
Another possibility is to restrict the mixed strategy P^l of attacker l to have some compact factored representation over its (factored) pure-strategy space, such as those belonging to the class of probabilistic graphical models (i.e., a Markov or Bayesian network whose graph has the sites in S_l as nodes). Unlike the previous restriction given in Equation (48), which imposes complete independence (i.e., a product distribution over each site that each attacker could potentially attack), Proposition 12 establishes sufficient conditions for the existence of an MSNE in which the attackers' mixed strategies have such forms.
As a final remark, one could use the solution concept of correlated equilibria (CE) [44,45] to introduce potential “cooperation” jointly over both attackers and sites within a non-cooperative setting. Many possible uses or combinations of CE and MSNE are possible. For instance, we could combine a CE over the attackers with another CE over the sites; or a CE over the attackers and MSNE over the sites; or an MSNE over the attackers and a CE over the sites. Of course, we would still have a problem of how to compactly represent a CE over either the attackers or the sites, or both. To do that, we would have to modify the representation result of Kakade et al. [43] because a naive application of that result would lead to a representation that is exponential over neighborhood sizes, while the IDD representation is only linear over each neighborhood size.

Appendix E.4. Brief Remarks on Computing Equilibria in Multi-Attacker Settings

While extending the model itself to handle multiple attackers is relatively straightforward, as we show in this appendix, we believe that extending the technical computational results would require a non-trivial amount of work. We leave a comprehensive study of the computational questions in multi-attacker IDD models for future work. Having said that, extending learning-in-games heuristics such as BRGD and SBRD, or the hybrid BRGD-SBRD heuristic we propose in Section 5.2, to this setting is relatively straightforward. But once again, we leave a comprehensive evaluation of such heuristics for future work.

References

  1. Bier, V.M.; Azaiez, M.N. Game Theoretic Risk Analysis of Security Threats; Springer: New York, NY, USA, 2009. [Google Scholar]
  2. Cárceles-Poveda, E.; Tauman, Y. A Strategic Analysis of the War against Transnational Terrorism. Games Econ. Behav. 2011, 7, 49–65. [Google Scholar] [CrossRef]
  3. Kunreuther, H.; Heal, G. Interdependent Security. J. Risk Uncertain. 2003, 26, 231–249. [Google Scholar] [CrossRef]
  4. Heal, G.; Kunreuther, H. IDS Models of Airline Security. J. Confl. Resolut. 2005, 49, 201–217. [Google Scholar] [CrossRef]
  5. O’Connor, A.; Schmitt, E. Terror Attempt Seen as Man Tries to Ignite Device on Jet. The New York Times, 2009. Available online: http://www.nytimes.com/2009/12/26/us/26plane.html (accessed on 31 August 2010).
  6. Gkonis, K.; Psaraftis, H. Container transportation as an interdependent security problem. J. Transp. Secur. 2010, 3, 197–211. [Google Scholar] [CrossRef]
  7. Johnson, B.; Grossklags, J.; Christin, N.; Chuang, J. Uncertainty in Interdependent Security Games. In Proceedings of the First International Conference on Decision and Game Theory for Security, GameSec’10, Berlin, Germany, 22–23 November 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 234–244. [Google Scholar]
  8. Fultz, N.; Grossklags, J. Blue versus Red: Towards a Model of Distributed Security Attacks. In Financial Cryptography and Data Security; Dingledine, R., Golle, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 167–183. [Google Scholar]
  9. Roy, S.; Ellis, C.; Shiva, S.; Dasgupta, D.; Shandilya, V.; Wu, Q. A Survey of Game Theory as Applied to Network Security. In Proceedings of the 2010 43rd Hawaii International Conference on System Sciences, HICSS ’10, Honolulu, HI, USA, 5–8 January 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 1–10. [Google Scholar]
  10. Syverson, P.F.; Systems, A.C. A Different Look at Secure Distributed Computation. In Proceedings of the CSFW-10, Rockport, MA, USA, 10–12 June 1997; IEEE Computer Society Press: Washington, DC, USA, 1997; pp. 109–115. [Google Scholar]
  11. Lye, K.W.; Wing, J. Game Strategies in Network Security. In Proceedings of the Workshop on Foundations of Computer Security, Cape Breton, NS, Canada, 24–26 June 2002; pp. 1–2.
  12. Jain, M.; Korzhyky, D.; Vanek, O.; Conitzery, V.; Pechoucek, M.; Tambe, M. A Double Oracle Algorithm for Zero-Sum Security Games on Graphs. In Proceedings of the AAMAS, Taipei, Taiwan, 2–6 May 2011.
  13. Kiekintveld, C.; Jain, M.; Tsai, J.; Pita, J.; Ordóñez, F.; Tambe, M. Computing Optimal Randomized Resource Allocations for Massive Security Games. In Proceedings of the AAMAS, Budapest, Hungary, 10–15 May 2009; pp. 689–696.
  14. Korzhyk, D.; Conitzer, V.; Parr, R. Complexity of Computing Optimal Stackelberg Strategies in Security Resource Allocation Games. In Proceedings of the AAAI, Atlanta, GA, USA, 11–15 July 2010.
  15. Korzhyk, D.; Conitzer, V.; Parr, R. Security Games with Multiple Attacker Resources. In Proceedings of the IJCAI, Catalonia, Spain, 16–22 July 2011; pp. 273–279.
  16. Korzhyk, D.; Conitzer, V.; Parr, R. Solving Stackelberg Games with Uncertain Observability. In Proceedings of the AAMAS, Taipei, Taiwan, 2–6 May 2011; pp. 1013–1020.
  17. Smith, A.; Vorobeychik, Y.; Letchford, J. MultiDefender Security Games on Networks. SIGMETRICS Perform. Eval. Rev. 2014, 41, 4–7. [Google Scholar] [CrossRef]
  18. Lou, J.; Vorobeychik, Y. Equilibrium Analysis of Multi-Defender Security Games. In Proceedings of the International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015.
  19. Laszka, A.; Lou, J.; Vorobeychik, Y. Multi-Defender Strategic Filtering Against Spear-Phishing Attacks. In Proceedings of the AAAI, Phoenix, AZ, USA, 12–17 February 2016.
  20. Liu, P. Incentive-Based Modeling and Inference of Attacker Intent, Objectives, and Strategies. In Proceedings of the 10th ACM Computer and Communications Security Conference (CCS’03), Washington, DC, USA, 27–30 October 2003; pp. 179–189.
  21. Cremonini, M.; Nizovtsev, D. Understanding and Influencing Attackers’ Decisions: Implications for Security Investment Strategies. In Proceedings of the Fifth Workshop on the Economics of Information Security (WEIS 2006), Cambridge, UK, 26–28 June 2006.
  22. Merlevede, J.S.A.; Holvoet, T. Game Theory and Security: Recent History and Future Directions. In Decision and Game Theory for Security, Proceedings of the 6th International Conference, GameSec 2015, London, UK, 4–5 November 2015; Khouzani, M., Panaousis, E., Theodorakopoulos, G., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 334–345. [Google Scholar]
  23. Agiwal, S.; Mohtadi, H. Risk Mitigating Strategies in the Food Supply Chain. In Proceedings of the American Agricultural Economics Assocation (Annual Meeting), Orlando, FL, USA, 27–29 July 2008.
  24. Dvijotham, K.; Chertkov, M.; Van Hentenryck, P.; Vuffray, M.; Misra, S. Graphical models for optimal power flow. Constraints 2016, 1–26. [Google Scholar] [CrossRef]
  25. Kearns, M.; Ortiz, L.E. Algorithms for Interdependent Security Games. In Proceedings of the Neural Information Processing Systems (NIPS), Whistler, BC, Canada, 11–13 December 2003.
  26. Gottlob, G.; Greco, G.; Scarcello, F. Pure Nash equilibria: Hard and easy games. J. Artif. Intell. Res. 2005, 24, 357–406. [Google Scholar]
  27. Gilboa, I.; Zemel, E. Nash and correlated equilibria: Some complexity considerations. Games Econ. Behav. 1989, 1, 80–93. [Google Scholar] [CrossRef]
  28. Chen, X.; Deng, X.; Teng, S.H. Settling the complexity of computing two-player Nash equilibria. J. ACM 2009, 56, 1–57. [Google Scholar] [CrossRef]
  29. Daskalakis, C.; Goldberg, P.W.; Papadimitriou, C.H. The Complexity of Computing a Nash Equilibrium. SIAM J. Comput. 2009, 39, 195–259. [Google Scholar] [CrossRef]
  30. Papadimitriou, C.H. On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence. J. Comput. Syst. Sci. 1994, 48, 498–532. [Google Scholar] [CrossRef]
  31. Daskalakis, C.; Goldberg, P.W.; Papadimitriou, C.H. The complexity of computing a Nash equilibrium. Commun. ACM 2009, 52, 89–97. [Google Scholar] [CrossRef] [Green Version]
  32. Conitzer, V.; Sandholm, T. New complexity results about Nash equilibria. Games Econ. Behav. 2008, 63, 621–641. [Google Scholar] [CrossRef]
  33. Shavitt, Y.; Shir, E. DIMES—Letting the Internet Measure Itself. Available online: http://www.arxiv.org/abs/cs.NI/0506099 (accessed on 25 January 2017).
  34. Shavitt, Y.; Shir, E. DIMES: Let the Internet Measure Itself. ACM SIGCOMM Comput. Commun. Rev. 2005, 35, 71–74. [Google Scholar] [CrossRef]
  35. Fudenberg, D.; Levine, D. The Theory of Learning in Games; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  36. Kearns, M. Graphical Games. In Algorithmic Game Theory; Nisan, N., Roughgarden, T., Tardos, É., Vazirani, V.V., Eds.; Cambridge University Press: Cambridge, UK, 2007; pp. 159–180. [Google Scholar]
  37. Kearns, M.; Littman, M.; Singh, S. Graphical Models for Game Theory. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, Seattle, WA, USA, 2–5 August 2001; pp. 253–260.
  38. Heal, G.; Kunreuther, H. You Only Die Once: Managing Discrete Interdependent Risks. Technical Report W9885. NBER, 2003. Working Paper. Available online: http://ssrn.com/abstract=430599 (accessed on 25 January 2017).
  39. Heal, G.; Kunreuther, H. Modeling Interdependent Risks. Risk Anal. 2007, 27, 621–634. [Google Scholar] [CrossRef] [PubMed]
  40. Ortiz, L.E. On Sparse Discretization for Graphical Games. CoRR 2014. abs/1411.3320. [Google Scholar]
  41. Irfan, M.T.; Ortiz, L.E. On influence, stable behavior, and the most influential individuals in networks: A game-theoretic approach. Artif. Intell. 2014, 215, 79–119. [Google Scholar] [CrossRef]
  42. Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  43. Kakade, S.; Kearns, M.; Langford, J.; Ortiz, L. Correlated Equilibria in Graphical Games. In Proceedings of the 4th ACM conference on Electronic commerce, EC ’03, San Diego, CA, USA, 9–12 June 2003; ACM: New York, NY, USA, 2003; pp. 42–47. [Google Scholar]
  44. Aumann, R. Subjectivity and Correlation in Randomized Strategies. J. Math. Econ. 1974, 1, 67–96. [Google Scholar] [CrossRef]
  45. Aumann, R. Correlated Equilibrium as an Expression of Bayesian Rationality. Econometrica 1987, 55, 1–18. [Google Scholar] [CrossRef]
  46. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  47. Kearns, M.; Ortiz, L.E. Algorithms for Interdependent Security Games. In Advances in Neural Information Processing Systems 16; Thrun, S., Saul, L.K., Schölkopf, B., Eds.; MIT Press: Cambridge, MA, USA, 2004; pp. 561–568. [Google Scholar]
  48. Dyer, M.; Frieze, A.; Kannan, R. A random polynomial time algorithm for approximating the volume of a convex body. JACM 1991, 38, 1–17. [Google Scholar] [CrossRef]
  49. Elkind, E.; Goldberg, L.A.; Goldberg, P.W. Nash Equilibria in Graphical Games on Trees Revisited. In Proceedings of the 7th ACM Conference on Electronic Commerce, EC ’06, Ann Arbor, MI, USA, 11–15 June 2006; pp. 100–109.
  50. Cai, Y.; Daskalakis, C. On Minmax Theorems for Multiplayer Games. In Proceedings of the Twenty-second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’11, San Francisco, CA, USA, 23–25 January 2011; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2011; pp. 217–234. [Google Scholar]
  51. Singh, S.P.; Kearns, M.J.; Mansour, Y. Nash Convergence of Gradient Dynamics in General-Sum Games. In Proceedings of the UAI, Stanford, CA, USA, 30 June–3 July 2000; pp. 541–548.
  52. Kearns, M. Economics, Computer Science, and Policy. Issues Sci. Technol. Available online: http://issues.org/21-2/kearns/ (accessed on 25 January 2017).
  53. McKelvey, R.D.; Palfrey, T.R. Quantal Response Equilibria for Normal Form Games. Games Econ. Behav. 1995, 10, 6–38. [Google Scholar] [CrossRef]
  54. Kleinberg, J. Cascading Behavior in Networks: Algorithmic and Economic Issues. In Algorithmic Game Theory; Nisan, N., Roughgarden, T., Tardos, É., Vazirani, V.V., Eds.; Cambridge University Press: Cambridge, UK, 2007; pp. 613–632. [Google Scholar]
  55. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley & Sons: New York, NY, USA, 2006. [Google Scholar]
  56. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  57. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman & Co.: New York, NY, USA, 1979. [Google Scholar]
  58. Vazirani, V.V. Approximation Algorithms; Springer: New York, NY, USA, 2001. [Google Scholar]
  59. Bellman, R.E. Dynamic Programming; Dover Publications: Mineola, NY, USA, 2003. [Google Scholar]
  60. Nash, J. Non-cooperative games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
  • 1.Note that even if full screening were performed, the Christmas Day 2009 episode in Detroit [5] serves as a reminder that transfer risk still exists.
  • 2.Throughout this article, we often used “attacker(s)” and “aggressor(s)” interchangeably as a way to remind the reader that our model handles a variety of interdependent security settings beyond airline or Internet infrastructure security.
  • 3.We note that the original IDS games were also fully transfer-vulnerable and assumed one-hop transfers.
  • 4.Or as Heal and Kunreuther [38] put it, “You Only Die Once.”
  • 5.By 2 { 0 , 1 } we mean the power set of { 0 , 1 } which equals { , { 0 } , { 1 } , { 0 , 1 } } .
  • 6.A similar extension was also proposed independently by Heal and Kunreuther [39].
  • 7.By “strategic” here we mean that the action of an individual entity may depend on those of others in the population.
  • 8.We should note that the terms “ a i C i ” are actually strategically irrelevant, and could have been removed. That doing so is sound will become clear when we define the best-response correspondence of the attacker (Equation (17)). We decided to keep those terms to explicitly express the notion that the attacker does not care about the cost for investments that any player may incur.
  • 9.Note that BR 0 ( a ) = arg max b { 0 , 1 } n i = 1 n M i ( a PF ( i ) , b PF ( i ) ) b i C i 0 , because the term i = 1 n a i C i is not a function of b .
  • 10.In other words, P ˜ is a Gibbs distribution with respect to the undirected “moralized” graph that results from adding an (undirected) edge among every pair of parents of every node to the original directed graph of the game and ignoring the directions of the edges in the original game graph. We refer the reader to Koller and Friedman [42] for a textbook introduction to concepts from probabilistic graphical models.
  • 11.Note that this does not mean that the expected loss caused by a player that does not invest but is attacked, L p ^ + δ | Ch ( i ) | , is the same for all players.
  • 12.Distinct M ¯ i 0 ’s for the set of defenders at which the sum goes over one is sufficient to guarantee unique MSNE.
  • 13.We note that proving that computing an MSNE in IDD games is PPAD-complete would be more appropriate, since there always exists an MSNE in IDD games, but we will leave that question for future work.
  • 14.Recall the probability of no attack y 0 = 1 i = 1 n y i .
  • 15.While some results presented here are for a single instance of the Internet game for each ϵ, the results are typical of multiple instances. Our observations are robust to the experimental randomness in both the Internet game parameters and the initialization of BRGD. For the sake of simplicity of presentation, we discuss results based on a single instance of the Internet game, and in some cases based on a single BRGD run. Note that, for each ϵ value we considered, the Internet game parameters remain constant within different BRGD runs. BRGD always converged within 2000 iterations (except 6 runs for ϵ = 0.001 ).
  • 16Throughout the proof, to simplify notation, we drop the ‘*’ superscript used in the main text to denote MSNE.
  • 17Note that b l = b S l l , thus consistent with the notation. Note also that, when clear from context, singleton sets are denoted without the set bracket.
  • 18. By defining the cost functions of each player i this way, i.e., based on the definition of the attack function t_i given above, we are implicitly subscribing to the "you only die once" principle [38], because even if multiple attacks on any site i are successful, the loss L_i induced by the attack is the same as if a single attack were successful. Variations of this model that would make L_i depend on the number of successful attacks are possible, but we do not pursue them here. Also, in the case of multiple attackers, one may consider p̂_i a function of b_i, and more specifically, of the number of attacks on site i.
Figure 1. Histograms of Indegree and Outdegree of the Nodes of the Internet Graph from DIMES at the Level of Autonomous Systems (AS). The bar graphs show (the logarithm, base 10, of) the number of nodes with a particular outdegree (a) and indegree (b) value. (The graphs only show the in/out degrees with a non-zero number of nodes.)
Figure 2. Indegree and Outdegree of the Nodes of the AS-level Internet Graph from DIMES. The scatter plot shows the indegree and outdegree pairs of the AS nodes in logarithmic (base 10) scale.
Figure 3. Convergence Rate of Best-Response Gradient Dynamics (BRGD) Heuristic for Computing an Approximate Mixed Strategy Nash Equilibrium (MSNE). The plots in this figure present the number of iterations of BRGD as a function of ϵ under the two experimental conditions: Internet games with fixed (a) and randomly-generated parameters (b). Applying mean-squared-error (MSE) regression to the left-hand and right-hand graphs, we obtain functional expressions for the number of iterations N_F(ϵ) = 0.00003 ϵ^{−2.547} (R² = 0.90415) and N_R(ϵ) = 0.0291 ϵ^{−1.589} (R² = 0.9395), respectively (i.e., low-degree polynomials of 1/ϵ).
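For readers who want to reproduce the kind of power-law fit reported in Figure 3, the following minimal Python sketch fits $N(\epsilon) \approx a\,\epsilon^{b}$ by ordinary least squares in log-log space. The arrays eps_values and iteration_counts are illustrative placeholders, not the measured data behind the figure.

```python
import numpy as np

# Illustrative placeholders only; the actual (eps, iterations) pairs come from
# the BRGD runs summarized in Figure 3.
eps_values = np.array([0.001, 0.002, 0.003, 0.005, 0.007, 0.009])
iteration_counts = np.array([1800.0, 700.0, 350.0, 160.0, 90.0, 60.0])

# Fit log10(N) = log10(a) + b * log10(eps), i.e., N(eps) ~ a * eps**b.
slope_b, log10_a = np.polyfit(np.log10(eps_values), np.log10(iteration_counts), 1)
a = 10.0 ** log10_a
# A negative exponent means the iteration count grows as eps shrinks.
print(f"N(eps) ~= {a:.5g} * eps^({slope_b:.3f})")
```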
Figure 4. Attacker’s Equilibrium Strategy on an Internet Game Instance (Fixed). The graph shows the values of $y_i^* > 0$ for each node i, sorted in decreasing order (in log-log scale), for attacker’s Strategy A (blue/denser-dots line) and Strategy B (red/sparser-dots line) at an MSNE of the single instance of the Internet game.
Figure 5. Attacker’s Equilibrium Strategy and the Degrees of the Nodes. The top graph (a), which depicts Strategy A (all 27,106 nodes), shows the probability of attack (y-axis) of a node and its corresponding outdegree (x-axis) in logarithmic (base 10) scale. The bottom graphs, (b) and (c), show the indegree (y-axis) of a node and its corresponding outdegree (x-axis) in logarithmic (base 10) scale for Strategy B: the graphs on the left and right consist of the 1780 nodes with nonzero probability of attack and the 25,326 nodes with zero probability of attack, respectively.
Figure 6. Attacker’s and Sites’ ϵ-Mixed-Strategy Nash Equilibrium (MSNE) Strategies for a Randomly-Generated Internet Game. The graphs show the empirical distributions of the probability of attack (a) and histograms of the probability of investment (b), for the different ϵ-value conditions encoded on the right-hand side of the plots (i.e., from 0.001 to 0.009). In both graphs, the distributions and histograms found for each ϵ value considered are superimposed in the same corresponding graph. The left-hand graph plots the distribution of $y_i$, where the nodes are ordered decreasingly based on the $y_i$ value. The right-hand bar graph shows histograms of the probability of investing in defense/security measures based on the following sequence of 10 ranges partitioning the unit interval: $([0,0.1], (0.1,0.2], \ldots, (0.9,1])$.
Figure 7. Attacker’s Strategy at ϵ-MSNE. The x-axis of graphs (a), (b), and (c) represents the ϵ value, and their y-axes represent the number of iterations until convergence to some ϵ-MSNE (or the 2000-iteration maximum), the number of nodes being attacked, and the highest probability of attack, respectively. The scatter plot in graph (d) shows the relation between the number of nodes being attacked and the highest probability of attack on the x-axis and y-axis, respectively.
Figure 8. Attacker’s ϵ-MSNE Strategy vs. Node Degrees. Average indegree (a) and outdegree (b) of the nodes potentially attacked at the ϵ-MSNE.
Figure 9. The Structure of an Attack on the Internet. The 3-d graph (a) corresponds to the top 402 Internet autonomous systems (AS)-level nodes most likely to be attacked according to our model at the 0.005-MSNE, and their neighbors (i.e., both parent and children families). Graph (b) is a 2-d projection of the 3-d graph (a). The self-loops mark the nodes that are actually attacked. For the most part, the graph structures exhibit very dense clustering. Bar graph (c) corresponds to the number of connected components (CCs) of the top 402 Internet AS-level nodes that are most likely to be attacked. Bar graph (d) shows the number of nodes with probability of investing in defense/security measures within each of the ranges $([0,0.1], (0.1,0.2], \ldots, (0.9,1])$. Some properties of the graph corresponding to the network structure are shown in the upper corner of graph (c). The graph consists of 1606 nodes, 2044 edges, and 75 CCs. Of the 75 CCs, the largest contains 1427 nodes and the smallest consists of just 1 node (there are only 4 of them). There are 46 2-CCs (CCs with only 2 nodes), 20 3-CCs, 1 4-CC, 1 5-CC, and 2 7-CCs. The diameter and density of the graph are 13 and 0.002, respectively.
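As a rough illustration of the analysis behind Figure 9, the sketch below extracts the top-k most-attacked sites, adds their parents and children, and summarizes the induced subgraph. It assumes the AS-level graph is available as a networkx DiGraph G and the attacker’s ϵ-MSNE probabilities as a dict y; both names, and the use of networkx, are assumptions for illustration rather than the authors’ actual tooling.

```python
import networkx as nx

def attack_structure(G: nx.DiGraph, y: dict, k: int = 402):
    """Summarize the subgraph induced by the top-k attacked nodes and their neighbors."""
    targeted = sorted(y, key=y.get, reverse=True)[:k]
    neighborhood = set(targeted)
    for i in targeted:
        neighborhood.update(G.predecessors(i))  # parents Pa(i)
        neighborhood.update(G.successors(i))    # children Ch(i)
    H = G.subgraph(neighborhood).to_undirected()
    components = list(nx.connected_components(H))
    stats = {
        "nodes": H.number_of_nodes(),
        "edges": H.number_of_edges(),
        "num_components": len(components),
        "largest_component": max(len(c) for c in components),
        "density": nx.density(H),
    }
    return H, stats
```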
Figure 10. Attacker’s Equilibrium Strategy vs. Degree of the Nodes at the 0.05-MSNE. These are plots of the 402 nodes with the highest $y_i$. The two graphs on top, (a) and (b), show the corresponding $y_i$ (y-axis) against the indegree and outdegree, respectively, in logarithmic (base 10) scale. Similarly, the two graphs at the bottom, (c) and (d), show the corresponding $x_i$ (y-axis) against the indegree and outdegree, respectively, in logarithmic (base 10) scale.
Figure 11. The Degrees and Strategies of Sites Not Directly Targeted. These are plots of the remaining 26,704 nodes with zero $y_i$. The two graphs on top, (a) and (b), show the corresponding $y_i$ (y-axis) against the indegree and outdegree, respectively, in logarithmic (base 10) scale. Similarly, the two graphs at the bottom, (c) and (d), show the corresponding $x_i$ (y-axis) against the indegree and outdegree, respectively, in logarithmic (base 10) scale.
Figure 12. Attack Distribution at the 0.001-MSNE $(x^*, y^*)$ Using Best-Response Gradient Dynamics (BRGD) on an Internet Game. Scatter plots of (y-axis) the component $y_i U_i(x^*)$ of the (normalized) expected utility that the attacker obtained from attacking site i (a) and the expected utility $U_i(x^*)$ that the attacker would obtain from fully targeting each site i (b), as a function of the probability $y_i^*$ of attacking the corresponding site i (x-axis).
Figure 13. Properties of the Hybrid BRGD-Smooth Best-Response Dynamics (SBRD) Heuristic. BRGD vs. hybrid heuristic running time (a); attacker’s attack distribution (b) and sites’ investment distribution (c) at the ϵ-MSNE.
Figure 14. Combining BRGD and the Hybrid BRGD-SBRD Heuristic. Internet Games: BRGD improvement (the y-axis represents the ϵ values).
Figure 15. Degrees of the Targeted Site Nodes at the ϵ-MSNE. Internet Games: average indegree (a) and average outdegree (b) of the targeted sites over the ϵ-MSNE.
Table 1. Internet Games’ Model Parameters.
Model Parameters (Fixed condition: $U = 0.5$; Random condition: $U \sim \mathrm{Uniform}([0,1])$):
$\alpha_i = U/20$
$L_i = 10^8 + 10^9 \times U$
$C_i = 10^5 + 10^6 \times U$
$\hat{p}_i = 0.9 \times \tilde{p}_i \,/\, \big(\tilde{p}_i + \sum_{k \in \mathrm{Ch}(i)} \tilde{q}_{ik}\big)$
$\hat{q}_{ij} = 0.9 \times \tilde{q}_{ij} \,/\, \big(\tilde{p}_i + \sum_{k \in \mathrm{Ch}(i)} \tilde{q}_{ik}\big)$
$z_i = 0.2 + U/5$
$\tilde{p}_i = 0.8 + U/10$
$\tilde{q}_{ij} = z_i \,\big(|\mathrm{Ch}(j)| + |\mathrm{Pa}(j)|\big) \,/\, \sum_{k \in \mathrm{Ch}(i)} \big(|\mathrm{Ch}(k)| + |\mathrm{Pa}(k)|\big)$
$C_i^0 = 10^6$
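To make the table concrete, here is a minimal Python sketch of how the Table 1 parameters could be instantiated for a given AS graph. The adjacency representation (a dict mapping each node to its "Ch" and "Pa" lists) and the choice of drawing a single U per node in the random condition are illustrative assumptions; the table itself does not fix these implementation details.

```python
import numpy as np

def generate_internet_game_params(graph, fixed=False, rng=None):
    """Instantiate the Table 1 parameters for every node of an AS graph.

    `graph` maps node i -> {"Ch": [...], "Pa": [...]} (children/parents of i).
    """
    rng = np.random.default_rng() if rng is None else rng
    params = {}
    for i, nbrs in graph.items():
        U = 0.5 if fixed else rng.uniform(0.0, 1.0)  # one draw per node (assumption)
        z_i = 0.2 + U / 5
        p_tilde = 0.8 + U / 10
        # q~_ij is proportional to the total degree of child j, normalized over Ch(i).
        denom = sum(len(graph[k]["Ch"]) + len(graph[k]["Pa"]) for k in nbrs["Ch"]) or 1
        q_tilde = {j: z_i * (len(graph[j]["Ch"]) + len(graph[j]["Pa"])) / denom
                   for j in nbrs["Ch"]}
        norm = p_tilde + sum(q_tilde.values())
        params[i] = {
            "alpha": U / 20,
            "L": 1e8 + 1e9 * U,
            "C": 1e5 + 1e6 * U,
            "C0": 1e6,                                   # C_i^0
            "p_hat": 0.9 * p_tilde / norm,
            "q_hat": {j: 0.9 * q / norm for j, q in q_tilde.items()},
        }
    return params
```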
Table 2. Selection of the Constant λ for the Hybrid BRGD-SBRD Heuristic.
λ        Smallest ϵ
0.05     0.06
0.01     0.008
0.005    0.004
0.001    0.0009
0.0005   0.0006
0.0001   0.0004
