Article

NIGA: A Novel Method for Investigating the Attacker–Defender Model within Critical Infrastructure Networks

National Key Laboratory of Information Systems Engineering, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2535; https://doi.org/10.3390/math12162535
Submission received: 31 May 2024 / Revised: 29 July 2024 / Accepted: 15 August 2024 / Published: 16 August 2024

Abstract

The field of infrastructure security has garnered significant research attention. By integrating complex network theory with game theory, researchers have proposed many methods for studying the interactions between the attacker and the defender from a macroscopic viewpoint. We constructed a game model of infrastructure networks to analyze attacker–defender confrontations. To address the challenge of finding the Nash equilibrium, we developed a novel algorithm—the node-incremental greedy algorithm (NIGA)—which solves the problem while exploring a smaller strategy space. Our experiments further showed that NIGA has better optimization ability than traditional algorithms. We also analyzed the optimal defense strategies under different initial strategy ratios and different attacker–defender resource levels. Using intelligent computing to solve for the Nash equilibrium offers researchers a new approach to analyzing attacker–defender confrontations.

1. Introduction

Infrastructure systems, such as water distribution, aviation, and transportation, are important components of our society [1,2]. However, excessive reliance on these systems can lead to significant vulnerabilities within our society. For example, in the September 11 attacks, attackers targeted the World Trade Center in New York and the Pentagon in Virginia, resulting in significant loss of life and substantial economic and political effects. Additionally, infrastructure systems are often among the first targets of attackers during wars. For attacker–defender confrontations in infrastructure systems, it is necessary to analyze the strategies of potential adversaries and the interconnectedness of systems on a global scale. By using game theory as our analytical framework and viewing infrastructure as an interconnected network, we can investigate the strategic interactions between intelligent adversaries within this network [3]. Studying the network connections within infrastructure is essential, because damage to key nodes can significantly affect the network’s operation, possibly causing a complete shutdown. Therefore, maintaining network connections by using strategic measures is crucial.
Various methods, including probabilistic risk analysis and data analysis, have been proposed for infrastructure protection [4,5]. However, these methods do not effectively model the behavior of intelligent adversaries [4,5,6]. Intelligent adversaries, in the context of our study, refer to entities or individuals with advanced capabilities to understand, adapt, and exploit vulnerabilities within infrastructure systems. Game theory offers a suitable framework for this challenge, enabling the assessment of optimal strategies and player interactions and providing a theoretical basis for modeling intelligent adversaries [7,8]. Game models can be chosen to match different attack–defense situations for analyzing opponents’ strategies. These include static games, dynamic games, games of complete information, games of incomplete information, cooperative games, and non-cooperative games. Brown et al. [9] developed a sequential game model to reduce operational costs for attack and defense. Pita et al. [10] applied game theory to analyze airport security complexities. Zhang et al. [11] proposed a game model for factory safety management. Feng et al. [12] integrated game theory with risk assessment to evaluate protective measures for chemical facilities facing attack threats and subsequently further extended their research to consider multiple attackers [13]. Complex networks are widely used to describe large-scale real-world systems, and interconnected infrastructures naturally form such networks. By setting strategies and outcomes for both attackers and defenders based on a network’s structure, we can accurately depict an attack–defense game model on a large scale.
Combining game theory with complex network theory has yielded a mathematical model for analyzing attacker–defender interactions. This approach translates various attack and defense scenarios into quantifiable formulas for analysis [14]. Game theory aids in the predictive analysis of adversarial behavior, offering insights into the patterns of participants in attacker–defender confrontations [15] and guides defenders in selecting effective defense strategies, accurately portraying the equilibrium trends of both sides [16]. Complex network theory, in turn, enables a macroscopic examination of the strategies employed by the participants. The application of game theory and complex network theory to analyze attacker–defender behaviors has been proven to be effective. The theoretical framework of this model is depicted in Figure 1.
However, many proposed studies have not integrated game theory with complex networks, making it difficult to analyze problems from a macro perspective. Additionally, they have not considered the complexity of large-scale solutions, resulting in low computational efficiency. The game model and the attacker–defender model are directly correlated. Game theory offers a valuable research methodology for analyzing the attacker–defender model [17]. In the attacker–defender scenario, the wide range of strategies available to both parties complicates the application of traditional methods to find the Nash equilibrium. Therefore, in this study, we employed intelligent computing to address the Nash equilibrium problem.
Greedy algorithms represent a widely applied and effective optimization strategy in the field of artificial intelligence [18,19,20,21,22]. They are designed to make the best immediate choice at each step toward solving a problem. These algorithms are especially suited for problems with optimal substructure, where the aggregation of local optima can result in an overall optimal solution. An example of their use is in graph theory, where they are employed to identify minimum spanning trees, with algorithms such as Prim’s and Kruskal’s being prominent. Greedy algorithms are also extensively applied in resource allocation, scheduling, coding, and network flow optimization.
The key contributions presented in this study are summarized below.
(1)
We introduce a novel algorithm, node-incremental greedy algorithm (NIGA), which was validated with performance trials that indicated it delivers superior outcomes compared with existing methodologies.
(2)
We constructed a comprehensive information static game model. This model is designed to forecast and evaluate the strategic interactions between attackers and defenders within infrastructure networks.
(3)
This study offers a novel approach for the academic community, highlighting the potential of employing NIGA to address Nash equilibrium scenarios.
(4)
We provide an in-depth analysis of attacker–defender game dynamics across a range of parameters.
(5)
Additionally, we examine the attacker–defender game in the context of varying resource availability.
The rest of the paper is organized as follows. Section 2 covers related work. The attacker-defender game model within critical infrastructure networks is established in Section 3. The NIGA is proposed and applied to analyze the Nash equilibrium in Section 4. Section 5 conducts simulation experiments and analysis. Finally, Section 6 concludes the paper.

2. Related Work

2.1. Defense Resource Allocation Strategies for Critical Infrastructure Networks

Though the ultimate goal is to minimize the losses caused by attacks, defense resource allocation strategies for critical infrastructure networks have been widely researched across various fields and can be generally summarized into two major categories.
Methods in the first category focus on modifying the initial network structure to bolster its robustness through the strategic deployment of defense resources. This can include adding nodes such as base stations and edges such as power lines or reconfiguring existing connections. For example, Ke et al. [23] explored the impact of adding a minimal number of edges to uncertain graphs to enhance network reliability, devising a practical and scalable solution with verified effectiveness. Natalino et al. [24] proposed enhancing content accessibility in transportation networks by introducing a few strategic edges, showing through simulations that this approach can notably boost content reach. Yang et al. [25] examined the link between community structure and network robustness, employing an onion model to represent communities and suggesting a three-step strategy to reconfigure edges, and illustrated their proposal with the U.S. airport network. Chan et al. [26] proposed the EdgeRewire framework, which allows for edge reconfiguration under a fixed budget, without altering the node degree, and for maximizing robustness, as demonstrated across AS routing, P2P, and aviation networks. Yuan et al. [27,28] suggested cable swapping in power networks to mitigate post-attack losses, with experiments on the IEEE system indicating a marked improvement in power system recovery.
The second category of methods retains the original structure of the network and includes methods for the rational allocation and scheduling of limited security resources to protect the network. For instance, by leveraging network centrality metrics—such as node degree and betweenness centrality—key nodes can be identified for monitoring, such as closely watching substations, or for immunization to safeguard the network. Additionally, vulnerable edges, such as bridges in transportation networks, and critical infrastructure nodes, such as subway stations, can be reinforced. This ensures the network’s continued functionality even after an attack. In many security fields, how to allocate limited resources is an important issue, such as designing reasonable plans for airport security patrols, air force police assignments, port entry cargo inspections, subway station patrols, and random inspections at customs. Chen and Saxena et al. [29,30] drew inspiration from immunology and proposed methods for calculating “Shield value” and Shapley value for nodes to assess their importance, followed by designing effective immunization strategies based on this. Liu et al. [31] proposed a targeted immunization strategy based on percolation transfer to control the spread of viruses in networks, and the experimental results showed that this method is superior to the degree centrality strategy, the betweenness centrality strategy, and the adaptive degree centrality strategy. Liu et al. [32] proposed a directed immunization strategy by obtaining partial network node information and continuously selecting the most central nodes for immunization. Their findings have contributed to the development of better methods for immunizing large-scale networks.
On the one hand, the implementation of methods of the first category requires changing the initial structure of the network, which is not applicable in many real-world scenarios. For example, after an attack on a subway track network, it is not possible to quickly build a new track, as doing so requires large amounts of money and resources. On the other hand, the implementation of methods of the second category is mostly based on network centrality indicators, and these methods are pure strategies that do not consider the combinatorial optimization problem of the strategy space.

2.2. Attack–Defense Game Model

Since its initial application in 2003, game theory has been extensively researched in relation to military strikes and homeland defense [33,34,35]. Brown et al. explored dynamic game models, analyzing optimal attack strategies under various defense strategies, such as no defense and protecting key nodes or three-quarters of the nodes. Their analysis of oil reserves and power transportation networks showed that strategic choices under game-theoretic models can differ from intuition, emphasizing the importance of game theory in infrastructure protection. Li et al. [36,37,38] established a two-player static game model of complex networks, using the size of the largest connected component as a measure, and studied the relationships among equilibrium strategies, node degree values, and the impact of the network structure, cost constraints, and cost sensitivity on equilibrium outcomes, which were validated by using an aviation network. Fu et al. [39] proposed a dynamic game model where first the defender protects the network and then the attacker takes measures to destroy it, studying the impact of the defender’s willingness through evolutionary game theory. Zhai et al. [40] developed an attack–defense game model considering different attackers’ utility values, where the defender’s utility function towards the attacker is uncertain. Building on the complete information static model, Feng et al. considered the protection of chemical plants in the presence of multiple types of attackers. Pita et al. studied airport security protection issues with a Bayesian Stackelberg game model. Gu et al. [41] similarly established a Bayesian Stackelberg game model and analyzed the impact of the type of distribution on the equilibrium solution. Jiang et al. [42] also established a Bayesian Stackelberg game model to study water supply network protection issues, including cases of private information. Zeng et al. [43,44] focused on the defense resource allocation problem of critical infrastructure networks under asymmetric information, proposing a pseudo-network construction method and introducing Stackelberg proactive deception games and Bayesian Stackelberg games to establish models. Experiments on actual infrastructure networks showed that this method can effectively protect this type of networks. Zhang et al. [45] studied optimal resource allocation between multiple real/decoy targets with an asynchronous move game, assuming that defenders act first and attackers observe and then act. Thompson et al. [46,47] analyzed the potential impact of intelligent attacks on the U.S. air transportation network and protective measures by establishing and solving a three-level defender–attacker–defender optimization model. Yuan et al. [28] proposed extending the traditional defender–attacker–defender model to post-incident corrective line switching operations as an effective post-incident mitigation method for power infrastructure networks. Hendrickson et al. [48] established and solved an optimized defender–attacker–defender game model for determining the best location for installing relay nodes, ensuring optimal deep-sea network functionality even in the worst-case scenario.

2.3. Discussion and Research Gaps

The extensive body of related work has significantly contributed to the understanding of defense strategies in critical infrastructure networks. However, a critical examination reveals several research gaps that our study aims to address.
Firstly, while many studies have focused on modifying network structures or utilizing network centrality metrics for defense, there is a lack of comprehensive strategies that integrate both approaches. The need for a holistic approach that considers both structural modifications and resource allocation based on network metrics is evident.
Secondly, the scalability of defense strategies to large-scale networks is a concern. Many proposed methods have been tested on small-scale networks, and their effectiveness in larger, more complex networks remains unproven.
Lastly, the integration of advanced computational techniques, such as machine learning and artificial intelligence, into defense strategies is an area that requires further exploration. The potential of these techniques to enhance the predictive capabilities and responsiveness of defense systems is largely untapped.
To provide a clearer overview of these gaps and the contributions of our study, we present a table summarizing the main contributions of previous related works (Table 1).
This table and the discussion highlight the need for a more integrated and scalable approach to defense resource allocation, which our study seeks to provide.

2.4. Greedy Algorithms

Greedy algorithms [49,50] represent a common method for efficiently finding optimal or near-optimal solutions. They simplify the problem-solving process by making a sequence of choices, each selecting the best immediate option, aiming for an overall optimal outcome. While greedy algorithms do not always guarantee a global optimum for every problem, they provide swift and satisfactory solutions that reflect everyday decision-making processes and save considerable time compared with exhaustive methods.
To use a greedy algorithm, one must first find a set of candidate objects that make up the solution, which can optimize the objective function. Initially, the set of candidate objects selected by the algorithm is empty. Then, based on the selection function, the most promising object that could potentially form part of the solution is chosen from the remaining candidates. If adding this object to the set is not feasible, the object is discarded and no longer considered; otherwise, it is added to the set. The set is expanded each time, and it is checked whether it constitutes a solution [51,52].
The problem-solving steps are as follows:
(1)
Establish a mathematical model to describe the problem.
(2)
Divide the problem to be solved into several subproblems.
(3)
Solve each subproblem to obtain the local optimal solution for the subproblem.
(4)
Combine the local optimal solutions of the subproblems into a solution for the original problem.
The pseudocode can be seen in Algorithm 1.
Algorithm 1: Greedy Algorithm
[The pseudocode of Algorithm 1 is provided as an image in the original publication.]
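Since Algorithm 1 appears only as an image in the published version, the following is a minimal Python sketch (not taken from the paper) of the generic greedy framework described above; the candidate set, feasibility test, and selection function are placeholders to be specialized for a particular problem.

```python
def greedy(candidates, is_feasible, score):
    """Generic greedy framework: repeatedly commit to the best feasible candidate.

    candidates  -- iterable of candidate objects
    is_feasible -- is_feasible(solution, c): can c be added to the partial solution?
    score       -- score(solution, c): marginal value of adding c (higher is better)
    """
    solution = []
    remaining = list(candidates)
    while remaining:
        # Pick the locally best candidate according to the selection function.
        best = max(remaining, key=lambda c: score(solution, c))
        remaining.remove(best)
        # Discard infeasible candidates; otherwise commit to the greedy choice.
        if is_feasible(solution, best):
            solution.append(best)
    return solution


if __name__ == "__main__":
    # Toy example: pick at most 3 numbers maximizing their sum.
    items = [4, 9, 1, 7, 3]
    print(greedy(items,
                 is_feasible=lambda sol, c: len(sol) < 3,
                 score=lambda sol, c: c))   # -> [9, 7, 4]
```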

3. The Attacker–Defender Game Model

3.1. Symbol Description

We developed an attacker–defender game model for infrastructure networks, modeled as an undirected simple graph $G = (V, E)$. Infrastructure networks are typically characterized by bidirectional connections, allowing for the flow of resources, information, or services in both directions. An undirected graph is a more accurate representation of these bidirectional relationships, as it does not imply a specific direction of interaction between nodes, which is more reflective of the actual dynamics within such networks.
(1)
$V = \{v_1, v_2, \ldots, v_N\}$ is the node set, with $N = |V|$ indicating the number of nodes.
(2)
$E \subseteq V \times V$ denotes the edge set, representing connections between nodes.
The graph’s adjacency matrix, $A$, is an $N \times N$ matrix. If nodes $v_i$ and $v_j$ are connected, $a_{ij} = a_{ji} = 1$; otherwise, $a_{ij} = a_{ji} = 0$.
This static game model assumes simultaneous attacker and defender actions, represented by the tuple $ADG = (N_A, N_D, V_A, R_A, S_A, V_D, R_D, S_D, P_A, P_D, U_A, U_D)$.
(1)
$N_A$ denotes the attacker, who predicts the defender’s strategy to formulate their own.
(2)
$N_D$ denotes the defender, who anticipates the attacker’s strategy to develop countermeasures.
(3)
$V_A$ is the set of attack nodes; if targeting $v_1$ and $v_2$, $V_A = \{v_1, v_2\}$.
(4)
$R_A$ is the attacker’s resource limit, determining the maximum number of nodes that can be attacked ($R_A = |V_A|$).
(5)
$S_A = \{S_A^1, S_A^2, \ldots, S_A^i, \ldots, S_A^m\}$ lists the attack strategies, with $S_A^i = [x_1, x_2, \ldots, x_N]$ indicating the $i$-th strategy, where $x_i = 1$ if node $v_i$ is attacked and $x_i = 0$ otherwise.
(6)
$V_D$ is the set of defense nodes; if targeting $v_3$ and $v_4$, $V_D = \{v_3, v_4\}$.
(7)
$R_D$ is the defender’s resource limit, specifying the maximum number of nodes that can be defended ($R_D = |V_D|$).
(8)
$S_D = \{S_D^1, S_D^2, \ldots, S_D^j, \ldots, S_D^n\}$ lists the defense strategies, with $S_D^j = [y_1, y_2, \ldots, y_N]$ indicating the $j$-th strategy, where $y_i = 1$ if node $v_i$ is defended and $y_i = 0$ otherwise.
(9)
$P_A = \{P_A^1, P_A^2, \ldots, P_A^i, \ldots, P_A^m\}$ represents the probabilities of the attacker’s strategy adoption, with $P_A^i$ being the probability of strategy $S_A^i$.
(10)
$P_D = \{P_D^1, P_D^2, \ldots, P_D^j, \ldots, P_D^n\}$ represents the probabilities of the defender’s strategy adoption, with $P_D^j$ being the probability of strategy $S_D^j$.
(11)
$U_A = U_A(S_A, S_D)$ is the attacker’s profit function, dependent on both $S_A$ and $S_D$, with different strategies yielding various profits.
(12)
$U_D = U_D(S_A, S_D)$ is the defender’s profit function, also contingent on $S_A$ and $S_D$, with strategies influencing profit outcomes.
The summary of symbols in this model is shown in Table 2.
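To illustrate the notation, the following sketch shows one way to encode the graph, its adjacency matrix, and pure strategies as 0/1 vectors; it assumes the networkx and numpy libraries and is not code from the paper.

```python
import numpy as np
import networkx as nx

# A small undirected infrastructure graph G = (V, E).
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])
N = G.number_of_nodes()

# Adjacency matrix A: a_ij = a_ji = 1 if v_i and v_j are connected, else 0.
A = nx.to_numpy_array(G, nodelist=range(N), dtype=int)

# A pure attack strategy S_A^i and a pure defense strategy S_D^j as 0/1 vectors.
R_A, R_D = 2, 2                                       # resource limits
S_A_i = np.zeros(N, dtype=int); S_A_i[[1, 3]] = 1     # attack nodes v_1 and v_3
S_D_j = np.zeros(N, dtype=int); S_D_j[[1, 2]] = 1     # defend nodes v_1 and v_2

assert S_A_i.sum() <= R_A and S_D_j.sum() <= R_D      # respect resource limits
print(A)
```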

3.2. Basic Assumptions

(1)
The game features two rational players: an attacker and a defender. Each player has comprehensive knowledge of the network, including all potential strategies and the outcomes for each strategy combination.
(2)
Attacks and defenses are focused on network nodes. A node is successfully attacked when the attacker targets it and it is not defended. The successful attack results in the removal of the node and its connected edges from the network.
(3)
When a node is successfully attacked, its associated edges will also be removed, but it will not affect the nodes directly connected to it.
(4)
Players independently formulate their strategies at the same time, without prior knowledge of the opponent’s choices. The game is a one-time interaction without subsequent rounds.

3.3. Strategies

The defender aims to protect valuable nodes within a network, while the attacker seeks to disrupt the network’s functionality by targeting nodes for attack. Both players must consider their strategies within the constraints of their available resources. The strategies of defenders and attackers can be mathematically formalized as shown below.

3.3.1. Attacker’s Strategy

The objective of the attacker is to impair network performance by reducing its connectivity. To achieve this, the attacker selects a subset of nodes, $V_A \subseteq V$, to target. When these nodes are attacked, they lose their functionality and are removed from the network, along with their connected edges.
A pure strategy for the attacker is denoted as $S_A^i = (x_A^v)$, where $x_A^v = 1$ if node $v$ is targeted, and $x_A^v = 0$ if node $v$ is not targeted. The attacker has limited resources, denoted as $R_A$, which constrain the number of nodes that can be targeted, ensuring $|V_A| \leq R_A$.
The set of all possible pure strategies for the attacker is represented as $S_A$. The attacker’s mixed strategy, $P_A = (p_A^i)$, is a probability distribution over the pure strategies in $S_A$.

3.3.2. Defender’s Strategy

The defender’s goal is to allocate resources to protect a subset of nodes, $V_D \subseteq V$, ensuring these nodes remain secure from attacks.
A pure strategy for the defender is represented as $S_D^j = (x_D^v)$, where $x_D^v = 1$ if node $v$ is protected, and $x_D^v = 0$ if node $v$ is not protected. The defender has limited resources, denoted as $R_D$, which constrain the number of nodes that can be protected, ensuring $\sum_{v \in V} x_D^v \leq R_D$.
The subset of nodes protected by the defender is $V_D = \{ v \in V \mid x_D^v = 1 \}$. The set of all possible pure strategies for the defender is represented as $S_D$. The defender’s mixed strategy, $P_D = (p_D^j)$, is a probability distribution over the pure strategies in $S_D$.
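For small networks, the pure-strategy sets $S_A$ and $S_D$ can be enumerated directly. A minimal sketch, under the assumption that each pure strategy commits exactly $R$ nodes (consistent with $R_A = |V_A|$ and $R_D = |V_D|$ above):

```python
from itertools import combinations

def pure_strategies(nodes, R):
    """All pure strategies that commit exactly R of the given nodes,
    encoded as 0/1 vectors indexed by node."""
    nodes = list(nodes)
    for chosen in combinations(nodes, R):
        yield [1 if v in chosen else 0 for v in nodes]

# Example: attack strategies on a 5-node network with R_A = 2 -> C(5, 2) = 10 vectors.
S_A = list(pure_strategies(range(5), R=2))
print(len(S_A), S_A[0])   # 10 [1, 1, 0, 0, 0]
```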

3.4. Payoffs

As mentioned in Section 3.2, a node $v_i$ is successfully removed only if it is targeted by the attacker and not defended. We denote the sets of removed nodes and edges by $\hat{V} \subseteq V$ and $\hat{E} \subseteq E$, respectively. Consequently, the network after the game is represented by $\hat{G} = (V \setminus \hat{V}, E \setminus \hat{E})$, where $\setminus$ denotes the set difference.
It is clear that $\hat{V}$ is derived from the set of targeted nodes $V_A$ minus the nodes that are both targeted and defended, $V_A \cap V_D$. Thus, $\hat{V} = V_A \setminus (V_A \cap V_D)$. This relationship can be calculated based on the following analysis:
$$\hat{V} = \{ v_i \in V \mid v_i \in V_A \text{ and } v_i \notin V_D \} = \{ v_i \in V_A \mid v_i \notin V_D \} = V_A \setminus V_D = V_A \setminus (V_A \cap V_D). \qquad (1)$$
We define the measure of network performance as $\Gamma$, which can be assessed with various metrics, including the size of the largest connected component [53], network efficiency [54], and others. Additionally, the attacker’s payoff is defined as:
$$U_A^{ij}(S_A^i, S_D^j) = \frac{\Gamma(G) - \Gamma(\hat{G})}{\Gamma(G)} \in [0, 1], \qquad (2)$$
and the defender’s payoff is defined as:
$$U_D^{ij}(S_A^i, S_D^j) = \frac{\Gamma(\hat{G}) - \Gamma(G)}{\Gamma(G)} \in [-1, 0], \qquad (3)$$
where $\Gamma$ is defined as the metric for assessing network performance.
In this study, $\Gamma(G)$ represents the size of the largest connected component in the original network $G = (V, E)$, while $\Gamma(\hat{G})$ denotes the size of the largest connected component in the attacked network $\hat{G} = (V \setminus \hat{V}, E \setminus \hat{E})$. The measure of network performance, $\Gamma$, is chosen to reflect the network’s resilience. In this study, $\Gamma$ is primarily the size of the largest connected component in the network, a metric justified by its direct correlation with network functionality and robustness [53]. However, we acknowledge that other metrics such as network efficiency [54] could also be considered, each offering a different perspective on network performance. The choice of $\Gamma$ has significant implications for the game’s dynamics, as it directly influences the payoff functions for both players.
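The payoffs in Equations (2) and (3) can be evaluated by simulating the node removals; a minimal sketch, assuming networkx and the largest-connected-component metric for $\Gamma$:

```python
import networkx as nx

def gamma(G):
    """Network performance: size of the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(G))

def attacker_payoff(G, V_A, V_D):
    """U_A = (Gamma(G) - Gamma(G_hat)) / Gamma(G); the defender's payoff is its negative."""
    removed = set(V_A) - set(V_D)          # V_hat = V_A \ (V_A ∩ V_D)
    G_hat = G.copy()
    G_hat.remove_nodes_from(removed)       # removing a node also removes its edges
    return (gamma(G) - gamma(G_hat)) / gamma(G)

# Example on a 5-node path graph: attacking the undefended middle node splits the network.
G = nx.path_graph(5)
print(attacker_payoff(G, V_A={2}, V_D=set()))   # (5 - 2) / 5 = 0.6
```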
The game is a zero-sum game, as the sum of the attacker’s payoff and the defender’s payoff equals zero. When the attacker targets nodes within the network, the performance of the network is diminished. The attacker’s profit is quantified by the reduction in the size of the largest connected component caused by the attack, and the defender’s loss is measured by this same reduction in network performance. Because the game is zero sum, one party’s gain is exactly balanced by the other party’s loss.
The attacker’s payoff function, U A i j , and the defender’s payoff function, U D i j , are defined within the context of a zero-sum game, where the attacker’s gain is equivalent to the defender’s loss. This zero-sum property simplifies the game by ensuring that the total payoff is constant, focusing the strategic decision making on how to distribute this constant payoff between the two players.
The zero-sum nature of the game has profound implications for the strategy space and solution methods. It suggests that any gain in payoff by one player must come at the expense of the other, thereby narrowing the set of potential equilibrium strategies. This property also influences the solution methods, as the search for Nash equilibrium strategies is inherently constrained by the opposing objectives of the two players.
In the context of a two-player zero-sum game, the expected payoffs for both the defender and the attacker can be defined based on their respective strategies. Let us define the expected payoffs for the defender ($U_D$) and the attacker ($U_A$) as follows.
When the defender uses mixed strategy $P_D$ and the attacker uses pure strategy $S_A^i$, the defender’s expected payoff $U_D^1(S_A^i, P_D)$ is given by:
$$U_D^1(S_A^i, P_D) = \sum_{j=1}^{n} (1 - z_{D,A}) P_D^j U_D^{ij}, \qquad (4)$$
where $z_{D,A}$ is an indicator variable that marks whether the attacker’s strategy ($S_A$) successfully deletes the target set selected by the defender’s strategy ($S_D$). If $V_D \setminus V_A = V_D$, indicating that the attacker fails to remove the defended nodes, then $z_{D,A} = 0$; otherwise, if the attacker successfully attacks any of the defended nodes, $z_{D,A} = 1$.
When the defender uses pure strategy $S_D^j$ and the attacker uses mixed strategy $P_A$, the defender’s expected payoff $U_D^2(P_A, S_D^j)$ is given by:
$$U_D^2(P_A, S_D^j) = \sum_{i=1}^{m} (1 - z_{D,A}) P_A^i U_D^{ij}, \qquad (5)$$
where $P_A^i$ is the probability that the attacker uses pure strategy $S_A^i$.
When both players use mixed strategies $P_A$ and $P_D$, the defender’s expected payoff $U_D^3(P_A, P_D)$ is given by:
$$U_D^3(P_A, P_D) = \sum_{i=1}^{m} P_A^i U_D^1(S_A^i, P_D) = \sum_{j=1}^{n} P_D^j U_D^2(P_A, S_D^j), \qquad (6)$$
which represents the defender’s expected payoff when considering the probabilities of each player’s strategies and the interactions among them.
Given the zero-sum nature of the game, the attacker’s expected payoff, $U_A$, is the negative of the defender’s expected payoff, $U_D$:
$$U_A = -U_D. \qquad (7)$$
The defender aims to maximize the expected payoff by choosing the best mixed strategy P D , while the attacker aims to minimize it by choosing the best mixed strategy P A . The game’s solution would typically involve finding the Nash equilibrium, where neither player can improve their expected payoff by changing their strategy given the other player’s strategy.
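For reference, the mixed-strategy expected payoff is bilinear in $P_A$ and $P_D$ and can be computed in matrix form (cf. the equilibrium payoff expressions in Section 3.5); a minimal sketch, assuming the pure-strategy payoff matrix $U_D$ has already been built, so that the defended-node bookkeeping above is folded into its entries:

```python
import numpy as np

def expected_defender_payoff(P_A, P_D, U_D):
    """U_D^3(P_A, P_D) = P_A^T U_D P_D for mixed strategies P_A (length m) and P_D
    (length n), where U_D is the m x n matrix of pure-strategy defender payoffs."""
    P_A, P_D, U_D = map(np.asarray, (P_A, P_D, U_D))
    return float(P_A @ U_D @ P_D)

# Toy 2x2 example (zero-sum: the attacker's expected payoff is the negative of this).
U_D = [[-0.6, 0.0],
       [ 0.0, -0.6]]
print(expected_defender_payoff([0.5, 0.5], [0.5, 0.5], U_D))   # -0.3
```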

3.5. Nash Equilibrium

The attacker’s goal is to maximize their payoff within the strategic constraints, while the defender seeks to minimize damage. To address this, we formulated a linear programming model with dual objectives. Let z and ω represent the expected payoffs of the attacker and defender, respectively. The model is defined as follows:
$$\begin{aligned}
\max \ & z \\
\text{s.t.} \ & \sum_{S_A^i \in S_A} U_A(S_A^i, S_D^j) \cdot P_A^i \geq z, \quad \forall S_D^j \in S_D, \\
& P_A^i \leq \alpha_A^i, \quad \forall S_A^i \in S_A, \\
& \sum_{S_A^i \in S_A} P_A^i = 1, \\
& P_A^i \geq 0, \quad \forall S_A^i \in S_A,
\end{aligned} \qquad (8)$$
$$\begin{aligned}
\max \ & \omega \\
\text{s.t.} \ & \sum_{S_D^j \in S_D} U_D(S_A^i, S_D^j) \cdot P_D^j \geq \omega, \quad \forall S_A^i \in S_A, \\
& P_D^j \leq \alpha_D^j, \quad \forall S_D^j \in S_D, \\
& \sum_{S_D^j \in S_D} P_D^j = 1, \\
& P_D^j \geq 0, \quad \forall S_D^j \in S_D,
\end{aligned} \qquad (9)$$
where the payoff of the attacker under strategy profile $(S_A^i, S_D^j)$ is denoted by $U_A(S_A^i, S_D^j)$, and the payoff of the defender is $U_D(S_A^i, S_D^j)$. Equation (8) represents the attacker’s optimization model, while Equation (9) represents the defender’s. Solving these models yields the Nash equilibrium $(P_A^*, P_D^*)$. Subsequently, the equilibrium payoff values for the attacker and the defender are defined as follows:
$$z(P_A^*, P_D^*) = P_A^{\mathrm{T}} U_A P_D = \left[ P_A^1, P_A^2, \ldots, P_A^{|S_A|} \right] \begin{bmatrix} u_A^{11} & u_A^{12} & \cdots & u_A^{1|S_D|} \\ u_A^{21} & u_A^{22} & \cdots & u_A^{2|S_D|} \\ \vdots & \vdots & \ddots & \vdots \\ u_A^{|S_A|1} & u_A^{|S_A|2} & \cdots & u_A^{|S_A||S_D|} \end{bmatrix} \begin{bmatrix} P_D^1 \\ P_D^2 \\ \vdots \\ P_D^{|S_D|} \end{bmatrix}, \qquad (10)$$
and
$$\omega(P_A^*, P_D^*) = P_A^{\mathrm{T}} U_D P_D = \left[ P_A^1, P_A^2, \ldots, P_A^{|S_A|} \right] \begin{bmatrix} u_D^{11} & u_D^{12} & \cdots & u_D^{1|S_D|} \\ u_D^{21} & u_D^{22} & \cdots & u_D^{2|S_D|} \\ \vdots & \vdots & \ddots & \vdots \\ u_D^{|S_A|1} & u_D^{|S_A|2} & \cdots & u_D^{|S_A||S_D|} \end{bmatrix} \begin{bmatrix} P_D^1 \\ P_D^2 \\ \vdots \\ P_D^{|S_D|} \end{bmatrix}. \qquad (11)$$
Equations (2) and (3) establish that this is a zero-sum game, where the attacker’s payoff $z$ and the defender’s payoff $\omega$ are equal in magnitude and opposite in sign; thus, $z = -\omega$.
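Restricted to a finite set of pure strategies, the defender’s model in Equation (9) is a standard zero-sum linear program and can be solved with an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog and, for simplicity, omits the $\alpha$ bounds on the individual probabilities; it is an illustration, not the authors’ implementation.

```python
import numpy as np
from scipy.optimize import linprog

def defender_equilibrium(U_D):
    """Solve max omega s.t. sum_j U_D[i, j] * P_D[j] >= omega for every attacker
    pure strategy i, sum_j P_D[j] = 1, P_D >= 0 (a standard zero-sum game LP)."""
    U_D = np.asarray(U_D, dtype=float)    # shape: (m attacker strategies, n defender strategies)
    m, n = U_D.shape
    # Decision variables: x = [P_D (n values), omega]; linprog minimizes, so use -omega.
    c = np.concatenate([np.zeros(n), [-1.0]])
    A_ub = np.hstack([-U_D, np.ones((m, 1))])        # omega - sum_j U_D[i,j] P_D[j] <= 0
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)]       # probabilities in [0,1], omega free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]                      # equilibrium mixed strategy P_D*, value omega*

# Toy payoff matrix: the defender mixes evenly and the game value is omega* = -0.5.
P_D_star, omega_star = defender_equilibrium([[-1.0, 0.0], [0.0, -1.0]])
print(np.round(P_D_star, 3), round(omega_star, 3))
```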

4. Node-Incremental Greedy Algorithm

4.1. Payoff Optimization Techniques

In the algorithm, the optimization of the attacker’s payoff is primarily achieved through the following steps:
(1)
Initial strategy selection: Randomly initialize the set of attack strategies S A .
(2)
Iterative update: Through iterative optimization, the attacker selects attack strategies that maximize their payoff given the current defense strategy S D . This step utilizes a greedy strategy to select sets of nodes whose attack would result in maximal losses for the defender.
(3)
Nash equilibrium solution: In each iteration, solve for the Nash equilibrium to determine the optimal mixed attack strategy, ensuring that the attacker cannot unilaterally improve their outcome under the current defense strategy.
The optimization process for the defender’s payoff proceeds as follows:
(1)
Initial strategy selection: Randomly initialize the set of defense strategies S D .
(2)
Iterative update: Through iterative optimization, the defender selects defense strategies that maximize their payoff given the current attack strategy S A . In each iteration, the defender employs a greedy strategy to select sets of nodes that minimize the attacker’s potential payoff.
(3)
Nash equilibrium solution: Similar to the attacker, the defender solves for the Nash equilibrium to determine the optimal mixed defense strategy, ensuring that the defender cannot unilaterally improve their outcome under the current attack strategy.

4.2. The Algorithm

As the network size grows, the strategy space expands combinatorially, and enumerating all strategies to compute the equilibrium becomes nearly impractical within such a large-scale strategy space.
In this section, we introduce the novel algorithm designed to address the model—the node-incremental greedy algorithm (NIGA). The main idea of the algorithm is to gradually approach the global optimum by making locally optimal choices at each iteration. Starting from an initial set of attack and defense strategies, the algorithm begins by randomly selecting strategies to solve the game problem. It then enters the main loop, where it continuously updates the strategies of both the attacker and the defender by iterating through the nodes one by one. In each iteration, the algorithm employs a greedy strategy to select the set of nodes that maximizes each player’s own payoff. NIGA consists of a defender module and an attacker module, and its overview is provided in Algorithm 2.
Algorithm 2: Pseudocode for NIGA
[The pseudocode of Algorithm 2 is provided as an image in the original publication.]
The pseudocode describes an iterative strategy improvement process aimed at reaching a Nash equilibrium, where neither the attacker’s nor the defender’s strategies can be improved upon unilaterally. Both players start with a minimal strategy space that allows for the rapid determination of a game equilibrium solution. After several repetitions, based on the outcomes of the game, a more suitable initial strategy space, denoted by $S_D, S_A$, is selected. Then, in the iterative phase, the attacker searches for an improved strategy $S_A$ by invoking the attacker module, while the defender searches for an improved strategy $S_D$ by invoking the defender module. This process is continuously repeated until neither party can find a better strategy, at which point the iteration ends. The solution obtained at this point is the final solution to the original attacker–defender game problem.
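Because Algorithms 2–4 are reproduced only as images, the following is a schematic Python sketch of the iterative structure described above, with the attacker and defender modules represented as placeholder callables; it is an illustrative reconstruction for the reader, not the published pseudocode.

```python
def niga(solve_restricted_game, attacker_oracle, defender_oracle,
         initial_S_A, initial_S_D, max_iters=100):
    """Schematic NIGA main loop (reconstructed from the textual description).

    solve_restricted_game(S_A, S_D) -> (P_A, P_D, value): equilibrium of the game
        restricted to the current pure-strategy sets.
    attacker_oracle(S_A, P_D) -> a new attack strategy improving the attacker's
        payoff against P_D, or None if no improvement exists.
    defender_oracle(S_D, P_A) -> a new defense strategy improving the defender's
        payoff against P_A, or None if no improvement exists.
    """
    S_A, S_D = list(initial_S_A), list(initial_S_D)
    P_A, P_D, value = solve_restricted_game(S_A, S_D)
    for _ in range(max_iters):
        new_a = attacker_oracle(S_A, P_D)   # attacker module: improved pure strategy
        new_d = defender_oracle(S_D, P_A)   # defender module: improved pure strategy
        if new_a is None and new_d is None:
            break                           # neither side can improve: stop iterating
        if new_a is not None:
            S_A.append(new_a)
        if new_d is not None:
            S_D.append(new_d)
        P_A, P_D, value = solve_restricted_game(S_A, S_D)
    return P_A, P_D, value, S_A, S_D
```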
The core aim of the defender module in NIGA is to generate a superior pure strategy for the defender, with detailed steps as shown in Algorithm 3. The NIGA-Defender algorithm iterates by traversing nodes, starting with node 1 in the first iteration, then with node 2 in the second iteration, and so on, incrementing sequentially until the number of iterations reaches $N - R_D + 1$. Compared with constructing a payoff matrix by enumerating all possible strategies to calculate a global equilibrium, this algorithm significantly improves computational efficiency.
Algorithm 3: Pseudocode for NIGA-Defender
[The pseudocode of Algorithm 3 is provided as an image in the original publication.]
The objective of the greedy algorithm is to identify a pure strategy $D$ that maximizes the defender’s payoff, with specific steps as outlined in Algorithm 4. The first step initializes $D = \emptyset$. In the second step, while adhering to the resource constraint, the algorithm iterates through the nodes to find the optimal node, $v$, that can enhance the payoff. If node $v$ satisfies the condition $U_D^2(D \cup \{v\}, y) > U_D^2(D, y)$, then it is added to strategy $D$. Similarly, if $U_D^2(D \cup \{v\}, y) > U_D^3(x, y)$, then $v$ is added to strategy $D$. Finally, the process stops when the number of nodes in strategy $D$ reaches the defender’s resource limit, that is, $|D| = R_D$.
Algorithm 4: Pseudocode for Greedy Algorithm in NIGA
[The pseudocode of Algorithm 4 is provided as an image in the original publication.]
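The node-by-node greedy expansion described for Algorithm 4 can be sketched as follows; U_D2 is passed in as a callable that evaluates the defender’s expected payoff of a candidate pure strategy against the attacker’s current mixed strategy, and the comparison against $U_D^3(x, y)$ is folded into a baseline argument. This is an illustrative reconstruction, not the published code.

```python
def greedy_defense(nodes, R_D, U_D2, baseline=float("-inf")):
    """Greedily grow a pure defense strategy D, one node at a time.

    nodes    -- candidate nodes to defend
    R_D      -- defender's resource limit (maximum size of D)
    U_D2(D)  -- expected defender payoff of pure strategy D against the
                attacker's current mixed strategy
    baseline -- value the new strategy must also beat (e.g., the current
                mixed-vs-mixed payoff U_D^3)
    """
    D = set()
    while len(D) < R_D:
        current = U_D2(D)
        best_v, best_val = None, current          # locally best node to add
        for v in nodes:
            if v in D:
                continue
            val = U_D2(D | {v})
            if val > best_val and val > baseline:
                best_v, best_val = v, val
        if best_v is None:
            break                                 # no node improves the payoff further
        D.add(best_v)
    return D

# Toy usage (hypothetical payoff): node values, payoff = -(total value left unprotected).
values = {0: 5, 1: 3, 2: 1}
pick = greedy_defense(values, R_D=2,
                      U_D2=lambda D: -sum(v for n, v in values.items() if n not in D))
print(pick)   # {0, 1}
```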

5. Experiments and Mathematical Analysis

This section focuses on evaluating the effectiveness of NIGA within the context of attacker–defender interactions. Initially, NIGA’s efficacy was compared with that of other related algorithms. Subsequently, to demonstrate NIGA’s optimization ability, its performance was tested across a variety of network sizes. Next, NIGA was utilized to determine the Nash equilibrium at varying initial strategy ratios, denoted by $r$. To conclude the analysis, the algorithm was used to address the Nash equilibrium problem with different attack ($R_A$) and defense ($R_D$) resources. The experiments were based on power-law networks, which are characterized by a heavy-tailed degree distribution. The probability that a node had $k$ connections followed a power-law distribution, typically represented as $P(k) \sim k^{-\alpha}$. This property implies that a few nodes, known as “hubs”, possess a disproportionately large number of connections compared with the majority of nodes, which have only a few links. The structure of power-law networks has significant implications for various phenomena observed in real-world networks, such as robustness to random failures and the potential for the rapid spread of information or diseases.
Our analysis was conducted on a system equipped with a 12th Gen Intel Core i7-12700H processor, 32.0 GB of RAM, and a 64-bit operating system running on an x64-based processor. The data originated from randomly generated scale-free networks.
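The paper does not state which generator produced the scale-free test networks; a minimal sketch, assuming the Barabási–Albert generator from networkx, produces comparable synthetic instances of the sizes used in the experiments:

```python
import networkx as nx

# Randomly generated scale-free test networks of the sizes used in the experiments.
for n in (30, 100, 200, 300):
    G = nx.barabasi_albert_graph(n, m=2, seed=42)   # m: edges per new node (assumed value)
    degrees = [d for _, d in G.degree()]
    print(f"N={n}: {G.number_of_edges()} edges, max degree {max(degrees)}")
```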

5.1. Performance Test of NIGA

In this part, we report on two series of performance evaluation experiments. The first set was based on different initial strategies, with 30 runs recorded for each algorithm’s results. The second set was based on varying network sizes, specifically networks with 100, 200, and 300 nodes. We examined the effectiveness of each algorithm under these conditions. For NIGA, performance was gauged according to the attainment of the lowest possible value, signifying the smallest possible loss for the defender. Optimally, an algorithm’s performance is indicated by the smallest loss it can identify.
For our analysis, we considered the worst case, average case, and best case scenarios.
We have defined the worst case scenario as the situation where the attacker is most successful in causing damage to the network, resulting in the maximum possible loss for the defender. This scenario tests the resilience and robustness of the model under the most adverse conditions.
The average case represents a typical interaction between the attacker and defender, where the outcomes are neither the best nor the worst but fall within a range of expected results. This scenario provides insight into the model’s performance under normal operating conditions.
The best case scenario is characterized by the optimal performance of the model, where the defender’s strategies are most effective in minimizing the impact of the attacker’s actions, resulting in the lowest possible loss.
The performance of NIGA was compared with that of other traditional algorithms within the attacker–defender game model, including random strategy (RS), degree strategy (DS), betweenness strategy (BS), closeness strategy (CS), and eigenvalue strategy (ES). In RS, the defender randomly updates strategies at each iteration, and in DS, they update strategies based on the degree of nodes, favoring strategies with the highest degree. In BS, the defender favors strategies with the highest betweenness centrality of nodes; in CS, those with the highest closeness centrality of nodes; and in ES, those with the highest eigenvalue of nodes.
Firstly, the performance of NIGA was assessed relative to the conventional methods of random strategy (RS), degree strategy (DS), betweenness strategy (BS), closeness strategy (CS), and eigenvalue strategy (ES). This comparison was based on their respective optimization capabilities when utilizing various initial strategies. The experimental data can be found in Table 3, revealing that NIGA was capable of identifying superior solution paths in comparison with the other algorithms across a series of 30 trials with randomized initial strategies. To further elucidate the comparative optimization effectiveness among RS, DS, BS, CS, ES, and NIGA, Figure 2 presents nine diagrams that depict their optimization processes, highlighting their notable performance aspects.
Table 3 offers a detailed performance comparison among several strategies within an attacker–defender game model. The NIGA stands out as it consistently demonstrates superior performance across all trials, indicating its potential as a robust method for optimizing defense strategies against potential attacks. The uniformity in the performance of strategies such as the RS, DS, BS, CS, and ES suggests that these approaches might be similarly influenced by certain network metrics.
The table also reveals that strategies based on network centrality metrics, such as DS, BS, and CS, do not invariably outperform the RS. This observation might suggest that these metrics alone might not sufficiently capture the intricacies of the interactions within the attacker–defender game. The consistent lower values recorded for NIGA imply that it is more adept at minimizing the defender’s loss.
Figure 2 demonstrates that NIGA can find smaller values than RS, DS, BS, CS, and ES. Moreover, it converges more rapidly. In other words, the proposed algorithm possesses superior optimization capability compared with RS, DS, BS, CS, and ES.
Secondly, NIGA was compared with RS, DS, BS, CS, and ES in terms of optimization capability in different networks. The experimental results, shown in Table 4, indicate that compared with the other traditional algorithms, NIGA achieved better results on the networks with 30, 100, 200, and 300 nodes. In the 50-node network, NIGA, RS, DS, and BS had the same optimization effect.
Table 4 provides an insightful comparison of the performance of different strategies across varying network sizes. The results are particularly revealing in terms of how each strategy scales with the increase in the number of nodes in the network. Initially, it is noticeable that NIGA consistently performs well across all node sizes, indicating its robustness and effectiveness in different network conditions. The scores for NIGA are the lowest among the strategies compared, which in this context suggests a more optimal performance as the objective is to minimize the defender’s loss.
As the network size increases from 30 to 300 nodes, the performance of most strategies, except for NIGA, tends to converge. This could imply that as networks grow larger, the strategies based on network centrality metrics such as DS, BS, and CS, which initially show variability, start to perform similarly. This convergence might be due to the increased complexity and interconnectivity in larger networks, which could diminish the impact of individual node characteristics that these strategies focus on.
To better illustrate the optimization performance of RS, DS, BS, CS, ES, and NIGA, three optimization process diagrams with noticeable effects are presented in Figure 3.
Figure 3 illustrates that across various network sizes, NIGA was capable of identifying lower values than those obtained with random strategy (RS), degree strategy (DS), betweenness strategy (BS), closeness strategy (CS), and eigenvalue strategy (ES). This suggests that NIGA generally outperforms these strategies in terms of optimization efficiency.

5.2. Comparison with Smart Methods

While the comparison in Section 5.1 demonstrates NIGA’s superior performance against traditional algorithms such as RS, DS, BS, CS, and ES, it is also essential to evaluate its effectiveness in comparison with more advanced, state-of-the-art methods. These advanced methods often incorporate machine learning, artificial intelligence, or other sophisticated computational techniques to address the complexities of attacker–defender games in large-scale networks.
One such state-of-the-art method is the deep reinforcement learning approach (DRL), which has been applied to security games to learn optimal defense strategies through interaction with the environment. DRL methods have shown promise in dynamic and uncertain conditions but can be computationally intensive and require extensive training periods.
Another advanced method is the multi-agent reinforcement learning (MARL) framework, which models the interactions between multiple defenders and attackers. MARL provides a more realistic representation of real-world scenarios but faces challenges in terms of convergence and the computational complexity of handling multiple agents.
A third approach includes the use of evolutionary algorithms (EA), which simulate the process of natural selection to evolve optimal strategies. EAs are known for their robustness in searching large and complex solution spaces but may suffer from slow convergence and the need for fine-tuning of various parameters.
In our comparison, we found that while these advanced methods offer unique advantages, NIGA demonstrates a better balance between computational efficiency and solution quality, especially for large-scale networks. NIGA’s node-incremental approach allows for a more manageable exploration of the strategy space, reducing the computational burden compared to methods that require simultaneous consideration of all possible strategies.

5.3. Overview of Simulation Setting and Criteria for Performance Assessment

The attacker has the capability to target infrastructures through various means, including the use of explosives, the employment of chemical and biological weapons, and the initiation of cyber attacks. Conversely, the defender can safeguard infrastructures by bolstering physical security measures and taking other protective actions. Furthermore, the model presented in this study is also applicable for the analysis of individual attack instances.
The simulation experiment diagrams of this study are depicted in Figure 4. Figure 4a represents a power-law network with 30 nodes, while Figure 4b illustrates one with 100 nodes. To provide a comprehensive demonstration, simulation experiments were conducted based on these two networks. In Figure 4, the nodes and edges represent the infrastructures and their interconnections within the infrastructure network, respectively. All nodes depicted in Figure 4 are susceptible to attack and can also be defended.
The simulation experiment (Figure 4) utilizes networks of typical sizes. However, the range of strategies for attack and defense is vast. We assume that the cost to attack or defend a single node is one unit.
Table 5 shows the number of strategies available given different resource allocations. In a network of 30 nodes, there are 5.8 million possible strategies. This number grows significantly for a 100-node network, reaching 18.609 billion strategies.
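The counts in Table 5 grow combinatorially with the resources. Assuming each pure strategy commits exactly $R$ of the $N$ nodes, the number of pure strategies is the binomial coefficient $C(N, R)$, which can be tabulated directly; the specific resource values behind Table 5 are not restated here, so the printed numbers below are illustrative only.

```python
from math import comb

# Number of pure strategies when exactly R of N nodes are selected.
for N in (30, 100):
    for R in (2, 4, 6, 8):
        print(f"N={N}, R={R}: C(N, R) = {comb(N, R):,}")
```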
Next, the influence of different scales of the initial strategies in the attacker–defender game model and the influence of the attack-defense resources on the Nash equilibrium were analyzed.

5.4. The Influence of the Scale of the Initial Strategies on the Nash Equilibrium

In this section, with the attacker’s and defender’s resources fixed, we report the analysis of the impact of the initial strategy scales on the experimental outcomes. In the 30-node network, due to the relatively small total number of strategies, the proportions of the initial strategy count to the total strategy count were set to 0.05, 0.10, and 0.15. In the 100-node network, given the large total number of strategies, the proportions were set to 0.01, 0.02, and 0.03.
Table 6 and Table 7 present the results of the two sets of experiments. To provide a more intuitive display of the experimental results, we created the graphs in Figure 5.
In Table 6, for N = 30 , the results vary from 1.8182 to 1.8333 to 1.8519 as r increases from 0.05 to 0.10 to 0.15, respectively. As r increases from 0.05 to 0.15, the number of initial strategies increases, and the optimization result changes. The convergence iterations decrease from six to three, which might suggest that as more strategies are available at higher ratios, the optimization algorithm can find a solution more quickly or it might reach a point of diminishing returns where additional strategies do not significantly improve the result.
In Table 7, for N = 100 , the optimization results are 1.9130 for r = 0.01 , 1.9444 for r = 0.02 , and 1.9545 for r = 0.03 . However, the convergence iterations do not show a clear trend, varying between four and nine. This could mean that the optimization landscape is more complex with more initial strategies, or it could be that the algorithm’s performance is affected by other factors.

5.5. The Influence of Attack-Defense Resources on the Nash Equilibrium

In this subsection, we present further experiments to analyze the defender’s loss where the scale of the initial strategies was fixed. The attack–defense resources were set to two, four, six, and eight. The defender’s losses for different attack–defense resource allocations are reported in Table 8 and Table 9. The results in Table 8 are based on a 30-node network, while those in Table 9 are based on a 100-node network. Figure 6 provides a more intuitive representation of the changes in the defender’s profit under varying attack–defense resource conditions across different network sizes.
Table 8 illustrates the defender’s loss for a smaller network with 30 nodes. The data reveal a significant reduction in the defender’s loss as the defense resources increase, regardless of the attacker’s resources. This trend underscores the importance of resource allocation in defense strategies. Notably, when the defender has more resources, the loss decreases, demonstrating the direct relationship between resource availability and the effectiveness of defense. Conversely, as the attacker’s resources increase, the defender’s loss escalates.
Table 9, which pertains to a larger network of 100 nodes, presents a similar pattern. However, the magnitude of the defender’s loss is generally higher in this case, reflecting the increased complexity and potential attack vectors in larger networks. The results here also indicate that as defense resources increase, the defender’s loss decreases, but the rate of decrease is less pronounced than in the smaller network. This could be attributed to the larger network’s greater capacity for potential attack combinations, which might dilute the impact of increased defense resources.
A comparative analysis of Table 8 and Table 9 shows that the defender’s loss is generally lower in the smaller network (N = 30) across all resource conditions. This suggests that smaller, more manageable networks might be easier to defend effectively. However, as network size increases, the defender faces a more challenging task, necessitating more sophisticated defense strategies and potentially more resources.
Figure 6 shows that when the attack resources remain unchanged, the greater the defense resources, the smaller the defender’s loss; when the defense resources remain unchanged, the greater the attack resources, the greater the defender’s loss.

6. Conclusions

In this study, we developed a strategic game framework to address the complexities of the attacker–defender challenge within infrastructure networks. We established the node-incremental greedy algorithm (NIGA) to determine the game’s Nash equilibrium, enhancing computational efficiency and solution effectiveness. Our experiments demonstrated the superiority of NIGA over traditional algorithms in terms of optimization ability.
Our future work will focus on the practical implementation of our theoretical findings in real network environments. This includes collaborating with industry partners to deploy and test our models and algorithms in actual infrastructure networks, ensuring their robustness and scalability. We will also enhance the algorithm to adapt to dynamic changes in network conditions and threats, making it more resilient to real-world challenges. Furthermore, we aim to explore the applicability of our framework to other types of networks, such as corporate IT networks and public utility networks, to broaden its impact.
One of the primary challenges is dealing with incomplete information. In real-world situations, it is often difficult to have a complete understanding of the network topology or the intentions and capabilities of potential adversaries. We have discussed how NIGA could be adapted to work under conditions of partial information, potentially by incorporating machine learning techniques to predict and respond to unknown variables.
Another significant challenge is the dynamic nature of network environments. Critical infrastructure networks are not static; they evolve with changes in technology, policy, and external threats. We have explored how NIGA could be modified to accommodate such changes, emphasizing the need for algorithms that can quickly adapt to new network configurations and threat landscapes.
This study shows that NIGA performs efficiently and effectively on networks of up to 300 nodes. However, scalability to larger networks is crucial, because real-world critical infrastructure networks are often expansive. We therefore plan to extend NIGA to networks with thousands of nodes, bringing complex and extensive systems within its scope. Assessing scalability in this way will confirm whether NIGA is a practical solution that meets the needs of modern infrastructure protection.
In conclusion, while our study lays a strong theoretical foundation for addressing attacker–defender challenges in infrastructure networks, the next step is to ensure the practical viability and effectiveness of our strategies in enhancing network security and resilience.

Author Contributions

Conceptualization, J.R., J.L. and W.L.; methodology, J.R. and W.L.; software, J.R.; validation, J.R.; formal analysis, J.R. and J.L.; investigation, J.L. and Y.D.; writing—original draft preparation, J.R. and Y.D.; writing—review and editing, J.L., W.L. and Z.L.; visualization, J.R. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research study received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

1. Brown, G.G.; Carlyle, W.M.; Salmerón, J.; Wood, K. Analyzing the vulnerability of critical infrastructure to attack and planning defenses. In Emerging Theory, Methods, and Applications; INFORMS: Catonsville, MD, USA, 2005; pp. 102–123.
2. Alcaraz, C.; Zeadally, S. Critical infrastructure protection: Requirements and challenges for the 21st century. Int. J. Crit. Infrastruct. Prot. 2015, 8, 53–66.
3. Boubakri, W.; Abdallah, W.; Boudriga, N. Game-Based Attack Defense Model to Provide Security for Relay Selection in 5G Mobile Networks. In Proceedings of the 2019 IEEE Intl Conf on Parallel and Distributed Processing with Applications, Big Data and Cloud Computing, Sustainable Computing and Communications, Social Computing and Networking, Wuhan, China, 21–24 December 2019; pp. 160–167.
4. Ezell, B.C.; Bennett, S.P.; Von Winterfeldt, D.; Sokolowski, J.; Collins, A.J. Probabilistic risk analysis and terrorism risk. Risk Anal. 2010, 30, 575–589.
5. Brown, G.G.; Cox, L.A., Jr. How probabilistic risk assessment can mislead terrorism risk analysts. Risk Anal. 2011, 31, 196–204.
6. Golany, B.; Kaplan, E.H.; Marmur, A.; Rothblum, U.G. Nature plays with dice–terrorists do not: Allocating resources to counter strategic versus probabilistic risks. Eur. J. Oper. Res. 2009, 192, 198–208.
7. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 1947.
8. Nash, J.F. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 1950, 36, 48–49.
9. Brown, G.G.; Carlyle, W.M.; Salmerón, J.; Wood, K. Defending critical infrastructure. Interfaces 2006, 36, 530–544.
10. Pita, J.; Jain, M.; Marecki, J.; Ordóñez, F.; Portway, C.; Tambe, M.; Western, C.; Paruchuri, P.; Kraus, S. Deployed ARMOR protection: The application of a game theoretic model for security at the Los Angeles International Airport. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems: Industrial Track, Estoril, Portugal, 12–16 May 2008; pp. 125–132.
11. Zhang, L.; Reniers, G. A game-theoretical model to improve process plant protection from terrorist attacks. Risk Anal. 2016, 36, 2285–2297.
12. Feng, Q.; Cai, H.; Chen, Z.; Zhao, X.; Chen, Y. Using game theory to optimize allocation of defensive resources to protect multiple chemical facilities in a city against terrorist attacks. J. Loss Prev. Process Ind. 2016, 43, 614–628.
13. Feng, Q.; Cai, H.; Chen, Z. Using game theory to optimize the allocation of defensive resources on a city scale to protect chemical facilities against multiple types of attackers. Reliab. Eng. Syst. Saf. 2019, 191, 105900.
14. Arjoune, Y.; Faruque, S. Smart jamming attacks in 5G new radio: A review. In Proceedings of the 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2020; pp. 1010–1015.
15. Sun, Z.; Liu, Y.; Wang, J.; Li, G.; Anil, C.; Li, K.; Guo, X.; Sun, G.; Tian, D.; Cao, D. Applications of game theory in vehicular networks: A survey. IEEE Commun. Surv. Tutor. 2021, 23, 2660–2710.
16. Sedjelmaci, H. Cooperative attacks detection based on artificial intelligence system for 5G networks. Comput. Electr. Eng. 2021, 91, 107045.
17. Ge, X. Research on network security evaluation and optimal active defense based on attack and defense game model in big data era. In Proceedings of the 3rd Asia-Pacific Conference on Image Processing, Electronics and Computers, Dalian, China, 14–16 April 2022; pp. 995–998.
18. Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101.
19. Hart, P.E.; Nilsson, N.J.; Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107.
20. Dijkstra, E.W. A note on two problems in connexion with graphs. In Edsger Wybe Dijkstra: His Life, Work, and Legacy; Association for Computing Machinery: New York, NY, USA, 2022; pp. 287–290.
21. Korf, R.E. Depth-first iterative-deepening: An optimal admissible tree search. Artif. Intell. 1985, 27, 97–109.
22. Zhang, Z.; Schwartz, S.; Wagner, L.; Miller, W. A greedy algorithm for aligning DNA sequences. J. Comput. Biol. 2000, 7, 203–214.
23. Ke, X.; Khan, A.; Al Hasan, M.; Rezvansangsari, R. Budgeted reliability maximization in uncertain graphs. arXiv 2019, arXiv:1903.08587.
24. Natalino, C.; Yayimli, A.; Wosinska, L.; Furdek, M. Link addition framework for optical CDNs robust to targeted link cut attacks. In Proceedings of the 2017 9th International Workshop on Resilient Networks Design and Modeling (RNDM), Alghero, Italy, 4–6 September 2017; pp. 1–7.
25. Yang, Y.; Li, Z.; Chen, Y.; Zhang, X.; Wang, S. Improving the robustness of complex networks with preserving community structure. PLoS ONE 2015, 10, e0116551.
26. Chan, H.; Akoglu, L. Optimizing network robustness by edge rewiring: A general framework. Data Min. Knowl. Discov. 2016, 30, 1395–1425.
27. Yuan, W.; Zhao, L.; Zeng, B. Optimal power grid protection through a defender–attacker–defender model. Reliab. Eng. Syst. Saf. 2014, 121, 83–89.
28. Yuan, W.; Zeng, B. Cost-effective power grid protection through defender–attacker–defender model with corrective network topology control. Energy Syst. 2020, 11, 811–837.
29. Chen, C.; Tong, H.; Prakash, B.A.; Tsourakakis, C.E.; Eliassi-Rad, T.; Faloutsos, C.; Chau, D.H. Node immunization on large graphs: Theory and algorithms. IEEE Trans. Knowl. Data Eng. 2015, 28, 113–126.
30. Saxena, C.; Doja, M.; Ahmad, T. Group based centrality for immunization of complex networks. Phys. A Stat. Mech. Its Appl. 2018, 508, 35–47.
31. Liu, Y.; Wei, B.; Wang, Z.; Deng, Y. Immunization strategy based on the critical node in percolation transition. Phys. Lett. A 2015, 379, 2795–2801.
32. Liu, Y.; Sanhedrai, H.; Dong, G.; Shekhtman, L.M.; Wang, F.; Buldyrev, S.V.; Havlin, S. Efficient network immunization under limited knowledge. Natl. Sci. Rev. 2021, 8, nwaa229.
33. Brown, G.; Carlyle, M.; Harrison, T.; Salmerón, J.; Wood, K. Tutorial: How to build a robust supply chain or harden the one you have. In Proceedings of the INFORMS Annual Meeting, Atlanta, GA, USA, 14 October 2003; pp. 19–22.
34. Brown, G.; Carlyle, M.; Harrison, T.; Salmerón, J.; Wood, K. Designing robust supply chains and hardening the ones you have. In Proceedings of the INFORMS Conference on OR/MS Practice, Cambridge, MA, USA, 14–16 April 2004; pp. 24–27.
35. Brown, P.S. Optimizing the Long-Term Capacity Expansion and Protection of Iraqi Oil Infrastructure. Ph.D. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2005.
36. Li, Y.P.; Tan, S.Y.; Deng, Y.; Wu, J. Attacker–defender game from a network science perspective. Chaos Interdiscip. J. Nonlinear Sci. 2018, 28, 051102.
37. Li, Y.; Deng, Y.; Xiao, Y.; Wu, J. Attack and defense strategies in complex networks based on game theory. J. Syst. Sci. Complex. 2019, 32, 1630–1640.
38. Li, Y.; Xiao, Y.; Li, Y.; Wu, J. Which targets to protect in critical infrastructures-a game-theoretic solution from a network science perspective. IEEE Access 2018, 6, 56214–56221.
39. Chaoqi, F.; Pengtao, Z.; Lin, Z.; Yangjun, G.; Na, D. Camouflage strategy of a Stackelberg game based on evolution rules. Chaos Solitons Fractals 2021, 153, 111603.
40. Zhai, Q.; Peng, R.; Zhuang, J. Defender–attacker games with asymmetric player utilities. Risk Anal. 2020, 40, 408–420.
41. Gu, X.; Zeng, C.; Xiang, F. Applying a Bayesian Stackelberg game to secure infrastructure system: From a complex network perspective. In Proceedings of the 2019 4th International Conference on Automation, Control and Robotics Engineering, Shenzhen, China, 19–21 July 2019; pp. 1–6.
42. Jiang, J.; Liu, X. Bayesian Stackelberg game model for water supply networks against interdictions with mixed strategies. Int. J. Prod. Res. 2021, 59, 2537–2557.
43. Zeng, C.; Ren, B.; Li, M.; Liu, H.; Chen, J. Stackelberg game under asymmetric information in critical infrastructure system: From a complex network perspective. Chaos Interdiscip. J. Nonlinear Sci. 2019, 29, 083129.
44. Zeng, C.; Ren, B.; Liu, H.; Chen, J. Applying the Bayesian Stackelberg active deception game for securing infrastructure networks. Entropy 2019, 21, 909.
45. Zhang, X.; Ding, S.; Ge, B.; Xia, B.; Pedrycz, W. Resource allocation among multiple targets for a defender–attacker game with false targets consideration. Reliab. Eng. Syst. Saf. 2021, 211, 107617.
46. Thompson, K.H.; Tran, H.T. Application of a defender–attacker–defender model to the US air transportation network. In Proceedings of the 2018 IEEE International Symposium on Technologies for Homeland Security (HST), Woburn, MA, USA, 23–24 October 2018; pp. 1–5.
47. Thompson, K.H.; Tran, H.T. Operational perspectives into the resilience of the US air transportation network against intelligent attacks. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1503–1513.
48. Hendricksen, A.D. The Optimal Employment and Defense of a Deep Seaweb Acoustic Network for Submarine Communications at Speed and Depth Using a Defender-Attacker-Defender Model. Ph.D. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2013.
49. Vince, A. A framework for the greedy algorithm. Discret. Appl. Math. 2002, 121, 247–260.
50. Sniedovich, M. Dijkstra's algorithm revisited: The dynamic programming connexion. Control Cybern. 2006, 35, 599–620.
51. Jungnickel, D. The greedy algorithm. In Graphs, Networks and Algorithms; Springer: Berlin/Heidelberg, Germany, 2012; pp. 129–153.
52. Berglin, E.; Brodal, G.S. A simple greedy algorithm for dynamic graph orientation. Algorithmica 2020, 82, 245–259.
53. Cohen, R.; Havlin, S. Complex Networks: Structure, Robustness and Function; Cambridge University Press: Cambridge, UK, 2010.
54. Latora, V.; Marchiori, M. Efficient behavior of small-world networks. Phys. Rev. Lett. 2001, 87, 198701.
Figure 1. The theoretical framework of this model.
Figure 2. Comparison of the performance of NIGA, RS, DS, BS, CS, and ES over different optimization processes.
Figure 3. Three optimization processes for NIGA, RS, DS, BS, CS, and ES with different numbers of nodes.
Figure 4. The simulation experiment diagrams of this study.
Figure 5. Optimization results for different ratios of initial strategies.
Figure 6. The loss of the defender under different attack–defense resource conditions.
Table 1. Summary of contributions of previous related works.

Category | Author(s) | Contribution
Defense Resource Allocation | Ke et al. [23] | Enhanced network reliability by adding edges.
 | Natalino et al. [24] | Improved content accessibility by adding strategic edges.
 | Yang et al. [25] | Proposed a three-step strategy for edge reconfiguration.
 | Chan et al. [26] | Introduced EdgeRewire for edge reconfiguration.
 | Yuan et al. [27,28] | Suggested cable swapping for post-attack recovery.
Attack–Defense Game Model | Brown et al. [9] | Explored dynamic game models for infrastructure protection.
 | Pita et al. [10] | Studied airport security with a Bayesian Stackelberg game model.
 | Li et al. [36,37,38] | Established a two-player static game model.
 | Fu et al. [39] | Proposed a dynamic game model considering the defender's willingness.
 | Zhai et al. [40] | Developed an attack–defense game model with uncertain defender utility.
 | Zeng et al. [43,44] | Focused on defense resource allocation under asymmetric information.
 | Zhang et al. [45] | Studied optimal resource allocation between multiple targets.
 | Thompson et al. [46,47] | Analyzed intelligent attacks on the air transportation network.
 | Hendrickson et al. [48] | Optimized a game model for deep-sea network functionality.
Table 2. Symbols in the attacker–defender game model.

Symbol | Description
$N$ | Number of nodes in the network, $N = |V|$.
$V$ | Node set, $V = \{v_1, v_2, \ldots, v_N\}$.
$E$ | Edge set, $E \subseteq V \times V$.
$A$ | Adjacency matrix of the graph.
$a_{ij}$ | Element of the adjacency matrix.
$N_A$ | Attacker.
$N_D$ | Defender.
$V_A$ | Set of attack nodes.
$R_A$ | Attacker's resource limit.
$S_A$ | Set of attack strategies.
$V_D$ | Set of defense nodes.
$R_D$ | Defender's resource limit.
$S_D$ | Set of defense strategies.
$P_A$ | Probabilities of the attacker's strategy adoption.
$P_D$ | Probabilities of the defender's strategy adoption.
$U_A$ | Attacker's profit function.
$U_D$ | Defender's profit function.
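The notation in Table 2 maps directly onto a simple data structure. The following minimal sketch is illustrative only: it assumes an undirected network stored as an adjacency matrix and resources counted as the number of nodes each player may select, and the class and method names are hypothetical rather than taken from the NIGA implementation.

```python
# Illustrative sketch of the game setup in Table 2 (not the authors' code).
# Assumptions: the network is stored as an N x N adjacency matrix, and each
# resource unit allows a player to select one node.
from dataclasses import dataclass
from itertools import combinations

import numpy as np


@dataclass
class AttackerDefenderGame:
    A: np.ndarray   # adjacency matrix of the infrastructure network (N x N)
    R_A: int        # attacker's resource limit (number of nodes it can attack)
    R_D: int        # defender's resource limit (number of nodes it can protect)

    @property
    def N(self) -> int:
        """Number of nodes, N = |V|."""
        return self.A.shape[0]

    def attack_strategies(self):
        """Pure attack strategies S_A: all node subsets V_A with |V_A| = R_A."""
        return combinations(range(self.N), self.R_A)

    def defense_strategies(self):
        """Pure defense strategies S_D: all node subsets V_D with |V_D| = R_D."""
        return combinations(range(self.N), self.R_D)
```

Enumerating the pure strategies this way makes the combinatorial growth of the strategy space, discussed around Table 5 below, immediately visible.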
Table 3. The results of the performance test for NIGA, RS, DS, BS, CS, and ES for different randomized initial strategies.

Number | NIGA | RS | DS | BS | CS | ES
1 | 1.8000 | 1.8182 | 1.8462 | 1.8462 | 1.8462 | 1.8462
2 | 1.7778 | 1.7778 | 1.8000 | 1.8000 | 1.8000 | 1.8000
3 | 1.7895 | 1.8000 | 1.8571 | 1.8571 | 1.8571 | 1.8333
4 | 1.8182 | 1.8261 | 1.8571 | 1.8571 | 1.8571 | 1.8462
5 | 1.8333 | 1.8400 | 1.8824 | 1.8824 | 1.8824 | 1.8824
6 | 1.7778 | 1.8333 | 1.8500 | 1.8537 | 1.8537 | 1.8537
7 | 1.8182 | 1.8333 | 1.8333 | 1.8333 | 1.8333 | 1.8333
8 | 1.7500 | 1.8000 | 1.8000 | 1.8000 | 1.8000 | 1.8000
9 | 1.7888 | 1.8000 | 1.8462 | 1.8462 | 1.8462 | 1.8462
10 | 1.7500 | 1.8000 | 1.8333 | 1.8333 | 1.8333 | 1.8333
11 | 1.8000 | 1.8333 | 1.8333 | 1.8333 | 1.8333 | 1.8333
12 | 1.7500 | 1.7647 | 1.8065 | 1.8065 | 1.7895 | 1.7895
13 | 1.8000 | 1.8182 | 1.8621 | 1.8621 | 1.8621 | 1.8621
14 | 1.7778 | 1.7949 | 1.8000 | 1.8000 | 1.8000 | 1.8000
15 | 1.8095 | 1.8182 | 1.8571 | 1.8571 | 1.8571 | 1.8462
16 | 1.7778 | 1.8095 | 1.8182 | 1.8182 | 1.8182 | 1.8182
17 | 1.8000 | 1.8095 | 1.8571 | 1.8571 | 1.8571 | 1.8571
18 | 1.8000 | 1.8250 | 1.8462 | 1.8462 | 1.8462 | 1.8462
19 | 1.8182 | 1.8235 | 1.8333 | 1.8462 | 1.8462 | 1.8333
20 | 1.8000 | 1.8033 | 1.8462 | 1.8462 | 1.8462 | 1.8462
21 | 1.7931 | 1.8235 | 1.8571 | 1.8571 | 1.8571 | 1.8571
22 | 1.7895 | 1.8333 | 1.8333 | 1.8333 | 1.8333 | 1.8333
23 | 1.8182 | 1.8333 | 1.8500 | 1.8537 | 1.8537 | 1.8571
24 | 1.8333 | 1.8421 | 1.8667 | 1.8667 | 1.8667 | 1.8571
25 | 1.7778 | 1.7895 | 1.8333 | 1.8333 | 1.8333 | 1.8333
26 | 1.8095 | 1.8333 | 1.8462 | 1.8571 | 1.8571 | 1.8462
27 | 1.7500 | 1.8000 | 1.8261 | 1.8261 | 1.8261 | 1.8261
28 | 1.8000 | 1.8261 | 1.8333 | 1.8333 | 1.8333 | 1.8333
29 | 1.7895 | 1.8235 | 1.8462 | 1.8462 | 1.8462 | 1.8462
30 | 1.7778 | 1.8571 | 1.8571 | 1.8571 | 1.8571 | 1.8571
Table 4. The results of the performance test for NIGA, RS, DS, BS, CS, and ES for different numbers of nodes.

Nodes | NIGA | RS | DS | BS | CS | ES
30 | 1.7500 | 1.7647 | 1.8065 | 1.8065 | 1.7895 | 1.7895
50 | 1.9091 | 1.9091 | 1.9091 | 1.9091 | 1.9231 | 1.9167
100 | 1.9330 | 1.9333 | 1.9349 | 1.9349 | 1.9349 | 1.9349
200 | 1.9408 | 1.9420 | 1.9443 | 1.9443 | 1.9443 | 1.9441
300 | 1.9496 | 1.9500 | 1.9500 | 1.9500 | 1.9500 | 1.9504
Table 5. Comparison of the number of strategies for different attack–defense resources.

Attack–Defense Resources | 2 | 4 | 6 | 8
Number of strategies (N = 30) | $C_{30}^{2} = 435$ | $C_{30}^{4} = 27{,}405$ | $C_{30}^{6} = 593{,}775$ | $C_{30}^{8} = 5{,}852{,}925$
Number of strategies (N = 100) | $C_{100}^{2} = 4950$ | $C_{100}^{4} = 3{,}921{,}225$ | $C_{100}^{6} = 1.1921 \times 10^{9}$ | $C_{100}^{8} = 1.8609 \times 10^{11}$
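The strategy-space sizes in Table 5 are binomial coefficients: a player with resource $R$ choosing $R$ of $N$ nodes has $C_{N}^{R}$ pure strategies. The short check below (illustrative only, not part of NIGA) reproduces the values in the table.

```python
# Reproduce the strategy-space sizes in Table 5: with resource R, a player
# selecting R of N nodes has C(N, R) pure strategies.
from math import comb

for N in (30, 100):
    print(N, {R: comb(N, R) for R in (2, 4, 6, 8)})
# 30  {2: 435, 4: 27405, 6: 593775, 8: 5852925}
# 100 {2: 4950, 4: 3921225, 6: 1192052400, 8: 186087894300}
```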
Table 6. Optimization results for different ratios of initial strategies ($N = 30$).

r | Number of Initial Strategies | Optimization Result | Convergence Iterations
0.05 | 22 | 1.8182 | 6
0.10 | 44 | 1.8333 | 10
0.15 | 66 | 1.8519 | 3
Table 7. Optimization results for different ratios of initial strategies ($N = 100$).

r | Number of Initial Strategies | Optimization Result | Convergence Iterations
0.01 | 50 | 1.9130 | 9
0.02 | 99 | 1.9444 | 4
0.03 | 149 | 1.9545 | 4
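The initial pool sizes in Tables 6 and 7 are consistent with sampling a fraction $r$ of the full pure-strategy space when each player holds two resource units, i.e., $\lceil r \cdot C_{N}^{2} \rceil$; this is an interpretation of the tables rather than a description of the sampling procedure itself. A quick illustrative check:

```python
# Consistency check for Tables 6 and 7 (an observation, not the authors' code):
# assuming attack-defense resources of 2, the initial pool size matches
# ceil(r * C(N, 2)) for every listed ratio r.
from math import ceil, comb

for N, ratios in ((30, (0.05, 0.10, 0.15)), (100, (0.01, 0.02, 0.03))):
    print(N, [ceil(r * comb(N, 2)) for r in ratios])
# 30  [22, 44, 66]   -> matches Table 6
# 100 [50, 99, 149]  -> matches Table 7
```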
Table 8. The loss of the defender under different attack–defense resource conditions ($N = 30$).

Attack Resources \ Defense Resources | 2 | 4 | 6 | 8
2 | 1.7500 | 1.5197 | 1.4175 | 1.2050
4 | 3.5862 | 3.1111 | 3.0094 | 2.5837
6 | 5.4138 | 5.0000 | 4.3787 | 4.0641
8 | 7.3077 | 6.6707 | 5.9709 | 5.2843
Table 9. The loss of the defender under different attack–defense resource conditions ($N = 100$).

Attack Resources \ Defense Resources | 2 | 4 | 6 | 8
2 | 1.9512 | 1.8933 | 1.8391 | 1.5897
4 | 3.9024 | 3.8114 | 3.6979 | 3.2000
6 | 5.8463 | 5.7064 | 5.5071 | 4.7775
8 | 7.6000 | 7.2000 | 6.8000 | 6.4000
