Article

Competitive Sharing of Spectrum: Reservation Obfuscation and Verification Strategies

Wireless Information Network Laboratory (WINLAB), Rutgers University, North Brunswick, NJ 08901, USA
* Author to whom correspondence should be addressed.
Entropy 2017, 19(7), 363; https://doi.org/10.3390/e19070363
Submission received: 19 May 2017 / Revised: 10 July 2017 / Accepted: 11 July 2017 / Published: 15 July 2017
(This article belongs to the Special Issue Information-Theoretic Security)

Abstract
Sharing of radio spectrum between different types of wireless systems (e.g., different service providers) is the foundation for making more efficient usage of spectrum. Cognitive radio technologies have spurred the design of spectrum servers that coordinate the sharing of spectrum between different wireless systems. These servers receive information regarding the needs of each system, and then provide instructions back to each system regarding the spectrum bands they may use. This sharing of information is complicated by the fact that these systems are often in competition with each other: each system desires to use as much of the spectrum as possible to support its users, and each system could learn which bands the other system uses and harm them. Three problems arise in such a spectrum-sharing problem: (1) how to maintain reliable performance for each system sharing the resource (licensed spectrum); (2) whether to believe the resource requests announced by each agent; and (3) if they do not believe, how much effort should be devoted to inspecting spectrum so as to prevent possible malicious activity. Since this problem can arise for a variety of wireless systems, we present an abstract formulation in which the agents or the spectrum server introduce obfuscation in the resource assignment to maintain reliability. We derive a closed-form expression for the expected damage that can arise from possible malicious activity, and using this formula we find a tradeoff between the amount of extra decoys that must be used in order to support higher communication fidelity against potential interference, and the cost of maintaining this reliability. Then, we examine a scenario where a smart adversary may also use obfuscation itself, and formulate the scenario as a signaling game, which can be solved by applying a classical iterative forward-induction algorithm. For an important particular case, the game is solved in closed form, which gives conditions for deciding whether an agent can be trusted, or whether its request should be inspected and how intensely it should be inspected.

1. Introduction

Cognitive radio (CR) networks are being explored as a powerful tool to improve spectrum efficiency by allowing unlicensed (secondary) users (SUs) to use spectrum belonging to a licensed (primary) user (PU) as long as they do not cause interference. Towards this end, the concept of a spectrum server has been introduced to improve the sharing of spectrum [1,2,3]. Spectrum servers coordinate the sharing of spectrum between different wireless services by taking in resource requests (i.e., the number of bands needed), and then allocating resource assignments to these services. Unfortunately, the open and dynamic nature of CR platforms and their software, which allows for opportunistic access to the licensed spectrum by potentially unknown users, makes the operation of CR networks and their associated dynamic spectrum access protocols vulnerable to exploitation and interference. In particular, transmission reservation protocols, by which services (or users) request spectrum from a spectrum server and thereby support the opportunistic usage of spectrum by secondary users, assume that the entities involved in the protocols are honest. These protocols can fail if malicious services/users aim to undermine the rules and etiquette surrounding them. Such malicious manipulation is unfortunately easy and, for this reason, CR security has recently attracted considerable research attention. A reader can find comprehensive surveys of such threats in [4,5,6,7].
A curious reader might also ask: if opportunistic access to licensed spectrum by unknown users can be dangerous to the network, why not restrict such access? The issue is that such access, as pointed out in the National Broadband Plan [8] and in the President's Council of Advisors on Science and Technology (PCAST) report [9], represents an important economic growth engine in the United States. In particular, the PCAST report recommended the sharing of underutilized federal spectrum and identified 1000 MHz of spectrum as part of an ambitious endeavor to create "the first shared-use spectrum superhighways". Consequently, it is very important to develop a foundational understanding of the interference implications associated with spectrum access, and of how spectrum bands can be assigned so as to mitigate intentional interference caused by malicious participants exploiting the information shared in a spectrum assignment.
In many scenarios involving spectrum sharing, the underlying spectrum resources (which might be spectral bands, or spectro-temporal slots) might originally be assigned to one entity (as in the case of TV white space spectrum), and it is the role of a spectrum server to support the sharing of spectrum with a second, untrusted entity. The spectrum server, which might be administered by the government or a neutral third party, receives resource requests from both a primary/incumbent entity and the second entity, and aims to re-assign the spectrum so as to support an improved, combined benefit for both entities. Since the entities do not trust each other, the sharing of spectrum inherently becomes a competitive scenario that encounters numerous security risks, as each system desires to use as much spectrum as possible to support its users, and each system could learn and cause harm (interference) to the bands assigned to the other system. Consequently, there are three main problems that arise in such a spectrum sharing problem: (1) how to maintain reliable performance for each system sharing the resource (licensed spectrum); (2) whether to believe the resource requests announced by each agent; and (3) if they do not believe, how much effort should be devoted to inspecting spectrum so as to prevent possible malicious activity. In this paper, we present analysis that is focused on improving the security and assurability of the spectrum sharing problem by applying the notion of decoy tasks, i.e., obfuscating spectrum resource requests. Through our analysis, we show that it is possible to determine the expected damage that might result from a potential malicious activity, and to arrive at an estimate for the probability of detecting the malicious activity by employing "spectrum inspection". Based on these probabilities, we then formulate a signaling game that allows one to determine whether to believe the second agent or not, and then how to engage in spectrum inspection. Since this type of problem can arise in many different scenarios involving spectrum sharing, such as the sharing of spectrum in cognitive radio networks, the sharing of unlicensed bands between two wireless carriers/technologies, or the sharing between radar and communication systems [10,11,12], we formulate the problem abstractly as one involving two agents and a set of spectrum bands.
When considering dynamic spectrum sharing in a potentially adversarial setting, there is an underlying competitive problem to consider: multiple users compete for the spectrum resources and, regardless of whether they are malicious or not, there may not be any incentive for them to communally cooperate and remain within the bands that have been assigned to them. Game theory is an appropriate tool to analyze such competitive problems. In [13], the readers can find a comprehensive overview of game-theoretical techniques for dynamic spectrum sharing problems, which maps out the problem of primary users sharing spectrum with well-intentioned secondary users, with a focus on the auctioning of spectrum. An excellent reference book on game theory for wireless and communication networks is [14]. Our work differs from the problems outlined in both of these surveys, as our problem involves a secondary user that has two objectives (a beneficial and an adversarial intent), and thus our formulation involves adversarial rewards tied to the harm inflicted upon the primary user. Since our work explores the sharing of information between different organizations while facing attacks, one of the most relevant examples of game-theoretical methods for modeling security problems is [15], where a Cournot-type model based on a contest success function was suggested to model investments in information security technologies when the firms share information resources. While information sharing can, on one hand, increase economic benefits, on the other hand, it also creates new possibilities for adversaries aiming to mount security threats such as cyber attacks. While this work considers many motivating scenarios for two firms exchanging information, it is specifically focused on cyber security threats originating from an external, third agent and the impact that third agent can have upon the sharing of information between two firms. Our problem, on the other hand, considers one of the two participants having both a beneficial and a harmful objective, and consequently those models do not extend directly to our communication problem, where the secondary user can inflict wireless interference. In [16], a signaling game was proposed to model defense against attacks in honeypot-enabled networks. In this game, the attacker may try to deceive the defender by employing different types of attacks, ranging from suspicious to seemingly normal activity, while the defender in turn can make use of honeypots as a deception tool to trap the attacker. In our paper, a spectrum server announces decoys as a means to protect the primary user from interference attacks by a secondary user that may or may not choose to attack, but these decoys are not used as a means to dupe the adversary into attacking. In [17], a repeated spectrum-sharing game with cheat-proof strategies was investigated. By using a punishment-based repeated game, users are incentivized to share the spectrum in a cooperative way; and through mechanism-design-based and statistics-based approaches, user honesty is further enforced. Our work differs from this paper in that the objective of the secondary user is two-fold: both to acquire an appropriate number of channels to support its users, and to increase its ability to launch an interference attack against the primary user. In this regard, our work includes interference as a reward metric for the secondary user system.
In [18], the authors explore the problem of a coalition of users communicating in the presence of an adversary that may apply different types of jamming strategies, and ask whether it is possible to learn the adversary's type. A game-theoretical approach is used to show that it is possible to learn the adversary's strategy in a finite number of steps. That work involves a general fading channel and power allocation formulation and does not involve a spectrum server allocating channels, and the objective of the communicating nodes is to identify the adversary's strategy. In [19], a fictitious game was proposed for analyzing the defense of cognitive radio networks against an unknown jamming attacker, where the defense strategy employed is to switch channels in order to evade the jammer. Our work differs in that their adversary is separate from the secondary users (as opposed to being a secondary user), and the primary user is considered apart from the conflict. In particular, they do not consider aspects related to sharing between the primary and secondary user, nor that secondary users may have malicious intentions against the primary user. In [20], a game was suggested in which an SU splits its time between law-obedient message transmission and noise transmission to jam the PU. This work is similar to ours in that it considers a secondary user that has both a benevolent and an interference objective. However, it differs in two key aspects: first, the overall objectives are to maximize data rates; and, second, the secondary user employs jamming (noise forwarding) not to harm the primary user, but to nudge the primary user into adjusting its power allocation so that the secondary user can achieve a minimum data rate. Our work, however, explicitly considers that the secondary user has an adversarial intent underlying the use of interference. In [21], the resilience of LTE networks against smart jamming attacks was modeled; LTE represents an important example of where spectrum sharing is being proposed. A one-time spectrum coexistence problem in dynamic spectrum access when the secondary user may be malicious was investigated in our prior work [22]. While that work is similar to the work presented here, an important difference is that the current work introduces additional functions that legitimate agents can use to defend themselves against the false announcements and interference introduced by the adversary. In particular, the current work goes beyond the interference tradeoffs explored in [22] to include in the game the potential for each agent to believe or disbelieve the information being shared, and to give the primary user the ability to inspect the spectrum activities of a secondary user. Notably, in the current work, the costs associated with inspecting and the fine/penalty for a secondary user caught making a fraudulent resource request are integrated into the interference formulation, leading to the determination of the equilibria with respect to the number of channels being shared and the underlying costs for both sides associated with protecting and attacking spectrum sharing.
It must be recognized that the strategy an adversary adopts depends significantly on the adversary's objective, and such knowledge can lead to better defenses. For example, the detection of an intruder under uncertainty about the application being used was investigated in [23]. Packet-dropping attacks were studied in [24]. As further examples of game theory being applied to security and communication problems, we briefly mention [25] as a reference involving modeling malicious users in collaborative networks, [26] as a reference in which entities share information while engaged in information warfare, [27] for modeling attack-type uncertainty in a network, [28] for security threats involving multiple users in ad hoc networks, and [29] for resource attacks in networks using the ALOHA protocol. How secret communication can be affected by the fact that an adversary's capability to eavesdrop on a collection of communications from a base station to a set of users may be restricted and unknown to the transmitter was investigated in [30]. Problems related to spectrum scanning have been presented in [31,32], where the objective is to develop spectrum-scanning strategies that support the detection of a user illicitly using spectrum, and in [33] for detecting attacks aimed at reducing the size of spectrum opportunities in a dynamic spectrum-sharing problem. The interactions between a user and a smart jammer regarding their respective choices of transmit power in a general wireless setting were studied in [34], while [35,36] considered problems related to game theory and network security. While these references are not directly relevant to the spectrum allocation problem explored in this paper, they help motivate the work that we present here.
The organization of this paper is as follows: in Section 2 and its two subsections, we introduce the tradeoff that exists between supporting the PU's communication reliability and the cost of such reliability. We formulate this tradeoff as a two-step game and obtain an explicit solution to the game. In Section 3, we formulate and solve a signaling game that examines a different tradeoff problem, namely, whether it is too costly/risky to believe the spectrum request originating from the potentially malicious SU and, if not, then how to engage in verification of the SU's request. In Section 4, conclusions are presented.

2. Tradeoff between Communication Reliability and Its Cost

We begin by presenting a general, universal dynamic access problem through which many practical coordinated spectrum sharing cases may be examined. Our universal dynamic spectrum access problem is depicted in Figure 1. In this scenario, there are $n$ spectrum resources, which we shall refer to as bands (e.g., these may be actual frequency bands, or time-frequency slots in the context of radio resource scheduling), that are available for usage by two different players, the primary user (PU) and the secondary user (SU). These $n$ bands are administered by a spectrum owner (SO), whose objective is two-fold: first, it aims to support the improved usage of the $n$ bands collectively by both the PU and SU; and, second, it is responsible for supporting the reliable communication of the PU in the presence of the SU, who might be malicious. The PU wants to reliably communicate with a set of $n_P$ users using $n_P$ bands, using a single band for each user. The SU, similarly, must support reliable communication with a set of $n_T$ users using $n_T$ bands, where $n_T + n_P < n$ and $n$ is the total number of bands available for sharing; the SU likewise needs only a single band for each of its users.
In the context of our problem, the SO may be thought of as a spectrum server that takes in information related to each side's resource requests and appropriately shares such information with the other participants. The conduit of information shared between the PU and the SO, and between the SU and the SO, is an example of a spectrum underlay, and many practical approaches have been proposed for implementing such a spectrum coordination system. For example, in the context of coordinating between two different LTE providers aiming to share spectrum, the X2 interface is a peer-to-peer channel that supports inter-cell interference coordination (ICIC) in LTE, and one could propose extensions to the X2 and ICIC standards that would support coordination between different LTE systems and a spectrum server. Alternatively, one could employ the operations, administration and management (OAM) interface that has been used as the basis for building a connection between self-organizing network controllers and eNB scheduling agent software to gain access to scheduling and MAC-layer functions. Similar approaches to cellular coordination have already been prototyped and validated using software-defined networking interfaces, such as that presented in [37]. The SU might be law-obedient (with probability $q_0$), in which case he will use only the bands reserved for him, or he might be malicious (with probability $q_1 = 1 - q_0$), in which case he could try to harm the PU's communication (e.g., by jamming). The SO has limited knowledge of the SU's interference capabilities; to reflect this, the SO only knows that the SU can interfere with $n_A$ signals. The SO has to reserve bands for use by the PU and the SU. The SU knows which bands are reserved for him and which are reserved for the PU. Without loss of generality, we can assume that the SO reserves bands $[n - n_T + 1, n]$ for the SU's communication. Thus, the SO has to reserve a set of bands for the PU within the bands $[1, n - n_T]$. The number of bands reserved for the PU's usage should be larger than $n_P$ in order to reduce the probability that the SU can interfere with the PU's legitimate signals; in essence, the PU's real allocation has been privacy-enhanced with the announcement of additional, decoy channels.
To reflect the fact that there might be a cost associated with introducing uncertainty in order to maintain communication reliability, we present the cost model that we will use in this paper. We assume that when the SO reserves extra bands as decoys for the PU (beyond what the PU needs to support its users), each such reservation costs $C_U$ per band. The SO is thus faced with a dilemma: on one hand, more bands reserved for the PU increase communication reliability (it becomes harder for the SU to guess, and thus successfully interfere with, the actual communication signals); on the other hand, the costs associated with maintaining this higher level of communication reliability increase. Thus, the SO has to make a tradeoff between the PU's communication reliability and its cost. We will formulate and solve this problem as a two-step game between the SO and the SU in the following two subsections.

2.1. First Step of the Game: To Make the Tradeoff between Communication Reliability and Its Cost

In the first step of the game, to establish the tradeoff between communication reliability and the cost of the obfuscation associated with increasing it, we assume the number of bands (decoys) $c$ reserved for increasing the domain of uncertainty is fixed. Without loss of generality, we can assume that bands $[1, n_P + c]$ are reserved for the PU, and the SU knows these bands. The PU has to support reliable communication with $n_P$ users. The actual band assigned to each such user is fixed and known only to the PU. Without loss of generality, we can assume that the bands for these $n_P$ users are allocated within the bands $[1, n - n_T]$, and there is no way for the SU to ascertain whether an announced channel will be used or whether it will be a decoy channel. Further, we suppose that the SU is malicious and could try to interfere with the PU's communication by using a jamming capacity of $n_A$ signals, where each interference signal can reside in only a single band.
A (pure) strategy $Y$ of the SU is a subset of $n_A$ bands out of the set $[1, n_P + c]$ of bands reserved for the PU's communication purposes, i.e., $|Y| = n_A$. Then, a (pure) strategy $X$ of the SO is a subset of $n_P$ bands from the bands $[1, n_P + c]$, i.e., $|X| = n_P$, which are assigned to the PU for transmitting communication signals. The payoff to the SO is the number of successfully transmitted (un-jammed) PU signals, i.e.,

$$v_{SO}(X, Y) = |X \setminus Y|, \tag{1}$$

namely, the bands that the PU employed that were not also selected by the SU. We look for a saddle point (an equilibrium in strategies) [38], i.e., for a pair of strategies $(X^*, Y^*)$ such that,

$$v_{SO}(X, Y^*) \le v_{SO}(X^*, Y^*) \le v_{SO}(X^*, Y), \tag{2}$$

where $v = v_{SO}(X^*, Y^*)$ is the value of the game.
Since $n_T + n_A < n$, for each SU strategy $Y^*$ there is an SO strategy $X$ such that $X \cap Y^*$ is the empty set. Thus, by (1) and (2), the value of the game would have to be greater than or equal to $n_P$. On the other hand, for each SO strategy $X^*$ there is an SU strategy $Y$ such that $X^* \cap Y$ is not empty. Thus, by (1) and (2), the value of the game would have to be smaller than $n_P$. This implies that the game does not have an equilibrium in pure strategies, i.e., where specific sets are chosen deterministically. To obtain an equilibrium, we have to employ mixed strategies, which randomize the selection of pure strategies. The following proposition gives the value of the game and the equilibrium strategies.
Proposition 1.
The value of the first step of the considered game is,

$$T_{n_P,n_A}^{n_P+c} = n_P\left(1 - \frac{n_A}{n_P+c}\right), \tag{3}$$

and a saddle point is $(X_{n_P,[1,n_P+c]}, Y_{n_A,[1,n_P+c]})$, where:
(1) $X_{n_P,[1,n_P+c]}$ is a (mixed) strategy of the SO in which the SO assigns to the PU $n_P$ bands chosen at random with equal probability from the total set $[1, n_P+c]$ of bands reserved for the PU's communication. There are $\binom{n_P+c}{n_P}$ subsets of the $n_P+c$ bands consisting of $n_P$ bands; thus, the strategy $X_{n_P,[1,n_P+c]}$ chooses each such subset with probability $1\big/\binom{n_P+c}{n_P}$. (Here $\binom{n}{p} = \frac{n!}{(n-p)!\,p!}$ is the number of combinations of $p$ objects selected out of $n$ objects.)
(2) $Y_{n_A,[1,n_P+c]}$ is a (mixed) strategy of the SU that involves choosing at random $n_A$ bands with equal probability from the full set $[1, n_P+c]$. There are $\binom{n_P+c}{n_A}$ subsets of the $n_P+c$ bands consisting of $n_A$ bands; thus, the strategy $Y_{n_A,[1,n_P+c]}$ chooses each such subset with probability $1\big/\binom{n_P+c}{n_A}$.
Also, we note that $X_{n_P,[1,n_P+c]}$ and $Y_{n_A,[1,n_P+c]}$ are equalizing strategies, i.e., for any pure SU strategy $Y$ and pure SO strategy $X$ the following equalities hold:

$$v_{SO}(X_{n_P,[1,n_P+c]}, Y) = v_{SO}(X, Y_{n_A,[1,n_P+c]}) = T_{n_P,n_A}^{n_P+c}.$$

The expected number of successfully interfered bands is,

$$H_{n_P,n_A}^{n_P+c} = \frac{n_A\, n_P}{n_P+c}.$$
Proof. 
Let the SO apply a mixed strategy $X_{n_P,[1,m]}$, where $m = n_P + c$, and let the SU apply a (pure) strategy $Y$ that involves assigning $n_A$ fixed bands for the purpose of interfering with the PU. Then, the expected number of successfully transmitted signals is,

$$v_{SO}(X_{n_P,[1,m]}, Y) = \sum_{i=0}^{n_{PA}} (n_P - i)\,\frac{\binom{n_A}{i}\binom{m-n_A}{n_P-i}}{\binom{m}{n_P}},$$

where $n_{PA} := \min\{n_P, n_A\}$.
Now, look at the SU. Let the SO apply a (pure) strategy $X$, i.e., a set of $n_P$ bands for the PU's transmission is fixed, and let the SU apply a mixed strategy $Y_{n_A,[1,m]}$. Then, the expected number of successfully transmitted signals is,

$$v_{SO}(X, Y_{n_A,[1,m]}) = \sum_{i=0}^{n_{PA}} (n_P - i)\,\frac{\binom{n_P}{i}\binom{m-n_P}{n_A-i}}{\binom{m}{n_A}}.$$

Since,

$$\frac{\binom{n_A}{i}\binom{m-n_A}{n_P-i}}{\binom{m}{n_P}} = \frac{\binom{n_P}{i}\binom{m-n_P}{n_A-i}}{\binom{m}{n_A}},$$

then,

$$v_{SO}(X_{n_P,[1,m]}, Y) = v_{SO}(X, Y_{n_A,[1,m]}) = T_{n_P,n_A}^{m}.$$

Now we prove (3) by induction on $m$. Let $n_A \le n_P$. For $m = n_A$ the result is obvious. Suppose it holds for some $m \ge n_A$; we prove that it then also holds for $m+1$. Since $\binom{m+1}{k} = \binom{m}{k} + \binom{m}{k-1}$ for any $k$, we have that,

$$\begin{aligned}
\sum_{i=0}^{n_A}(n_P-i)\binom{n_P}{i}\binom{m+1-n_P}{n_A-i}
&= \sum_{i=0}^{n_A}(n_P-i)\binom{n_P}{i}\binom{m-n_P}{n_A-i} + \sum_{i=0}^{n_A-1}(n_P-i)\binom{n_P}{i}\binom{m-n_P}{n_A-1-i} \\
&= (\text{by the induction assumption}) = n_P\left(1-\frac{n_A}{m}\right)\binom{m}{n_A} + n_P\left(1-\frac{n_A-1}{m}\right)\binom{m}{n_A-1} \\
&= n_P\left(1-\frac{n_A}{m+1}\right)\binom{m+1}{n_A},
\end{aligned}$$

and (3) follows. The case $n_A > n_P$ can be considered similarly. ☐
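To make the closed form concrete, the following minimal Python sketch (ours, not code from the paper) estimates the value of the game by Monte Carlo simulation of the two equalizing strategies of Proposition 1, and compares the result against $T_{n_P,n_A}^{n_P+c}$ and $H_{n_P,n_A}^{n_P+c}$; the parameter values are arbitrary illustrations.

```python
import random

def simulate(n_P: int, n_A: int, c: int, trials: int = 200_000) -> float:
    """Average number of un-jammed PU bands, |X \\ Y|, when the SO and the SU
    draw their bands uniformly at random from the n_P + c reserved bands."""
    bands = range(n_P + c)
    total = 0
    for _ in range(trials):
        X = set(random.sample(bands, n_P))  # SO: actual PU assignment
        Y = set(random.sample(bands, n_A))  # SU: jammed bands
        total += len(X - Y)                 # un-jammed PU signals
    return total / trials

n_P, n_A, c = 50, 30, 40
T = n_P * (1 - n_A / (n_P + c))  # value of the game, Eq. (3)
H = n_A * n_P / (n_P + c)        # expected number of jammed bands
print(f"simulated: {simulate(n_P, n_A, c):.3f}, T: {T:.3f}, H: {H:.3f}")
```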

2.2. Second Step of the Game: To Make the Tradeoff between Communication Reliability and Its Cost

In the second step of the game, to make the tradeoff between communication reliability and the cost of such reliability, the SO, knowing the equilibrium strategy of the first step, wants to choose an appropriate amount of obfuscation $c$, so as to specify the domain of uncertainty with the objective of maximizing the difference between reliable PU communication (i.e., unjammed communication signals) and the cost of maintaining this reliability. Thus, by Proposition 1, the expected payoff to the SO is given as follows:

$$v_U(c) = q_0(n_P - C_U c) + q_1\left(T_{n_P,n_A}^{n_P+c} - C_U c\right) = q_0(n_P - C_U c) + q_1\left(n_P\left(1 - \frac{n_A}{n_P+c}\right) - C_U c\right).$$

The goal of the SO is to maximize his payoff $v_U(c)$, i.e., to find $c$ such that,

$$c = \arg\max_{c \in \{0,\ldots,n-n_P-n_T\}} v_U(c).$$
Proposition 2.
In the second step of the game, to achieve the optimal tradeoff between communication reliability and the cost of this reliability, the SO has to announce $n_P + c$ bands that are reserved for the PU's usage, where,

$$c = \begin{cases}
0, & \frac{n_A}{n_P} \le \frac{C_U}{q_1}, \\
A_-, & \frac{n_A n_P}{(n-n_T)^2} < \frac{C_U}{q_1} < \frac{n_A}{n_P},\ v_U(A_-) > v_U(A_+), \\
A_+, & \frac{n_A n_P}{(n-n_T)^2} < \frac{C_U}{q_1} < \frac{n_A}{n_P},\ v_U(A_-) < v_U(A_+), \\
n - n_T - n_P, & \frac{C_U}{q_1} \le \frac{n_A n_P}{(n-n_T)^2},
\end{cases}$$

with

$$A_- = \left\lfloor \sqrt{\frac{q_1 n_P n_A}{C_U}} - n_P \right\rfloor \quad\text{and}\quad A_+ = \left\lceil \sqrt{\frac{q_1 n_P n_A}{C_U}} - n_P \right\rceil$$

($\lfloor \xi \rfloor$ and $\lceil \xi \rceil$ being the floor and ceiling functions, mapping a real number $\xi$ to the largest integer not exceeding $\xi$ and to the smallest integer not less than $\xi$, respectively).
Proof. 
Note that,

$$\frac{dv_U(c)}{dc} = q_1\frac{n_P n_A}{(n_P+c)^2} - C_U$$

and,

$$\frac{d^2 v_U(c)}{dc^2} = -\frac{2 q_1 n_P n_A}{(n_P+c)^3} < 0.$$

Thus, $v_U$ is a strictly concave function, and it has a unique maximum in $[0, n-n_P-n_T]$, which is given as follows:

$$c = \begin{cases}
0, & q_1\frac{n_A}{n_P} \le C_U, \\
\sqrt{\frac{q_1 n_P n_A}{C_U}} - n_P, & \frac{q_1 n_A n_P}{(n-n_T)^2} < C_U < \frac{q_1 n_A}{n_P}, \\
n - n_P - n_T, & C_U \le \frac{q_1 n_A n_P}{(n-n_T)^2}.
\end{cases}$$

Since we have to maximize $v_U$ only over the integer points of the interval $[0, n-n_P-n_T]$, the result follows. ☐
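As an illustration (our own sketch, not code from the paper), the closed-form optimizer of Proposition 2 can be checked against a brute-force search over all feasible integer decoy counts; the parameter values below are arbitrary.

```python
import math

def v_U(c: int, n_P: int, n_A: int, q1: float, C_U: float) -> float:
    """Expected SO payoff for c decoys; q0 = 1 - q1 (law-obedient SU)."""
    T = n_P * (1 - n_A / (n_P + c))              # Proposition 1
    return (1 - q1) * (n_P - C_U * c) + q1 * (T - C_U * c)

def optimal_c(n_P: int, n_A: int, n: int, n_T: int, q1: float, C_U: float) -> int:
    """Optimal number of decoy bands according to Proposition 2."""
    c_max = n - n_P - n_T
    if n_A / n_P <= C_U / q1:                    # decoys too expensive
        return 0
    if C_U / q1 <= n_A * n_P / (n - n_T) ** 2:   # decoys cheap: reserve all spares
        return c_max
    A = math.sqrt(q1 * n_P * n_A / C_U) - n_P    # real-valued maximizer
    return max((math.floor(A), math.ceil(A)),
               key=lambda c: v_U(c, n_P, n_A, q1, C_U))

n_P, n_A, n, n_T, q1, C_U = 50, 30, 200, 60, 0.5, 0.05
brute = max(range(n - n_P - n_T + 1), key=lambda c: v_U(c, n_P, n_A, q1, C_U))
print(optimal_c(n_P, n_A, n, n_T, q1, C_U), brute)  # the two should agree
```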
Figure 2a illustrates how the number of bands reserved by the SO depends on the cost of uncertainty $C_U$ and the number of user communications $n_P$ the PU has to maintain, where the scenario and adversarial profile were set to $n_A = 30$, $n_T = 60$, $q_1 = 0.5$ and $n = 200$. Figure 2b,c illustrates how the number of extra reserved bands and the payoff to the SO depend on the cost of introducing uncertainty, $C_U$, and the probability $q_1$ that the SU is malicious when $n_P = 50$. This figure shows that in some cases there is a tradeoff between the cost of uncertainty and the probability of the threat, in which case it is possible to increase communication reliability.
Finally, we note that, in this section, in Proposition 1, the basic formula for the expected number of jammed signals was derived in closed form. This allows one to find the tradeoff between communication reliability and the cost of such reliability when the SU might be malicious, where reliability is maintained by means of introducing obfuscation. In the next section, we apply this formula to evaluate: (1) whether to believe the announced purpose or size of the SU's requests for spectrum resources; and (2) if one does not believe the SU, the intensity with which one should inspect his spectrum activity so as to prevent possible malicious activity.

3. Signaling Game: Whether It Is Worth Believing the Potentially Malicious SU, or Not Believing and Then Inspecting His Request

The malicious SU requests bands that he will supposedly use for legitimate purposes, but in doing so faces a dilemma. On the one hand, by requesting more bands under the pretext of legitimate use, the SU actually supports his malicious activity by reducing the number of bands available to the SO for obfuscating the PU's actual channel needs (and thus increases the likelihood that the SU will be able to interfere with the PU). On the other hand, requesting too many extra bands can make the SO suspicious of the request, which might then lead the SO to inspect the request by actively engaging additional resources (at a cost) to verify its truth.
We now examine the scenario from the SO perspective. Since the SO is not sure about whether the SU is malicious, upon observing the SU request, the SO also faces a dilemma: either to believe or not believe the request. Not believing the request will then lead to the SO inspecting the veracity of the request to check whether the SU is trying to deceive. This can reduce the likelihood of possible SU malicious activity if the inspection detects the deception, but it also introduces an extra expense for the SO. In this section we formulate this problem as a signaling game [38], and then, to obtain insight into the problem, we give an explicit solution for a basic subcase in the next subsection.
Signaling games deal with the situation where one player knows some information that the other player does not. The first player (the sender), who possesses some private information, might try to manipulate the situation by sharing some altered version of that information to his rival. The first player may be motivated to deceive his rival, for example, in order to gain a higher payoff from the game. The second player (the receiver), who does not possess this private information, has to make a decision based on the information shared by the sender and, in particular, must decide how to take any actions given the potential that the information exchanged was false. We note that signaling games are widely employed for modeling different aspects of malicious activity in networks, for example, in multi-step attack-defense scenarios [39], for intrusion detection in wireless sensor networks [40], for cyber security [41], for intrusion detection in mobile ad hoc networks [42], for investigation of deception in network security [43], for honeypot selection in computer networks [44], for studying the impact of uncertain cooperation among well-behaved and socially selfish nodes on the performance of data forwarding [45], and for achieving an always best connected service in vehicular networks [46].
In the model we now consider, we assume that there is no cost associated with introducing uncertainty, i.e., $C_U = 0$, and in this case the SO will allocate the maximal domain of uncertainty based on the SU's request, i.e., the PU bands will be $[1, n - n_T]$. We assume that $n_T \in \{1,\ldots,N_T\}$, where $N_T$ corresponds to an upper bound on the number of bands the SU could request for legitimate purposes. We assume that the SO has knowledge of $N_T$, and that he has statistical knowledge characterizing the SU's needs and behavior. Specifically, he knows that, with probability $q_{0i}$, the SU is law-obedient and $n_T = i$, and, with probability $q_{1i}$, the SU is malicious and $n_T = i$, where $i = 1,\ldots,N_T$.
The SU knows its own true $n_T$ and, if he is malicious, can submit a reservation for $n_T$, $n_T+1$, …, or up to $N_T$ bands. If the SU is law-obedient, he requests the correct number of bands. As noted earlier, requesting more bands is better for a malicious SU's jamming objective, but the SO can choose to inspect the requested bands to see whether each band is actually in use (or historically has been in use). If the SO finds that some bands are not in use, then the SU is fined $C_S$ per falsely-claimed, unused band. Meanwhile, however, we assume that there is an inspection cost $C_P$ per band inspected by the SO. The payoff to the SU is $R$ times the number of his successful transmissions, plus the expected number of interfered PU communications, minus the expected fine. The payoff to the SO is the expected number of successful transmissions by the PU minus inspection expenses. This situation is well-modeled by a signaling game, where the SU is the sender. The SU can be one of $2N_T$ types: type-$(1,i)$, which occurs with probability $q_{1i}$, when the SU is malicious and must support $n_T = i$ users; or type-$(0,i)$, which occurs with probability $q_{0i}$, when the SU is law-obedient and must support $n_T = i$ users, where $i = 1,\ldots,N_T$. Let $b$ be the number of bands requested by the SU. The malicious SU, knowing his $n_T$, submits a reservation for at least $n_T$ and at most $N_T$ bands. Of course, for $n_T = N_T$ (so, the SU has type-$(1,N_T)$), the only strategy he can apply is to request $N_T$ bands; but for $n_T < N_T$, he has $N_T - n_T + 1$ strategies, and thus he may request $b = n_T$, $b = n_T + 1$, …, or $N_T$ bands to be reserved. The law-obedient SU submits a reservation without deception, so a type-$(0,i)$ SU has only the strategy of requesting $b = i$ bands, $i = 1,\ldots,N_T$.
Denote by $A_{SU}(t,\tau)$ the set of (pure) strategies of a type-$(t,\tau)$ SU. Thus,

$$A_{SU}(t,\tau) = \begin{cases} \{\tau\}, & t = 0, \\ \{\tau,\ldots,N_T\}, & t = 1. \end{cases}$$

The SO observes the request for $b$ bands and must decide either to believe the SU, and thus supply him with $b$ reserved bands (denote this as strategy $B$, for "believe"), or not to believe and thereby inspect the bands (denote this as strategy $I$, for "inspect"). Thus, the set of (pure) strategies for the SO is $A_{SO}(b) = \{B, I\}$ for $b \ge 2$ and $A_{SO}(b) = \{B\}$ for $b = 1$.
We note that we implicitly assume that the inspection does not turn the malicious SU into a non-malicious SU. Rather, we assume that it might reduce the likelihood of his malicious activity being successful, and also reduces his payoff due to there being a fine for unused bands.
Of course, the result of the inspection depends on how the inspection protocol is performed and on the technical characteristics of the tools being employed (e.g., detection sensitivities, etc.). As a basic example of an inspection protocol, we consider the following simple protocol, where the SO starts by inspecting only one randomly chosen band out of the $n_T$ requested bands. Let $\alpha_{k,n_T}$ be the probability of detecting an unused band when there are $k$ unused bands among the $n_T$ requested bands; clearly, $\alpha_{k,n_T} = k/n_T$ in the case of perfect detection for inspection of an unused band. If an unused band is detected (and hence a false request is being made), then a total inspection of the remaining $n_T - 1$ requested bands is performed. Let $C_P$ be the cost per inspected band, and write $\bar{\alpha}_{k,n_T} := 1 - \alpha_{k,n_T}$. Thus:
(1) If the SU is of type-$(0,1)$, he has only the strategy of requesting one band, and the SO also has only the strategy of believing the request. Then, by Proposition 1, the payoff to the SO is $T_{n_P,0}^{n-1}$ and the payoff to the SU is $R$.
(2) If the SU is of type-$(0,i)$, $i \in \{2,\ldots,N_T\}$, he has the single strategy of requesting $i$ bands, while the SO has two strategies for each request of $i$ bands (to believe or to inspect):
  • if the SO believes, then the payoff to the SO is $T_{n_P,0}^{n-i}$ and the payoff to the SU is $Ri$;
  • if the SO inspects, then the payoff to the SO is $T_{n_P,0}^{n-i} - C_P$ and the payoff to the SU is $Ri$.
(3) If the SU is of type-$(1,i)$, $i \in \{1,\ldots,N_T\}$, he has $N_T - i + 1$ strategies to request:
  • if $i = 1$ and the SU requests one band, the SO also has only the strategy of believing the request. Then the payoff to the SO is $T_{n_P,n_A}^{n-1}$ and the payoff to the SU is $R + H_{n_P,n_A}^{n-1}$;
  • if either $i = 1$ and the SU requests $b > 1$ bands, or $i > 1$, then the SO has two strategies (to believe or to inspect):
    – if the SO believes, then the payoff to the SO is $T_{n_P,n_A}^{n-b}$ and the payoff to the SU is $Ri + H_{n_P,n_A}^{n-b}$;
    – if the SO inspects, then the payoff to the SO is equal to,
    $$\alpha_{b-i,b}\, T_{n_P,n_A}^{n-i} + \bar{\alpha}_{b-i,b}\, T_{n_P,n_A}^{n-b} - C_P - C_P (b-1)\,\alpha_{b-i,b},$$
    and the payoff to the SU is equal to,
    $$Ri + \alpha_{b-i,b}\, H_{n_P,n_A}^{n-i} + \bar{\alpha}_{b-i,b}\, H_{n_P,n_A}^{n-b} - \alpha_{b-i,b}\, C_S (b-i).$$
Denote the payoffs to the SU and the SO by $v_{SU}(t,\tau;b,a)$ and $v_{SO}(t,\tau;b,a)$ when circumstances are such that the SU is of type-$(t,\tau)$, the SU has chosen to request $b \in A_{SU}(t,\tau)$ bands (his message) for his own reservation, and the SO has selected strategy $a \in A_{SO}(b)$.
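A compact sketch of these payoff functions (our own rendering of the enumeration above, assuming the perfect-detection probability $\alpha_{k,b} = k/b$ from the text) may clarify the bookkeeping:

```python
def T(n_P: int, n_A: int, m: int) -> float:
    """Expected un-jammed PU signals when m bands form the PU's domain."""
    return n_P * (1 - n_A / m)

def H(n_P: int, n_A: int, m: int) -> float:
    """Expected jammed PU signals."""
    return n_A * n_P / m

def payoffs(t, tau, b, a, n, n_P, n_A, R, C_P, C_S):
    """Return (v_SO, v_SU) for an SU of type (t, tau) requesting b bands
    when the SO plays a in {"B", "I"} (believe or inspect)."""
    jam = n_A if t == 1 else 0                  # a law-obedient SU never jams
    if a == "B":                                # SO believes the request
        return T(n_P, jam, n - b), R * tau + H(n_P, jam, n - b)
    alpha = (b - tau) / b                       # perfect-detection probability
    v_SO = (alpha * T(n_P, jam, n - tau) + (1 - alpha) * T(n_P, jam, n - b)
            - C_P - C_P * (b - 1) * alpha)      # full inspection after detection
    v_SU = (R * tau + alpha * H(n_P, jam, n - tau)
            + (1 - alpha) * H(n_P, jam, n - b)
            - alpha * C_S * (b - tau))          # expected fine
    return v_SO, v_SU
```

For a law-obedient type ($t = 0$, so $b = \tau$), the detection probability collapses to zero and the inspection branch reduces to the payoffs $T_{n_P,0}^{n-i} - C_P$ and $Ri$ listed in case (2).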
We look for a perfect Bayesian equilibrium (PBE) in this signaling game [38]. A PBE is a pair $(b^*(t,\tau), a^*(b))$ of strategies such that:
(1) For each type-$(1,\tau)$, the malicious SU's request (message) $b^*(1,\tau)$ has to maximize the SU's payoff, i.e.,

$$b^*(1,\tau) = \arg\max_{b \in A_{SU}(1,\tau)} v_{SU}(1,\tau;b,a^*(b)).$$

(2) For each message $b$, the SO's strategy $a^*(b)$ has to maximize the SO's expected payoff, given his posterior beliefs $\mu(\cdot\,|\,b)$ about which type could have sent the request (message) $b$, i.e.,

$$a^*(b) = \arg\max_{a \in A_{SO}(b)} \sum_{t=0,1}\ \sum_{\tau=1,\ldots,N_T} \mu(t,\tau\,|\,b)\, v_{SO}(t,\tau;b,a),$$

with the posterior SO beliefs $\mu(\cdot\,|\,b)$ given by Bayes' rule,

$$\mu(t,\tau\,|\,b) = \frac{q_{t\tau}\,\mathrm{Prob}(b(t,\tau) = b)}{\sum_{t',\tau'} q_{t'\tau'}\,\mathrm{Prob}(b(t',\tau') = b)}$$

for

$$\sum_{t,\tau} q_{t\tau}\,\mathrm{Prob}(b(t,\tau) = b) > 0.$$
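The posterior update is mechanical; the following small sketch (ours) computes $\mu(\cdot\,|\,b)$ from the priors $q_{t\tau}$ and any behavior strategy of the SU, with illustrative numbers for $N_T = 2$:

```python
def posterior(q, request_prob, b):
    """Bayes' rule: q[(t, tau)] is the prior of type (t, tau) and
    request_prob[(t, tau)][b] is the probability that this type requests b."""
    weights = {ty: q[ty] * request_prob[ty].get(b, 0.0) for ty in q}
    total = sum(weights.values())
    if total == 0:
        return None  # request b is never sent: beliefs are unrestricted
    return {ty: w / total for ty, w in weights.items()}

# Illustration for N_T = 2: type-(1,1) inflates its request with probability 0.4.
q = {(0, 1): 0.1, (0, 2): 0.3, (1, 1): 0.2, (1, 2): 0.4}
strategy = {(0, 1): {1: 1.0}, (0, 2): {2: 1.0},
            (1, 1): {1: 0.6, 2: 0.4}, (1, 2): {2: 1.0}}
print(posterior(q, strategy, 2))  # posterior over types after seeing b = 2
```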
This is a signaling game with finite sets of (pure) strategies, and, in general, randomized strategies have to be employed to solve it. To deal with such a problem numerically, the iterative forward-induction algorithm for solving signaling games based on the rationalizability approach suggested in [47] can be used. An alternative approach is to use the Gambit software package [48] for solving signaling games. To obtain insight into the problem and to see how the solution explicitly depends on the parameters of the scenario, in the next subsection we directly find the solution for a particular baseline case.

Explicit Solution for a Basic Case, $N_T = 2$

In this subsection, to gain insight into the problem, we present an explicit solution for the particular case $N_T = 2$. Let $\alpha = \alpha_{1,2}$ be the probability of detecting an unused band when there is one unused band among two requested bands. Note that, since $N_T = 2$, if an unused band is detected (and hence a false request has been made), then there is no need to engage the full set of bands in an inspection. In Figure 3, the decision diagram for this signaling game is presented.
To describe the main result, let us introduce the following notations:
  • Let $b = b(\tau, b)$ be the probability that the malicious SU of type-$(1,\tau)$ requests $b$ bands.
  • Let $a(i,\xi)$ be the conditional probability that the SO employs strategy $\xi$ when observing a request for $i$ bands.
Thus, $b$ and $a$ can be interpreted as randomized behavior strategies for the malicious SU and for the SO.
Proposition 3.
(a) If the cost associated with inspecting a band is high, i.e.,

$$C_P \ge \frac{q_{11}\,\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right)}{q_{02}+q_{12}+q_{11}},$$

then the equilibrium strategy for the SO is always to believe (i.e., strategy $B$), while the equilibrium strategy for the malicious SU is always to request two bands.
(b) If the inspection cost is small, i.e.,

$$C_P < \frac{q_{11}\,\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right)}{q_{02}+q_{12}+q_{11}},$$

then two subcases arise:
$(b_1)$ if the fine is small,

$$C_S < \left(H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}\right)(1-\alpha)/\alpha,$$

then the equilibrium strategy for the malicious SU is always to request two bands, and the SO always should inspect;
$(b_2)$ if the fine is large,

$$C_S > \left(H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}\right)(1-\alpha)/\alpha,$$

then in equilibrium both rivals apply mixed strategies, namely:

$$\begin{aligned}
b(1,1) &= \frac{\alpha q_{11}\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right) - C_P\,(q_{02}+q_{12}+q_{11})}{q_{11}\left[\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right) - C_P\right]}, \qquad
b(1,2) = \frac{C_P\,(q_{02}+q_{12})}{q_{11}\left[\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right) - C_P\right]}, \\
b(2,1) &= 0, \qquad b(2,2) = 1, \qquad a(1,B) = 1, \qquad a(1,I) = 0, \\
a(2,B) &= \frac{\alpha C_S - (1-\alpha)\left(H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}\right)}{\alpha\left(C_S + H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}\right)}, \qquad
a(2,I) = \frac{H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}}{\alpha\left(C_S + H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}\right)}.
\end{aligned}$$
Proof. 
Since $b(\tau, b)$ is the probability that the malicious SU of type-$(1,\tau)$ requests $b$ bands, it is clear that $b(2,2) = 1$ and $b(1,1) + b(1,2) = 1$. Thus, the SU's strategy $b$ can be uniquely defined using only the single component $b(1,2)$.
Then, by the definition of $b$, the marginal probability $\gamma_i$ of observing a request for $i$ bands is given as follows:

$$\gamma_1 = b(1,1)\,q_{11} + q_{01} = q_{01} + q_{11}(1 - b(1,2)), \qquad \gamma_2 = b(1,2)\,q_{11} + b(2,2)\,q_{12} + q_{02} = q_{02} + q_{12} + q_{11} b(1,2).$$

The SO can build his belief about the SU from the SU's request by considering the conditional probability $\mu(t,\tau\,|\,j)$ that a request for $j$ bands came from an SU of type-$(t,\tau)$, as follows:

$$\begin{aligned}
&\mu(0,1|1) = \frac{q_{01}}{q_{01} + q_{11}(1 - b(1,2))}, \quad \mu(0,2|1) = 0, \quad \mu(1,1|1) = \frac{q_{11}(1 - b(1,2))}{q_{01} + q_{11}(1 - b(1,2))}, \quad \mu(1,2|1) = 0, \\
&\mu(0,1|2) = 0, \quad \mu(0,2|2) = \frac{q_{02}}{q_{02} + q_{12} + q_{11} b(1,2)}, \quad \mu(1,1|2) = \frac{q_{11} b(1,2)}{q_{02} + q_{12} + q_{11} b(1,2)}, \quad \mu(1,2|2) = \frac{q_{12}}{q_{02} + q_{12} + q_{11} b(1,2)}.
\end{aligned} \tag{8}$$

Let $E_b v_{SO}(\xi, b)$ be the expected payoff to the SO when the SU applies strategy $b$ and the SO, observing a request for $b$ bands, employs strategy $\xi$. Then,

$$\begin{aligned}
E_1 v_{SO}(B, b) &= T_{n_P,0}^{n-1}\,\mu(0,1|1) + T_{n_P,n_A}^{n-1}\,\mu(1,1|1), \\
E_2 v_{SO}(B, b) &= T_{n_P,0}^{n-2}\,\mu(0,2|2) + T_{n_P,n_A}^{n-2}\,\mu(1,1|2) + T_{n_P,n_A}^{n-2}\,\mu(1,2|2), \\
E_2 v_{SO}(I, b) &= \left(T_{n_P,0}^{n-2} - C_P\right)\mu(0,2|2) + \left(\alpha\, T_{n_P,n_A}^{n-1} + \bar{\alpha}\, T_{n_P,n_A}^{n-2} - C_P\right)\mu(1,1|2) + \left(T_{n_P,n_A}^{n-2} - C_P\right)\mu(1,2|2).
\end{aligned} \tag{9}$$
For a fixed SU strategy $b$, the best-response SO strategy is $BR_{SO}^b(b) = \arg\max_{\xi} E_b v_{SO}(\xi, b)$. By (8) and (9), we have that,

$$BR_{SO}^1(b) = B, \tag{10}$$

$$BR_{SO}^2(b) = \begin{cases} B, & C_P > \Xi, \\ \{B, I\}, & C_P = \Xi, \\ I, & C_P < \Xi, \end{cases} \tag{11}$$

with,

$$\Xi = \alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right)\mu(1,1|2) \overset{\text{by (8)}}{=} \frac{q_{11} b(1,2)\,\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right)}{q_{02} + q_{12} + q_{11} b(1,2)}.$$

Note that, for

$$C_P \ge \frac{q_{11}\,\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right)}{q_{02}+q_{12}+q_{11}}, \tag{12}$$

the following inequality holds for any $b(1,2) \in [0,1]$:

$$C_P > \Xi. \tag{13}$$
Thus, if (12) holds, then $BR_{SO}^2(b) = B$. Since (10) also holds, (12) implies that the SO equilibrium strategy is to believe, independent of the SU's request. Then, the SU equilibrium strategy is always to request two bands.
Let us now suppose that (12) does not hold. Thus,

$$C_P < \frac{q_{11}\,\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right)}{q_{02}+q_{12}+q_{11}}. \tag{14}$$

Note that,

$$b(1,2) = \frac{C_P\,(q_{02}+q_{12})}{q_{11}\left[\alpha\left(T_{n_P,n_A}^{n-1} - T_{n_P,n_A}^{n-2}\right) - C_P\right]} \tag{15}$$

is the unique root in $b(1,2)$ of the equation,

$$C_P = \Xi.$$

Then, (14) implies that the $b(1,2)$ given by (15) is within $(0,1)$. Thus, by (11), if (14) holds, then (15) defines an SU strategy that leaves the SO indifferent between his responses given his beliefs, and thus it is another possible candidate for the SU equilibrium strategy.
Since $a(i,\xi)$ is the conditional probability that the SO employs strategy $\xi$ when observing a request for $i$ bands, it is clear that $a(1,B) = 1$, $a(1,I) = 0$ and $a(2,B) + a(2,I) = 1$. Then $a$ is uniquely defined by its single component $a(2,B)$.
If $a$ is such that the payoff to the SU of type-$(1,1)$ is insensitive to all of his requests, then $a$ is the equilibrium strategy. This condition is met when the expected values at the ends of the branches in the decision tree (Figure 3) are equal, i.e.,

$$\left(R + H_{n_P,n_A}^{n-2}\right)a(2,B) + \left(R + \alpha H_{n_P,n_A}^{n-1} + \bar{\alpha} H_{n_P,n_A}^{n-2} - \alpha C_S\right)a(2,I) = \left(R + H_{n_P,n_A}^{n-1}\right)a(1,B) = R + H_{n_P,n_A}^{n-1}. \tag{16}$$

Solving the last equation for $a(2,B)$ yields:

$$a(2,B) = \frac{\alpha C_S + \bar{\alpha}\left(H_{n_P,n_A}^{n-1} - H_{n_P,n_A}^{n-2}\right)}{\left(C_S - H_{n_P,n_A}^{n-1} + H_{n_P,n_A}^{n-2}\right)\alpha}.$$

Thus, if $C_S \ge \left(H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}\right)(1-\alpha)/\alpha$, then $a(2,B) \in [0,1]$, and this $a$, jointly with the $b$ given by (15), gives an equilibrium.
If $C_S < \left(H_{n_P,n_A}^{n-2} - H_{n_P,n_A}^{n-1}\right)(1-\alpha)/\alpha$, then, instead of the equality (16), the following inequality holds for any $a$:

$$\left(R + H_{n_P,n_A}^{n-2}\right)a(2,B) + \left(R + \alpha H_{n_P,n_A}^{n-1} + \bar{\alpha} H_{n_P,n_A}^{n-2} - \alpha C_S\right)a(2,I) > R + H_{n_P,n_A}^{n-1}. \tag{17}$$

The left side of this inequality attains its minimum at $a(2,B) = 0$ and $a(2,I) = 1$. Also, by (14) and (11), the best response to $b(1,2) = 1$ is to inspect. Thus, in this situation, requesting two bands and inspecting the request combine to give the equilibrium, and this implies the result. ☐
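A short sketch (ours, under the same perfect-detection assumption $\alpha = \alpha_{1,2}$ and with arbitrary illustrative parameters) evaluates the two thresholds of Proposition 3 and reports which equilibrium applies:

```python
def equilibrium_NT2(n, n_P, n_A, alpha, C_P, C_S, q02, q11, q12):
    """Classify the equilibrium of the N_T = 2 signaling game (Proposition 3).
    The prior q01 does not enter the thresholds, so it is omitted."""
    T = lambda m: n_P * (1 - n_A / m)   # expected un-jammed PU signals
    H = lambda m: n_A * n_P / m         # expected jammed PU signals
    T1, T2, H1, H2 = T(n - 1), T(n - 2), H(n - 1), H(n - 2)
    cp_thr = q11 * alpha * (T1 - T2) / (q02 + q12 + q11)
    cs_thr = (H2 - H1) * (1 - alpha) / alpha
    if C_P >= cp_thr:                   # case (a): inspection too costly
        return "SO always believes; malicious SU always requests 2 bands"
    if C_S < cs_thr:                    # case (b1): fine too small to deter
        return "malicious SU always requests 2 bands; SO always inspects"
    # case (b2): mixed equilibrium
    b12 = C_P * (q02 + q12) / (q11 * (alpha * (T1 - T2) - C_P))
    a2B = (alpha * C_S - (1 - alpha) * (H2 - H1)) / (alpha * (C_S + H2 - H1))
    return f"mixed equilibrium: b(1,2) = {b12:.3f}, a(2,B) = {a2B:.3f}"

print(equilibrium_NT2(n=200, n_P=30, n_A=10, alpha=0.5, C_P=0.005, C_S=0.7,
                      q02=0.3, q11=0.3, q12=0.3))
```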
Thus, we have shown in Proposition 3 that the game has either a pooling or a mixed equilibrium. The pooling equilibrium means that all the malicious SU types request the same number of bands; in our particular case, this corresponds to two bands. In the pooling equilibrium, the SO learns nothing from the SU's request.
Figure 4 illustrates the threshold lines for switching between the equilibrium strategies as functions of the number of bands, with $\alpha = 0.5$.
Figure 5a,b illustrates the probability $a(2,B)$ of believing a request for two bands and the probability $b(1,2)$ of requesting two bands when only one legitimate communication has to be supported, as functions of the number of bands $n$ and the probability that the malicious SU is of type-$(1,1)$, with scenario parameters $C_P = 0.005$, $C_S = 0.7$, $n_A = 10$, $n_P = 30$, $\alpha = 0.5$, $q_{01} = 0.1$, $q_{02} = 0.3$ and $q_{11} + q_{12} = 0.6$. Figure 5c illustrates how taking into account information about the nature of the SU can improve the SO's payoff. It is also very interesting to note that the optimal strategy for the SO, as well as its payoff, is discontinuous in the information describing the SU.

4. Conclusions

In this paper, we have presented a new game-theoretic framework that can be useful in designing a dynamic spectrum access channel management protocol when there is the potential of an untrustworthy secondary participant. The new framework incorporates statistical information describing whether the SU is malicious and intends to interfere with primary communications. Using two Bayesian game-theoretical models, we have shown that such a paradigm can lead to protocols that improve communication reliability. We have noted the interesting observation that the optimal strategy for the SO, as well as its payoff in this model, can be discontinuous in the statistical information describing the behavior characteristics of the SU. The implication of this discontinuity is that having precise knowledge of the statistical characterization of the SU is important, since in some regimes there is a threshold behavior in the communication reliability, while in other situations the characterization has only a minimal impact on the decisions being made. In particular, the practical interpretation of our analysis reveals that if one is operating in a channel-limited scenario, then using obfuscation (i.e., decoy channels) detracts too much from the objective of improving the combined performance (notably, obfuscation will impact the SU's legitimate performance). On the other hand, if the SO cannot afford to engage in verifying whether the announced schedule is accurate (e.g., such verification would involve deploying infrastructure that is too costly), then the PU should maximize its use of a decoy strategy while meeting its own needs.

Acknowledgments

This material is based upon work supported by DARPA contract HR0011-13-C-0082. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).

Author Contributions

Andrey Garnaev and Wade Trappe jointly conceived the problem formulation and the derivation of the solutions. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations and notations are used in this manuscript:
CR: Cognitive Radio
PU: Primary User
SO: Spectrum Owner
SU: Secondary User
$n$: Number of bands
$n_P$: Number of users the PU wants to reliably communicate with
$n_T$: Number of users the SU wants to reliably communicate with
$n_A$: Number of signals the SU can interfere with
$q_0$: The probability that the SU is law-obedient
$q_1$: The probability that the SU is malicious
$C_U$: The reservation cost for a decoy
$c$: The number of decoys
$X$: A pure strategy of the SO
$Y$: A pure strategy of the SU
$v_{SO}(X,Y)$: The payoff to the SO in the first step
$X_{n_P,[1,n_P+c]}$: A (mixed) strategy for the SO
$Y_{n_A,[1,n_P+c]}$: A (mixed) strategy for the SU
$H_{n_P,n_A}^{n_P+c}$: The expected number of successfully interfered bands
$T_{n_P,n_A}^{n_P+c}$: The expected number of non-interfered bands
$v_U$: The expected payoff to the SO in the second step
$N_T$: The upper bound on the number of bands the SU could request for legitimate purposes
$A_{SU}(t,\tau)$: The set of (pure) strategies of a type-$(t,\tau)$ SU
$C_P$: Inspection cost per band
$C_S$: Fine/penalty per falsely-claimed, unused band
$\alpha_{s,t}$: The probability of detecting an unused band when there are $s$ unused bands among $t$ requested bands
$R$: The SU's reward per successful requested communication
$b(\tau, b)$: The probability that the malicious SU of type-$(1,\tau)$ requests $b$ bands
$a(i,\xi)$: The conditional probability that the SO employs strategy $\xi$ when observing a request for $i$ bands
$\mu(t,\tau\,|\,j)$: The posterior probability that the SU is of type-$(t,\tau)$ given a request for $j$ bands
$q_{t\tau}$: The probability that the SU has type-$(t,\tau)$
$E_b v_{SO}(\xi, b)$: The expected payoff to the SO when the SU applies strategy $b$ and the SO employs strategy $\xi$ upon observing a request for $b$ bands
$\gamma_i$: The marginal probability of observing a request for $i$ bands

References

  1. Raman, C.; Yates, R.D.; Mandayam, N.B. Scheduling variable rate links via a spectrum server. In Proceedings of the First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN), Baltimore, MD, USA, 8–11 November 2005; pp. 110–118. [Google Scholar]
  2. Buddhikot, M.M.; Kolodzy, P.; Miller, S.; Ryan, K.; Evans, J. DIMSUMnet: New directions in wireless networking using coordinated dynamic spectrum. In Proceedings of the Sixth IEEE International Symposium on a World of Wireless Mobile and Multimedia Networks, Taormina-Giardini, Naxos, Italy, 13–16 June 2005; pp. 78–85. [Google Scholar]
  3. Raychaudhuri, D.; Baid, A. NASCOR: Network Assisted Spectrum Coordination Service for Coexistence between Heterogeneous Radio Systems. IEICE Trans. Commun. 2014, E97-B, 251–260. [Google Scholar] [CrossRef]
  4. Park, J.-M.; Reed, J.H.; Clancy, T.C. Security and Enforcement in Spectrum Sharing. Proc. IEEE 2014, 102, 270–281. [Google Scholar] [CrossRef]
  5. Bhattacharjee, S.; Sengupta, S.; Chatterjee, M. Vulnerabilities in Cognitive Radio Networks: A Survey. Comput. Commun. 2013, 36, 1387–1398. [Google Scholar] [CrossRef]
  6. El-Hajj, W.; Safa, H.; Guizani, M. Survey of Security Issues in Cognitive Radio Networks. J. Internet Technol. 2012, 12, 181–198. [Google Scholar]
  7. Khare, A.; Saxena, M.; Thakur, R.S.; Chourasia, K. Attacks and Preventions of Cognitive Radio Network-A Survey. Int. J. Adv. Res. Comput. Eng. Technol. 2013, 2, 1002–1006. [Google Scholar]
  8. Federal Communications Commission (FCC). National Broadband Plan: Connecting America. 2010. Available online: http://www.broadband.gov/plan/ (accessed on 11 December 2013).
  9. President’s Council of Advisors on Science and Technology (PCAST). Report to the President Realizing the Full Potential of Government-Held Spectrum to Spur Economic Growth. 2012. Available online: http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast_spectrum_report_final_july_20_2012.pdf (accessed on 11 December 2013).
  10. Lackpour, A.; Luddy, M.; Winters, J. Overview of interference mitigation techniques between WiMAX networks and ground based radar. In Proceedings of the 20th Annual Wireless and Optical Communications Conference, Newark, NJ, USA, 15–16 April 2011; pp. 1–5. [Google Scholar]
  11. Sanders, F.H.; Sole, R.L.; Carroll, J.E.; Secrest, G.S.; Allmon, T.L. Analysis and Resolution of RF Interference to Radars Operating in the Band 2700–2900 MHz from Broadband Communication Transmitters; NTIA Technical Report TR-13-490; United States Department of Commerce: Washington, DC, USA, 2012.
  12. Khawar, A.; Abdel-Hadi, A.; Clancy, T.C. Spectrum sharing between S-band radar and LTE cellular system: A spatial approach. In Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks, McLean, VA, USA, 1–4 April 2014; pp. 7–14. [Google Scholar]
  13. Ji, Z.; Liu, K.J.R. Dynamic Spectrum Sharing: A Game Theoretical Overview. IEEE Commun. Mag. 2007, 45, 88–94. [Google Scholar] [CrossRef]
  14. Han, Z.; Niyato, D.; Saad, W.; Basar, T.; Hjrungnes, A. Game Theory in Wireless and Communication Networks: Theory, Models, and Applications; Cambridge University Press: New York, NY, USA, 2012. [Google Scholar]
  15. Hausken, K. Information sharing among firms and cyber attacks. J. Account. Public Policy 2007, 26, 639–688. [Google Scholar] [CrossRef]
  16. La, Q.D.; Quek, T.Q.S.; Lee, J.; Jin, S.; Zhu, H. Deceptive Attack and Defense Game in Honeypot-Enabled Networks for the Internet of Things. IEEE Internet Things J. 2016, 3, 1025–1035. [Google Scholar] [CrossRef]
  17. Wu, Y.; Wang, B.; Liu, K.J.R.; Clancy, T.C. Repeated Open Spectrum Sharing Game with Cheat-Proof Strategies. IEEE Trans. Wirel. Commun. 2009, 8, 1922–1933. [Google Scholar]
  18. Garnaev, A.; Liu, Y.; Trappe, W. Anti-jamming Strategy versus a Low-Power Jamming Attack When Intelligence of Adversary’s Attack Type is Unknown. IEEE Trans. Signal Inf. Process. Netw. 2016, 2, 49–56. [Google Scholar] [CrossRef]
  19. Meamari, E.; Afhamisisi, K.; Shahhoseini, H.S. An Analysis on Interactions among Secondary User and Unknown Jammer in Cognitive Radio Systems by Fictitious Play. In Proceedings of the 10th International ISC Conference on Information Security and Cryptology (ISCISC 2013), Yazd, Iran, 29–30 August 2013; pp. 1–6. [Google Scholar]
  20. Khalil, K.; Ekici, E. Multiple Access Game with a Cognitive Jammer. In Proceedings of the 46th Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 4–7 November 2012; pp. 1383–1387. [Google Scholar]
  21. Aziz, F.M.; Shamma, J.S.; Stuber, G.L. Resilience of LTE Networks against Smart Jamming Attacks. In Proceedings of the IEEE Global Communications Conference (GLOBECOM 2014), Austin, TX, USA, 8–12 December 2014; pp. 734–739. [Google Scholar]
  22. Garnaev, A.; Trappe, W. One-time Spectrum Coexistence in Dynamic Spectrum Access When the Secondary User may be Malicious. IEEE Trans. Inf. Forensics Secur. 2015, 10, 1064–1075. [Google Scholar] [CrossRef]
  23. Garnaev, A.; Trappe, W. A Bandwidth Monitoring Strategy Under Uncertainty of the Adversary’s Activity. IEEE Trans. Inf. Forensics Secur. 2016, 11, 837–849. [Google Scholar] [CrossRef]
  24. Estiri, M.; Khademzadeh, A. A Game-Theoretical Model for Intrusion Detection in Wireless Sensor Networks. In Proceedings of the 23rd Canadian Conference on Electrical and Computer Engineering (CCECE 2010), Calgary, AB, Canada, 2–5 May 2010; pp. 1–5. [Google Scholar]
  25. Theodorakopoulos, G.; Baras, J.S. Game Theoretic Modeling of Malicious Users in Collaborative Networks. IEEE J. Sel. Areas Commun. 2008, 26, 1317–1327. [Google Scholar] [CrossRef]
26. Hamilton, S.N.; Miller, W.L.; Ott, A.; Saydjari, O.S. Challenges to Applying Game Theory to the Domain of Information Warfare. In Proceedings of the 4th Information Survivability Workshop, Vancouver, BC, Canada, 2002. [Google Scholar]
  27. Garnaev, A.; Baykal-Gursoy, M.; Poor, H.V. Security Games with Unknown Adversarial Strategies. IEEE Trans. Cybern. 2016, 46, 2291–2299. [Google Scholar] [CrossRef] [PubMed]
28. Liu, Y.; Comaniciu, C.; Mani, H. A Bayesian Game Approach for Intrusion Detection in Wireless Ad Hoc Networks. In Proceedings of the Workshop on Game Theory for Communications and Networks (GameNets), Pisa, Italy, 2006. [Google Scholar]
29. Sagduyu, Y.E.; Ephremides, A. A Game-theoretic Analysis of Denial of Service Attacks in Wireless Random Access. Wirel. Netw. 2009, 15, 651–666. [Google Scholar] [CrossRef]
30. Garnaev, A.; Trappe, W. Secret Communication When the Eavesdropper Might Be an Active Adversary. In Multiple Access Communications; Lecture Notes in Computer Science; Jonsson, M., Vinel, A., Bellalta, B., Belyaev, E., Eds.; Springer: Halmstad, Sweden, 2014; Volume 8715, pp. 121–136. [Google Scholar]
31. Garnaev, A.; Trappe, W. Stationary Equilibrium Strategies for Bandwidth Scanning. In Multiple Access Communications; Lecture Notes in Computer Science; Jonsson, M., Vinel, A., Bellalta, B., Marina, N., Dimitrova, D., Fiems, D., Eds.; Springer: Vilnius, Lithuania, 2013; Volume 8310, pp. 168–183. [Google Scholar]
  32. Garnaev, A.; Trappe, W.; Kung, C.-T. Optimizing Scanning Strategies: Selecting Scanning Bandwidth in Adversarial RF Environments. In Proceedings of the 8th International Conference on Cognitive Radio Oriented Wireless Networks (CROWNCOM 2013), Washington, DC, USA, 8–10 July 2013; pp. 148–153. [Google Scholar]
  33. Garnaev, A.; Trappe, W. Bandwidth Scanning when Facing Interference Attacks Aimed at Reducing Spectrum Opportunities. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1916–1930. [Google Scholar] [CrossRef]
  34. Xiao, L.; Liu, J.; Mandayam, N.B.; Poor, H.V. Prospect Theoretic Analysis of Anti-jamming Communications in Cognitive Radio Networks. In Proceedings of the IEEE Global Communications Conference (GLOBECOM 2014), Austin, TX, USA, 8–12 December 2014; pp. 746–751. [Google Scholar]
  35. Nguyen, K.C.; Alpcan, T.; Basar, T. Stochastic games for security in networks with interdependent nodes. In Proceedings of the International Conference on Game Theory for Networks (GAMENETS 2009), Istanbul, Turkey, 13–15 May 2009; pp. 697–703. [Google Scholar]
36. Calinescu, G.; Kapoor, S.; Qiao, K.; Shin, J. Stochastic Strategic Routing Reduces Attack Effects. In Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM 2011), Houston, TX, USA, 5–9 December 2011; pp. 1–5. [Google Scholar]
  37. Jin, X.; Li, L.E.; Vanbever, L.; Rexford, J. SoftCell: Scalable and Flexible Cellular Core Network Architecture. In Proceedings of the Ninth ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2013), Santa Barbara, CA, USA, 9–12 December 2013; ACM: New York, NY, USA, 2013; pp. 163–174. [Google Scholar]
38. Fudenberg, D.; Tirole, J. Game Theory; MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
  39. Lin, J.; Liu, P.; Jing, J. Using Signaling Games to Model the Multi-step Attack-Defense Scenarios on Confidentiality. In Decision and Game Theory for Security; Lecture Notes in Computer Science; Grossklags, J., Walrand, J., Eds.; Springer: Budapest, Hungary, 2012; Volume 7638, pp. 118–137. [Google Scholar]
  40. Estiri, M.; Khademzadeh, A. A Theoretical Signaling Game Model for Intrusion Detection in Wireless Sensor Networks. In Proceedings of the International Telecommunications Network Strategy and Planning Symposium (Networks), Warsaw, Poland, 27–30 September 2010; pp. 1–6. [Google Scholar]
41. Casey, W.; Morales, J.A.; Nguyen, T.; Spring, J.; Weaver, R.; Wright, E.; Metcalf, L.; Mishra, B. Cyber Security via Signaling Games: Toward a Science of Cyber Security. In Distributed Computing and Internet Technology; Lecture Notes in Computer Science; Natarajan, R., Ed.; Springer: Bhubaneswar, India, 2014; Volume 8337, pp. 34–42. [Google Scholar]
42. Patcha, A.; Park, J.-M. A Game Theoretic Approach to Modeling Intrusion Detection in Mobile Ad Hoc Networks. In Proceedings of the IEEE Workshop on Information Assurance and Security (WIAS), West Point, NY, USA, 2004; pp. 30–34. [Google Scholar]
  43. Carroll, T.E.; Grosu, D. A Game Theoretic Investigation of Deception in Network Security. Secur. Commun. Netw. 2011, 4, 1162–1172. [Google Scholar] [CrossRef]
  44. Pibil, R.; Lisy, V.; Kiekintveld, C.; Bosansky, B.; Pechoucek, M. Game Theoretic Model of Strategic Honeypot Selection in Computer Networks. In Decision and Game Theory for Security; Lecture Notes in Computer Science; Grossklags, J., Walrand, J., Eds.; Springer: Budapest, Hungary, 2012; Volume 7638, pp. 201–220. [Google Scholar]
  45. Xia, F.; Jedari, B.; Yang, L.T.; Ma, J.; Huang, R. A Signaling Game for Uncertain Data Delivery in Selfish Mobile Social Networks. IEEE Trans. Comput. Soc. Syst. 2016, 3, 100–112. [Google Scholar] [CrossRef]
  46. Mabrouk, A.; Kobbane, A.; Sabir, E.; Ben-Othman, J.; El Koutbi, M. A Signaling Game-Based Mechanism to Meet Always Best Connected Service in VANETs. In Proceedings of the IEEE Global Communications Conference (GLOBECOM 2015), San Diego, CA, USA, 6–10 December 2015; pp. 1–5. [Google Scholar]
  47. Battigalli, P. Rationalization in Signaling Games: Theory and Applications. Int. Game Theory Rev. 2006, 8, 67–93. [Google Scholar] [CrossRef]
  48. McKelvey, R.D.; McLennan, A.M.; Turocy, T.L. Gambit: Software Tools for Game Theory, Version 16.0.0. 2010. Available online: http://www.gambit-project.org (accessed on 11 December 2013).
Figure 1. A universal spectrum access scenario that will be used to generically model the information obfuscation problem associated with spectrum sharing. Here, the coordination between a primary user (PU) and a secondary user (SU) is administered through a spectrum owner (SO) that acts as a spectrum server.
Figure 2. (a) Number of extra bands reserved for the PU as a function of the cost of uncertainty C_U and the number of users the PU has to maintain (i.e., n_P); (b) number of extra reserved bands for the PU; and (c) the payoff to the SO as a function of the cost of uncertainty and the probability q_1 that the SU is malicious.
Figure 3. Diagram of how decisions are made.
Figure 4. Threshold lines for switching between the equilibrium strategies as a function of the number of bands.
Figure 5. (a) Probability of believing in the request of two bands, a(2,B); (b) probability of requesting two bands, if only one connection has to be supported, b(1,2); and (c) the payoff to the SO as functions of the number of bands n and the probability that the malicious SU is of type-(1,1).
