Article

Deceptive Information Retrieval

Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
*
Author to whom correspondence should be addressed.
Entropy 2024, 26(3), 244; https://doi.org/10.3390/e26030244
Submission received: 16 January 2024 / Revised: 29 February 2024 / Accepted: 5 March 2024 / Published: 10 March 2024

Abstract

We introduce the problem of deceptive information retrieval (DIR), in which a user wishes to download a required file out of multiple independent files stored in a system of databases while deceiving the databases, i.e., making the databases' predictions of the user-required file index incorrect with high probability. Conceptually, DIR is an extension of private information retrieval (PIR). In PIR, a user downloads a required file without revealing its index to any of the databases. The metric of deception is defined as the probability of error of the databases' prediction of the user-required file, minus the corresponding probability of error in PIR. The problem is defined on time-sensitive data that keep updating from time to time. In the proposed scheme, the user deceives the databases by sending real queries to download the required file at the time of the requirement and dummy queries at multiple distinct future time instances. The dummy queries manipulate the probabilities of sending each query for each file requirement, using which the databases make their predictions of the user-required file index. The proposed DIR scheme is based on a capacity-achieving probabilistic PIR scheme, and achieves rates lower than the PIR capacity due to the additional downloads made to deceive the databases. When the required level of deception is zero, the proposed scheme achieves the PIR capacity.

1. Introduction

Information is generally retrieved from a data storage system by directly requesting what is required. This is the most efficient form of information retrieval in terms of the download cost, as the user downloads exactly what is required. However, if the user does not want to reveal the required information to the data storage system from which it is retrieved, extra information must be requested to increase the uncertainty of the databases' knowledge of the user's requirement. This is the core idea of private information retrieval (PIR) [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15], where the user downloads a required file out of $K$ independent files stored in $N$ non-colluding databases without revealing the required file index. In PIR, the databases' prediction of the user-required file based on the received queries is uniformly distributed across all files. Hence, the probability of error of the databases' predictions in a PIR setting with $K$ files is $1-\frac{1}{K}$. In weakly private information retrieval [16,17], a certain amount of information on the user-required file index is revealed to the databases to reduce the download cost. In such cases, as the databases have more information on the file index that the user requests, the error probability of the databases' prediction is less than $1-\frac{1}{K}$. In this work, we study the case where the error probability of the databases' prediction is larger than $1-\frac{1}{K}$.
Note that, with no information received from the user at all, the databases can make a random guess on the user-required file index and reach an error probability of $1-\frac{1}{K}$. Therefore, to result in a prediction error that is larger than $1-\frac{1}{K}$, the user has to deceive the databases by sending fake information on the required file index. The goal of this work is to generate a scheme that allows a user to download a required file $k$, while forcing the databases' prediction of the user-required file index to be $\ell$, where $\ell \neq k$, in as many cases as possible. This is coined deceptive information retrieval (DIR). DIR is achieved by sending dummy queries to databases to manipulate the probabilities of sending each query for each file requirement, which results in incorrect predictions at the databases. However, sending dummy queries increases the download cost compared to PIR. Figure 1 shows the behavior of the prediction error probability and the corresponding download costs for different types of information retrieval. (The regions marked as "weakly PIR" and "DIR" in Figure 1 show the points that are conceptually valid for the two cases; this does not imply that every point in those regions is achievable. The achievable points corresponding to "weakly PIR" and "DIR" lie within the marked regions.)
The concept of deception has been studied as a tool for cyber defense [18,19,20,21,22], where the servers deceive attackers, adversaries, and eavesdroppers to eliminate any harmful activities. In all such cases, the deceiver (servers in this case) gains nothing from the deceived, i.e., attackers, adversaries, and eavesdroppers. In contrast, the main challenge in DIR is that what needs to be deceived is the same source of information that the user retrieves the required data from. This limits the freedom that a DIR scheme could employ to deceive the databases. To this end, we formulate the problem of DIR based on the key concepts used in PIR, while also incorporating a time dimension to aid deception.
The problem of DIR introduced in this paper considers a system of non-colluding databases storing $K$ independent files that are time-sensitive, i.e., files that keep updating from time to time. We assume that the databases only store the latest version of the files. A given user wants to download arbitrary files at arbitrary time instances. The correctness condition ensures that the user receives the required file right at the time of the requirement, while the condition for deception requires the databases' prediction of the user-required file to be incorrect with a probability that is greater than $1-\frac{1}{K}$, specified by the predetermined level of deception required in the system.
The scheme that we propose for DIR deceives the databases by sending dummy queries to the databases for each file requirement, at distinct time instances. From the user's perspective, each query is designed to play two roles, as a real and as a dummy query, with two different probability distributions. This allows the user to manipulate the overall probability of sending each query for each message requirement, which is known by the databases. The databases make predictions based on the received queries and the globally known probability distribution of the queries used for each file requirement. These predictions are incorrect with probability $>1-\frac{1}{K}$, as the probability distributions based on which the real queries are sent differ from the globally known overall distribution. This is the basic idea used in the proposed scheme, which allows a user to deceive the databases while also downloading the required file. The download cost of the proposed DIR scheme increases with the required level of deception $d$, and achieves the PIR capacity when $d=0$.

2. Problem Formulation and System Model

We consider $N$ non-colluding databases storing $K$ independent files, each consisting of $L$ uniformly distributed symbols from a finite field $\mathbb{F}_q$, i.e.,
$$H(W_1,\ldots,W_K)=\sum_{i=1}^{K}H(W_i)=KL,$$
where $W_i$ is the $i$th file. The files keep updating from time to time, and a given user wants to download an arbitrary file at arbitrary time instances $T_i$, $i\in\mathbb{N}$. We assume that all files are equally probable to be requested by the user.
The user sends queries at arbitrary time instances to download the required file while deceiving the databases. We assume that the databases are unaware of being deceived, which is fundamental to the concept of deception. Moreover, we assume that the databases are only able to store data (files, queries from users, time stamps of received queries, etc.) corresponding to the current time instance, and that the file updates at distinct time instances are mutually independent. Therefore, the user’s file requirements and the queries sent are independent of the stored files at all time instances, i.e.,
$$I(\theta[t],Q_n[t];W_{1:K}[t])=0,\quad n\in\{1,\ldots,N\},\ \forall t,$$
where $\theta[t]$ is the user's file requirement, $Q_n[t]$ is the query sent by the user to database $n$, and $W_{1:K}[t]$ is the set of $K$ files, all at time $t$ (the notation $1{:}K$ indicates all integers from 1 to $K$). At any given time $t$ when each database $n$, $n\in\{1,\ldots,N\}$, receives a query from the user, it sends the corresponding answer as a function of the received query and the stored files; thus,
$$H(A_n[t]\mid Q_n[t],W_{1:K}[t])=0,\quad n\in\{1,\ldots,N\},$$
where $A_n[t]$ is the answer received by the user from database $n$ at time $t$. At each time $T_i$, $i\in\mathbb{N}$, the user must be able to correctly decode the required file, that is,
$$H(W_{\theta[T_i]}\mid Q_{1:N}[T_i],A_{1:N}[T_i])=0,\quad \forall i\in\mathbb{N}.$$
At any given time $t$ when each database $n$, $n\in\{1,\ldots,N\}$, receives a query from the user, it makes a prediction of the user-required file index using the maximum a posteriori probability (MAP) estimate as follows,
$$\hat{\theta}_{\tilde{Q}}[t]=\arg\max_{i} P(\theta[t]=i\mid Q_n[t]=\tilde{Q}),\quad n\in\{1,\ldots,N\},$$
where $\hat{\theta}_{\tilde{Q}}[t]$ is the predicted user-required file index based on the realization of the received query $\tilde{Q}$ at time $t$. The probability of error of each database's prediction is defined as
$$P_e=\mathbb{E}\left[P\left(\hat{\theta}_{\tilde{Q}}[T_i]\neq\theta[T_i]\right)\right],$$
where the expectation is taken across all $\tilde{Q}$ and $T_i$. Note that in PIR, $P(\theta[t]=i\mid Q_n[t]=\tilde{Q})=P(\theta[t]=j\mid Q_n[t]=\tilde{Q})$ for all $i,j\in\{1,\ldots,K\}$ and any $\tilde{Q}$, which results in $P_e^{\mathrm{PIR}}=1-\frac{1}{K}$. Based on this, we define the metric of deception as
$$D=P_e-\left(1-\frac{1}{K}\right).$$
For PIR, the amount of deception is $D=0$, and for weakly PIR, where some amount of information is leaked on the user-required file index, the amount of deception takes a negative value, as the probability of error is smaller than $1-\frac{1}{K}$. The goal of this work is to generate schemes that meet a given level of deception $D=d>0$, while minimizing the normalized download cost, defined as
$$D_L=\frac{H(A_{1:N})}{L},$$
where $A_{1:N}$ represents all the answers received from all $N$ databases, corresponding to a single file requirement of the user. The DIR rate is defined as the reciprocal of $D_L$.

3. Main Result

In this section, we present the main result of this paper, along with some remarks. Consider a system of $N$ non-colluding databases containing $K$ independent files. A user is able to retrieve any file $k$, while deceiving the databases by leaking information that points to some other file $\ell\neq k$.
Theorem 1. 
Consider a system of $N$ non-colluding databases storing $K$ independent files. A required level of deception $d$, satisfying $0\leq d<\frac{(K-1)(N-1)}{K(N^K-N)}$, is achievable at a DIR rate
$$R=\left[\frac{1+\frac{N^K-N}{N-1}e^{\epsilon}}{1+(N^{K-1}-1)e^{\epsilon}}+\frac{N}{N-1}\left(2u-u(u+1)\alpha\right)\right]^{-1},$$
where
$$\epsilon=\ln\left(\frac{dKN+(K-1)(N-1)}{dKN+(K-1)(N-1)-dKN^K}\right),\quad \alpha=\frac{N+(N^K-N)e^{\epsilon}}{(N-1)e^{2\epsilon}+(N^K-N)e^{\epsilon}+1},\quad u=\left\lfloor\frac{1}{\alpha}\right\rfloor.$$
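For concreteness, the following minimal Python sketch evaluates the theorem for given $N$, $K$, and $d$ (the function name dir_rate is ours, and the use of $u=\lfloor 1/\alpha\rfloor$ follows Lemma 1); it also checks that $d=0$ recovers the PIR capacity:

```python
import math

def dir_rate(N: int, K: int, d: float) -> float:
    """Sketch: evaluate the achievable DIR rate of Theorem 1 for N databases,
    K files, and a required level of deception d."""
    d_max = (K - 1) * (N - 1) / (K * (N**K - N))
    if not (0 <= d < d_max):
        raise ValueError(f"d must satisfy 0 <= d < {d_max:.6f}")
    # epsilon, alpha and u as given in the theorem (u = floor(1/alpha) per Lemma 1)
    eps = math.log((d*K*N + (K-1)*(N-1)) / (d*K*N + (K-1)*(N-1) - d*K*N**K))
    e = math.exp(eps)
    alpha = (N + (N**K - N)*e) / ((N-1)*e**2 + (N**K - N)*e + 1)
    u = math.floor(1 / alpha)
    # normalized download cost and its reciprocal, the rate
    DL = (1 + (N**K - N)/(N-1)*e) / (1 + (N**(K-1) - 1)*e) \
         + N/(N-1) * (2*u - u*(u+1)*alpha)
    return 1 / DL

# d = 0 recovers the PIR capacity (1 - 1/N) / (1 - 1/N**K)
for N, K in [(2, 2), (3, 3), (2, 3)]:
    pir = (1 - 1/N) / (1 - 1/N**K)
    print(N, K, dir_rate(N, K, 0.0), pir)
```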
Remark 1. 
For given $N$ and $K$, $\epsilon\geq 0$ is a one-to-one continuous function of $d$, the required level of deception, and $\alpha\in(0,1]$ is a one-to-one continuous function of $\epsilon$. For a given $u\in\mathbb{Z}^+$, there exists a range of values of $\alpha$, specified by $\frac{1}{u+1}<\alpha\leq\frac{1}{u}$, which corresponds to a unique range of values of $\epsilon$, for which (9) is valid. Since $(0,1]=\bigcup_{u\in\mathbb{Z}^+}\left\{\alpha:\frac{1}{u+1}<\alpha\leq\frac{1}{u}\right\}$, there exists an achievable rate (as well as an achievable scheme) for any $\epsilon\geq 0$, as well as for any $d$ in the range $0\leq d<\frac{(K-1)(N-1)}{K(N^K-N)}$.
Remark 2. 
When the user-specified amount of deception is zero, i.e., $d=0$, the corresponding values of $\alpha$ and $u$ are $\alpha=1$ and $u=1$. The achievable rate for this case is $\frac{1-\frac{1}{N}}{1-\frac{1}{N^K}}$, which is equal to the PIR capacity.
Remark 3. 
The achievable DIR rate monotonically decreases with increasing amount of deception d for any given N and K.
Remark 4. 
The variation in the achievable DIR rate with the level of deception for different numbers of databases when the number of files is fixed at $K=3$ is shown in Figure 2. The achievable rate for different numbers of files when the number of databases is fixed at $N=2$ is shown in Figure 3. For any given $N$ and $K$, the rate decreases exponentially as the level of deception approaches its upper bound $\frac{(K-1)(N-1)}{K(N^K-N)}$.

4. DIR Scheme

The DIR scheme introduced in this section is designed for a system of $N$ non-colluding databases containing $K$ independent files, with a pre-determined amount of deception $d>0$ required. For each file requirement at time $T_i$, $i\in\mathbb{N}$, the user chooses a set of $M+1$ queries to be sent to database $n$, $n\in\{1,\ldots,N\}$, at time $T_i$ as well as at future time instances $t_{i,j}$, $j\in\{1,\ldots,M\}$, such that each $t_{i,j}>T_i$. The query sent at time $T_i$ is used to download the required file, while the rest of the $M$ queries are sent to deceive the databases. The queries sent at times $T_i$, $i\in\mathbb{N}$, and $t_{i,j}$, $j\in\{1,\ldots,M\}$, $i\in\mathbb{N}$, are known as real and dummy queries, respectively. The binary random variable $R$ is used to specify whether a query sent by the user is real or dummy, i.e., $R=1$ corresponds to a real query sent at time $T_i$, and $R=0$ corresponds to a dummy query sent at time $t_{i,j}$. Next, we define another classification of queries used in the proposed scheme.
Definition 1 
($\epsilon$-deceptive query). An $\epsilon$-deceptive query $\tilde{Q}$ with respect to file $k$ is defined as a query that always satisfies
$$\frac{P(Q_n=\tilde{Q}\mid\theta=k,R=1)}{P(Q_n=\tilde{Q}\mid\theta=\ell,R=1)}=e^{-\epsilon},\qquad \frac{P(\theta=k\mid Q_n=\tilde{Q})}{P(\theta=\ell\mid Q_n=\tilde{Q})}=e^{\epsilon},\qquad \forall\ell\in\{1,\ldots,K\},\ \ell\neq k,$$
for some $\epsilon>0$, where $Q_n$ and $\theta$ are the random variables representing a query sent to database $n$, $n\in\{1,\ldots,N\}$, and the user-required file index. An equivalent representation of (11) is given by
$$\frac{P(R=1\mid\theta=\ell)+\frac{P(Q_n=\tilde{Q}\mid\theta=\ell,R=0)}{P(Q_n=\tilde{Q}\mid\theta=\ell,R=1)}P(R=0\mid\theta=\ell)}{P(R=1\mid\theta=k)+\frac{P(Q_n=\tilde{Q}\mid\theta=k,R=0)}{P(Q_n=\tilde{Q}\mid\theta=k,R=1)}P(R=0\mid\theta=k)}=e^{-2\epsilon},\qquad \forall\ell\in\{1,\ldots,K\},\ \ell\neq k.$$
Definition 2 
(PIR query). A query $\tilde{Q}$ that satisfies (11) with $\epsilon=0$ for all $k\in\{1,\ldots,K\}$, i.e., a $0$-deceptive query, is known as a PIR query.
Remark 5. 
The intuition behind the definition of an $\epsilon$-deceptive query with respect to message $k$ in Definition 1 is as follows. Note that the second equation in (11) fixes the databases' prediction of the user's requirement as $W_k$ for the query $\tilde{Q}$. This is because the a posteriori probability corresponding to message $k$, when $\tilde{Q}$ is received by the databases, is greater than that of any other message $\ell$, $\ell\neq k$. However, the first equation in (11), which is satisfied at the same time, ensures that the user sends the query $\tilde{Q}$ with the least probability when the user needs to download message $k$, compared to the probabilities of sending $\tilde{Q}$ for other message requirements. In other words, since we assume equal priors, the query $\tilde{Q}$ is mostly sent when the user needs to download $W_\ell$ for $\ell\neq k$, and is rarely sent to download $W_k$, while the databases' prediction of the user-required message upon receiving query $\tilde{Q}$ is fixed at $W_k$, which is incorrect with high probability; hence, the deception.
At a given time $t$, there exists a set of queries, consisting of both deceptive and PIR queries, sent to the $N$ databases. Database $n$, $n\in\{1,\ldots,N\}$, is aware of the probability of receiving each query for each file requirement, i.e., $P(Q_n=\tilde{Q}\mid\theta=k)$, for $k\in\{1,\ldots,K\}$, $\tilde{Q}\in\mathcal{Q}$, where $\mathcal{Q}$ is the set of all queries. However, the databases are unaware of being deceived, and are unable to determine whether the received query $\tilde{Q}$ is real or dummy, or deceptive or PIR. The proposed scheme generates a list of real and dummy queries for a given $N$ and $K$, along with the probabilities of using them as $\epsilon$-deceptive and PIR queries, based on the required level of deception $d$. The scheme also characterizes the optimum number of dummy queries $M$ to be sent to the databases for each file requirement, to minimize the download cost. As an illustration of the proposed scheme, consider the following representative examples.

4.1. Example 1: Two Databases and Two Files, N = K = 2

In this example, we present how the proposed DIR scheme is applied in a system of two databases containing two files each. In the proposed scheme, the user generates $M+1$ queries for any given file requirement, consisting of one real query and $M$ dummy queries. The user sends the real query at the time of the requirement $T_i$, and the rest of the $M$ dummy queries at $M$ different future time instances $t_{i,j}$. Table 1 and Table 2 give the possible pairs of real queries that are sent to the two databases to retrieve $W_1$ and $W_2$, respectively, at time $T_i$, $i\in\mathbb{N}$. The probability of using each pair of queries is indicated in the first columns of Table 1 and Table 2. Note that the correctness condition in (4) is satisfied at each time $T_i$, as each row of Table 1 and Table 2 decodes files $W_1$ and $W_2$, respectively, with no error.
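For instance, in the third row of Table 1, database 1 returns $W_2$ and database 2 returns $W_1+W_2$, and the user recovers $W_1$ by a symbol-wise subtraction over $\mathbb{F}_q$. A minimal sketch of this decoding step (the field size $q=7$ and file length $L=4$ are arbitrary illustrative choices):

```python
import random

q, L = 7, 4                       # illustrative field size and file length
W1 = [random.randrange(q) for _ in range(L)]
W2 = [random.randrange(q) for _ in range(L)]

# Row 3 of Table 1: DB 1 answers with W2, DB 2 answers with W1 + W2 (mod q)
ans_db1 = W2
ans_db2 = [(a + b) % q for a, b in zip(W1, W2)]

# The user decodes W1 symbol-wise as (W1 + W2) - W2 over F_q
decoded = [(s - t) % q for s, t in zip(ans_db2, ans_db1)]
assert decoded == W1
```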
The dummy queries sent to each database at time $t_{i,j}$ are given in Table 3 and Table 4. The purpose of the dummy queries sent at future time instances is to deceive the databases by manipulating the a posteriori probabilities, which impact their predictions. For example, if the user wants to download $W_1$ at time $T_i$, the user selects one of the four query options in Table 1 based on the probabilities in column 1 (the values of $p$ and $p'$ are derived later in this section), and sends the corresponding queries to databases 1 and 2 at time $T_i$. Based on the information in Table 3, the user sends the query $W_1$ to both databases at $M$ distinct future time instances $t_{i,j}$, $j\in\{1,\ldots,M\}$.
Based on the information in Table 1, Table 2, Table 3 and Table 4, when the user-required file is $W_1$, the probability of each query being received by database $n$, $n\in\{1,2\}$, at an arbitrary time instance $t$ is calculated as follows. Let $P(R=1\mid\theta=i)=\alpha$ for $i\in\{1,2\}$. (The intuition behind $P(R=1\mid\theta=i)$ is the probability of a query received by any database being real when the user-required file index is $i$. For a fixed $M$, $P(R=1\mid\theta=i)=\frac{1}{M+1}$.) Then,
$$P(Q_n=W_1\mid\theta=1)=P(Q_n=W_1\mid\theta=1,R=1)P(R=1\mid\theta=1)+P(Q_n=W_1\mid\theta=1,R=0)P(R=0\mid\theta=1)=p\alpha+1-\alpha,$$
$$P(Q_n=W_2\mid\theta=1)=P(Q_n=W_2\mid\theta=1,R=1)P(R=1\mid\theta=1)+P(Q_n=W_2\mid\theta=1,R=0)P(R=0\mid\theta=1)=p'\alpha,$$
$$P(Q_n=W_1+W_2\mid\theta=1)=P(Q_n=W_1+W_2\mid\theta=1,R=1)P(R=1\mid\theta=1)+P(Q_n=W_1+W_2\mid\theta=1,R=0)P(R=0\mid\theta=1)=p'\alpha,$$
$$P(Q_n=\phi\mid\theta=1)=P(Q_n=\phi\mid\theta=1,R=1)P(R=1\mid\theta=1)+P(Q_n=\phi\mid\theta=1,R=0)P(R=0\mid\theta=1)=p\alpha.$$
Thus, writing these probabilities compactly, we have
$$P(Q_n=W_1\mid\theta=1)=p\alpha+1-\alpha,$$
$$P(Q_n=W_2\mid\theta=1)=p'\alpha,$$
$$P(Q_n=W_1+W_2\mid\theta=1)=p'\alpha,$$
$$P(Q_n=\phi\mid\theta=1)=p\alpha.$$
Similarly, when the user-required file is $W_2$, the corresponding probabilities are
$$P(Q_n=W_1\mid\theta=2)=p'\alpha,$$
$$P(Q_n=W_2\mid\theta=2)=p\alpha+1-\alpha,$$
$$P(Q_n=W_1+W_2\mid\theta=2)=p'\alpha,$$
$$P(Q_n=\phi\mid\theta=2)=p\alpha.$$
These queries and the corresponding probabilities of sending them to each database for each message requirement are known to the databases. However, the decomposition of these probabilities based on whether the query is real or dummy, i.e., Table 1, Table 2, Table 3 and Table 4, is not known by the databases. When database $n$, $n\in\{1,\ldots,N\}$, receives a query $\tilde{Q}$ at time $t$, it calculates the a posteriori probability distribution of the user-required file index to predict the user's requirement using (5). The a posteriori probabilities corresponding to the four queries received by database $n$, $n\in\{1,2\}$, are calculated as follows:
$$P(\theta=i\mid Q_n=\tilde{Q})=\frac{P(Q_n=\tilde{Q}\mid\theta=i)P(\theta=i)}{P(Q_n=\tilde{Q})}.$$
Then, the explicit a posteriori probabilities are given by
$$P(\theta=1\mid Q_n=W_1)=\frac{\frac{1}{2}(p\alpha+1-\alpha)}{P(Q_n=W_1)},\qquad P(\theta=2\mid Q_n=W_1)=\frac{\frac{1}{2}p'\alpha}{P(Q_n=W_1)},$$
$$P(\theta=1\mid Q_n=W_2)=\frac{\frac{1}{2}p'\alpha}{P(Q_n=W_2)},\qquad P(\theta=2\mid Q_n=W_2)=\frac{\frac{1}{2}(p\alpha+1-\alpha)}{P(Q_n=W_2)},$$
$$P(\theta=1\mid Q_n=W_1+W_2)=\frac{\frac{1}{2}p'\alpha}{P(Q_n=W_1+W_2)},\qquad P(\theta=2\mid Q_n=W_1+W_2)=\frac{\frac{1}{2}p'\alpha}{P(Q_n=W_1+W_2)},$$
$$P(\theta=1\mid Q_n=\phi)=\frac{\frac{1}{2}p\alpha}{P(Q_n=\phi)},\qquad P(\theta=2\mid Q_n=\phi)=\frac{\frac{1}{2}p\alpha}{P(Q_n=\phi)}.$$
While queries $\phi$ and $W_1+W_2$ are PIR queries as stated in Definition 2, queries $W_1$ and $W_2$ are $\epsilon$-deceptive with respect to file indices 1 and 2, respectively, for an $\epsilon$ that depends on the required amount of deception $d$. The values of $p$ and $p'$ in Table 1, Table 2, Table 3 and Table 4 are calculated based on the requirements in Definition 1 as follows. It is straightforward to see that $p'=pe^{\epsilon}$ follows from the first part of (11) for each query $\tilde{Q}=W_1$ and $\tilde{Q}=W_2$, which also gives $p=\frac{1}{2(1+e^{\epsilon})}$. The second part of (11) (as well as (12)) results in $\alpha=\frac{2}{1+e^{\epsilon}}$ for both $\epsilon$-deceptive queries $W_1$ and $W_2$. Based on the a posteriori probabilities (30)–(37), calculated by the databases using the information in (21)–(28), each database predicts the user's requirement each time it receives a query from the user. The predictions corresponding to each query received by database $n$, $n=1,2$, which are computed using (5), are shown in Table 5.
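These values can be sanity-checked numerically. The short sketch below (an illustration with an arbitrary deception level $d=0.1$) verifies that the query $W_1$ satisfies both ratios of Definition 1 under $p'=pe^{\epsilon}$, $p=\frac{1}{2(1+e^{\epsilon})}$, and $\alpha=\frac{2}{1+e^{\epsilon}}$, and that the overall query probabilities in (21)–(28) sum to one:

```python
import math

d = 0.1                                  # any d < 1/4
eps = math.log((1 + 4*d) / (1 - 4*d))
e = math.exp(eps)
p = 1 / (2 * (1 + e))                    # p in Tables 1 and 2
pp = p * e                               # p' = p e^eps
alpha = 2 / (1 + e)                      # P(R = 1 | theta = i)

# First condition of (11): real-query ratio for the deceptive query W1
assert math.isclose(p / pp, math.exp(-eps))

# Second condition of (11): posterior ratio, from the overall
# probabilities in (21)-(28) and uniform priors
num = p * alpha + (1 - alpha)            # P(Qn = W1 | theta = 1)
den = pp * alpha                         # P(Qn = W1 | theta = 2)
assert math.isclose(num / den, math.exp(eps))

# Overall query probabilities sum to one for each file requirement
assert math.isclose((p*alpha + 1 - alpha) + 2*pp*alpha + p*alpha, 1.0)
```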
Based on this information, when a database receives the query $\tilde{Q}=W_1$, it always decides that the requested message is $W_1$, and when it receives the query $\tilde{Q}=W_2$, it always decides that the requested message is $W_2$. For the queries $\tilde{Q}=\phi$ and $\tilde{Q}=W_1+W_2$, the databases flip a coin to choose either $W_1$ or $W_2$ as the requested message.
As the queries are symmetric across all databases, the probability of error corresponding to some query $\tilde{Q}$ received by database $n$ at time $T_i$ is given by
$$P(\hat{\theta}_{\tilde{Q}}[T_i]\neq\theta[T_i])=P(\theta[T_i]=1,\hat{\theta}_{\tilde{Q}}[T_i]=2\mid Q_n[T_i]=\tilde{Q})+P(\theta[T_i]=2,\hat{\theta}_{\tilde{Q}}[T_i]=1\mid Q_n[T_i]=\tilde{Q})$$
$$=\frac{1}{P(Q_n[T_i]=\tilde{Q})}\Big[P(\hat{\theta}_{\tilde{Q}}[T_i]=2\mid\theta[T_i]=1,Q_n[T_i]=\tilde{Q})P(Q_n[T_i]=\tilde{Q}\mid\theta[T_i]=1)P(\theta[T_i]=1)$$
$$\qquad+P(\hat{\theta}_{\tilde{Q}}[T_i]=1\mid\theta[T_i]=2,Q_n[T_i]=\tilde{Q})P(Q_n[T_i]=\tilde{Q}\mid\theta[T_i]=2)P(\theta[T_i]=2)\Big]$$
$$=\frac{1}{P(Q_n[T_i]=\tilde{Q})}\Big[P(\hat{\theta}_{\tilde{Q}}[T_i]=2\mid Q_n[T_i]=\tilde{Q})P(Q_n[T_i]=\tilde{Q}\mid\theta[T_i]=1)P(\theta[T_i]=1)$$
$$\qquad+P(\hat{\theta}_{\tilde{Q}}[T_i]=1\mid Q_n[T_i]=\tilde{Q})P(Q_n[T_i]=\tilde{Q}\mid\theta[T_i]=2)P(\theta[T_i]=2)\Big],$$
as the predictions only depend on the received queries. The explicit probabilities corresponding to the four queries are as follows (note that $P(Q_n[T_i]=\tilde{Q}\mid\theta[T_i]=i)$ refers to $P(Q_n=\tilde{Q}\mid\theta=i,R=1)$, as only real queries are sent at time $T_i$):
$$P(\hat{\theta}_{W_1}[T_i]\neq\theta[T_i])=\frac{1}{P(Q_n[T_i]=W_1)}\cdot\frac{e^{\epsilon}}{4(1+e^{\epsilon})},$$
$$P(\hat{\theta}_{W_2}[T_i]\neq\theta[T_i])=\frac{1}{P(Q_n[T_i]=W_2)}\cdot\frac{e^{\epsilon}}{4(1+e^{\epsilon})},$$
$$P(\hat{\theta}_{W_1+W_2}[T_i]\neq\theta[T_i])=\frac{1}{P(Q_n[T_i]=W_1+W_2)}\cdot\frac{e^{\epsilon}}{4(1+e^{\epsilon})},$$
$$P(\hat{\theta}_{\phi}[T_i]\neq\theta[T_i])=\frac{1}{P(Q_n[T_i]=\phi)}\cdot\frac{1}{4(1+e^{\epsilon})}.$$
As the same scheme is used for all user requirements at all time instances, the probability of error of each database’s prediction for this example is calculated using (6) as
$$P_e=\sum_{\tilde{Q}\in\mathcal{Q}}P(Q_n[T_i]=\tilde{Q})P(\hat{\theta}_{\tilde{Q}}[T_i]\neq\theta[T_i])=\frac{3e^{\epsilon}+1}{4(1+e^{\epsilon})},$$
where $\mathcal{Q}=\{W_1,W_2,W_1+W_2,\phi\}$, which results in a deception of $D=\frac{3e^{\epsilon}+1}{4(1+e^{\epsilon})}-\frac{1}{2}=\frac{e^{\epsilon}-1}{4(1+e^{\epsilon})}$. Therefore, for a required amount of deception $d<\frac{1}{4}$, the value of $\epsilon$ is chosen as $\epsilon=\ln\left(\frac{4d+1}{1-4d}\right)$.
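To see the deception mechanism end to end, the following Monte Carlo sketch samples real queries of Example 1 at the retrieval times $T_i$, applies the predictions of Table 5, and compares the empirical prediction-error probability against $\frac{3e^{\epsilon}+1}{4(1+e^{\epsilon})}$; the trial count and $d=0.1$ are arbitrary illustrative choices:

```python
import math, random

def simulate_example1(d: float, trials: int = 200_000) -> float:
    """Empirical prediction-error probability of a database in Example 1
    at the real-query times T_i, for a required deception level d < 1/4."""
    eps = math.log((1 + 4*d) / (1 - 4*d))
    e = math.exp(eps)
    p, pp = 1 / (2 * (1 + e)), e / (2 * (1 + e))   # p and p' = p e^eps
    errors = 0
    for _ in range(trials):
        theta = random.choice([1, 2])
        # real query received by database 1 (Tables 1 and 2):
        # W_theta or phi w.p. p each, the other file or the sum w.p. p' each
        r = random.random()
        if r < p:            query = f"W{theta}"
        elif r < 2*p:        query = "phi"
        elif r < 2*p + pp:   query = f"W{3-theta}"
        else:                query = "sum"
        # MAP predictions of Table 5
        if query == "W1":    guess = 1
        elif query == "W2":  guess = 2
        else:                guess = random.choice([1, 2])
        errors += (guess != theta)
    return errors / trials

d = 0.1
e = (1 + 4*d) / (1 - 4*d)                            # e^eps
print("empirical P_e :", simulate_example1(d))
print("analytical P_e:", (3*e + 1) / (4*(1 + e)))    # deception = P_e - 1/2
```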
The download cost of this scheme is computed as follows. As the scheme is symmetric across all file retrievals, and since the a priori probability distribution of the files is uniform, without loss of generality we can calculate the download cost of retrieving $W_1$. The download cost of retrieving $W_1$ for a user-specified amount of deception $d$ is given by
$$D_L=\frac{1}{L}\left[2Lp+2(2L)pe^{\epsilon}+2L\sum_{m=0}^{\infty}p_m m\right]=\frac{1+2e^{\epsilon}}{1+e^{\epsilon}}+2\mathbb{E}[M],$$
where $p_m$ is the probability of sending $m$ dummy queries per file requirement. To minimize the download cost, we need to find the probability mass function (PMF) of $M$ which minimizes $\mathbb{E}[M]$ such that $P(R=1\mid\theta=i)=\alpha=\frac{2}{1+e^{\epsilon}}$ is satisfied for any $i$. Note that for any $i$, $P(R=1\mid\theta=i)$ can be written as
$$P(R=1\mid\theta=i)=\alpha=\sum_{m=0}^{\infty}p_m\frac{1}{m+1}=\mathbb{E}\left[\frac{1}{M+1}\right],$$
where $M$ is the random variable representing the number of dummy queries sent to each database per file requirement. Thus, the following optimization problem needs to be solved, for a given $\epsilon$ that is a function of the given value of $d$:
$$\min\ \mathbb{E}[M]\quad \text{s.t.}\quad \mathbb{E}\left[\frac{1}{M+1}\right]=\frac{2}{1+e^{\epsilon}}=\alpha.$$
The solution to this problem is given in Lemma 1, and the resulting minimum download cost is given by
$$D_L=\frac{1+2e^{\epsilon}}{1+e^{\epsilon}}+4u-2u(u+1)\alpha,$$
where $u=\left\lfloor\frac{1}{\alpha}\right\rfloor$. When $d=0$, it follows that $\epsilon=0$ and $u=1$, and the achievable rate is $\frac{2}{3}$, which is the same as the PIR capacity for $N=2$ and $K=2$.
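A short numerical sketch of this cost expression (a helper of ours; it assumes $u=\lfloor 1/\alpha\rfloor$ and the optimal $\mathbb{E}[M]$ of Lemma 1) evaluates the Example 1 rate for a few values of $d$ and confirms the rate $\frac{2}{3}$ at $d=0$:

```python
import math

def example1_rate(d: float) -> float:
    """Achievable DIR rate of the N = K = 2 example for deception level d < 1/4
    (sketch; u = floor(1/alpha) and E[M] = 2u - u(u+1)alpha as in Lemma 1)."""
    eps = math.log((1 + 4*d) / (1 - 4*d))
    e = math.exp(eps)
    alpha = 2 / (1 + e)
    u = math.floor(1 / alpha)
    E_M = 2*u - u*(u + 1)*alpha                 # optimal expected number of dummies
    DL = (1 + 2*e) / (1 + e) + 2*E_M            # normalized download cost
    return 1 / DL

print(example1_rate(0.0))    # 2/3, the PIR capacity for N = K = 2
print(example1_rate(0.2))    # strictly smaller rate for d > 0
```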

4.2. Example 2: Three Databases and Three Files, N = K = 3

Similar to the previous example, the user sends real queries at time $T_i$ and dummy queries at times $t_{i,j}$, $j\in\{1,\ldots,M\}$, for each $i\in\mathbb{N}$, based on the probabilities shown in Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11. The notation $W_i^j$ in these tables corresponds to the $j$th segment of $W_i$, where each file $W_i$ is divided into $N-1=2$ segments of equal size. Database $n$, $n\in\{1,\ldots,N\}$, only knows the overall probabilities of receiving each query for each file requirement of the user, shown in Table 12. These overall probabilities, which are calculated using
$$P(Q_n=\tilde{Q}\mid\theta=k)=P(Q_n=\tilde{Q}\mid\theta=k,R=1)P(R=1\mid\theta=k)+P(Q_n=\tilde{Q}\mid\theta=k,R=0)P(R=0\mid\theta=k),\quad k\in\{1,\ldots,K\},$$
where $P(R=1\mid\theta=i)=\alpha$ for any $i=1,2,3$, are the same for each database, as the scheme is symmetric across all databases. The entry "other queries" in Table 12 includes all queries that are sums of two or three elements. Based on this available information, each database calculates the a posteriori probability of the user-required file index conditioned on each received query $\tilde{Q}$ using (29). Each query of the form $W_k^j$ is an $\epsilon$-deceptive query with respect to file $k$, where $\epsilon$ is a function of the required amount of deception, which is derived towards the end of this section. All other queries, including the null query and all sums of two or three elements, are PIR queries. As all $\epsilon$-deceptive queries must satisfy (11), the value of $p'$ is given by $p'=pe^{\epsilon}$, which results in $p=\frac{1}{3(1+8e^{\epsilon})}$, based on the same arguments used in the previous example. Using (11) and (29) for any given deceptive query, the value of $\alpha$ is calculated as follows. Note that for a query of the form $W_k^j$, for each database $n$, $n\in\{1,\ldots,N\}$, using $P(\theta=k)=\frac{1}{K}$, we have
$$\frac{P(\theta=k\mid Q_n=W_k^j)}{P(\theta=\ell\mid Q_n=W_k^j)}=\frac{P(Q_n=W_k^j\mid\theta=k)}{P(Q_n=W_k^j\mid\theta=\ell)}=\frac{p\alpha+\frac{1}{2}(1-\alpha)}{p'\alpha},\quad \ell\neq k.$$
The value of $\alpha$ is computed as $\alpha=\frac{1}{2p(e^{2\epsilon}-1)+1}$, using (11) and (53), by solving $\frac{p\alpha+\frac{1}{2}(1-\alpha)}{p'\alpha}=e^{\epsilon}$.
Assume that the user wants to download $W_2$ at some time $T_i$. Then, at time $T_i$, the user picks a row of queries from Table 8 based on the probabilities in the first column, and sends them to each of the three databases. Note that correctness is satisfied, as it is possible to decode $W_2$ from any row of Table 8. Next, the user picks $M$ future time instances $t_{i,j}$, $j\in\{1,\ldots,M\}$, and at each time $t_{i,j}$ the user independently and randomly picks a row from Table 9 and sends the queries to the databases. This completes the scheme, and the value of $M$ that minimizes the download cost is calculated at the end of this example.
The databases make predictions with the received query at each time $t$, based on the information available in Table 12. As the a posteriori probabilities $P(\theta=k\mid Q_n=\tilde{Q})$ are proportional to the corresponding probabilities $P(Q_n=\tilde{Q}\mid\theta=k)$ from (29), the databases' predictions (using (5)) and the corresponding probabilities are shown in Table 13.
The probability of error for each type of query is calculated as follows. First, consider the $\epsilon$-deceptive queries with respect to file $k$, given by $W_k^j$, $j\in\{1,2\}$. For these queries, the error probability from the perspective of database $n$, $n\in\{1,\ldots,N\}$, is given by
$$P(\hat{\theta}_{W_k^j}[T_i]\neq\theta[T_i])=P(\theta[T_i]\neq k\mid Q_n[T_i]=W_k^j)=\sum_{\ell=1,\ell\neq k}^{3}P(\theta[T_i]=\ell\mid Q_n[T_i]=W_k^j)=\sum_{\ell=1,\ell\neq k}^{3}\frac{P(Q_n[T_i]=W_k^j\mid\theta[T_i]=\ell)P(\theta[T_i]=\ell)}{P(Q_n[T_i]=W_k^j)}=\frac{1}{P(Q_n[T_i]=W_k^j)}\cdot\frac{2}{3}pe^{\epsilon},$$
where (54) follows from the fact that the databases' prediction for a received query of the form $W_k^j$ is file $k$ with probability 1 from Table 13, and the probabilities in (57) are obtained from the real query tables, as they correspond to queries sent at time $T_i$. Next, the probability of error corresponding to each of the other queries, i.e., PIR queries that include the null query and sums of two or three elements, is given by
$$P(\hat{\theta}_{\tilde{Q}}[T_i]\neq\theta[T_i])=P(\hat{\theta}[T_i]\neq\theta[T_i]\mid Q_n[T_i]=\tilde{Q})=\sum_{j=1}^{3}\sum_{m=1,m\neq j}^{3}\frac{P(\hat{\theta}[T_i]=m,\theta[T_i]=j,Q_n[T_i]=\tilde{Q})}{P(Q_n[T_i]=\tilde{Q})}=\sum_{j=1}^{3}\sum_{m=1,m\neq j}^{3}\frac{P(\hat{\theta}[T_i]=m\mid\theta[T_i]=j,Q_n[T_i]=\tilde{Q})P(Q_n[T_i]=\tilde{Q}\mid\theta[T_i]=j)P(\theta[T_i]=j)}{P(Q_n[T_i]=\tilde{Q})}$$
$$=\frac{1}{P(Q_n[T_i]=\tilde{Q})}\times\begin{cases}\frac{2p}{3}, & \text{if } \tilde{Q}=\phi,\\ \frac{2pe^{\epsilon}}{3}, & \text{if } \tilde{Q} \text{ is of the form } \sum_{s=1}^{\ell}W_{k_s}^{j_s} \text{ for } \ell\in\{2,3\},\end{cases}$$
where (61) follows from the fact that $\hat{\theta}[T_i]$ is conditionally independent of $\theta[T_i]$ given $Q_n[T_i]$, from (5). The probability of error at each time $T_i$, $i\in\mathbb{N}$, is the same, as the scheme is identical at each $T_i$ and across all file requirements. Therefore, the probability of error of each database's prediction, using (6), is given by
$$P_e=P(\hat{\theta}[T_i]\neq\theta[T_i])=\sum_{\tilde{Q}\in\mathcal{Q}}P(Q_n[T_i]=\tilde{Q})P(\hat{\theta}_{\tilde{Q}}[T_i]\neq\theta[T_i])$$
$$=\sum_{k=1}^{3}\sum_{j=1}^{2}P(Q_n[T_i]=W_k^j)\cdot\frac{1}{P(Q_n[T_i]=W_k^j)}\cdot\frac{2}{3}pe^{\epsilon}+P(Q_n[T_i]=\phi)\cdot\frac{1}{P(Q_n[T_i]=\phi)}\cdot\frac{2p}{3}+20\,P(Q_n[T_i]=\hat{Q})\cdot\frac{1}{P(Q_n[T_i]=\hat{Q})}\cdot\frac{2pe^{\epsilon}}{3}$$
$$=4pe^{\epsilon}+\frac{2p}{3}+\frac{40pe^{\epsilon}}{3}=\frac{52e^{\epsilon}+2}{9(8e^{\epsilon}+1)},$$
where $\mathcal{Q}$ is the set of all queries and $\hat{Q}$ is a query of the form $\sum_{s=1}^{\ell}W_{k_s}^{j_s}$ for $\ell\in\{2,3\}$. The resulting amount of deception is
$$D=P_e-\left(1-\frac{1}{K}\right)=\frac{52e^{\epsilon}+2}{9(8e^{\epsilon}+1)}-\frac{2}{3}=\frac{4(e^{\epsilon}-1)}{9(8e^{\epsilon}+1)}.$$
Therefore, for a required amount of deception $d<\frac{1}{18}$, $\epsilon$ is chosen as $\epsilon=\ln\left(\frac{9d+4}{4(1-18d)}\right)$.
Without loss of generality, consider the cost of downloading $W_1$, which is the same as the expected download cost, as the scheme is symmetric across all file retrievals:
$$D_L=\frac{1}{L}\left[L\times 3p+\frac{3L}{2}\times 24pe^{\epsilon}+\frac{3L}{2}\sum_{m=0}^{\infty}p_m m\right]=\frac{1+12e^{\epsilon}}{1+8e^{\epsilon}}+\frac{3}{2}\mathbb{E}[M].$$
To find the scheme that achieves the minimum $D_L$, we need to find the minimum $\mathbb{E}[M]$ that satisfies $P(R=1\mid\theta=i)=\alpha=\mathbb{E}\left[\frac{1}{M+1}\right]=\frac{3(1+8e^{\epsilon})}{2e^{2\epsilon}+24e^{\epsilon}+1}$, i.e., the following optimization problem needs to be solved:
$$\min\ \mathbb{E}[M]\quad \text{s.t.}\quad \mathbb{E}\left[\frac{1}{M+1}\right]=\frac{3(1+8e^{\epsilon})}{2e^{2\epsilon}+24e^{\epsilon}+1}=\alpha.$$
The solution to this problem is given in Lemma 1. The resulting minimum download cost for a given value of $\epsilon$, i.e., required level of deception $d$, is given by
$$D_L=\frac{1+12e^{\epsilon}}{1+8e^{\epsilon}}+\frac{3}{2}\left(2u-u(u+1)\alpha\right),\qquad \alpha=\frac{3(1+8e^{\epsilon})}{2e^{2\epsilon}+24e^{\epsilon}+1},$$
where $u=\left\lfloor\frac{1}{\alpha}\right\rfloor$. When $d=0$, it follows that $\epsilon=0$, $\alpha=1$, and $u=1$, and the achievable rate is $\frac{9}{13}$, which is equal to the PIR capacity for the case $N=3$, $K=3$.
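As a cross-check, the Example 2 expressions can be compared numerically against the general expressions of Theorem 1 and Section 4.3 evaluated at $N=K=3$; the sketch below (our own illustration; function names are arbitrary) shows that both agree for admissible $d<\frac{1}{18}$:

```python
import math

def rate_example2(d: float) -> float:
    """DIR rate from the N = K = 3 example expressions (sketch, d < 1/18)."""
    e = (9*d + 4) / (4 * (1 - 18*d))             # e^eps for this example
    alpha = 3 * (1 + 8*e) / (2*e**2 + 24*e + 1)
    u = math.floor(1 / alpha)
    return 1 / ((1 + 12*e) / (1 + 8*e) + 1.5 * (2*u - u*(u + 1)*alpha))

def rate_general(N: int, K: int, d: float) -> float:
    """DIR rate from the general expressions of Theorem 1 (sketch)."""
    e = (d*K*N + (K-1)*(N-1)) / (d*K*N + (K-1)*(N-1) - d*K*N**K)   # e^eps
    alpha = (N + (N**K - N)*e) / ((N-1)*e**2 + (N**K - N)*e + 1)
    u = math.floor(1 / alpha)
    DL = N/(N-1) * (1 - 1/(N + (N**K - N)*e) + 2*u - u*(u+1)*alpha)
    return 1 / DL

for d in [0.0, 0.02, 0.05]:
    print(round(rate_example2(d), 6), round(rate_general(3, 3, d), 6))
# at d = 0 both give 9/13, the PIR capacity for N = K = 3
```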

4.3. Generalized DIR Scheme for Arbitrary N and K

In the general DIR scheme proposed in this work, at each time $T_i$, $i\in\mathbb{N}$, when the user needs to download some file $W_k$, the user sends a set of real queries to each of the $N$ databases. These queries are picked based on a certain probability distribution, defined on all possible sets of real queries. For the same file requirement, the user sends $M$ dummy queries at future time instances $t_{i,j}$, $j\in\{1,\ldots,M\}$, where $t_{i,j}>T_i$. The dummy queries sent at each time $t_{i,j}$ are randomly selected from a subset of the real queries. We assume that the databases are unaware of being deceived, and treat both real and dummy queries the same when calculating their predictions of the user-required file index each time they receive a query. The overall probabilities of a given user sending each query for each file requirement are known by the databases. However, the decomposition of these probabilities based on whether each query is used as a real or a dummy query is not known by the databases. It is also assumed that the databases only store the queries received at the current time instance.
The main components of the general scheme include: (1) the $N^K$ possible sets of real queries to be sent to the $N$ databases for each file requirement and their probabilities; (2) the $N-1$ possible sets of dummy queries and their probabilities; (3) the overall probabilities of sending each query for each of the $K$ file requirements of the user. Note that (1) and (2) are only known by the user, while (3) is known by the databases.
As shown in the examples considered, the set of all possible real queries takes the form of the queries in the probabilistic PIR scheme in [23,24], with a non-uniform probability distribution, unlike in PIR. The real query table used when retrieving W k consists of the following queries:
  • Single blocks: $W_k$ is divided into $N-1$ parts, and each part is requested from one of $N-1$ databases, while requesting nothing ($\phi$) from the remaining database. All cyclic shifts of these queries are considered in the real query table.
  • Sums of two blocks/single block: One database is used to download $W_j^{\ell}$, $\ell\in\{1,\ldots,N-1\}$, $j\neq k$, and each of the remaining $N-1$ databases is used to download $W_k^r+W_j^{\ell}$ for each $r\in\{1,\ldots,N-1\}$. All cyclic shifts of these queries are also considered as separate possible sets of queries.
  • Sums of three/two blocks: One database is used to download $W_{j_1}^{\ell_1}+W_{j_2}^{\ell_2}$, $\ell_1,\ell_2\in\{1,\ldots,N-1\}$ and $j_1\neq j_2\neq k$. Each of the remaining $N-1$ databases is used to download $W_{j_1}^{\ell_1}+W_{j_2}^{\ell_2}+W_k^r$ for each $r\in\{1,\ldots,N-1\}$. All cyclic shifts of these queries are also considered as separate possible sets of queries.
  • Sums of $K$ and $K-1$ blocks: The above process is repeated for all sums of blocks, up to sums of $K-1$ and $K$ blocks.
Out of the $N^K$ different sets of queries described above in the real query table, all queries except $\phi$ among the single blocks, i.e., queries of the form $W_k^{\ell}$, $\ell\in\{1,\ldots,N-1\}$, are chosen as $\epsilon$-deceptive queries with respect to file $k$, for each $k\in\{1,\ldots,K\}$, and are included in the set of dummy queries sent to the databases when the user-required file index is $k$. The $N-1$ $\epsilon$-deceptive queries $W_k^r$, $r\in\{1,\ldots,N-1\}$, corresponding to the $k$th file requirement, must guarantee the condition in (11). For that, we assign
$$P(Q_n=W_k^r\mid\theta=k,R=1)=p,\quad r\in\{1,\ldots,N-1\},$$
and
$$P(Q_n=W_k^r\mid\theta=j,R=1)=pe^{\epsilon},\quad r\in\{1,\ldots,N-1\},\ j\neq k,$$
for each database $n$, $n\in\{1,\ldots,N\}$. The rest of the queries, i.e., $\phi$ and sums of $\ell$ blocks with $\ell\in\{2,\ldots,K\}$, are PIR queries in the proposed scheme. Note that the query $\phi$ is always coupled with the $\epsilon$-deceptive queries with respect to file index $k$ (required file) for correctness (see Table 6, Table 8 and Table 10). Thus, $\phi$ is assigned the corresponding probability given by
$$P(Q_n=\phi\mid\theta=m,R=1)=p,\quad m\in\{1,\ldots,K\},\ n\in\{1,\ldots,N\}.$$
Similarly, as the rest of the PIR queries are coupled with $\epsilon$-deceptive queries with respect to file indices $j$, $j\neq k$, or with other PIR queries, they are assigned the corresponding probability given by
$$P(Q_n=\hat{Q}\mid\theta=m,R=1)=pe^{\epsilon},\quad m\in\{1,\ldots,K\},\ n\in\{1,\ldots,N\},$$
where $\hat{Q}$ is any PIR query in the form of an $\ell$-sum with $\ell\in\{2,\ldots,K\}$. Since the probabilities of the real queries sent for each file requirement must add up to one, i.e., $\sum_{\tilde{Q}\in\mathcal{Q}}P(Q_n=\tilde{Q}\mid\theta=m,R=1)=1$ for each $m\in\{1,\ldots,K\}$, $p$ is given by
$$p=\frac{1}{N+(N^K-N)e^{\epsilon}},$$
as there are $N$ query sets in the real query table with probability $p$, and $N^K-N$ sets with probability $pe^{\epsilon}$. Each $\epsilon$-deceptive query with respect to file index $k$ is chosen with equal probability to be sent to the databases as a dummy query at times $t_{i,j}$ when the file requirement at the corresponding time $T_i$ is $W_k$. Since there are $N-1$ deceptive queries,
$$P(Q_n=W_k^r\mid\theta=k,R=0)=\frac{1}{N-1},\quad r\in\{1,\ldots,N-1\},$$
and
$$P(Q_n=W_k^r\mid\theta=j,R=0)=0,\quad r\in\{1,\ldots,N-1\},\ j\neq k,$$
for each database $n$, $n\in\{1,\ldots,N\}$. Therefore, for all $\epsilon$-deceptive queries with respect to file index $k$ of the form $W_k^i$, the condition in (12) can be written as
$$\frac{\alpha}{\alpha+\frac{1}{p(N-1)}(1-\alpha)}=e^{-2\epsilon};$$
thus,
$$\alpha=\frac{1}{p(N-1)(e^{2\epsilon}-1)+1}=\frac{N+(N^K-N)e^{\epsilon}}{(N-1)e^{2\epsilon}+(N^K-N)e^{\epsilon}+1},$$
which characterizes $\alpha=\mathbb{E}\left[\frac{1}{M+1}\right]$. The information available to database $n$, $n\in\{1,\ldots,N\}$, is the overall probability of receiving each query for each file requirement of the user, $P(Q_n=\tilde{Q}\mid\theta=k)$, $k\in\{1,\ldots,K\}$, given by
$$P(Q_n=\tilde{Q}\mid\theta=k)=P(Q_n=\tilde{Q}\mid\theta=k,R=1)P(R=1\mid\theta=k)+P(Q_n=\tilde{Q}\mid\theta=k,R=0)P(R=0\mid\theta=k).$$
For $\epsilon$-deceptive queries with respect to file index $k$, i.e., $W_k^j$, $j\in\{1,\ldots,N-1\}$, the overall probability in (80), from the perspective of database $n$, $n\in\{1,\ldots,N\}$, is given by
$$P(Q_n=W_k^j\mid\theta=\ell)=\begin{cases}\alpha p+\frac{1-\alpha}{N-1}=\frac{e^{2\epsilon}}{(N-1)(e^{2\epsilon}-1)+N+(N^K-N)e^{\epsilon}}, & \ell=k,\\[2pt] \alpha pe^{\epsilon}=\frac{e^{\epsilon}}{(N-1)(e^{2\epsilon}-1)+N+(N^K-N)e^{\epsilon}}, & \ell\neq k.\end{cases}$$
The probability of sending the null query $\phi$ to database $n$, $n\in\{1,\ldots,N\}$, for each file requirement $k$, $k\in\{1,\ldots,K\}$, is
$$P(Q_n=\phi\mid\theta=k)=\alpha p=\frac{1}{(N-1)(e^{2\epsilon}-1)+N+(N^K-N)e^{\epsilon}}.$$
For the rest of the PIR queries, denoted by $\hat{Q}$, i.e., queries of the form $\sum_{s=1}^{\ell}W_{i_s}^{j_s}$ for $\ell\in\{2,\ldots,K\}$, the overall probability in (80), known by each database $n$, $n\in\{1,\ldots,N\}$, for each file requirement $k$, $k\in\{1,\ldots,K\}$, is given by
$$P(Q_n=\hat{Q}\mid\theta=k)=\alpha pe^{\epsilon}=\frac{e^{\epsilon}}{(N-1)(e^{2\epsilon}-1)+N+(N^K-N)e^{\epsilon}}.$$
Based on the query received at a given time $t$, each database $n$, $n\in\{1,\ldots,N\}$, calculates the a posteriori probability of the user-required file index being $k$, $k\in\{1,\ldots,K\}$, using
$$P(\theta=k\mid Q_n=\tilde{Q})=\frac{P(Q_n=\tilde{Q}\mid\theta=k)P(\theta=k)}{P(Q_n=\tilde{Q})}.$$
Since we assume uniform priors, i.e., $P(\theta=k)=\frac{1}{K}$ for all $k\in\{1,\ldots,K\}$, the posteriors are directly proportional to $P(Q_n=\tilde{Q}\mid\theta=k)$ for each $\tilde{Q}$. Therefore, the databases predict the user-required file index for each received query using (5) and (81)–(83). For example, when the query $W_1^1$ is received, the maximum of $P(\theta=k\mid Q_n=W_1^1)$ in (5) is obtained for $k=1$ from (81) and (84). The prediction corresponding to any received query is given in Table 14, along with the corresponding probability of choosing the given prediction (the superscript $j$ in the first column of Table 14 corresponds to any index in the set $\{1,\ldots,N-1\}$).
Based on the information in Table 14, the probability of error when a database $n$, $n\in\{1,\ldots,N\}$, receives the query $W_k^{\ell}$ at some time $T_i$ is given by
$$P(\hat{\theta}_{W_k^{\ell}}[T_i]\neq\theta[T_i])=P(\theta[T_i]\neq k\mid Q_n[T_i]=W_k^{\ell})=\sum_{j=1,j\neq k}^{K}P(\theta[T_i]=j\mid Q_n[T_i]=W_k^{\ell})=\sum_{j=1,j\neq k}^{K}\frac{P(Q_n[T_i]=W_k^{\ell}\mid\theta[T_i]=j)P(\theta[T_i]=j)}{P(Q_n[T_i]=W_k^{\ell})}=\frac{(K-1)pe^{\epsilon}}{K\,P(Q_n[T_i]=W_k^{\ell})},$$
where (88) follows from the fact that the user sends real queries based on the probabilities $P(Q_n=\tilde{Q}\mid\theta=k,R=1)$ at time $T_i$.
For all other queries $\tilde{Q}$, the corresponding probability of error is given by
$$P(\hat{\theta}_{\tilde{Q}}[T_i]\neq\theta[T_i])=P(\hat{\theta}[T_i]\neq\theta[T_i]\mid Q_n[T_i]=\tilde{Q})=\sum_{j=1}^{K}\sum_{m=1,m\neq j}^{K}\frac{P(\hat{\theta}[T_i]=m,\theta[T_i]=j,Q_n[T_i]=\tilde{Q})}{P(Q_n[T_i]=\tilde{Q})}=\sum_{j=1}^{K}\sum_{m=1,m\neq j}^{K}\frac{P(\hat{\theta}[T_i]=m\mid\theta[T_i]=j,Q_n[T_i]=\tilde{Q})P(Q_n[T_i]=\tilde{Q}\mid\theta[T_i]=j)P(\theta[T_i]=j)}{P(Q_n[T_i]=\tilde{Q})}$$
$$=\frac{1}{P(Q_n[T_i]=\tilde{Q})}\times\begin{cases}\frac{(K-1)p}{K}, & \text{if } \tilde{Q}=\phi,\\[2pt] \frac{(K-1)pe^{\epsilon}}{K}, & \text{if } \tilde{Q} \text{ is of the form } \sum_{s=1}^{\ell}W_{i_s}^{j_s},\ \ell\in\{2,\ldots,K\},\end{cases}$$
where (92) follows from the fact that $\hat{\theta}[T_i]$ is conditionally independent of $\theta[T_i]$ given $Q_n[T_i]$, from (5). The probability of error of each database's prediction is given by
$$P_e=\sum_{\tilde{Q}}P(Q_n[T_i]=\tilde{Q})P(\hat{\theta}[T_i]\neq\theta[T_i]\mid Q_n[T_i]=\tilde{Q})$$
$$=\sum_{k=1}^{K}\sum_{\ell=1}^{N-1}P(Q_n[T_i]=W_k^{\ell})\cdot\frac{(K-1)pe^{\epsilon}}{K\,P(Q_n[T_i]=W_k^{\ell})}+P(Q_n[T_i]=\phi)\cdot\frac{(K-1)p}{K\,P(Q_n[T_i]=\phi)}+\left(N^K-1-K(N-1)\right)P(Q_n[T_i]=\hat{Q})\cdot\frac{(K-1)pe^{\epsilon}}{K\,P(Q_n[T_i]=\hat{Q})}$$
$$=pe^{\epsilon}(K-1)(N-1)+\frac{(K-1)p}{K}+\frac{(K-1)pe^{\epsilon}\left(N^K-1-K(N-1)\right)}{K}=\frac{(K-1)\left(1+e^{\epsilon}(N^K-1)\right)}{K\left(N+(N^K-N)e^{\epsilon}\right)},$$
where $\hat{Q}$ in (94) represents the queries of the form $\sum_{s=1}^{\ell}W_{i_s}^{j_s}$ for $\ell\in\{2,\ldots,K\}$. Note that $P(Q_n[T_i]=\hat{Q})$ is the same for each $\hat{Q}$, as $P(Q_n[T_i]=\hat{Q}\mid\theta=j)=pe^{\epsilon}$ for each $\hat{Q}$ and all $j\in\{1,\ldots,K\}$ from (74). Thus, the amount of deception achieved by this scheme for a given $\epsilon$ is given by
$$D=P_e-\left(1-\frac{1}{K}\right)=\frac{(K-1)(N-1)(e^{\epsilon}-1)}{K\left(N+(N^K-N)e^{\epsilon}\right)}.$$
Therefore, for a required amount of deception $d$, satisfying $d<\frac{(K-1)(N-1)}{K(N^K-N)}$, the value of $\epsilon$ must be chosen as
$$\epsilon=\ln\left(\frac{dKN+(K-1)(N-1)}{dKN+(K-1)(N-1)-dKN^K}\right).$$
The download cost of the general scheme is
$$D_L=\frac{1}{L}\left[NpL+(N^K-N)pe^{\epsilon}\frac{NL}{N-1}+\frac{NL}{N-1}\mathbb{E}[M]\right]=Np+\frac{N(N^K-N)}{N-1}pe^{\epsilon}+\frac{N}{N-1}\mathbb{E}[M]=\frac{N}{N-1}\left[1-\frac{1}{N+(N^K-N)e^{\epsilon}}+\mathbb{E}[M]\right].$$
The following optimization problem needs to be solved to minimize the download cost while satisfying $\alpha=\frac{N+(N^K-N)e^{\epsilon}}{(N-1)e^{2\epsilon}+(N^K-N)e^{\epsilon}+1}$, from (49):
$$\min\ \mathbb{E}[M]\quad \text{s.t.}\quad \mathbb{E}\left[\frac{1}{M+1}\right]=\frac{N+(N^K-N)e^{\epsilon}}{(N-1)e^{2\epsilon}+(N^K-N)e^{\epsilon}+1}=\alpha.$$
Lemma 1. 
The solution to the optimization problem in (102) is given by
$$\mathbb{E}[M]=2u-u(u+1)\alpha,$$
where $u=\left\lfloor\frac{1}{\alpha}\right\rfloor$ for a given value of $\alpha$, which is specified by the required level of deception $d$.
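Lemma 1 can be checked numerically by truncating the support of $M$ and solving the resulting linear program; the sketch below (our own check; the truncation point m_max = 50 and the test values of $\alpha$ are arbitrary) compares the LP optimum with the closed form $2u-u(u+1)\alpha$:

```python
import math
import numpy as np
from scipy.optimize import linprog

def min_expected_dummies(alpha: float, m_max: int = 50) -> float:
    """Minimize E[M] over PMFs of M supported on {0, ..., m_max}
    subject to E[1/(M+1)] = alpha (numerical check of Lemma 1)."""
    m = np.arange(m_max + 1)
    c = m.astype(float)                          # objective: E[M]
    A_eq = np.vstack([1.0 / (m + 1), np.ones(m_max + 1)])
    b_eq = np.array([alpha, 1.0])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m_max + 1))
    return res.fun

for alpha in [0.95, 0.6, 0.45, 0.3]:
    u = math.floor(1 / alpha)
    closed_form = 2*u - u*(u + 1)*alpha
    print(alpha, round(min_expected_dummies(alpha), 6), round(closed_form, 6))
```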
The proof of Lemma 1 is given in Appendix A. The minimum download cost for the general case with N databases, K files, and a deception requirement d is obtained by (101) and (103). The corresponding maximum achievable rate is given in (9).

5. Discussion and Conclusions

We introduced the problem of deceptive information retrieval (DIR), in which a user retrieves a file from a set of independent files stored in multiple databases, while revealing fake information about the required file to the databases, which makes the probability of error of the databases’ prediction on the user-required file index high. The proposed scheme achieves rates lower than the PIR capacity when the required level of deception is positive, as it sends dummy queries at distinct time instances to deceive the databases. When the required level of deception is zero, the achievable DIR rate is the same as the PIR capacity.
The probability of error of the databases' prediction of the user-required file index is calculated at the time of the user's requirement, as defined in Section 2. In the proposed scheme, the user sends dummy queries at other (future) time instances as well. As the databases are unaware of being deceived, and are unable to distinguish between the times corresponding to real and dummy queries, they make predictions of the user-required file indices every time a query is received. Note that whenever a query of the form $W_k^j$ is received, the databases' prediction is $\hat{\theta}=k$ from Table 14. Although this is an incorrect prediction with high probability at times corresponding to the user's real requirements, these predictions are correct when $W_k^j$ is used as a dummy query, as $W_k^j$ is only sent as a dummy query when the user needs to download file $k$. However, the databases only obtain these correct predictions at future time instances, by which time the user has already downloaded the required file while also deceiving the databases.
The reason for the requirement of the time dimension is also explained as follows. An alternative approach to using the time dimension is to select a subset of databases to send the dummy queries to, and to send the real queries to the rest of the databases. As explained above, whenever a database receives a query of the form $W_k^j$ as a dummy query, the database predicts the user-required file correctly. Therefore, this approach leaks information about the required file to a subset of databases right at the time of the retrieval, while deceiving the rest. Hence, to deceive all databases at the time of retrieval, we exploit the time dimension that is naturally present in information retrieval applications that are time-sensitive.
A potential future direction of this work is an analysis on the time dimension. Note that, in this work, we assume that the databases do not keep track of the previous queries and only store the information corresponding to the current time instance. Therefore, as long as the dummy queries are sent at distinct time instances that are also different from the time of the user’s requirement, the calculations presented in this paper are valid. An extension of basic DIR can be formulated by assuming that the databases keep track of all queries received and their time stamps. This imposes additional constraints on the problem, as the databases now have extra information along the time dimension, which requires the scheme to choose the time instances at which the dummy queries are sent, in such a way that they do not leak any information about the existence of the two types (real and dummy) of queries. Another direction is to incorporate the freshness and age of information into DIR, where the user may trade the age of the required file for a reduced download cost, by making use of the previous dummy downloads present in DIR.

Author Contributions

Formal analysis, S.V.; Investigation, S.V.; Writing—original draft, S.V.; Writing—review & editing, S.U.; Supervision, S.U.; Project administration, S.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Lemma 1

The solution to the optimization problem in (102) for the general case with $N$ databases and $K$ files is as follows. The optimization problem in (102), for a required amount of deception $d$ and the corresponding $\epsilon$ with $\alpha=\frac{N+(N^K-N)e^{\epsilon}}{(N-1)e^{2\epsilon}+(N^K-N)e^{\epsilon}+1}$, is given by
$$\min\ \mathbb{E}[M]=\sum_{m=0}^{\infty}mp_m\quad \text{s.t.}\quad \mathbb{E}\left[\frac{1}{M+1}\right]=\sum_{m=0}^{\infty}\frac{1}{m+1}p_m=\alpha,\quad \sum_{m=0}^{\infty}p_m=1,\quad p_m\geq 0,\ \forall m\in\{0,1,\ldots\}.$$
We need to determine the optimum PMF of M that minimizes E [ M ] while satisfying the given condition. The Lagrangian L of this optimization problem is given by
$$\mathcal{L}=\sum_{m=0}^{\infty}mp_m+\lambda_1\left(\sum_{m=0}^{\infty}\frac{1}{m+1}p_m-\alpha\right)+\lambda_2\left(\sum_{m=0}^{\infty}p_m-1\right)-\sum_{m=0}^{\infty}\mu_m p_m.$$
Then, the following set of equations needs to be solved to find the minimum $\mathbb{E}[M]$:
$$\frac{\partial\mathcal{L}}{\partial p_m}=m+\lambda_1\frac{1}{m+1}+\lambda_2-\mu_m=0,\quad \forall m\in\{0,1,\ldots\},$$
$$\sum_{m=0}^{\infty}\frac{1}{m+1}p_m=\alpha,\qquad \sum_{m=0}^{\infty}p_m=1,$$
$$\mu_m p_m=0,\quad \forall m\in\{0,1,\ldots\},\qquad \mu_m,p_m\geq 0,\quad \forall m\in\{0,1,\ldots\}.$$
Case 1: Assume that the PMF of $M$ contains at most two non-zero probabilities, i.e., $p_0,p_1\geq 0$ and $p_i=0$, $i\in\{2,3,\ldots\}$. Then, the conditions in (A3)–(A7) simplify to
$$\frac{\partial\mathcal{L}}{\partial p_0}=\lambda_1+\lambda_2-\mu_0=0,\qquad \frac{\partial\mathcal{L}}{\partial p_1}=1+\frac{1}{2}\lambda_1+\lambda_2-\mu_1=0,$$
$$p_0+\frac{1}{2}p_1=\alpha,\qquad p_0+p_1=1,$$
$$\mu_0 p_0=0,\qquad \mu_1 p_1=0,\qquad \mu_0,\mu_1,p_0,p_1\geq 0.$$
From (A10) and (A11), we obtain
$$p_0+\frac{1}{2}(1-p_0)=\alpha,$$
and thus,
$$p_0=2\alpha-1,\qquad p_1=2-2\alpha,$$
which, along with (A14), implies that this solution is only valid for $\frac{1}{2}\leq\alpha\leq 1$. The corresponding optimum value of $\mathbb{E}[M]$ is given by
$$\mathbb{E}[M]=1-p_0=2-2\alpha,\quad \frac{1}{2}\leq\alpha\leq 1.$$
Case 2: Now consider the case where at most three probabilities of the PMF of $M$ are allowed to be non-zero, i.e., $p_0,p_1,p_2\geq 0$ and $p_i=0$, $i\in\{3,4,\ldots\}$. The set of conditions in (A3)–(A7) for this case is
$$\frac{\partial\mathcal{L}}{\partial p_m}=m+\lambda_1\frac{1}{m+1}+\lambda_2-\mu_m=0,\quad m\in\{0,1,2\},$$
$$\sum_{m=0}^{2}\frac{1}{m+1}p_m=\alpha,\qquad \sum_{m=0}^{2}p_m=1,$$
$$\mu_m p_m=0,\quad m\in\{0,1,2\},\qquad \mu_m,p_m\geq 0,\quad m\in\{0,1,2\}.$$
The set of conditions in (A18)–(A22) can be written in matrix form as
$$\begin{bmatrix} 1 & 1 & -1 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{2} & 1 & 0 & -1 & 0 & 0 & 0 & 0\\ \frac{1}{3} & 1 & 0 & 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & \frac{1}{3}\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} \lambda_1\\ \lambda_2\\ \mu_0\\ \mu_1\\ \mu_2\\ p_0\\ p_1\\ p_2 \end{bmatrix} = \begin{bmatrix} 0\\ -1\\ -2\\ \alpha\\ 1 \end{bmatrix}.$$
Three of the above eight variables, i.e., either $\mu_i$ or $p_i$ for each $i$, are always zero according to (A21). We consider all choices of $\{\mu_i,p_i\}$ pairs such that one element of the pair is equal to zero and the other one is a positive variable, and solve the system for the non-zero variables. Then we calculate the resulting $\mathbb{E}[M]$, along with the corresponding regions of $\alpha$ for which the solutions are applicable. For each region of $\alpha$, we find the solution to (A23) that results in the minimum $\mathbb{E}[M]$. Based on this process, the optimum values of $p_i$, $i\in\{0,1,2\}$, the corresponding ranges of $\alpha$, and the minimum values of $\mathbb{E}[M]$ are given in Table A1.
Table A1. Solution to Case 2: Optimum PMF of $M$, valid ranges of $\alpha$, and minimum $\mathbb{E}[M]$.
Range of $\alpha$ | $p_0$ | $p_1$ | $p_2$ | $\mathbb{E}[M]$
$\frac{1}{3}\leq\alpha\leq\frac{1}{2}$ | $0$ | $6\alpha-2$ | $3-6\alpha$ | $4-6\alpha$
$\frac{1}{2}\leq\alpha\leq 1$ | $2\alpha-1$ | $2-2\alpha$ | $0$ | $2-2\alpha$
As an example, consider the calculations corresponding to the case where $\mu_0>0$, $\mu_1=\mu_2=0$, which implies $p_0=0$, $p_1,p_2>0$. Note that for this case, (A23) simplifies to
$$\begin{bmatrix} 1 & 1 & -1 & 0 & 0\\ \frac{1}{2} & 1 & 0 & 0 & 0\\ \frac{1}{3} & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & \frac{1}{2} & \frac{1}{3}\\ 0 & 0 & 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} \lambda_1\\ \lambda_2\\ \mu_0\\ p_1\\ p_2 \end{bmatrix} = \begin{bmatrix} 0\\ -1\\ -2\\ \alpha\\ 1 \end{bmatrix}.$$
The values of $p_1$ and $p_2$ from the solution of the above system, the corresponding range of $\alpha$ from (A22), and the resulting $\mathbb{E}[M]$ are given by
$$p_1=6\alpha-2,\qquad p_2=3-6\alpha,\qquad \frac{1}{3}\leq\alpha\leq\frac{1}{2},\qquad \mathbb{E}[M]=4-6\alpha.$$
Case 3: At most four non-zero elements of the PMF of $M$ are considered in this case, i.e., $p_0,p_1,p_2,p_3\geq 0$ and $p_i=0$, $i\in\{4,5,\ldots\}$. The conditions in (A3)–(A7) can be written in matrix form as
$$\begin{bmatrix} 1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{2} & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{3} & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{4} & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4}\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} \lambda_1\\ \lambda_2\\ \mu_0\\ \mu_1\\ \mu_2\\ \mu_3\\ p_0\\ p_1\\ p_2\\ p_3 \end{bmatrix} = \begin{bmatrix} 0\\ -1\\ -2\\ -3\\ \alpha\\ 1 \end{bmatrix}.$$
Using the same method described in Case 2, the optimum values of $p_i$, $i\in\{0,1,2,3\}$, the corresponding ranges of $\alpha$, and the resulting minimum $\mathbb{E}[M]$ for Case 3 are given in Table A2.
Table A2. Solution to Case 3: Optimum PMF of $M$, valid ranges of $\alpha$, and minimum $\mathbb{E}[M]$.
Range of $\alpha$ | $p_0$ | $p_1$ | $p_2$ | $p_3$ | $\mathbb{E}[M]$
$\frac{1}{4}\leq\alpha\leq\frac{1}{3}$ | $0$ | $0$ | $12\alpha-3$ | $4-12\alpha$ | $6-12\alpha$
$\frac{1}{3}\leq\alpha\leq\frac{1}{2}$ | $0$ | $6\alpha-2$ | $3-6\alpha$ | $0$ | $4-6\alpha$
$\frac{1}{2}\leq\alpha\leq 1$ | $2\alpha-1$ | $2-2\alpha$ | $0$ | $0$ | $2-2\alpha$
Case 4: At most five non-zero elements of the PMF of $M$ are considered in this case, i.e., $p_0,p_1,p_2,p_3,p_4\geq 0$ and $p_i=0$, $i\in\{5,6,\ldots\}$. The conditions in (A3)–(A7) can be written in matrix form as
$$\begin{bmatrix} 1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{2} & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{3} & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{4} & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\ \frac{1}{5} & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} \lambda_1\\ \lambda_2\\ \mu_0\\ \mu_1\\ \mu_2\\ \mu_3\\ \mu_4\\ p_0\\ p_1\\ p_2\\ p_3\\ p_4 \end{bmatrix} = \begin{bmatrix} 0\\ -1\\ -2\\ -3\\ -4\\ \alpha\\ 1 \end{bmatrix}.$$
Using the same method as before, the optimum values of $p_i$, $i\in\{0,1,2,3,4\}$, the corresponding ranges of $\alpha$, and the resulting minimum $\mathbb{E}[M]$ for Case 4 are given in Table A3.
Table A3. Solution to Case 4: Optimum PMF of $M$, valid ranges of $\alpha$, and minimum $\mathbb{E}[M]$.
Range of $\alpha$ | $p_0$ | $p_1$ | $p_2$ | $p_3$ | $p_4$ | $\mathbb{E}[M]$
$\frac{1}{5}\leq\alpha\leq\frac{1}{4}$ | $0$ | $0$ | $0$ | $20\alpha-4$ | $5-20\alpha$ | $8-20\alpha$
$\frac{1}{4}\leq\alpha\leq\frac{1}{3}$ | $0$ | $0$ | $12\alpha-3$ | $4-12\alpha$ | $0$ | $6-12\alpha$
$\frac{1}{3}\leq\alpha\leq\frac{1}{2}$ | $0$ | $6\alpha-2$ | $3-6\alpha$ | $0$ | $0$ | $4-6\alpha$
$\frac{1}{2}\leq\alpha\leq 1$ | $2\alpha-1$ | $2-2\alpha$ | $0$ | $0$ | $0$ | $2-2\alpha$
Note that the PMF of $M$ and the resulting $\mathbb{E}[M]$ are the same for a given $\alpha$ in all cases (see Table A1, Table A2 and Table A3), irrespective of the support of the PMF of $M$ considered. Therefore, we observe from the above cases that, for a given $\alpha$ in the range $\frac{1}{\ell+1}\leq\alpha\leq\frac{1}{\ell}$, $\mathbb{E}[M]$ is minimized when the PMF of $M$ is such that
$$p_{\ell},p_{\ell-1}>0,\quad \text{and}\quad p_i=0\ \text{ for }\ i\in\mathbb{Z}^+\cup\{0\}\setminus\{\ell,\ell-1\},$$
which requires $p_{\ell}$ and $p_{\ell-1}$ to satisfy
$$p_{\ell}+p_{\ell-1}=1,\qquad \mathbb{E}\left[\frac{1}{M+1}\right]=\frac{p_{\ell}}{\ell+1}+\frac{p_{\ell-1}}{\ell}=\alpha.$$
Therefore, for a given $\alpha$ in the range $\frac{1}{\ell+1}\leq\alpha\leq\frac{1}{\ell}$, the optimum PMF of $M$ and the resulting minimum $\mathbb{E}[M]$ are given by
$$p_{\ell}=(\ell+1)(1-\ell\alpha),\qquad p_{\ell-1}=\ell\left((\ell+1)\alpha-1\right),\qquad \mathbb{E}[M]=2\ell-\alpha\ell(\ell+1).$$
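As a final sanity check (a sketch; the grid of $\alpha$ values is arbitrary), the closed-form PMF above is a valid distribution, meets the constraint $\mathbb{E}\left[\frac{1}{M+1}\right]=\alpha$, and attains the claimed minimum $\mathbb{E}[M]$:

```python
import math

for alpha in [0.9, 0.55, 0.41, 0.27, 0.15]:
    ell = math.floor(1 / alpha)                  # support of M is {ell-1, ell}
    p_ell  = (ell + 1) * (1 - ell * alpha)
    p_ell1 = ell * ((ell + 1) * alpha - 1)
    assert 0 <= p_ell <= 1 and 0 <= p_ell1 <= 1
    assert math.isclose(p_ell + p_ell1, 1.0)
    assert math.isclose(p_ell / (ell + 1) + p_ell1 / ell, alpha)   # E[1/(M+1)] = alpha
    EM = ell * p_ell + (ell - 1) * p_ell1
    assert math.isclose(EM, 2*ell - ell*(ell + 1)*alpha)           # minimum E[M]
print("closed-form PMF checks passed")
```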

References

  1. Chor, B.; Kushilevitz, E.; Goldreich, O.; Sudan, M. Private Information Retrieval. J. ACM 1998, 45, 965–981.
  2. Sun, H.; Jafar, S.A. The Capacity of Private Information Retrieval. IEEE Trans. Inf. Theory 2017, 63, 4075–4088.
  3. Tian, C.; Sun, H.; Chen, J. Capacity-Achieving Private Information Retrieval Codes with Optimal Message Size and Upload Cost. IEEE Trans. Inf. Theory 2019, 65, 7613–7627.
  4. Banawan, K.; Ulukus, S. The Capacity of Private Information Retrieval from Coded Databases. IEEE Trans. Inf. Theory 2018, 64, 1945–1956.
  5. Sun, H.; Jafar, S.A. The Capacity of Robust Private Information Retrieval with Colluding Databases. IEEE Trans. Inf. Theory 2018, 64, 2361–2370.
  6. Kadhe, S.; Garcia, B.; Heidarzadeh, A.; El Rouayheb, S.; Sprintson, A. Private Information Retrieval With Side Information. IEEE Trans. Inf. Theory 2020, 66, 2032–2043.
  7. Li, S.; Gastpar, M. Single-server Multi-message Private Information Retrieval with Side Information: The General Cases. In Proceedings of the IEEE ISIT, Los Angeles, CA, USA, 21–26 June 2020.
  8. Yang, H.; Shin, W.; Lee, J. Private Information Retrieval for Secure Distributed Storage Systems. IEEE Trans. Inf. Forensics Secur. 2018, 13, 2953–2964.
  9. Jia, Z.; Jafar, S.A. X-Secure T-Private Information Retrieval from MDS Coded Storage with Byzantine and Unresponsive Servers. IEEE Trans. Inf. Theory 2020, 66, 7427–7438.
  10. Banawan, K.; Ulukus, S. Multi-Message Private Information Retrieval: Capacity Results and Near-Optimal Schemes. IEEE Trans. Inf. Theory 2018, 64, 6842–6862.
  11. Wang, Q.; Sun, H.; Skoglund, M. The Capacity of Private Information Retrieval with Eavesdroppers. IEEE Trans. Inf. Theory 2019, 65, 3198–3214.
  12. Kumar, S.; Lin, H.-Y.; Rosnes, E.; Amat, A.G.i. Achieving Maximum Distance Separable Private Information Retrieval Capacity with Linear Codes. IEEE Trans. Inf. Theory 2019, 65, 4243–4273.
  13. Sun, H.; Jafar, S.A. The Capacity of Symmetric Private Information Retrieval. IEEE Trans. Inf. Theory 2019, 65, 322–329.
  14. Woolsey, N.; Chen, R.; Ji, M. Uncoded Placement with Linear Sub-Messages for Private Information Retrieval from Storage Constrained Databases. IEEE Trans. Commun. 2020, 68, 6039–6053.
  15. Fanti, G.; Ramchandran, K. Efficient Private Information Retrieval over Unsynchronized Databases. IEEE J. Sel. Top. Signal Process. 2015, 9, 1229–1239.
  16. Samy, I.; Attia, M.; Tandon, R.; Lazos, L. Asymmetric Leaky Private Information Retrieval. IEEE Trans. Inf. Theory 2021, 67, 5352–5369.
  17. Guo, T.; Zhou, R.; Tian, C. On the Information Leakage in Private Information Retrieval Systems. IEEE Trans. Inf. Forensics Secur. 2020, 15, 2999–3012.
  18. Liebowitz, D.; Nepal, S.; Moore, K.; Christopher, C.; Kanhere, S.; Nguyen, D.; Timmer, R.; Longland, M.; Rathakumar, K. Deception for Cyber Defence: Challenges and Opportunities. In Proceedings of the TPS-ISA, Atlanta, GA, USA, 13–15 December 2021.
  19. Yarali, A.; Sahawneh, F. Deception: Technologies and Strategy for Cybersecurity. In Proceedings of the SmartCloud, Tokyo, Japan, 10–12 December 2019.
  20. Faveri, C.; Moreira, A. Designing Adaptive Deception Strategies. In Proceedings of the QRS-C, Vienna, Austria, 1–3 August 2016.
  21. Tounsi, W. Cyber Deception, the Ultimate Piece of a Defensive Strategy—Proof of Concept. In Proceedings of the CSNet, Rio de Janeiro, Brazil, 24–26 October 2022.
  22. Sarr, A.; Anwar, A.; Kamhoua, C.; Leslie, N.; Acosta, J. Software Diversity for Cyber Deception. In Proceedings of the IEEE Globecom, Taipei, Taiwan, 7–11 December 2020.
  23. Samy, I.; Tandon, R.; Lazos, L. On the Capacity of Leaky Private Information Retrieval. In Proceedings of the IEEE ISIT, Paris, France, 7–12 July 2019.
  24. Vithana, S.; Banawan, K.; Ulukus, S. Semantic Private Information Retrieval. IEEE Trans. Inf. Theory 2022, 68, 2635–2652.
Figure 1. Download costs and prediction error probabilities for different types of information retrieval.
Figure 2. Achievable DIR rate for varying levels of deception and different numbers of databases when K = 3.
Figure 3. Achievable DIR rate for varying levels of deception and different numbers of files when N = 2.
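As a numerical companion to Figures 2 and 3, the following minimal Python sketch (our own illustration, not code from the paper) evaluates the zero-deception endpoint of the rate curves, i.e., the PIR capacity C_PIR = (1 − 1/N)/(1 − 1/N^K) that the scheme attains when the required level of deception is zero. The parameter values below are illustrative and not read off the figures.

```python
# Zero-deception endpoints of the curves in Figures 2 and 3: when no
# deception is required, the achievable DIR rate equals the PIR capacity
# C_PIR = (1 - 1/N) / (1 - 1/N**K). The function name is ours, for
# illustration only.
def pir_capacity(N: int, K: int) -> float:
    return (1 - 1 / N) / (1 - 1 / N ** K)

# Figure 2: K = 3 with different numbers of databases N (illustrative values).
for N in (2, 3, 4):
    print(f"K = 3, N = {N}: C_PIR = {pir_capacity(N, 3):.4f}")

# Figure 3: N = 2 with different numbers of files K (illustrative values).
for K in (2, 3, 4):
    print(f"N = 2, K = {K}: C_PIR = {pir_capacity(2, K):.4f}")
```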
Table 1. Real query table—W_1.
P(Q | θ = 1, R = 1) | DB 1 | DB 2
p | W_1 | ϕ
p | ϕ | W_1
p | W_2 | W_1 + W_2
p | W_1 + W_2 | W_2
Table 2. Real query table—W_2.
P(Q | θ = 2, R = 1) | DB 1 | DB 2
p | W_2 | ϕ
p | ϕ | W_2
p | W_1 | W_1 + W_2
p | W_1 + W_2 | W_1
Table 3. Dummy query table—W_1.
P(Q | θ = 1, R = 0) | DB 1 | DB 2
1 | W_1 | W_1
Table 4. Dummy query table—W_2.
P(Q | θ = 2, R = 0) | DB 1 | DB 2
1 | W_2 | W_2
Table 5. Probabilities of each database predicting the user-required file in Example 1.
Query Q̃ | P(θ̂_Q̃ = 1) | P(θ̂_Q̃ = 2)
W_1 | 1 | 0
W_2 | 0 | 1
W_1 + W_2 | 1/2 | 1/2
ϕ | 1/2 | 1/2
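To make Example 1 concrete, the following minimal Python sketch (our own illustration) encodes the real query tables (Tables 1 and 2) and the prediction probabilities of Table 5, taking p = 1/4 so that the four equiprobable rows of each real query table sum to one. It confirms that, with real queries alone, each database's prediction error is exactly the PIR baseline 1 − 1/K = 1/2; the dummy queries of Tables 3 and 4 are what raise the error above this baseline.

```python
from fractions import Fraction

# Minimal sketch for Example 1 (N = 2 databases, K = 2 files), encoding the
# real query tables (Tables 1 and 2) and the prediction probabilities of
# Table 5. We take p = 1/4, consistent with the four equiprobable rows of
# each real query table. Queries are sets of file indices; the empty set is
# the null query.
p = Fraction(1, 4)

REAL = {  # required file index -> list of (probability, query to DB 1, query to DB 2)
    1: [(p, {1}, set()), (p, set(), {1}), (p, {2}, {1, 2}), (p, {1, 2}, {2})],
    2: [(p, {2}, set()), (p, set(), {2}), (p, {1}, {1, 2}), (p, {1, 2}, {1})],
}

def prediction(query):
    """Table 5: probability of predicting each file index upon seeing `query`."""
    if query == {1}:
        return {1: Fraction(1), 2: Fraction(0)}
    if query == {2}:
        return {1: Fraction(0), 2: Fraction(1)}
    return {1: Fraction(1, 2), 2: Fraction(1, 2)}  # W_1 + W_2 or the null query

# With real queries alone, each database's prediction error equals the PIR
# baseline 1 - 1/K = 1/2.
for theta in (1, 2):
    for db in (0, 1):
        error = sum(prob * (1 - prediction(queries[db])[theta])
                    for prob, *queries in REAL[theta])
        print(f"theta = {theta}, DB {db + 1}: real-query prediction error = {error}")
```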
Table 6. Real query table—W_1.
P(Q | θ = 1, R = 1) | Database 1 | Database 2 | Database 3
p | W_1^1 | W_1^2 | ϕ
p | W_1^2 | ϕ | W_1^1
p | ϕ | W_1^1 | W_1^2
p | W_1^1 + W_2^1 | W_1^2 + W_2^1 | W_2^1
p | W_1^2 + W_2^1 | W_2^1 | W_1^1 + W_2^1
p | W_2^1 | W_1^1 + W_2^1 | W_1^2 + W_2^1
p | W_1^1 + W_2^2 | W_1^2 + W_2^2 | W_2^2
p | W_1^2 + W_2^2 | W_2^2 | W_1^1 + W_2^2
p | W_2^2 | W_1^1 + W_2^2 | W_1^2 + W_2^2
p | W_1^1 + W_3^1 | W_1^2 + W_3^1 | W_3^1
p | W_1^2 + W_3^1 | W_3^1 | W_1^1 + W_3^1
p | W_3^1 | W_1^1 + W_3^1 | W_1^2 + W_3^1
p | W_1^1 + W_3^2 | W_1^2 + W_3^2 | W_3^2
p | W_1^2 + W_3^2 | W_3^2 | W_1^1 + W_3^2
p | W_3^2 | W_1^1 + W_3^2 | W_1^2 + W_3^2
p | W_1^1 + W_2^1 + W_3^1 | W_1^2 + W_2^1 + W_3^1 | W_2^1 + W_3^1
p | W_1^2 + W_2^1 + W_3^1 | W_2^1 + W_3^1 | W_1^1 + W_2^1 + W_3^1
p | W_2^1 + W_3^1 | W_1^1 + W_2^1 + W_3^1 | W_1^2 + W_2^1 + W_3^1
p | W_1^1 + W_2^2 + W_3^1 | W_1^2 + W_2^2 + W_3^1 | W_2^2 + W_3^1
p | W_1^2 + W_2^2 + W_3^1 | W_2^2 + W_3^1 | W_1^1 + W_2^2 + W_3^1
p | W_2^2 + W_3^1 | W_1^1 + W_2^2 + W_3^1 | W_1^2 + W_2^2 + W_3^1
p | W_1^1 + W_2^1 + W_3^2 | W_1^2 + W_2^1 + W_3^2 | W_2^1 + W_3^2
p | W_1^2 + W_2^1 + W_3^2 | W_2^1 + W_3^2 | W_1^1 + W_2^1 + W_3^2
p | W_2^1 + W_3^2 | W_1^1 + W_2^1 + W_3^2 | W_1^2 + W_2^1 + W_3^2
p | W_1^1 + W_2^2 + W_3^2 | W_1^2 + W_2^2 + W_3^2 | W_2^2 + W_3^2
p | W_1^2 + W_2^2 + W_3^2 | W_2^2 + W_3^2 | W_1^1 + W_2^2 + W_3^2
p | W_2^2 + W_3^2 | W_1^1 + W_2^2 + W_3^2 | W_1^2 + W_2^2 + W_3^2
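The rows of Table 6 enumerate, for each database, every query in which each file contributes at most one of its two segments; since there are 3^3 = 27 such queries and 27 equiprobable rows, this suggests p = 1/27. A short Python sketch of this enumeration (our own illustration; the function name is ours):

```python
from itertools import product

# Per-database query alphabet in Example 2 (N = 3, K = 3, two segments per
# file), as read off the rows of Table 6: each file contributes nothing, its
# first segment, or its second segment, so there are 3**3 = 27 possible
# queries per database (including the null query), and the 27 equiprobable
# rows suggest p = 1/27.
def query_alphabet(K: int = 3, segments: int = 2):
    alphabet = []
    for choice in product(range(segments + 1), repeat=K):
        # choice[k] = 0: file k+1 absent; choice[k] = j > 0: segment j of
        # file k+1 is included in the summed query.
        alphabet.append(tuple((k + 1, j) for k, j in enumerate(choice) if j > 0))
    return alphabet

queries = query_alphabet()
print(len(queries))   # 27, so each row of Table 6 has probability p = 1/27
print(queries[:4])    # first few queries: the null query, W_3^1, W_3^2, W_2^1
```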
Table 7. Dummy query table—W_1.
P(Q | θ = 1, R = 0) | DB 1 | P(Q | θ = 1, R = 0) | DB 2 | P(Q | θ = 1, R = 0) | DB 3
1/2 | W_1^1 | 1/2 | W_1^1 | 1/2 | W_1^1
1/2 | W_1^2 | 1/2 | W_1^2 | 1/2 | W_1^2
Table 8. Real query table—W_2.
P(Q | θ = 2, R = 1) | Database 1 | Database 2 | Database 3
p | W_2^1 | W_2^2 | ϕ
p | W_2^2 | ϕ | W_2^1
p | ϕ | W_2^1 | W_2^2
p | W_1^1 + W_2^1 | W_1^1 + W_2^2 | W_1^1
p | W_1^1 + W_2^2 | W_1^1 | W_1^1 + W_2^1
p | W_1^1 | W_1^1 + W_2^1 | W_1^1 + W_2^2
p | W_1^2 + W_2^1 | W_1^2 + W_2^2 | W_1^2
p | W_1^2 + W_2^2 | W_1^2 | W_1^2 + W_2^1
p | W_1^2 | W_1^2 + W_2^1 | W_1^2 + W_2^2
p | W_2^1 + W_3^1 | W_2^2 + W_3^1 | W_3^1
p | W_2^2 + W_3^1 | W_3^1 | W_2^1 + W_3^1
p | W_3^1 | W_2^1 + W_3^1 | W_2^2 + W_3^1
p | W_2^1 + W_3^2 | W_2^2 + W_3^2 | W_3^2
p | W_2^2 + W_3^2 | W_3^2 | W_2^1 + W_3^2
p | W_3^2 | W_2^1 + W_3^2 | W_2^2 + W_3^2
p | W_1^1 + W_2^1 + W_3^1 | W_1^1 + W_2^2 + W_3^1 | W_1^1 + W_3^1
p | W_1^1 + W_2^2 + W_3^1 | W_1^1 + W_3^1 | W_1^1 + W_2^1 + W_3^1
p | W_1^1 + W_3^1 | W_1^1 + W_2^1 + W_3^1 | W_1^1 + W_2^2 + W_3^1
p | W_1^1 + W_2^1 + W_3^2 | W_1^1 + W_2^2 + W_3^2 | W_1^1 + W_3^2
p | W_1^1 + W_2^2 + W_3^2 | W_1^1 + W_3^2 | W_1^1 + W_2^1 + W_3^2
p | W_1^1 + W_3^2 | W_1^1 + W_2^1 + W_3^2 | W_1^1 + W_2^2 + W_3^2
p | W_1^2 + W_2^1 + W_3^1 | W_1^2 + W_2^2 + W_3^1 | W_1^2 + W_3^1
p | W_1^2 + W_2^2 + W_3^1 | W_1^2 + W_3^1 | W_1^2 + W_2^1 + W_3^1
p | W_1^2 + W_3^1 | W_1^2 + W_2^1 + W_3^1 | W_1^2 + W_2^2 + W_3^1
p | W_1^2 + W_2^1 + W_3^2 | W_1^2 + W_2^2 + W_3^2 | W_1^2 + W_3^2
p | W_1^2 + W_2^2 + W_3^2 | W_1^2 + W_3^2 | W_1^2 + W_2^1 + W_3^2
p | W_1^2 + W_3^2 | W_1^2 + W_2^1 + W_3^2 | W_1^2 + W_2^2 + W_3^2
Table 9. Dummy query table—W_2.
P(Q | θ = 2, R = 0) | DB 1 | P(Q | θ = 2, R = 0) | DB 2 | P(Q | θ = 2, R = 0) | DB 3
1/2 | W_2^1 | 1/2 | W_2^1 | 1/2 | W_2^1
1/2 | W_2^2 | 1/2 | W_2^2 | 1/2 | W_2^2
Table 10. Real query table—W_3.
P(Q | θ = 3, R = 1) | Database 1 | Database 2 | Database 3
p | W_3^1 | W_3^2 | ϕ
p | W_3^2 | ϕ | W_3^1
p | ϕ | W_3^1 | W_3^2
p | W_1^1 + W_3^1 | W_1^1 + W_3^2 | W_1^1
p | W_1^1 + W_3^2 | W_1^1 | W_1^1 + W_3^1
p | W_1^1 | W_1^1 + W_3^1 | W_1^1 + W_3^2
p | W_1^2 + W_3^1 | W_1^2 + W_3^2 | W_1^2
p | W_1^2 + W_3^2 | W_1^2 | W_1^2 + W_3^1
p | W_1^2 | W_1^2 + W_3^1 | W_1^2 + W_3^2
p | W_2^1 + W_3^1 | W_2^1 + W_3^2 | W_2^1
p | W_2^1 + W_3^2 | W_2^1 | W_2^1 + W_3^1
p | W_2^1 | W_2^1 + W_3^1 | W_2^1 + W_3^2
p | W_2^2 + W_3^1 | W_2^2 + W_3^2 | W_2^2
p | W_2^2 + W_3^2 | W_2^2 | W_2^2 + W_3^1
p | W_2^2 | W_2^2 + W_3^1 | W_2^2 + W_3^2
p | W_1^1 + W_2^1 + W_3^1 | W_1^1 + W_2^1 + W_3^2 | W_1^1 + W_2^1
p | W_1^1 + W_2^1 + W_3^2 | W_1^1 + W_2^1 | W_1^1 + W_2^1 + W_3^1
p | W_1^1 + W_2^1 | W_1^1 + W_2^1 + W_3^1 | W_1^1 + W_2^1 + W_3^2
p | W_1^2 + W_2^1 + W_3^1 | W_1^2 + W_2^1 + W_3^2 | W_1^2 + W_2^1
p | W_1^2 + W_2^1 + W_3^2 | W_1^2 + W_2^1 | W_1^2 + W_2^1 + W_3^1
p | W_1^2 + W_2^1 | W_1^2 + W_2^1 + W_3^1 | W_1^2 + W_2^1 + W_3^2
p | W_1^1 + W_2^2 + W_3^1 | W_1^1 + W_2^2 + W_3^2 | W_1^1 + W_2^2
p | W_1^1 + W_2^2 + W_3^2 | W_1^1 + W_2^2 | W_1^1 + W_2^2 + W_3^1
p | W_1^1 + W_2^2 | W_1^1 + W_2^2 + W_3^1 | W_1^1 + W_2^2 + W_3^2
p | W_1^2 + W_2^2 + W_3^1 | W_1^2 + W_2^2 + W_3^2 | W_1^2 + W_2^2
p | W_1^2 + W_2^2 + W_3^2 | W_1^2 + W_2^2 | W_1^2 + W_2^2 + W_3^1
p | W_1^2 + W_2^2 | W_1^2 + W_2^2 + W_3^1 | W_1^2 + W_2^2 + W_3^2
Table 11. Dummy query table—W_3.
P(Q | θ = 3, R = 0) | DB 1 | P(Q | θ = 3, R = 0) | DB 2 | P(Q | θ = 3, R = 0) | DB 3
1/2 | W_3^1 | 1/2 | W_3^1 | 1/2 | W_3^1
1/2 | W_3^2 | 1/2 | W_3^2 | 1/2 | W_3^2
Table 12. Queries received by database n, n ∈ {1, …, N}, at a given time t for each file requirement, and the corresponding probabilities.
Query Q̃ | P(Q_n = Q̃ | θ = 1) | P(Q_n = Q̃ | θ = 2) | P(Q_n = Q̃ | θ = 3)
ϕ | pα | pα | pα
W_1^1 | pα + (1 − α)/2 | pα | pα
W_1^2 | pα + (1 − α)/2 | pα | pα
W_2^1 | pα | pα + (1 − α)/2 | pα
W_2^2 | pα | pα + (1 − α)/2 | pα
W_3^1 | pα | pα | pα + (1 − α)/2
W_3^2 | pα | pα | pα + (1 − α)/2
other queries | pα | pα | pα
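Reading α as the probability that a received query is a real query (R = 1), and taking p = 1/27 from the 27 equiprobable rows of Tables 6, 8 and 10, each column of Table 12 sums to 27pα + 2·(1 − α)/2 = 1, so it is a valid per-database query distribution. A minimal Python check of this bookkeeping (our own illustration):

```python
from fractions import Fraction

# Bookkeeping check for Table 12: with p = 1/27 (27 equiprobable rows in each
# real query table) and alpha read as the probability that a received query
# is a real one (R = 1), each column is a valid pmf over the 27-query
# per-database alphabet: two queries (the segments of the required file) have
# probability p*alpha + (1 - alpha)/2, and the remaining 25 have p*alpha.
def table12_column(alpha: Fraction):
    p = Fraction(1, 27)
    boosted = [p * alpha + (1 - alpha) / 2] * 2   # W_theta^1 and W_theta^2
    plain = [p * alpha] * 25                      # all other queries, incl. the null query
    return boosted + plain

for alpha in (Fraction(0), Fraction(1, 3), Fraction(2, 3), Fraction(1)):
    assert sum(table12_column(alpha)) == 1
print("each tested column of Table 12 sums to 1")
```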
Table 13. Probabilities of each database predicting the user-required file in Example 2.
Query Q̃ | P(θ̂_Q̃ = 1) | P(θ̂_Q̃ = 2) | P(θ̂_Q̃ = 3)
W_1^1 | 1 | 0 | 0
W_1^2 | 1 | 0 | 0
W_2^1 | 0 | 1 | 0
W_2^2 | 0 | 1 | 0
W_3^1 | 0 | 0 | 1
W_3^2 | 0 | 0 | 1
other queries | 1/3 | 1/3 | 1/3
Table 14. Probabilities of each database predicting the user-required file.
Query Q̃ | P(θ̂_Q̃ = 1) | P(θ̂_Q̃ = 2) | P(θ̂_Q̃ = 3) | P(θ̂_Q̃ = K)
W_1^j | 1 | 0 | 0 | 0
W_2^j | 0 | 1 | 0 | 0
W_3^j | 0 | 0 | 1 | 0
W_K^j | 0 | 0 | 0 | 1
other queries | 1/K | 1/K | 1/K | 1/K
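Table 14 summarizes the databases' prediction rule in the general case: a bare segment of file k is attributed to requirement k with certainty, while any other query leads to a uniform guess over the K files. The following minimal Python sketch (our own illustration; the function name and query encoding are ours) expresses this rule:

```python
from fractions import Fraction

# Prediction rule of Table 14 (our encoding): a query that is a single bare
# segment W_k^j is attributed to requirement theta = k with probability 1;
# any other query ("other queries" in the table, including the null query)
# yields a uniform guess over the K files.
def prediction_distribution(query, K):
    """query: tuple of (file_index, segment_index) pairs in the summed query;
    the empty tuple stands for the null query."""
    if len(query) == 1:                    # a bare segment W_k^j
        k = query[0][0]
        return {i: Fraction(1 if i == k else 0) for i in range(1, K + 1)}
    return {i: Fraction(1, K) for i in range(1, K + 1)}

# Example usage for K = 4:
print(prediction_distribution(((2, 1),), 4))         # point mass on file 2
print(prediction_distribution(((1, 1), (3, 2)), 4))  # uniform over the 4 files
print(prediction_distribution((), 4))                # null query: uniform
```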
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
