Article

Cryptography in Hierarchical Coded Caching: System Model and Cost Analysis

1 CSE Department, Indian Institute of Technology Guwahati, Guwahati 781039, Assam, India
2 Cyber Science Lab, School of Computer Science, University of Guelph, Guelph, ON N1G 2W1, Canada
3 EEE Department, Indian Institute of Technology Guwahati, Guwahati 781039, Assam, India
4 Department of Computer Science and Software Engineering, Miami University, Oxford, OH 45056, USA
5 Department of Mathematics, Faculty of Education and Integrated Arts and Sciences, Waseda University, Tokyo 169-8050, Japan
* Author to whom correspondence should be addressed.
Entropy 2021, 23(11), 1459; https://doi.org/10.3390/e23111459
Submission received: 8 October 2021 / Revised: 30 October 2021 / Accepted: 1 November 2021 / Published: 3 November 2021

Abstract: The idea behind network caching is to reduce network traffic during peak hours via transmitting frequently-requested content items to end users during off-peak hours. However, due to limited cache sizes and unpredictable access patterns, this might not totally eliminate the need for data transmission during peak hours. Coded caching was introduced to further reduce the peak hour traffic. The idea of coded caching is based on sending coded content which can be decoded in different ways by different users. This allows the server to service multiple requests by transmitting a single content item. Research works regarding coded caching traditionally adopt a simple network topology consisting of a single server, a single hub, a shared link connecting the server to the hub, and private links which connect the users to the hub. Building on the results of Sengupta et al. (IEEE Trans. Inf. Forensics Secur., 2015), we propose and evaluate a yet more complex system model that takes into consideration both throughput and security via combining the mentioned ideas. It is demonstrated that the achievable rates in the proposed model are within a constant multiplicative and additive gap with the minimum secure rates.

1. Introduction

Coded caching, proposed by Maddah-Ali and Niesen [1], refers to an augmented variant of caching. Coded caching follows two strategies during two transmission phases in order to avoid a traffic bottleneck in the network. The first transmission phase, referred to as the placement phase, takes place in off-peak hours. During this phase, the system attempts to place frequently-demanded content items in the local memories of the corresponding interested users in order to avoid unnecessary transmission during peak time. This helps mitigate the network bandwidth overutilization and underutilization problems during peak and off-peak intervals, respectively. An effective placement strategy should consider the statistical and probabilistic nature of the users' access patterns. The second phase, i.e., the delivery phase, manages the transmission in peak hours. The ideal goal in the latter phase is to send only a single coded content item which is a function of the originally-requested content items. In the ideal case, each user should be able to calculate its own demanded item from the transmitted item. The closer the system comes to this goal, the lower the amount of transmission (the rate) required during the delivery phase.
The authors in [1] made several simplifying assumptions when establishing the first system model for a coded caching scheme. They assumed a simple network based on a star topology which provides one-way content transmission from a single server storing $N$ files, each of size $F$ bits, to $K$ users, each having a cache of size $MF$ bits. Each user requests a single file during the delivery phase. Every file sent by the server passes through a single shared link and arrives at the hub, where it is duplicated and transmitted to each user through a private link.
This system model is obviously not realistic enough because it ignores many considerations of a real-world network among which we focus on scalability and security in this paper. There are various security-related issues, such as confidentiality, privacy, and distributed denial-of-service (DDoS) attack protection which need to be addressed in coded caching. Among the mentioned issues, confidentiality has received the most focus in recent years [2,3]. Previous works in this area have augmented the coded caching system model by adding an adversary with access to the shared link only during the delivery phase. The space required to store the cryptographic keys in the server memory and user caches, as well as the extra traffic caused by key exchange mechanisms should be considered as obvious costs of this variant of coded caching.
In order to address the scalability issue, some researchers have augmented the coded caching system model in another way via proposing hierarchical network topology [4]. In the proposed topology, the main server is mirrored in each cluster of users. This allows part of the traffic to be locally handled in user clusters which leads to improved scalability. This improvement is achieved at the cost of redundant servers and links.
Although scalability and security have been separately examined in previous research, the literature in this area lacks a study on the possibility or the costs of considering both issues at the same time. This paper addresses both of the mentioned issues via considering confidential content transmission over a hierarchical network. This goal is achieved by further augmenting the coded caching system model, as well as analyzing the related costs. In our proposed system model, the adversary can eavesdrop on the shared links in each hierarchy level during the peak interval.
The costs of scalability have already been analyzed in previous research [4]. We compare the results of our mathematical cost evaluations with those obtained in [4] to analyze the extra cost posed by confidentiality considerations. The key contribution of the paper is the result that although the achievable rates are within a constant multiplicative and additive gap to the corresponding lower bounds in both schemes, confidentiality causes the constants to grow larger.
The rest of this paper is organized as follows. Section 2 defines the problem we are tackling in this paper. This section first reviews relevant works and presents some preliminaries, and then discusses the shortcomings of the previous works which motivate our work. Section 3 explains the secure hierarchical coded caching scheme and describes the system model and configuration. The fundamental limits as well as the costs are analyzed in Section 4. In this section, the secure achievable rates, the memory requirements, and the lower bounds on the rates are calculated. A gap analysis between the secure achievable rates and the corresponding lower bounds is presented in Section 5. Finally, Section 6 concludes the paper and suggests topics for further research.

2. Problem Statement

In this section, we first present some preliminary discussions regarding coded caching and review the related literature and then highlight some shortcomings in the related works which motivate us to propose the secure hierarchical coded caching scheme.

2.1. Related Works

Caching is a solution to the problem of temporally-nonuniform access to contents stored in servers, which may cause the network bandwidth to be underutilized in the off-peak interval while creating a bottleneck in the peak interval. This technique helps achieve more uniform network traffic and mitigate the bottleneck problem by allowing the system to store frequently-accessed content items in local caches during the off-peak time.
Coded caching has been a research focus during recent years [5,6,7,8,9]. Coded caching is finding its application in modern technologies and services, such as content delivery [10,11,12], mobile computing [13,14,15], and information-centric networks [16]. Different aspects of coded caching have recently been studied, among which we can refer to [17], centralized [18,19] and decentralized [18,19] coded caching, placement [20] and delivery [21,22] schemes, as well as an added pre-fetching phase [23], multicasting [22,24,25,26], scheduling [27], error correction [28], clustering [29], heterogeneity [12,25,30], the impact of file size [31,32], dealing with non-uniform user demands [33], and peak-time traffic reduction [1]. Moreover, security in coded caching has been considered as a concern [20,34,35,36], and cryptography has been among the best-studied security mechanisms for use in coded caching [37,38].
Examining the above problems has led to different variants of caching schemes. In this paper, we focus on coded caching schemes which try to service as many user requests as possible by transmitting a single coded data item in the peak time. Coded caching schemes can be classified into the following categories with respect to their behaviors in the placement phase.

2.2. Centralized Schemes

In these schemes, the server decides which data items are to be stored in the user caches during the placement phase [1,39,40,41,42,43]. It has been shown that centralized coded caching reduces the rate by a multiplicative factor of $\frac{1}{1+KM/N}$. This factor is referred to as the global caching gain. As shown in [1], the centralized coded caching rate $R_C(M)$ is given by (1),
$$R_C(M) \le K \cdot \left(1 - \frac{M}{N}\right) \cdot \min\left\{\frac{1}{1 + KM/N},\ \frac{N}{K}\right\}. \quad (1)$$
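As an illustration, the rate expression in (1) can be evaluated numerically. The following is a minimal Python sketch; the function and parameter names are ours, not from [1]:

```python
# Minimal sketch of Equation (1); function and parameter names are ours.
def rate_centralized(K, N, M):
    """Centralized coded caching rate: K * (1 - M/N) * min{1/(1 + KM/N), N/K}."""
    global_gain = 1.0 / (1.0 + K * M / N)  # the global caching gain factor
    return K * (1.0 - M / N) * min(global_gain, N / K)

# No caching (M = 0) leaves K unicast-equivalent transmissions; full caching needs none.
print(rate_centralized(10, 10, 0))   # -> 10.0
print(rate_centralized(10, 10, 10))  # -> 0.0
```

The rate decreases monotonically in the cache size $M$, reflecting the global caching gain.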

2.3. Decentralized Schemes

In the latter schemes, users are allowed to store random data in their caches. It was shown in [44] that the rate [1] in decentralized coded caching can be obtained from (2),
$$R_D(M) \le K \cdot \left(1 - \frac{M}{N}\right) \cdot \min\left\{\frac{N}{KM}\left(1 - \left(1 - \frac{M}{N}\right)^K\right),\ \frac{N}{K}\right\}. \quad (2)$$
An important point to note here is that the term "decentralized" does not refer to the underlying network; the network topology adopted in [44,45] is the same as the one considered in [1].
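The decentralized rate in (2) can be sketched in the same spirit; names are ours, and the $M = 0$ branch uses the limiting value of the expression (a convenience we add, since the formula divides by $M$):

```python
# Minimal sketch of Equation (2); names are ours. The M = 0 branch uses the
# limiting value of the min's first argument (which tends to 1 as M -> 0).
def rate_decentralized(K, N, M):
    """Decentralized coded caching rate from Equation (2)."""
    if M == 0:
        return K * min(1.0, N / K)
    term = (N / (K * M)) * (1.0 - (1.0 - M / N) ** K)
    return K * (1.0 - M / N) * min(term, N / K)

# Example evaluation for K = 5 users, N = 10 files, cache size M = 2.
print(rate_decentralized(5, 10, 2))
```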

2.4. Hierarchical Coded Caching Scheme

The scheme introduced in [4] proposes a hierarchical coded caching scheme in which the content stored in the main server can be mirrored by intermediate servers in different levels of the hierarchy before being placed in end user caches. In this scheme, the requests issued by each end user are first forwarded to the closest intermediate server. If not serviced, the request is then forwarded to the higher hierarchy level. This implies the existence of different peak and off-peak intervals in different hierarchy levels.
Two different caching schemes have been proposed in [4]. The first scheme, referred to as Scheme A, allows simultaneous coded multicasting in both hierarchy levels. Each mirror first downloads the content items requested by its corresponding users from the main server. Then, the items are coded and forwarded to the users. In Scheme B, mirrors act as memory-less routers. They receive the items from the main server and forward them without storing or coding them. It has been demonstrated that both schemes can individually perform sub-optimally [4]. The authors of [4] argued that, because of the disjunctive relation between Scheme A and Scheme B, the rate of each link is the sum of the individual rates induced by the two schemes. They proposed a hybrid scheme, named the generalized coded caching scheme, that attempts to incorporate a proper combination of Scheme A and Scheme B in order to approximately minimize the overall rate.

2.5. Secure Coded Caching Scheme

The scheme presented in [3] argued that the shared link may be eavesdropped on by an adversary since it is publicly accessible as a broadcast medium. Thus, they proposed a one-time pad (OTP) cryptosystem [46] to preserve the confidentiality of the data items exchanged through this link. In their proposed scheme, the keys are placed in the user caches along with the data in the placement phase. These confidentiality measures can be applied in both centralized and decentralized coded caching systems.
It is demonstrated in [3] that the secure rates for the centralized scheme and the decentralized scheme can be obtained by replacing $M/N$ with $(M-1)/(N-1)$ in Equations (1) and (2), respectively. The authors of [4] argued that the overall rate of the hierarchical network is the sum of the individual rates in the different levels of the hierarchy. Thus, if the overall rate is to be minimized, both levels should operate at their minimum rates.
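The cost of this substitution can be illustrated with a small, hypothetical sketch; it applies $M/N \to (M-1)/(N-1)$ uniformly in (1), which is our reading of the construction, and the example numbers are arbitrary:

```python
# Hypothetical sketch of the key-storage substitution M/N -> (M-1)/(N-1),
# applied uniformly to Equation (1); names and example numbers are ours.
def rate_centralized(K, N, M):
    return K * (1 - M / N) * min(1 / (1 + K * M / N), N / K)

def rate_centralized_secure(K, N, M):
    # One unit of each cache is reserved for keys, shrinking the useful fraction.
    ratio = (M - 1) / (N - 1)
    return K * (1 - ratio) * min(1 / (1 + K * ratio), N / K)

# For the same cache budget, the secure scheme needs a higher delivery rate.
print(rate_centralized(10, 20, 5), rate_centralized_secure(10, 20, 5))
```

This makes the extra cost of confidentiality visible: the effective cache fraction shrinks, so the achievable rate grows for the same memory budget.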
The relationship between the goals followed by the mentioned schemes motivates our work in this paper. Moreover, we compare our results with the ones obtained in [4] as reference.

2.6. Motivations

The researchers who proposed the idea of coded caching made several simplifying assumptions regarding the system model [1]. These assumptions made the core idea more manageable in its early days. However, several aspects of the primary system model obviously need to be revisited in order for the scheme to be applicable to real-world networks. Scalability and security are two aspects considered by other researchers [3,4]. However, there are still several related issues which can motivate further research. For example, it should be considered that confidentiality (addressed in [3]) is not the only aspect of security. Moreover, the network topology (studied in [4]) is not the only factor affecting scalability. What motivates our work in this paper, however, is the lack of research on a system model which is both secure and scalable.
Achieving the confidentiality promised in [3], as well as the scalability of the network studied in [4], by combining both ideas looks like an enticing and natural idea. However, the important issue to consider here is that combining these ideas can bring about new problems. In fact, the traffic and memory space overhead caused by secure coded caching works against the scalability aimed at by the hierarchical network. The key transmission occupies network bandwidth, which adversely affects scalability. This problem becomes more prominent when we consider the fact that OTP requires a new key for each single transmission. On the other hand, storing the keys in user caches prevents some frequently-requested data items from being stored during the placement phase because of the limited cache sizes. This will affect the peak time rate and may, consequently, overshadow the scalability of the underlying hierarchical network. Thus, any research focusing on simultaneous confidentiality and scalability should consider the trade-off between the two parameters. This trade-off will appear as an extra cost induced by the security-related constraints, which should be tolerated by the hierarchical coded caching scheme.
In this paper, we first present an extended coded caching scheme which incorporates OTP confidentiality provisions and hierarchical network topology in the system model. Then, we analyze the extra cost induced by confidentiality via comparing the rate bounds to the case of non-secure hierarchical coded caching.

3. Secure Hierarchical Coded Caching

In this section, we present our secure hierarchical coded caching scheme and the related system model. Our system model needs to be defined in two aspects. We first introduce the topology and resources of the underlying network and then discuss the caching scheme which describes the transmissions in placement and delivery phases. Next, we discuss the security-related considerations.
In terms of topology, we adopt the 2-level hierarchical topology described in [4]. In the top level of the hierarchy, the main server is connected to a hub via a shared link and then to mirror servers via separate links. In each cluster in the next hierarchy level, a shared link connects the mirror server to the hub, while end users are connected to the same hub using separate links. We assume the number of clusters to be $K_1$, each of which connects $K_2$ end users.
As for the resources, the main server is assumed to store $N$ files, represented by $W_1$ through $W_N$, each of size $F$ bits. We assume that the bits in a file are independent and uniformly distributed. The cache sizes in the mirrors and the end users are assumed to be $M_1F$ and $M_2F$ bits, respectively. The main and mirror servers are assumed to have unlimited processing power.
With respect to the caching scheme, we will assume the generalized caching scheme presented in [4] which is a combination of Scheme A and Scheme B. We follow the procedure to find the most efficient combination of the schemes.
During the delivery phase, each user makes exactly one demand. The local demands in each cluster are collected by the corresponding mirror server and then forwarded to the main server. The demand issued by $U(i,j)$ is represented by the element $d_{i,j}$ in the demand matrix $\mathbf{D}$. According to the demands, the main server encodes the proper content along with the orthogonal keys and transmits them within a file $X_\mathbf{D}$ of size $R_{S1}F$ bits to all mirrors. Then, each mirror re-encodes (Scheme A) or forwards (Scheme B) the data requested by its corresponding users along with the related keys and transmits them within a file $Y_\mathbf{D}$ of size $R_{S2}F$ bits.
Security-related constraints are considered in order to keep the transferred contents confidential from an external adversary assumed to have access to every shared link. In order to add confidentiality to our caching scheme, we adopt the security constraints proposed in [2,3]. Adopting the orthogonal key scheme proposed in [3], user caches, as well as mirror server memories, are considered to be partitioned into Data and Key regions in order to keep space for storing the keys in the placement phase. Figure 1 shows the access model of the adversary as well as the security-related configuration.
The mentioned security constraints guarantee that $I(X_\mathbf{D}; W_1, W_2, \ldots, W_N) = \epsilon_1$ and $I(Y_\mathbf{D}; W_1, W_2, \ldots, W_N) = \epsilon_2$, where $\epsilon_1 \to 0$ and $\epsilon_2 \to 0$, which states that the external adversary cannot reveal any information regarding the files $W_1, W_2, \ldots, W_N$ by eavesdropping on the shared links without access to the users' and mirrors' caches. It is to be noted that $\epsilon_1 \to 0$ and $\epsilon_2 \to 0$ hold when the file size is sufficiently large, i.e., when the file size $\to \infty$. The minimum number of mirrors or users that need to be compromised in order to break the security was discussed in [3].
Adopting the security constraints from [3] requires some extra assumption regarding the placement phase in Scheme A. Since the users cannot establish an immediate communication with the main server, we assume a delegation between the main server and mirror servers in the placement phase in Scheme A. It means that the mirror servers are trusted and granted access to the keys because they need to decrypt and encrypt the contents again before and after re-encoding them.
Another assumption adopted from [3] in our system model is that every user is interested in no more than one file in the delivery phase and the demanded files are mutually different. The system cannot allocate resources, such as private links, network bandwidth, and cache space, to a user with no demands in the delivery phase. Therefore, we suppose every user makes exactly one request in this phase. The latter assumptions obviously result in $N \ge K_1K_2$ as a criterion for the server to be able to answer all user requests. Again, we note that it is not reasonable to store files which will never be demanded. Thus, we assume that $N = K_1K_2$. Throughout, we assume that the placement phase is secure and the links are error-free.
Let us represent the secure rate in the top hierarchy level by $R_{S1}$ and the second-level secure rate by $R_{S2}$. For a demand matrix $\mathbf{D}$ and a large-enough file size $F$, a tuple $(M_1, M_2, R_{S1}, R_{S2})$ is said to be feasible for $\mathbf{D}$ if each user $U(i,j)$ is able to recover its requested file $d_{i,j}$ securely with probability arbitrarily close to unity. Moreover, $(M_1, M_2, R_{S1}, R_{S2})$ is feasible if it is feasible for all possible request matrices $\mathbf{D}$. Throughout, we assume the feasible rate region in our analysis.

4. Fundamental Limits and Cost Analysis

The procedure we follow in our evaluations in this section can be described as follows. In order to minimize the secure achievable rate in the generalized scheme, we try to find the most effective combination of Schemes A and B. To do this, we first parameterize the combination. We assume that a fraction $\alpha$ of each file residing in the server (as well as of the transmissions in the top hierarchy level) is ruled by Scheme A and the rest ($1-\alpha$) is transmitted on the basis of Scheme B. The corresponding fractions in the user cache (as well as of the transmissions in the second hierarchy level) are assumed to be $\beta$ and $1-\beta$, respectively. Then we try to find the best possible values for $\alpha$ and $\beta$ which will result in the most effective combination. We denote the latter values by $\alpha^*$ and $\beta^*$. In the next step, we calculate the secure achievable rate for the generalized scheme via calculating the rates for both Schemes A and B and then combining the results, assigning the values $\alpha^*$ and $\beta^*$ to $\alpha$ and $\beta$. We calculate the lower bounds on the rates through a similar procedure and then analyze the gap between the achievable rates and the rates specified by the lower bounds. A comparison between our results and those obtained in [4] highlights the cost of security in hierarchical network caching.

4.1. Preliminary Discussions

While analyzing the rates in each scheme, we separately consider each of the three regimes proposed in [4]. This makes it plausible to compare our results to those obtained in [4]. The mentioned regimes are characterized in terms of $M_1$ and $M_2$ as shown in (3),
$$\text{Regime 1: } M_1 + M_2K_2 \ge N \text{ and } 0 \le M_1 \le N/4; \qquad \text{Regime 2: } M_1 + M_2K_2 < N; \qquad \text{Regime 3: } M_1 + M_2K_2 \ge N \text{ and } N/4 < M_1 \le N. \quad (3)$$
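The regime classification in (3) amounts to two threshold tests; a minimal sketch (the function name is ours):

```python
# Minimal sketch of the regime classification in Equation (3); name is ours.
def regime(M1, M2, K2, N):
    """Return the memory regime (1, 2, or 3) for the pair (M1, M2)."""
    if M1 + M2 * K2 >= N and 0 <= M1 <= N / 4:
        return 1
    if M1 + M2 * K2 < N:
        return 2
    return 3  # M1 + M2*K2 >= N and N/4 < M1 <= N

assert regime(1, 4, 2, 8) == 1  # 1 + 8 >= 8 and 1 <= 2
assert regime(1, 2, 2, 8) == 2  # 1 + 4 < 8
assert regime(3, 4, 2, 8) == 3  # 3 + 8 >= 8 and 3 > 2
```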
The results in [4], to which we compare our own results, are as follows. The optimum values of $\alpha$ and $\beta$ for the mentioned regimes in the non-secure hierarchical coded caching scheme are [4],
$$(\alpha^*, \beta^*) \approx \begin{cases} \left(\dfrac{M_1}{N},\ \dfrac{M_1}{N}\right) & \text{in regime } 1, \\[4pt] \left(\dfrac{M_1}{M_1 + M_2K_2},\ 0\right) & \text{in regime } 2, \\[4pt] \left(\dfrac{M_1}{N},\ \dfrac{1}{4}\right) & \text{in regime } 3. \end{cases} \quad (4)$$
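The $(\alpha^*, \beta^*)$ selection in (4) can be written as a small lookup; a hedged sketch with our own naming:

```python
# Hedged sketch of the (alpha*, beta*) selection in Equation (4); names ours.
def split_parameters(M1, M2, K2, N):
    """Near-optimal split between Scheme A and Scheme B for each regime of [4]."""
    if M1 + M2 * K2 >= N and M1 <= N / 4:  # regime 1
        return M1 / N, M1 / N
    if M1 + M2 * K2 < N:                    # regime 2
        return M1 / (M1 + M2 * K2), 0.0
    return M1 / N, 1 / 4                    # regime 3

print(split_parameters(1, 4, 2, 8))  # regime 1 -> (0.125, 0.125)
```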
Moreover, the corresponding non-secure achievable rates for Scheme A and Scheme B have been calculated as functions of $\alpha^*$ and $\beta^*$ in [4],
$$R_1(\alpha^*, \beta^*) \approx \begin{cases} \min\left\{K_1K_2 - 1,\ \dfrac{N}{M_2}\right\} & \text{in regime } 1, \\[4pt] \min\left\{K_1K_2,\ \dfrac{M_1}{M_1 + M_2K_2} \cdot \dfrac{(N - M_1)K_2}{M_1 + M_2K_2} + \dfrac{M_1}{M_1 + M_2K_2} \cdot \dfrac{NK_2 - M_1}{M_1 + M_2K_2}\right\} & \text{in regime } 2, \\[4pt] \dfrac{(N - M_1)^2}{NM_2} & \text{in regime } 3, \end{cases} \quad (5)$$
$$R_2(\alpha^*, \beta^*) \approx \min\left\{K_2,\ \frac{N}{M_2}\right\}. \quad (6)$$
See Figure 2 for the different regimes of $M_1, M_2$ for $\alpha^*$ and $\beta^*$. In (4) and (5), the approximation is within a constant additive and multiplicative gap, as given by (7) and (8),
$$R_1 \ge R_1^{lb}(M_1, M_2) \ge \frac{1}{60}R_1(\alpha^*, \beta^*) - 4, \quad (7)$$
$$R_2 \ge R_2^{lb}(M_1, M_2) \ge \frac{1}{36}R_1(\alpha^*, \beta^*) - 16. \quad (8)$$

4.2. Secure Achievable Rates

Before beginning to develop our mathematical models, let us introduce some notation we will use in our equations. We will refer to the $j$th user ($j \in \{1, 2, \ldots, K_2\}$) in the $i$th cluster, where $i \in \{1, 2, \ldots, K_1\}$, as $U(i,j)$, and refer to the corresponding cache as $C(i,j)$. Let us represent the coded content items transmitted in the first and second levels of the hierarchy by $X_\mathbf{D}$ and $Y_\mathbf{D}$, respectively, where $\mathbf{D}$ is the request matrix. Furthermore, let us represent the secure rate in the top hierarchy level by $R_{S1}$ and the second-level secure rate by $R_{S2}$.
Now, let us begin the derivation of our model by calculating $R_{S1}$ and $R_{S2}$ for Scheme A. For $N$ files and $K_1$ mirrors, each with a cache of size $M_1 = \frac{N - K_2}{K_1} \cdot t_1 + K_2$, where $t_1 \in \{0, 1, 2, \ldots, K_1\}$, $R_{S1}$ for Scheme A is given by
$$R_{S1}^A = K_2 \cdot r\left(\frac{M_1 - K_2}{N - K_2},\ K_1\right), \quad (9)$$
where $r(\cdot, \cdot)$ is defined as
$$r\left(\frac{M}{N},\ K\right) \triangleq \left[K \cdot \left(1 - \frac{M}{N}\right) \cdot \frac{N}{KM}\left(1 - \left(1 - \frac{M}{N}\right)^K\right)\right]^+, \quad (10)$$
with $[x]^+ \triangleq \max\{x, 0\}$. Moreover, $R_{S2}$ for Scheme A, considering $K_2$ users each with a cache of size $M_2 = \frac{N - 1}{K_2} \cdot t_2 + 1$, where $t_2 \in \{0, 1, \ldots, K_2\}$, can be obtained from
$$R_{S2}^A = r\left(\frac{M_2 - 1}{N - 1},\ K_2\right). \quad (11)$$
Similarly, $R_{S1}$ and $R_{S2}$ for Scheme B can be calculated as
$$R_{S1}^B = r\left(\frac{M_2 - 1}{N - 1},\ K_1K_2\right), \quad (12)$$
$$R_{S2}^B = r\left(\frac{M_2 - 1}{N - 1},\ K_2\right). \quad (13)$$
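Equations (9)-(13) can be evaluated numerically through the helper $r(\cdot, \cdot)$ of (10). The following sketch is ours; in particular, the $x \le 0$ branch of `r` returns the limiting value $K$, which is an assumption we add for convenience:

```python
# Sketch of Equations (9)-(13) built on r(., .) from Equation (10); names are
# ours, and the x <= 0 branch of r returns the limiting value K (our assumption).
def r(x, K):
    """r(M/N, K) with x = M/N and [.]^+ clipping at zero."""
    if x <= 0:
        return float(K)
    return max(K * (1 - x) * (1 / (K * x)) * (1 - (1 - x) ** K), 0.0)

def scheme_A_rates(N, K1, K2, M1, M2):
    RS1_A = K2 * r((M1 - K2) / (N - K2), K1)  # Equation (9)
    RS2_A = r((M2 - 1) / (N - 1), K2)         # Equation (11)
    return RS1_A, RS2_A

def scheme_B_rates(N, K1, K2, M2):
    RS1_B = r((M2 - 1) / (N - 1), K1 * K2)    # Equation (12)
    RS2_B = r((M2 - 1) / (N - 1), K2)         # Equation (13)
    return RS1_B, RS2_B

print(scheme_A_rates(8, 2, 4, 6, 3))
```

Note that (11) and (13) coincide, so the two schemes induce the same second-level rate before the $\alpha, \beta$ split is applied.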
Let us normalize the total file size, mirror memory size, and user cache size involved by Scheme A as shown in (14) and (15), respectively,
$$F \to \alpha F, \qquad M_1 \to \frac{M_1F}{\alpha F} = \frac{M_1}{\alpha}, \qquad M_2 \to \frac{\beta M_2F}{\alpha F} = \frac{\beta M_2}{\alpha}. \quad (14), (15)$$
Moreover, let us normalize the file size and user cache size involved by Scheme B as shown in (16),
$$F \to (1 - \alpha)F, \qquad M_2 \to \frac{(1 - \beta)M_2F}{(1 - \alpha)F} = \frac{(1 - \beta)M_2}{1 - \alpha}. \quad (16)$$
Thus, the secure rates induced by Scheme A and Scheme B can be normalized with respect to the file size as given by
$$R_{S1}^A = \alpha K_2 \cdot r\left(\frac{M_1/\alpha - K_2}{N - K_2},\ K_1\right) = \alpha K_2 \cdot r\left(\frac{M_1 - \alpha K_2}{\alpha(N - K_2)},\ K_1\right), \quad (17a)$$
$$R_{S2}^A = \alpha \cdot r\left(\frac{\beta M_2/\alpha - 1}{N - 1},\ K_2\right) = \alpha \cdot r\left(\frac{\beta M_2 - \alpha}{\alpha(N - 1)},\ K_2\right), \quad (17b)$$
and
$$R_{S1}^B = (1 - \alpha) \cdot r\left(\frac{\frac{(1 - \beta)M_2}{1 - \alpha} - 1}{N - 1},\ K_1K_2\right) = (1 - \alpha) \cdot r\left(\frac{(1 - \beta)M_2 - (1 - \alpha)}{(1 - \alpha)(N - 1)},\ K_1K_2\right), \quad (18a)$$
$$R_{S2}^B = (1 - \alpha) \cdot r\left(\frac{\frac{(1 - \beta)M_2}{1 - \alpha} - 1}{N - 1},\ K_2\right) = (1 - \alpha) \cdot r\left(\frac{(1 - \beta)M_2 - (1 - \alpha)}{(1 - \alpha)(N - 1)},\ K_2\right). \quad (18b)$$
In the next step, we will calculate $\alpha^*$ and $\beta^*$ for each of the regimes in a way that both $R_{S1}(\alpha, \beta)$ and $R_{S2}(\alpha, \beta)$ are minimized. Let us begin with regime 1. According to (17b) and (18b), for $\alpha = \beta$ it holds that $R_{S2}(\alpha, \alpha) = r\left(\frac{M_2 - 1}{N - 1}, K_2\right)$. It can be verified that $\alpha = M_1/N$ results in a near-optimal value for $R_{S1}(\alpha, \alpha)$ in regime 1. Thus, we choose $\alpha^* = M_1/N$ and $\beta^* = M_1/N$ in this regime. Choosing $\alpha^* = M_1/N$ allows each mirror to store the first part of each of the $N$ files in the first transmission. Thus, there will be no need for further transmission between the server and the mirrors in the placement phase, or for key memory space in the mirrors.
Now let us proceed with regime 2. In this regime, it can be verified that $M_2 < N/K_2$, which means that the $M_2$ cache area is very small. Thus, $R_{S2}(\alpha, \beta)$ will be of order $K_2$ for any choice of $\alpha$ and $\beta$. Therefore, we only need to choose $\alpha$ and $\beta$ in a way that $R_{S1}(\alpha, \beta)$ is minimized. In this regime, the optimized values for $\alpha$ and $\beta$ can be obtained as $\alpha^* = \frac{M_1}{M_1 + M_2K_2}$ and $\beta^* = \frac{M_1}{M_2(M_1 + M_2K_2)}$.
In regime 3 (as in regime 1), a choice of $\alpha = \beta = M_1/N$ is preferable. However, it should be considered that a large value of $\beta$ leads to an unacceptably large value of $R_{S1}(\alpha, \beta)$. Thus, a minimum threshold of $\beta^* = \frac{M_1}{M_2N}$ should be considered. Similar to the case of regime 1, no extra transmission between the server and the mirrors in the placement phase, and no key area in the cache, is required in this regime.
After deciding the proper choice of $\alpha^*, \beta^*$, let us calculate $R_{S1}(\alpha^*, \beta^*)$ and $R_{S2}(\alpha^*, \beta^*)$ for the generalized scheme as a combination of the secure rates in the two Schemes A and B.
Theorem 1.
We have the following conditions on $R_{S1}(\alpha^*, \beta^*)$ and $R_{S2}(\alpha^*, \beta^*)$:
$$R_{S1}(\alpha^*, \beta^*) \le \begin{cases} \min\left\{K_1K_2,\ \dfrac{N - 1}{M_2 - 1}\right\} & \text{in regime } 1, \\[4pt] \min\left\{K_1K_2,\ \dfrac{M_1}{M_1 + M_2K_2} \cdot \dfrac{K_2(N - M_1)}{M_1 + (M_2 - 1)K_2} + \dfrac{M_2K_2}{M_1 + M_2K_2} \cdot \dfrac{(N - 1)K_2M_2}{(M_2 - 1)(M_1 + M_2K_2)}\right\} & \text{in regime } 2, \\[4pt] \dfrac{(N - M_1)^2}{N(M_2 - 1)} & \text{in regime } 3, \end{cases}$$
and
$$R_{S2}(\alpha^*, \beta^*) \le K_1 \cdot \min\left\{K_2,\ \frac{N - 1}{M_2 - 1}\right\}.$$
Proof. 
The normalized achievable secure rates for the generalized scheme can be calculated as functions of $\alpha$ and $\beta$,
$$R_{S1}(\alpha, \beta) \triangleq R_{S1}^A + R_{S1}^B = \alpha K_2 \cdot r\left(\frac{M_1 - \alpha K_2}{\alpha(N - K_2)},\ K_1\right) + (1 - \alpha) \cdot r\left(\frac{(1 - \beta)M_2 - (1 - \alpha)}{(1 - \alpha)(N - 1)},\ K_1K_2\right), \quad (20)$$
and
$$R_{S2}(\alpha, \beta) \triangleq R_{S2}^A + R_{S2}^B = \alpha \cdot r\left(\frac{\beta M_2 - \alpha}{\alpha(N - 1)},\ K_2\right) + (1 - \alpha) \cdot r\left(\frac{(1 - \beta)M_2 - (1 - \alpha)}{(1 - \alpha)(N - 1)},\ K_2\right). \quad (21)$$
With the proper choice of $\alpha^*, \beta^*$, we proceed to calculate the secure achievable rates $R_{S1}(\alpha^*, \beta^*)$ and $R_{S2}(\alpha^*, \beta^*)$. As we observe, the secure achievable rates for the generalized caching scheme are functions of $r(\cdot, \cdot)$, as defined in Equation (10). We observe the following,
$$r\left(\frac{M}{N},\ K\right) \le \begin{cases} \min\left\{K,\ \dfrac{N}{M} - 1\right\}, & M \le N, \\[4pt] 0, & \text{otherwise}. \end{cases}$$
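This bound on $r(\cdot, \cdot)$ can be spot-checked numerically; a small sketch of ours (the parameter grid is arbitrary, and the $x \le 0$ branch is our own convenience):

```python
# Numeric spot-check of the bound r(M/N, K) <= min{K, N/M - 1}; grid is arbitrary.
def r(x, K):
    if x <= 0:
        return float(K)
    return max(K * (1 - x) * (1 / (K * x)) * (1 - (1 - x) ** K), 0.0)

for N in (10, 50):
    for M in (1, 2, 5):
        for K in (2, 8):
            assert r(M / N, K) <= min(K, N / M - 1) + 1e-9
print("bound holds on the sampled grid")
```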
Now let us proceed with calculating the secure achievable rates for each of the regimes of $M_1$ and $M_2$, beginning with regime 1. According to (20) and (21), the secure achievable rates in this regime can be upper bounded as shown in inequalities (22a) and (22b),
$$R_{S1}(\alpha^*, \beta^*) = \frac{M_1}{N}K_2 \cdot r(1, K_1) + \left(1 - \frac{M_1}{N}\right) \cdot r\left(\frac{M_2 - 1}{N - 1},\ K_1K_2\right) \le 0 + \min\left\{K_1K_2,\ \frac{N - 1}{M_2 - 1}\right\} = \min\left\{K_1K_2,\ \frac{N - 1}{M_2 - 1}\right\}, \quad (22a)$$
and
$$R_{S2}(\alpha^*, \beta^*) = \frac{M_1}{N} \cdot r\left(\frac{M_2 - 1}{N - 1},\ K_2\right) + \left(1 - \frac{M_1}{N}\right) \cdot r\left(\frac{M_2 - 1}{N - 1},\ K_2\right) = r\left(\frac{M_2 - 1}{N - 1},\ K_2\right) \le \min\left\{K_2,\ \frac{N - 1}{M_2 - 1}\right\}. \quad (22b)$$
Through a similar reasoning, the upper bound on the secure achievable rate $R_{S1}(\alpha^*, \beta^*)$ in regime 2 can be obtained from inequality (23a) (the form of the equations differs from regime 1).
$$\begin{aligned} R_{S1}(\alpha^*, \beta^*) &= \frac{M_1K_2}{M_1 + M_2K_2} \cdot r\left(\frac{M_1 + M_2K_2 - K_2}{N - K_2},\ K_1\right) + \frac{M_2K_2}{M_1 + M_2K_2} \cdot r\left(\frac{(M_2 - 1)(M_1 + M_2K_2)}{(N - 1)M_2K_2},\ K_1K_2\right) \\ &\le \frac{M_1K_2}{M_1 + K_2M_2} \cdot \min\left\{K_1,\ \frac{N - K_2}{M_1 + M_2K_2 - K_2} - 1\right\} + \frac{M_2K_2}{M_1 + M_2K_2} \cdot \min\left\{K_1K_2,\ \frac{(N - 1)M_2K_2}{(M_2 - 1)(M_1 + M_2K_2)} - 1\right\} \\ &\le \frac{M_1}{M_1 + M_2K_2} \cdot \min\left\{K_1K_2,\ \frac{K_2(N - M_1)}{M_1 + (M_2 - 1)K_2}\right\} + \frac{M_2K_2}{M_1 + M_2K_2} \cdot \min\left\{K_1K_2,\ \frac{(N - 1)K_2M_2}{(M_2 - 1)(M_1 + M_2K_2)}\right\} \\ &\le \min\left\{K_1K_2,\ \frac{M_1}{M_1 + M_2K_2} \cdot \frac{K_2(N - M_1)}{M_1 + (M_2 - 1)K_2} + \frac{M_2K_2}{M_1 + M_2K_2} \cdot \frac{(N - 1)K_2M_2}{(M_2 - 1)(M_1 + M_2K_2)}\right\}. \end{aligned} \quad (23a)$$
Furthermore, according to (20) and (21), the reader can easily verify that $R_{S2}(\alpha^*, \beta^*)$ in regime 2 will be upper bounded by
$$R_{S2}(\alpha^*, \beta^*) \le K_2 = \min\left\{K_2,\ \frac{N - 1}{M_2 - 1}\right\}. \quad (23b)$$
Additionally, for regime 3 we have,
$$\begin{aligned} R_{S1}(\alpha^*, \beta^*) &= \frac{M_1}{N}K_2 \cdot r(1, K_1) + \left(1 - \frac{M_1}{N}\right) \cdot r\left(\frac{(1 - 1/K_1)M_2 - (1 - M_1/N)}{(1 - M_1/N)(N - 1)},\ K_1K_2\right) \\ &\le 0 + \left(1 - \frac{M_1}{N}\right) \cdot \min\left\{K_1K_2,\ \frac{(N - M_1)(N - 1)K_1}{NM_2(K_1 - 1) - K_1(N - M_1)}\right\} \\ &\le \frac{N - M_1}{N} \cdot \frac{K_1(N - M_1)(N - 1)}{(K_1 - 1)N(M_2 - 1)} \\ &\le \frac{K_1}{K_1 - 1} \cdot \frac{(N - M_1)^2}{N(M_2 - 1)}, \end{aligned} \quad (24)$$
and
$$\begin{aligned} R_{S2}(\alpha^*, \beta^*) &= \frac{M_1}{N} \cdot r\left(\frac{M_2}{K_1M_1},\ K_2\right) + \left(1 - \frac{M_1}{N}\right) \cdot r\left(\frac{(1 - 1/K_1)M_2 - (1 - M_1/N)}{(1 - M_1/N)(N - 1)},\ K_2\right) \\ &\le \frac{M_1}{N} \cdot \min\left\{K_2,\ \frac{K_1(N - 1)}{M_2 - 1}\right\} + \left(1 - \frac{M_1}{N}\right) \cdot \min\left\{K_2,\ \frac{K_1(N - 1)}{M_2 - 1}\right\} \\ &= \min\left\{K_2,\ \frac{K_1(N - 1)}{M_2 - 1}\right\} \le K_1 \cdot \min\left\{K_2,\ \frac{N - 1}{M_2 - 1}\right\}. \end{aligned} \quad (25)$$
Summarizing our results for the generalized scheme from the discussions above, we have
$$R_{S1}(\alpha^*, \beta^*) \le \begin{cases} \min\left\{K_1K_2,\ \dfrac{N - 1}{M_2 - 1}\right\} & \text{in regime } 1, \\[4pt] \min\left\{K_1K_2,\ \dfrac{M_1}{M_1 + M_2K_2} \cdot \dfrac{K_2(N - M_1)}{M_1 + (M_2 - 1)K_2} + \dfrac{M_2K_2}{M_1 + M_2K_2} \cdot \dfrac{(N - 1)K_2M_2}{(M_2 - 1)(M_1 + M_2K_2)}\right\} & \text{in regime } 2, \\[4pt] \dfrac{(N - M_1)^2}{N(M_2 - 1)} & \text{in regime } 3, \end{cases}$$
and
$$R_{S2}(\alpha^*, \beta^*) \le K_1 \cdot \min\left\{K_2,\ \frac{N - 1}{M_2 - 1}\right\}.$$
The proof is now complete. □
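As a numerical sanity check of Theorem 1 in regime 1, one can evaluate the normalized rate of Equation (20) at $\alpha^* = \beta^* = M_1/N$ and compare it with the stated bound. The sketch below uses our own helper names and example parameters:

```python
# Sanity check of the regime-1 bound in Theorem 1 via Equation (20);
# helper names and the example parameters are ours.
def r(x, K):
    if x <= 0:
        return float(K)
    return max(K * (1 - x) * (1 / (K * x)) * (1 - (1 - x) ** K), 0.0)

def RS1_generalized(alpha, beta, N, K1, K2, M1, M2):
    """Equation (20): R_S1(alpha, beta) = R_S1^A + R_S1^B (normalized)."""
    part_a = alpha * K2 * r((M1 - alpha * K2) / (alpha * (N - K2)), K1)
    part_b = (1 - alpha) * r(
        ((1 - beta) * M2 - (1 - alpha)) / ((1 - alpha) * (N - 1)), K1 * K2
    )
    return part_a + part_b

# Regime 1: M1 + M2*K2 >= N and M1 <= N/4, with alpha* = beta* = M1/N.
N, K1, K2, M1, M2 = 8, 2, 4, 1, 4
rate = RS1_generalized(M1 / N, M1 / N, N, K1, K2, M1, M2)
assert 0 < rate <= min(K1 * K2, (N - 1) / (M2 - 1)) + 1e-9
```

With $\alpha^* = M_1/N$, the Scheme A term vanishes (its argument equals 1), so the whole rate is carried by the Scheme B term, as in (22a).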

4.3. Memory Requirements

An information-theoretic analysis will reveal some minimum requirements regarding the cache sizes in the mirrors and the user caches. Consider a caching system in which two files denoted by $A$ and $B$ reside in the main server ($N = 2$). Consider $K_1 = 2$ mirrors denoted by $m_1$ and $m_2$, each with cache size $M_1$. The cache contents stored by the mirrors in the placement phase are denoted by $Z_1^m$ and $Z_2^m$. Let us assume that mirror $m_1$ demands a content item $A_\alpha$ in the delivery phase, which is part of file $A$, and mirror $m_2$ demands part of file $B$, denoted by $B_\alpha$. Both demanded items are assumed to be of size $\alpha F$, which can be considered a fraction equal to $\alpha$ of the size of file $A$ or $B$. The mentioned demands can be represented by a demand vector $(d_1, d_2) = (A_\alpha, B_\alpha)$.
In this setup, the main server will transmit X ( A α , B α ) to the mirrors which should be capable of regenerating the items A α and B α when combined with Z 1 m and Z 2 m . From an information theoretical point of view, the criterion stated by inequality (26) should hold to make it possible to achieve this goal,
H ( A α , B α | X ( A α , B α ) , Z 1 m , Z 2 m ) ϵ .
For the security constraint between the server and the mirrors, inequality (27) should hold in order to keep the delivery phase transmissions confidential,
$$I(A_\alpha, B_\alpha;\, X(A_\alpha,B_\alpha)) \le \delta. \tag{27}$$
Using (26) and (27) we have
$$\begin{aligned} 2\alpha F &= H(A_\alpha, B_\alpha)\\ &= I(A_\alpha,B_\alpha;\, X(A_\alpha,B_\alpha), Z_1^m, Z_2^m) + H(A_\alpha,B_\alpha \,|\, X(A_\alpha,B_\alpha), Z_1^m, Z_2^m)\\ &\le I(A_\alpha,B_\alpha;\, X(A_\alpha,B_\alpha), Z_1^m, Z_2^m) + \epsilon\\ &= I(A_\alpha,B_\alpha;\, X(A_\alpha,B_\alpha)) + I(A_\alpha,B_\alpha;\, Z_1^m, Z_2^m \,|\, X(A_\alpha,B_\alpha)) + \epsilon\\ &\le I(A_\alpha,B_\alpha;\, Z_1^m, Z_2^m \,|\, X(A_\alpha,B_\alpha)) + \delta + \epsilon\\ &\le H(Z_1^m, Z_2^m \,|\, X(A_\alpha,B_\alpha)) + \delta + \epsilon\\ &\le 2M_1F + \delta + \epsilon. \end{aligned} \tag{28}$$
From (28) we immediately obtain
$$M_1 \ge \alpha - \frac{\delta}{F} - \frac{\epsilon}{F}. \tag{29}$$
When $\delta$ and $\epsilon$ approach zero, inequality (29) reduces to $M_1 \ge \alpha$. Next, we note that mirror $m_1$ should be able to recover both $A_\alpha$ and $B_\alpha$ from its single cache content $Z_1^m$ together with the two transmissions $X(A_\alpha,B_\alpha)$ and $X(B_\alpha,A_\alpha)$, sent in response to the demand vectors $(d_1,d_2) = (A_\alpha,B_\alpha)$ and $(d_1,d_2) = (B_\alpha,A_\alpha)$, respectively. The latter requirement leads to the following inequalities:
$$H(A_\alpha, B_\alpha \,|\, X(A_\alpha,B_\alpha), X(B_\alpha,A_\alpha), Z_1^m) \le \epsilon, \tag{30}$$
$$I(A_\alpha, B_\alpha;\, X(A_\alpha,B_\alpha)) \le \delta. \tag{31}$$
Through similar reasoning, the latter two inequalities lead to $R_s^* + M_1 \ge 2\alpha$, where $R_s^*$ is the minimum rate between the server and the mirrors.
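The omitted steps behind this claim can be sketched along the same lines as (28). The chain below is our own reconstruction, under the assumption that each delivery-phase transmission from the server carries at most $R_s^* F$ bits:

```latex
\begin{aligned}
2\alpha F &= H(A_\alpha, B_\alpha)\\
  &\le I\big(A_\alpha, B_\alpha;\; X(A_\alpha,B_\alpha), X(B_\alpha,A_\alpha), Z_1^m\big) + \epsilon
     && \text{by (30)}\\
  &\le I\big(A_\alpha, B_\alpha;\; X(B_\alpha,A_\alpha), Z_1^m \,\big|\, X(A_\alpha,B_\alpha)\big) + \delta + \epsilon
     && \text{by (31)}\\
  &\le H\big(X(B_\alpha,A_\alpha)\big) + H\big(Z_1^m\big) + \delta + \epsilon\\
  &\le R_s^* F + M_1 F + \delta + \epsilon.
\end{aligned}
```

Letting $\delta, \epsilon \to 0$ and dividing by $F$ then yields $R_s^* + M_1 \ge 2\alpha$.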

4.4. Lower Bounds

Now let us discuss the lower bounds on secure rates R S 1 and R S 2 for different values of M 1 , M 2 given the feasibility of ( M 1 , M 2 , R S 1 , R S 2 ) . To do this, we follow the approach taken in [3] for the secure non-hierarchical scheme and extend the discussions and results to the case of the secure hierarchical network.
Theorem 2.
We have
$$R_{S_1} \ge \max_{\substack{s_1\in\{1,2,\dots,K_1\}\\ s_2\in\{1,2,\dots,K_2\}}} \max\left\{ s_1s_2\left(1 - \frac{s_1M_1 + s_1s_2(M_2-1)}{N - 2s_1s_2}\right),\ \frac{s_1s_2\,(N - s_1M_1 - s_1s_2M_2)}{N} \right\} \triangleq R_{S_1}^{lb}(M_1,M_2),$$
and
$$R_{S_2} \ge \max_{t\in\{1,2,\dots,K_2\}} \frac{t(N-tM_2)}{N}.$$
Proof. 
Let us begin with the lower bound on $R_{S_1}$. For $s_1 \in \{1,2,\dots,K_1\}$ and $s_2 \in \{1,2,\dots,K_2\}$, suppose the first $s_1$ mirrors store $Z_1^m, Z_2^m, \dots, Z_{s_1}^m$. Furthermore, assume that for $i \in \{1,2,\dots,s_1\}$ and $j \in \{1,2,\dots,s_2\}$, user $U(i,j)$ caches $Z_{i,j}^u$, so that the users attached to mirror $i$ hold $Z_{i,1}^u, Z_{i,2}^u, \dots, Z_{i,s_2}^u$. Suppose these users issue the demand matrix $D^1$ defined by $d_{i,j}^1 = (i-1)s_2 + j$, which requests the first $s_1s_2$ files residing in the main server. The transmission $X_1 = X(d_{1,1},\dots,d_{s_1,s_2})$ of the main server, together with the mirror caches $Z_1^m,\dots,Z_{s_1}^m$ and the user caches $Z_{i,1}^u,\dots,Z_{i,s_2}^u$, must enable decoding of the files $W_1, W_2, \dots, W_{s_1s_2}$.
Similarly, consider a different request matrix $D'$ in which user $U(i,j)$ demands $d'_{i,j} = s_1s_2 + (i-1)s_2 + j$, i.e., the next $s_1s_2$ files from the server. The transmission $X_2$, together with the mirror caches $Z_1^m,\dots,Z_{s_1}^m$ and the user caches $Z_{i,1}^u,\dots,Z_{i,s_2}^u$, must enable decoding of the files $W_{s_1s_2+1}, W_{s_1s_2+2}, \dots, W_{2s_1s_2}$. Likewise, considering all $\lfloor N/(s_1s_2)\rfloor$ request matrices, the multicast transmissions $X_1,\dots,X_{\lfloor N/(s_1s_2)\rfloor}$, together with the mirror caches $Z_1^m, Z_2^m, \dots, Z_{s_1}^m$ and the user caches $Z_{i,1}^u,\dots,Z_{i,s_2}^u$, must allow recovery of the files $W_1,\dots,W_{s_1s_2\lfloor N/(s_1s_2)\rfloor}$. Let
$$\tilde{W} = \{W_1,\dots,W_{s_1s_2\lfloor N/(s_1s_2)\rfloor}\},\quad \tilde{X} = \{X_1,\dots,X_{\lfloor N/(s_1s_2)\rfloor}\},\quad \tilde{X}_{\setminus l} = \{X_1,\dots,X_{l-1},X_{l+1},\dots,X_{\lfloor N/(s_1s_2)\rfloor}\},$$
$$\tilde{Z}^m = \{Z_1^m,\dots,Z_{s_1}^m\},\quad \tilde{Z}^u = \{Z_{1,1}^u,\dots,Z_{1,s_2}^u, Z_{2,1}^u,\dots,Z_{s_1,s_2}^u\} = \{Z_{i,j}^u\}.$$
The feasibility of $(M_1,M_2,R_{S_1},R_{S_2})$ in our system model also requires that the files be decodable from the transmissions and caches, and that the external adversary be unable to retrieve any information about the content transmitted in the delivery phase. These criteria are formally described by inequalities (32) and (33):
$$H(\tilde{W} \,|\, \tilde{X}, \tilde{Z}^m, \tilde{Z}^u) \le \epsilon_1, \tag{32}$$
and
$$I(\tilde{W};\, X_l) \le \epsilon_2, \qquad l = 1,\dots,\lfloor N/(s_1s_2)\rfloor. \tag{33}$$
Consider the information flow consisting of the multicast transmissions $X_1,\dots,X_{\lfloor N/(s_1s_2)\rfloor}$, the mirror caches $Z_1^m, Z_2^m, \dots, Z_{s_1}^m$, and the user caches $Z_{i,1}^u,\dots,Z_{i,s_2}^u$ used to decode the files $W_1,\dots,W_{s_1s_2\lfloor N/(s_1s_2)\rfloor}$. We have
$$\begin{aligned} s_1s_2\left\lfloor\frac{N}{s_1s_2}\right\rfloor F &= H(\tilde{W}) = I(\tilde{W};\, \tilde{X}, \tilde{Z}^u, \tilde{Z}^m) + H(\tilde{W} \,|\, \tilde{X}, \tilde{Z}^u, \tilde{Z}^m)\\ &\le I(\tilde{W};\, \tilde{X}, \tilde{Z}^u, \tilde{Z}^m) + \epsilon_1\\ &= I(\tilde{W};\, X_l) + I(\tilde{W};\, \tilde{X}_{\setminus l}, \tilde{Z}^m, \tilde{Z}^u \,|\, X_l) + \epsilon_1\\ &\le I(\tilde{W};\, \tilde{X}_{\setminus l}, \tilde{Z}^m, \tilde{Z}^u \,|\, X_l) + \epsilon_1 + \epsilon_2\\ &\le H(\tilde{X}_{\setminus l}, \tilde{Z}^m, \tilde{Z}^u) + \epsilon, \quad \text{where } \epsilon = \epsilon_1 + \epsilon_2,\\ &\le \sum_{k=1,\,k\ne l}^{\lfloor N/(s_1s_2)\rfloor} H(X_k) + \sum_{i=1}^{s_1} H(Z_i^m) + \sum_{i=1}^{s_1}\sum_{j=1}^{s_2} H(Z_{i,j}^u) + \epsilon\\ &\le \left(\left\lfloor\frac{N}{s_1s_2}\right\rfloor - 1\right)R_{S_1}F + s_1M_1F + s_1s_2M_2F + \epsilon. \end{aligned}$$
So,
$$s_1s_2\left\lfloor\frac{N}{s_1s_2}\right\rfloor \le \left(\left\lfloor\frac{N}{s_1s_2}\right\rfloor - 1\right)R_{S_1} + s_1M_1 + s_1s_2M_2 + \frac{\epsilon}{F}. \tag{35}$$
Solving for $R_{S_1}$ and optimizing over all possible values of $s_1$ and $s_2$, we obtain
$$\begin{aligned} R_{S_1} &\ge \max_{\substack{s_1\in\{1,2,\dots,K_1\}\\ s_2\in\{1,2,\dots,K_2\}}} \lim_{\epsilon\to 0} \frac{1}{\lfloor N/(s_1s_2)\rfloor - 1}\left(s_1s_2\left\lfloor\frac{N}{s_1s_2}\right\rfloor - s_1M_1 - s_1s_2M_2 - \frac{\epsilon}{F}\right)\\ &\ge \max_{s_1,\,s_2} \left( s_1s_2 - \frac{s_1M_1 + s_1s_2(M_2-1)}{N/(s_1s_2) - 2} \right) = \max_{s_1,\,s_2}\; s_1s_2\left(1 - \frac{s_1M_1 + s_1s_2(M_2-1)}{N - 2s_1s_2}\right). \end{aligned} \tag{36}$$
We can obtain an alternative lower bound by using $\lceil N/(s_1s_2)\rceil$ transmissions instead of $\lfloor N/(s_1s_2)\rfloor$ in (35), so that all $N$ files are recovered:
$$R_{S_1} \ge \max_{\substack{s_1\in\{1,2,\dots,K_1\}\\ s_2\in\{1,2,\dots,K_2\}}} \frac{N - s_1M_1 - s_1s_2M_2}{\lceil N/(s_1s_2)\rceil - 1} \ge \max_{s_1,\,s_2} \frac{N - s_1M_1 - s_1s_2M_2}{N/(s_1s_2)} = \max_{s_1,\,s_2} \frac{s_1s_2\,(N - s_1M_1 - s_1s_2M_2)}{N}. \tag{37}$$
The inequalities (36) and (37) hold for any value of s 1 { 1 , 2 , , K 1 } and s 2 { 1 , 2 , , K 2 } . So, we obtain the following lower bound on R S 1 for the tuple ( M 1 , M 2 , R S 1 , R S 2 ) to be feasible,
$$R_{S_1} \ge \max_{\substack{s_1\in\{1,2,\dots,K_1\}\\ s_2\in\{1,2,\dots,K_2\}}} \max\left\{ s_1s_2\left(1 - \frac{s_1M_1 + s_1s_2(M_2-1)}{N - 2s_1s_2}\right),\ \frac{s_1s_2\,(N - s_1M_1 - s_1s_2M_2)}{N} \right\} \triangleq R_{S_1}^{lb}(M_1,M_2). \tag{38}$$
After calculating the lower bound on $R_{S_1}$, let us proceed with that of $R_{S_2}$, again assuming the feasibility of $(M_1,M_2,R_{S_1},R_{S_2})$. Let $t \in \{1,2,\dots,K_2\}$, and consider the $t$ users $U(1,1),\dots,U(1,t)$ with caches $Z_{1,1}^u, Z_{1,2}^u, \dots, Z_{1,t}^u$. Consider the request matrix $D$ with demands $d_{1,j} = j$ for $j \in \{1,2,\dots,t\}$, i.e., requesting the first $t$ files from the server. The transmission $Y_1 = Y(d_{1,1},\dots,d_{1,t})$, together with the caches $Z_{1,1}^u,\dots,Z_{1,t}^u$, must enable these users to decode the files $W_1,\dots,W_t$. Similarly, for a different request matrix $D'$ with demands $d'_{1,j} = t + j$, i.e., requesting the next $t$ files, the transmission $Y_2$ together with the same caches must enable decoding of $W_{t+1},\dots,W_{2t}$. Likewise, considering all $\lceil N/t\rceil$ request matrices, the multicast transmissions $Y_1,\dots,Y_{\lceil N/t\rceil}$ together with the caches $Z_{1,1}^u,\dots,Z_{1,t}^u$ must allow recovery of the files $W_1,\dots,W_N$. Let $Y_l$ denote the information leaked to the external adversary through the link connecting the mirror to its attached users, and define
$$\tilde{W} = \{W_1,\dots,W_N\},\quad \tilde{Y} = \{Y_1,\dots,Y_{\lceil N/t\rceil}\},\quad \tilde{Y}_{\setminus l} = \{Y_1,\dots,Y_{l-1},Y_{l+1},\dots,Y_{\lceil N/t\rceil}\},\quad \tilde{Z}^u = \{Z_{1,1}^u,\dots,Z_{1,t}^u\}.$$
The file recovery and security constraints can be stated as
$$H(\tilde{W} \,|\, \tilde{Y}, \tilde{Z}^u) \le \epsilon_1,$$
$$I(\tilde{W};\, Y_l) \le \epsilon_2, \qquad l = 1,\dots,\lceil N/t\rceil.$$
This is similar to the case of the single-layer secure scheme. Consider the information flow consisting of the multicast transmissions $Y_1,\dots,Y_{\lceil N/t\rceil}$ and the user caches $Z_{1,1}^u, Z_{1,2}^u, \dots, Z_{1,t}^u$ used to decode the files $W_1, W_2, \dots, W_N$. We have
$$\begin{aligned} NF &= H(\tilde{W}) = I(\tilde{W};\, \tilde{Y}, \tilde{Z}^u) + H(\tilde{W} \,|\, \tilde{Y}, \tilde{Z}^u)\\ &\le I(\tilde{W};\, \tilde{Y}, \tilde{Z}^u) + \epsilon_1\\ &= I(\tilde{W};\, Y_l) + I(\tilde{W};\, \tilde{Y}_{\setminus l}, \tilde{Z}^u \,|\, Y_l) + \epsilon_1\\ &\le I(\tilde{W};\, \tilde{Y}_{\setminus l}, \tilde{Z}^u \,|\, Y_l) + \epsilon_1 + \epsilon_2\\ &\le H(\tilde{Y}_{\setminus l}, \tilde{Z}^u) + \epsilon, \quad \text{where } \epsilon_1 + \epsilon_2 = \epsilon,\\ &\le \sum_{i=1,\,i\ne l}^{\lceil N/t\rceil} H(Y_i) + \sum_{j=1}^{t} H(Z_{1,j}^u) + \epsilon\\ &\le \left(\left\lceil\frac{N}{t}\right\rceil - 1\right)R_{S_2}F + tM_2F + \epsilon. \end{aligned}$$
Therefore,
$$N \le \left(\left\lceil\frac{N}{t}\right\rceil - 1\right)R_{S_2} + tM_2 + \frac{\epsilon}{F}.$$
Solving for $R_{S_2}$ and optimizing over all values of $t$, we obtain the following lower bound:
$$R_{S_2} \ge \max_{t\in\{1,2,\dots,K_2\}} \lim_{\epsilon\to 0} \frac{N - tM_2 - \epsilon/F}{\lceil N/t\rceil - 1} \ge \max_{t\in\{1,2,\dots,K_2\}} \frac{N - tM_2}{N/t} = \max_{t\in\{1,2,\dots,K_2\}} \frac{t(N - tM_2)}{N} \triangleq R_{S_2}^{lb}(M_1,M_2). \tag{43}$$
□
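Since both lower bounds are maxima of simple expressions over small integer ranges, they can be evaluated by brute force. The sketch below is our own illustration (the function names are assumptions); negative cut values are clipped at zero, because rates are nonnegative:

```python
# Brute-force evaluation (our own sketch) of the two lower bounds of Theorem 2.

def r_s1_lb(N, K1, K2, M1, M2):
    """Lower bound on R_S1: maximize, over cut sizes s1 and s2, the larger
    of the two expressions in Theorem 2 (clipped at 0)."""
    best = 0.0
    for s1 in range(1, K1 + 1):
        for s2 in range(1, K2 + 1):
            # second branch: s1*s2*(N - s1*M1 - s1*s2*M2) / N
            best = max(best, s1 * s2 * (N - s1 * M1 - s1 * s2 * M2) / N)
            # first branch is only meaningful when N - 2*s1*s2 > 0
            if N - 2 * s1 * s2 > 0:
                best = max(best, s1 * s2 * (1 - (s1 * M1 + s1 * s2 * (M2 - 1)) / (N - 2 * s1 * s2)))
    return best

def r_s2_lb(N, K2, M2):
    """Lower bound on R_S2: max over t of t*(N - t*M2)/N."""
    return max(t * (N - t * M2) / N for t in range(1, K2 + 1))
```

For example, with $N = 16$, $K_2 = 4$ and $M_2 = 1$, the maximizing cut is $t = 4$ and `r_s2_lb` returns $4\cdot(16-4)/16 = 3$.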

5. Gap Analysis

In this section, we analyze the gap between the secure achievable rates and the corresponding lower bounds.

5.1. $R_{S_1}(\alpha^*,\beta^*)$ against $R_{S_1}^{lb}(M_1,M_2)$

Theorem 3.
$R_{S_1}(\alpha^*,\beta^*)$ is within a constant multiplicative and additive gap of $R_{S_1}^{lb}(M_1,M_2)$. Specifically,
$$R_{S_1} \ge R_{S_1}^{lb}(M_1,M_2) \ge \frac{1}{96}R_{S_1}(\alpha^*,\beta^*) - 4.$$
Proof. 
The values of $\alpha^*$ and $\beta^*$, and consequently $R_{S_1}(\alpha^*,\beta^*)$, depend on the regime determined by $M_1$ and $M_2$. This makes it necessary to examine each regime separately. We begin with regime 1, assuming $N \ge K_1K_2$, $K_1 \ge 4$ and $K_2 \ge 4$.
Regime 1: $M_1 + M_2K_2 \ge N$ and $0 \le M_1 \le N/K_1$, where $\frac{M_1K_2}{N} \le M_1$ and $M_2 \ge 1$. Inequalities (25a) and (38) give the achievable secure rate $R_{S_1}(\alpha^*,\beta^*)$ and the lower bound $R_{S_1}^{lb}(M_1,M_2)$ for regime 1.
In order to make the margin of the gap more manageable, we further divide our discussions regarding this regime into three sub-regimes specified as follows:
$$\begin{aligned} 1.\text{A)}\quad & \frac{M_1K_2}{N} \le M_1 \le \frac{N}{2K_1}, \qquad & \frac{3N}{4K_2} \le M_2 \le \frac{N}{4},\\ 1.\text{B)}\quad & \frac{N}{2K_1} \le M_1 \le \frac{N}{K_1}, \qquad & \frac{3N}{4K_2} \le M_2 \le \frac{N}{4},\\ 1.\text{C)}\quad & \frac{M_1K_2}{N} \le M_1 \le \frac{N}{K_1}, \qquad & \frac{N}{4} \le M_2 \le N. \end{aligned}$$
For sub-regime 1.A, let us substitute $s_1 = 1$ and $s_2 = \left\lfloor\frac{N}{2M_2}\right\rfloor$ (a valid choice because $\lfloor z\rfloor \ge z/2$ for any $z \ge 1$) into (38), which gives
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{\left\lfloor\frac{N}{2M_2}\right\rfloor}{N}\left(N - M_1 - \left\lfloor\frac{N}{2M_2}\right\rfloor M_2\right) \overset{(a)}{\ge} \frac{1}{N}\cdot\frac{N}{4M_2}\left(N - \frac{N}{2K_1} - \frac{N}{2M_2}\cdot M_2\right) = \frac{N}{4M_2}\left(1 - \frac{1}{2K_1} - \frac{1}{2}\right) \overset{(b)}{\ge} \frac{N}{4M_2}\left(\frac{1}{2} - \frac{1}{8}\right) = \frac{3N}{32M_2} \overset{(c)}{\ge} \frac{3}{32}\cdot\frac{4}{5}\cdot\frac{N-1}{M_2-1} \ge \frac{3}{40}\min\left\{K_1K_2, \frac{N-1}{M_2-1}\right\}. \tag{44}$$
In deriving (44), we have used (a): $\lfloor z\rfloor \ge z/2$ for all $z \ge 1$; (b): $K_1 \ge 4$; and (c): $N \ge K_1K_2$. Combining (44) and (22a), we obtain
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{3}{40}R_{S_1}(\alpha^*,\beta^*).$$
For sub-regime 1.B, let
$$(s_1, s_2) = \begin{cases} \left(\left\lfloor\dfrac{N}{K_1M_1}\right\rfloor, \left\lfloor\dfrac{M_1}{M_2}\right\rfloor\right) & \text{for } M_1 \ge M_2,\\[6pt] \left(\left\lfloor\dfrac{N}{K_1M_1}\right\rfloor, 1\right) & \text{otherwise}. \end{cases}$$
Note that for $M_1 \ge M_2$, we have
$$1 = \left\lfloor\frac{N}{K_1}\cdot\frac{K_1}{N}\right\rfloor \le \left\lfloor\frac{N}{K_1M_1}\right\rfloor \le \frac{N}{K_1M_1} \le 2, \qquad 1 \le \left\lfloor\frac{M_1}{M_2}\right\rfloor \le \frac{M_1}{M_2} \le \frac{N/K_1}{3N/(4K_2)} = \frac{4K_2}{3K_1},$$
and for M 1 < M 2 , we have
$$1 = \left\lfloor\frac{N/4}{N/4}\right\rfloor \le \left\lfloor\frac{N}{4M_2}\right\rfloor \le \left\lfloor\frac{N}{K_1M_1}\right\rfloor \le \frac{N}{K_1M_1} \le 2.$$
Finally, substituting the chosen values of $s_1$ and $s_2$ into (38), we obtain
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{N}{4K_1M_2N}\left(N - \frac{N}{K_1M_1}\cdot M_1 - \frac{N}{4M_2}\cdot M_2\right) = \frac{N}{4K_1M_2}\left(1 - \frac{1}{K_1} - \frac{1}{4}\right) \ge \frac{N}{32M_2} \ge \frac{1}{32}\cdot\frac{4}{5}\cdot\frac{N-1}{M_2-1} \ge \frac{1}{40}\min\left\{K_1K_2, \frac{N-1}{M_2-1}\right\}. \tag{46}$$
Combining (46) and (22a), we obtain
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{1}{40}R_{S_1}(\alpha^*,\beta^*).$$
Similarly, in sub-regime 1.C, we have
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{N}{M_2} - 4 \ge \frac{N-1}{M_2-1} - 4 \ge \min\left\{K_1K_2, \frac{N-1}{M_2-1}\right\} - 4. \tag{48}$$
Combining (48) and (22a), we obtain
$$R_{S_1}^{lb}(M_1,M_2) \ge R_{S_1}(\alpha^*,\beta^*) - 4.$$
Our analysis of sub-regimes 1.A, 1.B and 1.C demonstrates that the lower bound $R_{S_1}^{lb}(M_1,M_2)$ is within a constant multiplicative and additive gap of the achievable rate throughout regime 1.
As for regime 2, we further divide it into the following sub-regimes.
$$\begin{aligned} (2.\text{A})\quad & \frac{M_1K_2}{M_1+M_2K_2} \le M_1 < \frac{N}{K_1}, \qquad & 1 \le M_2 < \frac{N}{K_1K_2},\\ (2.\text{B})\quad & \frac{M_1K_2}{M_1+M_2K_2} \le M_1 < \frac{N}{K_1}, \qquad & \frac{N}{K_1K_2} \le M_2 < \frac{N}{3K_2},\\ (2.\text{C})\quad & \frac{M_1K_2}{M_1+M_2K_2} \le M_1 < \frac{N}{K_1}, \qquad & \frac{N}{3K_2} \le M_2 < \frac{N}{4},\\ (2.\text{D})\quad & \frac{N}{K_1} \le M_1 \le N, \qquad & 1 \le M_2 < \frac{N-M_1}{2K_2},\\ (2.\text{E})\quad & \frac{N}{K_1} \le M_1 \le N, \qquad & \frac{N-M_1}{2K_2} \le M_2 < \frac{N-M_1}{K_2}. \end{aligned}$$
For sub-regime 2.A, we set $s_1 = \lfloor K_1/3\rfloor$ and $s_2 = K_2$. Since $K_1 \ge 4$, and thus $\lfloor K_1/3\rfloor \ge 1$, and $\lfloor z\rfloor \ge z/2$ for any $z \ge 1$, this is a valid choice of $s_1, s_2$. Substituting these values into (38), we obtain
$$\begin{aligned} R_{S_1}^{lb}(M_1,M_2) &\ge \frac{1}{N}\left\lfloor\frac{K_1}{3}\right\rfloor K_2\left(N - \left\lfloor\frac{K_1}{3}\right\rfloor M_1 - \left\lfloor\frac{K_1}{3}\right\rfloor K_2M_2\right)\\ &\overset{(a)}{\ge} \frac{1}{N}\cdot\frac{K_1K_2}{6}\left(N - \frac{M_1K_1}{3} - \frac{M_2K_1K_2}{3}\right)\\ &\overset{(b)}{\ge} \frac{1}{N}\cdot\frac{K_1K_2}{6}\left(N - \frac{N}{3} - \frac{N}{3}\right) = \frac{K_1K_2}{18}\\ &\ge \frac{1}{18}\min\left\{K_1K_2,\ \frac{M_1}{M_1+M_2K_2}\cdot\frac{K_2(N-M_1)}{M_1+(M_2-1)K_2} + \frac{M_2K_2}{M_1+M_2K_2}\cdot\frac{(N-1)K_2M_2}{(M_2-1)(M_1+M_2K_2)}\right\}, \end{aligned}$$
where (a) follows from $\lfloor z\rfloor \ge z/2$ for any $z \ge 1$, and (b) follows from $M_1 < N/K_1$ and $M_2 < N/(K_1K_2)$. Combining the result with (23a), we obtain
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{1}{18}R_{S_1}(\alpha^*,\beta^*).$$
The remaining sub-regimes of this regime can be analyzed in a similar manner, so we present only the values chosen for $s_1$ and $s_2$, as well as the final inequality, for each sub-regime. The values $\left(\left\lfloor\frac{N}{3M_2K_2}\right\rfloor, K_2\right)$, $(1, K_2)$, $\left(1, \left\lfloor\frac{N}{4M_2}\right\rfloor\right)$ and $\left(1, \left\lfloor\frac{N-M_1}{2M_2}\right\rfloor\right)$ are chosen for $(s_1,s_2)$ in sub-regimes 2.B, 2.C, 2.D and 2.E, respectively. Moreover, inequalities (52) through (55) give the gaps for these sub-regimes, respectively:
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{2}{135}R_{S_1}(\alpha^*,\beta^*), \tag{52}$$
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{3}{64}R_{S_1}(\alpha^*,\beta^*), \tag{53}$$
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{1}{32}R_{S_1}(\alpha^*,\beta^*), \tag{54}$$
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{1}{96}R_{S_1}(\alpha^*,\beta^*). \tag{55}$$
We also study regime 3 by dividing it into two sub-regimes as follows:
$$\begin{aligned} 3.\text{A)}\quad & \frac{N}{K_1} \le M_1 \le N, \qquad & \frac{N-M_1}{K_2} \le M_2 < \frac{N-M_1}{2},\\ 3.\text{B)}\quad & \frac{N}{K_1} \le M_1 \le N, \qquad & \frac{N-M_1}{2} \le M_2 \le N. \end{aligned}$$
The reasoning is similar to that of sub-regimes 2.A through 2.E, so we only state the chosen values of $(s_1,s_2)$ and the final inequality for each sub-regime. For sub-regime 3.A, we choose $s_1 = 1$ and $s_2 = \left\lfloor\frac{N-M_1}{2M_2}\right\rfloor$ and derive
$$R_{S_1}^{lb}(M_1,M_2) \ge \frac{1}{16}R_{S_1}(\alpha^*,\beta^*).$$
In sub-regime 3.B, we obtain
$$R_{S_1}^{lb}(M_1,M_2) \ge 0 = \frac{8}{3} - \frac{8}{3} = \frac{4}{3}\cdot 2\cdot 1 - \frac{8}{3} \overset{(a)}{\ge} \frac{K_1}{K_1-1}\cdot\frac{N-M_1}{M_2}\cdot\frac{N-M_1}{N} - \frac{8}{3} = \frac{K_1}{K_1-1}\cdot\frac{(N-M_1)^2}{M_2N} - \frac{8}{3} = \frac{M_2-1}{M_2}\cdot\frac{K_1}{K_1-1}\cdot\frac{(N-M_1)^2}{N(M_2-1)} - \frac{8}{3} \overset{(b)}{\ge} \frac{5}{6}R_{S_1}(\alpha^*,\beta^*) - \frac{8}{3},$$
where (a) follows from $\frac{N-M_1}{M_2} \le 2$, and (b) follows from $N \ge K_1K_2$, $K_1 \ge 4$, and (24a).
The results obtained for sub-regimes 3.A and 3.B show that the gap analysis for regime 3 is similar to that of regimes 1 and 2. Moreover, regimes 1, 2 and 3 cover the entire $(M_1,M_2)$ plane. We therefore conclude that in each sub-regime $R_{S_1}(\alpha^*,\beta^*)$ and $R_{S_1}^{lb}(M_1,M_2)$ are within a constant multiplicative and additive gap. The unified final result for all the studied regimes is
$$R_{S_1} \ge R_{S_1}^{lb}(M_1,M_2) \ge \frac{1}{96}R_{S_1}(\alpha^*,\beta^*) - 4.$$
□
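As an illustration (our own spot check, not part of the original proof), in regime 1 the achievable rate takes the simple form $\min\{K_1K_2, (N-1)/(M_2-1)\}$, so the $3/40$ multiplicative constant derived for sub-regime 1.A can be verified numerically against a brute-force evaluation of the Theorem 2 lower bound:

```python
# Our own numerical spot check of the regime-1 gap (function names assumed).

def r_s1_lb(N, K1, K2, M1, M2):
    """Brute-force evaluation of the Theorem 2 lower bound on R_S1."""
    best = 0.0
    for s1 in range(1, K1 + 1):
        for s2 in range(1, K2 + 1):
            best = max(best, s1 * s2 * (N - s1 * M1 - s1 * s2 * M2) / N)
            if N - 2 * s1 * s2 > 0:
                best = max(best, s1 * s2 * (1 - (s1 * M1 + s1 * s2 * (M2 - 1)) / (N - 2 * s1 * s2)))
    return best

def r_s1_regime1(N, K1, K2, M2):
    """Achievable secure rate in regime 1: min{K1*K2, (N-1)/(M2-1)}."""
    return min(K1 * K2, (N - 1) / (M2 - 1))

# Sample points inside sub-regime 1.A for N = 64, K1 = K2 = 4:
# the lower bound should be at least (3/40) of the achievable rate.
for (M1, M2) in [(8, 14), (4, 15), (8, 16)]:
    assert r_s1_lb(64, 4, 4, M1, M2) >= (3 / 40) * r_s1_regime1(64, 4, 4, M2)
```

The check passes comfortably at these points; for example, at $(M_1, M_2) = (8, 14)$ the lower bound evaluates to $0.875$ while $(3/40)$ of the achievable rate is about $0.36$.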

5.2. $R_{S_2}(\alpha^*,\beta^*)$ against $R_{S_2}^{lb}(M_1,M_2)$

Theorem 4.
$R_{S_2}(\alpha^*,\beta^*)$ is within a constant multiplicative and additive gap of $R_{S_2}^{lb}(M_1,M_2)$ for every possible value of $(M_1,M_2)$. Specifically,
$$R_{S_2} \ge R_{S_2}^{lb}(M_1,M_2) \ge \frac{1}{45}R_{S_2}(\alpha^*,\beta^*) - 16.$$
Proof. 
Let us focus on the case where $N \ge K_1K_2$, $K_1 \ge 4$ and $K_2 \ge 4$. Recall from (25b) that the achievable secure rate $R_{S_2}(\alpha^*,\beta^*)$ is upper bounded as
$$R_{S_2}(\alpha^*,\beta^*) \le K_1\cdot\min\left\{K_2, \frac{N-1}{M_2-1}\right\}.$$
Furthermore, the lower bound on R S 2 ( α * , β * ) can be obtained from (43) as
$$R_{S_2}^{lb}(M_1,M_2) = \max_{t\in\{1,2,\dots,K_2\}} \frac{t(N-tM_2)}{N}. \tag{60}$$
In the rest of our discussion we will partition the ( M 1 , M 2 ) plane by distinguishing the following two cases:
$$(1)\quad 1 \le M_2 \le \frac{N}{4}, \qquad\qquad (2)\quad \frac{N}{4} \le M_2 \le N.$$
We examine these two cases separately in order to control the margin of the gap.
(1) For $1 \le M_2 \le N/4$, let $t = \left\lfloor\frac{1}{3}\min\left\{K_2, \frac{N}{M_2}\right\}\right\rfloor$ in (60). This is a valid choice since $K_2 \ge 4$. Thus,
$$1 \le \left\lfloor\frac{1}{3}\min\left\{K_2, \frac{N}{M_2}\right\}\right\rfloor \le \frac{K_2}{3}.$$
By feeding the value of t into (60), it follows that
$$R_{S_2}^{lb}(M_1,M_2) \ge \frac{1}{N}\left\lfloor\frac{1}{3}\min\left\{K_2, \frac{N}{M_2}\right\}\right\rfloor\left(N - \left\lfloor\frac{1}{3}\min\left\{K_2, \frac{N}{M_2}\right\}\right\rfloor M_2\right).$$
Since $\lfloor z\rfloor \ge z/2$ for all $z \ge 1$, we can continue as follows:
$$R_{S_2}^{lb}(M_1,M_2) \ge \frac{1}{N}\cdot\frac{1}{6}\min\left\{K_2, \frac{N}{M_2}\right\}\left(N - \frac{N}{3}\right) = \frac{1}{9}\min\left\{K_2, \frac{N}{M_2}\right\}.$$
Because $N \ge K_1K_2$ and $K_1 \ge 4$, we have
$$R_{S_2}^{lb}(M_1,M_2) \ge \frac{1}{9}\cdot\frac{4}{5}\min\left\{K_2, \frac{N-1}{M_2-1}\right\} = \frac{4}{45}\min\left\{K_2, \frac{N-1}{M_2-1}\right\} \ge \frac{1}{45}\cdot K_1\cdot\min\left\{K_2, \frac{N-1}{M_2-1}\right\}. \tag{61}$$
From (61) and (25b) we obtain
$$R_{S_2}^{lb}(M_1,M_2) \ge \frac{1}{45}R_{S_2}(\alpha^*,\beta^*). \tag{62}$$
(2) For $N/4 \le M_2 \le N$, it holds that
$$R_{S_2}^{lb}(M_1,M_2) \ge 0 = \frac{K_1N}{M_2} - \frac{K_1N}{M_2} \ge K_1\cdot\min\left\{K_2, \frac{N}{M_2}\right\} - \frac{K_1N}{M_2} \overset{(K_1\ge 4)}{\ge} K_1\cdot\frac{3}{4}\min\left\{K_2, \frac{N-1}{M_2-1}\right\} - 16.$$
Therefore,
$$R_{S_2}^{lb}(M_1,M_2) \ge \frac{3}{4}R_{S_2}(\alpha^*,\beta^*) - 16. \tag{63}$$
The entire $(M_1,M_2)$ plane is clearly covered by cases (1) and (2). Thus, $R_{S_2}(\alpha^*,\beta^*)$ and $R_{S_2}^{lb}(M_1,M_2)$ are within constant additive and multiplicative gaps, as shown by (64), which is obtained by combining (62) and (63):
$$R_{S_2} \ge R_{S_2}^{lb}(M_1,M_2) \ge \frac{1}{45}R_{S_2}(\alpha^*,\beta^*) - 16. \tag{64}$$
□
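Theorem 4 can likewise be spot-checked numerically. The sketch below (our own check; function names are assumptions) sweeps $M_2$ for a system with $N = 900$ and $K_1 = K_2 = 30$, and verifies $R_{S_2}^{lb} \ge R_{S_2}(\alpha^*,\beta^*)/45 - 16$ using the upper bound (25b) in place of the exact achievable rate, which only makes the test stricter:

```python
# Our own numerical spot check of Theorem 4 (function names assumed).

def r_s2_lb(N, K2, M2):
    """Theorem 2 lower bound on R_S2: max over t of t*(N - t*M2)/N."""
    return max(t * (N - t * M2) / N for t in range(1, K2 + 1))

def r_s2_ub(N, K1, K2, M2):
    """Upper bound on the achievable secure rate R_S2, for M2 > 1."""
    return K1 * min(K2, (N - 1) / (M2 - 1))

N, K1, K2 = 900, 30, 30
for M2 in range(2, 101):
    # Combined multiplicative/additive gap of Theorem 4.
    assert r_s2_lb(N, K2, M2) >= r_s2_ub(N, K1, K2, M2) / 45 - 16
```

The tightest point of this sweep occurs near $M_2 \approx N/K_2$, where the lower bound is about $7.5$ against a threshold of $4$.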

6. Conclusions and Further Works

In this paper, we further developed the system model of coded caching by simultaneously assuming a hierarchical network and adversaries tapping the shared links during peak hours. We calculated the secure achievable rate of each link in the proposed scheme. The parameters of the previously proposed hierarchical scheme were revisited here to obtain approximate minimum achievable rates. Furthermore, we calculated lower bounds on the feasible rates and showed that the secure achievable rates are within a constant multiplicative and additive gap of the corresponding lower bounds. These results are similar to those obtained for the non-secure hierarchical scheme, but the cost of security appears in the form of larger constants. Our work can be continued by proposing and evaluating yet more complex system models, e.g., models in which the adversary has access to the shared links during the placement phase, or in which users may issue more than one request in the delivery phase.

Author Contributions

Formal analysis, B.Z., V.S., B.K.R., K.B. and T.K.; Writing—original draft, B.Z., V.S., B.K.R., K.B. and T.K. All authors have read and agreed to the published version of the manuscript.

Funding

TK was supported in part by JSPS Grant-in-Aids for Scientific Research (A) No.16H01705 and No.21H04879, for Scientific Research (B) No.17H01695, and for Challenging Exploratory Research No.19K22849.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers for helpful comments that improved the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867.
2. Sengupta, A.; Tandon, R.; Clancy, T.C. Decentralized caching with secure delivery. In Proceedings of the IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014.
3. Sengupta, A.; Tandon, R.; Clancy, T.C. Fundamental limits of caching with secure delivery. IEEE Trans. Inf. Forensics Secur. 2015, 10, 355–370.
4. Karamchandani, N.; Niesen, U.; Maddah-Ali, M.A.; Diggavi, S.N. Hierarchical coded caching. IEEE Trans. Inf. Theory 2016, 62, 3212–3229.
5. Bai, B.; Li, W.; Wang, L.; Zhang, G. Coded caching in fog-RAN: A b-matching approach. IEEE Trans. Commun. 2019, 67, 3753–3767.
6. Cao, H.; Yan, Q.; Tang, X. Reducing search complexity of coded caching by shrinking search space. IEEE Commun. Lett. 2019, 23, 568–571.
7. Kim, G.; Hong, B.; Choi, W.; Park, H. MDS-coded caching leveraged by coordinated multi-point transmission. IEEE Commun. Lett. 2018, 22, 1220–1223.
8. Luo, T.; Peleato, B. The transfer load-I/O trade-off for coded caching. IEEE Commun. Lett. 2018, 22, 1524–1527.
9. Zhang, J.; Lin, X.; Wang, X. Coded caching under arbitrary popularity distributions. IEEE Trans. Inf. Theory 2018, 64, 349–366.
10. Cao, Y.; Tao, M. Treating content delivery in multi-antenna coded caching as general message sets transmission: A DoF region perspective. IEEE Trans. Wirel. Commun. 2019, 18, 3129–3141.
11. Ngo, K.-H.; Yang, S.; Kobayashi, M. Scalable content delivery with coded caching in multi-antenna fading channels. IEEE Trans. Wirel. Commun. 2018, 17, 548–562.
12. Yang, Q.; Gündüz, D. Coded caching and content delivery with heterogeneous distortion requirements. IEEE Trans. Inf. Theory 2018, 64, 4347–4364.
13. Pedersen, J.; Amat, A.G.; Andriyanova, I.; Brännström, F. Optimizing MDS coded caching in wireless networks with device-to-device communication. IEEE Trans. Wirel. Commun. 2019, 18, 286–295.
14. Shariatpanahi, S.P.; Caire, G.; Khalaj, B.H. Physical-layer schemes for wireless coded caching. IEEE Trans. Inf. Theory 2019, 65, 2792–2807.
15. Tang, A.; Roy, S.; Wang, X. Coded caching for wireless backhaul networks with unequal link rates. IEEE Trans. Commun. 2018, 66, 1–13.
16. Panigrahi, B.; Shailendra, S.; Rath, H.K.; Simha, A. Universal caching model and Markov-based cache analysis for information centric networks. Photonic Netw. Commun. 2015, 30, 428–438.
17. Cheng, M.; Jiang, J.; Yan, Q.; Tang, X. Constructions of coded caching schemes with flexible memory size. IEEE Trans. Commun. 2019, 67, 4166–4176.
18. Shangguan, C.; Zhang, Y.; Ge, G. Centralized coded caching schemes: A hypergraph theoretical approach. IEEE Trans. Inf. Theory 2018, 64, 5755–5766.
19. Gómez-Vilardebó, J. A novel centralized coded caching scheme with coded prefetching. IEEE J. Sel. Areas Commun. 2018, 36, 1165–1175.
20. Wang, J.; Cheng, M.; Yan, X.; Tang, Q. Placement delivery array design for coded caching scheme in D2D networks. IEEE Trans. Commun. 2019, 67, 3388–3395.
21. Asghari, S.M.; Ouyang, Y.; Nayyar, A.; Avestimehr, A.S. An approximation algorithm for optimal clique cover delivery in coded caching. IEEE Trans. Commun. 2019, 67, 4683–4695.
22. Zheng, L.; Yan, Q.; Chen, Q.; Tang, X. Delivery design for coded caching over wireless multicast networks. IEEE Access 2019, 7, 72803–72817.
23. Zhang, K.; Tian, C. Fundamental limits of coded caching: From uncoded prefetching to coded prefetching. IEEE J. Sel. Areas Commun. 2018, 36, 1153–1164.
24. Bayat, M.; Mungara, R.K.; Caire, G. Achieving spatial scalability for coded caching via coded multipoint multicasting. IEEE Trans. Wirel. Commun. 2019, 18, 227–240.
25. Vettigli, G.; Ji, M.; Shanmugam, K.; Llorca, J.; Tulino, A.M.; Caire, G. Efficient algorithms for coded multicasting in heterogeneous caching networks. Entropy 2019, 21, 324.
26. Zhong, S.; Wang, X. Joint multicast and unicast beamforming for coded caching. IEEE Trans. Commun. 2018, 66, 3354–3367.
27. Combes, R.; Ghorbel, A.; Kobayashi, M.; Yang, S. Utility optimal scheduling for coded caching in general topologies. IEEE J. Sel. Areas Commun. 2018, 36, 1692–1705.
28. Karat, N.S.; Thomas, A.; Rajan, B.S. Error correction in coded caching with symmetric batch prefetching. IEEE Trans. Commun. 2019, 67, 7264–7274.
29. Pääkkönen, J.; Barreal, A.; Hollanti, C.; Tirkkonen, O. Coded caching clusters with device-to-device communications. IEEE Trans. Mob. Comput. 2019, 18, 264–275.
30. Ibrahim, A.A.; Zewail, A.M.; Yener, A. Coded caching for heterogeneous systems: An optimization perspective. IEEE Trans. Commun. 2019, 67, 5321–5335.
31. Zhang, J.; Lin, X.; Wang, C.-C.; Wang, X. Coded caching for files with distinct file sizes. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015; pp. 1686–1690.
32. Lampiris, E.; Elia, P. Adding transmitters dramatically boosts coded-caching gains for finite file sizes. IEEE J. Sel. Areas Commun. 2018, 36, 1176–1188.
33. Niesen, U.; Maddah-Ali, M.A. Coded caching with nonuniform demands. IEEE Trans. Inf. Theory 2017, 63, 1146–1158.
34. Ding, Y.; Wang, L.; Wu, H.; Shen, X.; Poor, H.V. Tradeoff of content sharing efficiency and secure transmission in coded caching systems. In Proceedings of the IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018.
35. Kiskani, M.K.; Sadjadpour, H.R. Secure coded caching in wireless ad hoc networks. In Proceedings of the International Conference on Computing, Networking and Communications (ICNC), Silicon Valley, CA, USA, 26–29 January 2018.
36. Zewail, A.A.; Yener, A. Coded caching for resolvable networks with security requirements. In Proceedings of the IEEE Conference on Communications and Network Security (CNS): The Workshop on Physical-Layer Methods for Wireless Security, Philadelphia, PA, USA, 17–19 October 2016.
37. Kamel, S.; Wigger, M.; Sarkiss, M. Decentralized coded caching for wiretap broadcast channels. In Proceedings of the IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018.
38. Suthan, I.; Chugh, C.H.H.; Krishnan, P. An improved secretive coded caching scheme exploiting common demands. In Proceedings of the IEEE Information Theory Workshop (ITW), Kaohsiung, Taiwan, 6–10 November 2017.
39. Hachem, J.; Karamchandani, N.; Diggavi, S.N. Coded caching for multi-level popularity and access. IEEE Trans. Inf. Theory 2017, 63, 3108–3141.
40. Lim, S.H.; Wang, C.; Gastpar, M. Information-theoretic caching: The multi-user case. IEEE Trans. Inf. Theory 2017, 63, 7018–7037.
41. Sengupta, A.; Tandon, R.; Clancy, T.C. Improved approximation of storage-rate tradeoff for caching via new outer bounds. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015.
42. Vijit, K.K.P.; Rai, B.K.; Jacob, T. Towards the exact rate memory tradeoff in coded caching. In Proceedings of the National Conference on Communications (NCC), Bangalore, India, 20–23 February 2019.
43. Wei, Y.; Ulukus, S. Coded caching with multiple file requests. In Proceedings of the 55th Annual Allerton Conference on Communication, Control and Computing (Allerton), Monticello, IL, USA, 3–6 October 2017.
44. Maddah-Ali, M.A.; Niesen, U. Decentralized coded caching attains order-optimal memory-rate tradeoff. IEEE/ACM Trans. Netw. 2015, 23, 1029–1040.
45. Wei, Y.; Ulukus, S. Novel decentralized coded caching through coded prefetching. In Proceedings of the 2017 IEEE Information Theory Workshop (ITW), Kaohsiung, Taiwan, 6–10 November 2017.
46. Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. 1949, 28, 656–715.
Figure 1. A hierarchical caching system with external adversaries acting over all shared links.
Figure 2. Different regimes of M 1 , M 2 for α * and β * .
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
