Article

Mixed Micro/Macro Cache for Device-to-Device Caching Systems in Multi-Operator Environments

Department of Information and Communication Engineering, Dongguk University, Seoul 04620, Republic of Korea
Sensors 2024, 24(14), 4518; https://doi.org/10.3390/s24144518
Submission received: 29 April 2024 / Revised: 8 July 2024 / Accepted: 10 July 2024 / Published: 12 July 2024
(This article belongs to the Section Sensor Networks)

Abstract

In a device-to-device (D2D) caching system that utilizes a device’s available storage space as a content cache, a device called a helper can provide content requested by neighboring devices, thereby reducing the burden on the wireless network. To enhance the efficiency of a limited-size cache, one can consider not only macro caching, which is content-based caching based on content popularity, but also micro caching, which is chunk-based sequential prefetching and stores content chunks slightly behind the one that a nearby device is currently viewing. If the content in a cache can be updated intermittently even during peak hours, the helper can improve the hit ratio by performing micro caching, which stores chunks that are expected to be requested by nearby devices in the near future. In this paper, we discuss the performance and effectiveness of micro D2D caching when there are multiple operators, the helpers can communicate with the devices of other operators, and the operators are under a low load independently of each other. We also discuss the ratio of micro caching in the cache area when the cache space is divided into macro and micro cache areas. Good performance can be achieved by using micro D2D caching in conjunction with macro D2D caching when macro caching alone does not provide sufficient performance, when users are likely to continue viewing the content they are currently viewing, when the content update cycle for the cache is short and a sufficient number of chunks can be updated for micro caching, and when there are multiple operators in the region.


1. Introduction

Wireless data requirements are increasing rapidly due to the rise of high-definition video streaming services over wireless networks [1,2,3]. To meet the desired data requirements of all users, it is necessary to increase the wireless capacity per unit area, and efforts are underway to install more cells and increase the wireless capacity of each cell [4,5]. In recent years, massive multiple-input multiple-output (MIMO) technology has increased spectrum efficiency and the development of technologies using high-frequency carriers has led to a rapid increase in the available wireless bandwidth [6,7,8]. For example, in terms of maximum data rates, 4G is targeted at 1 Gbps, 5G at 20 Gbps, and 6G aims at up to 1000 Gbps [9,10,11,12]. However, there are significant form factor and cost challenges in applying massive MIMO technology to base stations using low-frequency carriers, and base stations using high-frequency carriers have coverage issues, leaving many areas in the shadow of high-frequency carriers unless a sufficient number of base stations are installed [13,14,15]. In addition, the maximum wireless data rate may exceed the capacity of the existing wired backhaul, so unless the base stations are equipped with ultra-high speed wired backhaul, large capacity may not be available due to the backhaul bottleneck [1]. As a result, the wireless capacity available in different regions can vary widely. Also, as cells get smaller, the averaging effect disappears, and the number of devices and data requirements in each cell can vary significantly.
High-definition video streaming services account for a large and growing share of wireless data capacity. One of the other directions to address the exploding wireless data requirements is the use of video content caching systems [16,17,18,19]. A device-to-device (D2D) caching system, which uses a device’s available storage space as a cache, reduces the load on wireless networks by allowing devices to store content in their caches and deliver content to other devices using D2D communication when the content requested by a nearby device is in the cache [20,21,22,23].
D2D communication can use WiFi Direct or the 5G New Radio (NR) sidelink. WiFi Direct uses an unlicensed spectrum, and the NR sidelink can use an in-band licensed spectrum, an out-band dedicated spectrum, or an unlicensed spectrum [24,25,26,27]. When D2D communication does not use an in-band licensed spectrum, there is no interference with cellular communication.
In this paper, a device that stores content and delivers it to other devices using D2D communication is called a helper, and a device that receives content from a helper is called a user equipment (UE) [28]. A device can be both a UE that receives content from another device and a helper that provides content to other devices.
For a device to act as a helper, it must have sufficient storage space and a power budget that can sustain continuous D2D communication. Not all devices can act as helpers because typical devices may not have enough free storage space and may have power consumption, security, or copyright issues [28]. Therefore, the number of UEs may be larger than the number of helpers, and multiple UEs may be associated with a single helper, forming a star topology. If the D2D communication does not use the in-band spectrum, helpers can also provide content to UEs of other operators, thus increasing the utility of the caches when there are multiple operators [29,30].
A content cache is similar in concept to a computer’s cache. A computer’s cache is based on temporal and spatial locality, with the assumption that data used once is likely to be used again. Similarly, if it can be assumed that popular content will be used repeatedly by multiple users, offloading can be achieved by storing popular content in the cache [31,32,33]. Content need not be stored only before peak hours; even during peak hours, if there are intermittent periods when data demand is not high and additional data supply is possible, the cache can be updated to reflect real-time changes in popularity. In this paper, content-based caching based on content popularity is referred to as macro caching. However, macro caching alone may not be sufficient to achieve the desired offload performance, because users have different tastes and preferences, and the amount of actual video content is almost infinite, while the amount of free storage on a device may be small.
Various techniques are used to improve macro caching performance. If the mobility of users can be predicted by analyzing their movement patterns or schedule management, it is possible to predict which users will go to a congested cell and store content for them in advance [34,35,36,37]. If we know the social relationships between users, we can predict which UEs will be near a helper, and macro caching performance can be improved by storing content preferred by these UEs [38,39,40]. If users with similar interests are grouped together and encouraged to engage in D2D communication, the effectiveness of the cache can be increased because content stored for oneself can be used for other users with similar tastes [41,42].
When a device has a recommendation system, many users tend to choose from the recommended content [43,44]. The effectiveness of the cache can be increased if recommendations are made from cached content by jointly optimizing a caching system and a recommendation system [45,46,47], or if caching is performed taking into account the recommendation system [48].
Although these methods can increase the effectiveness of macro caching, it may be difficult to overcome the fundamental problem of macro D2D caching, which is that a device’s cache can only store a very small fraction of the total content. In this paper, we consider a caching method called micro caching, which has a different approach than macro caching. Micro caching is chunk-based sequential prefetching and stores content chunks slightly behind the one that a nearby device is currently viewing. We discuss how to improve the offload performance by allowing a helper to update its cache based on the content chunk viewed by nearby UEs instead of updating its cache based on content popularity, assuming that the helper’s operator is intermittently under low load even during peak hours. When D2D communication does not use an in-band spectrum, a helper can provide content to UEs from different operators, which can improve the performance of micro caching. We discuss the effectiveness of micro caching when there are multiple operators, devices from different operators are capable of D2D communication, and the operators are independently under low load.
The micro D2D caching method proposed in this paper does not conflict with or compare with existing macro D2D caching methods, but can be used in conjunction with them, and the performance can be further improved using various existing techniques. The contributions of this paper are as follows:
(1)
While most of the literature related to content caching uses content-based methods, this paper considers micro caching, which is chunk-based sequential prefetching.
(2)
While many studies improve performance by considering social relationships, mobility patterns, recommendations, etc. based on content popularity, this paper considers micro caching which does not consider content popularity.
(3)
This paper discusses how performance improvements can be achieved when the cache content can be updated intermittently during peak hours.
(4)
Micro caching is not always better than macro caching, so this paper considers mixed caching, where a cache space is divided into micro and macro cache areas. This paper also discusses under what conditions and in what proportions the two cache areas are divided.
(5)
This paper discusses how micro D2D caching can be utilized when there are multiple operators in a region and a helper can also serve content to UEs belonging to other operators.
This paper is organized as follows. Section 2 introduces micro D2D caching and describes how micro D2D caching works in single-operator and multi-operator environments. In Section 3, we discuss the proportion of the micro D2D cache area when a cache area is divided into micro and macro cache areas. Section 4 discusses the usefulness of micro D2D caching through numerical results and the percentage of the micro D2D cache area in various situations. Finally, conclusions are drawn in Section 5.

2. Micro D2D Caching

2.1. Micro D2D Caching

Computer caches can use the concept of sequential prefetching to increase efficiency, in addition to the notion that data used once is likely to be used again. In applications where data is consumed sequentially, such as stream filtering, data that is expected to be used in the future can be retrieved in advance, stored in the cache, and used when needed. The same concept can be used to cache video content.
When streaming videos on YouTube, Netflix, etc., users do not retrieve the video content all at once, but in small chunks or segments of a few seconds as they watch the video. The Hypertext Transfer Protocol (HTTP) server breaks the video content into a large number of small chunks and stores them, and the user requests and receives the necessary chunks from the HTTP server one at a time as the video plays [49,50]. For smooth video playback, users can do some sequential prefetching. However, this prefetching is intended to reduce the delay in fetching data or to resolve the mismatch between the playback speed and the data fetch speed, and it does little to reduce the network load. Prefetching is used on a limited basis because fetching future chunks that are not certain to be used can cause unnecessary network load. The method of prefetching and storing chunks slightly behind the content chunk being played by neighboring devices, when the wireless network is intermittently under low load during peak hours, is referred to in this paper as micro caching.
As shown in Figure 1, in this paper, the method of storing all the chunks of a video content is called macro caching, and the method of storing some video chunks slightly behind the chunk being played by a nearby UE is called micro caching. The fundamental difference between macro caching and micro caching is that macro caching is content-based caching (on the order of minutes or tens of minutes) based on content popularity, while micro caching is chunk-based caching (on the order of seconds) using sequential prefetching, as shown in Figure 2. Assuming that video content consists of a very large number of video chunks and that the video chunks are watched in sequence, it is possible to predict with relatively high accuracy which chunks will be needed as a user watches a video. Even when one video ends and a new video begins, users tend to watch content that is related to the current content, so it is possible to predict to some extent which chunks will be needed through content recommendation and content prediction algorithms [38,51,52]. The accuracy of sequential prefetching depends on how far into the future the prediction is made. Predicting chunks farther in the future can be less accurate because it increases the likelihood that a user will finish watching the current video and start watching another video, or will skip ahead or stop while watching. In addition, when predicting the distant future, D2D communication may not be possible because the UE may leave the helper’s coverage area.
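To make the sequential-prefetching idea concrete, the following minimal sketch (our illustration, not code from the paper; the function name and the `depth` parameter are hypothetical) computes the chunk indices a helper would prefetch for each associated UE, given the chunk each UE is currently viewing:

```python
def prefetch_targets(current_chunks: dict[int, int], depth: int) -> dict[int, list[int]]:
    """For each UE (keyed by id), return the indices of the chunks just after
    the one it is currently viewing; these are the micro-caching candidates."""
    return {ue: list(range(pos + 1, pos + 1 + depth))
            for ue, pos in current_chunks.items()}

# Example: UE 0 is viewing chunk 17 and UE 1 is viewing chunk 42; prefetch 3 chunks each.
print(prefetch_targets({0: 17, 1: 42}, depth=3))  # {0: [18, 19, 20], 1: [43, 44, 45]}
```

A larger `depth` raises the chance of a hit but, as noted above, predictions further into the future are less accurate.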
Micro caching can have very different characteristics from macro caching. In this paper, we investigate the characteristics of micro caching through a rather simple system model to compare it with macro caching.
A helper can update content when there are intermittent periods of low data demand even during peak hours. If a helper prefetches and stores chunks slightly behind the content chunk being played by the neighboring devices and delivers them when needed, a high hit ratio can be achieved, provided that the users continue to watch the videos and stay within the D2D communication range of the helper. In this paper, it is assumed that the number of helpers is small compared to the number of UEs, so that one helper can deliver content to multiple UEs. It is assumed that the network structure of D2D communication has a star topology and that not too many UEs are connected to a specific helper to avoid congestion in the helper.
If the D2D communication can be performed using an unlicensed band, the D2D communication can be performed between devices belonging to different operators [29,30]. Figure 3 illustrates the D2D communication considering multiple operators. If there are multiple operators in a certain region and devices can receive data from helpers of other operators through D2D communication, even if one operator is overloaded, a helper of another operator may be able to perform micro D2D caching for the UEs in the overloaded operator.

2.2. Caching Scenario

Depending on the mobility of devices, they can be categorized as fixed devices that are attached to a specific location, such as Internet of Things (IoT) devices, nomadic devices that move at very low or intermittent speeds, such as pedestrians, and mobile devices that move at high speeds, such as vehicles. When a helper is fixed, only the region in which it is located is considered for caching, while when a helper is mobile at high speed, micro caching can be performed only for UEs that are traveling together with the helper. If there is no UE moving with a mobile helper, micro caching may not be appropriate because UEs will leave the D2D communication range. To simplify the discussion, this paper does not explicitly consider fixed helpers or group mobility and assumes that helpers performing micro caching are nomadic. However, the discussion in this paper can easily be extended to fixed helpers located at a given location or mobile helpers traveling in groups with some UEs.
Macro caching is generally content-based rather than chunk-based. However, in order to compare the characteristics of micro caching with macro caching, this paper considers macro caching which also stores on a chunk basis. In this case, macro caching and micro caching are both chunk-based, and chunks can be stored based on content popularity in macro caching, while chunk-based sequential prefetching is performed in micro caching.
Micro caching is not always better than macro caching, so this paper considers mixed caching, where a cache space is divided into micro and macro cache areas. Consider a helper that divides the cache space into two parts: the macro cache area and the micro cache area, as shown in Figure 4. Before peak hours, the helper considers the popularity of the content and fills the macro cache area in order of decreasing popularity. If the wireless network has an intermittent low load even during peak hours, the macro cache area can be updated by taking into account real-time changes in popularity. For simplicity, this paper does not consider changes in popularity over time, and content is stored in the macro cache area before peak hours and remains in that state during peak hours, so the helper only updates the micro cache area during peak hours when the wireless network is under a low load.
We assume that helpers use the operator’s spectrum to store content, but the D2D communication between devices uses an unlicensed spectrum, allowing devices belonging to different operators to communicate. The number of helpers is small compared to the number of UEs, and multiple UEs are associated with a helper, forming a star topology. If there are multiple helpers near a UE, the UE is assumed to select and associate with one of the nearby helpers. In particular, when there are helpers from multiple operators in the vicinity, a UE is assumed to associate with one that is able to perform micro D2D caching by updating its cache. If there is a request for a content chunk from a UE and the helper has the content chunk in its cache, the helper will deliver it to the UE. If the helper does not have the content chunk, the UE requests the content chunk from the wireless network. If a UE moves out of the helper’s coverage area, or if the wireless network becomes too congested for the helper to update its cache, the UE can associate with another nearby helper.
Figure 5 illustrates a caching scenario. In a given region, operators and their helpers can be in one of the following states: off-peak time, (peak time) overload, and (peak time) low-load states. In the off-peak time state, a helper stores content chunks in the macro cache area, and in the low-load state, a helper periodically updates content chunks in the micro cache area. In the overload state, no content chunks can be stored or updated. In the peak time states, a UE establishes an association with a nearby helper, and the helper attempts to perform micro D2D caching for the associated UEs. When a UE requests a content chunk, it is delivered via D2D communication if the helper has the chunk in its cache. If the helper does not have the chunk, it is delivered via cellular communication from the base station. When a helper enters the overload state, UEs associated with the helper check for another helper in the low-load state nearby and attempt to establish associations with that helper if necessary.
Assume that the region under consideration has an area of $A_{total}$ and that there are $N_{operator}$ operators in this region. For simplicity, assume that the helpers of each operator are uniformly distributed and that a UE is associated with one helper at a time. If there are multiple helpers nearby, a UE chooses one of them and establishes an association with that helper. Assume that in the region under consideration, $N_{helper}$ helpers are independently and uniformly distributed for each operator and the D2D radius is $R_{D2D}$. Considering only one operator, the probability that a UE can connect to any helper is
$$P_{helper}^{single} = 1 - \left(1 - \frac{\pi R_{D2D}^{2}}{A_{total}}\right)^{N_{helper}}. \tag{1}$$
The probability that a UE can be associated with one of the helpers of the $N_{operator}$ operators is
$$P_{helper}^{multiple} = 1 - \left(1 - P_{helper}^{single}\right)^{N_{operator}}. \tag{2}$$
If we assume that the number of helpers is sufficiently large, Equation (1) may become close to 1, and Equation (2) will also converge to 1. In this paper, we assume that these values are close to 1 and do not specifically consider the probability that there is no helper around a UE. When the helper density is very low and there are no helpers around the UE, the effectiveness of D2D caching is greatly reduced, and both macro caching and micro caching become meaningless. In order to provide a simple comparison between micro caching and macro caching when D2D caching is effective, the cases where the UE cannot be associated with any helper are not explicitly considered in this paper.
Assuming that $N_{UE}$ UEs are independently and uniformly distributed for each operator in the considered area, the average number of UEs connected to a single helper is as follows:
$$N_{device}^{single} = P_{helper}^{single} \frac{N_{UE}}{N_{helper}} \approx \frac{N_{UE}}{N_{helper}}. \tag{3}$$
Assuming that there are $N_{operator}$ operators in the considered region and a UE associates with the helper of the operator with the lowest load, the average number of devices per helper is as follows:
$$N_{device}^{multiple} = N_{operator} N_{device}^{single} \approx \frac{N_{operator} N_{UE}}{N_{helper}}. \tag{4}$$
It is assumed that a UE periodically performs reassociation and, if possible, connects to a helper in the low-load state, and that each helper is tuned to be associated with $N_{device}^{multiple}$ or fewer UEs to avoid congestion.
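As a numerical sanity check on Equations (1)-(4), the sketch below (with illustrative parameter values of our choosing, not taken from the paper) evaluates the association probabilities and per-helper UE counts:

```python
import math

def association_stats(A_total, R_d2d, N_helper, N_operator, N_ue):
    # Eq. (1): probability that a UE is within D2D range of at least one
    # helper of a single operator, with helpers uniformly distributed.
    p_single = 1.0 - (1.0 - math.pi * R_d2d ** 2 / A_total) ** N_helper
    # Eq. (2): probability of reaching a helper of at least one of the operators.
    p_multiple = 1.0 - (1.0 - p_single) ** N_operator
    # Eq. (3): average number of UEs of one operator per helper.
    n_dev_single = p_single * N_ue / N_helper
    # Eq. (4): average number of UEs per helper over all operators.
    n_dev_multiple = N_operator * n_dev_single
    return p_single, p_multiple, n_dev_single, n_dev_multiple

# Illustrative values: 1 km^2 region, 50 m D2D radius, 100 helpers and
# 500 UEs per operator, 4 operators.
print(association_stats(1e6, 50.0, 100, 4, 500))
```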

2.3. Low-Load State

For the sake of simplicity, let’s assume that video chunks have the same data size and playback time and that the playback time of a chunk is $T_{chunk}$. The state of an operator or a helper can be divided into off-peak time and peak time states. Even during the peak time state, if the helper can perform the content update required for micro D2D caching in a period of less than $T_{period} T_{chunk}$, the state is called the (peak time) low-load state. Otherwise, the peak time state is called the (peak time) overload state. Assuming that $N_{device}^{multiple}$ activated UEs are connected to a helper, the UEs request $T_{period} N_{device}^{multiple}$ content chunks between content update periods.
Each helper can store up to $K_{cache}$ chunks in the cache, and in the low-load state, a helper can update up to $K_{store}$ chunks per update cycle. In the low-load state, a helper considers the chunks expected to be requested by the $N_{device}^{multiple}$ UEs and updates up to the following number of chunks:
$$K_{max}^{micro} = \min\left(K_{store},\; K_{cache},\; T_{period} N_{device}^{multiple}\right). \tag{5}$$
Each operator may have regions where sufficient data can be supplied, for example, because they fall within the coverage of high-frequency carriers or because the base stations use a large number of antennas, and there may be regions where sufficient data cannot be supplied. Assuming that, for an operator, the considered region is divided into $N_{region}$ regions according to the probability of going into the low-load state, let $P_i^{region}$ ($i = 1, \ldots, N_{region}$) denote the proportion of each region and $P_i^{underload}$ ($i = 1, \ldots, N_{region}$) denote the probability of being under a low load in each region. The probability of being under a low load for an operator is written as
$$P_{underload}^{single} = \sum_{i=1}^{N_{region}} P_i^{region} P_i^{underload}. \tag{6}$$
The probability of being under a low load can have a large value if a large percentage of the area is covered by high-frequency carriers. Suppose the region under consideration is divided into $N_{region}$ regions independently per operator and each region is under a low load independently per operator. The probability that any of the $N_{operator}$ operators will be under a low load at any given time at any given location is
$$P_{underload}^{multiple} = 1 - \left(1 - P_{underload}^{single}\right)^{N_{operator}}. \tag{7}$$
As the number of operators considered increases, the probability that any one of them will be under a low load increases.
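The quantities in Equations (5)-(7) can be computed directly; the sketch below reproduces the Section 4 setting (two region types, four operators) and is our illustration rather than code from the paper:

```python
def low_load_stats(K_store, K_cache, T_period, N_dev_multiple,
                   p_region, p_underload_region, N_operator):
    # Eq. (5): number of chunks a helper can refresh per update cycle.
    k_max_micro = min(K_store, K_cache, T_period * N_dev_multiple)
    # Eq. (6): per-operator low-load probability, averaged over region types.
    p_single = sum(pr * pu for pr, pu in zip(p_region, p_underload_region))
    # Eq. (7): probability that at least one operator is under a low load.
    p_multiple = 1.0 - (1.0 - p_single) ** N_operator
    return k_max_micro, p_single, p_multiple

print(low_load_stats(400, 400, 20, 20, [0.4, 0.6], [0.7, 0.1], 4))
# -> (400, 0.34, 0.8102...)
```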

3. Hit Ratio of D2D Caching

3.1. When to Use Macro Caching Only

Suppose the total number of content chunks is $K_{total}$ ($\gg K_{cache}$). When a content chunk is requested by a UE, the probability that the chunk is the $k$-th content chunk $C_k^{macro}$ is called $P_k^{macro}$, and $C_k^{macro}$ is sorted in descending order of $P_k^{macro}$. In other words,
$$P_k^{macro} \ge P_{k+1}^{macro}, \qquad k = 1, \ldots, K_{cache} - 1. \tag{8}$$
If the helper’s cache can store $K_{cache}$ chunks and the cache is used only for macro D2D caching, the hit ratio for a request for $N_{device}^{multiple} T_{period}$ content chunks is
$$H_{only}^{macro} = \frac{N_{device}^{multiple} T_{period} \sum_{k=1}^{K_{cache}} P_k^{macro}}{N_{device}^{multiple} T_{period}} = \sum_{k=1}^{K_{cache}} P_k^{macro}. \tag{9}$$
Because the total number of content chunks is nearly infinite while the cache on a device is not large, macro D2D caching alone may not produce satisfactory results.
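Under the Zipf popularity model used later in Section 4, Equation (9) can be evaluated directly; the following sketch (our illustration) shows how small the macro-only hit ratio can be when the cache holds only the $K_{cache}$ most popular of $K_{total}$ chunks:

```python
import numpy as np

def macro_hit_ratio(K_total, K_cache, zipf_coeff):
    """Eq. (9): hit ratio when the cache stores the K_cache most popular
    chunks and chunk popularity follows a Zipf law (already sorted)."""
    p = np.arange(1, K_total + 1, dtype=float) ** (-zipf_coeff)
    p /= p.sum()                    # normalize to a probability distribution
    return p[:K_cache].sum()

# Section 4 parameters: 1,000,000 chunks, cache of 400 chunks, coefficient 0.8.
print(macro_hit_ratio(1_000_000, 400, 0.8))
```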

3.2. When to Use Micro Caching Only

For the $T_{period} N_{device}^{multiple}$ chunks expected to be requested by the $N_{device}^{multiple}$ UEs connected to the helper, let $U_k$ be the user of the content chunk $C_k^{micro}$ and $T_k^{use}$ be the time at which it is used. Let $P_{view}(u,t)$ ($u = 1, \ldots, N_{device}^{multiple}$; $t = 1, \ldots, T_{period}$) be the probability that UE $u$ continues to view the content at time $t$. If a UE does not continue to view the content at time $t$, it is assumed that the UE is viewing other content. $P_{view}(u,t)$ can be related to the characteristics of the user, the characteristics of the content the user is viewing, the viewing time, etc., and can decrease as $t$ increases. Let $P_{coverage}(u,t)$ ($u = 1, \ldots, N_{device}^{multiple}$; $t = 1, \ldots, T_{period}$) be the probability that UE $u$ remains in the helper’s D2D coverage at time $t$. Assume that when a UE moves out of the helper’s coverage area, another UE moves into the area, so that there is a constant number of UEs in the coverage area. $P_{coverage}(u,t)$ is related to the relative velocity of the helper and the UE and can become smaller as $t$ increases. The probability that the $k$-th content chunk $C_k^{micro}$ will be used is written as
$$P_k^{micro} \triangleq P_{coverage}\left(U_k, T_k^{use}\right) P_{view}\left(U_k, T_k^{use}\right), \qquad k = 1, \ldots, T_{period} N_{device}^{multiple}. \tag{10}$$
Suppose $C_k^{micro}$ is sorted in descending order of $P_k^{micro}$; in other words,
$$P_k^{micro} \ge P_{k+1}^{micro}, \qquad k = 1, \ldots, T_{period} N_{device}^{multiple} - 1. \tag{11}$$
Consider the case where $K_{store}$ and $T_{period} N_{device}^{multiple}$ are greater than $K_{cache}$, i.e., $K_{max}^{micro} = K_{cache}$ in Equation (5). If the cache is used only for micro D2D caching, the hit ratio is written as
$$H_{only}^{micro} = \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=1}^{K_{cache}} P_k^{micro}. \tag{12}$$
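A direct evaluation of Equations (10)-(12) might look as follows; `p_view` and `p_coverage` are passed in as functions, and the names are ours:

```python
import numpy as np

def micro_hit_ratio(p_view, p_coverage, users, use_times, K_cache,
                    p_underload_multiple):
    """Eqs. (10)-(12): per-chunk use probabilities, sorted descending,
    summed over the K_cache best chunks, and scaled by the low-load
    probability. len(users) = T_period * N_device^multiple."""
    p_micro = np.array([p_coverage(u, t) * p_view(u, t)
                        for u, t in zip(users, use_times)])
    p_micro[::-1].sort()            # descending order, Eq. (11)
    return p_underload_multiple * p_micro[:K_cache].sum() / len(p_micro)

# Toy usage with flat probabilities (illustrative only): 2 UEs, 10 time slots.
print(micro_hit_ratio(lambda u, t: 0.8, lambda u, t: 0.9,
                      users=[0, 1] * 10, use_times=list(range(1, 11)) * 2,
                      K_cache=10, p_underload_multiple=0.81))
```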

3.3. When to Maximize Micro Caching

This time, consider the case where $K_{store}$ or $T_{period} N_{device}^{multiple}$ is smaller than $K_{cache}$, i.e., $K_{max}^{micro} < K_{cache}$. Even if the cache is used as much as possible for micro D2D caching, if the cache size $K_{cache}$ is larger than the maximum considered micro cache size $K_{max}^{micro}$, the remaining $K_{cache} - K_{max}^{micro}$ area can be used for macro D2D caching. When the $K_{max}^{micro}$ area is used for micro D2D caching, the hit ratio for that portion is
$$H_{max}^{micro} = \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=1}^{K_{max}^{micro}} P_k^{micro}. \tag{13}$$
The remaining $K_{cache} - K_{max}^{micro}$ area can be used for macro D2D caching, and the hit ratio of the macro D2D caching portion is
$$H_{min}^{macro} = \sum_{k=1}^{K_{cache} - K_{max}^{micro}} P_k^{macro}. \tag{14}$$
The overall hit ratio can be calculated by considering both the micro caching part and the macro caching part. If micro caching and macro caching are independent, the cache hit ratio is written as follows:
$$
\begin{aligned}
H_{max}^{mixed} &= \sum_{k=1}^{K_{max}^{micro}} \frac{P_{underload}^{multiple} P_k^{micro} + \left(1 - P_{underload}^{multiple} P_k^{micro}\right) H_{min}^{macro}}{T_{period} N_{device}^{multiple}} + \left(1 - \frac{K_{max}^{micro}}{T_{period} N_{device}^{multiple}}\right) H_{min}^{macro} \\
&= \sum_{k=1}^{K_{max}^{micro}} \frac{P_{underload}^{multiple} P_k^{micro} \left(1 - H_{min}^{macro}\right)}{T_{period} N_{device}^{multiple}} + H_{min}^{macro} \\
&= H_{max}^{micro} + H_{min}^{macro} - H_{max}^{micro} H_{min}^{macro}.
\end{aligned} \tag{15}
$$
If $K_{store}$ and $T_{period} N_{device}^{multiple}$ are greater than $K_{cache}$, then $K_{max}^{micro}$ becomes $K_{cache}$, so $H_{max}^{micro}$ in Equation (15) becomes $H_{only}^{micro}$ and $H_{min}^{macro}$ becomes zero. Therefore, Equation (15) can be considered as a general case including Equation (12).
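The independence combination in Equation (15), and later in Equation (18), is just the complement rule: a request is a miss only if it misses both cache areas. A one-line sketch:

```python
def combine_hit_ratios(h_micro: float, h_macro: float) -> float:
    """Eq. (15)/(18): overall hit ratio of two independent cache areas;
    equivalently 1 - (1 - h_micro) * (1 - h_macro)."""
    return h_micro + h_macro - h_micro * h_macro

print(combine_hit_ratios(0.30, 0.20))  # -> 0.44
```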

3.4. When to Use the Right Ratio

In the previous subsection, the helper cache space was used as much as possible as a micro cache area, and the remaining area was used as a macro cache area. However, micro caching is not always superior to macro caching, so it may be necessary to split the two areas appropriately. Figure 4 shows that the cache area is divided into a micro cache area and a macro cache area, where the micro cache area stores content chunks with high preference for micro caching and the macro cache area stores content chunks with high preference for macro caching. It may be more efficient to divide the cache area into appropriate proportions than to store only the highly favored content chunks for macro caching or, conversely, only the highly favored content chunks for micro caching.
Suppose a helper’s cache space is divided into micro and macro cache areas. Assuming that the helper is nomadic, it does not know in advance which region it will be in during peak hours, so the ratio of micro to macro cache areas must be determined in advance, regardless of the helper’s current location. Based on the pre-determined ratio, popular content is stored in the macro cache before peak hours, and chunks are updated for micro D2D caching when the helper’s operator is under low load.
Let $k_0$ ($0 \le k_0 \le K_{max}^{micro}$) be the number of chunks for micro caching and $K_{cache} - k_0$ be the number of chunks for macro caching among the $K_{cache}$ content chunks stored in the helper’s cache. The helper caches content chunks $C_k^{macro}$ for $k$ from 1 to $K_{cache} - k_0$ before peak hours. In this case, the hit ratio of the macro cache area alone is as follows:
$$H_{macro}(k_0) = \sum_{k=1}^{K_{cache} - k_0} P_k^{macro}, \qquad 0 \le k_0 \le K_{max}^{micro}. \tag{16}$$
The helper periodically updates the micro cache area when the operator is under low load during peak hours. The hit ratio of the micro cache area alone is as follows:
$$H_{micro}(k_0) = \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=1}^{k_0} P_k^{micro}, \qquad 0 \le k_0 \le K_{max}^{micro}. \tag{17}$$
When micro caching and macro caching are independent, the cache hit ratio is written as:
$$H_{mixed}(k_0) = H_{micro}(k_0) + H_{macro}(k_0) - H_{micro}(k_0) H_{macro}(k_0), \qquad 0 \le k_0 \le K_{max}^{micro}. \tag{18}$$
The optimal value of $k_0$ is determined such that Equation (18) is maximized, i.e.,
$$H_{opt}^{mixed} \triangleq \max_{k_0} H_{mixed}(k_0), \tag{19}$$
$$k_{opt} \triangleq \operatorname*{argmax}_{k_0} H_{mixed}(k_0). \tag{20}$$
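The optimum in Equations (19) and (20) can be found by evaluating Equation (18) for every admissible $k_0$; a minimal exhaustive-search sketch (variable names are ours) follows:

```python
import numpy as np

def optimal_split(p_micro, p_macro, K_cache, K_max_micro,
                  p_underload_multiple, n_requests):
    """Exhaustive search for k_opt over Eqs. (16)-(18); p_micro and p_macro
    are descending-sorted arrays (len(p_macro) >= K_cache) and
    n_requests = T_period * N_device^multiple."""
    cum_micro = np.concatenate(([0.0], np.cumsum(p_micro)))
    cum_macro = np.concatenate(([0.0], np.cumsum(p_macro)))
    best_k0, best_h = 0, 0.0
    for k0 in range(K_max_micro + 1):
        h_mic = p_underload_multiple * cum_micro[k0] / n_requests  # Eq. (17)
        h_mac = cum_macro[K_cache - k0]                            # Eq. (16)
        h_mix = h_mic + h_mac - h_mic * h_mac                      # Eq. (18)
        if h_mix > best_h:
            best_k0, best_h = k0, h_mix
    return best_k0, best_h
```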
By calculating all $k_0$ values from zero to $K_{max}^{micro}$, the optimal value can be found, as in the sketch above. However, let’s take a quick look at the properties of $k_{opt}$. If $H_{micro}(k_{opt})$ and $H_{macro}(k_{opt})$ are sufficiently small compared to 1, then $H_{micro}(k_{opt}) H_{macro}(k_{opt})$ can be negligible in Equation (18). Consider the approximate hit ratio:
$$\tilde{H}_{mixed}(k_0) \triangleq H_{micro}(k_0) + H_{macro}(k_0) = \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=1}^{k_0} P_k^{micro} + \sum_{k=1}^{K_{cache} - k_0} P_k^{macro}. \tag{21}$$
Suppose $K_1$ satisfies the following conditions:
$$P_{K_{cache}-K_1+1}^{macro} \le \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} P_{K_1}^{micro}, \qquad \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} P_{K_1+1}^{micro} \le P_{K_{cache}-K_1}^{macro}. \tag{22}$$
When $K_1 \le K_{max}^{micro}$, the following is satisfied for $k_0 < K_1$:
$$
\begin{aligned}
\tilde{H}_{mixed}(K_1) - \tilde{H}_{mixed}(k_0) &= \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=k_0+1}^{K_1} P_k^{micro} - \sum_{k=K_{cache}-K_1+1}^{K_{cache}-k_0} P_k^{macro} \\
&\ge \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \left(K_1 - k_0\right) P_{K_1}^{micro} - \left(K_1 - k_0\right) P_{K_{cache}-K_1+1}^{macro} \\
&= \left(K_1 - k_0\right) \left(\frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} P_{K_1}^{micro} - P_{K_{cache}-K_1+1}^{macro}\right) \ge 0.
\end{aligned} \tag{23}
$$
When $K_1 \le K_{max}^{micro}$, the following is satisfied for $k_0 > K_1$:
$$
\begin{aligned}
\tilde{H}_{mixed}(K_1) - \tilde{H}_{mixed}(k_0) &= \sum_{k=K_{cache}-k_0+1}^{K_{cache}-K_1} P_k^{macro} - \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=K_1+1}^{k_0} P_k^{micro} \\
&\ge \left(k_0 - K_1\right) P_{K_{cache}-K_1}^{macro} - \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \left(k_0 - K_1\right) P_{K_1+1}^{micro} \\
&= \left(k_0 - K_1\right) \left(P_{K_{cache}-K_1}^{macro} - \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} P_{K_1+1}^{micro}\right) \ge 0.
\end{aligned} \tag{24}
$$
When $K_1 > K_{max}^{micro}$, the following is satisfied for $k_0 < K_{max}^{micro}$:
$$
\begin{aligned}
\tilde{H}_{mixed}(K_{max}^{micro}) - \tilde{H}_{mixed}(k_0) &= \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=k_0+1}^{K_{max}^{micro}} P_k^{micro} - \sum_{k=K_{cache}-K_{max}^{micro}+1}^{K_{cache}-k_0} P_k^{macro} \\
&\ge \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \left(K_{max}^{micro} - k_0\right) P_{K_{max}^{micro}}^{micro} - \left(K_{max}^{micro} - k_0\right) P_{K_{cache}-K_{max}^{micro}+1}^{macro} \\
&= \left(K_{max}^{micro} - k_0\right) \left(\frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} P_{K_{max}^{micro}}^{micro} - P_{K_{cache}-K_{max}^{micro}+1}^{macro}\right) \ge 0.
\end{aligned} \tag{25}
$$
Therefore, the value of $k_0$ that maximizes $\tilde{H}_{mixed}(k_0)$ is
$$\tilde{k}_{opt} \triangleq \operatorname*{argmax}_{k_0} \tilde{H}_{mixed}(k_0) = \min\left(K_{max}^{micro}, K_1\right). \tag{26}$$
In Equations (10), (22) and (26), $\tilde{k}_{opt}$ may increase as the probability of being under a low load increases, the mobility of devices decreases, and the probability of continuing to view content increases. As the number of UEs decreases or the update cycle decreases, $\tilde{k}_{opt}$ can increase if $K_{max}^{micro}$ is much larger than $K_1$. If $K_{max}^{micro}$ is not sufficiently larger than $K_1$, then reducing the number of UEs or the update cycle may reduce $K_{max}^{micro}$, resulting in a smaller $\tilde{k}_{opt}$.
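Equation (26) suggests a much cheaper alternative to the exhaustive search: grow the micro area while the weighted marginal micro probability still exceeds the macro probability it displaces, per condition (22). A sketch with 0-indexed arrays (our illustration, under the same sortedness assumptions as above):

```python
def approx_optimal_split(p_micro, p_macro, K_cache, K_max_micro,
                         p_underload_multiple, n_requests):
    """Return k~_opt = min(K_max_micro, K_1) of Eq. (26), locating K_1 from
    condition (22); p_micro and p_macro are descending-sorted sequences
    with len(p_macro) >= K_cache."""
    w = p_underload_multiple / n_requests  # weight on micro probabilities
    k1 = 0
    # Extend while one more micro chunk beats the macro chunk it displaces.
    while (k1 < len(p_micro) and k1 < K_cache
           and w * p_micro[k1] >= p_macro[K_cache - k1 - 1]):
        k1 += 1
    return min(K_max_micro, k1)
```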
The performance gain of $H_{opt}^{mixed}$ over $H_{only}^{macro}$, which is measured as the difference between $H_{opt}^{mixed}$ and $H_{only}^{macro}$, is approximated as follows:
$$
\begin{aligned}
H_{opt}^{mixed} - H_{only}^{macro} &\approx \tilde{H}_{mixed}(\tilde{k}_{opt}) - H_{only}^{macro} = H_{micro}(\tilde{k}_{opt}) - \left(H_{only}^{macro} - H_{macro}(\tilde{k}_{opt})\right) \\
&= \frac{P_{underload}^{multiple}}{N_{device}^{multiple} T_{period}} \sum_{k=1}^{\min\left(K_{max}^{micro}, K_1\right)} P_k^{micro} - \sum_{k=K_{cache}-\min\left(K_{max}^{micro}, K_1\right)+1}^{K_{cache}} P_k^{macro}.
\end{aligned} \tag{27}
$$
The performance gain is determined by how much larger the hit ratio of the micro cache area is compared to the hit ratio when that area is used for macro caching. The performance gain grows as the probability of being under a low load increases, as the devices become less mobile, and as the video becomes more likely to be viewed continuously.

4. Numerical Results

In this section, we examine the hit ratio and the ratio of the micro cache area to the cache size when using mixed caching. In the simulation, the total number of content chunks, $K_{total}$, is 1,000,000; the macro preferences of the content chunks follow a Zipf distribution with Zipf coefficient $\lambda = 0.8$; and the number of operators, $N_{operator}$, is 4. Each operator has two types of regions, one covered by a high-frequency carrier and the other by a low-frequency carrier, and the proportions and low-load state probabilities of the regions are $P_1^{region} = 0.4$, $P_2^{region} = 0.6$, $P_1^{underload} = 0.7$, and $P_2^{underload} = 0.1$. Region 1 may be thought of as the high-frequency carrier region with a high probability of being under a low load, and Region 2 as the low-frequency carrier region with a low probability of being under a low load.
The number of devices of each operator in the D2D area, $N_{device}^{single}$, is five, and therefore the number of UEs supported by a helper, $N_{device}^{multiple}$, is limited to 20. The cache update cycle, $T_{period}$, is 20; the cache size of a helper, $K_{cache}$, is 400; and the number of chunks that can be updated per update cycle, $K_{store}$, is 400. Since $N_{device}^{multiple} T_{period}$, $K_{cache}$, and $K_{store}$ are all 400, $K_{max}^{micro}$ is also 400. The probability that UE $u$ is still viewing content at time $t$ is:
$$P_{view}(u,t) = \alpha_{view} \exp\left(-\beta_{view} t\right), \qquad u = 1, \ldots, N_{device}^{multiple},\; t = 1, \ldots, T_{period}, \tag{28}$$
where $\alpha_{view}$ is 0.8 and $\beta_{view}$ is between 0 and 0.2. Assuming that a UE is mobile with a probability of $P_{mobile}$, the probability that a UE will remain within the helper’s coverage, $P_{coverage}(u,t)$, is
$$P_{coverage}(u,t) = \begin{cases} 1 & \text{if } u \le N_{device}^{multiple}\left(1 - P_{mobile}\right), \\ \alpha_{coverage} \exp\left(-\beta_{coverage} t\right) & \text{otherwise}, \end{cases} \qquad u = 1, \ldots, N_{device}^{multiple},\; t = 1, \ldots, T_{period}, \tag{29}$$
where $P_{mobile}$ is 0.25, $\alpha_{coverage}$ is 0.8, and $\beta_{coverage}$ is 0.2. Each experiment shows two figures: the first figure shows the hit ratios $H_{only}^{macro}$ of macro caching and $H_{opt}^{mixed}$ of mixed caching, and the second figure shows the proportion of the micro cache area in the cache, $k_{opt}/K_{cache}$, and the approximate ratio $\tilde{k}_{opt}/K_{cache}$. In most cases in the experiments, $K_{max}^{micro} = K_{cache}$ and the maximum value of the micro caching ratio is one. The optimal results were found through an exhaustive search. The simulation parameters are summarized in Table 1.
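For reference, the viewing and coverage models of Equations (28) and (29) translate directly into code; the sketch below hard-codes the Section 4 parameters (with $\beta_{view}$ left as an argument, since it is swept in the figures):

```python
import math

ALPHA_VIEW = 0.8
ALPHA_COVERAGE, BETA_COVERAGE = 0.8, 0.2
P_MOBILE, N_DEVICE_MULTIPLE, T_PERIOD = 0.25, 20, 20

def p_view(u: int, t: int, beta_view: float) -> float:
    """Eq. (28): probability that UE u is still viewing the content at time t."""
    return ALPHA_VIEW * math.exp(-beta_view * t)

def p_coverage(u: int, t: int) -> float:
    """Eq. (29): the first (1 - P_mobile) share of UEs is nomadic and always
    stays in coverage; mobile UEs remain with exponentially decaying probability."""
    if u <= N_DEVICE_MULTIPLE * (1 - P_MOBILE):
        return 1.0
    return ALPHA_COVERAGE * math.exp(-BETA_COVERAGE * t)

# Example: mobile UE 20 at time 5 under beta_view = 0.1.
print(p_view(20, 5, 0.1), p_coverage(20, 5))
```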
Figure 6 and Figure 7 show the numerical results when the Zipf coefficient is 0.6, 0.8, and 1.0. The maximum number of chunks that can be micro D2D cached, $K_{max}^{micro}$, is 400, so the maximum micro D2D caching ratio is one. When the Zipf coefficient is large, a high hit ratio can be achieved even with a small cache size, but as the Zipf coefficient becomes smaller, the performance of macro caching deteriorates. By using a portion of the cache as a micro cache area, a significant performance improvement can be achieved, especially when the Zipf coefficient is not large and thus the performance of macro caching alone is not satisfactory with a limited cache size. As $\beta_{view}$ increases, the probability that content chunks stored in the micro cache will not be used increases, so the effectiveness of micro caching decreases and the proportion of the micro cache area also decreases. When the Zipf coefficient is very large and the cache hit ratio is high, $H_{micro}(k_{opt}) H_{macro}(k_{opt})$ cannot be ignored and $\tilde{k}_{opt}$ is somewhat different from $k_{opt}$. In other cases, however, $\tilde{k}_{opt}$ has a similar value to $k_{opt}$.
Figure 8 and Figure 9 show the experimental results when the cache size is varied to 200, 400, and 600. The number of chunks considered for micro caching is 400, so even if the cache size is increased to 600, the micro cache area cannot be increased beyond 400 and only the macro cache area becomes larger. In this case, the proportion of the micro cache area is less than 2/3. Figure 8 shows that the performance improvement from increasing the cache size from 400 to 600 is not significant. On the other hand, when the cache size is reduced from 400 to 200, the performance drops significantly because there is not enough space for micro caching. When only macro caching is used, it can be seen that the performance difference depending on the cache size is relatively small. When using micro caching, it is important to ensure that enough cache space is available for micro caching, and the performance improvement is not significant even if the cache size becomes much larger than the maximum micro cache size. In this simulation, there are no cases with very large Zipf coefficients, so $\tilde{k}_{opt}$ has a similar value to $k_{opt}$.
Figure 10 and Figure 11 show the results when the number of operators is varied to 1, 2, and 4. Since it is assumed that a UE is associated with only one helper at a time, the helpers do not cooperate to store different content chunks from each other, and there is no performance improvement for macro caching as the number of operators increases. In this paper, we do not consider that helpers cooperate with each other, but as the number of operators increases, the probability that at least one of the operators will be under a low load increases, thereby improving the performance of micro caching. When the number of operators is 1 and 2, the number of chunks for micro caching is 100 and 200, respectively, so the proportion of the micro cache area is less than 1/4 and 1/2, respectively. In these cases, the remaining area in the cache is used for macro caching to benefit from improved caching performance, but the performance of mixed caching deteriorates due to the low probability that at least one of the operators will be under a low load.
Figure 12 and Figure 13 show the results when the percentage of the high-frequency carrier region is varied to 0.2, 0.4, and 0.6. As the percentage of the high-frequency carrier region increases, cache updates become more frequent, the effectiveness of micro caching increases, and the proportion of the micro cache area also increases. For macro caching, we do not consider temporal changes in content preferences, so there is no performance improvement by cache updates, and therefore there is no performance difference depending on the proportion of the high-frequency carrier region. If we consider the changes in content preferences over time in macro caching, increasing the high-frequency carrier region will allow cache updates during peak hours, resulting in an improvement in macro caching performance.
Figure 14 and Figure 15 show the performance by changing the proportion of mobile UEs to 0, 0.25, and 0.5. Micro caching becomes less effective as more UEs become mobile, making it less likely that the UEs will stay within the helper’s coverage area. Since it is not effective to perform micro caching for mobile devices, the percentage of micro cache area in the cache decreases with a large proportion of mobile devices. To enable micro caching for fast moving UEs, it may be necessary to have a mobile helper that moves together with the moving UEs.
Figure 16 and Figure 17 show the results when the number of UEs in D2D coverage in each operator is varied to 3, 5, and 7. Since the number of operators is 4, the number of UEs that can be associated with a helper is 12, 20, and 28, respectively, and the number of chunks updated for micro caching is 240, 400, and 560, respectively. The cache size is 400, so the proportion of the micro cache area is less than 0.6 when the number of UEs per operator is 3. In this case, the micro cache area is small, and the remaining part can be used for macro caching, resulting in a slightly larger hit ratio, but the difference is not significant. On the other hand, if the number of UEs per operator is 7, the hit ratio drops significantly because the micro cache area is insufficient. For micro caching to work well, there must be enough cache area to store the chunks that the UEs are expected to request.
Figure 18 and Figure 19 show the results when the content update cycle is changed to 12, 20, and 28. As in the case of varying the number of UEs in Figure 16 and Figure 17, the number of chunks considered for micro caching is 240, 400, and 560, respectively. However, the hit ratios are somewhat different from the results in Figure 16, especially when $\beta_{view}$ increases. When the cache update cycle is shortened, the hit ratio is less affected, even with a large $\beta_{view}$. Conversely, as the cache update cycle increases, micro caching becomes less effective, and the percentage of the micro cache area decreases as $\beta_{view}$ increases. It can be seen that micro caching benefits from a shorter cache update cycle.
Figure 20 and Figure 21 show the results of varying the maximum number of chunks that can be updated per content update cycle to 200, 300, and 400. The maximum percentage of micro caching is 0.5 when the number of chunks that can be updated is 200 and 0.75 when it is 300. Since micro caching is less effective for mobile UEs anyway, there is no need to store all the content chunks that are expected to be requested by mobile UEs, and reducing the number of chunks that can be updated to 300 does not have a significant impact on the performance. However, if the number of chunks that can be updated becomes very small, the hit ratio will not be satisfactory. Being able to update a sufficient number of chunks when updating the cache is critical for micro caching to work well.

5. Conclusions

In this paper, we investigated the performance and effectiveness of micro D2D caching when there are multiple operators, devices can communicate with devices of other operators, and operators are under a low load independently of each other. Assuming that the cache can be updated intermittently even during peak hours and that the time for the operator to become under a low load is independent, a significant performance improvement can be achieved by micro D2D caching. In this paper, it is shown that using a mixture of micro and macro caching, by dividing helpers’ cache space into micro and macro cache areas, can result in a significant performance improvement over macro caching alone. In particular, the use of micro D2D caching can provide maximum benefit in the following cases:
  • When macro caching alone does not provide sufficient performance.
  • When there is sufficient storage space in a helper for chunk prefetching.
  • When there are multiple operators and the operators are under a low load independently of each other.
  • If there are enough high-frequency carrier regions that a helper’s cache can be updated intermittently even during peak hours.
  • If the proportion of mobile devices is small.
  • If users are likely to continue viewing the content they are currently viewing.
  • If the content update cycle is short.
  • If a sufficient number of chunks can be updated per content update cycle.
The mixed D2D caching method proposed in this paper, which is a combination of micro and macro caching, can be used in conjunction with conventional methods to improve the performance of macro D2D caching and can be further improved by using a combination of different techniques. For example, when predicting the mobility pattern of devices or considering recommendation systems, it is possible to benefit from both micro and macro caching, and further research is needed concerning how to maximize the synergies between these techniques.
For simplicity, this paper assumes that a UE is associated with a single helper at any given time. However, further performance improvements can be achieved if a UE can be associated with multiple nearby helpers, which can increase the effective storage space of the cooperative helpers. In the future, research is needed on how to store and update content chunks when multiple helpers cooperate to perform both micro caching and macro caching.

Funding

This work was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) Korea Government under Grant NRF-2022R1F1A1062987.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Sun, H.; Chen, Y.; Sha, K.; Huang, S.; Wang, X.; Shi, W. A Proactive On-Demand Content Placement Strategy in Edge Intelligent Gateways. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 2072–2090. [Google Scholar] [CrossRef]
  2. Xia, Q.; Jiao, Z.; Xu, Z. Online Learning Algorithms for Context-Aware Video Caching in D2D Edge Networks. IEEE Trans. Parallel Distrib. Syst. 2024, 35, 1–19. [Google Scholar] [CrossRef]
  3. Zheng, K.; Luo, R.; Liu, X.; Qiu, J.; Liu, J. Distributed DDPG-based Resource Allocation for Age of Information Minimization in Mobile Wireless-Powered Internet of Things. IEEE Internet Things J. 2024; Early Access. [Google Scholar]
  4. Han, M.; Lee, J.; Rim, M.; Kang, C.G. Dynamic Bandwidth Part Allocation in 5G Ultra Reliable Low Latency Communication for Unmanned Aerial Vehicles with High Data Rate Traffic. Sensors 2021, 21, 1308. [Google Scholar] [CrossRef] [PubMed]
  5. Liu, X.; Xu, B.; Zheng, K.; Zheng, H. Throughput Maximization of Wireless-Powered Communication Network with Mobile Access Points. IEEE Trans. Wirel. Commun. 2023, 22, 4401–4415. [Google Scholar] [CrossRef]
  6. Lee, J.; Han, M.; Rim, M.; Kang, C.G. 5G K-SimSys for Open/Modular/Flexible System-Level Simulation: Overview and Its Application to Evaluation of 5G Massive MIMO. IEEE Access 2021, 9, 94017–94052. [Google Scholar] [CrossRef]
  7. Busari, S.A.; Mumtaz, S.; Al-Rubaye, S.; Rodriguez, J. 5G Millimeter-Wave Mobile Broadband: Performance and Challenges. IEEE Commun. Mag. 2018, 56, 137–143. [Google Scholar] [CrossRef]
  8. Mahmood, A.; Kiah, M.L.M.; Azizul, Z.H.; Azzuhri, S.R. Analysis of Terahertz (THz) Frequency Propagation and Link Design for Federated Learning in 6G Wireless Systems. IEEE Access 2024, 12, 23782–23797. [Google Scholar] [CrossRef]
  9. Akyildiz, I.F.; Kak, A.; Nie, S. 6G and Beyond: The Future of Wireless Communications Systems. IEEE Access 2020, 8, 133995–134030. [Google Scholar] [CrossRef]
  10. Khan, L.U.; Yaqoob, I.; Imran, M.; Han, Z.; Hong, C.S. 6G Wireless Systems: A Vision, Architecture Elements, and Future Directions. IEEE Access 2020, 8, 147029–147044. [Google Scholar] [CrossRef]
  11. Wang, C.X.; You, X.; Gao, X.; Zhu, X.; Li, Z.; Wang, H.; Huang, Y.; Chen, Y.; Haas, H.; Thompson, J.S.; et al. On the Road to 6G: Visions, Requirements, Key Technologies, and Testbeds. IEEE Commun. Surv. Tutor. 2023, 25, 905–974. [Google Scholar] [CrossRef]
  12. Hassan, B.; Baig, S.; Asif, M. Key Technologies for Ultra-Reliable and Low-Latency Communication in 6G. IEEE Commun. Stand. Mag. 2021, 5, 106–113. [Google Scholar] [CrossRef]
  13. Chen, W.; Lin, X.; Lee, J.; Toskala, A.; Sun, S.; Chiasserini, C.F.; Liu, L. 5G-Advanced Toward 6G: Past, Present, and Future. IEEE J. Sel. Areas Commun. 2023, 41, 1592–1619. [Google Scholar] [CrossRef]
  14. Fang, X.; Feng, W.; Chen, Y.; Ge, N.; Zhang, Y. Joint Communication and Sensing Toward 6G: Models and Potential of Using MIMO. IEEE Internet Things J. 2023, 10, 4093–4116. [Google Scholar] [CrossRef]
  15. Bai, S.; Gao, Z.; Liao, X. Distributed Noncoherent Joint Transmission Based on Multi-Agent Reinforcement Learning for Dense Small Cell Networks. IEEE Trans. Commun. 2023, 71, 851–863. [Google Scholar] [CrossRef]
  16. Li, L.; Zhao, G.; Blum, R.S. A Survey of Caching Techniques in Cellular Networks: Research Issues and Challenges in Content Placement and Delivery Strategies. IEEE Commun. Surv. Tutor. 2018, 20, 1710–1732. [Google Scholar] [CrossRef]
  17. Cheng, G.; Jiang, C.; Yue, B.; Wang, R.; Alzahrani, B.; Zhang, Y. AI-Driven Proactive Content Caching for 6G. IEEE Wirel. Commun. 2023, 30, 180–188. [Google Scholar] [CrossRef]
  18. Zhang, T.; Fang, X.; Liu, Y.; Nallanathan, A. Content-Centric Mobile Edge Caching. IEEE Access 2020, 8, 11722–11731. [Google Scholar] [CrossRef]
  19. Dinh, N.T.; Kim, Y. An Efficient Distributed Content Store-Based Caching Policy for Information-Centric Networking. Sensors 2022, 22, 1577. [Google Scholar] [CrossRef]
  20. Maher, S.M.; Ebrahim, G.A.; Hosny, S.; Salah, M.M. A Cache-Enabled Device-to-Device Approach Based on Deep Learning. IEEE Access 2023, 11, 76953–76963. [Google Scholar] [CrossRef]
Figure 1. Micro caching.
Figure 2. Sequential prefetching.
Figure 3. D2D caching for multiple operators.
Figure 4. Macro and micro caching areas.
Figure 5. Caching scenario.
Figure 6. Hit ratio depending on the Zipf coefficient.
Figure 7. Proportion of micro cache area depending on the Zipf coefficient.
Figure 8. Hit ratio depending on the cache size.
Figure 9. Proportion of micro cache area depending on the cache size.
Figure 10. Hit ratio depending on the number of operators.
Figure 11. Proportion of micro cache area depending on the number of operators.
Figure 12. Hit ratio depending on the percentage of the high-frequency carrier region.
Figure 13. Proportion of micro cache area depending on the percentage of the high-frequency carrier region.
Figure 14. Hit ratio depending on the percentage of mobile devices.
Figure 15. Proportion of micro cache area depending on the percentage of mobile devices.
Figure 16. Hit ratio depending on the number of devices per operator.
Figure 17. Proportion of micro cache area depending on the number of devices per operator.
Figure 18. Hit ratio depending on the cache update cycle.
Figure 19. Proportion of micro cache area depending on the cache update cycle.
Figure 20. Hit ratio depending on the number of chunks that can be updated.
Figure 21. Proportion of micro cache area depending on the number of chunks that can be updated.
Table 1. Simulation parameters.

| Parameter | Value |
|---|---|
| Total number of content chunks ($K_{total}$) | 1,000,000 |
| Zipf coefficient for macro caching ($\lambda$) | 0.8 |
| Number of operators ($N_{operator}$) | 4 |
| Proportion of region 1 ($P_1^{region}$) | 0.4 |
| Proportion of region 2 ($P_2^{region}$) | 0.6 |
| Low-load state probability of region 1 ($P_1^{underload}$) | 0.7 |
| Low-load state probability of region 2 ($P_2^{underload}$) | 0.1 |
| Number of devices per operator in the D2D area ($N_{device}^{single}$) | 5 |
| Number of UEs supported by a helper ($N_{device}^{multiple}$) | 20 |
| Cache update cycle ($T_{period}$) | 20 |
| Cache size of a helper ($K_{cache}$) | 400 |
| Number of chunks that can be updated per content update cycle ($K_{store}$) | 400 |
| Probability that a UE is mobile ($P_{mobile}$) | 0.25 |
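For readers who wish to reproduce a comparable setup, the following minimal Python sketch shows one way to encode the Table 1 parameters and generate the Zipf popularity profile used for macro caching. The variable names, the dictionary layout, and the closed-form hit-ratio bound are illustrative assumptions, not the paper's actual simulator.

```python
# A minimal sketch (assumed structure, not the author's simulator) of the
# Table 1 parameters and the Zipf content-popularity model for macro caching.
import numpy as np

params = {
    "K_total": 1_000_000,        # total number of content chunks
    "zipf_lambda": 0.8,          # Zipf coefficient for macro caching
    "N_operator": 4,             # number of operators
    "P_region": (0.4, 0.6),      # proportions of regions 1 and 2
    "P_underload": (0.7, 0.1),   # low-load state probabilities of regions 1 and 2
    "N_device_single": 5,        # devices per operator in the D2D area
    "N_device_multiple": 20,     # UEs supported by a helper
    "T_period": 20,              # cache update cycle
    "K_cache": 400,              # cache size of a helper (chunks)
    "K_store": 400,              # chunks updatable per content update cycle
    "P_mobile": 0.25,            # probability that a UE is mobile
}

def zipf_popularity(k_total: int, lam: float) -> np.ndarray:
    """Normalized request probabilities p_k proportional to k^(-lam), k = 1..k_total."""
    ranks = np.arange(1, k_total + 1, dtype=float)
    weights = ranks ** (-lam)
    return weights / weights.sum()

popularity = zipf_popularity(params["K_total"], params["zipf_lambda"])

# Pure macro caching stores the K_cache most popular chunks, so its hit ratio
# is bounded by the probability mass at the head of the Zipf distribution.
macro_hit_bound = popularity[: params["K_cache"]].sum()
print(f"Hit-ratio upper bound of pure macro caching: {macro_hit_bound:.4f}")
```

With $\lambda = 0.8$ and a catalog of one million chunks, this head mass is small, which illustrates why macro caching alone can be insufficient and why devoting part of the cache to micro (chunk-based) caching can pay off.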