Article

Hybrid Cooperative Cache Based on Temporal Convolutional Networks in Vehicular Edge Network

Honghai Wu, Jichong Jin, Huahong Ma and Ling Xing *

School of Information Engineering, Henan University of Science and Technology, Luoyang 471000, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(10), 4619; https://doi.org/10.3390/s23104619
Submission received: 29 March 2023 / Revised: 3 May 2023 / Accepted: 8 May 2023 / Published: 10 May 2023
(This article belongs to the Section Vehicular Sensing)

Abstract

With the continuous development of intelligent vehicles, the demand for services has grown rapidly, leading to a sharp increase in wireless network traffic. Edge caching, thanks to its location advantage, can provide more efficient transmission services and has become an effective way to address this problem. However, mainstream caching solutions consider only content popularity when formulating caching strategies, which easily leads to cache redundancy between edge nodes and low caching efficiency. To solve these problems, we propose a hybrid content value cooperative caching strategy based on temporal convolutional networks (THCS), which enables cooperation between different edge nodes under limited cache resources, thereby optimizing cached content and reducing content delivery latency. Specifically, the strategy first obtains accurate content popularity through a temporal convolutional network (TCN), then comprehensively considers various factors to measure the hybrid content value (HCV) of cached content, and finally uses a dynamic programming algorithm to maximize the overall HCV and make optimal caching decisions. Simulation results show that, compared with the benchmark schemes, THCS improves the cache hit rate by 12.3% and reduces the content transmission delay by 16.7%.

1. Introduction

With the rapid development of the Internet of Things (IoT) [1] and Artificial Intelligence (AI) [2], intelligent connected vehicles (ICVs) are on the rise. Gartner predicts that in 2023 most vehicles on the road will be connected to the Internet and that connected vehicles will be the largest market for 5G by 2025. Traditional vehicular networks are extensions of mobile networks that serve a variety of applications such as autonomous driving, traffic management, and driving safety [3]. In addition, users of autonomous vehicles increasingly focus on entertainment services. Emerging services include video, news, road inquiries, and other entertainment and leisure information services, which improve the convenience and experience of users' travel. Because intelligent infotainment services are highly popular, such content is frequently requested, and the resulting volume of users and demand data challenges the performance of the wireless network, which may greatly degrade the user experience.
Edge caching [4,5] is an effective way to cope with these problems. By deploying caching and computing resources at edge servers, various computing and request services for ICVs can be satisfied at the network edge, giving users a better quality of experience (QoE). Vehicular edge caching networks have therefore attracted widespread attention in the academic community. For example, the authors in [6] explore proactive caching algorithms for high-capacity content. Other caching schemes use mobility prediction [7,8] and cooperative caching [9] to reduce network latency and increase network throughput. Most of these works model content popularity with the Zipf distribution. However, in real vehicular scenarios, where requested content exhibits temporal locality and variability, modeling popularity with the Zipf distribution alone is suboptimal. Therefore, building an efficient edge caching strategy for ICVs is challenging.
To solve the above problem, we formalize the cooperative caching process between edge nodes as an optimization problem and use our proposed cooperative caching algorithm to allocate edge cache resources so as to minimize content delivery latency. To this end, we propose a hybrid cooperative caching strategy based on temporal convolutional networks, which consists of three parts. First, based on vehicle mobility, we establish dynamic vehicle clusters, select the cluster head (CH), and obtain the optimal cache location through the CH. Second, we construct a model based on a temporal convolutional network (TCN) that improves the accuracy of content popularity prediction; TCNs offer high parallelism and fast convergence. Third, considering that other factors also affect the caching performance of edge nodes, we use the hybrid content value (HCV) to measure the value of cached content. Finally, a dynamic programming scheme is designed to optimize the strategy. Our main contributions are as follows:
  • We predict content popularity based on a TCN, and then propose a hybrid content value to measure the cached content;
  • A dynamic programming algorithm is proposed to make the best caching decision and maximize the overall HCV of the caching strategy;
  • Simulation experiments verify the excellent caching performance of the THCS algorithm.
The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 briefly introduces the system model. Section 4 details the design of the THCS scheme and then proposes the dynamic programming caching decision. Section 5 verifies the performance of the proposed THCS strategy by simulation, and Section 6 concludes the paper.

2. Related Works

A great deal of work has focused on caching policies. In this section, we review current caching research, covering both non-cooperative and cooperative caching.

2.1. Non-Cooperative Caching Strategy

In recent years, edge caching has played an important role in in-vehicle networks, and many existing works design caching strategies by exploiting various properties of the content. For example, Ostrovskaya et al. [10] proposed a caching strategy for vehicular named data networks that considers three metrics: the freshness of the content, its popularity, and the distance between the cache location and the content's current location. Yao et al. [11] uncovered the relevance of content and space by investigating various areas in a city. Edge caching strategies in the Internet of Vehicles (IoV) can also be designed by analyzing road traffic characteristics; models of vehicle mobility have been introduced into caching schemes that consider quality of service. AlNagar et al. [7] considered the effect of vehicle speed on the optimal caching decision and quantified the gain of proactively caching content in roadside units (RSUs) to find the optimal caching strategy. Graph neural networks [12] are deep learning methods that can process graph-structured data and represent many complex relationships. In the Internet of Vehicles, graph neural networks can use the dynamic topological relationships between vehicles to learn vehicle characteristics and behavior, enabling tasks such as prediction, control, and optimization as well as real-time information updating and transmission. Lian et al. [13] proposed a mobile edge caching strategy based on a spatio-temporal graph convolution model (STCC) by mining the spatio-temporal features of content popularity and then using heuristic strategies to minimize content access delays. Zhou et al. [14] designed a simplex-algorithm-based caching decision driven by the predictions of a Spatio-Temporal Graph Neural Network (STGNN) and performed edge computing on edge servers in the 6G Internet of Vehicles to achieve rapid responses to delay-sensitive tasks. Zhang et al. [15] developed a caching model that uses mobile vehicles at the edge to optimize network energy efficiency; through nonlinear techniques and Lyapunov optimization theory, they proposed an online caching scheme that minimizes network energy consumption, but the effect of cached file size was ignored. Kong et al. [16] proposed a secure query scheme for vehicular fog data dissemination, in which invertible matrices are used to construct data requests from different vehicles, breaking the link between a specific data request and its originating vehicle; this scheme significantly reduces communication overhead while ensuring the security goal. To handle the large number of short video files, Song et al. [17] proposed a caching scheme that considers the QoE of IoV users and optimizes the cached content by building a user-interest model. In [18], to serve delay-sensitive services, the authors jointly considered communication, caching, and computing to optimize the cache content retrieval delay, but they ignored the selection process for cached content. Although these studies have contributed to edge caching, most of them do not take the cooperation between edge nodes into account, which easily causes cache redundancy, wastes edge storage space, and makes it difficult to provide good caching services.

2.2. Cooperative Caching Strategy

As more caching strategies have been proposed, research that considers cooperation between different nodes has gradually become the mainstream direction. Amer et al. [19] proposed an inter-cluster cooperative caching architecture to improve communication quality and network performance and optimized the average network delay with a greedy algorithm. Wu et al. [20] proposed a user-centric content transmission scheme to cope with the growing number of video files in cellular networks, improving users' QoE by sharing cached content among cooperating users, but they ignored communication interference between heterogeneous users. Ma et al. [21] modeled pre-caching and task allocation as a Markov decision process and obtained the optimal allocation ratio among cooperative nodes through DDPG, thereby improving the data reception rate of mobile vehicles. Zhang et al. [22] responded to dynamic changes of the caching strategy in the IoV network environment through cooperative caching in two-layer heterogeneous networks; based on stochastic geometry, they established an analysis framework for a spatially cooperative caching strategy that effectively reduces the network load and provides better quality of service (QoS). Yu et al. [23] proposed a mobility-aware cooperative caching scheme based on federated learning, which uses deep learning models deployed on mobile vehicles via federated learning to predict regional content popularity; this protects user data security and improves caching efficiency, but the autonomous driving scenario they considered is not realistic. Zhu et al. [24] proposed multi-layer cooperative caching in an integrated satellite-terrestrial network to reduce communication delay and optimized cache performance through cooperative and non-cooperative caching strategies. Rottenstreich et al. [25] introduced dependency relationships between stored items in cooperative caches and formulated the problem of cooperative caching with dependencies, laying the foundation for this line of work. Chang et al. [26] studied cooperative edge caching in fog radio access networks (F-RANs), applying DDQN with cooperation among fog access points (F-APs) to formulate a globally optimal caching strategy. Yao et al. [27] formulated the cooperative caching problem in vehicular content-centric networks, comprehensively considering the future locations of edge nodes and the cache redundancy between cooperative nodes to design a replacement strategy. Yang et al. [28] studied multi-hop cooperative caching in ICN-based wireless sensor networks and made a trade-off between saving energy and reducing delay. Although the above works take node cooperation into account when optimizing the caching strategy, most of them only consider content popularity as the measure of cache content. First, this leads to cache redundancy among multiple nodes; second, considering popularity alone degrades overall cache performance, resulting in high transmission delay.

3. System Model

We introduce the hybrid cooperative caching model for the vehicular edge network based on temporal convolutional networks and then discuss the optimization of transmission latency in different scenarios.

3.1. Network Model

This paper considers a vehicular network environment based on an intelligent transportation system. As shown in Figure 1, the network consists of a central server, RSUs, and various smart vehicles, and any edge node may cache content. We assume that the central server stores all the available content in the network, that RSUs communicate with each other through optical fiber, and that edge servers for computing and caching are deployed at the RSUs, so each RSU can cache various content to meet the content service needs of vehicle users. Since we build dynamic vehicle clusters, when an intelligent vehicle issues a content request, the network system proceeds as follows:
First, the intelligent vehicle checks whether the requested content is cached by itself or by the cluster head of its cluster; if so, the content is obtained directly and the delay is generally ignored. Otherwise, the vehicle checks whether the requested content is cached in the local RSU; if so, the local RSU sends the content directly to the user. If not, the next step is to look for the requested content in nearby RSUs: the central server checks whether a cooperative RSU has cached the content, and if so, the cooperative RSU sends the requested content to the user via relaying by the local RSU. Otherwise, the requested content can only be obtained from the central server, which forwards it to the local RSU for delivery to the user. It is worth noting that obtaining requested content from the central server increases delivery latency and data traffic compared with obtaining it from an RSU or a local node. Nevertheless, the cache capacity of local and edge nodes is limited, and user preferences vary, so designing the optimal caching strategy is challenging.
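The tiered lookup above is straightforward to express in code. Below is a minimal Python sketch; the class and method names (ContentRequestRouter, lookup) are illustrative assumptions, not part of the paper.

```python
# A minimal sketch of the tiered content lookup described above.
from dataclasses import dataclass, field

@dataclass
class ContentRequestRouter:
    vehicle_cache: set = field(default_factory=set)      # contents cached by the vehicle/CH
    local_rsu_cache: set = field(default_factory=set)    # contents on the local RSU
    coop_rsu_caches: list = field(default_factory=list)  # caches of cooperative RSUs

    def lookup(self, content_id: str) -> str:
        """Return where the requested content is served from."""
        if content_id in self.vehicle_cache:
            return "vehicle/CH"                # delay generally ignored
        if content_id in self.local_rsu_cache:
            return "local RSU"                 # direct delivery
        for idx, cache in enumerate(self.coop_rsu_caches):
            if content_id in cache:
                return f"cooperative RSU {idx}"  # relayed via the local RSU
        return "central server"                # highest delivery latency

router = ContentRequestRouter({"e1"}, {"e2"}, [{"e3"}, {"e4"}])
print(router.lookup("e4"))  # -> cooperative RSU 1
```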

3.2. Problem Formulation

As shown in Figure 1, we assume that $M$ RSU servers are deployed in a cooperation area, where the set of RSU servers is $R = \{r_1, r_2, \ldots, r_M\}$, their cache capacities are $U = \{u_1, u_2, \ldots, u_M\}$, and the set of intelligent vehicles is $N$. Let $r_0$ denote the central server and $c_0$ its capacity; by default, the storage capacity of the central server is unlimited. Assume that the central server stores $F$ content items $E = \{e_1, e_2, \ldots, e_F\}$ with corresponding sizes $S = \{s_1, s_2, \ldots, s_F\}$. Each user individually requests content $e_i$ ($1 \le i \le F$) from the RSUs in the cooperation area. We use $P_{m,i,t}$ to denote the popularity of content $e_i$ on RSU $r_m$ at time $t$ and assume that content popularity is static during a given time interval. Cached information can be shared among collaborating RSUs. We construct a content cache matrix $X = [x_{m,i}]_{M \times F}$:

$$x_{m,i} = \begin{cases} 1, & \text{if content } e_i \text{ is cached on } r_m \\ 0, & \text{otherwise} \end{cases} \tag{1}$$
The user's request pattern for content does not change within time slot $t$. Based on this, we design a cooperative caching strategy to maximize user-perceived QoE. Since content delivery delay is the most critical factor affecting QoE, we choose it as the main evaluation indicator of the system. According to the network model of this paper, the content delivery delay is divided into three parts: the transmission delay from the central server to an RSU, the content sharing delay between RSUs, and the content acquisition delay from the local RSU to the vehicle. We assume that communication between vehicles and nodes uses dedicated short-range communication based on IEEE 802.11p and that communication between RSUs uses wired links. In short, the edge caching problem translates into minimizing the overall latency under the cache capacity constraint.
According to Shannon's theorem, the transmission rate between a cooperative vehicle in the cluster and the RSU is:

$$d_{r,n} = B \log_2\!\left(1 + \frac{P\, h_n(\tau(x, V_n))}{\sigma^2}\right) \tag{2}$$

where $B$ is the transmission bandwidth, $P$ is the transmission power, $\sigma^2$ is the Gaussian white noise power, $h_n(\tau(x, V_n))$ is the channel gain model of vehicle $n$ [29], and $x$ denotes a vehicle, an RSU, or the central server.
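For concreteness, the following sketch evaluates Equation (2) numerically; the parameter names and example values (bandwidth, power, gain, noise) are illustrative assumptions rather than the paper's settings.

```python
import math

def transmission_rate(bandwidth_hz: float, tx_power_w: float,
                      channel_gain: float, noise_power_w: float) -> float:
    """Shannon rate of Equation (2): B * log2(1 + P*h / sigma^2)."""
    snr = tx_power_w * channel_gain / noise_power_w
    return bandwidth_hz * math.log2(1.0 + snr)

# e.g., 10 MHz bandwidth, 0.1 W transmit power, gain 1e-6, noise 1e-9 W
print(transmission_rate(10e6, 0.1, 1e-6, 1e-9) / 1e6, "Mbit/s")  # ~66.6
```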
The transmission delay from RSU $r_n$ to RSU $r_m$ consists of two parts: (1) the transmission delay $d^{t}_{r_n,r_m}$ and (2) the propagation delay $d^{p}_{r_n,r_m}$. Therefore, the delay $d_{r_n,r_m,i}$ of delivering content $e_i$ from RSU $r_n$ to RSU $r_m$ is:

$$d_{r_n,r_m,i} = d^{t}_{r_n,r_m}\, s_i + d^{p}_{r_n,r_m} \tag{3}$$
When sending content $e_i$ to $r_m$, the cooperative RSU with the smallest delivery delay is:

$$r_\varphi = \arg\min_{r_n} \left\{ d_{r_n,r_m,i} \,\middle|\, r_n \in R \cup \{r_0\},\ x_{n,i} = 1 \right\} \tag{4}$$
The content delivery delay for RSU $r_m$ to acquire content $e_i$ is $d_{r_\varphi,r_m,i}$. If content $e_i$ is cached locally on RSU server $r_m$, then $r_\varphi = r_m$ and the delivery delay is $d_{r_\varphi,r_m,i} = 0$. If $e_i$ is not cached on $r_m$ but on another RSU within the collaboration scope, the cooperative RSU with the minimum delivery delay is selected; in this case $r_\varphi \in R$ and the delivery delay is $d_{r_\varphi,r_m,i}$. If $e_i$ is not cached on any RSU server within the collaboration scope, it must be transferred from the central server to the local RSU and distributed to the user; in this case $r_\varphi \notin R$ and the delivery delay is $d_{r_0,r_m,i}$.
Therefore, the optimization goal $P_1$ of cooperative caching is to minimize the average content delivery delay under limited RSU cache capacity:

$$\begin{aligned} P_1: \quad \min\ & \sum_{m=1}^{M} \sum_{i=1}^{F} P_{m,i,t}\, d_{r_\varphi,r_m,i} \\ \text{s.t.}\ & \sum_{i=1}^{F} x_{m,i}\, s_i \le u_m, \quad \forall r_m \in R, \\ & x_{m,i} \in \{0,1\}, \quad \forall r_m \in R,\ e_i \in E, \\ & x_{0,i} = 1, \quad \forall e_i \in E. \end{aligned} \tag{5}$$
The objective is to minimize the content delivery latency. The first constraint ensures that the total size of the content cached on RSU $r_m$ does not exceed its capacity. The last constraint indicates that the central server stores all of the content. Each content item is either cached in its entirety or not at all; partial caching is not allowed.

4. Cooperative Caching Decisions Based on Temporal Convolutional Networks and Hybrid Content Values

In this section, we first construct dynamic vehicle clusters and then propose a cooperative caching strategy based on temporal convolutional networks and hybrid content value (THCS) to solve the above problem.

4.1. Dynamic Cluster Construction

To deliver content efficiently and reduce the network traffic load, this paper divides all edge nodes into several clusters. To limit intra-cluster communication interference, cluster members cannot communicate with each other directly; all communication passes through the cluster head. Each cluster consists of a cluster head (CH) and cluster members (CM). Each cluster can have multiple CMs but only one CH, depending on the coverage of the CH and the size limit of the cluster. Each type of node in the cluster has different responsibilities: the CH is mainly responsible for maintaining the updated cache content information of all CMs, and all nodes in the cluster except the CH are CMs. CMs can cache content, and all CMs participate in the CH selection process.

4.1.1. Cluster Head Selection

This paper uses the weighted clustering algorithm (WCA) to select cluster heads, considering the factors combined in Equation (6); the node with the lowest score is selected as the CH. The factors are as follows:
(1) the reciprocal of the node degree, $1/o_\alpha$, of node $\alpha$;
(2) the average transmission time $T_\alpha$ from node $\alpha$ to all neighboring nodes within its range;
(3) the weighted sum $H_\alpha$ of the neighbor hop distances of node $\alpha$;
(4) the cumulative power $P_\alpha$ consumed by node $\alpha$ while acting as the CH.
If the score of node $\alpha$ satisfies $w_\alpha < w_{min}$, it is selected as a CH node and added to the CH list. The score is computed as follows:
$$w_\alpha = w_1 \frac{1/o_\alpha}{\bar{M}_o} + w_2 \frac{T_\alpha}{\bar{T}} + w_3 \frac{H_\alpha}{\bar{H}} + w_4 \frac{P_\alpha}{\bar{P}} \tag{6}$$

where $w_1, w_2, w_3, w_4$ are the weight factors of the different parameters, with

$$w_1 + w_2 + w_3 + w_4 = 1 \tag{7}$$
The degree $o_\alpha$ of node $\alpha$ in Equation (8) is expressed as:

$$o_\alpha = \frac{\mathrm{Neighbor}(\alpha)}{\sum_{q \in N,\, q \neq \alpha} \mathrm{dist}(\alpha, q)} \in (0, 1) \tag{8}$$
where $\mathrm{dist}(\alpha, q) = 1$ indicates that $q$ is within the transmission range of node $\alpha$, and otherwise it is 0. The average reciprocal node degree over all network nodes, $\bar{M}_o$, is:

$$\bar{M}_o = \frac{1}{N} \sum_{\alpha=1}^{N} \frac{1}{o_\alpha} \tag{9}$$
$T_\alpha$ is expressed as:

$$T_\alpha = \frac{1}{|\psi_\alpha|} \sum_{q \in \psi_\alpha} t_{q,\alpha} \tag{10}$$
where $\psi_\alpha$ is the set of nodes within the cluster range of node $\alpha$, and $t_{q,\alpha}$ is the transfer time from node $\alpha$ to a node $q$. Similarly, the average transmission time over all network nodes is:

$$\bar{T} = \frac{1}{N} \sum_{\alpha=1}^{N} T_\alpha \tag{11}$$
To reduce the communication load, the CH should establish communication with nodes that are few hops away. The value $H_\alpha$ is:

$$H_\alpha = \sum_{k \ge 1} \gamma_k \times k \times \mathrm{count}_k \tag{12}$$
where $\mathrm{count}_k$ is the number of $k$-hop neighbors of node $\alpha$ and $\gamma_k$ is the weight coefficient. The average weighted hop sum over all nodes in range is:

$$\bar{H} = \frac{1}{N} \sum_{i=1}^{N} H_i \tag{13}$$
$P_\alpha$ is mainly related to the maximum power value of the node; the CH is assumed to have a higher power value than the other cluster nodes. With this in mind, the average power of the network nodes is:

$$\bar{P} = \frac{1}{N} \sum_{\alpha=1}^{N} P_\alpha \tag{14}$$
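The following sketch scores candidate nodes with Equations (6)–(14) and picks the lowest-scoring node as the CH. It assumes the per-node metrics (degree, mean transfer time, hop-weighted sum, power) have already been measured; the node values and equal weights are illustrative assumptions.

```python
# A minimal sketch of CH selection with the WCA score of Equation (6).
def wca_score(o, T, H, P, means, weights=(0.25, 0.25, 0.25, 0.25)):
    """w_a = w1*(1/o_a)/M_o + w2*T_a/T_bar + w3*H_a/H_bar + w4*P_a/P_bar."""
    w1, w2, w3, w4 = weights
    M_o, T_bar, H_bar, P_bar = means
    return (w1 * (1.0 / o) / M_o + w2 * T / T_bar
            + w3 * H / H_bar + w4 * P / P_bar)

# node -> (degree o, avg transfer time T, hop-weighted sum H, power P)
nodes = {"a": (4, 0.8, 2.0, 1.2), "b": (6, 0.5, 1.5, 1.0), "c": (3, 1.1, 2.5, 1.4)}
n = len(nodes)
means = (sum(1.0 / v[0] for v in nodes.values()) / n,   # M_o   (Eq. 9)
         sum(v[1] for v in nodes.values()) / n,          # T_bar (Eq. 11)
         sum(v[2] for v in nodes.values()) / n,          # H_bar (Eq. 13)
         sum(v[3] for v in nodes.values()) / n)          # P_bar (Eq. 14)
ch = min(nodes, key=lambda name: wca_score(*nodes[name], means))  # lowest score wins
print("cluster head:", ch)
```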

4.1.2. Cluster Construction

After the CH is selected, the cluster is established from the CH's neighbor nodes. To limit the overhead of nodes within a cluster, the number of nodes in each cluster cannot exceed an upper limit $\beta$. When a node that does not belong to any cluster enters the transmission range of a CH and the cluster size has not reached $\beta$, the node is admitted as a CM by that CH.
When a dynamic cluster moves from one RSU in a cooperative region into the range of another RSU, the requested content cached at the previous RSU may become outdated, while the next RSU has not cached the requested content in advance for the arriving cluster. To avoid this inefficient use of caching resources, we use the mobility of the cluster to determine the next RSU it will reach and replace the cached content on that RSU in advance through cooperative transfer between RSUs. The constructed dynamic vehicle clusters thus help update cached content on RSUs, reducing content-fetching latency and improving user experience.

4.2. Content Popularity Prediction Based on Temporal Convolutional Networks

In previous work, most studies have assumed by default that content popularity obeys a Zipf distribution [30,31,32]. In practice, due to the high mobility of vehicles and time-varying user demand, content popularity characteristics are difficult to capture in a timely manner and can only be predicted from historical information. Thus, this paper proposes a TCN-based prediction method. The model consists of a content feature prediction module and a content popularity assessment module. After the content features are captured by the predictor module, the popularity assessment module computes a weighted average with past popularity data to predict the future popularity of the content, balancing long-term trends and short-term bursts.

4.2.1. Content Feature Predictor

The purpose of this module is to build an input–output mapping based on the TCN, predicting the popularity of future content from historical request information to help THCS make effective caching decisions. Specifically, the input vector of the TCN is a content popularity feature, and the expected output is the collection of content popularity features over the next K time periods. We use a TCN to predict content popularity because it casts popularity prediction as a time-series problem; moreover, compared with the commonly used recurrent neural network (RNN), the TCN has a parallel architecture and uses less memory. In addition, the TCN avoids exploding and vanishing gradients. The architecture is shown in Figure 2.
The TCN uses a one-dimensional fully convolutional network (FCN) structure for prediction, and because the convolution operations are causal, future information does not leak into the past. Next, we explain the convolution structure of the TCN in detail.
Dilated causal convolution: To establish long-term memory, causal convolution introduces dilation, which enlarges the receptive field without increasing the number of parameters. Intuitively, dilation generates new kernels that skip input positions at a fixed interval, and ordinary convolution is then performed with these kernels. For a filter $Y = \{y_1, y_2, \ldots, y_K\}$, the dilated causal convolution $b_t$ of the sequence $V = \{v_1, v_2, \ldots, v_T\}$ at element $v_t$ is:

$$b_t = \sum_{k=1}^{K} y_k\, v_{t-(K-k)d} \tag{15}$$

where $d$ is the dilation factor and $K$ is the filter size.
Residual block: After each weight-normalized dilated causal convolution, a ReLU activation introduces nonlinearity between layers, and a dropout term is added for regularization. In addition, to prevent the TCN output length from being inconsistent with that of the input $X$, a $1 \times 1$ convolution is applied before the output so that the TCN outputs the desired dimension.
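As an illustration, a minimal PyTorch sketch of one TCN residual block (dilated causal convolution with weight normalization, ReLU, dropout, and a 1 × 1 shortcut) is given below; the class name `TCNBlock` and the hyperparameters are illustrative, not the exact configuration of our model.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

class TCNBlock(nn.Module):
    def __init__(self, c_in, c_out, kernel_size=2, dilation=1, dropout=0.2):
        super().__init__()
        # Left-pad so the convolution is causal: output at t sees only inputs <= t.
        self.pad = (kernel_size - 1) * dilation
        self.conv = weight_norm(nn.Conv1d(c_in, c_out, kernel_size,
                                          dilation=dilation))
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        # 1x1 convolution keeps the residual addition dimensionally consistent.
        self.downsample = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else None

    def forward(self, x):
        y = nn.functional.pad(x, (self.pad, 0))   # causal left padding
        y = self.drop(self.relu(self.conv(y)))
        res = x if self.downsample is None else self.downsample(x)
        return self.relu(y + res)

x = torch.randn(8, 1, 20)                 # a batch of popularity histories
block = TCNBlock(1, 16, kernel_size=2, dilation=4)
print(block(x).shape)                     # torch.Size([8, 16, 20])
```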

4.2.2. Future Content Popularity Assessment

To balance short-term and long-term memory in the popularity estimate $\bar{P}_{m,i,T+1}$ output for the next time slot, we weight the historical popularity $P_{m,i,t}$ against the predicted short-term future popularity $\hat{P}_{m,i,T+1}$:

$$\bar{P}_{m,i,T+1} = (1-\lambda)\, \hat{P}_{m,i,T+1} + \sum_{t=T-n+1}^{T} \lambda^{T-t+1}\, P_{m,i,t} \tag{16}$$

where $\lambda$ ($0 < \lambda < 1$) is the weighting ratio between new and historical data (a larger $\lambda$ gives historical popularity more weight), and $n$ is the number of historical samples considered.
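A minimal sketch of the weighting in Equation (16) is given below; the variable names and sample values are illustrative.

```python
# Blend the TCN's short-term prediction with exponentially discounted history.
def blended_popularity(pred_next: float, history: list, lam: float = 0.3) -> float:
    """history = [P_{T-n+1}, ..., P_T]; newer samples get larger weights."""
    # history[-1] is P_T with weight lam^1, history[-2] is P_{T-1} with lam^2, ...
    weighted_history = sum(lam ** (k + 1) * p
                           for k, p in enumerate(reversed(history)))
    return (1 - lam) * pred_next + weighted_history

print(blended_popularity(pred_next=0.42, history=[0.30, 0.35, 0.40], lam=0.3))
```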

4.3. Hybrid Content Value (HCV)

The traditional popularity-based caching strategy improves cache performance by caching the most popular content in RSUs. Most such strategies ignore cooperation between RSUs, so multiple RSUs cache the same content, which easily causes cache redundancy. Moreover, considering only popularity while ignoring other factors that affect caching degrades overall performance.
Facing this problem, we propose a new metric named hybrid content value. From the perspective of cooperation, HCV combines factors such as content popularity, content size, and transmission delay. When users request content $e_i$ from the cooperative RSUs, the average delivery delay perceived by users requesting $e_i$ is:

$$\sum_{n=1}^{M} d_{r_\varphi,r_n,i}\, P_{n,i,t} \tag{17}$$
The HCV $Z_t(r_m, e_i)$ weighs the value of content $e_i$ on $r_m$ in the following two cases.
Case 1: RSU $r_m$ does not cache content $e_i$ (i.e., $x_{m,i} = 0$). The average delay of requesting $e_i$ after $r_m$ caches it would be:

$$\sum_{n=1}^{M} d_{r_m,r_n,i}\, P_{n,i,t} \tag{18}$$

Thus, if $r_m$ caches content $e_i$, the HCV gain $Z_t(r_m, e_i)$ is:

$$Z_t(r_m, e_i) = \sum_{n=1}^{M} d_{r_\varphi,r_n,i}\, P_{n,i,t} - \sum_{n=1}^{M} d_{r_m,r_n,i}\, P_{n,i,t} = \sum_{n=1}^{M} \max\!\left(d_{r_\varphi,r_n,i} - d_{r_m,r_n,i},\, 0\right) P_{n,i,t} \tag{19}$$
Case 2: RSU $r_m$ does cache content $e_i$ (i.e., $x_{m,i} = 1$). The average latency of user requests for $e_i$ after $r_m$ removes it would be:

$$\sum_{n=1}^{M} d_{r_m,r_n,i}\, P_{n,i,t} \tag{20}$$

Thus, if $r_m$ removes content $e_i$, the HCV loss $Z_t(r_m, e_i)$ is:

$$Z_t(r_m, e_i) = \sum_{n=1}^{M} d_{r_m,r_n,i}\, P_{n,i,t} - \sum_{n=1}^{M} d_{r_\varphi,r_n,i}\, P_{n,i,t} = \sum_{n=1}^{M} \max\!\left(d_{r_m,r_n,i} - d_{r_\varphi,r_n,i},\, 0\right) P_{n,i,t} \tag{21}$$
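The two cases can be computed directly from the inter-RSU delay matrix and the popularity vector. The sketch below assumes these inputs are given; `d_best` denotes the current minimum delivery delay $d_{r_\varphi,r_n,i}$ seen by each RSU (for the loss case, recomputed with $r_m$ excluded), and all names and numbers are illustrative.

```python
# A minimal sketch of the HCV gain/loss of Equations (19) and (21).
def hcv_gain(m: int, d: list, d_best: list, pop: list) -> float:
    """Case 1 (x_{m,i}=0): delay saved across RSUs if r_m caches e_i."""
    return sum(max(d_best[n] - d[m][n], 0.0) * pop[n] for n in range(len(pop)))

def hcv_loss(m: int, d: list, d_best_wo_m: list, pop: list) -> float:
    """Case 2 (x_{m,i}=1): delay added across RSUs if r_m evicts e_i."""
    return sum(max(d[m][n] - d_best_wo_m[n], 0.0) * pop[n] for n in range(len(pop)))

d = [[0.0, 3.0], [3.0, 0.0]]   # inter-RSU delivery delays (ms)
print(hcv_gain(0, d, d_best=[12.0, 12.0], pop=[0.6, 0.4]))  # -> 10.8
```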

4.4. Decision Making Based on Dynamic Planning

To obtain the overall optimal HCV, we build problem $P_2$ on the basis of $P_1$:

$$\begin{aligned} P_2: \quad \max\ & \sum_{m=1}^{M} \sum_{i=1}^{k} Z_t(r_m, e_i)\, x_{m,i} \\ \text{s.t.}\ & \sum_{i=1}^{F} x_{m,i}\, s_i \le u_m, \quad \forall r_m \in R, \\ & x_{m,i} \in \{0,1\}, \quad \forall r_m \in R,\ e_i \in E. \end{aligned} \tag{22}$$
For the proposed problem $P_2$, we use a dynamic programming (DP) algorithm that divides the caching process into stages. In detail, we use $\{st[1], \ldots, st[i], \ldots, st[k]\}$ to denote the stages, the caching decision of each stage is denoted by $z_i$, and $R_{i,j}$ denotes the state of stage $st[i]$, as follows:
$$R_{i,j} = \max \sum_{k=1}^{i} Z_t(r_m, e_k)\, W_k, \quad \text{s.t.}\ \sum_{k=1}^{i} W_k\, s_k \le j \le u_m \tag{23}$$

where $W_k$ determines whether content $e_k$ is cached in stage $st[i]$.
When deciding whether to cache $e_i$ at stage $st[i]$, THCS checks whether the current cache space can accommodate content $e_i$; the optimal value of $R_{i,j}$ is:

$$R_{i,j} = \begin{cases} \max\{R_{i-1,j},\ R_{i-1,\,j-s_i} + Z_t(r_m, e_i)\}, & s_i \le j \\ R_{i-1,j}, & s_i > j \end{cases} \tag{24}$$

When the current cache space satisfies $s_i \le j$: if $r_m$ caches content $e_i$, the remaining space becomes $j - s_i$ and $R_{i,j}$ becomes $R_{i-1,\,j-s_i} + Z_t(r_m, e_i)$; otherwise $R_{i,j}$ remains $R_{i-1,j}$. When the remaining space satisfies $s_i > j$, $R_{i,j}$ is simply $R_{i-1,j}$. From this recursion, the optimal decision process $Z$ can be obtained. Algorithm 1 summarizes the dynamic programming procedure.
Algorithm 1 HCV-based dynamic programming caching algorithm.
Input: cache space $u_m$, remaining cache space $j$, total number of cache contents $k$, $a_i = (eid_i, sid_i, rid_i)$.
Output: decision $Z$.
1:  for $i = 1$ to $k$ do
2:      for $j = 1$ to $u_m$ do
3:          if $a_{i-1,1} > j$ then
4:              $R_{i,j} = R_{i-1,j}$
5:          else
6:              $R_{i,j} = \max\{R_{i-1,j},\ R_{i-1,\,j-a_{i-1,1}} + a_{i-1,2}\}$
7:          end if
8:      end for
9:  end for
10: obtain the cache parameters $(R, Z, k, u_m)$
11: update $Z$
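A runnable Python version of Algorithm 1 is sketched below: a standard 0/1-knapsack DP over (content index, used capacity) that maximizes total HCV under the capacity $u_m$ and then backtracks to recover the decision $Z$; the item sizes and values are illustrative.

```python
def hcv_dp(sizes, values, capacity):
    k = len(sizes)
    R = [[0.0] * (capacity + 1) for _ in range(k + 1)]
    for i in range(1, k + 1):                       # stage st[i]
        for j in range(capacity + 1):               # remaining space j
            if sizes[i - 1] > j:                    # e_i does not fit
                R[i][j] = R[i - 1][j]
            else:                                   # keep the better option
                R[i][j] = max(R[i - 1][j],
                              R[i - 1][j - sizes[i - 1]] + values[i - 1])
    Z, j = [0] * k, capacity                        # backtrack to recover Z
    for i in range(k, 0, -1):
        if R[i][j] != R[i - 1][j]:                  # e_i was cached
            Z[i - 1] = 1
            j -= sizes[i - 1]
    return R[k][capacity], Z

best_hcv, Z = hcv_dp(sizes=[1, 3, 5, 7], values=[4.0, 9.0, 10.0, 12.0], capacity=8)
print(best_hcv, Z)   # 19.0 [0, 1, 1, 0]
```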

5. Experimental Results and Analysis

In this section, we compare the performance of the proposed THCS scheme with other benchmark schemes and verify the effectiveness of the established model by comparing the TCN-based popularity prediction model with classic popularity prediction models.

5.1. Simulation Settings

This article considers four RSUs in the cooperation area, and the cache capacity of each RSU is 256. Suppose there are 1000 video files in the network; the file size is a random number in {1, 3, 5, 7} [33], and requests follow a Zipf distribution [34] with a coefficient of 0.55. We set the transmission delay of sending file i from the central server to RSU $r_m$ to 10–20 ms and the transmission delay between RSUs within the cooperation range to 2–4 ms; by default, the delay of transmitting file i is equal between any pair of RSUs. We set the parameters of the TCN prediction model by random search [35], which samples the search space instead of brute-forcing all possible parameter sets, thus avoiding the curse of dimensionality; the resulting parameters are shown in Table 1.
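As an illustration, the sketch below samples random TCN configurations instead of enumerating a full grid; the search space, trial count, and the `validation_loss` scorer are illustrative assumptions.

```python
import random

space = {
    "depth": [4, 6, 8, 10],
    "dilations": [(1, 2, 4), (1, 2, 4, 8)],
    "kernel_size": [2, 3],
    "input_length": [10, 20, 40],
}

def sample_config(rng):
    # One random trial: pick a value per hyperparameter independently.
    return {key: rng.choice(choices) for key, choices in space.items()}

rng = random.Random(0)
candidates = [sample_config(rng) for _ in range(20)]   # 20 random trials
# best = min(candidates, key=validation_loss)  # hypothetical held-out scorer
print(candidates[0])
```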

5.2. Comparison of Algorithms and Indicators

LFU: This strategy prioritizes replacing the least frequently used content when storage space is low.
LRU: This strategy prioritizes replacing the least recently used content when storage space is low.
ECC [31]: This strategy uses the neural cooperative filtering algorithm to obtain popular content and then uses the greedy algorithm to optimize the caching strategy.
Distributed: This strategy caches a portion of top-ranked popular content and does not cache duplicate content, so distributed policies can cache more content items.
P_scheme [36]: This strategy establishes vehicle state and mobility models in the ICN-based IoV network and performs proactive caching.
This article evaluates how THCS improves system performance through two metrics: cache hit ratio (HR) and average content delivery latency (ADL).
HR is the proportion of requests that are hit. Specifically, $HR = B/G$, where $G$ is the total number of requests received during period $\Delta t$ and $B$ is the number of request hits.
ADL denotes the average delay of requests, expressed as:

$$ADL = \min \sum_{m=1}^{M} \sum_{i=1}^{F} P_t(r_m, e_i)\, d_{r_\varphi,r_m,i} \tag{25}$$
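Both metrics are simple to compute from a per-request log; the sketch below assumes each entry records whether the request hit within the cooperation area and its delivery delay, with illustrative values.

```python
def cache_metrics(request_log):
    G = len(request_log)                        # total requests in the window
    B = sum(1 for hit, _ in request_log if hit) # number of request hits
    hr = B / G                                  # HR = B / G
    adl = sum(delay for _, delay in request_log) / G   # mean delivery delay
    return hr, adl

log = [(True, 0.0), (True, 3.0), (False, 15.0), (True, 2.5)]
print(cache_metrics(log))   # -> (0.75, 5.125)
```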

5.3. Analysis of Results and Comparison of Performance

To investigate the relationship between content popularity and cache performance, we vary the content popularity distribution by adjusting the Zipf parameter β. As shown in Figure 3, cache performance improves as the parameter increases. This is because, as β grows, requests concentrate on the most popular content, so more of the cache space serves highly popular items. Compared with the ECC strategy, THCS improves HR by 1–12% and reduces ADL by 5–12.5%.
To investigate the effect of cache size on cache performance, we scale the cache size of each RSU server from 100 to 600. As shown in Figure 4, cache performance improves as the cache size increases. The reason is that, with larger cache capacity, more content can be cached on the RSU servers, so user requests are more likely to be served within the cooperation area rather than from the remote server. The THCS scheme shows the best performance in terms of both HR and ADL.
To investigate the relationship between the number of content items and cache performance, we vary the content count over the interval 500–3000. From Figure 5, we can see that the content count is negatively correlated with cache performance: with more content items, the RSUs cannot cache all content, so more requests cannot be served directly from the cooperation region. The proposed THCS policy continues to outperform the other policies because it considers the selection of cached content more comprehensively.
Figure 6 compares the traditional popularity-only caching strategy with the HCV-integrated strategy proposed in this paper (denoted Popularity-THCS and HCV-THCS, respectively), showing the trends of HR and ADL for the two approaches as the cache capacity increases. HCV-THCS exhibits superior performance, with an 8.6–16% increase in HR and a 7.3–14.3% decrease in ADL compared with Popularity-THCS.

6. Conclusions

In this paper, we propose the THCS strategy to improve cache performance. First, through dynamic vehicle cluster construction, we optimize the cache location. Then, we introduce the concept of HCV in cache content selection, combining different attributes to evaluate the caching value of the content. Finally, we formulate a caching decision based on dynamic programming to obtain a near-optimal caching solution. The experimental results show that our proposed scheme caches more content closer to the requester and that THCS performs well in terms of cache hit rate and average content delivery delay. In future research, we will continue to investigate ways to reduce transmission latency and improve cache hit ratios in vehicular edge networks; spatial prediction via graph neural networks and learning the topological relationships between dynamic vehicles will be considered to further optimize caching strategies.

Author Contributions

Conceptualization, H.W. and J.J.; investigation, H.W. and J.J.; writing—original draft preparation, writing—review and editing, H.W. and J.J.; supervision, H.M. and L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is fully supported by the National Natural Science Foundation of China (62272146, 62071170, 62171180, 62072158), the Program for Innovative Research Team in University of Henan Province (21IRTSTHN015), in part by the Key Science and the Research Program in University of Henan Province (21A510001), Henan Province Science Fund for Distinguished Young Scholars (222300420006), the Science and Technology Research Project of Henan Province under Grant (222102210001), and Leading Talent in Scientific and Technological Innovation in Zhongyuan (234200510018).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank the editor and anonymous referees for their helpful comments in improving the quality of this paper.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Fernández, Ó.B.; Sansano-Sansano, E.; Trilles, S.; Miedes, A.C. A Reactive Architectural Proposal for Fog/Edge Computing in the Internet of Things Paradigm with Application in Deep Learning. In Artificial Intelligence, Machine Learning, and Optimization Tools for Smart Cities: Designing for Sustainability; Pardalos, P.M., Rassia, S.T., Tsokas, A., Eds.; Springer Optimization and Its Applications, Springer: Berlin/Heidelberg, Germany, 2022; pp. 155–175. [Google Scholar] [CrossRef]
  2. Yu, G.; Cai, Z.; Wang, S.; Chen, H.; Liu, F.; Liu, A. Unsupervised Online Anomaly Detection With Parameter Adaptation for KPI Abrupt Changes. IEEE Trans. Netw. Serv. Manag. 2020, 17, 1294–1308. [Google Scholar] [CrossRef]
  3. Arena, F.; Pau, G. An Overview of Vehicular Communications. Future Internet 2019, 11, 27. [Google Scholar] [CrossRef]
  4. Tan, L.T.; Hu, R.Q.; Qian, Y. D2D Communications in Heterogeneous Networks With Full-Duplex Relays and Edge Caching. IEEE Trans. Ind. Inform. 2018, 14, 4557–4567. [Google Scholar] [CrossRef]
  5. Xu, J.; Ota, K.; Dong, M. Saving Energy on the Edge: In-Memory Caching for Multi-Tier Heterogeneous Networks. IEEE Commun. Mag. 2018, 56, 102–107. [Google Scholar] [CrossRef]
  6. Khelifi, H.; Luo, S.; Nour, B.; Sellami, A.; Moungla, H.; Naït-Abdesselam, F. An Optimized Proactive Caching Scheme Based on Mobility Prediction for Vehicular Networks. In Proceedings of the IEEE Global Communications Conference, GLOBECOM 2018, Abu Dhabi, United Arab Emirates, 9–13 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar] [CrossRef]
  7. AlNagar, Y.; Hosny, S.; El-Sherif, A.A. Towards Mobility-Aware Proactive Caching for Vehicular Ad hoc Networks. In Proceedings of the 2019 IEEE Wireless Communications and Networking Conference Workshop, WCNC Workshops 2019, Marrakech, Morocco, 15–18 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar] [CrossRef]
  8. Tan, L.T.; Hu, R.Q. Mobility-Aware Edge Caching and Computing in Vehicle Networks: A Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2018, 67, 10190–10203. [Google Scholar] [CrossRef]
  9. Bitaghsir, S.A.; Khonsari, A. Cooperative caching for content dissemination in vehicular networks. Int. J. Commun. Syst. 2018, 31. [Google Scholar] [CrossRef]
  10. Ostrovskaya, S.; Surnin, O.; Hussain, R.; Bouk, S.H.; Lee, J.; Mehran, N.; Ahmed, S.H.; Benslimane, A. Towards Multi-metric Cache Replacement Policies in Vehicular Named Data Networks. In Proceedings of the 29th IEEE Annual International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC 2018, Bologna, Italy, 9–12 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–7. [Google Scholar] [CrossRef]
  11. Yao, L.; Chen, A.; Deng, J.; Wang, J.; Wu, G. A Cooperative Caching Scheme Based on Mobility Prediction in Vehicular Content Centric Networks. IEEE Trans. Veh. Technol. 2018, 67, 5435–5444. [Google Scholar] [CrossRef]
  12. Jiang, W. Graph-based deep learning for communication networks: A survey. Comput. Commun. 2022, 185, 40–54. [Google Scholar] [CrossRef]
  13. Lian, L.; Chen, N.; Ou, P.; Yuan, X. Mobile Edge Cooperative Caching Strategy Based on Spatio-temporal Graph Convolutional Model. In Proceedings of the 25th IEEE International Conference on Computer Supported Cooperative Work in Design, CSCWD 2022, Hangzhou, China, 4–6 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1396–1401. [Google Scholar] [CrossRef]
  14. Zhou, X.; Bilal, M.; Dou, R.; Rodrigues, J.J.; Zhao, Q.; Dai, J.; Xu, X. Edge Computation Offloading with Content Caching in 6G-Enabled IoV. IEEE Trans. Intell. Transp. Syst. 2023. Available online: https://ieeexplore.ieee.org/document/10034418 (accessed on 23 May 2022). [CrossRef]
  15. Zhang, Y.; Li, C.; Luan, T.H.; Fu, Y.; Shi, W.; Zhu, L. A Mobility-Aware Vehicular Caching Scheme in Content Centric Networks: Model and Optimization. IEEE Trans. Veh. Technol. 2019, 68, 3100–3112. [Google Scholar] [CrossRef]
  16. Kong, Q.; Lu, R.; Ma, M.; Bao, H. A Privacy-Preserving and Verifiable Querying Scheme in Vehicular Fog Data Dissemination. IEEE Trans. Veh. Technol. 2019, 68, 1877–1887. [Google Scholar] [CrossRef]
  17. Song, C.; Xu, W.; Wu, T.; Yu, S.; Zeng, P.; Zhang, N. QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2021, 70, 5286–5295. [Google Scholar] [CrossRef]
  18. Kazmi, S.M.A.; Tri, N.D.; Yaqoob, I.; Ndikumana, A.; Ahmed, E.; Hussain, R.; Hong, C.S. Infotainment Enabled Smart Cars: A Joint Communication, Caching, and Computation Approach. IEEE Trans. Veh. Technol. 2019, 68, 8408–8420. [Google Scholar] [CrossRef]
  19. Amer, R.; Butt, M.M.; Bennis, M.; Marchetti, N. Inter-Cluster Cooperation for Wireless D2D Caching Networks. IEEE Trans. Wirel. Commun. 2018, 17, 6108–6121. [Google Scholar] [CrossRef]
  20. Wu, D.; Liu, Q.; Wang, H.; Yang, Q.; Wang, R. Cache Less for More: Exploiting Cooperative Video Caching and Delivery in D2D Communications. IEEE Trans. Multim. 2019, 21, 1788–1798. [Google Scholar] [CrossRef]
  21. Ma, T.; Chen, X.; Ma, Z.; Chen, Y. Deep Reinforcement Learning for Pre-caching and Task Allocation in Internet of Vehicles. In Proceedings of the 2020 IEEE International Conference on Smart Internet of Things, SmartIoT 2020, Beijing, China, 14–16 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 79–85. [Google Scholar] [CrossRef]
  22. Zhang, Y.; Wang, R.; Hossain, M.S.; Alhamid, M.F.; Guizani, M. Heterogeneous Information Network-Based Content Caching in the Internet of Vehicles. IEEE Trans. Veh. Technol. 2019, 68, 10216–10226. [Google Scholar] [CrossRef]
  23. Yu, Z.; Hu, J.; Min, G.; Zhao, Z.; Miao, W.; Hossain, M.S. Mobility-Aware Proactive Edge Caching for Connected Vehicles Using Federated Learning. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5341–5351. [Google Scholar] [CrossRef]
  24. Zhu, X.; Jiang, C.; Kuang, L.; Zhao, Z. Cooperative Multilayer Edge Caching in Integrated Satellite-Terrestrial Networks. IEEE Trans. Wirel. Commun. 2022, 21, 2924–2937. [Google Scholar] [CrossRef]
  25. Rottenstreich, O.; Kulik, A.; Joshi, A.; Rexford, J.; Rétvári, G.; Menasché, D.S. Data Plane Cooperative Caching With Dependencies. IEEE Trans. Netw. Serv. Manag. 2022, 19, 2092–2106. [Google Scholar] [CrossRef]
  26. Chang, Q.; Jiang, Y.; Zheng, F.; Bennis, M.; You, X. Cooperative Edge Caching via Multi Agent Reinforcement Learning in Fog Radio Access Networks. In Proceedings of the IEEE International Conference on Communications, ICC 2022, Seoul, Republic of Korea, 16–20 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 3641–3646. [Google Scholar] [CrossRef]
  27. Yao, L.; Xu, X.; Deng, J.; Wu, G.; Li, Z. A Cooperative Caching Scheme for VCCN With Mobility Prediction and Consistent Hashing. IEEE Trans. Intell. Transp. Syst. 2022, 23, 20230–20242. [Google Scholar] [CrossRef]
  28. Yang, Y.; Song, T. Energy-Efficient Cooperative Caching for Information-Centric Wireless Sensor Networking. IEEE Internet Things J. 2022, 9, 846–857. [Google Scholar] [CrossRef]
  29. Wu, Q.; Zhao, Y.; Fan, Q.; Fan, P.; Wang, J.; Zhang, C. Mobility-Aware Cooperative Caching in Vehicular Edge Computing Based on Asynchronous Federated and Deep Reinforcement Learning. IEEE J. Sel. Top. Signal Process. 2023, 17, 66–81. [Google Scholar] [CrossRef]
  30. Narayanan, A.; Verma, S.; Ramadan, E.; Babaie, P.; Zhang, Z. DeepCache: A Deep Learning Based Framework For Content Caching. In Proceedings of the 2018 Workshop on Network Meets AI & ML, NetAI@SIGCOMM 2018, Budapest, Hungary, 24 August 2018; ACM: New York, NY, USA, 2018; pp. 48–53. [Google Scholar] [CrossRef]
  31. Chen, Y.; Liu, Y.; Zhao, J.; Zhu, Q. Mobile Edge Cache Strategy Based on Neural Collaborative Filtering. IEEE Access 2020, 8, 18475–18482. [Google Scholar] [CrossRef]
  32. Yu, G.; Wu, J. Content caching based on mobility prediction and joint user Prefetch in Mobile edge networks. Peer-to-Peer Netw. Appl. 2020, 13, 1839–1852. [Google Scholar] [CrossRef]
  33. Bai, S.; Kolter, J.Z.; Koltun, V. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar] [CrossRef]
  34. Breslau, L.; Cao, P.; Fan, L.; Phillips, G.; Shenker, S. Web Caching and Zipf-like Distributions: Evidence and Implications. In Proceedings of the IEEE INFOCOM ’99. The Conference on Computer Communications, Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies, The Future Is Now, New York, NY, USA, 21–25 March 1999; IEEE Computer Society: Washington, DC, USA, 1999; pp. 126–134. [Google Scholar] [CrossRef]
  35. Li, L.; Talwalkar, A. Random Search and Reproducibility for Neural Architecture Search. In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence—UAI 2019, Tel Aviv, Israel, 22–25 July 2019; Globerson, A., Silva, R., Eds.; Proceedings of Machine Learning Research. AUAI Press: Arlington, VA, USA, 2019; Volume 115, pp. 367–377. Available online: http://proceedings.mlr.press/v115/li20c.html (accessed on 15 May 2022).
  36. Musa, S.S.; Zennaro, M.; Libsie, M.; Pietrosemoli, E. Mobility-Aware Proactive Edge Caching Optimization Scheme in Information-Centric IoV Networks. Sensors 2022, 22, 1387. [Google Scholar] [CrossRef]
Figure 1. Road model of vehicle edge network.
Figure 2. TCN prediction for caching.
Figure 3. The relationship between cache performance and Zipf parameters: (a) HR for different Zipf parameters, (b) ADL for different Zipf parameters.
Figure 4. The relationship between cache capacity and cache performance: (a) HR for different cache capacity, (b) ADL for different cache capacity.
Figure 5. The relationship between cache performance and content quantity: (a) The influence of different content quantity on HR; (b) The influence of different content quantity on ADL.
Figure 6. The relationship between cache capacity and cache performance: (a) HR with different cache capacity, (b) ADL with different cache capacity.
Table 1. Simulation parameter table.

| Parameter | Value |
|---|---|
| residual network depth | 10 |
| dilation interval | 1, 2, 4, 8 |
| kernel size | 2 |
| sliding window | 2000 |
| sliding step length | 200 |
| input length | 20 |