Author Contributions
Conceptualization, S.T.; methodology, H.T.; software, H.T.; validation, H.T.; writing—original draft preparation, H.T.; writing—review and editing, S.T.; visualization, H.T.; supervision, S.T. All authors have read and agreed to the published version of the manuscript.
Figure 2. CH sends a beacon containing a connection-expired-predicted CH ID.
Figure 3. After receiving the beacon, a new cluster will be created.
Figure 4. Communication process of the proposed method.
Figure 5. Request counter table.
Figure 6. Data packet forwarding operation.
Figure 7. The map used in Scenario 3 (around Chofu Station in Japan) (©OpenStreetMap contributors).
Figure 8. Median latency for retrieving the requested content with respect to cluster radius (Scenario 2).
Figure 9. Success ratio of retrieving the requested content with respect to cluster radius (Scenario 2).
Figure 10. Median latency for retrieving the requested content with respect to cache buffer size (Scenario 2).
Figure 11. Success ratio of retrieving the requested content with respect to cache buffer size (Scenario 2).
Figure 12. Cache hit ratio with respect to cache buffer size (Scenario 2).
Figure 13. In-cluster cache hit ratio with respect to cache buffer size (Scenario 2).
Figure 14. Median latency for retrieving the requested content with respect to cache buffer size (Scenario 1).
Figure 15. Success ratio of retrieving the requested content with respect to cache buffer size (Scenario 1).
Figure 16. Cache hit ratio with respect to cache buffer size (Scenario 1).
Figure 17. In-cluster cache hit ratio with respect to cache buffer size (Scenario 1).
Figure 18. Distribution of the hop count required to retrieve the requested content (Scenario 1, buffer size: 100).
Figure 19. Average channel usage (Scenario 1, buffer size: 100).
Figure 20. Median latency for retrieving the requested content with respect to cache buffer size (Scenario 2).
Figure 21. Success ratio of retrieving the requested content with respect to cache buffer size (Scenario 2).
Figure 22. Median latency for retrieving the requested content with respect to cache buffer size (Scenario 3).
Figure 23. Success ratio of retrieving the requested content with respect to cache buffer size (Scenario 3).
Figure 24. Median latency for retrieving the requested content with respect to cache buffer size (Scenario 2).
Figure 25. Success ratio of retrieving the requested content with respect to cache buffer size (Scenario 2).
Table 1. Comparison of related methods.

| Reference | Kind | Description |
|---|---|---|
| [10] | Cluster construction | Constructs clusters based on inter-vehicle distances, vehicle heights, and route stability |
| [11] | Cluster construction | Clusters may be constructed with multi-hop distances |
| [3] | Reducing cache redundancy | Cache probability is determined by channel usage |
| [7] | Reducing cache redundancy | Cache probability is determined by content similarity and vehicle mobility |
| [8] | Reducing cache redundancy | Cache probability is determined by the characteristics of vehicles' mobility |
| [4] | Collaborative caching | CHs receive content from the RSU; CMs receive content from both |
| [12] | Collaborative caching | Vehicles broadcast alternately |
| [5] | Collaborative caching | A vehicle preferentially requests content from others moving in the same lane |
| [13] | Collaborative caching | A vehicle decides whether to cache content based on the vehicle's power, the content's popularity, the gain acquired by the content, and the distance (hop count) to the content provider |
| [14] | Collaborative caching | Both vehicles and RSUs construct clusters |
| [15] | Collaborative caching | Vehicles within an RSU's range are divided into clusters, and they send requests to the RSU |
| [16] | Collaborative caching | Vehicles construct clusters and receive requested content from either their CHs or RSUs; part of the mathematical model is solvable as a knapsack problem |
| [6] | Collaborative caching | Vehicles broadcast the names of cached content |
| [17] | Collaborative caching | Cache policy is determined with deep reinforcement learning |
| [18] | Collaborative caching | The problem of which content to store and where to store it is determined by machine learning |
| [19] | Collaborative caching | Data carrier node is selected by reinforcement learning |
| [20] | Cache place determination | Cache placement is treated as an MWVCP problem |
| [21] | Cache place determination | Node n only stores content c, where |
| [22] | Relay vehicle determination | Vehicles that lie in the common communication area are preferentially selected |
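Row [16] in the table above notes that part of that method's cache-allocation model is solvable as a knapsack problem. As an illustrative sketch only (not the formulation used in [16]), a 0/1 knapsack selection of which content items to cache, given a buffer budget and a per-item utility such as expected popularity, can be written as:

```python
def select_cache(items, capacity):
    """0/1 knapsack: pick content items maximizing total utility
    within the cache capacity.

    items    -- list of (size, utility) pairs, one per content item
    capacity -- total cache buffer budget (same unit as sizes)
    Returns (best_utility, indices_of_chosen_items).
    """
    n = len(items)
    # dp[c] = best utility achievable with capacity c (1-D rolling table)
    dp = [0] * (capacity + 1)
    # keep[i][c] = True if taking item i improved dp[c] at step i
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (size, utility) in enumerate(items):
        # Iterate capacities downward so each item is used at most once
        for c in range(capacity, size - 1, -1):
            if dp[c - size] + utility > dp[c]:
                dp[c] = dp[c - size] + utility
                keep[i][c] = True
    # Backtrack to recover which items were chosen
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= items[i][0]
    return dp[capacity], list(reversed(chosen))
```

For example, with items of (size, utility) = (2, 3), (3, 4), (4, 5), (5, 8) and a budget of 5, the single large item (utility 8) beats the two small ones (utility 7), so the selection is item 3 alone.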
Table 2. Simulation conditions.

| Key | Value |
|---|---|
| Simulator | Scenargie [23] |
| MAC protocol | IEEE 802.11p |
| Velocity of vehicles | 20 m/s |
| Vehicle transmission range | 350 m |
| Beacon transmission interval | 100 ms |
| Buffer size (number of content items) | 0, 10, …, 100 |
| Interval for sending interest packets | 1 s |
| Lifetime of content in temporary buffer | 500 ms |
| Simulation time | 320 s |
| Number of trials | 8 |
Table 3. Simulation scenarios.

| No. | Road Shape | Number of Vehicles |
|---|---|---|
| Scenario 1 | 4 km straight road | 100 |
| Scenario 2 | 4 km straight road | 200 |
| Scenario 3 | 1.6 km × 1.6 km urban area | 300 |
Table 4. Values of parameters.

| Variable Name | Value |
|---|---|
| | 300 ms |
| | 150 m |
| | 1.4 m/s |
| | 100 |
| | 1 s |
Table 5. Sizes of interest/data packets and control messages.

| Packet Name | Size (Bytes) |
|---|---|
| Interest packet | 128 |
| Data packet | 512 |
| Content-Not-Found | 128 |
| Request-To-Cache | 128 |
| Sync (for n content items) | |
| Beacon (clustering disabled) | 48 |
| Beacon (clustering enabled) | 80 |
| Merge request | 16 |
| Merge ack | 20 |