Abstract
The usage of social media applications such as YouTube and Facebook is increasing rapidly with each passing day. These applications are used to upload content such as images, videos, and voice messages, which results in exponential growth of traffic overhead. Due to this overhead (high bandwidth consumption), service providers struggle to deliver high-speed, low-latency Internet to users around the globe. The current Internet, with its fixed host-centric infrastructure, cannot cope with such high data traffic, which degrades network performance. A new Internet paradigm known as Information Centric Networking (ICN) was introduced, based on content-oriented addressing. The idea of ICN is to satisfy requests locally through neighbor nodes without accessing the source, which helps offload the network’s data traffic. ICN can thus mitigate traffic overhead and meet future Internet requirements. In this work, we propose a novel decentralized placement scheme named self-organized cooperative caching (SOCC) for mobile ICN to improve overall network performance through efficient bandwidth utilization and lower traffic overhead. The proposed scheme outperforms state-of-the-art schemes, reducing energy consumption by 55%, lowering average latency, and improving the cache hit rate by at least 35%.
1. Introduction
Emerging technologies in mobile networks offer new capabilities to both consumers and service providers. The resulting increase in data traffic is surging, driven mostly by social applications [1]. This exponential data growth places a significant burden on the current wireless infrastructure [2,3]. According to a Cisco report [4], there are nearly 4.5 billion Internet users, a figure predicted to reach approximately 5.3 billion by the end of 2023. Handheld devices are equipped with new technologies (Wi-Fi Direct, Near Field Communication, Bluetooth, etc., supported by 3GPP), large storage capacity, and fast processors. Building on recent layer-2 technology, a concept known as device-to-device (D2D) communication has been introduced that allows mobile devices to share data with nearby devices without using any core network services. D2D communication enables a mobile device to connect with several other mobile devices in a multi-hop manner. These emerging technologies promote the technique of cooperative caching. ICN is a new paradigm in which cooperative caching is used to offload the traffic overhead of the network. It follows a content-based addressing technique, where contents are placed close to the end users so that requests can be satisfied by neighbouring users, minimizing the need to reach the source. In this way, traffic overhead on the core network can be reduced with low latency, benefiting service providers by reducing bandwidth wastage [5,6]. ICN is a forward-looking approach that meets the requirements of both the future Internet and cellular networks [7], and it enables efficient energy consumption in advanced wireless technologies such as 5G [8].
Multiple schemes, such as [9,10], have worked to optimize the overall performance of ICN networks by considering various parameters, including low latency, energy efficiency, high cache hit ratio, packet delivery, and content granularity. Table 1 shows the abbreviations of some important keywords.
Table 1.
Abbreviations of key words.
The focus of this paper is to minimize energy consumption, lower latency, and improve the cache hit rate. To this end, this paper proposes a decentralized cooperative caching scheme for mobile ICN.
The main contributions of this work are:
- We establish a cooperative caching model for wireless ICN to minimize energy consumption while considering significant constraints, such as limited storage capacity, content popularity, content access, and content placement.
- The proposed scheme can support device-to-device (D2D) communication and minimizes transportation and energy costs.
- The proposed system improves the overall performance of the network in terms of efficient bandwidth utilization, low latency, and low energy consumption.
The rest of the paper is organized as follows. Section 2 provides the related work. In Section 3, we explain the methodology. In Section 4, we present the Network and Energy consumption models. In Section 5, we present the results and discussion. Finally, in Section 6, we draw our conclusions and present the future directions.
2. Related Work
To cope with the exponential growth of data traffic, a new paradigm known as ICN was introduced. It serves end users by satisfying requests through their neighbour users: packets of content are kept at the network’s edge, which helps users acquire their desired data locally. In [11], the authors proposed a social-based QoS routing scheme for ICN. They devised three types of relationships between nodes, namely neighbors (NB), interest friends (IF), and response friends (RF), and used a content-popularity-based caching scheme to decide whether a content should be cached in the CS. A content with a high probability of being requested in the future is given higher priority for placement in the limited-size CS; when the CS is full, content with a low probability of being requested is replaced by high-probability content. In [12], the authors proposed a scheme known as Cache Everything Everywhere (CEE) to mitigate the average hop latency by placing content at neighboring users. It reduces the hop count for content retrieval but suffers from cache redundancy: along the path between requester and provider, each node caches the content in its own memory buffer, so all nodes on the path hold redundant copies. Since mobile devices have limited storage capacity, they must apply an eviction scheme whenever their memory is full. In [13], the authors proposed a deep-learning-based mobile network architecture that identifies QoS by directly mapping the state of mobile networks.
To overcome the problem of data redundancy, the authors in [14] devised a probabilistic caching scheme. In this scheme, redundancy is reduced by calculating, for each content arriving at an ICN node between the requester and provider, a probability that determines whether the content needs to be cached. In [15], the authors investigated the performance of different caching schemes under various content request models. The authors in [16] used an approach known as WAVE to overcome the cache redundancy problem. In this approach, the content is divided into small chunks containing the data packets, and the requester sends a request for every chunk to acquire a data packet. This approach reduces both data redundancy and packet loss. Since most traffic overhead is caused by popular content, data of user interest can be placed at the network edge, closer to the end users, to reduce traffic overhead.
In [17], the authors proposed an approach that infers the user’s content interests from network details and places the desired contents at the network edge using a social distance parameter, on the basis of which arriving content is cached. The authors in [18] proposed a new energy-aware scheme that relies on an improvement of the LEACH protocol to reduce energy consumption. This approach optimally determines the cluster heads (CH) to minimize the average energy consumption and extend the lifespan of a wireless sensor network. Similarly, the authors in [19] proposed an efficient strategy to minimize the traffic load generated by social networks, thereby increasing the data rate and reducing the delay before content becomes available. Furthermore, the core network load is reduced as well.
3. Methodology
3.1. Overview of ICN Architecture
ICN is an emerging networking architecture designed to satisfy future Internet requirements. It was first devised by Van Jacobson in [20] and is based on content addressing, in which a user acquires the desired data from neighboring nodes; only if they cannot satisfy the request is it forwarded further toward the content source. In this way, ICN improves the overall efficiency and throughput of the network. The typical Internet is IP-based, where the user must reach the main content originator to obtain the desired data, whereas ICN supports name-based addressing. Figure 1 illustrates the ICN architecture, consisting of four nodes (A, B, C, and D) connected through multi-hop links. Each ICN node contains three data structures: the pending interest table (PIT), the content store (CS), and the forwarding information base (FIB). The PIT holds the unsatisfied requests for content that arrive at the node, along with the ID and face of the requesting node for path indication. The node checks the PIT for every unsatisfied request: if the desired content is not already recorded, the node adds an entry to the PIT and then forwards the request to its neighboring nodes. Once the request is satisfied by a neighboring node, the requesting node removes the entry from the PIT. The CS is a limited buffer where contents are stored temporarily. The FIB stores forwarding entries that map content names to the faces through which requests can be satisfied; it is similar to the IP forwarding table managed by Internet routers. When a node cannot satisfy a request (the CS does not contain the requested content name and no entry is available in the PIT), it checks the FIB for the longest matching prefix to determine the outgoing faces for the content name. Two types of packets are exchanged in the network: Interest packets and Data packets.
Figure 1.
ICN Architecture: A network of four routers (A, B, C and D) and multiple mobile nodes.
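To make the forwarding logic above concrete, the following sketch shows one possible implementation of a node's Interest and Data processing using the three data structures. All class and method names are illustrative assumptions, not taken from a real ICN implementation such as NDN or CCNx.

```python
class ICNNode:
    """Minimal sketch of an ICN node with CS, PIT, and FIB (names assumed)."""

    def __init__(self):
        self.cs = {}    # content store: content name -> data
        self.pit = {}   # pending interest table: content name -> set of faces
        self.fib = {}   # forwarding information base: name prefix -> out face

    def on_interest(self, name, in_face):
        # 1. Content Store: satisfy the request locally if the content is cached
        if name in self.cs:
            return ("data", self.cs[name], in_face)
        # 2. PIT: aggregate duplicate requests for the same content
        if name in self.pit:
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        # 3. FIB: record a PIT entry, then forward via longest-prefix match
        self.pit[name] = {in_face}
        return ("forward", None, self._longest_prefix_match(name))

    def _longest_prefix_match(self, name):
        best = None
        for prefix, face in self.fib.items():
            if name.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
                best = (prefix, face)
        return best[1] if best else None

    def on_data(self, name, data):
        # A Data packet consumes the PIT entry; it is returned on all recorded
        # faces, and cached here unconditionally for illustration.
        faces = self.pit.pop(name, set())
        self.cs[name] = data
        return faces
```

Note how the PIT collapses concurrent requests for the same content into a single upstream Interest, which is one source of the traffic offloading described above.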
3.2. Proposed Caching Scheme
The ICN node architecture generally consists of three data structures, namely the PIT, CS, and FIB. In this paper, a new data structure named the self-organized cooperative caching (SOCC) table is incorporated into the node. The SOCC table, shown in Figure 2, stores the requested contents at the receiver node according to their request frequency. The table is updated regularly, in descending order, as per Algorithm 1. The notations are highlighted in Table 2.
| Algorithm 1 SOCC-table convergence |
| Require: n is the current node at which a request for a content c arrives along the path |
| if (a request for content c arrives at node n) then |
| Update the SOCC table entry for the requested content c |
| Sort the SOCC table in descending order of popularity |
| end if |
Figure 2.
Proposed ICN Node.
Table 2.
Abbreviations of key parameters.
Algorithm 1 represents the convergence operation on the local popularity (SOCC) table at the arrival of each request at the corresponding node n. As soon as a request is received, the node’s SOCC table is updated and all its contents are sorted in descending order. A cache hit occurs if the requested content already exists at node n: the rank of the content is updated in the SOCC table and the data packet c is forwarded along the reverse path to the node that requested it. If the requested data are not available, the node generates an Interest packet and disseminates it to the neighboring nodes.
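The convergence step of Algorithm 1 can be sketched as follows; the class and method names are illustrative assumptions, not from the paper's implementation.

```python
from collections import Counter


class ConvergenceTable:
    """Sketch of the local popularity table updated by Algorithm 1."""

    def __init__(self):
        self.counts = Counter()  # content -> number of requests seen locally

    def on_request(self, content):
        # Update the table entry for the requested content
        self.counts[content] += 1

    def ranking(self):
        # Contents sorted in descending order of request frequency
        return [c for c, _ in self.counts.most_common()]

    def top(self, k):
        # The k locally most popular contents (candidates for caching)
        return self.ranking()[:k]
```

Because the table only records requests that already pass through the node, keeping it sorted requires no extra message exchange, consistent with the decentralized design described above.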
Suppose a user needs a content c. The corresponding node generates an Interest packet and forwards it to its neighbor nodes. At the same time, Algorithm 1 updates the SOCC table at the current node, increments the popularity of the requested content by one, and re-sorts the table in descending order of popularity. When an Interest packet arrives at a particular node, the node checks its cache; if the requested content c is available, it is sent back along the same path (face) from which the Interest packet came. If the cache is full, the node makes space for newly arriving content by removing the least popular content. In this way, the node decides, based on the popularity of the arriving content, whether to cache it.
To better understand Algorithm 2, suppose we have a small wireless network, as shown in Figure 1, consisting of nodes (Node A, Node B, Node C, …). If user node A generates an Interest packet and forwards it to its neighbor nodes (B, C, and D), and neighbor node D already holds a copy of the corresponding content, then D sends the Data packet back to the requester node A.
| Algorithm 2 The proposed placement scheme at the arrival of c |
| Require: n is the current node and c is any content arriving along the path |
| if c is not in the cache of n then ▹ c not available in the local cache |
| if the cache of n is not full then |
| Insert c in the cache |
| else |
| if c is popular enough according to the SOCC table then |
| Replace the least recently used content and insert the arrived c |
| end if |
| end if |
| end if |
| Forward c to the neighbor nodes in order to reach the requester |
Upon the arrival of a data packet at node A, node A first checks whether the arrived content is already available in its cache; if so, it discards the arrived content c, and otherwise it decides whether to cache the new content c. Suppose the cache of node A is full and cannot accommodate the arrived c. In that case, Algorithm 2 uses the LRU replacement strategy to remove the oldest content from node A’s cache and make space for the new content c. The complexity of the proposed scheme is similar to that of LRU; only additional storage is required for maintaining the SOCC convergence table. The flowchart of the proposed approach is shown in Figure 3.
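The LRU replacement step used by the placement scheme can be sketched as below. The capacity parameter and method names are assumptions made for illustration; the popularity check of Algorithm 2 is omitted here to isolate the eviction behavior.

```python
from collections import OrderedDict


class LRUCache:
    """Sketch of the per-node cache with LRU eviction used by Algorithm 2."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # order of keys encodes recency of use

    def contains(self, name):
        if name in self.store:
            self.store.move_to_end(name)  # refresh recency on a cache hit
            return True
        return False

    def place(self, name, data):
        if name in self.store:               # copy already cached: discard
            self.store.move_to_end(name)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict the least recently used
        self.store[name] = data
```

Using `OrderedDict` keeps both lookup and eviction at amortized constant cost, which matches the claim that the scheme's complexity is similar to plain LRU.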
Figure 3.
Flow Chart of the proposed cooperative caching scheme.
3.3. Content Request Generation Process
The random request-generation behaviour of a common user is generally modeled with the Zipf distribution, a discrete distribution that relates a content’s request frequency to its popularity rank; it is particularly suited to user-generated content [21]. Following [17], Equation (1) gives the probability distribution of each content, representing the number of times a content is requested by the users in the network: the probability of requesting the content of rank k is proportional to 1/k raised to the distribution parameter, scaled by a normalization factor.
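The request-generation process described above can be sketched as follows. The catalogue size, the value of the skewness parameter alpha, and the function names are illustrative assumptions, not values from the paper.

```python
import random


def zipf_probabilities(catalogue_size, alpha):
    # Probability of requesting the content of rank k is (1/k^alpha) / omega,
    # where omega is the normalization factor over the whole catalogue.
    weights = [1.0 / (k ** alpha) for k in range(1, catalogue_size + 1)]
    omega = sum(weights)
    return [w / omega for w in weights]


def generate_requests(catalogue_size, alpha, n, rng=None):
    # Draw n content requests according to the Zipf popularity profile.
    rng = rng or random.Random(42)
    probs = zipf_probabilities(catalogue_size, alpha)
    contents = list(range(1, catalogue_size + 1))
    return rng.choices(contents, weights=probs, k=n)
```

A higher alpha concentrates requests on the top-ranked contents, which is exactly the regime in which popularity-based caching pays off most.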
3.4. Performance Metrics
The performance of the proposed approach is measured using the following three major parameters:
- Cache hit rate: as shown in Equation (2), the ratio of the requests satisfied by a caching node to the total number of requests.
- Average latency: the average time required to deliver the requested content to the requester node, measured in hops; minimizing the hop count decreases the response (waiting) time of a request.
- Energy saving rate: the amount of energy saved by a caching policy in comparison to a system without caching.
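The three metrics above can be expressed as small helper functions; the argument names are assumptions for illustration, not identifiers from the paper's simulator.

```python
def cache_hit_rate(hits, total_requests):
    # Equation (2): requests satisfied from a cache / total requests
    return hits / total_requests if total_requests else 0.0


def average_latency(hop_counts):
    # Average number of hops travelled by satisfied requests
    return sum(hop_counts) / len(hop_counts)


def energy_saving_rate(energy_with_cache, energy_no_cache):
    # Fraction of energy saved relative to a no-caching baseline
    return 1.0 - energy_with_cache / energy_no_cache
```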
4. System Model
Consider a random topology of an ICN network, constructed as a connected graph G = (V, E), where V represents the set of nodes and E the set of bidirectional links. Let C denote the collection of contents in the network. At the start, the server is assumed to pre-store the entire catalogue of contents C. The edge nodes of the network are responsible for receiving user Interest packets and forwarding the corresponding contents back to the requester node. In the constructed ICN network, each node has the capacity to cache data contents in its memory buffer.
4.1. Linear Topology
In the linear topology, a linear wireless network of ICN nodes is considered. As shown in Figure 4, the nodes are connected to the server in a multi-hop manner. The server (access point) is assumed to be equipped with the content catalogue C. The buffers of all nodes are considered to be of the same size, and each node acts both as a content requester and as a provider (relaying node).
Figure 4.
A linear network with one server and multiple nodes. The server, equipped with content catalogue C, is placed at the centre and connected with the nodes in a multi-hop manner. Request packets travel to the left and the content c travels towards the right.
4.2. Mobile Topology
Mobility acts as a double-edged sword in networking. It can enhance network performance by bringing requested contents closer to the requester node, thus decreasing the average latency; however, it can also adversely affect system performance, since if a specific node moves away, the requester node must search for the content again. In this work, a wireless mobile network is considered, consisting of mobile ICN nodes interconnected with each other and deployed at random positions with probability p, as shown in Figure 5. The server, equipped with the content catalogue C, is placed at the centre and connected with the nodes in a multi-hop manner.
Figure 5.
A mobile network consisting of one server and ICN mobile nodes deployed at random positions with probability p. The server, equipped with content catalogue C, is placed at the centre and connected with the nodes in a multi-hop manner.
4.3. An Energy Consumption Model
The energy consumption model in [20] is used to calculate the total energy consumption of the ICN network. The total energy consumption of the network is the sum of the energy consumed while transmitting and caching the contents, as shown in Equation (3).
Since we deal here with whole data packets instead of chunks, the content size is assumed to be fixed at every node. Requests for a content arrive at a node at a given rate within a time interval t. Each node consumes a caching power density in its buffer, and each link has an associated energy density. According to the energy model in [20], the caching energy consumed at each individual node to cache a content for time t is given by Equation (4).
Suppose a node requests a content from another node, with the distance between them measured in hops over the set of nodes along the path. The total energy consumed for handling and transmitting the request comprises a per-node component and a per-link component, respectively. The transmission energy consumed by the nodes and the communication links while transmitting the content from the providing node to the requesting node is calculated using Equation (5).
As shown in Equation (6), the total energy consumed is obtained by summing the caching and transmission energies of Equations (4) and (5), respectively.
The total server energy consumption can be calculated using Equation (7).
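The structure of the energy model can be sketched numerically as below. The parameter names (caching power density `p_ca`, per-node handling energy `w_ce`, per-link transmission energy `w_tr`) follow the textual description above, but are assumptions: the exact symbols and equation forms of [20] are not reproduced here.

```python
def caching_energy(p_ca, content_size, t):
    # Sketch of Equation (4): energy to keep a content of the given size
    # cached for a duration t, at caching power density p_ca.
    return p_ca * content_size * t


def transport_energy(content_size, hops, w_ce, w_tr):
    # Sketch of Equation (5): per-node handling cost plus per-link
    # transmission cost along a path of `hops` links (hops + 1 nodes).
    return content_size * ((hops + 1) * w_ce + hops * w_tr)


def total_energy(p_ca, content_size, t, hops, w_ce, w_tr):
    # Sketch of Equation (6): caching energy plus transport energy.
    return (caching_energy(p_ca, content_size, t)
            + transport_energy(content_size, hops, w_ce, w_tr))
```

This decomposition makes the trade-off explicit: caching closer to requesters raises the caching term slightly but shrinks the hop count, and with it the dominant transport term.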
4.4. Simulation Environment
The simulations were performed in OMNeT++ [22] considering two different scenarios: linear and mobile. Content popularity is assumed to follow the Zipf distribution, and, following a Poisson process, users generate one request per 300 s for the content of interest. Wireless communication between ICN nodes is enabled using Wi-Fi Direct (IEEE 802.11 standard). The Random Walk mobility model is used for mobile nodes: each node randomly changes its position in a defined area, with random speed and direction, at a time interval t. Each node transmits data at a constant bit rate (CBR) with a maximum transmission range of 30 m. For each communication link between nodes, the bandwidth is 2 Mbps with 10 dB channel noise and 0.33 s propagation delay. Each node uses two contention windows and can transmit up to 250 kbps. It is assumed that each ICN node consumes 1 joule of energy for bi-directional communication.
For the linear scenario, a fixed storage size of 25 contents per node is assumed, while for the mobile scenario a cache size varying from 10 MB to 500 MB is considered for each node. Initially, the caches are empty and of the same size. Measurements are taken when the network is in a steady state and the caches are full. The simulations are run 10 times for each caching scheme, and the results reported here are averages. The selection of the requester node and of the content from the catalogue follows a uniform distribution and the Zipf distribution, respectively. Since the edge nodes may cover a large number of users, the average request rate at the edge nodes follows a uniform distribution. The parameters used are given in Table 3.
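For reference, the settings described in this subsection can be collected into a single configuration structure; the values are taken from the text above, while the dictionary keys are illustrative assumptions.

```python
# Simulation settings from Section 4.4, gathered into one place (keys assumed)
SIMULATION_PARAMS = {
    "simulator": "OMNeT++",
    "popularity_model": "Zipf",
    "request_process": "Poisson, one request per 300 s",
    "link_layer": "Wi-Fi Direct (IEEE 802.11)",
    "mobility_model": "Random Walk",
    "transmission_range_m": 30,
    "bandwidth_mbps": 2,
    "channel_noise_db": 10,
    "propagation_delay_s": 0.33,
    "energy_per_communication_j": 1,
    "linear_cache_size_contents": 25,
    "mobile_cache_size_mb": (10, 500),   # range swept in the mobile scenario
    "runs_per_scheme": 10,
}
```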
Table 3.
Simulation parameters.
5. Results and Discussions
5.1. Impact on Average Latency
Figure 6 shows the performance of the proposed approach and other state-of-the-art approaches in a linear wireless network. The size of the content catalogue is 100, with a fixed Zipf distribution parameter. The cache size of each ICN node is assumed to be 25 contents, which enables each node to store the top 25 most requested contents in its buffer.
Figure 6.
Average Latency for Small Buffer size = 25.
As the distance between a node and the server increases, latency increases and request satisfaction decreases, and vice versa; the same holds for the hit ratio and throughput. The proposed caching scheme performed better than the others by a minimum of 1.3 hops. The NO-CACHE scheme performed worst and is taken as our reference point. Cache Everything Everywhere (CEE) and Random Caching with p = 0.9 obtained similar results but did not outperform our proposed approach.
In the mobile scenario, we assume that the server is equipped with a catalogue of 1000 contents and a fixed Zipf popularity parameter. The cache size of all the nodes is kept the same, varying from 10 MB to 500 MB. Figure 7 shows that as the cache size increases, the possibility of satisfying users’ requests increases, thus decreasing the average latency for all the approaches: the greater the caching capacity, the greater the chance to cache the content, and hence the smaller the hop-dependent latency. For smaller cache sizes, such as 50 MB, the proposed approach achieved a minimum average latency of 3.75 hops, compared to CEE, Random Caching p = 0.9 and p = 0.5, which achieved average latencies of 4.00, 4.5, and 5.4 hops, respectively. As the cache size increases to 100 MB, the average latency for the proposed scheme decreases to 3.29 hops, while for CEE, Random Caching p = 0.9 and p = 0.5 it decreases to 3.4, 4.00, and 5.1, respectively. Figure 7 shows that at a cache size of 500 MB, our proposed approach outperformed the other state-of-the-art approaches, achieving a minimum average latency of 2.0 hops.
Figure 7.
Average latency vs. Cache size.
5.2. Impact on Cache Hit Ratio
Figure 8 shows that as the cache size increases from 10 MB to 500 MB, the cache hit ratio also increases. For smaller cache sizes such as 10 MB, a node has limited storage for caching content; thus, the performance of all approaches improves only slightly. CEE caches every content regardless of popularity; hence it achieved a smaller hit ratio of 10%, while Random Caching p = 0.9 achieved a 26% cache hit ratio, 16 percentage points better than CEE. The better performance of Random Caching p = 0.9 stems from its caching of arriving contents with a probability of 0.9. As the cache size increases from 10 MB to 210 MB, the storage capacity of each node increases, and the overall performance of all approaches increases as well. Figure 8 shows that, at a cache size of 210 MB, the cache hit ratio of CEE, p = 0.9, p = 0.5, and the proposed SOCC scheme increases to 35%, 45%, 53%, and 61%, respectively. At a cache size of 500 MB, SOCC outperformed the other schemes by achieving the highest hit ratio of 65%, exceeding CEE, p = 0.9, and p = 0.5 by 25%, 15%, and 6%, respectively.
Figure 8.
Cache hit rate vs. Cache size.
5.3. Impact on Energy Consumption
In a wireless communication network, energy consumption is high due to the movement of devices from one position to another. Here, energy is consumed while caching and transmitting the content. To minimize the overall energy consumption, the transport energy must be minimized, as most of the energy is consumed during content transmission. Efficient caching reduces the average latency and thereby minimizes the transport energy.
Figure 9 shows the relationship between the energy saving ratio and the cache size, computed for CEE, Random Caching p = 0.9, Random Caching p = 0.5, and the proposed caching scheme. At a 10 MB cache size, CEE saves only 20% energy, which is the worst result, while Random Caching p = 0.9 performs considerably better due to its probabilistic strategy; the proposed scheme saves the most energy, at 31.53%. At a 210 MB cache size, CEE, p = 0.9, p = 0.5, and the proposed SOCC scheme achieve 19.2%, 38.4%, 46.78%, and 52.32% energy saving, respectively. At 500 MB, SOCC achieved the highest energy saving of 55.45%, exceeding CEE, p = 0.9, and p = 0.5 by 26.11%, 15.97%, and 5.91%, respectively. These experimental results validate that the SOCC scheme performs efficiently as the storage capacity of the nodes grows.
Figure 9.
Energy saving rate vs. Cache size.
6. Conclusions and Future Works
The current Internet infrastructure cannot cope with the huge data growth driven by social applications, which ultimately degrades overall performance for both service providers and requesters. Cooperative caching helps to reduce the network load and utilize bandwidth efficiently.
In this paper, we proposed a novel placement scheme, self-organized cooperative caching (SOCC), based on the local popularity of contents. The scheme maintains a real-time ranking table in which contents are placed according to their popularity in the network, and no extra communication overhead is needed to maintain the ranking table. Thanks to this effective cooperative caching, network performance is enhanced compared with other caching schemes. The proposed approach was implemented in OMNeT++ and outperformed other state-of-the-art approaches in terms of average latency (by 2 hops), average hit ratio (by 35%), and average energy saving (by 55.45%). In the future, we will evaluate the scalability and efficiency of the proposed system using different wireless network topologies and additional performance parameters.
Author Contributions
Conceptualization, J.I.; Data curation, A.R.; Formal analysis, Z.u.A. and A.Z.; Funding acquisition, S.A.H.M. and M.H.A.; Investigation, N.A. and M.H.A.; Methodology, J.I., N.A., S.H.K. and A.Z.; Project administration, S.A.H.M.; Resources, A.R.; Software, J.I., S.H.K. and A.R.; Supervision, M.H.A.; Validation, S.H.K. and S.A.H.M.; Writing—original draft, Z.u.A., N.A. and A.Z.; Writing—review & editing, S.A.H.M. and M.H.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ye, X.; Chen, M. Personalized Recommendation for Mobile Internet Wealth Management Based on User Behavior Data Analysis. Sci. Program. 2021, 2021, 9326932. [Google Scholar] [CrossRef]
- Iqbal, J.; Iqbal, M.A.; Ahmad, A.; Khan, M.; Qamar, A.; Han, K. Comparison of Spectral Efficiency Techniques in Device-to-Device Communication for 5G. IEEE Access 2019, 7, 57440–57449. [Google Scholar] [CrossRef]
- Mollahasani, S.; Eroğlu, A.; Demirkol, I.; Onur, E. Density-aware mobile networks: Opportunities and challenges. Comput. Netw. 2020, 175, 107271. [Google Scholar] [CrossRef]
- Cisco Annual Internet Report (2018–2023), Tech. Rep., 2020. Available online: https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html (accessed on 5 August 2022).
- Siddiqui, M.U.A.; Qamar, F.; Tayyab, M.; Hindia, M.N.; Nguyen, Q.N.; Hassan, R. Mobility Management Issues and Solutions in 5G-and-Beyond Networks: A Comprehensive Review. Electronics 2022, 11, 1366. [Google Scholar] [CrossRef]
- Tran, T.X.; Pompili, D. Adaptive Bitrate Video Caching and Processing in Mobile-Edge Computing Networks. IEEE Trans. Mob. Comput. 2019, 18, 1965–1978. [Google Scholar] [CrossRef]
- Gupta, M.; Garg, A. A Perusal of Replication in Content Delivery Network. In Next-Generation Networks; Lobiyal, D.K., Mansotra, V., Singh, U., Eds.; Springer: Singapore, 2018; pp. 341–349. [Google Scholar] [CrossRef]
- Khanh, Q.V.; Hoai, N.V.; Manh, L.D.; Le, A.N.; Jeon, G. Wireless Communication Technologies for IoT in 5G: Vision, Applications, and Challenges. Wirel. Commun. Mob. Comput. 2022, 2022, 12. [Google Scholar] [CrossRef]
- Cao, X.; Liu, L.; Cheng, Y.; Shen, X. Towards Energy-Efficient Wireless Networking in the Big Data Era: A Survey. IEEE Commun. Surv. Tutorials 2018, 20, 303–332. [Google Scholar] [CrossRef]
- Soleimani, S.; Tao, X. Caching and placement for in-network caching in device-to-device communications. Wirel. Commun. Mob. Comput. 2018, 2018, 9539502. [Google Scholar] [CrossRef]
- Qu, D.; Wang, X.; Huang, M.; Li, K.; Das, S.K.; Wu, S. A Cache-Aware Social-Based QoS Routing Scheme in Information Centric Networks. J. Netw. Comput. Appl. 2018, 121, 20–32. [Google Scholar] [CrossRef]
- Sourlas, V.; Flegkas, P.; Tassiulas, L. Cache-aware routing in Information-Centric Networks. In Proceedings of the 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), Ghent, Belgium, 27–31 May 2013; pp. 582–588. [Google Scholar]
- Luo, G.; Yuan, Q.; Li, J.; Wang, S.; Yang, F. Artificial Intelligence Powered Mobile Networks: From Cognition to Decision. IEEE Netw. 2022, 36, 136–144. [Google Scholar] [CrossRef]
- Psaras, I.; Chai, W.K.; Pavlou, G. Probabilistic In-network Caching for Information-centric Networks. In Proceedings of the Second Edition of the ICN Workshop on Information-Centric Networking, New York, NY, USA, 17 August 2012; ICN ’12. ACM: New York, NY, USA, 2012; pp. 55–60. [Google Scholar] [CrossRef]
- Iqbal, J.; Giaccone, P.; Rossi, C. Local cooperative caching policies in multi-hop D2D networks. In Proceedings of the 2014 IEEE 10th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Larnaca, Cyprus, 8–10 October 2014; pp. 245–250. [Google Scholar] [CrossRef]
- Cho, K.; Lee, M.; Park, K.; Kwon, T.T.; Choi, Y.; Pack, S. WAVE: Popularity-based and collaborative in-network caching for content-oriented networks. In Proceedings of the 2012 Proceedings IEEE INFOCOM Workshops, Orlando, FL, USA, 25–30 March 2012; pp. 316–321. [Google Scholar] [CrossRef]
- Iqbal, J.; Giaccone, P. Interest-based cooperative caching in multi-hop wireless networks. In Proceedings of the 2013 IEEE Globecom Workshops (GC Wkshps), Atlanta, GA, USA, 9–13 December 2013; pp. 617–622. [Google Scholar] [CrossRef]
- Zhang, S.; Li, J.; Yang, Q.; Qin, M.; Kwak, K.S. Residual-energy Aware LEACH Approach for Wireless Sensor Networks. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Zagreb, Croatia, 2–5 July 2019; pp. 413–418. [Google Scholar] [CrossRef]
- Wang, T.; Li, P.; Wang, X.; Wang, Y.; Guo, T.; Cao, Y. A Comprehensive Survey on Mobile Data Offloading in Heterogeneous Network. Wirel. Netw. 2019, 25, 573–584. [Google Scholar] [CrossRef]
- Jacobson, V.; Smetters, D.K.; Thornton, J.D.; Plass, M.F.; Briggs, N.H.; Braynard, R.L. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, Rome, Italy, 1–4 December 2009; pp. 1–12. [Google Scholar] [CrossRef]
- Adamic, L.A.; Huberman, B.A. Zipf’s law and the Internet. Glottometrics 2002, 3, 143–150. [Google Scholar]
- Varga, A. OMNeT++. In Modeling and Tools for Network Simulation; Springer: Berlin/Heidelberg, Germany, 2010; pp. 35–59. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).