Article

The Impact of Federated Learning on Improving the IoT-Based Network in Sustainable Smart Cities

1 School of Science, Guangdong University of Petrochemical Technology, Maoming 525000, China
2 School of Computer, Guangdong University of Petrochemical Technology, Maoming 525000, China
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(18), 3653; https://doi.org/10.3390/electronics13183653
Submission received: 22 July 2024 / Revised: 4 September 2024 / Accepted: 8 September 2024 / Published: 13 September 2024

Abstract

The caching mechanism of federated learning in smart cities is vital for improving data handling and communication in IoT environments. Because it enables learning among separately connected devices, federated learning makes it possible to quickly update caching strategies in response to data usage without invading users' privacy. Caching driven by federated learning improves the dynamism, effectiveness, and reachability of the data that smart city services need to function properly. In this paper, a new caching strategy for Named Data Networking (NDN) based on federated learning in smart city IoT contexts is proposed and described. The proposed strategy applies a federated learning technique to cache content more effectively based on its popularity, thereby improving network performance. The proposed strategy was compared to benchmarks in terms of the cache hit ratio, content retrieval delay, and energy utilization. The results show that the proposed caching strategy performs far better than its counterparts in terms of cache hit rates, content fetch times, and energy consumption. These enhancements result in smarter and more efficient smart city networks, a clear indication of how federated learning can revolutionize content caching in NDN-based IoT.

1. Introduction

In the current world, innovations are being adopted in urban settings as more importance is placed on smart technologies in infrastructural design. Intelligent Transportation Systems (ITS), smart power networks, and multi-sensory networks are some of the components of this change, and when combined, they are referred to as smart cities [1]. These innovations hinge on the processing and sharing of data, which is central to the effective functioning of city activities and, thus, the sustainable improvement of the populace's quality of life. Moreover, technological development has led to IoT technology playing a significant role in smart cities' functionality and performance by providing better connectivity, resources, and services. Under these conditions, Named Data Networking (NDN) can be considered an approach that can improve the performance of data transmission and data access in the IoT [2]. As opposed to IP-based networking, where attention is paid to the location where content is hosted, NDN emphasizes the content itself, which makes it more suitable for IoT networks characterized by their dynamic and data-oriented nature [3,4]. As smart cities have come to be shaped by the interconnected things around them, the problems of how to store and control the data gathered by such things have become more crucial [5].
Traditional methodologies significantly limit the speed and sustainability of smart city use cases. This paper outlines a novel caching approach for federated learning-based NDN IoT networks that leverages robust learning algorithms to determine data popularity and proactively update cache placement. Federated learning is a distributed learning model in which edge nodes participate in training a global model while no raw data are exchanged. This technique helps to maintain data privacy and reduces communication overhead; therefore, it is suitable for a smart city ecosystem that values data privacy and minimal resource utilization. In the context of the proposed model, federated learning allows the popularity of data items to be predicted using local data, with individual predictions aggregated to make caching decisions. This approach not only improves the speed of retrieving information from large data sets but also reduces energy usage, which in turn supports the sustainability of smart cities. Thus, NDN is considered a promising approach to managing data challenges in smart cities [6,7]. NDN places less emphasis on host addresses than traditional IP-based networking and more on the content itself, which enhances data delivery and dissemination. This is especially true for IoT applications, where data-oriented communication results in lower latencies and better network performance [8]. However, the billions of interconnected devices in a smart city create a big data management problem in terms of storage and energy consumption [9]. To maintain these data, effective caching strategies are required. Conventional centralized caching strategies are not optimal because they are highly power intensive and cause delays, which is unrealistic for the dynamic and distributed characteristics of smart city applications [10,11]. To meet these challenges, we propose an energy-efficient caching scheme for IoT NDN with predictive analysis based on federated learning in smart cities. Federated learning allows the model training process to be distributed among edge nodes while keeping the data local without sending them to a centralized location, so that privacy is preserved and inter-node communication is minimized [12]. By predicting how popular each data item will be, its placement in the cache is adjusted to minimize data retrieval time and energy use [13].
The proposed model incorporates several key components. It is composed of small smart devices, such as sensors and actuators, smart gateways and routers, and a large central server. Each edge node contains a local machine learning model that predicts future data requests in its part of the network, and the results of these computations are periodically transferred to a central server for global fine-tuning of the model. The federated approach guarantees that the caching strategy remains dynamic and capable of adjusting to fluctuations in data and the network environment. Our goal is to develop a model that combines these predictive models with optimization for energy-efficient caching. Accordingly, the proposed model considers both retrieval delay and energy consumption to improve the efficiency of smart city networks. The key contributions are given below.
  • We present the design of the proposed caching model, its federated learning framework, and the optimization problem of the proposed system.
  • We validate our method through simulations that demonstrate a significant increase in data retrieval efficiency and energy savings compared to conventional methods.
  • The proposed caching strategy is evaluated in a simulation environment against benchmark strategies in terms of the cache hit ratio, energy consumption, and content retrieval delay.
  • This research provides a novel caching strategy as part of the larger movement aimed at making cities more intelligent and resource efficient through the further development of data management tools.
The rest of the paper is organized as follows: Section 2 reviews related work, describing existing caching strategies and analyzing the drawbacks of current methods. Section 3 explores the challenges of caching in a smart city. Section 4 details the federated learning-based caching mechanism and its architecture, algorithms, and data models. Section 5 describes the simulation environment and the procedure used to benchmark the caching approach. Section 6 concludes the paper and discusses future research avenues.

2. Related Studies

This section presents several caching strategies and their impacts on NDN-based IoT networks. Among the different caching techniques, probabilistic-based, centrality-based, content- and node-based, and popularity-based caching have been developed to meet different criteria for enhancing caching provision. For instance, Cache Everything Everywhere [14] targets high availability and consequently replicates content at every node, which consumes substantial resources. Besides adding validity to the content and optimizing the filtering procedure, CCS and TC place additional pressure on clients and storage systems [15]. Caching techniques based on a node's centrality, such as betweenness centrality, aim to reduce response time by caching the contents most often requested by users [4,16]. However, such strategies are disadvantageous in that they cause congestion and increase path stretch ratios in a network. Another caching policy is probabilistic caching, in which the caching probability is adjusted based on caching parameters and request rates, as in ProbCache and pCASTING [17]. These approaches are geared towards reducing energy consumption and latency and focus on cache placement. Moreover, several studies have applied the FL concept of [18,19,20] to edge computing, the IoT, vehicular networks [21,22,23], and blockchain [24].
The authors in [25] define Probabilistic Cache (ProbCache), which tries to cache data near consumers. A probability distribution is employed to store records, where content is cached with a probability inversely proportional to the distance between the consumer and the producer. Nevertheless, ProbCache distributes resources unfairly among nodes, involves a high computation cost, and requires many parameters to be set and tuned before it can be applied. In [15], the betweenness centrality (Btw) strategy was introduced, which caches data once on the reverse path at the node with the highest betweenness centrality. This strategy assesses the centrality of nodes on the paths between pairs of nodes; however, computing betweenness centrality is complex for resource-scarce nodes. The edge caching strategy described in [26] is specific to tree topologies, where the idea is to cache content at the leaves. This leads to high duplication in neighboring leaves; however, similar to PoolCache [27], it offloads and decreases the caching overhead on the core network. These properties serve as general criteria by which new caching schemes can be measured. In [28], consumer caching stores data on the routers connected to consumers; this resembles edge caching on a tree topology but behaves like Leave Copy Everywhere (LCE) [2] in that every consumer is connected to all the routers on the reverse path. Its performance depends on the network architecture, consumer distribution, and content popularity. The in-network caching scheme for inter-cache cooperation with Content Space Partitioning and Hash Routing (CPHR) in [29] lets a dominating node partition the content space and assign these partitions to caches so as to maximize the hit ratio. However, it increases propagation latency, necessitates additional content tables, and assumes a predetermined content space, making it incompatible with dynamic IoT networks. In the HCC strategy suggested in [30], the contents to be cached are split into two layers, and a weighted-based clustering algorithm (WCA) is used in the hierarchical cluster solution. This strategy incurs a large communication overhead because of the frequent exchange of information and does not work properly when cluster heads are unavailable. It assumes a static network, which is ill-suited to IoT cases. A study by Yahui et al. [31], formulated for NDN-based IoT networks, examines the use of caching with benefits such as receiver and sender isolation, minimized unnecessary data delivery, and improved Internet scalability. It expounds on different caching ideas and difficulties in the NDN-IoT context while stressing the significance of the NDN and IoT structures and current caching approaches. However, the study also revealed drawbacks, such as inefficient resource usage, computational overhead, and the arbitrary nature of caching parameters, which make suitable adjustments difficult across different IoT scenarios.
Energy consumption is a key factor that should be considered, but it has received limited attention from the NDN-IoT caching community. Most current caching strategies focus more on parameters such as latency and the cache hit ratio than on the energy aspect, which is critical for IoT networks' long-term functionality. In this regard, the proposed caching strategy incorporates an optimal method of selecting contents to cache and stresses energy efficiency. It also helps to reduce data retrieval latency, since high-demand data are usually placed closer to consumers, thus keeping IoT devices' energy consumption minimal. Such a balance between performance and power saving guarantees the network's ability to control data traffic levels while maximizing the longevity of battery-powered IoT devices used throughout smart city applications, thus increasing the overall efficiency and sustainability of smart city services.

3. Problem Statement

The ever-advancing technology and increased adoption of IoT devices in smart cities have led to a data explosion, necessitating optimal and effective data management and retrieval to improve service delivery [32]. NDN uses a content-centric approach that can significantly improve data delivery in the IoT. Nevertheless, conventional NDN caching paradigms do not necessarily meet the latency and energy challenges that are crucial in smart city scenarios, where IoT devices are frequently resource constrained and geographically dispersed [33]. Traditional caching techniques in NDN may not be very effective in addressing the temporal variability of data requests, resulting in cache misses whereby devices are forced to fetch data from distant servers, causing high request response times. This delay reduces the efficacy of latency-critical applications such as smart traffic control, smart emergency response, and smart health services [34]. Also, IoT devices in smart cities are battery powered, and operations such as continuously retrieving data or fetching data from remote servers may drain their batteries [35]. High energy consumption shortens the life of IoT devices, increases maintenance costs, and disrupts smart city services. As the number of IoT devices in smart cities grows day by day, caching strategies play a vital role in handling them efficiently. Traditional centralized caching strategies may impose constraints that result in inefficiency and excessive power consumption. When scaled up, inefficiencies in some sub-systems can lead to poor performance, high latency, and high power consumption in smart city applications. Smart city environments are highly dynamic, and the amount of data generated in a specific period, at a particular place, and in a given context may differ significantly [32]. Static caching mechanisms do not address these changes and thus offer rather low efficiency.
Caching generates network load when devices request data that have been cached, creating congestion, especially when many devices access the cached data at the same time during rush hours [36,37]. Delay and wasted energy result from this congestion-induced overload, which causes repeated transmission and processing of data in the network. If caching strategies fail to respond to dynamic data requirements, latency grows and network utilization suffers, affecting the quality of smart city services [38,39]. Sub-optimal caching policies can also result in high network bandwidth utilization, especially when information has to be copied from remote servers [40]. High bandwidth utilization causes data transmission delays, network congestion, long latency periods, and high operational costs, all of which affect the efficiency of smart city networks [41]. The two objectives often conflict: a high cache hit rate can be attained at the cost of more energy consumption [31]. The tension between these objectives is difficult to manage. A cache optimized purely for high performance often consumes more power than necessary, which can negatively influence service provision in IoT networks and the longevity of IoT devices [42].

4. Proposed Model

The proposed caching model for NDN, based on federated learning and reduced energy consumption, is developed to address these issues. Federated learning is used to train a model at each node independently so that the cache placement of data items can be predicted and cache retrieval latency minimized. Cached contents are dynamically refreshed through frequent cache updates based on data access patterns and predicted usage across the smart city. Introducing the energy consumption rate into the caching decision and optimizing cache hits for energy conservation increases battery life for IoT devices and retains efficiency as the number of IoT devices grows across the numerous domains of a smart city. To this end, the proposed model improves various aspects of the caching architecture for NDN-based IoT in smart cities and makes smart city services more effective. When designing an energy-efficient caching model for NDN-based IoT networks in a smart city using federated learning, we follow a set of structured functional activities for data. The system definition comes first, after which the nodes, contents, and interactions within the network are determined. The set of IoT devices, including sensors and actuators, is denoted by D = {d_1, d_2, ..., d_n}, and the network nodes are represented by N = {n_1, n_2, ..., n_n}. The contents in the network are represented as X = {x_1, x_2, ..., x_k}, and the prospective request packets in the network are requests for these contents. The set of requests generated for those contents is represented as R = {r_1, r_2, ..., r_m}. For content selection, the next step is to form the federated learning process for predicting content popularity. Based on the request packet scheme, each edge node n_i records the local request rate λ_{i,j}(t) for each content x_j. Each node n_i uses its local federated learning model to estimate the future request rate λ_{i,j}(t) for each content x_j. These local models are then combined at the central server to create a global model that is sent back to the edge nodes to be refined. Table 1 shows the symbols used in the model and their corresponding descriptions.
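To make the notation concrete, the following minimal Python sketch represents the sets D, N, and X and the per-node, per-content request rates λ_{i,j}(t) as plain data structures; all class and variable names here are illustrative assumptions, not artifacts of the paper.

```python
# Illustrative data structures for the system model (names are assumptions):
# N (edge nodes) is a list of EdgeNode records, X is a list of content
# identifiers, and request_rates holds the observed lambda_{i,j}(t) values.
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    node_id: int
    cache_capacity: int                                  # C_i, in content items
    cache: set = field(default_factory=set)              # X_i(t), cached contents
    request_rates: dict = field(default_factory=dict)    # lambda_{i,j}(t) per content

contents = list(range(20))                               # X = {x_1, ..., x_k}
nodes = [EdgeNode(node_id=i, cache_capacity=5) for i in range(4)]   # N

# Record a locally observed request rate for content x_3 at node n_0
nodes[0].request_rates[3] = 0.7
```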
To identify a cacheable node, several factors need to be considered, such as the cache state and the capacity of a network node. Algorithm 1 shows how to train the FL-based model to identify the future demands of end users. The cache state at node n_i is defined as follows:
Algorithm 1: Federated Learning Model for Popularity Prediction

1. Data Collection
   Input:  N = {n_1, n_2, ..., n_n}  // edge nodes
           X = {x_1, x_2, ..., x_k}  // contents
   Output: local request rate λ_{i,j}(t) collected at each edge node n_i
   Procedure: Find the Request Rate
   (a) for each edge node n_i in N do
   (b)    for each content x_j in X do
   (c)       collect local request rate λ_{i,j}(t)
   (d)    end for
   (e) end for

2. Local Model Training
   Input:  local request rate λ_{i,j}(t) at each edge node n_i
   Output: local ML model at each edge node n_i
   Procedure: Find the Local ML Model
   (a) for each edge node n_i in N do
   (b)    for each content x_j in X do
   (c)       train local ML model to predict future request rate λ_{i,j}(t)
   (d)    end for
   (e) end for

3. Federated Learning Aggregation
   Input:  local ML model at each edge node n_i
   Output: global model aggregated at central server S
   Procedure: Find the Global Model and Distribute
   (a) periodically do
   (b)    for each edge node n_i in N do
   (c)       send local model to central server S
   (d)    end for
   (e)    at central server S do
   (f)       aggregate local models to create global model
   (g)       distribute global model back to edge nodes n_i for further training
   (h) end periodically
end algorithm
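As a rough illustration of Algorithm 1, the sketch below implements one simplified federated round: each edge node updates a tiny local predictor of its request rates (here just an exponential moving average per content, standing in for a real ML model), and the central server aggregates the local parameters by FedAvg-style averaging. The smoothing factor, round count, and synthetic Poisson request rates are all assumptions made for the example.

```python
# A runnable sketch of Algorithm 1 under simplifying assumptions: the "local
# model" is an EMA of observed rates, and aggregation is a plain parameter mean.
import numpy as np

N_NODES, N_CONTENTS, SMOOTHING = 4, 6, 0.5

def local_update(prev_pred, observed_rates, alpha=SMOOTHING):
    """One round of local training: EMA of the observed lambda_{i,j}(t)."""
    return alpha * observed_rates + (1 - alpha) * prev_pred

def aggregate(local_models):
    """Central server S: average local parameters into a global model."""
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
global_model = np.zeros(N_CONTENTS)
for round_ in range(3):                        # periodic aggregation rounds
    local_models = []
    for i in range(N_NODES):
        # Synthetic per-content request observations at node n_i
        observed = rng.poisson(lam=np.arange(1, N_CONTENTS + 1)) / 10.0
        local_models.append(local_update(global_model, observed))
    global_model = aggregate(local_models)     # distributed back to edge nodes
    print(f"round {round_}: predicted rates {np.round(global_model, 2)}")
```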
X_i(t) refers to the collection of content available in the cache of node n_i at a particular time t. Caching stores content, and this notation captures the fact that the contents stored in the cache may change over time. Each network node n_i has a predefined cache capacity, denoted by C_i, which constrains the size of the content that can be stored at that node. Therefore, the number of contents that can be cached at a node is bounded by its cache capacity, as given by Equation (1):
$|X_i(t)| \le C_i$ (1)
Equation (1) states that the number of cached contents |X_i(t)| must not exceed the cache capacity C_i of node n_i. This constraint prevents cache overflow and plays an important role in controlling storage capacity. Cache placement is controlled by a further decision variable A_{i,j}(t), which determines whether a given content x_j is cached at node n_i at time t. A value of 1 for A_{i,j}(t) represents the presence of content x_j at node n_i; otherwise, the value is 0. With the help of this binary variable, the placement of content at the various nodes in the network is managed and regulated. The decision variable and the bound on the total content cached at a node are given by Equations (2) and (3):
$A_{i,j}(t) = \begin{cases} 1 & \text{if } x_j \in X_i(t) \\ 0 & \text{otherwise} \end{cases}$ (2)

$\sum_{j=1}^{k} A_{i,j}(t) \le C_i$ (3)
Equation (3) states that the sum of the decision variables A_{i,j}(t) over all contents x_j cannot exceed the cache capacity. This constraint limits the number of contents in the cache to the node's capacity, making the caching strategy feasible. To evaluate the efficiency of a caching strategy, we need to consider two key factors: content retrieval latency and energy consumption. These characteristics determine the degree of cache access and the power consumption rate. To determine the latency of retrieving content x_j at node n_i at time t, two cases must be considered: the content is either locally cached or must be fetched from a remote server. The latency L_{i,j}(t) is defined by Equation (4):
$L_{i,j}(t) = \begin{cases} l_{local} & \text{if } A_{i,j}(t) = 1 \\ l_{remote} + l_{network} & \text{otherwise} \end{cases}$ (4)
If content x_j is cached at node n_i (i.e., A_{i,j}(t) = 1), then the retrieval latency for content x_j is l_{local}. This is the time taken to obtain the content from the local cache, which is normally very short. Conversely, if the content x_j is not cached at node n_i (i.e., A_{i,j}(t) = 0), the latency consists of two components: the time l_{remote} for the content to be transferred from a remote server and the network transfer time l_{network}. The total latency is therefore l_{remote} + l_{network}. By keeping the data retrieval latency attainable through the caching strategy low, the time taken to obtain frequently required content can be minimized. This improves the efficiency of the network, the responsiveness of services, and overall network performance. Similarly, the energy expended in accessing content x_j from node n_i at time t depends on whether the content is held by the node itself or has to be downloaded from a server. Energy consumption E_{i,j}(t) is defined by Equation (5):
$E_{i,j}(t) = \begin{cases} e_{local} & \text{if } A_{i,j}(t) = 1 \\ e_{remote} + e_{network} & \text{otherwise} \end{cases}$ (5)
If content x_j is cached at node n_i (i.e., A_{i,j}(t) = 1), the energy consumption is e_{local}. This represents the energy the system uses to retrieve the data from the local cache, which is mostly small. Otherwise, the energy consumption comprises two factors: the energy e_{remote} for fetching the data from a remote server when content x_j is not cached at node n_i, and the network communication energy e_{network}, giving e_{remote} + e_{network} in total. Modeling energy consumption makes it possible to design caching strategies that are efficient in their energy use. Energy consumption remains one of the main concerns in many applications, in particular IoT and mobile networks, since they are always restricted by energy and battery constraints. Hence, through optimal energy consumption, the network can be made more efficient and cheaper to run. Considering both latency and energy consumption therefore provides a balanced approach to caching systems. Some strategies may prioritize minimal latency, while others aim to conserve energy; the trade-off can be tuned to fit the specifics of the application at hand. This helps in resource management by avoiding situations where the caching technique places too much load on the network in terms of latency or energy usage, and it is useful in managing and scheduling the physical architecture to accommodate different loads and conditions. Algorithm 3 shows the mechanism for identifying energy consumption at the local and global levels.
The objective function of the caching model minimizes the weighted sum of average data retrieval latency and energy consumption. This reduces latency while at the same time minimizing energy consumption, which is a measure of efficiency. The objective function is defined as follows:
$J_{obj} = \sum_{i=1}^{n} \sum_{j=1}^{k} \left( \alpha\, \lambda_{i,j}(t)\, L_{i,j}(t) + \beta\, \lambda_{i,j}(t)\, E_{i,j}(t) \right)$ (6)
Here, λ_{i,j}(t) represents the request rate, or the probability that node n_i asks for content x_j at time t; it weights the latency and energy consumption by the frequency of requests for each content. α and β are the latency weighting factor and the energy consumption weighting factor, respectively. They define how much the latency trade-off is preferred over the energy trade-off. By unifying latency and energy consumption into one expression, the optimization can easily be steered towards a balance between the two factors. The weighting factors α and β can thus adjust the share of each term, depending on the requirements of the given network. Moreover, the objective function is useful because it preserves the correlation between the two vital factors, delay and energy, making the optimization plan more inclusive. Different applications and network scenarios have different priorities: for instance, a real-time application may prioritize low latency as its key characteristic, while an IoT network comprising battery-powered devices may require low energy utilization. The weighting factors α and β accommodate these different priorities within the caching strategy.
The combined objective function helps in the optimal utilization of network resources by minimizing both latency and energy consumption. It results in longer battery life for the devices and lower operational costs, in addition to a general improvement in network operations. Quality of service is used to measure the improvement in data access latency, which increases user satisfaction. Effective management of energy usage enhances the durability of the network devices so that they stay productive for a longer duration, leading to higher network availability. Hence, the combined objective function is a single parameter that considers all caching tiers while accounting for data retrieval latency and energy consumption at the same time. Because it offers a holistic view of the caching system and its requirements, it improves efficacy, resource usage, flexibility, and the end-user experience in caching networks. Algorithm 2 shows the cache state of the network nodes and the cache placement.
Algorithm 2: Cache Capacity and Placement

1. Cache State and Capacity
   Input:  D = {d_1, d_2, ..., d_n}  // IoT devices
           N = {n_1, n_2, ..., n_n}  // edge nodes
           C  // central server
           X = {x_1, x_2, ..., x_k}  // contents
   Output: cache state X_i(t) and capacity C_i for each node
   Procedure: Cache State and Capacity
   (a) for each node n_i in D ∪ N ∪ C do
   (b)    X_i(t) = {}  // initialize cache state
   (c)    define capacity C_i
   (d)    ensure |X_i(t)| ≤ C_i
   (e) end for

2. Cache Placement
   Input:  cache state X_i(t) and capacity C_i, contents X
   Output: cache placement A_{i,j}(t) for each node
   Procedure: Placement
   (a) for each node n_i in D ∪ N ∪ C do
   (b)    for each content x_j in X do
   (c)       define decision variable A_{i,j}(t)
   (d)       if x_j is cached at n_i at time t then
   (e)          A_{i,j}(t) = 1
   (f)       else
   (g)          A_{i,j}(t) = 0
   (h)       end if
   (i)    end for
   (j)    ensure Σ_{j=1}^{k} A_{i,j}(t) ≤ C_i
   (k) end for
end algorithm
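A minimal sketch of Algorithm 2's bookkeeping follows, assuming a binary matrix A for the decision variables A_{i,j}(t) and a per-node capacity vector for C_i; the helper name try_cache is hypothetical.

```python
# Cache state and placement with the capacity constraint sum_j A[i, j] <= C_i.
import numpy as np

n_nodes, n_contents = 3, 8
capacity = np.array([2, 3, 2])                    # C_i per node
A = np.zeros((n_nodes, n_contents), dtype=int)    # A_{i,j}(t), all caches empty

def try_cache(i, j):
    """Cache content j at node i only if the capacity constraint still holds."""
    if A[i].sum() < capacity[i]:
        A[i, j] = 1
        return True
    return False              # cache full: caller must evict or skip

try_cache(0, 5)
try_cache(0, 1)
assert (A.sum(axis=1) <= capacity).all()          # |X_i(t)| <= C_i for every node
```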
Algorithm 3: Energy-Efficient Objective Function and Combined Objective Function

1. Energy-Efficient Objective Function
   Input:  D = {d_1, d_2, ..., d_n}  // IoT devices
           N = {n_1, n_2, ..., n_n}  // edge nodes
           C  // central server
           X = {x_1, x_2, ..., x_k}  // contents
           A_{i,j}(t)  // cache placement
   Output: latency L_{i,j}(t) and energy E_{i,j}(t)
   Procedure: Energy Efficiency
   (a) for each node n_i in D ∪ N ∪ C do
   (b)    for each content x_j in X do
   (c)       if A_{i,j}(t) = 1 then
   (d)          L_{i,j}(t) = l_local
   (e)          E_{i,j}(t) = e_local
   (f)       else
   (g)          L_{i,j}(t) = l_remote + l_network
   (h)          E_{i,j}(t) = e_remote + e_network
   (i)       end if
   (j)    end for
   (k) end for

2. Combined Objective Function
   Input:  predicted request rate λ_{i,j}(t)
           retrieval latency L_{i,j}(t)
           energy consumption E_{i,j}(t)
   Output: objective function J_obj
   Procedure: Objective Function
        J_obj = Σ_{i=1}^{n} Σ_{j=1}^{k} ( α λ_{i,j}(t) · L_{i,j}(t) + β λ_{i,j}(t) · E_{i,j}(t) )
end algorithm
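The sketch below evaluates the combined objective of Algorithm 3 for a given placement, directly encoding Equations (4)-(6); the numeric cost components and weighting factors are assumed values chosen only so that l_local < l_remote + l_network and e_local < e_remote + e_network.

```python
# J_obj for a placement A, per Equations (4)-(6); all constants are assumptions.
import numpy as np

L_LOCAL, L_REMOTE, L_NET = 1.0, 10.0, 4.0     # latency components (assumed units)
E_LOCAL, E_REMOTE, E_NET = 0.1, 1.0, 0.4      # energy components (assumed units)
ALPHA, BETA = 0.6, 0.4                        # weighting factors alpha, beta

def objective(A, rates):
    """J_obj = sum_i sum_j rate * (ALPHA * latency + BETA * energy)."""
    latency = np.where(A == 1, L_LOCAL, L_REMOTE + L_NET)   # Equation (4)
    energy  = np.where(A == 1, E_LOCAL, E_REMOTE + E_NET)   # Equation (5)
    return float(np.sum(rates * (ALPHA * latency + BETA * energy)))

rng = np.random.default_rng(1)
rates = rng.random((3, 8))                    # predicted lambda_{i,j}(t)
A_empty = np.zeros((3, 8), dtype=int)
A_full = np.ones((3, 8), dtype=int)           # ignores capacity; illustration only
print(objective(A_empty, rates), objective(A_full, rates))  # caching lowers J_obj
```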

4.1. Optimization Problem

To identify the optimal caching strategy, we formulate an optimization problem. The aim is to minimize the objective function, which combines the time taken to access content and the energy consumption. The optimization problem is defined as follows:
$\underset{A_{i,j}(t)}{\text{minimize}} \quad \sum_{i=1}^{n} \sum_{j=1}^{k} \left( \alpha\, \lambda_{i,j}(t)\, L_{i,j}(t) + \beta\, \lambda_{i,j}(t)\, E_{i,j}(t) \right)$

$\text{subject to} \quad \sum_{j=1}^{k} A_{i,j}(t) \le C_i, \quad \forall i$

$A_{i,j}(t) \in \{0, 1\}, \quad \forall i, j$
The objective is thus to minimize energy and latency; the cache capacity constraint guarantees that the number of contents stored in the cache of node n_i does not exceed its cache limit, while the binary constraint guarantees that each content x_j is either cached at node n_i or not. Posing the problem as an optimization problem provides a well-structured set of steps leading to the discovery of the optimal caching strategy and a logical framework through which the problem can be decomposed and solved. The objective function contributes to a reduction in both time and energy, which, in turn, enhances network resource utilization and prevents the network from being sluggish or wasting energy. The cache capacity constraint guarantees that the caching strategy is realistic and does not exceed any node's capacity to cache, avoiding cache congestion and interference with the network's operation. The optimization framework is flexible enough to adapt to network conditions, request rates, and node capacities, so the caching strategy remains effective over time. The weighting factors α and β in the objective function let the decision maker control the trade-off between delay and energy costs; this flexibility allows the caching strategy to be fine tuned in accordance with the needs of the particular network.
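Because the per-request saving from a cache hit is identical for every content in this cost model, the problem decouples across nodes, and caching each node's C_i highest-rate contents is optimal. The following greedy sketch illustrates that observation; it is a simplified stand-in for a general solver and reuses the illustrative structures assumed above.

```python
# Greedy placement: per node, cache the C_i contents with the highest predicted
# request rates lambda_{i,j}(t). Optimal only under uniform per-hit savings.
import numpy as np

def greedy_placement(rates, capacity):
    """Return a binary A_{i,j}(t) satisfying sum_j A[i, j] <= capacity[i]."""
    n_nodes, n_contents = rates.shape
    A = np.zeros((n_nodes, n_contents), dtype=int)
    for i in range(n_nodes):
        top = np.argsort(rates[i])[::-1][: capacity[i]]  # highest-rate contents
        A[i, top] = 1
    return A

rng = np.random.default_rng(2)
rates = rng.random((3, 8))                               # predicted lambda_{i,j}(t)
capacity = np.array([2, 3, 2])
A = greedy_placement(rates, capacity)
assert (A.sum(axis=1) <= capacity).all()                 # capacity constraint holds
```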

4.2. Caching Based on Popularity Prediction

The caching strategy based on popularity prediction aims to determine the best positions for content by targeting the most requested items. This approach uses anticipated future requests to keep the cache as relevant as possible, reducing retrieval time and energy consumption. Each edge node contains a local machine learning model, which is one part of a federated learning system, to predict the future request rate λ_{i,j}(t) for each content x_j. This request rate gives the probability, or frequency, with which a particular content associated with a node in the network will be accessed at a specific time t by the IoT devices connected to that node.
To obtain the overall popularity of each content x_j in the entire network, the predicted request rates from all nodes are accumulated. The aggregated predicted popularity for a given content x_j is calculated as follows:
$p_j(t) = \sum_{i=1}^{n} \lambda_{i,j}(t)$
This sum gives the total demand for content x_j across all the network nodes, i.e., the global popularity of that content. Once the popularities of the available contents are identified, the contents are ranked in descending order of their popularity values. This ranking reveals which contents are likely to be required most often in the network. Based on the ranked list, each edge node caches the top-k contents, where k is determined by the node's cache capacity C_i. This approach guarantees that content with a higher access frequency is cached, so the number of remote accesses is minimized, and the decision variable A_{i,j}(t) is then updated with the corresponding binary values (0 or 1). By caching frequently required contents, the probability that requested contents are available in the local cache is enhanced. This greatly reduces mean access delays, as the frequently accessed content that most users need is actively cached and available for quick access from the cache rather than being obtained from a remote server. Storing a large share of frequently accessed content locally also minimizes the energy used in content retrieval. Algorithm 4 describes the method used to find the popularity of transmitted contents.
Algorithm 4: Predicted Popularity and Cache Update

1. Cache Based on Popularity Prediction
   Input:  N = {n_1, n_2, ..., n_n}
           X = {x_1, x_2, ..., x_k}
           predicted request rate λ_{i,j}(t)
           cache capacity C_i
   Output: cache based on popularity
   Procedure: Find the Popularity
   (a) for each content x_j in X do
   (b)    calculate predicted popularity p_j(t) = Σ_{i=1}^{n} λ_{i,j}(t)
   (c)    for each node n_i in D ∪ N ∪ C do
   (d)       cache x_j based on C_i
   (e)    end for
   (f) end for

2. Dynamic Cache Update
   Input:  N = {n_1, n_2, ..., n_n}
           X = {x_1, x_2, ..., x_k}
           predicted popularity p_j(t)
   Output: cache based on popularity; updated cache placement
   Procedure: Update Cache Based on Popularity
   (a) periodically do
   (b)    update popularity using FL model
   (c)    adjust placement dynamically based on updated predictions
   (d) end periodically
end algorithm
The predicted request rates are thus updated dynamically at each node using the popularity ranks of the available contents. The available cache capacity is used effectively by ranking content according to expected popularity and caching only the content that is likely to be frequently accessed, ensuring that no cache space is wasted on rarely requested content. Moreover, users can access the content that is frequently looked up, enhancing the user experience, especially in application areas where content requests are most frequent, such as smart city applications. This strategy scales to large networks with numerous edge nodes and contents. Each node manages its own cache, since the decision on which content to store is made from the aggregated popularity values computed across nodes, and the predicted request rates are used to update the cache contents effectively. The strategy prioritizes contents that are in high demand, which in turn improves the response rate, lowers power usage, adapts to changing request patterns, and optimizes cache space, ultimately improving network performance.
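A compact sketch of this popularity-ranked update, aggregating p_j(t) = Σ_i λ_{i,j}(t) across nodes and refreshing each cache with its top-C_i contents, is given below; the function name and synthetic rates are illustrative assumptions.

```python
# Popularity-ranked cache refresh: every node keeps the globally hottest items
# that fit within its own capacity C_i.
import numpy as np

def update_caches(rates, capacity):
    popularity = rates.sum(axis=0)                # p_j(t), global demand per content
    ranked = np.argsort(popularity)[::-1]         # contents in descending popularity
    return [set(ranked[:c]) for c in capacity]    # top-k per node, k = C_i

rng = np.random.default_rng(3)
rates = rng.random((4, 10))                       # lambda_{i,j}(t) from the FL model
caches = update_caches(rates, capacity=[3, 2, 4, 3])
print(caches[0])                                  # node 0 now holds the 3 hottest items
```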

5. Evaluation

The simulations are carried out with ndnSIM on a network of 80 nodes. The nodes, comprising IoT devices and edge servers, are distributed across the simulation environment. Each IoT device has a cache capacity of 100 MB and a transmission power of 1, whereas the edge servers are equipped with 500 MB of cache and a transmission power of 2. For federated learning, local models are trained and aggregated at the central server periodically, with a training round every five minutes; the learning rate and remaining learning parameters are listed with the other simulation settings. For mobility, the node speed is 1 m/s, and the pause time between movements is 5 s. The log-distance propagation loss model is used together with IEEE 802.11. Table 2 shows the simulation parameters and their corresponding values. We chose three caching strategies, Cache Everything Everywhere (CEE) [33], Smart Caching Strategy (SCS) [43], and Energy-Aware Caching Placement (EACP) [44], for comparison with and evaluation of the proposed caching strategy (referred to below as FLEEC). We selected the most significant performance metrics, namely the cache hit ratio, energy consumption, and data retrieval delay, to evaluate the performance of the proposed strategy. The following sections detail the performance of the proposed strategy against the benchmarks.

5.1. Cache Hit Ratio

The cache hit ratio represents the proportion of data requests that are met from the local cache without having to access the source or a distant server. This metric is critical for measuring the effectiveness of caching approaches in NDN, as well as gauging consumers' experiences and network resource consumption levels. Figure 1a displays the cache hit ratio across various cache sizes for the four caching strategies: CEE, SCS, EACP, and the proposed strategy. The cache size values on the abscissa range from 50 MB to 250 MB, whereas the ordinate shows the cache hit ratio, which measures how efficiently each caching strategy satisfies requests. As the cache size grows, all strategies exhibit improved cache hit ratios, reflecting the effectiveness of delivering popular objects locally. The CEE strategy shows the lowest performance, which rises gradually but still lags behind the other strategies. The SCS strategy shows moderate efficiency, and the EACP strategy performs better than both CEE and SCS, suggesting that the cache space is being used more effectively. Across all cache configurations, the proposed strategy demonstrates the highest cache hit ratio of all the strategies at every cache size.
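For reference, the cache hit ratio metric itself reduces to a simple count over the request log, as in this minimal, illustrative sketch:

```python
# Cache hit ratio: fraction of requests served from a local cache rather than
# fetched from a remote source. Purely illustrative values.
def cache_hit_ratio(requests, cache):
    hits = sum(1 for content in requests if content in cache)
    return hits / len(requests) if requests else 0.0

print(cache_hit_ratio([1, 2, 3, 1, 1, 4], cache={1, 2}))  # 4/6 ~= 0.67
```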
Figure 1b depicts the cache hit ratios of the compared strategies over time. The horizontal axis is the timeline in seconds, increasing progressively, while the vertical axis is the cache hit ratio, depicting the efficiency of the various caching techniques in terms of how often data are retrieved directly from the cache instead of from the source. Over time, all strategies exhibit a rising trend, indicating that the systems' ability to learn request patterns improves as time goes on. The CEE strategy rises at a moderate rate, indicating gradually improving performance. The SCS strategy outperforms CEE by a small margin, and its upward trajectory is more prominent than CEE's. The EACP strategy shows a better starting cache hit ratio than both CEE and SCS, and its performance improves markedly, underscoring the efficiency of caching selected contents. The FLEEC strategy begins at an already high level and again provides the highest cache hit ratios by the end of the simulation time.
Figure 1c presents the cache hit ratio of the four strategies under varying alpha distributions. These values, ranging from 0.2 to 1.0, represent the skewness that determines the relative concentration of requests on popular content; a higher value indicates more concentration on popular content. As is evident from the graph, the cache hit ratios of all the strategies improve as the alpha value rises. CEE and SCS demonstrate moderate performance, with CEE performing slightly worse than SCS at all the compared alpha values. The comparison reveals that EACP outperforms them, especially as the alpha value increases, affirming the value of content popularity in caching. The FLEEC strategy stands out from the rest with the highest cache hit ratios, implying that it is the best strategy for caching contents in ways that meet user demand while making minimal requests to other, less efficient data sources in the network.
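The alpha parameter in Figure 1c corresponds to the skew of a Zipf-like popularity distribution; a small sketch of generating synthetic requests with a given skew (an assumed setup, not necessarily the paper's exact workload generator) is shown below.

```python
# Draw content requests from a Zipf-like distribution with skew parameter alpha:
# probability of rank r is proportional to r^(-alpha).
import numpy as np

def zipf_requests(n_contents, alpha, n_requests, seed=0):
    ranks = np.arange(1, n_contents + 1, dtype=float)
    probs = ranks ** (-alpha)
    probs /= probs.sum()
    return np.random.default_rng(seed).choice(n_contents, size=n_requests, p=probs)

reqs = zipf_requests(n_contents=100, alpha=1.0, n_requests=10)
print(reqs)   # low indices (popular contents) dominate as alpha grows
```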

5.2. Energy Consumption

Figure 2a shows the energy consumption of the four caching strategies at different cache sizes, in intervals of 50 MB up to 250 MB. As the cache size gets larger, the energy consumption decreases for all strategies, indicating that more cache benefits all the implemented methods. CEE is the most energy hungry at all cache sizes, although it follows the same downward trend. SCS requires marginally less energy than CEE, with a similar trend in which energy consumption decreases as the cache size increases. EACP starts at a lower level than CEE and SCS and continues to decline, indicating better efficiency. The proposed FLEEC strategy begins with the lowest energy consumption and maintains the largest energy reduction as the cache size increases.
Figure 2b shows the energy consumption of the caching algorithms against the cache size in MB. Analyzing the results, there is a perceivable decrease in energy consumption for all four strategies as the cache size grows from 50 MB to 250 MB. This implies that with larger cache sizes, data are pulled from distant servers less frequently, thereby decreasing energy consumption. Starting from the highest energy level, the CEE strategy decreases step by step but remains relatively less efficient than the other strategies. SCS and EACP also decrease, consuming less energy than CEE, which indicates better performance. The analysis of the results shows that the FLEEC strategy has the lowest energy consumption rates across all cache sizes, with the advantage growing as the cache size increases, which confirms its efficiency in cache energy management.
For different alpha distributions, the simulation in Figure 2c depicts the energy consumption of the four caching techniques, namely CEE, SCS, EACP, and FLEEC. CEE always has the highest energy consumption for all analyzed alpha values. SCS offers better results than CEE but still has high energy consumption, especially for low alpha values. EACP outperforms the conventional strategies in terms of energy efficiency, most notably as the alpha value rises, which indicates the efficacy of utilizing content popularity in cache management. The proposed FLEEC strategy has the lowest energy consumption at every alpha value, which means that it is the most effective of the four strategies at reducing energy consumption. For all strategies, energy consumption generally decreases as the alpha value increases, which indicates the influence of the content popularity distribution on caching performance. Nevertheless, the proposed strategy remains unmatched in this regard, explaining the design's competitive advantage in terms of energy consumption and caching efficiency. These figures also show that a growing cache size can significantly reduce energy consumption, as the energy efficiency of the proposed strategy is higher than that of the other methods across the observed range. The low consumption of the FLEEC strategy may stem from the use of an efficient algorithm that correctly estimates which contents will be accessed most frequently and stores them in the cache to maximize its usage.

5.3. Content Retrieval Delay

Content retrieval delay is the time taken to fetch requested content from the network. The delay comprises the time taken to search for the wanted content in the cache, the time taken to process the content, the time taken to send the content back to the requester, and network latency. This delay can be minimized by efficient caching techniques that place demanded content closer to the end user in order to enhance the network's response time and efficiency. Figure 3a describes the correlation between cache size and content retrieval delay for the discussed caching strategies. It is evident that as the cache size rises, the retrieval delay decreases in all cases, meaning that larger caches reduce the need to fetch content from other areas. This trend holds for all the strategies, and the proposed strategy shows the most evident reduction in delay as the cache size increases. This indicates that a higher cache capacity allows for the local storage of more content, a feature that is highly desirable in strategies that manage their cache contents optimally, like the proposed strategy, thereby enhancing content delivery and overall system performance. Figure 3b shows how the content retrieval delay varies with time for the mentioned caching strategies. Every strategy starts with a certain initial delay that improves as time goes on. The curve for the CEE strategy begins with the highest delay and declines step by step, depicting a slow enhancement of retrieval efficacy. SCS and EACP have shorter initial delays and demonstrate a steeper decline as cache data are managed more effectively over time. The FLEEC strategy shows the smallest delay and continues to decline, indicating that it is the most efficient at providing quick access to content. This may be rooted in enhanced heuristics that accurately forecast and coordinate cache contents based on the precedence of data requests. Figure 3c depicts the effect of the change in alpha, which represents the skewness of content popularity, on the content access latency of each caching algorithm. As alpha increases, corresponding to a demand distribution in which a few contents are highly popular, the observed results indicate faster retrieval times for all strategies. Caching mechanisms are evidently more efficient when a few popular contents make up the majority of requests. For every value of alpha, the proposed strategy once again offers the best results, proving to be the best strategy for prioritizing demanded content and providing faster access to the most popular content, with the retrieval time decreasing substantially as the popularity skew increases.

6. Conclusions

In this paper, we have designed and analyzed a new federated learning-based caching strategy for NDN in smart city IoT networks. The proposed strategy provides an energy-efficient and optimal method for locating and caching the contents that are most frequently requested. It is intended to optimize the effectiveness of the overall network, decrease the time it takes to access content, and elevate the cache hit probability while minimizing energy consumption. The proposed caching strategy is compared in terms of the cache hit ratio, energy consumption, and content retrieval delay with the benchmark strategies CEE, SCS, and EACP to indicate the efficiency of the proposed approach. The outcomes show that the proposed strategy has a higher cache hit ratio, which means the chance of finding the requested content locally is higher; hence, less data are fetched from the source. It also significantly reduces the time taken to obtain content that is in high demand for specific users or groups of users, so the speed of the network is equally enhanced. Additionally, the energy results reveal that FLEEC minimizes energy utilization, helping to prolong the longevity of the devices in the smart city environment and, therefore, cutting operational costs. Since energy is used efficiently, the strategy is also relevant to the concept of sustainable development, satisfactorily enhancing the resilience of IoT devices in the network. The proposed federated learning-based caching strategy for NDN in the IoT therefore offers a viable solution for smart city applications. Compared to the benchmark strategies, it achieves higher cache hit ratios, lower content retrieval delays, and less energy consumption. This work provides direction and a reference for subsequent research and development of intelligent caching strategies for building strong smart city networks.

Author Contributions

Conceptualization, S.C. and M.A.N.; methodology, M.A.N.; software, S.C.; validation, S.C., M.A.N. and Y.M.; formal analysis, M.A.N.; investigation, S.C.; resources, S.C.; data curation, S.C.; writing—original draft preparation, M.A.N.; writing—review and editing, M.A.N.; visualization, M.A.N.; supervision, S.C.; project administration, S.C.; funding acquisition, M.A.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by grants awarded to Muhammad Ali Naeem under the Projects of Talents Recruitment of GDUPT (No. 2022rcyj2015), Guangdong Province, China.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Islam, S.; Budati, A.K.; Mohammad, K.H.; Goyal, S.; Raju, D. A multi-sensory real-time data transmission method with sustainable and robust 5G energy signals for smart cities. Sustain. Energy Technol. Assess. 2023, 57, 103278. [Google Scholar] [CrossRef]
  2. Meng, Y.; Naeem, M.A.; Sohail, M.; Bashir, A.K.; Ali, R.; Bin Zikria, Y. Elastic caching solutions for content dissemination services of ip-based internet technologies prospective. Multimed. Tools Appl. 2020, 80, 16997–17022. [Google Scholar] [CrossRef]
  3. Naeem, M.A.; Ali, R.; Kim, B.-S.; Nor, S.A.; Hassan, S. A Periodic Caching Strategy Solution for the Smart City in Information-Centric Internet of Things. Sustainability 2018, 10, 2576. [Google Scholar] [CrossRef]
  4. Naeem, M.A.; Ali, R.; Alazab, M.; Meng, Y.; Bin Zikria, Y. Enabling the content dissemination through caching in the state-of-the-art sustainable information and communication technologies. Sustain. Cities Soc. 2020, 61, 102291. [Google Scholar] [CrossRef]
  5. Imoize, A.L.; Adedeji, O.; Tandiya, N.; Shetty, S. 6G enabled smart infrastructure for sustainable society: Opportunities, challenges, and research roadmap. Sensors 2021, 21, 1709. [Google Scholar] [CrossRef]
  6. Kalafatidis, S.; Skaperas, S.; Demiroglou, V.; Mamatas, L.; Tsaoussidis, V. Logically-centralized SDN-based NDN strategies for wireless mesh smart-city networks. Future Internet 2022, 15, 19. [Google Scholar] [CrossRef]
  7. Meng, Y.; Naeem, M.A.; Ali, R.; Kim, B.-S. EHCP: An Efficient Hybrid Content Placement Strategy in Named Data Network Caching. IEEE Access 2019, 7, 155601–155611. [Google Scholar] [CrossRef]
  8. Huang, M.; Liu, A.; Xiong, N.N.; Wang, T.; Vasilakos, A.V. An effective service-oriented networking management architecture for 5G-enabled internet of things. Comput. Netw. 2020, 173, 107208. [Google Scholar] [CrossRef]
  9. Gharaibeh, A.; Salahuddin, M.A.; Hussini, S.J.; Khreishah, A.; Khalil, I.; Guizani, M.; Al-Fuqaha, A. Smart cities: A survey on data management, security, and enabling technologies. IEEE Commun. Surv. Tutor. 2017, 19, 2456–2501. [Google Scholar] [CrossRef]
  10. Mishra, P.; Singh, G. Energy management systems in sustainable smart cities based on the internet of energy: A technical review. Energies 2023, 16, 6903. [Google Scholar] [CrossRef]
  11. Mishra, P.; Singh, G. Energy Management of Sustainable Smart Cities Using Internet-of-Energy. In Sustainable Smart Cities: Enabling Technologies, Energy Trends and Potential Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 143–173. [Google Scholar]
  12. Beltrán, E.T.M.; Pérez, M.Q.; Sánchez, P.M.S.; Bernal, S.L.; Bovet, G.; Gil Pérez, M.; Pérez, G.M.; Celdrán, A.H. Decentralized federated learning: Fundamentals, state of the art, frameworks, trends, and challenges. IEEE Commun. Surv. Tutor. 2023, 25, 2983–3013. [Google Scholar] [CrossRef]
  13. Wu, J.; Dong, F.; Leung, H.; Zhu, Z.; Zhou, J.; Drew, S. Topology-aware federated learning in edge computing: A comprehensive survey. ACM Comput. Surv. 2024, 56, 1–41. [Google Scholar] [CrossRef]
  14. Naeem, M.A.; Rehman, M.A.U.; Ullah, R.; Kim, B.-S. A Comparative Performance Analysis of Popularity-Based Caching Strategies in Named Data Networking. IEEE Access 2020, 8, 50057–50077. [Google Scholar] [CrossRef]
  15. Naeem, M.A.; Ullah, R.; Meng, Y.; Ali, R.; Lodhi, B.A. Caching Content on the Network Layer: A Performance Analysis of Caching Schemes in ICN-Based Internet of Things. IEEE Internet Things J. 2021, 9, 6477–6495. [Google Scholar] [CrossRef]
  16. Meng, Y.; Naeem, M.A.; Ali, R.; Bin Zikria, Y.; Kim, S.W. DCS: Distributed caching strategy at the edge of vehicular sensor networks in information-centric networking. Sensors 2019, 19, 4407. [Google Scholar] [CrossRef]
  17. Naeem, M.A.; Nguyen, T.N.; Ali, R.; Cengiz, K.; Meng, Y.; Khurshaid, T. Hybrid cache management in IoT-based named data networking. IEEE Internet Things J. 2021, 9, 7140–7150. [Google Scholar] [CrossRef]
  18. Chen, X.; Liu, G. Federated deep reinforcement learning-based task offloading and resource allocation for smart cities in a mobile edge network. Sensors 2022, 22, 4738. [Google Scholar] [CrossRef]
  19. Tam, P.; Corrado, R.; Eang, C.; Kim, S. Applicability of deep reinforcement learning for efficient federated learning in massive IoT communications. Appl. Sci. 2023, 13, 3083. [Google Scholar] [CrossRef]
  20. Singh, P.; Hazarika, B.; Singh, K.; Pan, C.; Huang, W.-J.; Li, C.-P. DRL-Based Federated Learning for Efficient Vehicular Caching Management. IEEE Internet Things J. 2024, 1. [Google Scholar] [CrossRef]
  21. Krishnendu, S.; Bharath, B.N.; Garg, N.; Bhatia, V.; Ratnarajah, T. Learning to cache: Federated caching in a cellular network with correlated demands. IEEE Trans. Commun. 2021, 70, 1653–1665. [Google Scholar]
  22. Wu, Q.; Zhao, Y.; Fan, Q.; Fan, P.; Wang, J.; Zhang, C. Mobility-aware cooperative caching in vehicular edge computing based on asynchronous federated and deep reinforcement learning. IEEE J. Sel. Top. Signal Process. 2022, 17, 66–81. [Google Scholar] [CrossRef]
  23. Yu, Z.; Hu, J.; Min, G.; Zhao, Z.; Miao, W.; Hossain, M.S. Mobility-aware proactive edge caching for connected vehicles using federated learning. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5341–5351. [Google Scholar] [CrossRef]
  24. Cui, L.; Su, X.; Ming, Z.; Chen, Z.; Yang, S.; Zhou, Y.; Xiao, W. CREAT: Blockchain-assisted compression algorithm of federated learning for content caching in edge computing. IEEE Internet Things J. 2020, 9, 14151–14161. [Google Scholar] [CrossRef]
  25. Naeem, M.A.; Nor, S.A.; Hassan, S.; Kim, B.-S. Performances of Probabilistic Caching Strategies in Content Centric Networking. IEEE Access 2018, 6, 58807–58825. [Google Scholar] [CrossRef]
  26. Fayazbakhsh, S.K.; Lin, Y.; Tootoonchian, A.; Ghodsi, A.; Koponen, T.; Maggs, B.; Ng, K.C.; Shenker, S.; Sekar, V. Less pain, most of the gain: Incrementally deployable icn. ACM SIGCOMM Comput. Commun. Rev. 2013, 43, 147–158. [Google Scholar] [CrossRef]
  27. Alahmri, B.; Al-Ahmadi, S.; Belghith, A. Efficient pooling and collaborative cache management for NDN/IoT networks. IEEE Access 2021, 9, 43228–43240. [Google Scholar] [CrossRef]
  28. Yovita, L.V.; Syambas, N.R. Caching on Named Data Network: A Survey and Future Research. Int. J. Electr. Comput. Eng. 2018, 8, 4456–4466. [Google Scholar] [CrossRef]
  29. Zhang, M.; Xie, P.; Zhu, J.; Wu, Q.; Zheng, R.; Zhang, H. NCPP-based caching and NUR-based resource allocation for information-centric networking. J. Ambient Intell. Humaniz. Comput. 2017, 10, 1739–1745. [Google Scholar] [CrossRef]
  30. Yan, H.; Gao, D.; Su, W.; Foh, C.H.; Zhang, H.; Vasilakos, A.V. Caching strategy based on hierarchical cluster for named data networking. IEEE Access 2017, 5, 8433–8443. [Google Scholar] [CrossRef]
  31. Meng, Y.; Ahmad, A.B. Performance Measurement through Caching in Named Data Networking based Internet of Things. IEEE Access 2023, 11, 120569–120584. [Google Scholar] [CrossRef]
  32. Dolmans, S.A.M.; van Galen, W.P.L.; Walrave, B.; Ouden, E.D.; Valkenburg, R.; Romme, A.G.L. A Dynamic Perspective on Collaborative Innovation for Smart City Development: The role of uncertainty, governance, and institutional logics. Organ. Stud. 2023, 44, 1577–1601. [Google Scholar] [CrossRef]
  33. Naeem, M.A.; Nor, S.A.; Hassan, S.; Kim, B.-S. Compound Popular Content Caching Strategy in Named Data Networking. Electronics 2019, 8, 771. [Google Scholar] [CrossRef]
  34. Shukla, S.; Hassan, M.F.; Tran, D.C.; Akbar, R.; Paputungan, I.V.; Khan, M.K. Improving latency in Internet-of-Things and cloud computing for real-time data transmission: A systematic literature review (SLR). Clust. Comput. 2021, 26, 2657–2680. [Google Scholar] [CrossRef]
  35. Rao, P.M.; Deebak, B.D. Security and privacy issues in smart cities/industries: Technologies, applications, and challenges. J. Ambient Intell. Humaniz. Comput. 2022, 14, 10517–10553. [Google Scholar] [CrossRef]
  36. Gupta, D.; Rani, S.; Ahmed, S.H. ICN-edge caching scheme for handling multimedia big data traffic in smart cities. Multimed. Tools Appl. 2022, 82, 39697–39717. [Google Scholar] [CrossRef]
  37. Alubady, R.; Salman, M.; Mohamed, A.S. A review of modern caching strategies in named data network: Overview, classification, and research directions. Telecommun. Syst. 2023, 84, 581–626. [Google Scholar] [CrossRef]
  38. Anitha, P.; Vimala, H.S.; Shreyas, J. Comprehensive review on congestion detection, alleviation, and control for IoT networks. J. Netw. Comput. Appl. 2024, 221, 103749. [Google Scholar]
  39. Abujassar, R.S. A highly effective algorithm for mitigating and identifying congestion through continuous monitoring of IoT networks, improving energy consumption. Wirel. Netw. 2024, 30, 3161–3180. [Google Scholar] [CrossRef]
  40. Barrios, C.; Kumar, M. Service caching and computation reuse strategies at the edge: A survey. ACM Comput. Surv. 2023, 56, 1–38. [Google Scholar] [CrossRef]
  41. Rafiq, I.; Mahmood, A.; Razzaq, S.; Jafri, S.H.M.; Aziz, I. IoT applications and challenges in smart cities and services. J. Eng. 2023, 2023, e12262. [Google Scholar] [CrossRef]
  42. Mansour, M.; Gamal, A.; Ahmed, A.I.; Said, L.A.; Elbaz, A.; Herencsar, N.; Soltan, A. Internet of things: A comprehensive overview on protocols, architectures, technologies, simulation tools, and future directions. Energies 2023, 16, 3465. [Google Scholar] [CrossRef]
  43. Shrimali, R.; Shah, H.; Chauhan, R. Proposed caching scheme for optimizing trade-off between freshness and energy consumption in name data networking based IoT. Adv. Internet Things 2017, 7, 11–24. [Google Scholar] [CrossRef]
  44. Serhane, O.; Yahyaoui, K.; Nour, B.; Moungla, H. Energy-aware cache placement scheme for IoT-based ICN networks. In Proceedings of the ICC 2021-IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021; IEEE: New York, NY, USA, 2021. [Google Scholar]
Figure 1. Performance on the cache hit ratio.
Figure 2. Performance on energy consumption.
Figure 3. Performance on content retrieval delay.
Table 1. Symbols and their corresponding descriptions.

$D$: Set of IoT devices, including sensors and actuators.
$N$: Set of network and edge nodes.
$X$: Set of contents in the network.
$R$: Set of requests generated for contents.
$\lambda_{i,j}(t)$: Local request rate of node $n_i$ for content $x_j$ at time $t$.
$|X_i(t)|$: Cache state at node $n_i$ at time $t$.
$C_i$: Cache capacity of node $n_i$.
$A_{i,j}(t)$: Decision variable indicating whether content $x_j$ is cached at node $n_i$ at time $t$.
$L_{i,j}(t)$: Latency for retrieving content $x_j$ by node $n_i$ at time $t$.
$E_{i,j}(t)$: Energy consumed when node $n_i$ accesses content $x_j$ at time $t$.
$l_{local}$: Latency for retrieving content from the local cache.
$l_{remote}$: Latency for transferring content from a remote server.
$l_{network}$: Latency for network communication.
$e_{local}$: Energy consumption for retrieving data from the local cache.
$e_{remote}$: Energy consumption for fetching data from a remote server.
$e_{network}$: Energy consumption for network communication.
$p_{i,j}(t)$: Aggregated predicted popularity of content $x_j$ at node $n_i$ at time $t$.
$J_{obj}$: Objective function, the weighted sum of average data retrieval latency and energy consumption.
$\alpha$: Latency weighting factor.
$\beta$: Energy consumption weighting factor.
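From these definitions, the objective can be written in a plausible closed form. The reconstruction below is inferred from Table 1 rather than quoted from the paper: the averaging over the request set $R$ and the decomposition of latency and energy via the decision variable $A_{i,j}(t)$ are assumptions consistent with the symbol descriptions.

```latex
% Plausible reconstruction inferred from the Table 1 symbols:
J_{obj} = \alpha \cdot \frac{1}{|R|} \sum_{(i,j) \in R} L_{i,j}(t)
        + \beta \cdot \frac{1}{|R|} \sum_{(i,j) \in R} E_{i,j}(t),
\qquad
L_{i,j}(t) = A_{i,j}(t)\, l_{\mathrm{local}}
           + \bigl(1 - A_{i,j}(t)\bigr)\bigl(l_{\mathrm{remote}} + l_{\mathrm{network}}\bigr),
\qquad
E_{i,j}(t) = A_{i,j}(t)\, e_{\mathrm{local}}
           + \bigl(1 - A_{i,j}(t)\bigr)\bigl(e_{\mathrm{remote}} + e_{\mathrm{network}}\bigr).
```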
Table 2. Parameters and values.

Distribution of nodes: Random
Number of nodes: 80
Communication range: 150 m
Cache size: 50–250 MB
Popularity model: Zipf
Zipf skewness ($\alpha$): 0.2, 0.4, 0.6, 0.8, 1.0
Topology area: 500 × 500
Traffic rate: 10 packets/s
Packet size: 52 bytes
Model aggregation frequency: Every 5 min
Mobility model: Random
Wireless technology: IEEE 802.11
Channel loss: 0.1 (10%)
Initial energy: 1 J
Transmission energy: 50 nJ/bit
Data caching energy: 10 nJ/bit
Amplifier energy: 10 pJ/bit/m²
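The Zipf popularity model with the skewness values listed above can be reproduced with a few lines of Python. This is a sketch for readers who want to regenerate a comparable request workload, not the paper's simulation code; the catalog size of 1000 contents is an assumption, since Table 2 does not specify it.

```python
import random

def zipf_pmf(num_contents, alpha):
    """Zipf popularity: P(rank k) is proportional to k^(-alpha), normalized."""
    weights = [k ** -alpha for k in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def generate_requests(num_contents, alpha, n, seed=42):
    """Draw n content requests (by popularity rank) from the Zipf model."""
    rng = random.Random(seed)
    pmf = zipf_pmf(num_contents, alpha)
    ranks = list(range(1, num_contents + 1))
    return rng.choices(ranks, weights=pmf, k=n)

# Sweep the skewness values from Table 2; n = 600 corresponds to the listed
# traffic rate of 10 packets/s sustained over one minute.
for alpha in (0.2, 0.4, 0.6, 0.8, 1.0):
    reqs = generate_requests(num_contents=1000, alpha=alpha, n=600)
    top10_share = sum(1 for r in reqs if r <= 10) / len(reqs)
    print(f"alpha={alpha}: share of requests for top-10 contents = {top10_share:.2f}")
```

Higher skewness concentrates requests on a small set of popular contents, which is exactly the regime in which popularity-based caching strategies such as the one proposed here pay off.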
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
