Search Results (26)

Search Parameters:
Keywords = edge caching mechanisms

17 pages, 748 KiB  
Article
Task Offloading Scheme Based on Proximal Policy Optimization Algorithm
by Yutong Ma and Junfeng Tian
Appl. Sci. 2025, 15(9), 4761; https://doi.org/10.3390/app15094761 - 25 Apr 2025
Viewed by 457
Abstract
The rapid development of mobile Internet technology has continuously raised users’ requirements for quality of service (QoS). The task offloading process in mobile edge computing struggles to balance delay and energy consumption when network bandwidth fluctuates. To address this issue, this paper proposes a task offloading scheme based on the Proximal Policy Optimization (PPO) algorithm. Building on the traditional cloud–edge collaborative architecture, the scheme further integrates a collaborative computing mechanism between edge node devices and introduces service caching to avoid duplicate data transmission, reduce communication latency and network load, and improve overall system performance. First, the paper constructs an energy efficiency function that weights energy consumption and latency as the core optimization objective. The task offloading process of mobile terminal devices is then modeled as a Markov Decision Process (MDP). Finally, the deep reinforcement learning PPO algorithm is used to train and solve the model. Simulation results show that the proposed scheme has significant advantages in reducing energy consumption and latency compared to the baseline schemes.
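The weighted energy-delay objective described in this abstract can be sketched as a simple cost comparison. All parameter values and the linear delay/energy models below are illustrative assumptions, not the paper's actual formulation:

```python
# Sketch of a weighted energy-delay cost for a single offloading decision.
# All parameters (cycles, power levels, bandwidth) are illustrative
# assumptions, not values from the paper.

def local_cost(cycles, cpu_freq, power_local, alpha=0.5):
    """Cost of executing locally: alpha * delay + (1 - alpha) * energy."""
    delay = cycles / cpu_freq                # seconds
    energy = power_local * delay             # joules
    return alpha * delay + (1 - alpha) * energy

def offload_cost(data_bits, bandwidth, power_tx, cycles, edge_freq, alpha=0.5):
    """Cost of offloading: transmission delay/energy plus edge execution delay."""
    tx_delay = data_bits / bandwidth
    tx_energy = power_tx * tx_delay
    exec_delay = cycles / edge_freq
    return alpha * (tx_delay + exec_delay) + (1 - alpha) * tx_energy

# Offload when the weighted cost is lower than executing locally.
task = dict(cycles=2e9, data_bits=4e6)
c_local = local_cost(task["cycles"], cpu_freq=1e9, power_local=0.9)
c_edge = offload_cost(task["data_bits"], bandwidth=2e7, power_tx=0.3,
                      cycles=task["cycles"], edge_freq=1e10)
decision = "offload" if c_edge < c_local else "local"
```

A PPO agent would learn this decision from experience rather than compare closed-form costs, but the reward it optimizes has this weighted shape.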

29 pages, 1776 KiB  
Article
Deep Reinforcement Learning-Enabled Computation Offloading: A Novel Framework to Energy Optimization and Security-Aware in Vehicular Edge-Cloud Computing Networks
by Waleed Almuseelem
Sensors 2025, 25(7), 2039; https://doi.org/10.3390/s25072039 - 25 Mar 2025
Viewed by 1267
Abstract
The Vehicular Edge-Cloud Computing (VECC) paradigm has gained traction as a promising solution for mitigating vehicles’ computational constraints by offloading resource-intensive tasks to distributed edge and cloud networks. However, conventional computation offloading mechanisms frequently induce network congestion and service delays, stemming from uneven workload distribution across spatial Roadside Units (RSUs). Moreover, ensuring data security and optimizing energy usage within this framework remain significant challenges. To this end, this study introduces a deep reinforcement learning-enabled computation offloading framework for multi-tier VECC networks. First, a dynamic load-balancing algorithm is developed to optimize the balance among RSUs, incorporating real-time analysis of heterogeneous network parameters, including RSU computational load, channel capacity, and proximity-based latency. Additionally, to alleviate congestion in static RSU deployments, the framework deploys UAVs in high-density zones, dynamically augmenting both storage and processing resources. Moreover, an Advanced Encryption Standard (AES)-based mechanism, secured with dynamic one-time encryption key generation, is implemented to fortify data confidentiality during transmission. Further, a context-aware edge caching strategy preemptively stores processed tasks, reducing redundant computations and the associated energy overheads. Subsequently, a mixed-integer optimization model is formulated that minimizes energy consumption while guaranteeing latency constraints. Given the combinatorial complexity of large-scale vehicular networks, the problem is recast in an equivalent reinforcement learning form, and a deep learning-based algorithm is designed to learn near-optimal offloading solutions under dynamic conditions. Empirical evaluations demonstrate that the proposed framework significantly outperforms existing benchmark techniques in terms of energy savings. These results underscore the framework’s efficacy in advancing sustainable, secure, and scalable intelligent transportation systems.
(This article belongs to the Special Issue Vehicle-to-Everything (V2X) Communication Networks 2024–2025)

30 pages, 6408 KiB  
Article
Construction of a Deep Learning Model for Unmanned Aerial Vehicle-Assisted Safe Lightweight Industrial Quality Inspection in Complex Environments
by Zhongyuan Jing and Ruyan Wang
Drones 2024, 8(12), 707; https://doi.org/10.3390/drones8120707 - 27 Nov 2024
Viewed by 1283
Abstract
With the development of mobile communication technology and the proliferation of the number of Internet of Things (IoT) terminal devices, a large amount of data and intelligent applications are emerging at the edge of the Internet, giving rise to the demand for edge intelligence. In this context, federated learning, as a new distributed machine learning method, becomes one of the key technologies to realize edge intelligence. Traditional edge intelligence networks usually rely on terrestrial communication base stations as parameter servers to manage communication and computation tasks among devices. However, this fixed infrastructure is difficult to adapt to the complex and ever-changing heterogeneous network environment. With its high degree of flexibility and mobility, the introduction of unmanned aerial vehicles (UAVs) into the federated learning framework can provide enhanced communication, computation, and caching services in edge intelligence networks, but the limited communication bandwidth and unreliable communication environment increase system uncertainty and may lead to a decrease in overall energy efficiency. To address the above problems, this paper designs a UAV-assisted federated learning with a privacy-preserving and efficient data sharing method, Communication-efficient and Privacy-protection for FL (CP-FL). A network-sparsifying pruning training method based on a channel importance mechanism is proposed to transform the pruning training process into a constrained optimization problem. A quantization-aware training method is proposed to automate the learning of quantization bitwidths to improve the adaptability between features and data representation accuracy. In addition, differential privacy is applied to the uplink data on this basis to further protect data privacy. After the model parameters are aggregated on the pilot UAV, the model is subjected to knowledge distillation to reduce the amount of downlink data without affecting the utility. 
Experiments on real-world datasets validate the effectiveness of the scheme. The results show that, compared with other federated learning frameworks, the CP-FL approach effectively mitigates both the communication and the computation overhead, and achieves an equally strong balance between privacy and utility under differential privacy preservation.
(This article belongs to the Special Issue Mobile Fog and Edge Computing in Drone Swarms)

16 pages, 430 KiB  
Article
Multi-Agent Deep-Q Network-Based Cache Replacement Policy for Content Delivery Networks
by Janith K. Dassanayake, Minxiao Wang, Muhammad Z. Hameed and Ning Yang
Future Internet 2024, 16(8), 292; https://doi.org/10.3390/fi16080292 - 14 Aug 2024
Cited by 1 | Viewed by 1602
Abstract
In today’s digital landscape, content delivery networks (CDNs) play a pivotal role in ensuring rapid and seamless access to online content across the globe. By strategically deploying a network of edge servers in close proximity to users, CDNs optimize the delivery of digital content. One key mechanism involves caching frequently requested content at these edge servers, which not only alleviates the load on the source CDN server but also enhances the overall user experience. However, the exponential growth in user demands has led to increased network congestion, subsequently reducing the cache hit ratio within CDNs. To address this reduction, this paper presents an approach for efficient cache replacement in a dynamic caching environment that maximizes the cache hit ratio via a cooperative cache replacement policy based on reinforcement learning. The proposed system model depicts a mesh network of CDNs, with edge servers catering to user requests, and a main source CDN server. The cache replacement problem is initially modeled as a Markov decision process and then extended to a multi-agent reinforcement learning problem. We propose a cooperative cache replacement algorithm based on a multi-agent deep-Q network (MADQN), where the edge servers cooperatively learn to efficiently replace cached content to maximize the cache hit ratio. Experimental results validate the performance of our approach: the MADQN policy exhibits superior cache hit ratios and lower average delays compared to traditional caching policies.
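The MDP framing of cache replacement can be illustrated with a toy, single-agent tabular stand-in (the paper itself uses a multi-agent deep Q-network across cooperating edge servers, which this sketch does not attempt to reproduce; all hyperparameters and the Zipf workload are assumptions):

```python
import random
from collections import defaultdict, deque

# Toy tabular Q-learning cache replacement: state is a coarse feature of the
# cache (which entries appeared in the recent request window), action is the
# slot to evict, reward favors evicting items absent from the window.
random.seed(0)
CATALOG, CACHE_SIZE = 20, 4
q = defaultdict(float)            # Q[(state, action)] -> value
alpha, gamma, eps = 0.2, 0.9, 0.1

def zipf_request():
    # Popularity-skewed requests: low item ids are requested more often.
    weights = [1 / (i + 1) for i in range(CATALOG)]
    return random.choices(range(CATALOG), weights=weights)[0]

cache, hits, total = [], 0, 0
recent = deque(maxlen=50)         # sliding window of recent requests
for _ in range(5000):
    item = zipf_request()
    recent.append(item)
    total += 1
    if item in cache:
        hits += 1
        continue
    if len(cache) < CACHE_SIZE:
        cache.append(item)
        continue
    state = tuple(sorted(c in recent for c in cache))
    if random.random() < eps:
        action = random.randrange(CACHE_SIZE)
    else:
        action = max(range(CACHE_SIZE), key=lambda a: q[(state, a)])
    evicted = cache[action]
    cache[action] = item
    reward = 0.0 if evicted in recent else 1.0
    next_state = tuple(sorted(c in recent for c in cache))
    best_next = max(q[(next_state, a)] for a in range(CACHE_SIZE))
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

hit_ratio = hits / total
```

In the paper's setting each edge server runs its own agent and the agents share the hit-ratio objective; the deep network replaces this toy Q-table.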
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)

19 pages, 6680 KiB  
Review
Reliability and Security for Fog Computing Systems
by Egor Shiriaev, Tatiana Ermakova, Ekaterina Bezuglova, Maria A. Lapina and Mikhail Babenko
Information 2024, 15(6), 317; https://doi.org/10.3390/info15060317 - 29 May 2024
Cited by 1 | Viewed by 2011
Abstract
Fog computing (FC) is a distributed architecture in which computing resources and services are placed on edge devices closer to data sources. This enables more efficient data processing, shorter latency times, and better performance. Fog computing was shown to be a promising solution for addressing the new computing requirements. However, there are still many challenges to overcome to utilize this new computing paradigm, in particular, reliability and security. Following this need, a systematic literature review was conducted to create a list of requirements. As a result, the following four key requirements were formulated: (1) low latency and response times; (2) scalability and resource management; (3) fault tolerance and redundancy; and (4) privacy and security. Low latency and fast response times can be achieved through edge caching, real-time edge analytics and decision making, and mobile edge computing. Scalability and resource management can be enabled by edge federation, virtualization and containerization, and edge resource discovery and orchestration. Fault tolerance and redundancy can be provided by backup and recovery mechanisms, data replication strategies, and disaster recovery plans, with the residue number system (RNS) being a promising solution. Data security and data privacy are manifested in strong authentication and authorization mechanisms and access control management, with fully homomorphic encryption (FHE) and secret sharing schemes (SSS) being of particular interest.
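The residue number system mentioned as a fault-tolerance candidate is easy to illustrate: a value is stored as residues modulo pairwise-coprime bases and recovered with the Chinese Remainder Theorem. The bases below are illustrative:

```python
from math import prod

# Residue number system (RNS) sketch: encode a value as residues modulo
# pairwise-coprime bases, recover it via the Chinese Remainder Theorem.
BASES = (7, 11, 13, 17)          # pairwise coprime; dynamic range = 7*11*13*17 = 17017

def to_rns(x):
    return tuple(x % m for m in BASES)

def from_rns(residues):
    M = prod(BASES)
    x = 0
    for r, m in zip(residues, BASES):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

value = 12345
assert from_rns(to_rns(value)) == value

# Residue arithmetic is digit-parallel: addition acts independently per
# modulus, which is what makes redundant residues attractive for recovery.
a, b = 123, 456
summed = tuple((x + y) % m for x, y, m in zip(to_rns(a), to_rns(b), BASES))
assert from_rns(summed) == a + b
```

Adding redundant moduli beyond the dynamic range lets a system detect and correct a corrupted residue, which is the fault-tolerance angle the review points at.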
(This article belongs to the Special Issue Digital Privacy and Security, 2nd Edition)

24 pages, 903 KiB  
Article
Computation Offloading Based on a Distributed Overlay Network Cache-Sharing Mechanism in Multi-Access Edge Computing
by Yazhi Liu, Pengfei Zhong, Zhigang Yang, Wei Li and Siwei Li
Future Internet 2024, 16(4), 136; https://doi.org/10.3390/fi16040136 - 19 Apr 2024
Cited by 1 | Viewed by 2198
Abstract
Multi-access edge computing (MEC) enhances service quality for users and reduces computational overhead by migrating workloads and application data to the network edge. However, current solutions for task offloading and cache replacement in edge scenarios are constrained by factors such as communication bandwidth, wireless network coverage, and limited storage capacity of edge devices, making it challenging to achieve high cache reuse and lower system energy consumption. To address these issues, a framework leveraging cooperative edge servers deployed in wireless access networks across different geographical regions is designed. Specifically, we propose the Distributed Edge Service Caching and Offloading (DESCO) network architecture and design a decentralized resource-sharing algorithm based on consistent hashing, named Cache Chord. Subsequently, based on DESCO and aiming to minimize overall user energy consumption while maintaining user latency constraints, we introduce the real-time computation offloading (RCO) problem and transform RCO into a multi-player static game, prove the existence of Nash equilibrium solutions, and solve it using a multi-dimensional particle swarm optimization algorithm. Finally, simulation results demonstrate that the proposed solution reduces the average energy consumption by over 27% in the DESCO network compared to existing algorithms.
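Cache Chord is described as consistent-hashing-based. A minimal consistent-hashing ring conveys the idea; the node names, virtual-node count, and 32-bit ring size below are illustrative choices, not the paper's implementation:

```python
import hashlib
from bisect import bisect_right

# Minimal consistent-hashing ring of the kind Cache Chord builds on.
def h(key: str) -> int:
    # 32-bit position on the ring, derived from SHA-1.
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")

class Ring:
    def __init__(self, nodes, vnodes=50):
        # Each server occupies many virtual points for smoother balance.
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def lookup(self, key):
        """Return the first node clockwise from the key's hash."""
        i = bisect_right(self.points, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["edge-a", "edge-b", "edge-c"])
owner = ring.lookup("video/42.ts")

# Adding a server only remaps the keys on its new arcs; most cached
# content keeps its owner, which is the point of consistent hashing.
bigger = Ring(["edge-a", "edge-b", "edge-c", "edge-d"])
moved = sum(ring.lookup(f"obj{i}") != bigger.lookup(f"obj{i}") for i in range(1000))
```

With four equal servers, roughly a quarter of the 1000 probe keys change owner, rather than nearly all of them as under a plain `hash(key) % n` scheme.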

25 pages, 15500 KiB  
Article
Optimizing CNN Hardware Acceleration with Configurable Vector Units and Feature Layout Strategies
by Jinzhong He, Ming Zhang, Jian Xu, Lina Yu and Weijun Li
Electronics 2024, 13(6), 1050; https://doi.org/10.3390/electronics13061050 - 12 Mar 2024
Cited by 1 | Viewed by 1872
Abstract
Convolutional neural network (CNN) hardware acceleration is critical to improve the performance and facilitate the deployment of CNNs in edge applications. Due to its efficiency and simplicity, channel group parallelism has become a popular method for CNN hardware acceleration. However, when processing data involving small channels, there will be a mismatch between feature data and computing units, resulting in a low utilization of the computing units. When processing the middle layer of the convolutional neural network, the mismatch between the feature-usage order and the feature-loading order leads to a low input feature cache hit rate. To address these challenges, this paper proposes an innovative method inspired by data reordering technology, aiming to achieve CNN hardware acceleration that reuses the same multiplier resources. This method focuses on transforming the hardware acceleration process into feature organization, feature block scheduling and allocation, and feature calculation subtasks to ensure the efficient mapping of continuous loading and the calculation of feature data. Specifically, this paper introduces a convolutional algorithm mapping strategy and a configurable vector operation unit to enhance multiplier utilization for different feature map sizes and channel numbers. In addition, an off-chip address mapping and on-chip cache management mechanism is proposed to effectively improve the feature access efficiency and on-chip feature cache hit rate. Furthermore, a configurable feature block scheduling policy is proposed to strike a balance between weight reuse and feature writeback pressure. Experimental results demonstrate the effectiveness of this method. When using 512 multipliers and accelerating VGG16 at 100 MHz, the actual computing performance reaches 102.3 giga operations per second (GOPS). 
Compared with other CNN hardware acceleration methods, the average computing-array utilization is as high as 99.88%, yielding a higher computing density.
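The reported figures are straightforward to cross-check. Assuming the usual convention that one multiply-accumulate counts as two operations, 512 multipliers at 100 MHz give a 102.4 GOPS theoretical peak, so the measured 102.3 GOPS corresponds to the ~99.88% utilization the authors cite:

```python
# Back-of-envelope check of the reported throughput, counting each
# multiply-accumulate as two operations (multiply + add).
multipliers = 512
freq_hz = 100e6
peak_gops = multipliers * 2 * freq_hz / 1e9   # theoretical peak in GOPS
utilization = 102.3 / peak_gops               # measured / peak
```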

26 pages, 936 KiB  
Article
Enhancing Heterogeneous Network Performance: Advanced Content Popularity Prediction and Efficient Caching
by Zhiyao Sun and Guifen Chen
Electronics 2024, 13(4), 794; https://doi.org/10.3390/electronics13040794 - 18 Feb 2024
Cited by 2 | Viewed by 2067
Abstract
With the popularity of smart devices and the growth of high-bandwidth applications, the wireless industry is facing a surge in data traffic. This challenge highlights the limitations of traditional edge-caching solutions, especially in terms of content-caching effectiveness and network-communication latency. To address this problem, we investigated efficient caching strategies in heterogeneous network environments. The caching decision process becomes more complex due to the heterogeneity of the network environment and the diversity of user behaviors and content requests. To address the increased system latency caused by dynamically changing content popularity and limited cache capacity, we propose a novel content placement strategy, the long short-term memory content-popularity-prediction model, which captures the correlation of request patterns between different contents and their periodicity in the time domain to improve the accuracy of content-popularity prediction. Then, to handle the heterogeneity of the network environment, we propose an efficient content delivery strategy: the multi-agent actor–critic collaborative caching policy. This strategy models the edge-caching problem in heterogeneous scenarios as a Markov decision process using multi-base-station environment information. To fully utilize the multi-agent information, we improve the actor–critic approach by integrating an attention mechanism into the neural network: the actor network makes decisions based on local information, while the critic network evaluates and enhances the actor’s performance. We conducted extensive simulations, and the results showed that the long short-term memory content-popularity-prediction model reduced the prediction error by 28.61% compared to several existing methods. The proposed multi-agent actor–critic collaborative caching policy improved the cache hit rate by up to 32.3% and reduced the system latency by 1.6%, demonstrating the feasibility and effectiveness of the algorithm.

18 pages, 3850 KiB  
Article
Joint Optimization of Task Caching and Computation Offloading for Multiuser Multitasking in Mobile Edge Computing
by Xintong Zhu, Zongpu Jia, Xiaoyan Pang and Shan Zhao
Electronics 2024, 13(2), 389; https://doi.org/10.3390/electronics13020389 - 17 Jan 2024
Cited by 3 | Viewed by 2161
Abstract
Mobile edge computing extends the capabilities of the cloud to the edge to meet the latency performance required by new types of applications. Task caching reduces network energy consumption by caching task applications and associated databases in advance on edge devices. However, determining an effective caching strategy is crucial, since users generate numerous repetitive tasks while edge devices and storage resources are limited. We aimed to address the problem of highly coupled decision variables in dynamic task caching and computation offloading for multiuser multitasking in mobile edge computing systems. This paper presents a joint computation and caching framework that minimizes delay and energy expenditure for mobile users and transforms the problem into a reinforcement learning form. On this basis, an improved deep reinforcement learning algorithm, P-DDPG, is proposed to achieve efficient computation offloading and task caching decisions for mobile users. The algorithm integrates deep deterministic policy gradient with a prioritized experience replay mechanism to reduce system costs. Simulations show that the designed algorithm achieves lower task latency and computing power consumption.
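The prioritized experience replay component can be sketched as a proportional-priority buffer. This is a generic sketch with illustrative hyperparameters, not the P-DDPG implementation:

```python
import random

# Proportional prioritized experience replay: transitions with larger
# temporal-difference (TD) error are replayed more often.
class PrioritizedReplay:
    def __init__(self, capacity=1000, alpha=0.6, eps=1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.buffer, self.priorities = [], []

    def push(self, transition):
        # New transitions get max priority so they are sampled at least once.
        p = max(self.priorities, default=1.0)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, k):
        scaled = [p ** self.alpha for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=scaled, k=k)
        return idx, [self.buffer[i] for i in idx]

    def update(self, idx, td_errors):
        # Larger TD error -> higher replay priority on the next draw.
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + self.eps

random.seed(1)
buf = PrioritizedReplay(capacity=100)
for t in range(100):
    buf.push(("state", t))
idx, batch = buf.sample(8)
buf.update(idx, [0.5] * 8)
```

In a DDPG-style learner, the sampled batch feeds the critic update and the resulting TD errors flow back through `update`, biasing replay toward surprising transitions.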
(This article belongs to the Special Issue Advances in 5G Wireless Edge Computing)

21 pages, 2271 KiB  
Article
EDI-C: Reputation-Model-Based Collaborative Audit Scheme for Edge Data Integrity
by Fan Yang, Yi Sun, Qi Gao and Xingyuan Chen
Electronics 2024, 13(1), 75; https://doi.org/10.3390/electronics13010075 - 23 Dec 2023
Cited by 3 | Viewed by 1322
Abstract
The emergence of mobile edge computing (MEC) has facilitated the development of data caching technology, which enables application vendors to cache frequently used data on edge servers close to the user, thereby providing low-latency data access services. However, in an unstable MEC environment, the multi-replica data cached by different edge servers is prone to corruption, making it crucial to verify the consistency of multi-replica data across edge servers. Although existing research realizes data integrity verification based on the cooperation of multiple edge servers, the integrity proof generated for multiple copies of the data is identical, which yields low verification efficiency and is vulnerable to attacks such as replay and replacement. To address these issues, this paper proposes an efficient and lightweight multi-replica integrity verification algorithm based on homomorphic hashing and sampling, which has significantly lower storage and computational costs and can resist forgery, replay, and replacement attacks. Building on this algorithm, the paper further proposes EDI-C, a reputation-model-based collaborative audit scheme for multi-replica edge data integrity. EDI-C realizes efficient collaborative auditing among multiple edge servers in a distributed environment through an incentive mechanism, avoiding the trust problems that centralized auditing causes for both parties. It also supports batch auditing of multiple copies of original data files through parallel processing and data-block auditing, which not only significantly improves verification efficiency but also enables accurate localization and repair of corrupted data at the data-block level. Finally, security analyses and a performance evaluation show the security and practicability of EDI-C. Compared with representative schemes, the results show that EDI-C ensures the integrity verification of cached data more efficiently in an MEC environment.
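The homomorphic-hash property such verification algorithms rely on can be illustrated with a toy exponential hash; the prime and generator below are illustrative and far too small for real security:

```python
# Toy multiplicative homomorphic hash: H(m) = g^m mod p, so hashes of
# blocks combine the way the blocks themselves do:
#   H(m1) * H(m2) = H(m1 + m2)  (mod p)
# An auditor can therefore check a combined proof against combined hashes
# without ever seeing the data blocks.
P = 2**61 - 1          # a Mersenne prime (illustrative, not secure)
G = 3

def hhash(block: int) -> int:
    return pow(G, block, P)

m1, m2 = 123456789, 987654321
assert (hhash(m1) * hhash(m2)) % P == hhash(m1 + m2)
```

Combining many blocks under random challenge coefficients works the same way, which is what lets a sampling-based audit cover a large file with one small proof.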

32 pages, 3421 KiB  
Article
Federated Edge Intelligence and Edge Caching Mechanisms
by Aristeidis Karras, Christos Karras, Konstantinos C. Giotopoulos, Dimitrios Tsolis, Konstantinos Oikonomou and Spyros Sioutas
Information 2023, 14(7), 414; https://doi.org/10.3390/info14070414 - 18 Jul 2023
Cited by 15 | Viewed by 5279
Abstract
Federated learning (FL) has emerged as a promising technique for preserving user privacy and ensuring data security in distributed machine learning contexts, particularly in edge intelligence and edge caching applications. Recognizing the prevalent challenges of imbalanced and noisy data impacting scalability and resilience, our study introduces two innovative algorithms crafted for FL within a peer-to-peer framework. These algorithms aim to enhance performance, especially in decentralized and resource-limited settings. Furthermore, we propose a client-balancing Dirichlet sampling algorithm with probabilistic guarantees to mitigate oversampling issues, optimizing data distribution among clients to achieve more accurate and reliable model training. Within the specifics of our study, we employed 10, 20, and 40 Raspberry Pi devices as clients in a practical FL scenario, simulating real-world conditions. The well-known FedAvg algorithm was implemented, enabling multi-epoch client training before weight integration. Additionally, we examined the influence of real-world dataset noise, culminating in a performance analysis that underscores how our novel methods and research significantly advance robust and efficient FL techniques, thereby enhancing the overall effectiveness of decentralized machine learning applications, including edge intelligence and edge caching.
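Dirichlet sampling of client data shares, the kind of imbalance the proposed client-balancing sampler mitigates, can be sketched with stdlib gamma draws (a Dirichlet(alpha) vector is a set of normalized Gamma(alpha, 1) variates). The alpha value and client count are illustrative:

```python
import random

# Dirichlet partitioning of a dataset across FL clients. A small alpha
# produces highly skewed shares, the imbalance that client-balancing
# sampling is meant to counteract.
random.seed(42)

def dirichlet(alpha, k):
    draws = [random.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(draws)
    return [d / total for d in draws]

n_clients, n_samples, alpha = 10, 1000, 0.5
shares = dirichlet(alpha, n_clients)
counts = [round(s * n_samples) for s in shares]
# Ratio of the largest client's data to the smallest's (guarding against 0).
imbalance = max(counts) / max(min(counts), 1)
```

Raising alpha toward large values drives the shares toward uniform, which is the knob such experiments typically sweep.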

19 pages, 3464 KiB  
Article
LFDC: Low-Energy Federated Deep Reinforcement Learning for Caching Mechanism in Cloud–Edge Collaborative
by Xinyu Zhang, Zhigang Hu, Meiguang Zheng, Yang Liang, Hui Xiao, Hao Zheng and Aikun Xu
Appl. Sci. 2023, 13(10), 6115; https://doi.org/10.3390/app13106115 - 16 May 2023
Cited by 2 | Viewed by 2122
Abstract
The optimization of caching mechanisms has long been a crucial research focus in cloud–edge collaborative environments. Effective caching strategies can substantially enhance user experience quality in these settings. Deep reinforcement learning (DRL), with its ability to perceive the environment and develop intelligent policies online, has been widely employed for designing caching strategies. Recently, federated learning combined with DRL has been gaining popularity for optimizing caching strategies while protecting training data privacy from eavesdropping attacks. However, online federated deep reinforcement learning algorithms face highly dynamic environments, and real-time training can increase training energy consumption even as it improves caching efficiency. To address this issue, we propose a low-energy federated deep reinforcement learning strategy for caching mechanisms (LFDC) that balances caching efficiency and training energy consumption. The LFDC strategy encompasses a novel energy efficiency model, a deep reinforcement learning mechanism, and a dynamic energy-saving federated policy. Our experimental results demonstrate that the proposed LFDC strategy significantly outperforms existing benchmarks in terms of energy efficiency.
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)

12 pages, 2057 KiB  
Article
Communication, Computing, and Caching Trade-Off in VR Networks
by Yuqing Feng, Dongyu Wang and Yanzhao Hou
Electronics 2023, 12(7), 1577; https://doi.org/10.3390/electronics12071577 - 27 Mar 2023
Cited by 2 | Viewed by 1936
Abstract
As technology continues to advance, virtual reality (VR) video services are able to provide an increasingly realistic video experience. VR applications remain limited, however, since creating an immersive experience requires processing and delivering enormous amounts of data. Mobile edge computing (MEC) is a potential technique for decreasing both the operation time and the energy use of VR. In this study, we develop a VR network in which several MEC servers can supply field-of-view (FOV) files to a VR device in order to satisfy the transmission requirements of VR video service and improve the quality of experience. In this design, the projection from 2D FOV to 3D FOV, as well as data caching, can take place on either an MEC server or a VR device. A cooperative computation offloading and caching strategy is formulated as a decision matrix to reduce transmission requirements under the service time constraint. The VR video service mechanism is examined through this decision matrix, and the trade-off between communication, caching, and computation (3C trade-off) is implemented by means of a closed-form equation for the decision matrix. Simulation results show that the suggested technique performs close to optimally compared with competing methods.
(This article belongs to the Section Networks)

18 pages, 890 KiB  
Article
Intelligent Computation Offloading Mechanism with Content Cache in Mobile Edge Computing
by Feixiang Li, Chao Fang, Mingzhe Liu, Ning Li and Tian Sun
Electronics 2023, 12(5), 1254; https://doi.org/10.3390/electronics12051254 - 6 Mar 2023
Cited by 5 | Viewed by 2203
Abstract
Edge computing is a promising technology that enables user equipment to share computing resources for task offloading. Given the characteristics of the computing resource, designing an efficient computation incentive mechanism with appropriate task offloading and resource allocation strategies is an essential issue. In this manuscript, we propose an intelligent computation offloading mechanism with content caching in mobile edge computing. First, we present the network framework for computation offloading with content caching in mobile edge computing. Then, by deriving necessary and sufficient conditions, an optimal contract is designed to obtain joint task offloading, resource allocation, and computation strategies through an intelligent mechanism. Simulation results demonstrate the efficiency of our proposed approach.
(This article belongs to the Special Issue Resource Allocation in Cloud–Edge–End Cooperation Networks)

27 pages, 4842 KiB  
Article
Cluster-Based Multi-User Multi-Server Caching Mechanism in Beyond 5G/6G MEC
by Rasha Samir, Hadia El-Hennawy and Hesham Elbadawy
Sensors 2023, 23(2), 996; https://doi.org/10.3390/s23020996 - 15 Jan 2023
Cited by 6 | Viewed by 2626
Abstract
The rapid proliferation of wireless technologies has driven the development of wireless modeling standards, protocols, and control of wireless manipulators. Applications of mobile communication technology in many fields are being dramatically revolutionized to deliver more value at less cost. Multiple-access Edge Computing (MEC) offers excellent advantages for Beyond 5G (B5G) and Sixth-Generation (6G) networks, reducing latency and bandwidth usage while increasing the edge’s capability to deliver multiple services to end users in real time. We propose a Cluster-based Multi-User Multi-Server (CMUMS) caching algorithm to optimize the MEC content caching mechanism and control the distribution of highly popular tasks. As part of our work, we address the integer optimization problem of selecting the content to be cached and the list of hosting servers. This achieves a higher direct hit rate and a lower indirect hit rate, and reduces the overall time delay. Implementing this system model enables maximum utilization of resources and the development of a completely new level of services and innovative approaches.
(This article belongs to the Special Issue Mobile Cloud Computing in Wireless Networks and IoT)
