Editorial

New Technologies and Applications of Edge/Fog Computing Based on Artificial Intelligence and Machine Learning

Ji Su Park
Department of Computer Science and Engineering, Jeonju University, Jeonju 55069, Republic of Korea
Appl. Sci. 2024, 14(13), 5583; https://doi.org/10.3390/app14135583
Submission received: 20 June 2024 / Accepted: 24 June 2024 / Published: 27 June 2024
Multi-access edge computing (MEC) is an emerging computing architecture that enhances and extends traditional mobile cloud computing. Edge computing refers to computing at or near the physical location of users or data sources. By processing computing services close to users’ end devices, users receive faster and more reliable services, and businesses benefit from flexible hybrid cloud computing. Edge computing is one way for businesses to distribute data computation and processing using a common pool of resources across multiple locations. It is used in many industries, including communications, manufacturing, transportation, and public services. Many edge use cases arise from situations where data must be processed locally and in real time, because sending the data to a data center for processing introduces excessive latency. By addressing these problems, edge computing can reduce network costs, eliminate bandwidth constraints, reduce transmission delays and service failures, and more effectively control the movement of sensitive data. With its focus on local data collection and real-time computation, edge computing helps enable the widespread use of data-intensive intelligent applications. For example, artificial intelligence and machine learning tasks, such as image recognition algorithms, can be run more efficiently closer to the data source, eliminating the need to transfer large amounts of data to centralized data centers. Consequently, edge computing (and fog computing) must continue to evolve, and many new technologies are emerging. Given this background, this paper reviews various new technologies in edge/fog computing.

1. Introduction

Multi-access edge computing (MEC) is an emerging computing architecture that enhances and extends traditional mobile cloud computing [1]. Edge computing and fog computing have been developed to process data generated by various sensors directly at a lower level of the hierarchy and thus in greater detail. However, running machine learning (ML) and deep learning workloads is challenging given the limited performance of edge devices. Accordingly, edge computing and fog computing are interconnected and must be considered together within a cloud computing environment. With this environment in place, artificial intelligence (AI) and ML have been widely deployed in various business sectors and industries.
However, with the recent large-scale deployment of edge/fog applications and advances in AI and ML, the orchestration of edge/fog computing can itself be driven by AI and ML. Accordingly, for this Special Issue (SI), we invited papers on the topic “New Technologies and Applications of Edge/Fog Computing Based on Artificial Intelligence and Machine Learning”, and we sincerely thank the authors for their contributions.
A total of 11 papers have been published in this SI: “Enhancing Sequence Movie Recommendation System Using Deep Learning and KMeans”, “Traffic-Aware Optimization of Task Offloading and Content Caching in the Internet of Vehicles”, “Enhancing the Performance of XR Environments Using Fog and Cloud Computing”, “A Proposed Settlement and Distribution Structure for Music Royalties in Korea and Their Artificial Intelligence-Based Applications”, “Performance Analysis of a Keyword-Based Trust Management System for Fog Computing”, “Use of Logarithmic Rates in Multi-Armed Bandit-Based Transmission Rate Control Embracing Frame Aggregations in Wireless Networks”, “Machine Learning-Based Representative Spatiotemporal Event Document Classification”, “Crop Disease Diagnosis with Deep Learning-Based Image Captioning and Object Detection”, “Joint Task Offloading, Resource Allocation, and Load-Balancing Optimization in Multi-UAV-Aided MEC Systems”, “High-Performance IoT Cloud Computing Framework”, and “Applicability of Deep Reinforcement Learning for Efficient Federated Learning in Massive IoT Communications”.

2. New Technologies and Applications

The volume of data is increasing rapidly, not only through traditional media but also through social networking services (SNS). Filtering this information to find the desired content is becoming difficult, so recommendation systems are being developed to address the problem. Accordingly, one paper [2] in this Special Issue proposes a ranking- and reinforcement-based sequence movie recommendation system using a combined deep learning model to solve the data scalability, data sparsity, and cold-start problems of existing recommendation systems. The authors propose a recommendation model that analyzes new users based on their profile information (age, gender, occupation) and connects them with other users with similar preferences. To achieve this, two main processes are carried out. The first applies deep learning with key transformer functions to predict the next movie from the user’s viewing sequence and profile information. The second trains the transformer model and then integrates its predicted ratings with K-means clustering to generate a Top-N recommendation list for the target user. An evaluation of the proposed system on two MovieLens datasets (100 K and 1 M) yielded an RMSE, MAE, precision, recall, and F1 score of 1.0756, 0.8741, 0.5516, 0.3260, and 0.4098, respectively, on the 1 M dataset, with a remarkable improvement on the 100 K dataset (RMSE of 0.9927 and MAE of 0.8007).
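To make the second step concrete, the sketch below illustrates how predicted ratings can be combined with K-means clustering to produce a Top-N list, in the spirit of [2]. It is not the authors’ implementation: the rating matrix, mask of watched movies, cluster count, and list length are all placeholder values.

```python
# Minimal sketch (not the code of [2]): cluster users by predicted ratings,
# then recommend the top-N unseen movies averaged over the user's cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pred_ratings = rng.uniform(1, 5, size=(100, 50))   # users x movies, stand-in for transformer outputs
seen = rng.random((100, 50)) < 0.2                  # mask of already-watched movies

# Group users with similar predicted preferences.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pred_ratings)

def top_n(user: int, n: int = 10) -> np.ndarray:
    """Recommend the n highest-rated unseen movies, averaged over the user's cluster."""
    peers = pred_ratings[labels == labels[user]].mean(axis=0)
    peers[seen[user]] = -np.inf                     # never re-recommend watched items
    return np.argsort(peers)[::-1][:n]

print(top_n(user=3))
```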
With the rapid development of new technologies, including 5G, the number of vehicles continues to increase rapidly. This growth, driven primarily by the emergence of Internet of Vehicles (IoV) technologies, has remarkably improved the driving experience, leading drivers and passengers to use in-vehicle infotainment, streaming services, and real-time applications to enhance their travel. However, as the number of vehicles increases, computational tasks multiply, content requests are duplicated, and resources are wasted. Efficient computation offloading and content caching strategies are therefore critical for the IoV to optimize performance in terms of time delay and energy consumption. Another paper [3] proposes a task offloading and content caching (TOCC) strategy based on the multi-objective evolutionary algorithm based on decomposition (MOEA/D), a collaborative task-offloading and content-caching optimization method built on traffic flow prediction. The strategy first uses an open-source forecasting tool to extract temporal and spatial correlations and predict traffic flow, from which the number of tasks is derived; the joint task-offloading and content-caching problem is then decomposed into several single-objective subproblems using MOEA/D. The experimental results verify the effectiveness of the TOCC strategy: compared with other methods, it reduces latency by up to 29% and energy consumption by up to 83%.
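The toy example below illustrates only the decomposition idea behind MOEA/D used in [3]: the two objectives (latency and energy) are scalarized into several single-objective subproblems via weight vectors. The cost model, weight grid, and random search are invented for illustration; real MOEA/D additionally uses neighborhood-based mating and replacement.

```python
# Toy sketch of decomposition-based multi-objective optimization for task
# offloading (not the TOCC algorithm of [3]).
import numpy as np

rng = np.random.default_rng(1)
N_TASKS = 20

def objectives(x):
    """x[i] = 1 offload task i to the edge, 0 run it locally (illustrative cost model)."""
    latency = np.where(x == 1, 0.4, 1.0).sum()     # offloaded tasks finish faster
    energy  = np.where(x == 1, 0.9, 0.3).sum()     # but cost more transmission energy
    return np.array([latency, energy])

weights = np.array([[w, 1 - w] for w in np.linspace(0, 1, 11)])   # one weight vector per subproblem
best = [None] * len(weights)

for _ in range(2000):                               # simple random search per subproblem
    x = rng.integers(0, 2, N_TASKS)
    f = objectives(x)
    for i, w in enumerate(weights):
        score = float(w @ f)                        # weighted-sum scalarization
        if best[i] is None or score < best[i][0]:
            best[i] = (score, x.copy(), f)

for _, _, f in best:
    print(f"latency={f[0]:.1f}  energy={f[1]:.1f}")  # an approximate Pareto front
```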
Recently, the rapid development of digital technology has brought about many changes in our daily lives, and one of the most notable paradigm shifts is extended reality (XR) technology, which encompasses augmented reality (AR), virtual reality (VR), and mixed reality (MR). These technologies involve highly computational tasks, such as high-performance 3D rendering and extensive 3D point-cloud processing. Such tasks are particularly compute-intensive, must run in real time, and are driven by user interaction, rendering virtual environments seamlessly and without interruption. Accordingly, a transition to the cloud is essential to provide users with a superior experience by building XR environments in a more modern and scalable manner. However, edge/fog computing requires research focused on network bandwidth optimization and real-time processing optimization. Therefore, another paper [4] proposes a new XR system that utilizes edge and fog computing to overcome the limitations of the existing cloud-based XR environment, aiming to optimize network bandwidth and real-time processing so that users enjoy a smooth XR experience without interruption. The paper investigates computational complexity, latency issues, and real-time user interaction issues in XR and proposes a system architecture that leverages edge and fog computing to overcome these challenges, improving the XR experience by efficiently processing input data, rendering output content, and minimizing latency for real-time user interaction. The experimental results showed that the proposed system achieves an average data compression ratio of approximately 70–80%, minimizing network latency between the cloud and the fog by allowing smaller data sizes to be transmitted. The reduced data transfer latency remarkably improves real-time interaction and thus the user experience in XR environments.
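As a rough illustration of the kind of edge-to-fog data reduction discussed in [4], the snippet below quantizes synthetic point-cloud coordinates and deflates them before “transmission”, then reports the size reduction. The data, quantization grid, and resulting ratio are illustrative only and are not the paper’s pipeline or its reported 70–80% figure.

```python
# Back-of-the-envelope measurement of data reduction for a toy point cloud.
import zlib
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(-10, 10, size=(50_000, 3)).astype(np.float32)   # toy point cloud

raw = points.tobytes()
quantized = np.round(points * 100).astype(np.int16)    # quantize to a 1 cm grid (halves the size)
compressed = zlib.compress(quantized.tobytes(), level=6)

saved = 100 * (1 - len(compressed) / len(raw))
print(f"raw={len(raw)} B  sent={len(compressed)} B  reduction={saved:.1f}%")
```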
Much time has passed since the music market changed from analog media (tape, LP, etc.) to digital. However, as digital music became available through online service providers, the distribution of royalties has become an important issue for music copyright holders, because indiscriminate repetitive streaming of digital music leads to unfair royalty allocation. To solve this problem, another paper [5] proposes an AI-based application using music consumption log data. This application uses music usage data to detect hoarding behavior and predict the marketability of music, among other functions.
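For intuition only, the sketch below flags possible streaming “hoarding” by counting repeated plays of the same track by the same account, in the spirit of the log-based detection in [5]. The log records and the threshold are made up; the paper’s actual model is not shown here.

```python
# Toy hoarding check on (user, track, hour) consumption log records.
from collections import Counter

logs = [("u1", "t9", h) for h in range(24)] + [("u2", "t3", 10), ("u2", "t4", 11)]

plays = Counter((user, track) for user, track, _ in logs)
SUSPICIOUS_PLAYS_PER_DAY = 20          # illustrative threshold, not from the paper

flagged = [key for key, n in plays.items() if n >= SUSPICIOUS_PLAYS_PER_DAY]
print(flagged)                          # [('u1', 't9')]
```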
With the spread of the Internet of Things (IoT), fog computing has emerged as a solution for distributed data processing owing to the increased demand for low-latency applications. However, trust and security in a fog computing environment connected to sensors without processing capabilities pose serious problems: malicious nodes, unauthorized access, and data breaches compromise the integrity and reliability of data processing. Therefore, this gap in trust management must be addressed, and the overall security of fog computing must be strengthened. In response, another paper [6] proposes a new keyword-based trust management system for fog computing networks, aiming to improve network efficiency and ensure data integrity. The proposed system creates a keyword table for each node and stores the keywords assigned to it. A total of 1000 keywords are stored in the table, and entries are replaced using a least-recently-used (LRU) policy as keywords are used. Simulation studies conducted with iFogSim evaluate the effectiveness of the proposed scheme in terms of latency and packet delivery ratio.
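A minimal sketch of the per-node keyword table with LRU replacement described in [6] is shown below. The capacity and the trivial “trusted” flag are simplifications; the paper’s full scheme and its latency/packet-delivery evaluation are not reproduced here.

```python
# Per-node keyword table with least-recently-used replacement (simplified).
from collections import OrderedDict

class KeywordTable:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.table = OrderedDict()               # keyword -> trusted flag

    def use(self, keyword: str) -> bool:
        """Look up a keyword, refreshing its recency; evict the LRU entry when full."""
        if keyword in self.table:
            self.table.move_to_end(keyword)      # mark as most recently used
            return self.table[keyword]
        if len(self.table) >= self.capacity:
            self.table.popitem(last=False)       # drop the least recently used keyword
        self.table[keyword] = True               # assume newly assigned keywords are trusted
        return True

node = KeywordTable(capacity=3)
for kw in ["alpha", "beta", "gamma", "alpha", "delta"]:
    node.use(kw)
print(list(node.table))                          # ['gamma', 'alpha', 'delta'] -- 'beta' was evicted
```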
Wireless communication has become an essential part of most people’s daily lives, and demand for it continues to grow with the development and use of IoT technology. To alleviate this ever-increasing demand, technologies such as edge/fog computing are becoming an important part of the communications infrastructure. Most wireless networks use CSMA/CA, which lacks central control; hence, when there are multiple users and access points, network resources related to bandwidth and frequency channels are often underutilized. Accordingly, another paper [7] proposes using logarithmic rate values in the multi-armed bandit (MAB) algorithm that adjusts the modulation and coding scheme of data packets in CSMA/CA networks. To demonstrate the effectiveness of the proposal, the authors use two MAB algorithms that adopt the logarithmic transmission rate and evaluate them with ns-3, an event-driven network simulator. The experimental results showed that the proposed MAB algorithm outperformed the MAB algorithm that does not adopt logarithmic rates in both fixed and non-fixed scenarios.
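The sketch below conveys the idea in [7] with a generic UCB1 bandit: each arm is a modulation/coding rate, and the reward fed back is the logarithm of the achieved rate rather than the raw rate, which favors reliable mid-range rates. The rate set, success probabilities, and the choice of UCB1 are invented for illustration and are not the paper’s algorithms or ns-3 setup.

```python
# Generic UCB1 bandit with a log-rate reward (illustrative channel model).
import math, random

random.seed(0)
rates     = [6.5, 13.0, 26.0, 52.0]      # Mbps per arm (toy MCS set)
p_success = [0.95, 0.90, 0.60, 0.20]     # toy channel: higher rates fail more often

counts, sums = [0] * 4, [0.0] * 4

def reward(arm: int) -> float:
    ok = random.random() < p_success[arm]
    return math.log(1.0 + (rates[arm] if ok else 0.0))   # log-rate reward

for t in range(1, 5001):
    if 0 in counts:                                       # play each arm once first
        arm = counts.index(0)
    else:
        arm = max(range(4), key=lambda a: sums[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))   # UCB1 index
    counts[arm] += 1
    sums[arm] += reward(arm)

print(counts)   # pulls concentrate on the arm with the best expected log-rate
```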
Document classification is an area of natural language processing that assigns documents to categories. With the development of Internet technology and social network services, large amounts of text and media data are created, and research is being undertaken to analyze the latest social issues and consumer trends and to detect spatiotemporal event sentences. However, documents contain not only the important spatiotemporal events required for event analysis but also events that are irrelevant to it; newspaper articles, for example, mention many events, so only the key events necessary for analysis must be extracted to improve the accuracy of event analysis. Therefore, the authors of another paper [8] define representative spatiotemporal event documents for the core topic of a document and propose a BiLSTM-based classification model to identify them. A BiLSTM processes the input sequence of an LSTM in both chronological directions: by connecting the forward and backward hidden layers to the input and output layers, learning in both directions mitigates the problem of results converging on previously seen patterns, and the outputs of the two directions are combined to derive the final output value. The experimental results showed that the BiLSTM model improved the F1 score by 2.6% and accuracy by 4.5% compared with the baseline CNN model.
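The following minimal PyTorch sketch shows a BiLSTM document classifier of the general kind described in [8]: forward and backward final hidden states are concatenated and fed to a linear output layer. The vocabulary size, dimensions, and input data are placeholders, not the paper’s configuration.

```python
# Minimal BiLSTM document classifier (illustrative dimensions and data).
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)    # forward + backward states

    def forward(self, token_ids):                           # (batch, seq_len)
        x = self.embed(token_ids)
        _, (h_n, _) = self.bilstm(x)                         # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=1)               # concatenate both directions
        return self.fc(h)                                    # class logits

model = BiLSTMClassifier()
logits = model(torch.randint(0, 10_000, (4, 50)))            # 4 toy documents of 50 tokens
print(logits.shape)                                          # torch.Size([4, 2])
```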
Recently, the number of people turning to agriculture through urban farming and smart farms has been increasing. As smart farm technology develops, farming has become easier, but support for basic agricultural techniques is still limited. Deep learning-based crop disease diagnosis solutions have been researched, but the focus has been on CNN-based disease detection, so the severity and characteristics of disease symptoms cannot be determined. For a fundamental solution, the characteristics of disease symptoms must be identified and responded to in advance to prevent the spread of crop diseases. Therefore, in another paper [9], the authors propose a crop disease diagnosis solution that applies deep learning-based image captioning and object detection to provide easy and practical help, even to novice farmers. The image captioning model presents accurate disease names and generates accurate, meaningful sentences that aid understanding, while the object detection model detects infected areas and recognizes damaged parts. In the experiments, the average BLEU score of the image captioning model was 64.96%, showing high sentence generation performance, whereas the mAP50 of the object detection model was 0.382, indicating that further improvement is needed.
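For readers unfamiliar with the BLEU metric reported for the captioning model in [9], the snippet below computes a sentence-level BLEU score with NLTK. The reference and candidate captions are invented examples, not from the paper’s dataset.

```python
# Sentence-level BLEU between a reference caption and a generated caption.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["early", "blight", "spots", "on", "the", "lower", "tomato", "leaves"]]
candidate = ["early", "blight", "spots", "on", "lower", "tomato", "leaves"]

score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {score:.3f}")
```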
Wireless sensors and IoT devices are ubiquitous worldwide, and their applications and uses have expanded considerably. In addition, with the introduction of stable high-speed internet, such as 5G and 6G, various new services and applications are emerging in virtual/augmented reality, facial recognition, video games and video streaming, e-health, vehicular networks, and AI. However, these services demand computing capacity and energy efficiency that the devices’ limited computing power and battery capacity cannot provide. Mobile edge computing can alleviate these limitations through task offloading, but when compute-intensive IoT tasks are all sent to the edge server, overload limits the achievable performance improvement. Accordingly, another paper [10] integrates task offloading and load balancing and solves the resource allocation problem of a multi-tier UAV-assisted MEC system. The authors designed a load-balancing algorithm to optimize the load across MEC servers and formulated collaborative offloading, load balancing, and resource allocation as an integer problem; they also proposed a deep reinforcement learning-based algorithm for efficient task offloading. Experimental results show that the proposed model quickly and remarkably reduces system cost (by approximately 41.9%, 44.2%, and 11%) compared with local execution, a global offloading policy, and task offloading, respectively.
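To illustrate only the load-balancing step, the sketch below greedily assigns offloaded tasks to the currently least-loaded MEC server (a longest-processing-time heuristic). This is not the joint DRL-based optimization of [10]; the task demands and server count are placeholders.

```python
# Greedy least-loaded assignment of offloaded tasks to MEC servers.
import heapq

task_loads = [4, 2, 7, 1, 5, 3, 6, 2]        # toy CPU demands of offloaded tasks
servers = [(0.0, s) for s in range(3)]       # (current load, server id) for 3 MEC servers
heapq.heapify(servers)

assignment = {}
for load in sorted(task_loads, reverse=True):     # place the largest tasks first
    current, sid = heapq.heappop(servers)         # pick the least-loaded server
    assignment.setdefault(sid, []).append(load)
    heapq.heappush(servers, (current + load, sid))

for sid, loads in sorted(assignment.items()):
    print(f"server {sid}: tasks {loads} (total {sum(loads)})")
```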
The IoT combines various framework technologies to transmit, receive, and manage real-time data from sensors. Sensors can be used to evaluate and control variables such as temperature, humidity, vibration, or shock; therefore, applying the IoT in various fields, especially in manufacturing execution systems, can improve resource efficiency and remarkably increase production capacity. However, processing all collected data directly on a central server is inefficient and impractical because of limited computing, communication, and storage resources, overall energy and cost, and unreliable latency. Therefore, in another paper [11], the authors designed and implemented an IoT cloud platform using Pub/Sub technology to develop a high-performance IoT platform. As the size and frequency of the data acquired from IoT nodes increased, performance was improved through the MQTT and Kafka protocols and a multi-server architecture: MQTT was applied for the fast processing of small-scale data, Kafka was applied for the stable processing of large-scale data, and various sensors and actuators were installed to measure the growth data of each device using these protocols. The experimental results show that the MQTT-Kafka platform is effective in environments where network bandwidth is limited or large amounts of data are continuously transmitted and received. In addition, the response time to user requests averaged within 100 ms, and for more than 13 million requests the platform preserved the data transmission order while sustaining an average of 113,134.89 records per second in transmission and a processing performance of 64,313 records per second.
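The sketch below shows one common way to bridge the two protocols in the spirit of the platform in [11]: lightweight sensor readings arrive over MQTT and are forwarded into Kafka for durable, high-throughput processing. It assumes the paho-mqtt (1.x callback style) and kafka-python packages and locally running MQTT and Kafka brokers; the topic names and payload format are made up, and this is not the paper’s implementation.

```python
# MQTT-to-Kafka bridge sketch (assumes local brokers; hostnames/topics are illustrative).
import json
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)                 # e.g. {"temp": 21.3, "node": "n7"}
    producer.send("sensor-readings", value=reading)   # hand off to Kafka for bulk processing

mqtt_client = mqtt.Client()
mqtt_client.on_message = on_message
mqtt_client.connect("localhost", 1883)
mqtt_client.subscribe("sensors/#")                    # all sensor topics
mqtt_client.loop_forever()
```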
The IoT is expected to reach 30.9 billion connected devices by 2025, driving exponential data growth shaped by diverse application source classifications, mission criticality, and privacy constraints. This growth poses challenges in real-world scenarios related to data privacy, the automation of big data management, and latency-efficient connectivity. In existing architectures, building intelligent learning models requires sending local data to cloud servers, resulting in high backhaul congestion, personalization leakage, and the underutilization of network resources. Federated learning (FL) was introduced to address these problems, but they have not yet been fully solved. Therefore, another paper [12] presents a state-of-the-art approach to optimizing the orchestration of FL communication through deep reinforcement learning (DRL)-based autonomous control. The authors summarize the contributing perspectives of five major application areas, providing insightful system architectures, processing flows, and critical parameters proposed in recent research on DRL/FL-based approaches. Additionally, the paper discusses the potential challenges of, and directions for, DRL-based efficient FL (eFL) and partially supported DRL/FL systems becoming a practical approach in real-world deployments.
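For context on the FL side of [12], the sketch below shows the basic federated-averaging (FedAvg) aggregation step, in which a server combines client parameters in proportion to local dataset size. The DRL-based orchestration surveyed in the paper (deciding which clients and rounds to schedule) is not shown; the parameter vectors and dataset sizes are toy values.

```python
# Weighted federated averaging of per-client model parameters.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors by local data size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                # (num_clients, num_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(3)
clients = [rng.normal(size=5) for _ in range(4)]      # toy local model parameters
sizes = [100, 400, 250, 50]                           # local dataset sizes
print(fed_avg(clients, sizes))
```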

3. Conclusions

This Special Issue features 11 high-quality articles obtained following a rigorous review process. They present a recommendation system using deep learning and K-means; traffic-aware optimization of task offloading and content caching in the IoV; performance improvement of XR environments with fog/edge computing; a keyword-based trust management system; multi-armed bandit-based transmission rate control; machine learning-based spatiotemporal event document classification; deep learning-based image captioning and object detection; and an IoT cloud computing framework, among others. We hope that many readers will be able to make good use of the papers in this Special Issue.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Jin, X.; Hu, J.; Wang, J.; Zhang, S. A Profit-Aware Double-Layer Edge Equipment Deployment Approach for Cloud Operators in Multi-Access Edge Computing. Hum.-Centric Comput. Inf. Sci. 2024, 14, 23.
2. Siet, S.; Peng, S.; Ilkhomjon, S.; Kang, M.; Park, D.-S. Enhancing Sequence Movie Recommendation System Using Deep Learning and KMeans. Appl. Sci. 2024, 14, 2505.
3. Wang, P.; Wang, Y.; Qiao, J.; Hu, Z. Traffic-Aware Optimization of Task Offloading and Content Caching in the Internet of Vehicles. Appl. Sci. 2023, 13, 13069.
4. Lee, E.-S.; Shin, B.-S. Enhancing the Performance of XR Environments Using Fog and Cloud Computing. Appl. Sci. 2023, 13, 12477.
5. Kim, Y.; Kim, D.; Park, S.; Kim, Y.; Hong, J.; Hong, S.; Jeong, J.; Lee, B.; Oh, H. A Proposed Settlement and Distribution Structure for Music Royalties in Korea and Their Artificial Intelligence-Based Applications. Appl. Sci. 2023, 13, 11109.
6. Alwakeel, A.M. Performance Analysis of a Keyword-Based Trust Management System for Fog Computing. Appl. Sci. 2023, 13, 8714.
7. Cho, S. Use of Logarithmic Rates in Multi-Armed Bandit-Based Transmission Rate Control Embracing Frame Aggregations in Wireless Networks. Appl. Sci. 2023, 13, 8485.
8. Kim, B.; Yang, Y.; Park, J.S.; Jang, H.-J. Machine Learning Based Representative Spatio-Temporal Event Documents Classification. Appl. Sci. 2023, 13, 4230.
9. Lee, D.I.; Lee, J.H.; Jang, S.H.; Oh, S.J.; Doo, I.C. Crop Disease Diagnosis with Deep Learning-Based Image Captioning and Object Detection. Appl. Sci. 2023, 13, 3148.
10. Elgendy, I.A.; Meshoul, S.; Hammad, M. Joint Task Offloading, Resource Allocation, and Load-Balancing Optimization in Multi-UAV-Aided MEC Systems. Appl. Sci. 2023, 13, 2625.
11. Nam, J.; Jun, Y.; Choi, M. High Performance IoT Cloud Computing Framework Using Pub/Sub Techniques. Appl. Sci. 2022, 12, 11009.
12. Tam, P.; Corrado, R.; Eang, C.; Kim, S. Applicability of Deep Reinforcement Learning for Efficient Federated Learning in Massive IoT Communications. Appl. Sci. 2023, 13, 3083.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
