Cloud and Edge Computing Systems for IoT

A special issue of IoT (ISSN 2624-831X).

Deadline for manuscript submissions: closed (29 February 2024)

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
Interests: system software for cloud; IoT and edge

Guest Editor
Department of Computer Science, Southeast Missouri State University, One University Plaza, MS 5950, Cape Girardeau, MO 63701, USA
Interests: security and performance of distributed systems

Special Issue Information

Dear Colleagues,

We are in the midst of rapid growth in Internet-of-Things (IoT)-related multimodal applications across diverse areas such as smart cities, smart homes, autonomous vehicles, agriculture, manufacturing, and smart power grids. The rapid rise of distributed, heterogeneous, and collaborative IoT is empowered by low-cost embedded devices; on-demand, massively scalable computing in the Cloud; high-speed communication networks; and big data storage technologies and tools. Motivated by constraints on latency, bandwidth, cost, and data privacy, IoT designers have been offloading resource-intensive functionalities to the emerging network infrastructure tier, i.e., edge nodes. This calls for the development of novel computation, memory, and power paradigms that support the interplay of Cloud, edge computing, and IoT devices.

With the increasing complexity of current and future IoT systems, new hardware and software systems and paradigms are urgently needed by application programmers with limited systems-level knowledge and skills. These would allow them to rapidly develop and deploy diverse IoT applications that satisfy multiple criteria, such as functionality, performance, sustainability, and security requirements. For example, low-power embedded hardware accelerators for deep learning allow computationally intensive AI algorithms to run at the Edge instead of the Cloud, decreasing application latency and reducing network bandwidth requirements. Similarly, a distributed software stack that spans Cloud, Edge, and IoT nodes facilitates convenient abstractions for developing IoT applications that must scale to hundreds of thousands of IoT nodes over a large geographic area.

While the research community has started addressing the IoT systems challenges described above, many open research questions remain, including the following:

  1. What types of specialized hardware accelerators should be architected for IoT and Edge nodes so that power-efficient performance is achieved while the development cost is amortized over a sufficiently large variety of applications?
  2. What system software abstractions are required to manage the tremendous scale and heterogeneity of IoT devices?
  3. How can deep learning algorithms be modified to function efficiently within the resource limitations of IoT systems?
  4. How can we facilitate vendor-neutral platforms that allow multi-cloud operations, enabling the use of best-of-breed services from competing vendors while preventing vendor lock-in?
  5. How does the rollout of high-speed 5G networks impact the placement and migration of computing tasks between the Edge and the Cloud?
  6. How do the falling costs of high-speed storage devices such as NVMe SSDs, the emergence of non-volatile memories, and new high-speed interconnect standards such as CXL 2.0 impact the computing and storage stack at the IoT Edge?
  7. What are effective and efficient security and privacy capabilities that can best support diverse IoT applications and devices?

In this Special Issue on Cloud and Edge Computing Systems for IoT, we invite researchers to submit previously unpublished research that addresses the above questions and related issues.

Dr. Arun Ravindran
Dr. Reshmi Mitra
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. IoT is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and written in clear English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • IoT
  • cloud
  • edge
  • systems
  • AI hardware accelerators
  • system software
  • security

Published Papers (5 papers)


Research

Jump to: Review

23 pages, 1384 KiB  
Article
FedMon: A Federated Learning Monitoring Toolkit
by Moysis Symeonides, Demetris Trihinas and Fotis Nikolaidis
IoT 2024, 5(2), 227-249; https://doi.org/10.3390/iot5020012 - 11 Apr 2024
Abstract
Federated learning (FL) is rapidly shaping into a key enabler for large-scale Artificial Intelligence (AI) where models are trained in a distributed fashion by several clients without sharing local and possibly sensitive data. For edge computing, sharing the computational load across multiple clients is ideal, especially when the underlying IoT and edge nodes encompass limited resource capacity. Despite its wide applicability, monitoring FL deployments comes with significant challenges. AI practitioners are required to invest a vast amount of time (and labor) in manually configuring state-of-the-art monitoring tools. This entails addressing the unique characteristics of the FL training process, including the extraction of FL-specific and system-level metrics, aligning metrics to training rounds, pinpointing performance inefficiencies, and comparing current to previous deployments. This work introduces FedMon, a toolkit designed to ease the burden of monitoring FL deployments by seamlessly integrating the probing interface with the FL deployment, automating the metric extraction, providing a rich set of system, dataset, model, and experiment-level metrics, and providing the analytic means to assess trade-offs and compare different model and training configurations.
(This article belongs to the Special Issue Cloud and Edge Computing Systems for IoT)

15 pages, 1126 KiB  
Article
Enhancing Automatic Modulation Recognition for IoT Applications Using Transformers
by Narges Rashvand, Kenneth Witham, Gabriel Maldonado, Vinit Katariya, Nishanth Marer Prabhu, Gunar Schirner and Hamed Tabkhi
IoT 2024, 5(2), 212-226; https://doi.org/10.3390/iot5020011 - 9 Apr 2024
Abstract
Automatic modulation recognition (AMR) is vital for accurately identifying modulation types within incoming signals, a critical task for optimizing operations within edge devices in IoT ecosystems. This paper presents an innovative approach that leverages Transformer networks, initially designed for natural language processing, to address the challenges of efficient AMR. Our Transformer network architecture is designed with real-time edge computing on IoT devices in mind. Four tokenization techniques are proposed and explored for creating proper embeddings of RF signals, specifically focusing on overcoming the limitations related to model size often encountered in IoT scenarios. Extensive experiments reveal that our proposed method outperformed advanced deep learning techniques, achieving the highest recognition accuracy. Notably, our model achieved an accuracy of 65.75 on the RML2016 dataset and 65.80 on the CSPB.ML.2018+ dataset.
(This article belongs to the Special Issue Cloud and Edge Computing Systems for IoT)
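The tokenization idea in the abstract above can be illustrated with a minimal sketch. One plausible approach (an assumption on our part, not the paper's actual tokenizer) is to slice a 2 × N array of I/Q samples into non-overlapping, fixed-length segments, each flattened into one Transformer token:

```python
import numpy as np

def tokenize_iq(signal: np.ndarray, token_len: int) -> np.ndarray:
    """Split a (2, N) I/Q signal into (N // token_len) tokens of shape
    (2 * token_len,), each a flattened non-overlapping segment of samples."""
    n_tokens = signal.shape[1] // token_len
    trimmed = signal[:, : n_tokens * token_len]      # drop any ragged tail
    # (2, n_tokens, token_len) -> (n_tokens, 2, token_len) -> (n_tokens, 2 * token_len)
    return trimmed.reshape(2, n_tokens, token_len).transpose(1, 0, 2).reshape(n_tokens, -1)

sig = np.random.randn(2, 128)    # 128 complex samples as I and Q rows
tokens = tokenize_iq(sig, 16)
print(tokens.shape)              # (8, 32)
```

Keeping `token_len` small bounds the sequence length and embedding width, which is one way to keep model size within the IoT constraints the abstract mentions.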

24 pages, 1708 KiB  
Article
Constraint-Aware Federated Scheduling for Data Center Workloads
by Meghana Thiyyakat, Subramaniam Kalambur and Dinkar Sitaram
IoT 2023, 4(4), 534-557; https://doi.org/10.3390/iot4040023 - 8 Nov 2023
Abstract
The use of data centers is ubiquitous, as they support multiple technologies across domains for storing, processing, and disseminating data. IoT applications utilize both cloud data centers and edge data centers based on the nature of the workload. Due to the stringent latency requirements of IoT applications, the workloads are run on hardware accelerators such as FPGAs and GPUs for faster execution. The introduction of such hardware, alongside existing variations in the hardware and software configurations of the machines in the data center, increases the heterogeneity of the infrastructure. Optimal job performance necessitates the satisfaction of task placement constraints. This is accomplished through constraint-aware scheduling, where tasks are scheduled on worker nodes with appropriate machine configurations. The presence of placement constraints limits the number of suitable resources available to run a task, leading to queuing delays. As federated schedulers have gained prominence for their speed and scalability, we assess the performance of two such schedulers, Megha and Pigeon, within a constraint-aware context. We extend our previous work on Megha by comparing its performance with PigeonC, a constraint-aware version of the state-of-the-art federated scheduler Pigeon. The results of our experiments with synthetic and real-world cluster traces show that Megha reduces the 99th percentile of job response time delays by a factor of 10 when compared to PigeonC. We also describe enhancements made to Megha's architecture to improve its scheduling efficiency.
(This article belongs to the Special Issue Cloud and Edge Computing Systems for IoT)
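The core of constraint-aware placement described in the abstract above can be sketched as follows. This is an illustrative toy (the names, data structures, and least-loaded policy are our assumptions, not Megha's or PigeonC's actual design): a task is eligible only for workers whose machine configuration satisfies all of its placement constraints, and otherwise it queues.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    attributes: set   # machine configuration, e.g. {"gpu", "nvme"}
    load: int = 0     # number of tasks currently placed

def place_task(constraints: set, workers: list):
    """Place a task on the least-loaded worker whose configuration
    satisfies every placement constraint; return None if no worker fits."""
    eligible = [w for w in workers if constraints <= w.attributes]
    if not eligible:
        return None   # task queues until a suitable worker becomes available
    chosen = min(eligible, key=lambda w: w.load)
    chosen.load += 1
    return chosen

workers = [Worker("w1", {"gpu"}), Worker("w2", {"gpu", "fpga"}), Worker("w3", set())]
w = place_task({"fpga"}, workers)
print(w.name)   # w2: the only worker with an FPGA
```

The fewer workers satisfy a constraint set, the smaller `eligible` becomes, which is exactly the source of the queuing delays the abstract discusses.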

19 pages, 639 KiB  
Article
Efficient Non-DHT-Based RC-Based Architecture for Fog Computing in Healthcare 4.0
by Indranil Roy, Reshmi Mitra, Nick Rahimi and Bidyut Gupta
IoT 2023, 4(2), 131-149; https://doi.org/10.3390/iot4020008 - 10 May 2023
Abstract
Cloud-computing capabilities have revolutionized the remote processing of exploding volumes of healthcare data. However, cloud-based analytics capabilities are saddled with a lack of context-awareness and unnecessary access latency issues, as data are processed and stored in remote servers. The emerging network infrastructure tier of fog computing can reduce expensive latency by bringing storage, processing, and networking closer to sensor nodes. Due to the growing variety of medical data and service types, there is a crucial need for an efficient and secure architecture for sensor-based health-monitoring devices connected to fog nodes. In this paper, we present a publish/subscribe- and interest/resource-based non-DHT peer-to-peer (P2P) RC-based architecture for resource discovery. The publish/subscribe communication model provides a scalable way to handle large volumes of data and messages in real time while allowing fine-grained access control to messages, thus enabling heightened security. Our two-level overlay network consists of (1) a transit ring containing group-heads, each representing a particular resource type, and (2) completely connected groups of peers. Our theoretical analysis shows that our search latency is independent of the number of peers. Additionally, the complexity of the intra-group data-lookup protocol is constant, and the complexity of the inter-group data lookup is O(n), where n is the total number of resource types present in the network. Overall, the architecture therefore allows the system to handle large data throughput in a flexible, cost-effective, and secure way for medical IoT systems.
(This article belongs to the Special Issue Cloud and Edge Computing Systems for IoT)
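The complexity claims in the abstract above can be made concrete with a hedged sketch of the two-level lookup: inter-group lookup walks the transit ring of group-heads, O(n) in the number of resource types, while intra-group lookup within a fully connected group is a constant-time dictionary access. All names here are illustrative assumptions, not the paper's implementation.

```python
class GroupHead:
    """One group-head per resource type; its group of peers is fully connected,
    modeled here as a dict for constant-time intra-group lookup."""
    def __init__(self, resource_type: str):
        self.resource_type = resource_type
        self.peers = {}   # peer_id -> published data

class Overlay:
    def __init__(self):
        self.ring = []    # transit ring: one group-head per resource type

    def add(self, resource_type: str, peer_id: str, data):
        head = next((g for g in self.ring if g.resource_type == resource_type), None)
        if head is None:
            head = GroupHead(resource_type)
            self.ring.append(head)
        head.peers[peer_id] = data

    def lookup(self, resource_type: str, peer_id: str):
        for head in self.ring:                    # inter-group: O(n) ring walk
            if head.resource_type == resource_type:
                return head.peers.get(peer_id)    # intra-group: O(1)
        return None

ov = Overlay()
ov.add("ecg-stream", "peer-7", "ward-3 monitor")
print(ov.lookup("ecg-stream", "peer-7"))   # ward-3 monitor
print(ov.lookup("x-ray", "peer-7"))        # None
```

Note that search cost depends only on the number of resource types on the ring, not on how many peers each group contains, matching the latency independence stated in the abstract.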

Review

Jump to: Research

32 pages, 1141 KiB  
Review
Analyzing Threats and Attacks in Edge Data Analytics within IoT Environments
by Poornima Mahadevappa, Redhwan Al-amri, Gamal Alkawsi, Ammar Ahmed Alkahtani, Mohammed Fahad Alghenaim and Mohammed Alsamman
IoT 2024, 5(1), 123-154; https://doi.org/10.3390/iot5010007 - 5 Mar 2024
Abstract
Edge data analytics refers to processing near data sources at the edge of the network to reduce delays in data transmission and, consequently, enable real-time interactions. However, data analytics at the edge introduces numerous security risks that can impact the data being processed. Thus, safeguarding sensitive data from being exposed to illegitimate users is crucial to avoiding uncertainties and maintaining the overall quality of the service offered. Most existing edge security models have considered attacks during data analysis as an afterthought. In this paper, an overview of edge data analytics in healthcare, traffic management, and smart city use cases is provided, including the possible attacks and their impacts on edge data analytics. Further, existing models are investigated to understand how these attacks are handled and research gaps are identified. Finally, research directions to enhance data analytics at the edge are presented.
(This article belongs to the Special Issue Cloud and Edge Computing Systems for IoT)
