Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

19 pages, 756 KB  
Article
AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities
by Chuhao Wu, He Zhang and John M. Carroll
Future Internet 2024, 16(10), 354; https://doi.org/10.3390/fi16100354 - 28 Sep 2024
Cited by 7 | Viewed by 15684
Abstract
Generative AI has drawn significant attention from stakeholders in higher education. As it introduces new opportunities for personalized learning and tutoring support, it simultaneously poses challenges to academic integrity and leads to ethical issues. Consequently, governing responsible AI usage within higher education institutions (HEIs) becomes increasingly important. Leading universities have already published guidelines on Generative AI, with most attempting to embrace this technology responsibly. This study provides a new perspective by focusing on strategies for responsible AI governance as demonstrated in these guidelines. Through a case study of 14 prestigious universities in the United States, we identified the multi-unit governance of AI, the role-specific governance of AI, and the academic characteristics of AI governance from their AI guidelines. The strengths and potential limitations of these strategies and characteristics are discussed. The findings offer practical implications for guiding responsible AI usage in HEIs and beyond. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

28 pages, 3973 KB  
Systematic Review
Edge Computing in Healthcare: Innovations, Opportunities, and Challenges
by Alexandru Rancea, Ionut Anghel and Tudor Cioara
Future Internet 2024, 16(9), 329; https://doi.org/10.3390/fi16090329 - 10 Sep 2024
Cited by 32 | Viewed by 17752
Abstract
Edge computing, which promises to process data close to its generation point and thereby reduce latency and bandwidth usage compared with traditional cloud computing architectures, has attracted significant attention lately. The integration of edge computing in modern systems takes advantage of Internet of Things (IoT) devices and can potentially improve systems’ performance, scalability, privacy, and security, with applications in different domains. In the healthcare domain, modern IoT devices can be used to gather vital parameters and information that can be fed to edge Artificial Intelligence (AI) techniques able to offer valuable insights and support to healthcare professionals. However, issues regarding data privacy and security, AI optimization, and computational offloading at the edge pose challenges to the adoption of edge AI. This paper explores the current state of the art of edge AI in healthcare by using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology and analyzing more than 70 Web of Science articles. We have defined the relevant research questions and clear inclusion and exclusion criteria, and classified the research works in three main directions: privacy and security, AI-based optimization methods, and edge offloading techniques. The findings highlight the many advantages of integrating edge computing in a wide range of healthcare use cases requiring data privacy and security, near real-time decision-making, and efficient communication links, with the potential to transform future healthcare services and eHealth applications. However, further research is needed to enforce new security-preserving methods and to better orchestrate and coordinate the load in distributed and decentralized scenarios. Full article
(This article belongs to the Special Issue Privacy and Security Issues in IoT Systems)

34 pages, 2225 KB  
Review
Graph Attention Networks: A Comprehensive Review of Methods and Applications
by Aristidis G. Vrahatis, Konstantinos Lazaros and Sotiris Kotsiantis
Future Internet 2024, 16(9), 318; https://doi.org/10.3390/fi16090318 - 3 Sep 2024
Cited by 48 | Viewed by 23590
Abstract
Real-world problems often exhibit complex relationships and dependencies, which can be effectively captured by graph learning systems. Graph attention networks (GATs) have emerged as a powerful and versatile framework in this direction, inspiring numerous extensions and applications in several areas. In this review, we present a thorough examination of GATs, covering both diverse approaches and a wide range of applications. We examine the principal GAT-based categories, including Global Attention Networks, Multi-Layer Architectures, graph-embedding techniques, Spatial Approaches, and Variational Models. Furthermore, we delve into the diverse applications of GATs in various systems such as recommendation systems, image analysis, medical domain, sentiment analysis, and anomaly detection. This review seeks to act as a navigational reference for researchers and practitioners aiming to emphasize the capabilities and prospects of GATs. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technologies in Greece 2024–2025)
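To make the attention mechanism at the heart of GATs concrete, here is a minimal NumPy sketch of a single-head graph attention layer in the style the review surveys; the graph, feature dimensions, and random weights are toy values for illustration only, not material from the paper.

```python
# Minimal single-head graph attention layer (GAT-style) in NumPy; toy example only.
import numpy as np

def gat_layer(H, A, W, a, neg_slope=0.2):
    """H: (N, F) node features, A: (N, N) adjacency with self-loops,
    W: (F, F') weight matrix, a: (2*F',) attention vector."""
    Wh = H @ W                                   # (N, F') transformed features
    Fp = Wh.shape[1]
    # Raw attention logits e_ij = LeakyReLU(a^T [Wh_i || Wh_j])
    src = Wh @ a[:Fp]                            # (N,) source-node contribution
    dst = Wh @ a[Fp:]                            # (N,) target-node contribution
    e = src[:, None] + dst[None, :]              # (N, N) pairwise logits
    e = np.where(e > 0, e, neg_slope * e)        # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask non-edges before softmax
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # softmax over each node's neighbours
    return att @ Wh                              # (N, F') aggregated node embeddings

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                      # 4 nodes, 3 input features
A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
out = gat_layer(H, A, rng.normal(size=(3, 2)), rng.normal(size=(4,)))
print(out.shape)                                 # (4, 2)
```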

16 pages, 456 KB  
Review
A Survey on Data Availability in Layer 2 Blockchain Rollups: Open Challenges and Future Improvements
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(9), 315; https://doi.org/10.3390/fi16090315 - 29 Aug 2024
Cited by 9 | Viewed by 5333
Abstract
Layer 2 solutions have emerged in recent years as a valuable way to increase the throughput and scalability of blockchain-based architectures. The three primary types of Layer 2 solutions are state channels, sidechains, and rollups. Rollups are particularly promising, allowing significant improvements in transaction throughput, security, and efficiency, and have been adopted by many real-world projects, such as Polygon and Optimism. However, the adoption of Layer 2 solutions has introduced other challenges, such as the data availability problem, whereby transaction data processed off-chain must be posted back to the main chain. This is crucial to prevent data withholding attacks and to ensure that all participants can independently verify the blockchain state. This paper provides a comprehensive survey of existing rollup-based Layer 2 solutions with a focus on the data availability problem and discusses their major advantages and disadvantages. Finally, an analysis of open challenges and future research directions is provided. Full article
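As a concrete illustration of the commitment idea underlying data availability, the following self-contained Python sketch has a rollup operator batch transactions off-chain and publish only a Merkle root as the on-chain commitment; the transactions and the "on-chain" step are mocked, and none of this is code from the surveyed protocols.

```python
# Illustrative sketch: off-chain batch, on-chain Merkle-root commitment, and a
# verification step that detects withheld or tampered data.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    nodes = [h(leaf) for leaf in leaves]
    if not nodes:
        return h(b"")
    while len(nodes) > 1:
        if len(nodes) % 2:                 # duplicate last node on odd levels
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

batch = [b"tx1: alice->bob 5", b"tx2: bob->carol 2", b"tx3: carol->dave 1"]
commitment = merkle_root(batch)            # this digest is what would go on-chain

# A verifier who later downloads the published batch recomputes the root:
assert merkle_root(batch) == commitment
# Tampering (or publishing a different batch) no longer matches the commitment:
assert merkle_root([b"tx1: alice->bob 500"] + batch[1:]) != commitment
print(commitment.hex())
```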

32 pages, 1667 KB  
Review
Artificial Intelligence Applications in Smart Healthcare: A Survey
by Xian Gao, Peixiong He, Yi Zhou and Xiao Qin
Future Internet 2024, 16(9), 308; https://doi.org/10.3390/fi16090308 - 27 Aug 2024
Cited by 11 | Viewed by 12899
Abstract
The rapid development of AI technology in recent years has led to its widespread use in daily life, where it plays an increasingly important role. In healthcare, AI has been integrated into the field to develop the new domain of smart healthcare. In smart healthcare, opportunities and challenges coexist. This article provides a comprehensive overview of past developments and recent progress in this area. First, we summarize the definition and characteristics of smart healthcare. Second, we explore the opportunities that AI technology brings to the smart healthcare field from a macro perspective. Third, we categorize specific AI applications in smart healthcare into ten domains and discuss their technological foundations individually. Finally, we identify ten key challenges these applications face and discuss the existing solutions for each. Full article
(This article belongs to the Special Issue eHealth and mHealth)

29 pages, 521 KB  
Review
A Survey on the Use of Large Language Models (LLMs) in Fake News
by Eleftheria Papageorgiou, Christos Chronis, Iraklis Varlamis and Yassine Himeur
Future Internet 2024, 16(8), 298; https://doi.org/10.3390/fi16080298 - 19 Aug 2024
Cited by 21 | Viewed by 16868
Abstract
The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The article delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as both a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact-checking, verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the article covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, LLMs face challenges such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based fake news and fake profile detection, underscoring the importance of continued innovation to safeguard the authenticity of online information. Full article

37 pages, 1164 KB  
Article
Early Ransomware Detection with Deep Learning Models
by Matan Davidian, Michael Kiperberg and Natalia Vanetik
Future Internet 2024, 16(8), 291; https://doi.org/10.3390/fi16080291 - 11 Aug 2024
Cited by 5 | Viewed by 3702
Abstract
Ransomware is an increasingly prevalent type of malware that restricts access to the victim’s system or data until a ransom is paid. Traditional detection methods rely on analyzing the malware’s content, but these methods are ineffective against unknown or zero-day malware. Therefore, zero-day malware detection typically involves observing the malware’s behavior, specifically the sequence of application programming interface (API) calls it makes, such as reading and writing files or enumerating directories. While previous studies have used machine learning (ML) techniques to classify API call sequences, they have only considered the API call name. This paper systematically compares various subsets of API call features, different ML techniques, and context-window sizes to identify the optimal ransomware classifier. Our findings indicate that a context-window size of 7 is ideal, and the most effective ML techniques are CNN and LSTM. Additionally, augmenting the API call name with the operation result significantly enhances the classifier’s precision. Performance analysis suggests that this classifier can be effectively applied in real-time scenarios. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
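To make the windowing idea concrete, here is a hedged Python sketch that slides a context window of length 7 over an API-call trace and pairs each call name with its operation result, producing integer-encoded samples for a sequence classifier; the trace, vocabulary, and encoding are made up for illustration and are not the authors' feature set.

```python
# Sliding context windows over (API name, operation result) pairs; toy trace only.
WINDOW = 7

trace = [("NtCreateFile", "SUCCESS"), ("NtWriteFile", "SUCCESS"),
         ("NtQueryDirectoryFile", "SUCCESS"), ("NtWriteFile", "ACCESS_DENIED"),
         ("NtSetInformationFile", "SUCCESS"), ("NtWriteFile", "SUCCESS"),
         ("NtClose", "SUCCESS"), ("NtCreateFile", "SUCCESS"),
         ("NtWriteFile", "SUCCESS"), ("NtDeleteFile", "SUCCESS")]

def windows(events, size=WINDOW):
    """Yield overlapping context windows of `size` consecutive API events."""
    for start in range(len(events) - size + 1):
        yield events[start:start + size]

# Build a joint vocabulary over (name, result) pairs and encode each window
# as integer token ids, ready to feed a CNN or LSTM classifier.
vocab = {pair: idx for idx, pair in enumerate(sorted(set(trace)))}
encoded = [[vocab[ev] for ev in win] for win in windows(trace)]

print(len(encoded), "windows of length", WINDOW)
print(encoded[0])
```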

25 pages, 3302 KB  
Article
Multi-Class Intrusion Detection Based on Transformer for IoT Networks Using CIC-IoT-2023 Dataset
by Shu-Ming Tseng, Yan-Qi Wang and Yung-Chung Wang
Future Internet 2024, 16(8), 284; https://doi.org/10.3390/fi16080284 - 8 Aug 2024
Cited by 23 | Viewed by 11799
Abstract
This study uses deep learning methods to explore Internet of Things (IoT) network intrusion detection based on the CIC-IoT-2023 dataset, which contains extensive data on real-life IoT environments. Based on this, the study proposes an effective intrusion detection method that applies seven deep learning models, including a Transformer, to analyze network traffic characteristics and identify abnormal behavior and potential intrusions through binary and multi-class classification. Compared with other papers, we not only use a Transformer model but also consider its performance in multi-class classification. Although the accuracy of the Transformer model in binary classification is lower than that of the DNN and CNN + LSTM hybrid models, it achieves better results in multi-class classification. The binary classification accuracy of our model is 0.74% higher than that of papers that also use a Transformer on TON-IoT. In multi-class classification, our best-performing model is the Transformer, which reaches 99.40% accuracy. Its accuracy is 3.8%, 0.65%, and 0.29% higher than the 95.60%, 98.75%, and 99.11% figures recorded in papers using the same dataset, respectively. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
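For orientation, here is a minimal PyTorch sketch of a Transformer-based multi-class traffic classifier in the spirit of the abstract; the feature count, the chunking of each flow vector into a short token sequence, and the number of classes are assumptions for illustration, not the authors' architecture.

```python
# Toy Transformer-encoder classifier over flow-feature "tokens"; sizes are illustrative.
import torch
import torch.nn as nn

class FlowTransformer(nn.Module):
    def __init__(self, n_features=46, patch=2, d_model=64, n_heads=4,
                 n_layers=2, n_classes=8):
        super().__init__()
        assert n_features % patch == 0
        self.patch = patch
        self.proj = nn.Linear(patch, d_model)          # tokenize the flow vector
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                              # x: (batch, n_features)
        tokens = x.view(x.size(0), -1, self.patch)     # (batch, seq, patch)
        z = self.encoder(self.proj(tokens))            # (batch, seq, d_model)
        return self.head(z.mean(dim=1))                # pooled logits per class

model = FlowTransformer()
flows = torch.randn(16, 46)                            # 16 fake flow records
print(model(flows).shape)                              # torch.Size([16, 8])
```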

25 pages, 3477 KB  
Article
Overlay and Virtual Private Networks Security Performances Analysis with Open Source Infrastructure Deployment
by Antonio Francesco Gentile, Davide Macrì, Emilio Greco and Peppino Fazio
Future Internet 2024, 16(8), 283; https://doi.org/10.3390/fi16080283 - 7 Aug 2024
Cited by 5 | Viewed by 2825
Abstract
Nowadays, some of the most well-deployed infrastructures are Virtual Private Networks (VPNs) and Overlay Networks (ONs). They consist of hardware and software components designed to build private/secure channels, typically over the Internet. They are currently among the most reliable technologies for achieving this objective. VPNs are well-established and can be patched to address security vulnerabilities, while overlay networks represent the next-generation solution for secure communication. In this paper, for both VPNs and ONs, we analyze some important network performance components (RTT and bandwidth) while varying the type of overlay networks utilized for interconnecting traffic between two or more hosts (in the same data center, in different data centers in the same building, or over the Internet). These networks establish connections between KVM (Kernel-based Virtual Machine) instances rather than the typical Docker/LXC/Podman containers. The first analysis aims to assess network performance as it is, without any overlay channels. Meanwhile, the second establishes various channels without encryption and the final analysis encapsulates overlay traffic via IPsec (Transport mode), where encrypted channels like VTI are not already available for use. A deep set of traffic simulation campaigns shows the obtained performance. Full article

32 pages, 15790 KB  
Review
Human–AI Collaboration for Remote Sighted Assistance: Perspectives from the LLM Era
by Rui Yu, Sooyeon Lee, Jingyi Xie, Syed Masum Billah and John M. Carroll
Future Internet 2024, 16(7), 254; https://doi.org/10.3390/fi16070254 - 18 Jul 2024
Cited by 6 | Viewed by 5642
Abstract
Remote sighted assistance (RSA) has emerged as a conversational technology aiding people with visual impairments (VI) through real-time video chat communication with sighted agents. We conducted a literature review and interviewed 12 RSA users to understand the technical and navigational challenges faced by both agents and users. The technical challenges were categorized into four groups: agents’ difficulties in orienting and localizing users, acquiring and interpreting users’ surroundings and obstacles, delivering information specific to user situations, and coping with poor network connections. We also presented 15 real-world navigational challenges, including 8 outdoor and 7 indoor scenarios. Given the spatial and visual nature of these challenges, we identified relevant computer vision problems that could potentially provide solutions. We then formulated 10 emerging problems that neither human agents nor computer vision can fully address alone. For each emerging problem, we discussed solutions grounded in human–AI collaboration. Additionally, with the advent of large language models (LLMs), we outlined how RSA can integrate with LLMs within a human–AI collaborative framework, envisioning the future of visual prosthetics. Full article

23 pages, 714 KB  
Review
Smart Irrigation Systems from Cyber–Physical Perspective: State of Art and Future Directions
by Mian Qian, Cheng Qian, Guobin Xu, Pu Tian and Wei Yu
Future Internet 2024, 16(7), 234; https://doi.org/10.3390/fi16070234 - 29 Jun 2024
Cited by 16 | Viewed by 4796
Abstract
Irrigation refers to supplying water to soil through pipes, pumps, and spraying systems to ensure even distribution across the field. In traditional farming or gardening, the setup and usage of an agricultural irrigation system rely solely on the personal experience of farmers. The Food and Agriculture Organization of the United Nations (UN) has projected that by 2030, developing countries will expand their irrigated areas by 34%, while water consumption will increase by only 14%. This discrepancy highlights the importance of accurately monitoring water flow and volume rather than relying on rough estimates. Smart irrigation systems, a key subsystem of smart agriculture and a cyber–physical system (CPS) in the agriculture domain, automate the administration of water flow, volume, and timing using cutting-edge technologies, especially Internet of Things (IoT) technology, to address these challenges. This study explores a comprehensive three-dimensional problem space to thoroughly analyze the IoT’s applications in irrigation systems. Our framework encompasses several critical domains in smart irrigation systems, including soil science, sensor technology, communication protocols, data analysis techniques, and the practical implementation of automated irrigation systems, such as remote monitoring, autonomous operation, and intelligent decision-making processes. Finally, we discuss a few challenges and outline future research directions in this promising field. Full article

36 pages, 3662 KB  
Article
Enhancing Network Slicing Security: Machine Learning, Software-Defined Networking, and Network Functions Virtualization-Driven Strategies
by José Cunha, Pedro Ferreira, Eva M. Castro, Paula Cristina Oliveira, Maria João Nicolau, Iván Núñez, Xosé Ramon Sousa and Carlos Serôdio
Future Internet 2024, 16(7), 226; https://doi.org/10.3390/fi16070226 - 27 Jun 2024
Cited by 22 | Viewed by 7739
Abstract
The rapid development of 5G networks and the anticipation of 6G technologies have ushered in an era of highly customizable network environments facilitated by the innovative concept of network slicing. This technology allows the creation of multiple virtual networks on the same physical infrastructure, each optimized for specific service requirements. Despite its numerous benefits, network slicing introduces significant security vulnerabilities that must be addressed to prevent exploitation by increasingly sophisticated cyber threats. This review explores the application of cutting-edge technologies—Artificial Intelligence (AI), specifically Machine Learning (ML), Software-Defined Networking (SDN), and Network Functions Virtualization (NFV)—in crafting advanced security solutions tailored for network slicing. AI’s predictive threat detection and automated response capabilities are analysed, highlighting its role in maintaining service integrity and resilience. Meanwhile, SDN and NFV are scrutinized for their ability to enforce flexible security policies and manage network functionalities dynamically, thereby enhancing the adaptability of security measures to meet evolving network demands. Thoroughly examining the current literature and industry practices, this paper identifies critical research gaps in security frameworks and proposes innovative solutions. We advocate for a holistic security strategy integrating ML, SDN, and NFV to enhance data confidentiality, integrity, and availability across network slices. The paper concludes with future research directions to develop robust, scalable, and efficient security frameworks capable of supporting the safe deployment of network slicing in next-generation networks. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)

12 pages, 1053 KB  
Article
Adapting Self-Regulated Learning in an Age of Generative Artificial Intelligence Chatbots
by Joel Weijia Lai
Future Internet 2024, 16(6), 218; https://doi.org/10.3390/fi16060218 - 20 Jun 2024
Cited by 17 | Viewed by 7922
Abstract
The increasing use of generative artificial intelligence (GenAI) has led to a rise in conversations about how teachers and students should adopt these tools to enhance the learning process. Self-regulated learning (SRL) research is important for addressing this question. A popular form of GenAI is the large language model chatbot, which allows users to seek answers to their queries. This article seeks to adapt current SRL models to understand student learning with these chatbots. This is achieved by classifying the prompts supplied by a learner to an educational chatbot into learning actions and processes using the process–action library. Subsequently, through process mining, we can analyze these data to provide valuable insights for learners, educators, instructional designers, and researchers into the possible applications of chatbots for SRL. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

40 pages, 5898 KB  
Article
Authentication and Key Agreement Protocol in Hybrid Edge–Fog–Cloud Computing Enhanced by 5G Networks
by Jiayi Zhang, Abdelkader Ouda and Raafat Abu-Rukba
Future Internet 2024, 16(6), 209; https://doi.org/10.3390/fi16060209 - 14 Jun 2024
Cited by 11 | Viewed by 2478
Abstract
The Internet of Things (IoT) has revolutionized connected devices, with applications in healthcare, data analytics, and smart cities. For time-sensitive applications, 5G wireless networks provide ultra-reliable low-latency communication (URLLC) and fog computing offloads IoT processing. Integrating 5G and fog computing can address cloud computing’s deficiencies, but security challenges remain, especially in Authentication and Key Agreement aspects due to the distributed and dynamic nature of fog computing. This study presents an innovative mutual Authentication and Key Agreement protocol that is specifically tailored to meet the security needs of fog computing in the context of the edge–fog–cloud three-tier architecture, enhanced by the incorporation of the 5G network. This study improves security in the edge–fog–cloud context by introducing a stateless authentication mechanism and conducting a comparative analysis of the proposed protocol with well-known alternatives, such as TLS 1.3, 5G-AKA, and various handover protocols. The suggested approach has a total transmission cost of only 1280 bits in the authentication phase, which is approximately 30% lower than other protocols. In addition, the suggested handover protocol only involves two signaling expenses. The computational cost for handover authentication for the edge user is significantly low, measuring 0.243 ms, which is under 10% of the computing costs of other authentication protocols. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks)
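As a quick back-of-the-envelope reading of the reported figures, the short Python snippet below infers what baselines the quoted percentages imply; the 1280-bit and 0.243 ms values come from the abstract, while the implied baselines are my own inference, not numbers from the paper.

```python
# Sanity-check arithmetic on the figures quoted in the abstract.
proposed_bits = 1280
# "approximately 30% lower than other protocols" implies baselines around:
implied_baseline_bits = proposed_bits / (1 - 0.30)
print(f"implied competing transmission cost ~= {implied_baseline_bits:.0f} bits")

proposed_handover_ms = 0.243
# "under 10% of the computing costs of other authentication protocols" implies:
implied_baseline_ms = proposed_handover_ms / 0.10
print(f"implied competing handover computation > {implied_baseline_ms:.2f} ms")
```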

22 pages, 2903 KB  
Article
Implementation of Lightweight Machine Learning-Based Intrusion Detection System on IoT Devices of Smart Homes
by Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi and Hadi Larijani
Future Internet 2024, 16(6), 200; https://doi.org/10.3390/fi16060200 - 5 Jun 2024
Cited by 17 | Viewed by 5470
Abstract
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not been embedded in IoT devices. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for classification of attacks. The proposed IDS can detect the threats with an accuracy of 99.50% at cloud level (multiclassification). For real-time testing, we implemented the Raspberry Pi 4-based adversary to generate a dataset for man-in-the-middle (MITM) and denial of service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
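The two-layer split described in the abstract can be sketched as follows: a lightweight binary detector on the device flags suspicious traffic, and only flagged samples are sent to a cloud-side multiclass model. In this hedged Python sketch, scikit-learn models stand in for the paper's classifiers (the authors report XGBoost), and the features and labels are synthetic.

```python
# Conceptual two-layer IDS: device-level screening, cloud-level attack naming.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))                      # fake traffic features
y_multi = rng.integers(0, 3, size=600)              # 0 = benign, 1 = DoS, 2 = MITM
y_bin = (y_multi > 0).astype(int)

edge_model = DecisionTreeClassifier(max_depth=4).fit(X, y_bin)   # layer 1: on-device
cloud_model = GradientBoostingClassifier().fit(                  # layer 2: cloud
    X[y_multi > 0], y_multi[y_multi > 0])

def classify(sample):
    """Edge layer screens the sample; the cloud layer names the attack type."""
    if edge_model.predict(sample.reshape(1, -1))[0] == 0:
        return "benign"
    return {1: "DoS", 2: "MITM"}[int(cloud_model.predict(sample.reshape(1, -1))[0])]

print(classify(X[0]))
```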

32 pages, 1109 KB  
Article
Impact, Compliance, and Countermeasures in Relation to Data Breaches in Publicly Traded U.S. Companies
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara, Robson de Oliveira Albuquerque and Georges Daniel Amvame Nze
Future Internet 2024, 16(6), 201; https://doi.org/10.3390/fi16060201 - 5 Jun 2024
Cited by 13 | Viewed by 8924
Abstract
A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ. This dataset has been augmented with additional information regarding the targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences on the affected companies and individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the relevant laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impacts of data breaches on stock market prices. We also review guidelines for appropriately responding to data leaks in the U.S., for compliance achievement and cost reduction. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide. Full article
(This article belongs to the Collection Information Systems Security)

21 pages, 718 KB  
Review
Using ChatGPT in Software Requirements Engineering: A Comprehensive Review
by Nuno Marques, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2024, 16(6), 180; https://doi.org/10.3390/fi16060180 - 21 May 2024
Cited by 35 | Viewed by 13047
Abstract
Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still emerging and remains in its early stages. This paper analyzes the role of large language models (LLMs), such as ChatGPT-3.5, in software requirements engineering, a critical area in software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and provides strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process. Full article

30 pages, 5255 KB  
Article
Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection
by Muhammad Imran, Annalisa Appice and Donato Malerba
Future Internet 2024, 16(5), 168; https://doi.org/10.3390/fi16050168 - 12 May 2024
Cited by 10 | Viewed by 5168
Abstract
During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating the samples at the test time to violate the model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files. LGBM is a Gradient-Boosted Decision Tree model that is learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing machine learning models through the adversarial training strategy. Therefore, the main contributions of this article are as follows: (1) We extend existing machine learning studies that commonly consider small datasets to explore the evasion ability of state-of-the-art Windows PE attack methods by increasing the size of the evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study to explain how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. (3) We explore the performance of the adversarial training strategy as a means to secure effective decision models against adversarial Windows PE malware files generated with the considered attack methods. Hence, the study explains how GAMMA can actually be considered the most effective evasion method for the performed comparative analysis. On the other hand, the study shows that the adversarial training strategy can actually help in recognising adversarial PE malware generated with GAMMA by also explaining how it changes model decisions. Full article
(This article belongs to the Collection Information Systems Security)

20 pages, 7164 KB  
Review
A Comprehensive Review of Machine Learning Approaches for Anomaly Detection in Smart Homes: Experimental Analysis and Future Directions
by Md Motiur Rahman, Deepti Gupta, Smriti Bhatt, Shiva Shokouhmand and Miad Faezipour
Future Internet 2024, 16(4), 139; https://doi.org/10.3390/fi16040139 - 19 Apr 2024
Cited by 10 | Viewed by 3696
Abstract
Detecting anomalies in human activities is increasingly crucial today, particularly in nuclear family settings, where there may not be constant monitoring of individuals’ health, especially the elderly, during critical periods. Early anomaly detection can help prevent attack scenarios and life-threatening situations. This task becomes notably more complex when multiple ambient sensors are deployed in homes with multiple residents, as opposed to single-resident environments. Additionally, the availability of datasets containing anomalies representing the full spectrum of abnormalities is limited. In our experimental study, we employed eight widely used machine learning and two deep learning classifiers to identify anomalies in human activities. We meticulously generated anomalies, considering all conceivable scenarios. Our findings reveal that the Gated Recurrent Unit (GRU) excels in accurately classifying normal and anomalous activities, while the naïve Bayes classifier demonstrates relatively poor performance among the ten classifiers considered. We conducted various experiments to assess the impact of different training–test splitting ratios, along with a five-fold cross-validation technique, on performance. Notably, the GRU model consistently outperformed all other classifiers under both conditions. Furthermore, we offer insights into the computational costs associated with these classifiers, encompassing training and prediction phases. Extensive ablation experiments conducted in this study underscore that all these classifiers can effectively be deployed for anomaly detection in two-resident homes. Full article
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)
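As a hedged sketch of the kind of sequence model the study reports performing best, the following minimal PyTorch GRU classifier labels windows of ambient-sensor events as normal or anomalous; the event encoding, window length, and layer sizes are illustrative assumptions rather than the authors' configuration.

```python
# Minimal GRU classifier over windows of integer-encoded ambient-sensor events.
import torch
import torch.nn as nn

class GRUAnomalyDetector(nn.Module):
    def __init__(self, n_sensor_ids=30, emb_dim=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(n_sensor_ids, emb_dim)   # one id per sensor event type
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)                 # normal vs. anomalous

    def forward(self, event_ids):                        # (batch, window_len)
        _, h = self.gru(self.emb(event_ids))             # h: (1, batch, hidden)
        return self.head(h.squeeze(0))                   # (batch, 2) logits

model = GRUAnomalyDetector()
windows = torch.randint(0, 30, (8, 25))                  # 8 windows of 25 sensor events
print(model(windows).shape)                              # torch.Size([8, 2])
```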

21 pages, 991 KB  
Article
Metaverse Meets Smart Cities—Applications, Benefits, and Challenges
by Florian Maier and Markus Weinberger
Future Internet 2024, 16(4), 126; https://doi.org/10.3390/fi16040126 - 8 Apr 2024
Cited by 13 | Viewed by 3974
Abstract
The metaverse aims to merge the virtual and real worlds. The aim is to create a virtual community where social components play a crucial role, combining different areas such as entertainment, work, shopping, and services. This idea is particularly appealing in the context of smart cities. The metaverse offers digitalization approaches and can strengthen citizens’ social community. While the existing literature covers the exemplary potential of smart city metaverse applications, this study aims to provide a comprehensive overview of the potential and already implemented metaverse applications in the context of cities and municipalities. In addition, challenges related to these applications are identified. The study combines literature reviews and expert interviews to ensure a broad overview. Forty-eight smart city metaverse applications from eleven areas were identified, and actual projects from eleven cities demonstrate the current state of development. Still, further research should evaluate the benefits of the various applications and find strategies to overcome the identified challenges. Full article

13 pages, 395 KB  
Article
Efficient and Secure Distributed Data Storage and Retrieval Using Interplanetary File System and Blockchain
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(3), 98; https://doi.org/10.3390/fi16030098 - 15 Mar 2024
Cited by 16 | Viewed by 4963
Abstract
Blockchain technology has been successfully applied in recent years to promote the immutability, traceability, and authenticity of previously collected and stored data. However, the amount of data stored on the blockchain is usually limited for economic and technological reasons. Namely, the blockchain usually stores only a fingerprint of the data, such as its hash, while the full, raw information is stored off-chain. This is generally enough to guarantee immutability and traceability, but it fails to support another important property, namely data availability. This is particularly true when a traditional, centralized database is chosen for off-chain storage. For this reason, many proposals try to properly combine blockchain with decentralized IPFS storage. However, storing data on IPFS can pose privacy problems. This paper proposes a solution that properly combines blockchain, IPFS, and encryption techniques to guarantee immutability, traceability, availability, and data privacy. Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends)
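The encrypt-then-store pattern the paper combines can be sketched as follows: the payload is encrypted before it leaves the owner, the ciphertext goes to content-addressed off-chain storage, and only its digest is anchored on-chain. In this hedged Python sketch, IPFS and the blockchain are mocked with a dictionary and a list (real IPFS CIDs are multihashes, not raw SHA-256 hex); only the encryption (the cryptography package's Fernet) and the hashing are real, and nothing here is the authors' implementation.

```python
# Encrypt off-chain data, content-address the ciphertext, anchor the digest "on-chain".
import hashlib
from cryptography.fernet import Fernet

ipfs_mock = {}        # stand-in for IPFS: content id -> ciphertext
chain_mock = []       # stand-in for on-chain records

def store(record: bytes, key: bytes) -> str:
    ciphertext = Fernet(key).encrypt(record)              # privacy: encrypt before upload
    cid = hashlib.sha256(ciphertext).hexdigest()          # content identifier / fingerprint
    ipfs_mock[cid] = ciphertext                           # off-chain availability
    chain_mock.append(cid)                                # immutable, traceable anchor
    return cid

def retrieve(cid: str, key: bytes) -> bytes:
    ciphertext = ipfs_mock[cid]
    assert hashlib.sha256(ciphertext).hexdigest() == cid  # integrity vs. on-chain digest
    return Fernet(key).decrypt(ciphertext)

key = Fernet.generate_key()
cid = store(b"patient-record-42: blood pressure 120/80", key)
print(retrieve(cid, key))
```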

17 pages, 2344 KB  
Article
An Advanced Path Planning and UAV Relay System: Enhancing Connectivity in Rural Environments
by Mostafa El Debeiki, Saba Al-Rubaye, Adolfo Perrusquía, Christopher Conrad and Juan Alejandro Flores-Campos
Future Internet 2024, 16(3), 89; https://doi.org/10.3390/fi16030089 - 6 Mar 2024
Cited by 23 | Viewed by 2947
Abstract
The use of unmanned aerial vehicles (UAVs) is increasing in transportation applications due to their high versatility and maneuverability in complex environments. Search and rescue is one of the most challenging applications of UAVs due to the non-homogeneous nature of the environmental and communication landscapes. In particular, mountainous areas pose difficulties due to the loss of connectivity caused by large valleys and the volumes of hazardous weather. In this paper, the connectivity issue in mountainous areas is addressed using a path planning algorithm for UAV relay. The approach is based on two main phases: (1) the detection of areas of interest where the connectivity signal is poor, and (2) an energy-aware and resilient path planning algorithm that maximizes the coverage links. The approach uses a viewshed analysis to identify areas of visibility between the areas of interest and the cell-towers. This allows the construction of a blockage map that prevents the UAV from passing through areas with no coverage, whilst maximizing the coverage area under energy constraints and hazardous weather. The proposed approach is validated under open-access datasets of mountainous zones, and the obtained results confirm the benefits of the proposed approach for communication networks in remote and challenging environments. Full article
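The blockage-map idea can be illustrated with a small Python sketch: cells without cell-tower visibility are marked as blocked, and a shortest-path search is only allowed to route the relay UAV through covered cells. The grid, costs, and coverage values below are made up, and this is not the paper's planner.

```python
# A* over a toy coverage grid: the UAV may not pass through blocked (no-coverage) cells.
import heapq

coverage = [            # 1 = covered by a tower's viewshed, 0 = blocked valley cell
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1],
]

def plan(start, goal, grid):
    """A* search restricted to covered cells, with a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    heur = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(heur(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                heapq.heappush(frontier, (cost + 1 + heur((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None   # no fully covered route exists

print(plan((0, 0), (3, 4), coverage))
```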

19 pages, 3172 KB  
Article
Multi-Level Split Federated Learning for Large-Scale AIoT System Based on Smart Cities
by Hanyue Xu, Kah Phooi Seng, Jeremy Smith and Li Minn Ang
Future Internet 2024, 16(3), 82; https://doi.org/10.3390/fi16030082 - 28 Feb 2024
Cited by 12 | Viewed by 5121
Abstract
In the context of smart cities, the integration of artificial intelligence (AI) and the Internet of Things (IoT) has led to the proliferation of AIoT systems, which handle vast amounts of data to enhance urban infrastructure and services. However, the collaborative training of deep learning models within these systems encounters significant challenges, chiefly due to data privacy concerns and dealing with communication latency from large-scale IoT devices. To address these issues, multi-level split federated learning (multi-level SFL) has been proposed, merging the benefits of split learning (SL) and federated learning (FL). This framework introduces a novel multi-level aggregation architecture that reduces communication delays, enhances scalability, and addresses system and statistical heterogeneity inherent in large AIoT systems with non-IID data distributions. The architecture leverages the Message Queuing Telemetry Transport (MQTT) protocol to cluster IoT devices geographically and employs edge and fog computing layers for initial model parameter aggregation. Simulation experiments validate that the multi-level SFL outperforms traditional SFL by improving model accuracy and convergence speed in large-scale, non-IID environments. This paper delineates the proposed architecture, its workflow, and its advantages in enhancing the robustness and scalability of AIoT systems in smart cities while preserving data privacy. Full article
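The multi-level aggregation idea can be sketched numerically: client updates are averaged at edge nodes, edge aggregates at fog nodes, and fog aggregates at the cloud. In the toy Python sketch below, weighted FedAvg over NumPy vectors stands in for real model parameters, and the MQTT-based clustering and the split-learning cut are deliberately omitted; the topology and sample counts are invented.

```python
# Hierarchical (edge -> fog -> cloud) weighted FedAvg over toy parameter vectors.
import numpy as np

def fedavg(params, weights):
    """Weighted average of parameter vectors (weights ~ local sample counts)."""
    return np.average(np.stack(params), axis=0, weights=np.asarray(weights, float))

rng = np.random.default_rng(7)
# Two fog regions, each with two edge clusters, each cluster holding three devices.
devices = [[[(rng.normal(size=4), int(rng.integers(50, 200))) for _ in range(3)]
            for _ in range(2)] for _ in range(2)]

fog_aggregates, fog_weights = [], []
for fog in devices:
    edge_aggregates, edge_weights = [], []
    for edge in fog:
        params, counts = zip(*edge)
        edge_aggregates.append(fedavg(params, counts))            # level 1: edge
        edge_weights.append(sum(counts))
    fog_aggregates.append(fedavg(edge_aggregates, edge_weights))  # level 2: fog
    fog_weights.append(sum(edge_weights))

global_model = fedavg(fog_aggregates, fog_weights)                # level 3: cloud
print(global_model)
```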

15 pages, 2529 KB  
Article
A Lightweight Neural Network Model for Disease Risk Prediction in Edge Intelligent Computing Architecture
by Feng Zhou, Shijing Hu, Xin Du, Xiaoli Wan and Jie Wu
Future Internet 2024, 16(3), 75; https://doi.org/10.3390/fi16030075 - 26 Feb 2024
Cited by 10 | Viewed by 3336
Abstract
In the current field of disease risk prediction research, many methods use servers for centralized computing to train and infer prediction models. However, this centralized computing method increases storage requirements, the load on network bandwidth, and the computing pressure on the central server. In this article, we design an image preprocessing method and propose a lightweight neural network model called Linge (Lightweight Neural Network Models for the Edge). We propose a distributed intelligent edge computing technology based on the federated learning algorithm for disease risk prediction. The intelligent edge computing method we propose for disease risk prediction performs prediction model training and inference directly at the edge without increasing storage space. It also reduces the load on network bandwidth and the computing pressure on the server. The lightweight neural network model we designed has only 7.63 MB of parameters and takes up only 155.28 MB of memory. In experiments comparing the Linge model with the EfficientNetV2 model, accuracy and precision increased by 2%, recall increased by 1%, specificity increased by 4%, the F1 score increased by 3%, and the AUC (Area Under the Curve) value increased by 2%. Full article

16 pages, 2560 KB  
Article
Deep Learning for Intrusion Detection Systems (IDSs) in Time Series Data
by Konstantinos Psychogyios, Andreas Papadakis, Stavroula Bourou, Nikolaos Nikolaou, Apostolos Maniatis and Theodore Zahariadis
Future Internet 2024, 16(3), 73; https://doi.org/10.3390/fi16030073 - 23 Feb 2024
Cited by 17 | Viewed by 4673
Abstract
The advent of computer networks and the internet has drastically altered the means by which we share information and interact with each other. However, this technological advancement has also created opportunities for malevolent behavior, with individuals exploiting vulnerabilities to gain access to confidential data, obstruct activity, etc. To this end, intrusion detection systems (IDSs) are needed to filter malicious traffic and prevent common attacks. In the past, these systems relied on a fixed set of rules or comparisons with previous attacks. However, with the increased availability of computational power and data, machine learning has emerged as a promising solution for this task. While many systems now use this methodology in real-time for a reactive approach to mitigation, we explore the potential of configuring it as a proactive time series prediction. In this work, we delve into this possibility further. More specifically, we convert a classic IDS dataset to a time series format and use predictive models to forecast forthcoming malign packets. We propose a new architecture combining convolutional neural networks, long short-term memory networks, and attention. The findings indicate that our model performs strongly, exhibiting an F1 score and AUC that are within margins of 1% and 3%, respectively, when compared to conventional real-time detection. Also, our architecture achieves an ∼8% F1 score improvement compared to an LSTM (long short-term memory) model. Full article
(This article belongs to the Special Issue Security in the Internet of Things (IoT))
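For readers who want to picture the architecture, here is a hedged PyTorch sketch of a CNN + LSTM + attention pipeline that forecasts whether upcoming traffic will be malicious from a window of past flow records; the layer sizes, window length, and the particular attention pooling are illustrative choices, not the authors' exact model.

```python
# Toy CNN + LSTM + attention forecaster over windows of past flow features.
import torch
import torch.nn as nn

class CnnLstmAttn(nn.Module):
    def __init__(self, n_features=20, conv_ch=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, conv_ch, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)       # scores each time step
        self.head = nn.Linear(hidden, 1)       # probability the next step is malign

    def forward(self, x):                      # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(z)                  # (batch, time, hidden)
        w = torch.softmax(self.attn(out), dim=1)
        context = (w * out).sum(dim=1)         # attention-weighted summary of the window
        return torch.sigmoid(self.head(context)).squeeze(-1)

model = CnnLstmAttn()
history = torch.randn(8, 30, 20)               # 8 windows of 30 time steps, 20 features
print(model(history).shape)                    # torch.Size([8])
```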

26 pages, 1791 KB  
Article
The Future of Healthcare with Industry 5.0: Preliminary Interview-Based Qualitative Analysis
by Juliana Basulo-Ribeiro and Leonor Teixeira
Future Internet 2024, 16(3), 68; https://doi.org/10.3390/fi16030068 - 22 Feb 2024
Cited by 27 | Viewed by 11144
Abstract
With the advent of Industry 5.0 (I5.0), healthcare is undergoing a profound transformation, integrating human capabilities with advanced technologies to promote a patient-centered, efficient, and empathetic healthcare ecosystem. This study aims to examine the effects of Industry 5.0 on healthcare, emphasizing the synergy between human experience and technology. To this end, six specific objectives were defined and addressed through an empirical study based on interviews with 11 healthcare professionals. This article thus outlines strategic and policy guidelines for the integration of I5.0 in healthcare, advocating policy-driven change, and contributes to the literature by offering a solid theoretical basis on I5.0 and its impact on the healthcare sector. Full article
(This article belongs to the Special Issue eHealth and mHealth)

38 pages, 1021 KB  
Review
A Systematic Survey on 5G and 6G Security Considerations, Challenges, Trends, and Research Areas
by Paul Scalise, Matthew Boeding, Michael Hempel, Hamid Sharif, Joseph Delloiacovo and John Reed
Future Internet 2024, 16(3), 67; https://doi.org/10.3390/fi16030067 - 20 Feb 2024
Cited by 29 | Viewed by 11862
Abstract
With the rapid rollout and growing adoption of 3GPP 5th Generation (5G) cellular services, including in critical infrastructure sectors, it is important to review security mechanisms, risks, and potential vulnerabilities within this vital technology. Numerous security capabilities need to work together to ensure and maintain a sufficiently secure 5G environment that places user privacy and security at the forefront. Confidentiality, integrity, and availability are all pillars of a privacy and security framework that define major aspects of 5G operations. They are incorporated and considered in the design of the 5G standard by the 3rd Generation Partnership Project (3GPP) with the goal of providing a highly reliable network operation for all. Through a comprehensive review, we aim to analyze the ever-evolving landscape of 5G, including any potential attack vectors and proposed measures to mitigate or prevent these threats. This paper presents a comprehensive survey of the state-of-the-art research that has been conducted in recent years regarding 5G systems, focusing on the main components in a systematic approach: the Core Network (CN), Radio Access Network (RAN), and User Equipment (UE). Additionally, we investigate the utilization of 5G in time-dependent, ultra-confidential, and private communications built around a Zero Trust approach. In today’s world, where everything is more connected than ever, Zero Trust policies and architectures can be highly valuable in operations containing sensitive data. Realizing a Zero Trust Architecture entails continuous verification of all devices, users, and requests, regardless of their location within the network, and grants permission only to authorized entities. Finally, developments and proposed methods of new 5G and future 6G security approaches, such as Blockchain technology, post-quantum cryptography (PQC), and Artificial Intelligence (AI) schemes, are also discussed to understand better the full landscape of current and future research within this telecommunications domain. Full article
(This article belongs to the Special Issue 5G Security: Challenges, Opportunities, and the Road Ahead)
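As a minimal illustration of the Zero Trust principle summarized above (continuous verification of every device, user, and request, with network location never sufficient on its own), the Python sketch below evaluates each access request against identity, device posture, and context checks. The attribute names and policy rules are hypothetical, chosen for illustration rather than taken from the survey.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g., verified via MFA in this session
    device_compliant: bool        # e.g., patched OS, attested firmware
    inside_network: bool          # recorded, but never trusted on its own
    resource_sensitivity: str     # "low", "medium", or "high"
    recently_reverified: bool     # continuous-verification flag

def grant_access(req: AccessRequest) -> bool:
    """Every request is verified on its own merits; being 'inside' the network grants nothing."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    # Illustrative stricter rule for high-sensitivity resources.
    if req.resource_sensitivity == "high" and not req.recently_reverified:
        return False
    return True

# An "internal" request from a non-compliant device is still denied.
print(grant_access(AccessRequest(True, False, True, "high", True)))     # False
print(grant_access(AccessRequest(True, True, False, "medium", False)))  # True
```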

18 pages, 6477 KB  
Article
The Microverse: A Task-Oriented Edge-Scale Metaverse
by Qian Qu, Mohsen Hatami, Ronghua Xu, Deeraj Nagothu, Yu Chen, Xiaohua Li, Erik Blasch, Erika Ardiles-Cruz and Genshe Chen
Future Internet 2024, 16(2), 60; https://doi.org/10.3390/fi16020060 - 13 Feb 2024
Cited by 21 | Viewed by 4881
Abstract
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to [...] Read more.
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

24 pages, 8449 KB  
Article
A Secure Opportunistic Network with Efficient Routing for Enhanced Efficiency and Sustainability
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(2), 56; https://doi.org/10.3390/fi16020056 - 8 Feb 2024
Cited by 5 | Viewed by 3260
Abstract
The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, [...] Read more.
The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, and sustainable networking solutions has never been more pressing. Opportunistic networks, characterized by intermittent connectivity and dynamic network conditions, present unique challenges that necessitate innovative approaches for optimal performance and sustainability. This paper introduces a groundbreaking paradigm that integrates the principles of cybersecurity with opportunistic networks. At its core, this study presents a novel routing protocol meticulously designed to significantly outperform existing solutions concerning key metrics such as delivery probability, overhead ratio, and communication delay. Leveraging cybersecurity’s inherent strengths, our protocol not only fortifies the network’s security posture but also provides a foundation for enhancing efficiency and sustainability in opportunistic networks. The overarching goal of this paper is to address the inherent limitations of conventional opportunistic network protocols. By proposing an innovative routing protocol, we aim to optimize data delivery, minimize overhead, and reduce communication latency. These objectives are crucial for ensuring seamless and timely information exchange, especially in scenarios where traditional networking infrastructures fall short. Through large-scale simulations, the new model proves its effectiveness in different scenarios, especially in terms of message delivery probability, while ensuring reasonable overhead and latency. Full article
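For readers unfamiliar with how delivery probability is tracked in opportunistic routing, the sketch below implements the delivery-predictability updates of the well-known PRoPHET scheme (direct update on each encounter, aging over time, and a transitive update through intermediate nodes). It is offered as background on the metric class, not as the protocol proposed in this paper; the constants are the commonly cited defaults.

```python
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25   # typical PRoPHET constants

class Node:
    def __init__(self, name):
        self.name = name
        self.pred = {}                    # delivery predictability towards other nodes

    def age(self, elapsed_units):
        """Predictabilities decay while no encounters happen."""
        for peer in self.pred:
            self.pred[peer] *= GAMMA ** elapsed_units

    def encounter(self, other):
        """Direct update on meeting `other`, plus the transitive update through it."""
        p_old = self.pred.get(other.name, 0.0)
        self.pred[other.name] = p_old + (1.0 - p_old) * P_INIT
        for dest, p_bc in other.pred.items():
            if dest == self.name:
                continue
            p_ac = self.pred.get(dest, 0.0)
            self.pred[dest] = p_ac + (1.0 - p_ac) * self.pred[other.name] * p_bc * BETA

# A message is forwarded to a peer whose predictability for the destination exceeds ours.
a, b = Node("A"), Node("B")
b.pred["C"] = 0.9
a.encounter(b)
print(a.pred)   # A now has nonzero predictability towards C via B
```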

14 pages, 3418 KB  
Article
Enhancing Smart City Safety and Utilizing AI Expert Systems for Violence Detection
by Pradeep Kumar, Guo-Liang Shih, Bo-Lin Guo, Siva Kumar Nagi, Yibeltal Chanie Manie, Cheng-Kai Yao, Michael Augustine Arockiyadoss and Peng-Chun Peng
Future Internet 2024, 16(2), 50; https://doi.org/10.3390/fi16020050 - 31 Jan 2024
Cited by 10 | Viewed by 5372
Abstract
Violent attacks have been one of the hot issues in recent years. In the presence of closed-circuit televisions (CCTVs) in smart cities, there is an emerging challenge in apprehending criminals, leading to a need for innovative solutions. In this paper, we propose a [...] Read more.
Violent attacks have been one of the hot issues in recent years. In the presence of closed-circuit televisions (CCTVs) in smart cities, there is an emerging challenge in apprehending criminals, leading to a need for innovative solutions. In this paper, we propose a model aimed at enhancing real-time emergency response capabilities and swiftly identifying criminals. This initiative aims to foster a safer environment and better manage criminal activity within smart cities. The proposed architecture combines an image-to-image stable diffusion model with violence detection and pose estimation approaches. The diffusion model generates synthetic data, while the object detection approach uses YOLO v7 to identify violent objects like baseball bats, knives, and pistols, complemented by MediaPipe for action detection. Further, a long short-term memory (LSTM) network classifies the action attacks involving violent objects. Subsequently, the entire proposed model is deployed onto an edge device for real-time data testing using a dash camera. Thus, this study can handle violent attacks and send alerts in emergencies. As a result, our proposed YOLO model achieves a mean average precision (mAP) of 89.5% for violent attack detection, and the LSTM classifier model achieves an accuracy of 88.33% for violent action classification. The results highlight the model’s enhanced capability to accurately detect violent objects, particularly in effectively identifying violence through the implemented artificial intelligence system. Full article
(This article belongs to the Special Issue Challenges in Real-Time Intelligent Systems)
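To make the action-classification stage concrete, here is a minimal PyTorch sketch of an LSTM classifier over sequences of pose keypoints, in the spirit of the pipeline described above (detections and MediaPipe landmarks feeding an LSTM). The input dimensions, sequence length, and class count are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ViolenceLSTM(nn.Module):
    """Classifies a sequence of pose-keypoint frames as violent or non-violent."""
    def __init__(self, n_keypoint_coords=66, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_keypoint_coords, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, frames, n_keypoint_coords)
        _, (h_n, _) = self.lstm(x)     # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])      # class logits from the last hidden state

model = ViolenceLSTM()
dummy_clip = torch.randn(4, 30, 66)    # 4 clips, 30 frames, 33 landmarks x 2 coordinates
logits = model(dummy_clip)
print(logits.shape)                    # torch.Size([4, 2])
```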

44 pages, 38595 KB  
Article
Enhancing Urban Resilience: Smart City Data Analyses, Forecasts, and Digital Twin Techniques at the Neighborhood Level
by Andreas F. Gkontzis, Sotiris Kotsiantis, Georgios Feretzakis and Vassilios S. Verykios
Future Internet 2024, 16(2), 47; https://doi.org/10.3390/fi16020047 - 30 Jan 2024
Cited by 41 | Viewed by 9647
Abstract
Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of [...] Read more.
Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of the urban environment, fostering real-time monitoring, simulation, and analysis of urban systems. This study underscores the significance of real-time monitoring, simulation, and analysis of urban systems to support test scenarios that identify bottlenecks and enhance smart city efficiency. This paper delves into the crucial roles of citizen report analytics, prediction, and digital twin technologies at the neighborhood level. The study integrates extract, transform, load (ETL) processes, artificial intelligence (AI) techniques, and a digital twin methodology to process and interpret urban data streams derived from citizen interactions with the city’s coordinate-based problem mapping platform. Using an interactive GeoDataFrame within the digital twin methodology, dynamic entities facilitate simulations based on various scenarios, allowing users to visualize, analyze, and predict the response of the urban system at the neighborhood level. This approach reveals antecedent and predictive patterns, trends, and correlations at the physical level of each city area, leading to improvements in urban functionality, resilience, and resident quality of life. Full article
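Since the interactive GeoDataFrame is the pivot of the workflow described above, the snippet below shows a minimal geopandas sketch that turns coordinate-based citizen reports into a GeoDataFrame and aggregates them per neighborhood polygon. The file names and column names are assumptions for illustration, and geopandas 0.10 or later is assumed for the `predicate` argument.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical inputs: a CSV of citizen reports and a neighborhood polygon layer.
reports = pd.read_csv("citizen_reports.csv")              # columns: id, category, lon, lat
neighborhoods = gpd.read_file("neighborhoods.geojson")     # columns: name, geometry

points = gpd.GeoDataFrame(
    reports,
    geometry=gpd.points_from_xy(reports["lon"], reports["lat"]),
    crs=neighborhoods.crs,
)

# Spatially join each report to the neighborhood containing it, then count per area and category.
joined = gpd.sjoin(points, neighborhoods, predicate="within")
per_area = joined.groupby(["name", "category"]).size().unstack(fill_value=0)
print(per_area.head())
```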

29 pages, 743 KB  
Article
TinyML Algorithms for Big Data Management in Large-Scale IoT Systems
by Aristeidis Karras, Anastasios Giannaros, Christos Karras, Leonidas Theodorakopoulos, Constantinos S. Mammassis, George A. Krimpas and Spyros Sioutas
Future Internet 2024, 16(2), 42; https://doi.org/10.3390/fi16020042 - 25 Jan 2024
Cited by 33 | Viewed by 6453
Abstract
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed [...] Read more.
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms in various numbers of Raspberry Pi devices ranging from one to ten. The experimental results are promising as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)

57 pages, 2070 KB  
Review
A Holistic Analysis of Internet of Things (IoT) Security: Principles, Practices, and New Perspectives
by Mahmud Hossain, Golam Kayas, Ragib Hasan, Anthony Skjellum, Shahid Noor and S. M. Riazul Islam
Future Internet 2024, 16(2), 40; https://doi.org/10.3390/fi16020040 - 24 Jan 2024
Cited by 31 | Viewed by 11414
Abstract
Driven by the rapid escalation of its utilization, as well as ramping commercialization, Internet of Things (IoT) devices increasingly face security threats. Apart from denial of service, privacy, and safety concerns, compromised devices can be used as enablers for committing a variety of [...] Read more.
Driven by the rapid escalation of its utilization, as well as ramping commercialization, Internet of Things (IoT) devices increasingly face security threats. Apart from denial of service, privacy, and safety concerns, compromised devices can be used as enablers for committing a variety of crimes and e-crimes. Despite ongoing research and study, there remains a significant gap in the thorough analysis of security challenges, feasible solutions, and open security problems for IoT. To bridge this gap, we provide a comprehensive overview of the state of the art in IoT security with a critical investigation-based approach. This includes a detailed analysis of vulnerabilities in IoT-based systems and potential attacks. We present a holistic review of the security properties required to be adopted by IoT devices, applications, and services to mitigate IoT vulnerabilities and, thus, successful attacks. Moreover, we identify challenges to the design of security protocols for IoT systems in which constituent devices vary markedly in capability (such as storage, computation speed, hardware architecture, and communication interfaces). Next, we review existing research and feasible solutions for IoT security. We highlight a set of open problems not yet addressed among existing security solutions. We provide a set of new perspectives for future research on such issues including secure service discovery, on-device credential security, and network anomaly detection. We also provide directions for designing a forensic investigation framework for IoT infrastructures to inspect relevant criminal cases, execute a cyber forensic process, and determine the facts about a given incident. This framework offers a means to better capture information on successful attacks as part of a feedback mechanism to thwart future vulnerabilities and threats. This systematic holistic review will both inform on current challenges in IoT security and ideally motivate their future resolution. Full article
(This article belongs to the Special Issue Cyber Security in the New "Edge Computing + IoT" World)

27 pages, 2022 KB  
Review
Overview of Protocols and Standards for Wireless Sensor Networks in Critical Infrastructures
by Spyridon Daousis, Nikolaos Peladarinos, Vasileios Cheimaras, Panagiotis Papageorgas, Dimitrios D. Piromalis and Radu Adrian Munteanu
Future Internet 2024, 16(1), 33; https://doi.org/10.3390/fi16010033 - 21 Jan 2024
Cited by 29 | Viewed by 6479
Abstract
This paper highlights the crucial role of wireless sensor networks (WSNs) in the surveillance and administration of critical infrastructures (CIs), contributing to their reliability, security, and operational efficiency. It starts by detailing the international significance and structural aspects of these infrastructures, mentions the [...] Read more.
This paper highlights the crucial role of wireless sensor networks (WSNs) in the surveillance and administration of critical infrastructures (CIs), contributing to their reliability, security, and operational efficiency. It starts by detailing the international significance and structural aspects of these infrastructures, notes the market tension that has accompanied the gradual development of wireless networks for industrial applications in recent years, and proceeds to categorize WSNs and examine the protocols and standards applied to WSNs in demanding environments such as critical infrastructures, drawing on the recent literature. This review concentrates on the protocols and standards utilized in WSNs for critical infrastructures, and it concludes by identifying a notable gap in the literature concerning quality standards for equipment used in such infrastructures. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)

42 pages, 2733 KB  
Review
A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks
by Hassan Khazane, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Future Internet 2024, 16(1), 32; https://doi.org/10.3390/fi16010032 - 19 Jan 2024
Cited by 32 | Viewed by 9053
Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, [...] Read more.
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
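As a concrete reference point for the attack-generation methodologies surveyed above, the sketch below implements the classic Fast Gradient Sign Method (FGSM), one of the simplest ways of crafting adversarial inputs against a gradient-based classifier such as an ML-based IDS. It is offered as background on the general technique; the toy feature dimensions and model are assumptions, not drawn from the review.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.05):
    """Perturb input x in the direction that maximally increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clip back to the valid (normalized) feature range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Example with a toy classifier over 20 normalized traffic features (illustrative only).
model = torch.nn.Sequential(torch.nn.Linear(20, 2))
x = torch.rand(8, 20)
y = torch.randint(0, 2, (8,))
x_adv = fgsm_attack(model, torch.nn.CrossEntropyLoss(), x, y)
print((x_adv - x).abs().max())   # perturbation bounded by epsilon
```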

17 pages, 3053 KB  
Article
Proximal Policy Optimization for Efficient D2D-Assisted Computation Offloading and Resource Allocation in Multi-Access Edge Computing
by Chen Zhang, Celimuge Wu, Min Lin, Yangfei Lin and William Liu
Future Internet 2024, 16(1), 19; https://doi.org/10.3390/fi16010019 - 2 Jan 2024
Cited by 15 | Viewed by 5117
Abstract
In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. [...] Read more.
In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. However, the inherent complexity and variability of MEC networks pose significant challenges in computational offloading decisions. To tackle this problem, we propose a proximal policy optimization (PPO)-based Device-to-Device (D2D)-assisted computation offloading and resource allocation scheme. We construct a realistic MEC network environment and develop a Markov decision process (MDP) model that minimizes time loss and energy consumption. The integration of a D2D communication-based offloading framework allows for collaborative task offloading between end devices and MEC servers, enhancing both resource utilization and computational efficiency. The MDP model is solved using the PPO algorithm in deep reinforcement learning to derive an optimal policy for offloading and resource allocation. Extensive comparative analysis with three benchmarked approaches has confirmed our scheme’s superior performance in latency, energy consumption, and algorithmic convergence, demonstrating its potential to improve MEC network operations in the context of emerging 5G and beyond technologies. Full article
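The PPO algorithm at the heart of the scheme described above optimizes a clipped surrogate objective. The snippet below shows that objective in isolation as a minimal PyTorch sketch; the offloading-specific state, action, and reward definitions of the paper are not reproduced here, and the batch values are illustrative.

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate objective of Proximal Policy Optimization (returned as a loss to minimize)."""
    ratio = torch.exp(new_logp - old_logp)          # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Example: log-probabilities and advantages for a batch of offloading decisions.
new_logp = torch.randn(32, requires_grad=True)
old_logp = torch.randn(32)
advantages = torch.randn(32)
loss = ppo_clip_loss(new_logp, old_logp, advantages)
loss.backward()
print(loss.item())
```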

34 pages, 2309 KB  
Review
Edge AI for Early Detection of Chronic Diseases and the Spread of Infectious Diseases: Opportunities, Challenges, and Future Directions
by Elarbi Badidi
Future Internet 2023, 15(11), 370; https://doi.org/10.3390/fi15110370 - 18 Nov 2023
Cited by 39 | Viewed by 20036
Abstract
Edge AI, an interdisciplinary technology that enables distributed intelligence with edge devices, is quickly becoming a critical component in early health prediction. Edge AI encompasses data analytics and artificial intelligence (AI) using machine learning, deep learning, and federated learning models deployed and executed [...] Read more.
Edge AI, an interdisciplinary technology that enables distributed intelligence with edge devices, is quickly becoming a critical component in early health prediction. Edge AI encompasses data analytics and artificial intelligence (AI) using machine learning, deep learning, and federated learning models deployed and executed at the edge of the network, far from centralized data centers. AI enables the careful analysis of large datasets derived from multiple sources, including electronic health records, wearable devices, and demographic information, making it possible to identify intricate patterns and predict a person’s future health. Federated learning, a novel approach in AI, further enhances this prediction by enabling collaborative training of AI models on distributed edge devices while maintaining privacy. Using edge computing, data can be processed and analyzed locally, reducing latency and enabling instant decision making. This article reviews the role of Edge AI in early health prediction and highlights its potential to improve public health. Topics covered include the use of AI algorithms for early detection of chronic diseases such as diabetes and cancer and the use of edge computing in wearable devices to detect the spread of infectious diseases. In addition to discussing the challenges and limitations of Edge AI in early health prediction, this article emphasizes future research directions to address these concerns, integrate Edge AI with existing healthcare systems, and explore the full potential of these technologies in improving public health. Full article
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)
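Federated learning is singled out above as the privacy-preserving enabler of collaborative health models. As background, here is a minimal sketch of the standard FedAvg aggregation step, in which a server averages client model weights in proportion to their local sample counts; it illustrates the general technique rather than any specific system covered by the review, and the toy layer shapes are assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (one list of arrays per client)."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Example: three edge devices holding different amounts of local health data.
clients = [[np.ones((2, 2)) * k, np.ones(2) * k] for k in (1.0, 2.0, 3.0)]
global_model = fedavg(clients, client_sizes=[100, 50, 50])
print(global_model[0])   # weighted toward client 1, which holds the most data
```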

23 pages, 14269 KB  
Article
Implementation and Evaluation of a Federated Learning Framework on Raspberry PI Platforms for IoT 6G Applications
by Lorenzo Ridolfi, David Naseh, Swapnil Sadashiv Shinde and Daniele Tarchi
Future Internet 2023, 15(11), 358; https://doi.org/10.3390/fi15110358 - 31 Oct 2023
Cited by 8 | Viewed by 4263
Abstract
With the advent of 6G technology, the proliferation of interconnected devices necessitates a robust, fully connected intelligence network. Federated Learning (FL) stands as a key distributed learning technique, showing promise in recent advancements. However, the integration of novel Internet of Things (IoT) applications [...] Read more.
With the advent of 6G technology, the proliferation of interconnected devices necessitates a robust, fully connected intelligence network. Federated Learning (FL) stands as a key distributed learning technique, showing promise in recent advancements. However, the integration of novel Internet of Things (IoT) applications and virtualization technologies has introduced diverse and heterogeneous devices into wireless networks. This diversity encompasses variations in computation, communication, storage resources, training data, and communication modes among connected nodes. In this context, our study presents a pivotal contribution by analyzing and implementing FL processes tailored for 6G standards. Our work defines a practical FL platform, employing Raspberry Pi devices and virtual machines as client nodes, with a Windows PC serving as a parameter server. We tackle the image classification challenge, implementing the FL model via PyTorch, augmented by the specialized FL library, Flower. Notably, our analysis delves into the impact of computational resources, data availability, and heating issues across heterogeneous device sets. Additionally, we address knowledge transfer and employ pre-trained networks in our FL performance evaluation. This research underscores the indispensable role of artificial intelligence in IoT scenarios within the 6G landscape, providing a comprehensive framework for FL implementation across diverse and heterogeneous devices. Full article
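To make the described Raspberry Pi setup concrete, the sketch below outlines a Flower client of the kind such a node might run, with a deliberately tiny model and in-memory data standing in for a real dataset partition. The server address is a placeholder, and the entry points follow the Flower 1.x API (`NumPyClient`, `start_numpy_client`), which later releases rename; treat this as a schematic rather than a drop-in client.

```python
import flwr as fl
import torch
import torch.nn as nn

# Tiny illustrative model and local data; a real deployment would load a dataset partition.
model = nn.Sequential(nn.Linear(10, 2))
x_local, y_local = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss_fn = nn.CrossEntropyLoss()

class PiClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [p.detach().numpy() for p in model.parameters()]

    def set_parameters(self, parameters):
        for p, new in zip(model.parameters(), parameters):
            p.data = torch.tensor(new, dtype=p.dtype)

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        opt.zero_grad()
        loss_fn(model(x_local), y_local).backward()   # one local step per round (illustrative)
        opt.step()
        return self.get_parameters(config), len(x_local), {}

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        with torch.no_grad():
            loss = loss_fn(model(x_local), y_local).item()
        return loss, len(x_local), {}

# Connects to the parameter server (address is a placeholder).
fl.client.start_numpy_client(server_address="192.168.0.10:8080", client=PiClient())
```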

32 pages, 419 KB  
Article
The 6G Ecosystem as Support for IoE and Private Networks: Vision, Requirements, and Challenges
by Carlos Serôdio, José Cunha, Guillermo Candela, Santiago Rodriguez, Xosé Ramón Sousa and Frederico Branco
Future Internet 2023, 15(11), 348; https://doi.org/10.3390/fi15110348 - 25 Oct 2023
Cited by 51 | Viewed by 6234
Abstract
The emergence of the sixth generation of cellular systems (6G) signals a transformative era and ecosystem for mobile communications, driven by demands from technologies like the internet of everything (IoE), V2X communications, and factory automation. To support this connectivity, mission-critical applications are emerging [...] Read more.
The emergence of the sixth generation of cellular systems (6G) signals a transformative era and ecosystem for mobile communications, driven by demands from technologies like the internet of everything (IoE), V2X communications, and factory automation. To support this connectivity, mission-critical applications are emerging with challenging network requirements. The primary goals of 6G include providing sophisticated and high-quality services, further-enhanced mobile broadband (feMBB), extremely reliable and low-latency communication (ERLLC), long-distance and high-mobility communications (LDHMC), ultra-massive machine-type communications (umMTC), extremely low-power communications (ELPC), holographic communications, and quality of experience (QoE), grounded in incorporating massive broad-bandwidth machine-type (mBBMT), mobile broad-bandwidth and low-latency (MBBLL), and massive low-latency machine-type (mLLMT) communications. In attaining its objectives, 6G faces challenges that demand inventive solutions, incorporating AI, softwarization, cloudification, virtualization, and slicing features. Technologies like network function virtualization (NFV), network slicing, and software-defined networking (SDN) play pivotal roles in this integration, which facilitates efficient resource utilization, responsive service provisioning, expanded coverage, enhanced network reliability, increased capacity, densification, heightened availability, safety, security, and reduced energy consumption. It presents innovative network infrastructure concepts, such as resource-as-a-service (RaaS) and infrastructure-as-a-service (IaaS), featuring management and service orchestration mechanisms. This includes nomadic networks, AI-aware networking strategies, and dynamic management of diverse network resources. This paper provides an in-depth survey of the wireless evolution leading to 6G networks, addressing future issues and challenges associated with 6G technology to support V2X environments, considering present challenges in architecture, spectrum, air interface, reliability, availability, density, flexibility, mobility, and security. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)
26 pages, 4052 KB  
Article
Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities
by Edisa Lozić and Benjamin Štular
Future Internet 2023, 15(10), 336; https://doi.org/10.3390/fi15100336 - 13 Oct 2023
Cited by 48 | Viewed by 14421
Abstract
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots [...] Read more.
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software. Full article
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

27 pages, 312 KB  
Article
A New Approach to Web Application Security: Utilizing GPT Language Models for Source Code Inspection
by Zoltán Szabó and Vilmos Bilicki
Future Internet 2023, 15(10), 326; https://doi.org/10.3390/fi15100326 - 28 Sep 2023
Cited by 20 | Viewed by 6389
Abstract
Due to the proliferation of large language models (LLMs) and their widespread use in applications such as ChatGPT, there has been a significant increase in interest in AI over the past year. Multiple researchers have raised the question: how will AI be applied [...] Read more.
Due to the proliferation of large language models (LLMs) and their widespread use in applications such as ChatGPT, there has been a significant increase in interest in AI over the past year. Multiple researchers have raised the question: how will AI be applied and in what areas? Programming, including the generation, interpretation, analysis, and documentation of static program code based on prompts, is one of the most promising fields. With the GPT API, we have explored a new aspect of this: static analysis of the source code of front-end applications at the endpoints of the data path. Our focus was the detection of the CWE-653 vulnerability—inadequately isolated sensitive code segments that could lead to unauthorized access or data leakage. This type of vulnerability detection consists of the detection of code segments dealing with sensitive data and the categorization of the isolation and protection levels of those segments that were previously not feasible without human intervention. However, we believed that the interpretive capabilities of GPT models could be explored to create a set of prompts to detect these cases on a file-by-file basis for the applications under study, and the efficiency of the method could pave the way for additional analysis tasks that were previously unavailable for automation. In the introduction to our paper, we characterize in detail the problem space of vulnerability and weakness detection, the challenges of the domain, and the advances that have been achieved in similarly complex areas using GPT or other LLMs. Then, we present our methodology, which includes our classification of sensitive data and protection levels. This is followed by the process of preprocessing, analyzing, and evaluating static code. This was achieved through a series of GPT prompts containing parts of static source code, utilizing few-shot examples and chain-of-thought techniques that detected sensitive code segments and mapped the complex code base into manageable JSON structures. Finally, we present our findings and evaluation of the open source project analysis, comparing the results of the GPT-based pipelines with manual evaluations, highlighting that the field yields a high research value. The results show, among other findings, a vulnerability detection rate of 88.76% for this particular type of model. Full article
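The prompting pipeline described above can be pictured with a small sketch: build a few-shot, chain-of-thought prompt around a source file and parse the model's JSON answer. The `query_llm` function below is a hypothetical placeholder for whichever GPT API client is used, and the prompt text and JSON fields are illustrative, not the authors' actual prompts or schema.

```python
import json

FEW_SHOT_EXAMPLE = """Example file: stores an auth token in localStorage.
Reasoning: the token is sensitive and readable by any script on the page.
Answer: {"sensitive_segments": [{"lines": "12-18", "data_type": "auth_token", "isolation": "none"}]}"""

def build_prompt(source_code: str) -> str:
    return (
        "You review front-end source files for CWE-653 (improperly isolated sensitive code).\n"
        "Think step by step, then answer with JSON only.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n\nFile under review:\n{source_code}\n\nAnswer:"
    )

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to the GPT API; returns the model's raw text answer."""
    raise NotImplementedError

def inspect_file(source_code: str) -> dict:
    raw = query_llm(build_prompt(source_code))
    # Each file is mapped to a manageable JSON structure for later aggregation.
    return json.loads(raw)
```
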
18 pages, 1957 KB  
Article
An Enhanced Minimax Loss Function Technique in Generative Adversarial Network for Ransomware Behavior Prediction
by Mazen Gazzan and Frederick T. Sheldon
Future Internet 2023, 15(10), 318; https://doi.org/10.3390/fi15100318 - 22 Sep 2023
Cited by 14 | Viewed by 4434
Abstract
Recent ransomware attacks threaten not only personal files but also critical infrastructure like smart grids, necessitating early detection before encryption occurs. Current methods, reliant on pre-encryption data, suffer from insufficient and rapidly outdated attack patterns, despite efforts to focus on select features. Such [...] Read more.
Recent ransomware attacks threaten not only personal files but also critical infrastructure like smart grids, necessitating early detection before encryption occurs. Current methods, reliant on pre-encryption data, suffer from insufficient and rapidly outdated attack patterns, despite efforts to focus on select features. Such an approach assumes that the same features remain unchanged. This approach proves ineffective due to the polymorphic and metamorphic characteristics of ransomware, which generate unique attack patterns for each new target, particularly in the pre-encryption phase where evasiveness is prioritized. As a result, the selected features quickly become obsolete. Therefore, this study proposes an enhanced Bi-Gradual Minimax (BGM) loss function for the Generative Adversarial Network (GAN) algorithm that compensates for the insufficiency of attack patterns in representing the polymorphic behavior at the earlier phases of the ransomware lifecycle. Unlike existing GAN-based models, the BGM-GAN gradually minimizes the maximum loss of the generator and discriminator in the network. This allows the generator to create artificial patterns that resemble the pre-encryption data distribution. The generator is used to craft evasive adversarial patterns and add them to the original data. Then, the generator and discriminator compete to optimize their weights during the training phase such that the generator produces realistic attack patterns, while the discriminator endeavors to distinguish between the real and crafted patterns. The experimental results show that the proposed BGM-GAN reaches a maximum accuracy of 0.98, a recall of 0.96, and a minimum false positive rate of 0.14, all of which outperform those obtained by existing works. The application of BGM-GAN can be extended to the early detection of malware and other types of attacks. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
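For reference, the standard GAN minimax objective that the proposed BGM loss refines is shown below; the paper's contribution lies in how the maximum loss is minimized gradually during training, which this generic formulation does not capture.

```latex
\min_{G}\,\max_{D}\; V(D,G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```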

16 pages, 1050 KB  
Review
Enhancing E-Learning with Blockchain: Characteristics, Projects, and Emerging Trends
by Mahmoud Bidry, Abdellah Ouaguid and Mohamed Hanine
Future Internet 2023, 15(9), 293; https://doi.org/10.3390/fi15090293 - 28 Aug 2023
Cited by 20 | Viewed by 5802
Abstract
Blockchain represents a decentralized and distributed ledger technology, ensuring transparent and secure transaction recording across networks. This innovative technology offers several benefits, including increased security, trust, and transparency, making it suitable for a wide range of applications. In the last few years, there [...] Read more.
Blockchain represents a decentralized and distributed ledger technology, ensuring transparent and secure transaction recording across networks. This innovative technology offers several benefits, including increased security, trust, and transparency, making it suitable for a wide range of applications. In the last few years, there has been a growing interest in investigating the potential of Blockchain technology to enhance diverse fields, such as e-learning. In this research, we undertook a systematic literature review to explore the potential of Blockchain technology in enhancing the e-learning domain. Our research focused on four main questions: (1) What potential characteristics of Blockchain can contribute to enhancing e-learning? (2) What are the existing Blockchain projects dedicated to e-learning? (3) What are the limitations of existing projects? (4) What are the future trends in Blockchain-related research that will impact e-learning? The results showed that Blockchain technology has several characteristics that could benefit e-learning, including immutability, transparency, decentralization, security, and traceability. We also identified several existing Blockchain projects dedicated to e-learning and discussed their potential to revolutionize learning by providing more transparency, security, and effectiveness. However, our research also revealed many limitations and challenges that must be addressed to realize Blockchain technology’s potential in e-learning. Full article
(This article belongs to the Special Issue Future Prospects and Advancements in Blockchain Technology)

15 pages, 644 KB  
Review
Generative AI in Medicine and Healthcare: Promises, Opportunities and Challenges
by Peng Zhang and Maged N. Kamel Boulos
Future Internet 2023, 15(9), 286; https://doi.org/10.3390/fi15090286 - 24 Aug 2023
Cited by 222 | Viewed by 53869
Abstract
Generative AI (artificial intelligence) refers to algorithms and models, such as OpenAI’s ChatGPT, that can be prompted to generate various types of content. In this narrative review, we present a selection of representative examples of generative AI applications in medicine and healthcare. We [...] Read more.
Generative AI (artificial intelligence) refers to algorithms and models, such as OpenAI’s ChatGPT, that can be prompted to generate various types of content. In this narrative review, we present a selection of representative examples of generative AI applications in medicine and healthcare. We then briefly discuss some associated issues, such as trust, veracity, clinical safety and reliability, privacy, copyrights, ownership, and opportunities, e.g., AI-driven conversational user interfaces for friendlier human-computer interaction. We conclude that generative AI will play an increasingly important role in medicine and healthcare as it further evolves and gets better tailored to the unique settings and requirements of the medical domain and as the laws, policies and regulatory frameworks surrounding its use start taking shape. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things II)

21 pages, 538 KB  
Article
Prospects of Cybersecurity in Smart Cities
by Fernando Almeida
Future Internet 2023, 15(9), 285; https://doi.org/10.3390/fi15090285 - 23 Aug 2023
Cited by 22 | Viewed by 9296
Abstract
The complex and interconnected infrastructure of smart cities offers several opportunities for attackers to exploit vulnerabilities and carry out cyberattacks that can have serious consequences for the functioning of cities’ critical infrastructures. This study aims to address this phenomenon and characterize the dimensions [...] Read more.
The complex and interconnected infrastructure of smart cities offers several opportunities for attackers to exploit vulnerabilities and carry out cyberattacks that can have serious consequences for the functioning of cities’ critical infrastructures. This study aims to address this phenomenon and characterize the dimensions of security risks in smart cities and present mitigation proposals to address these risks. The study adopts a qualitative methodology through the identification of 62 European research projects in the field of cybersecurity in smart cities, which are underway during the period from 2022 to 2027. Compared to previous studies, this work provides a comprehensive view of security risks from the perspective of multiple universities, research centers, and companies participating in European projects. The findings of this study offer relevant scientific contributions by identifying 7 dimensions and 31 sub-dimensions of cybersecurity risks in smart cities and proposing 24 mitigation strategies to face these security challenges. Furthermore, this study explores emerging cybersecurity issues to which smart cities are exposed by the increasing proliferation of new technologies and standards. Full article
(This article belongs to the Special Issue Cyber Security Challenges in the New Smart Worlds)

38 pages, 7280 KB  
Article
SEDIA: A Platform for Semantically Enriched IoT Data Integration and Development of Smart City Applications
by Dimitrios Lymperis and Christos Goumopoulos
Future Internet 2023, 15(8), 276; https://doi.org/10.3390/fi15080276 - 18 Aug 2023
Cited by 9 | Viewed by 4698
Abstract
The development of smart city applications often encounters a variety of challenges. These include the need to address complex requirements such as integrating diverse data sources and incorporating geographical data that reflect the physical urban environment. Platforms designed for smart cities hold a [...] Read more.
The development of smart city applications often encounters a variety of challenges. These include the need to address complex requirements such as integrating diverse data sources and incorporating geographical data that reflect the physical urban environment. Platforms designed for smart cities hold a pivotal position in materializing these applications, given that they offer a suite of high-level services, which can be repurposed by developers. Although a variety of platforms are available to aid the creation of smart city applications, most fail to couple their services with geographical data, do not offer the ability to execute semantic queries on the available data, and possess restrictions that could impede the development process. This paper introduces SEDIA, a platform for developing smart applications based on diverse data sources, including geographical information, to support a semantically enriched data model for effective data analysis and integration. It also discusses the efficacy of SEDIA in a proof-of-concept smart city application related to air quality monitoring. The platform utilizes ontology classes and properties to semantically annotate collected data, and the Neo4j graph database facilitates the recognition of patterns and relationships within the data. This research also offers empirical data demonstrating the performance evaluation of SEDIA. These contributions collectively advance our understanding of semantically enriched data integration within the realm of smart city applications. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
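As a small illustration of how semantically annotated observations can be queried in a graph database such as the one SEDIA uses, the sketch below runs a Cypher query through the official Neo4j Python driver. The node labels, relationship types, property names, and credentials are assumptions for illustration, not SEDIA's actual ontology or schema.

```python
from neo4j import GraphDatabase

# Hypothetical connection details and schema, for illustration only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
MATCH (s:Sensor)-[:LOCATED_IN]->(d:District {name: $district})
MATCH (s)-[:PRODUCED]->(o:Observation {property: 'PM2.5'})
WHERE o.value > $threshold
RETURN s.id AS sensor, o.value AS pm25, o.timestamp AS at
ORDER BY o.value DESC
"""

with driver.session() as session:
    # Find air-quality observations above a threshold in one neighborhood.
    for record in session.run(CYPHER, district="Harbor", threshold=25.0):
        print(record["sensor"], record["pm25"], record["at"])

driver.close()
```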

60 pages, 14922 KB  
Review
The Power of Generative AI: A Review of Requirements, Models, Input–Output Formats, Evaluation Metrics, and Challenges
by Ajay Bandi, Pydi Venkata Satya Ramesh Adapa and Yudu Eswar Vinay Pratap Kumar Kuchi
Future Internet 2023, 15(8), 260; https://doi.org/10.3390/fi15080260 - 31 Jul 2023
Cited by 338 | Viewed by 71017
Abstract
Generative artificial intelligence (AI) has emerged as a powerful technology with numerous applications in various domains. There is a need to identify the requirements and evaluation metrics for generative AI models designed for specific tasks. The purpose of the research aims to investigate [...] Read more.
Generative artificial intelligence (AI) has emerged as a powerful technology with numerous applications in various domains. There is a need to identify the requirements and evaluation metrics for generative AI models designed for specific tasks. This research aims to investigate the fundamental aspects of generative AI systems, including their requirements, models, input–output formats, and evaluation metrics. The study addresses key research questions and presents comprehensive insights to guide researchers, developers, and practitioners in the field. Firstly, the requirements necessary for implementing generative AI systems are examined and categorized into three distinct categories: hardware, software, and user experience. Furthermore, the study explores the different types of generative AI models described in the literature by presenting a taxonomy based on architectural characteristics, such as variational autoencoders (VAEs), generative adversarial networks (GANs), diffusion models, transformers, language models, normalizing flow models, and hybrid models. A comprehensive classification of input and output formats used in generative AI systems is also provided. Moreover, the research proposes a classification system based on output types and discusses commonly used evaluation metrics in generative AI. The findings contribute to advancements in the field, enabling researchers, developers, and practitioners to effectively implement and evaluate generative AI models for various applications. The significance of the research lies in understanding that generative AI system requirements are crucial for effective planning, design, and optimal performance. A taxonomy of models aids in selecting suitable options and driving advancements. Classifying input–output formats enables leveraging diverse formats for customized systems, while evaluation metrics establish standardized methods to assess model quality and performance. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

26 pages, 1726 KB  
Article
Towards Efficient Resource Allocation for Federated Learning in Virtualized Managed Environments
by Fotis Nikolaidis, Moysis Symeonides and Demetris Trihinas
Future Internet 2023, 15(8), 261; https://doi.org/10.3390/fi15080261 - 31 Jul 2023
Cited by 17 | Viewed by 5339
Abstract
Federated learning (FL) is a transformative approach to Machine Learning that enables the training of a shared model without transferring private data to a central location. This decentralized training paradigm has found particular applicability in edge computing, where IoT devices and edge nodes [...] Read more.
Federated learning (FL) is a transformative approach to Machine Learning that enables the training of a shared model without transferring private data to a central location. This decentralized training paradigm has found particular applicability in edge computing, where IoT devices and edge nodes often possess limited computational power, network bandwidth, and energy resources. While various techniques have been developed to optimize the FL training process, an important question remains unanswered: how should resources be allocated in the training workflow? To address this question, it is crucial to understand the nature of these resources. In physical environments, the allocation is typically performed at the node level, with the entire node dedicated to executing a single workload. In contrast, virtualized environments allow for the dynamic partitioning of a node into containerized units that can adapt to changing workloads. Consequently, the new question that arises is: how can a physical node be partitioned into virtual resources to maximize the efficiency of the FL process? To answer this, we investigate various resource allocation methods that consider factors such as computational and network capabilities, the complexity of datasets, as well as the specific characteristics of the FL workflow and ML backend. We explore two scenarios: (i) running FL over a finite number of testbed nodes and (ii) hosting multiple parallel FL workflows on the same set of testbed nodes. Our findings reveal that the default configurations of state-of-the-art cloud orchestrators are sub-optimal when orchestrating FL workflows. Additionally, we demonstrate that different libraries and ML models exhibit diverse computational footprints. Building upon these insights, we discuss methods to mitigate computational interferences and enhance the overall performance of the FL pipeline execution. Full article

31 pages, 602 KB  
Review
A Review of ARIMA vs. Machine Learning Approaches for Time Series Forecasting in Data Driven Networks
by Vaia I. Kontopoulou, Athanasios D. Panagopoulos, Ioannis Kakkos and George K. Matsopoulos
Future Internet 2023, 15(8), 255; https://doi.org/10.3390/fi15080255 - 30 Jul 2023
Cited by 166 | Viewed by 41154
Abstract
In the broad scientific field of time series forecasting, the ARIMA models and their variants have been widely applied for half a century now due to their mathematical simplicity and flexibility in application. However, with the recent advances in the development and efficient [...] Read more.
In the broad scientific field of time series forecasting, the ARIMA models and their variants have been widely applied for half a century now due to their mathematical simplicity and flexibility in application. However, with the recent advances in the development and efficient deployment of artificial intelligence models and techniques, the view is rapidly changing, with a shift towards machine and deep learning approaches becoming apparent, even without a complete evaluation of the superiority of the new approach over the classic statistical algorithms. Our work constitutes an extensive review of the published scientific literature regarding the comparison of ARIMA and machine learning algorithms applied to time series forecasting problems, as well as the combination of these two approaches in hybrid statistical-AI models in a wide variety of data applications (finance, health, weather, utilities, and network traffic prediction). Our review has shown that the AI algorithms display better prediction performance in most applications, with a few notable exceptions analyzed in our Discussion and Conclusions sections, while the hybrid statistical-AI models steadily outperform their individual parts, utilizing the best algorithmic features of both worlds. Full article
(This article belongs to the Special Issue Smart Data and Systems for the Internet of Things)
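To ground the comparison discussed above, the sketch below fits a classical ARIMA model with statsmodels and a machine-learning regressor on lagged features with scikit-learn, on the same synthetic series. It mirrors the typical experimental setup of the reviewed studies rather than reproducing any single one of them; the series, model orders, and lag count are illustrative choices.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) + 0.05 * np.arange(300)   # synthetic trending series
train, test = y[:280], y[280:]

# Classical statistical model.
arima_fc = ARIMA(train, order=(2, 1, 2)).fit().forecast(steps=len(test))

# ML model on lagged values (a common way to frame forecasting as regression).
def lag_matrix(series, n_lags=5):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X_tr, y_tr = lag_matrix(train)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Recursive one-step-ahead forecast with the ML model.
history = list(train[-5:])
rf_fc = []
for _ in range(len(test)):
    pred = rf.predict(np.array(history[-5:]).reshape(1, -1))[0]
    rf_fc.append(pred)
    history.append(pred)

print("ARIMA MAE:", np.mean(np.abs(arima_fc - test)))
print("RF MAE:   ", np.mean(np.abs(np.array(rf_fc) - test)))
```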

30 pages, 583 KB  
Review
Task Allocation Methods and Optimization Techniques in Edge Computing: A Systematic Review of the Literature
by Vasilios Patsias, Petros Amanatidis, Dimitris Karampatzakis, Thomas Lagkas, Kalliopi Michalakopoulou and Alexandros Nikitas
Future Internet 2023, 15(8), 254; https://doi.org/10.3390/fi15080254 - 28 Jul 2023
Cited by 33 | Viewed by 7917
Abstract
Task allocation in edge computing refers to the process of distributing tasks among the various nodes in an edge computing network. The main challenges in task allocation include determining the optimal location for each task based on the requirements such as processing power, [...] Read more.
Task allocation in edge computing refers to the process of distributing tasks among the various nodes in an edge computing network. The main challenges in task allocation include determining the optimal location for each task based on the requirements such as processing power, storage, and network bandwidth, and adapting to the dynamic nature of the network. Different approaches for task allocation include centralized, decentralized, hybrid, and machine learning algorithms. Each approach has its strengths and weaknesses and the choice of approach will depend on the specific requirements of the application. In more detail, the selection of the optimal task allocation method depends on the edge computing architecture and configuration type, like mobile edge computing (MEC), cloud-edge, fog computing, peer-to-peer edge computing, etc. Thus, task allocation in edge computing is a complex, diverse, and challenging problem that requires a balance of trade-offs between multiple conflicting objectives such as energy efficiency, data privacy, security, latency, and quality of service (QoS). Recently, an increased number of research studies have emerged regarding the performance evaluation and optimization of task allocation on edge devices. While several survey articles have described the current state-of-the-art task allocation methods, this work focuses on comparing and contrasting different task allocation methods, optimization algorithms, as well as the network types that are most frequently used in edge computing systems. Full article
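As a concrete baseline for the task-allocation problem framed above, the following sketch implements a simple greedy heuristic that assigns each task to the node with the lowest estimated completion time (transfer plus compute, penalized by queued work). It is a textbook-style baseline against which the surveyed centralized, decentralized, hybrid, and learning-based methods can be compared, not a method taken from the review; the node and task parameters are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_capacity: float     # available compute (e.g., GFLOP/s)
    bandwidth: float        # link rate to the task source (Mbit/s)
    load: float = 0.0       # compute already assigned to this node

@dataclass
class Task:
    name: str
    cycles: float           # required compute (GFLOP)
    data_size: float        # input data to transfer (Mbit)

def completion_time(task, node):
    # Transfer time plus compute time, accounting for work already queued on the node.
    return task.data_size / node.bandwidth + (node.load + task.cycles) / node.cpu_capacity

def greedy_allocate(tasks, nodes):
    plan = {}
    for task in sorted(tasks, key=lambda t: t.cycles, reverse=True):   # largest tasks first
        best = min(nodes, key=lambda n: completion_time(task, n))
        best.load += task.cycles
        plan[task.name] = best.name
    return plan

nodes = [Node("edge-1", 8.0, 100.0), Node("edge-2", 4.0, 300.0)]
tasks = [Task("t1", 6.0, 50.0), Task("t2", 2.0, 200.0), Task("t3", 3.0, 20.0)]
print(greedy_allocate(tasks, nodes))
```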
