Future Internet, Volume 16, Issue 1 (January 2024) – 35 articles

Cover Story: Exploring the security of widely used open-source 5G projects, such as Open5GS and OpenAirInterface, is an important step in uncovering security gaps. In particular, an experimental study of externally exposed Network Functions, such as the AMF and the NRF/NEF, helps clarify the pivotal role of Mobile Network Operators (MNOs) in implementing robust security measures. Amid the shift to Network Function Virtualization, the study underscores secure development practices as a means of strengthening the integrity of 5G network functions, highlights the importance of scrutinizing security in open-source 5G projects, and offers insights that can help MNOs establish effective security measures. The empirical investigation identifies potential vulnerabilities, guiding future enhancements and standards releases.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 3048 KiB  
Article
A Bee Colony-Based Optimized Searching Mechanism in the Internet of Things
by Muhammad Sher Ramzan, Anees Asghar, Ata Ullah, Fawaz Alsolami and Iftikhar Ahmad
Future Internet 2024, 16(1), 35; https://doi.org/10.3390/fi16010035 - 22 Jan 2024
Viewed by 1279
Abstract
The Internet of Things (IoT) consists of complex, dynamically aggregated elements or smart entities that need decentralized supervision for data exchange across different networks. The artificial bee colony (ABC) algorithm is utilized in optimization problems over the big data in IoT, cloud, and central repositories. Its main limitation lies in the searching mechanism: every single food site is compared with every other food site to find the best solution in the neighboring regions. This requires an extensive number of redundant comparisons, which results in a slower convergence rate, greater time consumption, and increased delays. This paper presents a solution that optimizes search operations with an enhanced ABC (E-ABC) approach. The proposed algorithm compares the best food sites with neighboring sites to exclude poor sources, yielding an efficient mechanism in which the number of redundant comparisons is decreased during the employed bee phase and the onlooker bee phase. The proposed algorithm is implemented in a replication scenario to validate its performance in terms of the mean objective function values for different functions, as well as the probability of availability and the response time. The results prove the superiority of the E-ABC over its counterparts. Full article
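The search-reduction idea is easy to prototype. Below is a minimal Python sketch, under the assumption that E-ABC ranks food sites once per cycle and refines only the best ones against a single random neighbor each, while scouts rebuild the worst sites; all names, parameters, and the sphere benchmark are illustrative, not the paper's code.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (lower is better)."""
    return sum(v * v for v in x)

def neighbor(x, bounds):
    """Perturb one random dimension, ABC-style."""
    j = random.randrange(len(x))
    y = list(x)
    y[j] = min(max(y[j] + random.uniform(-1, 1) * y[j], bounds[0]), bounds[1])
    return y

def e_abc(f, dim=5, n_sites=20, n_best=5, iters=200, bounds=(-5.0, 5.0)):
    sites = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(n_sites)]
    for _ in range(iters):
        # Rank once, then refine only the n_best sites against a single
        # neighbour each -- avoiding classic ABC's all-pairs comparisons.
        sites.sort(key=f)
        for i in range(n_best):
            cand = neighbor(sites[i], bounds)
            if f(cand) < f(sites[i]):
                sites[i] = cand
        # Scout phase: replace the worst sites with fresh random ones.
        for i in range(n_sites - n_best, n_sites):
            sites[i] = [random.uniform(*bounds) for _ in range(dim)]
    return min(sites, key=f)

best = e_abc(sphere)
print(sphere(best))   # approaches 0 as the search converges
```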
20 pages, 6535 KiB  
Article
An Innovative Information Hiding Scheme Based on Block-Wise Pixel Reordering
by Jui-Chuan Liu, Heng-Xiao Chi, Ching-Chun Chang and Chin-Chen Chang
Future Internet 2024, 16(1), 34; https://doi.org/10.3390/fi16010034 - 22 Jan 2024
Viewed by 1225
Abstract
Information has been uploaded to and downloaded from the Internet day in and day out ever since we immersed ourselves in it. Data security has therefore become an area demanding high attention, and one of the most efficient techniques for protecting data is data hiding. Recent studies have shown that the indices of a codebook can be reordered to hide secret bits, and the hiding capacity of such codeword index reordering schemes increases with the size of the codebook. Since the codewords in the codebook are not modified, the visual quality of the compressed images is retained. We propose a novel scheme that uses the fundamental principle of codeword index reordering to hide secret data in encrypted images. Our experimental results show that the obtained embedding capacity of 197,888 is larger than that of other state-of-the-art schemes. Secret data can be extracted when a receiver owns the data hiding key, and the image can be recovered when a receiver owns the encryption key. Full article
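The capacity-versus-codebook-size relationship follows from a counting argument: an ordering of n indices can carry up to ⌊log2 n!⌋ secret bits. A minimal sketch of that rank/unrank principle is given below; it illustrates only the reordering idea, not the authors' full scheme with encryption and image recovery.

```python
from math import factorial

def embed_int_as_order(indices, secret):
    """Unrank: reorder a list of codebook indices so that the ordering
    itself encodes the integer `secret` (requires secret < len(indices)!)."""
    pool = list(indices)
    ordered = []
    for i in range(len(pool), 0, -1):
        pick, secret = divmod(secret, factorial(i - 1))
        ordered.append(pool.pop(pick))
    return ordered

def extract_int_from_order(ordered, base_indices):
    """Rank: recover the hidden integer from the observed ordering."""
    pool = list(base_indices)
    secret = 0
    for p in ordered:
        pick = pool.index(p)
        secret += pick * factorial(len(pool) - 1)
        pool.pop(pick)
    return secret

codebook_indices = list(range(8))   # 8! orderings -> up to 15 hidden bits
stego = embed_int_as_order(codebook_indices, 0b101101100101101)
assert extract_int_from_order(stego, codebook_indices) == 0b101101100101101
```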
27 pages, 2022 KiB  
Review
Overview of Protocols and Standards for Wireless Sensor Networks in Critical Infrastructures
by Spyridon Daousis, Nikolaos Peladarinos, Vasileios Cheimaras, Panagiotis Papageorgas, Dimitrios D. Piromalis and Radu Adrian Munteanu
Future Internet 2024, 16(1), 33; https://doi.org/10.3390/fi16010033 - 21 Jan 2024
Cited by 2 | Viewed by 2017
Abstract
This paper highlights the crucial role of wireless sensor networks (WSNs) in the surveillance and administration of critical infrastructures (CIs), contributing to their reliability, security, and operational efficiency. It starts by detailing the international significance and structural aspects of these infrastructures, notes the traction that wireless networks for industrial applications have gained in the market in recent years, and proceeds to categorize WSNs and examine the protocols and standards of WSNs in demanding environments like critical infrastructures, drawing on the recent literature. The review concentrates on the protocols and standards utilized in WSNs for critical infrastructures and concludes by identifying a notable gap in the literature concerning quality standards for the equipment used in such infrastructures. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)
42 pages, 2733 KiB  
Review
A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks
by Hassan Khazane, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Future Internet 2024, 16(1), 32; https://doi.org/10.3390/fi16010032 - 19 Jan 2024
Cited by 3 | Viewed by 2066
Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
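Many of the attack-generation methodologies such surveys classify are gradient-based perturbations. As a self-contained illustration, the sketch below applies the classic Fast Gradient Sign Method (FGSM) to a toy logistic-regression detector; the model, weights, and data are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """FGSM against a logistic-regression classifier. The gradient of the
    cross-entropy loss w.r.t. the input x is (p - y) * w, so the attack
    perturbs x along the sign of that gradient."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy IDS-style example: the detector scores a malicious sample; FGSM
# nudges the features just enough to push the score down.
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0
x, y = rng.normal(size=4), 1.0          # a sample the detector should flag
x_adv = fgsm(x, y, w, b, eps=0.5)
print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))   # score drops after attack
```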
18 pages, 2860 KiB  
Article
Investigation of Phishing Susceptibility with Explainable Artificial Intelligence
by Zhengyang Fan, Wanru Li, Kathryn Blackmond Laskey and Kuo-Chu Chang
Future Internet 2024, 16(1), 31; https://doi.org/10.3390/fi16010031 - 17 Jan 2024
Viewed by 2300
Abstract
Phishing attacks represent a significant and growing threat in the digital world, affecting individuals and organizations globally. Understanding the various factors that influence susceptibility to phishing is essential for developing more effective strategies to combat this pervasive cybersecurity challenge. Machine learning has become a prevalent method in the study of phishing susceptibility. Most studies in this area have taken one of two approaches: either they explore statistical associations between various factors and susceptibility, or they use complex models such as deep neural networks to predict phishing behavior. However, these approaches have limitations in terms of providing practical insights for individuals to avoid future phishing attacks and delivering personalized explanations regarding their susceptibility to phishing. In this paper, we propose a machine-learning approach that leverages explainable artificial intelligence techniques to examine the influence of human and demographic factors on susceptibility to phishing attacks. The machine learning model yielded an accuracy of 78%, with a recall of 71%, and a precision of 57%. Our analysis reveals that psychological factors such as impulsivity and conscientiousness, as well as appropriate online security habits, significantly affect an individual’s susceptibility to phishing attacks. Furthermore, our individualized case-by-case approach offers personalized recommendations on mitigating the risk of falling prey to phishing exploits, considering the specific circumstances of each individual. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT III)
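As a flavor of how explainability techniques attribute susceptibility to individual factors, the sketch below scores synthetic stand-ins for such features with scikit-learn's permutation importance; the feature names and data are hypothetical, and the paper's own model and XAI method may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical survey features standing in for the paper's data:
# impulsivity, conscientiousness, security-habit score, age group.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] - 0.7 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0   # "fell for phishing" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance gives a per-feature, model-agnostic explanation.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["impulsivity", "conscientiousness",
                        "security_habits", "age_group"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
```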
20 pages, 8168 KiB  
Article
An Imbalanced Sequence Feature Extraction Approach for the Detection of LTE-R Cells with Degraded Communication Performance
by Jiantao Qu, Chunyu Qi and He Meng
Future Internet 2024, 16(1), 30; https://doi.org/10.3390/fi16010030 - 16 Jan 2024
Viewed by 1257
Abstract
Within the Shuo Huang Railway Company (Suning, China), the long-term evolution for railways (LTE-R) network carries core wireless communication services for trains. The communication performance of LTE-R cells directly affects the operational safety of the trains. Therefore, this paper proposes a novel detection method for LTE-R cells with degraded communication performance. Considering that the number of LTE-R cells with degraded communication performance is extremely imbalanced with respect to that of normal cells, and that the communication performance indicator data for each cell are sequence data, we propose a feature extraction neural network structure for imbalanced sequences based on shapelet transformation and a convolutional neural network (CNN). Then, to train the network, we set the optimization objective based on the Fisher criterion. Finally, using a two-stage training method, we obtain a neural network model that can distinguish LTE-R cells with degraded communication performance from normal cells at the feature level. Experiments on a real-world dataset show that the proposed method can accurately detect LTE-R cells with degraded communication performance and has high practical application value. Full article
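The Fisher criterion mentioned as the optimization objective rewards features whose class means are far apart relative to their within-class variance. A small NumPy sketch of that per-feature ratio on imbalanced synthetic data follows; the paper optimizes this criterion inside a shapelet/CNN network, not on raw features as here.

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-feature Fisher criterion: squared distance between class means
    divided by the summed within-class variances."""
    a, b = X[y == 0], X[y == 1]
    between = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    within = a.var(axis=0) + b.var(axis=0)
    return between / (within + 1e-9)

# Imbalanced toy data: 90 normal cells vs. 10 degraded cells, 8 features.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (90, 8)), rng.normal(1.5, 1.0, (10, 8))])
y = np.array([0] * 90 + [1] * 10)
print(fisher_ratio(X, y))   # higher values = more discriminative features
```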
31 pages, 3700 KiB  
Review
Cross-Layer Methods for Ad Hoc Networks—Review and Classification
by Valeriy Ivanov and Maxim Tereshonok
Future Internet 2024, 16(1), 29; https://doi.org/10.3390/fi16010029 - 16 Jan 2024
Viewed by 1427
Abstract
For years, the OSI model was the common network reference model. In ad hoc networks with dynamic topology and difficult radio communication conditions, a gradual departure is under way from the classical OSI model, with its clear delineation of layers (physical, data link, network, transport, application), toward the cross-layer approach, since in such networks the layers strongly influence each other. The cross-layer approach can therefore improve the performance of an ad hoc network by jointly developing protocols that interact across, and collaboratively optimize, multiple layers. The existing classification of cross-layer methods is too complicated because it is based on the whole manifold of layer combinations, regardless of their importance. In this work, we review cross-layer methods for ad hoc networks, propose a new, useful classification of these methods, and outline future research directions for their development. The proposed classification can help to simplify goal-oriented cross-layer protocol development. Full article
23 pages, 933 KiB  
Article
Clustering on the Chicago Array of Things: Spotting Anomalies in the Internet of Things Records
by Kyle DeMedeiros, Chan Young Koh and Abdeltawab Hendawi
Future Internet 2024, 16(1), 28; https://doi.org/10.3390/fi16010028 - 16 Jan 2024
Viewed by 1524
Abstract
The Chicago Array of Things (AoT) is a robust dataset collected from over 100 nodes over four years, each node containing over a dozen sensors. The array comprises a series of Internet of Things (IoT) devices with multiple heterogeneous sensors connected to a processing and storage backbone that collects data from across Chicago, IL, USA. The data include meteorological measurements such as temperature, humidity, and heat, as well as chemical and environmental measurements like CO2 concentration, PM2.5, and light intensity. The AoT sensor network is one of the largest open IoT systems available to researchers. Anomaly detection (AD) in IoT and sensor networks is an important tool for protecting the ever-growing IoT ecosystem from faulty data and sensors, as well as from attacking threats. Surprisingly, in-depth analyses of the Chicago AoT for anomaly detection are rare. Here, we study the viability of the Chicago AoT dataset for anomaly detection by utilizing clustering techniques. We applied K-Means, DBSCAN, and Hierarchical DBSCAN (HDBSCAN) to determine the viability of labeling an unlabeled dataset at the sensor level. The results show that the best-suited clustering algorithm varies with the density of the anomalous readings and the variability of the data points being clustered; however, at the sensor level, the simple K-Means algorithm is better suited than the more complex DBSCAN and HDBSCAN algorithms for spotting specific, at-a-glance anomalies, though it comes with drawbacks. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)
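A minimal version of the sensor-level idea: fit K-Means to a node's (assumed clean) history and score new readings by their distance to the nearest centroid. The data below are synthetic stand-ins for AoT streams, and the cluster count and threshold are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical hourly readings from one AoT-style node: temperature and
# humidity, with a handful of injected sensor faults in the test window.
train = rng.normal(loc=(20.0, 55.0), scale=(2.0, 5.0), size=(1000, 2))
test = rng.normal(loc=(20.0, 55.0), scale=(2.0, 5.0), size=(200, 2))
test[::40] = (60.0, 5.0)                       # stuck-sensor faults

# Fit K-Means on clean history; score new points by distance to their
# nearest centroid and flag the far-away ones as anomalies.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(train)
threshold = np.percentile(km.transform(train).min(axis=1), 99.5)
test_dist = km.transform(test).min(axis=1)
print(np.where(test_dist > threshold)[0])      # indices 0, 40, 80, 120, 160
```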
20 pages, 1128 KiB  
Article
Service Function Chain Deployment Algorithm Based on Deep Reinforcement Learning in Space–Air–Ground Integrated Network
by Xu Feng, Mengyang He, Lei Zhuang, Yanrui Song and Rumeng Peng
Future Internet 2024, 16(1), 27; https://doi.org/10.3390/fi16010027 - 16 Jan 2024
Viewed by 1444
Abstract
The space–air–ground integrated network (SAGIN) is formed by the fusion of ground networks and aircraft networks. It overcomes the coverage limits of terrestrial communication, bringing new opportunities for network communication in remote areas. However, the many heterogeneous devices in SAGIN pose significant challenges for end-to-end resource management, and the limited regional heterogeneous resources also threaten the QoS delivered to users. In this regard, this paper proposes a hierarchical resource management structure for SAGIN, named SAGIN-MEC, based on SDN, NFV, and MEC, aiming to facilitate the systematic management of heterogeneous network resources. Furthermore, to minimize operator deployment costs while ensuring QoS, this paper formulates a resource scheduling optimization model tailored to SAGIN scenarios that minimizes energy consumption. Additionally, we propose a deployment algorithm, named DRL-G, based on heuristics and deep reinforcement learning (DRL), to allocate heterogeneous network resources within SAGIN effectively. Experimental results showed that SAGIN-MEC can reduce the end-to-end delay by 6–15 ms compared to a terrestrial edge network and that, compared to other algorithms, the DRL-G algorithm can improve the service request reception rate by up to 20%. In terms of energy consumption, it reduces the average energy consumption by 4.4% compared to the PG algorithm. Full article
18 pages, 3483 KiB  
Article
Digital Communication and Social Organizations: An Evaluation of the Communication Strategies of the Most-Valued NGOs Worldwide
by Andrea Moreno-Cabanillas, Elizabet Castillero-Ostio and Antonio Castillo-Esparcia
Future Internet 2024, 16(1), 26; https://doi.org/10.3390/fi16010026 - 13 Jan 2024
Viewed by 1703
Abstract
The communication of organizations with their audiences has undergone changes thanks to the Internet. Non-Governmental Organizations (NGOs), as influential groups, are no exception, as much of their activism takes place through grassroots digital lobbying. The consolidation of Web 2.0 has not only provided social organizations with a new and powerful tool for disseminating information but also brought about significant changes in the relationship between nonprofit organizations and their diverse audiences. This has facilitated and improved interaction between them. The purpose of this article is to analyze the level of interactivity implemented on the websites of leading NGOs worldwide and their presence on social networks, with the aim of assessing whether these influential groups are moving towards more dialogic systems in relation to their audience. The results reveal that NGOs have a high degree of interactivity in the tools used to present and disseminate information on their websites. However, not all maintain the same level of interactivity in the resources available for interaction with Internet users, as very few have high interactivity regarding bidirectional resources. It was concluded that international non-governmental organizations still suffer from certain shortcomings in the strategic management of digital communication on their web platforms, while, on the other hand, a strong presence can be noted on the most-popular social networks. Full article
(This article belongs to the Special Issue Social Internet of Things (SIoT))
13 pages, 926 KiB  
Article
Classification Tendency Difference Index Model for Feature Selection and Extraction in Wireless Intrusion Detection
by Chinyang Henry Tseng, Woei-Jiunn Tsaur and Yueh-Mao Shen
Future Internet 2024, 16(1), 25; https://doi.org/10.3390/fi16010025 - 12 Jan 2024
Viewed by 1352
Abstract
In detecting large-scale attacks, deep neural networks (DNNs) are an effective approach that depends on high-quality training data samples. Feature selection and feature extraction are the primary approaches to data quality enhancement for high-accuracy intrusion detection. However, the root causes of their enhancements usually bear only a weak relationship to the differences between normal and attack behaviors in the data samples. We therefore propose a Classification Tendency Difference Index (CTDI) model for feature selection and extraction in intrusion detection. The CTDI model consists of three indexes: Classification Tendency Frequency Difference (CTFD), Classification Tendency Membership Difference (CTMD), and Classification Tendency Distance Difference (CTDD). In the dataset, each feature has many feature values (FVs). In each FV, the normal and attack samples indicate the FV's classification tendency, and CTDI captures the classification tendency differences between the normal and attack samples. CTFD is the frequency difference between the normal and attack samples. By employing fuzzy C-means (FCM) to establish the normal and attack clusters, CTMD is the membership difference between the clusters, and CTDD is the distance difference between the cluster centers. CTDI calculates the index score for each FV and sums the scores of all FVs in a feature to obtain the feature score for each of the three indexes. CTDI also adopts an autoencoder for feature extraction to generate new features from the dataset and calculates the three index scores for the new features. Finally, CTDI sorts the original and new features by each of the three indexes to select the best features, which exhibit the best classification tendency differences between normal and attack samples. The experimental results demonstrate that, on the Aegean WiFi Intrusion Dataset, the CTDI features achieve better detection accuracy when classified by a DNN than those of related works, and the detection enhancements stem from the improved classification tendency differences in the CTDI features. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
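As an illustration of the simplest of the three indexes, the sketch below computes a CTFD-style score per feature, reading the abstract's description literally: for each feature value, take the normal-versus-attack frequency difference, then sum over the feature's values. This is an interpretation of the abstract, not the paper's exact formula.

```python
import pandas as pd

def ctfd_score(feature: pd.Series, label: pd.Series) -> float:
    """CTFD-style score: summed per-FV difference between the normal (0)
    and attack (1) relative frequencies."""
    normal = feature[label == 0].value_counts(normalize=True)
    attack = feature[label == 1].value_counts(normalize=True)
    fvs = normal.index.union(attack.index)
    return sum(abs(normal.get(fv, 0.0) - attack.get(fv, 0.0)) for fv in fvs)

# Features whose values separate the classes score higher and are selected.
df = pd.DataFrame({"f1": [1, 1, 2, 2, 3, 3],
                   "f2": [1, 2, 1, 2, 1, 2],
                   "y":  [0, 0, 0, 1, 1, 1]})
print(ctfd_score(df["f1"], df["y"]))   # ~1.33: f1 tracks the label
print(ctfd_score(df["f2"], df["y"]))   # ~0.67: f2 barely separates classes
```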
34 pages, 4069 KiB  
Article
Blockchain-Based Implementation of National Census as a Supplementary Instrument for Enhanced Transparency, Accountability, Privacy, and Security
by Sana Rasheed and Soulla Louca
Future Internet 2024, 16(1), 24; https://doi.org/10.3390/fi16010024 - 11 Jan 2024
Viewed by 1874
Abstract
A national population census is instrumental in offering a holistic view of a country’s progress, directly influencing policy formulation and strategic planning. Potential flaws in the census system can have detrimental impacts on national development. Our prior research has pinpointed various deficiencies in current census methodologies, including inadequate population coverage, racial and ethnic discrimination, and challenges related to data privacy, security, and distribution. This study aims to address the “missing persons” challenge in the national census population and housing system. The integration of blockchain technology emerges as a promising solution for addressing these identified issues, enhancing the integrity and efficacy of census processes. Building upon our earlier research which examined the national census system of Pakistan, we propose an architecture design incorporating Hyperledger Fabric, performing system sizing for the entire nation count. The Blockchain-Based Implementation of National Census as a Supplementary Instrument for Enhanced Transparency, Accountability, Privacy, and Security (BINC-TAPS) seeks to provide a robust, transparent, scalable, immutable, and tamper-proof solution for conducting national population and housing censuses, while also fostering socio-economic advancements. This paper presents a comprehensive overview of our research, with a primary focus on the implementation of the blockchain-based proposed solution, including prototype testing and the resulting outcomes. Full article
14 pages, 1587 KiB  
Article
Future Sustainable Internet Energy-Defined Networking
by Alex Galis
Future Internet 2024, 16(1), 23; https://doi.org/10.3390/fi16010023 - 9 Jan 2024
Viewed by 1334
Abstract
This paper presents a comprehensive set of design methods for making future Internet networking fully energy-aware and for sustainably minimizing and managing its energy footprint. It includes (a) 41 energy-aware design methods, grouped into Service Operations Support, Management Operations Support, Compute Operations Support, Connectivity/Forwarding Operations Support, Traffic Engineering Methods, Architectural Support for Energy Instrumentation, and Network Configuration; and (b) energy consumption models and energy metrics, which are identified and specified. It also specifies the requirements for energy-defined network compliance, which include energy-measurable network devices supporting several control messages: registration, discovery, provisioning, discharge, monitoring, synchronization, flooding, performance, and pushback. Full article
31 pages, 1418 KiB  
Article
A Novel Semantic IoT Middleware for Secure Data Management: Blockchain and AI-Driven Context Awareness
by Mahmoud Elkhodr, Samiya Khan and Ergun Gide
Future Internet 2024, 16(1), 22; https://doi.org/10.3390/fi16010022 - 7 Jan 2024
Viewed by 1842
Abstract
In the modern digital landscape of the Internet of Things (IoT), data interoperability and heterogeneity present critical challenges, particularly with the increasing complexity of IoT systems and networks. Addressing these challenges, while ensuring data security and user trust, is pivotal. This paper proposes a novel Semantic IoT Middleware (SIM) for healthcare. The architecture of this middleware comprises the following main processes: data generation, semantic annotation, security encryption, and semantic operations. The data generation module facilitates seamless data and event sourcing, while the Semantic Annotation Component assigns structured vocabulary for uniformity. SIM adopts blockchain technology to provide enhanced data security, and its layered approach ensures robust interoperability and intuitive user-centric operations for IoT systems. The security encryption module offers data protection, and the semantic operations module underpins data processing and integration. A distinctive feature of this middleware is its proficiency in service integration, leveraging semantic descriptions augmented by user feedback. Additionally, SIM integrates artificial intelligence (AI) feedback mechanisms to continuously refine and optimise the middleware’s operational efficiency. Full article
19 pages, 858 KiB  
Article
A Comprehensive Study and Analysis of the Third Generation Partnership Project’s 5G New Radio for Vehicle-to-Everything Communication
by G. G. Md. Nawaz Ali, Mohammad Nazmus Sadat, Md Suruz Miah, Sameer Ahmed Sharief and Yun Wang
Future Internet 2024, 16(1), 21; https://doi.org/10.3390/fi16010021 - 6 Jan 2024
Cited by 1 | Viewed by 2170
Abstract
Recently, the Third Generation Partnership Project (3GPP) introduced new radio (NR) technology for vehicle-to-everything (V2X) communication to enable delay-sensitive and bandwidth-hungry applications in vehicular communication. The NR system is strategically crafted to complement the existing long-term evolution (LTE) cellular-vehicle-to-everything (C-V2X) infrastructure, particularly to support advanced services such as the operation of automated vehicles. It is widely anticipated that the fifth-generation (5G) NR system will surpass LTE C-V2X by achieving superior performance in scenarios characterized by high throughput, low latency, and enhanced reliability, especially in congested traffic conditions and across a diverse range of vehicular applications. This article provides a comprehensive literature review on vehicular communications from dedicated short-range communication (DSRC) to NR V2X. It then delves into a detailed examination of the challenges and opportunities inherent in NR V2X technology. Finally, we elucidate the process of creating and analyzing an open-source 5G NR V2X module in the network simulator 3 (ns-3) and demonstrate NR V2X performance in terms of different key performance indicators implemented through diverse operational scenarios. Full article
22 pages, 2705 KiB  
Article
Joint Beam-Forming Optimization for Active-RIS-Assisted Internet-of-Things Networks with SWIPT
by Lidong Liu, Shidang Li, Mingsheng Wei, Jinsong Xu and Bencheng Yu
Future Internet 2024, 16(1), 20; https://doi.org/10.3390/fi16010020 - 6 Jan 2024
Viewed by 1403
Abstract
Network energy resources are limited in communication systems, which may cause energy shortages in mobile devices at the user end. Active Reconfigurable Intelligent Surfaces (A-RIS) not only have phase modulation properties but also enhance signal strength; thus, they are expected to solve the energy shortage problem experienced at the user end in 6G communications. In this paper, a resource allocation algorithm that maximizes the sum of harvested energy is proposed for an active-RIS-assisted Simultaneous Wireless Information and Power Transfer (SWIPT) system, to address the low energy-harvesting performance experienced by users due to multiplicative fading. First, in the active-RIS-assisted SWIPT system, which uses a power-splitting architecture to achieve joint information and energy transmission, the joint resource allocation problem is constructed with the objective of maximizing the sum of the collected energy of all users, under constraints on the signal-to-noise ratio, the active RIS and base station transmit powers, and the power-splitting factors. Second, the considered non-convex problem is turned into a standard convex problem by using alternating optimization, semi-definite relaxation, successive convex approximation, and penalty functions, and an alternating iterative algorithm for harvesting energy is proposed. The proposed algorithm splits the problem into two sub-problems, optimizes each iteratively, and then alternates between them to obtain the optimal solution. Simulation results show that, at a maximum permissible base station transmit power of 45 dBm, the proposed algorithm improves performance by 45.2% and 103.7% over the passive RIS algorithm and the traditional without-RIS algorithm, respectively. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)
17 pages, 3053 KiB  
Article
Proximal Policy Optimization for Efficient D2D-Assisted Computation Offloading and Resource Allocation in Multi-Access Edge Computing
by Chen Zhang, Celimuge Wu, Min Lin, Yangfei Lin and William Liu
Future Internet 2024, 16(1), 19; https://doi.org/10.3390/fi16010019 - 2 Jan 2024
Viewed by 2116
Abstract
In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. However, the inherent complexity and variability of MEC networks pose significant challenges in computational offloading decisions. To tackle this problem, we propose a proximal policy optimization (PPO)-based Device-to-Device (D2D)-assisted computation offloading and resource allocation scheme. We construct a realistic MEC network environment and develop a Markov decision process (MDP) model that minimizes time loss and energy consumption. The integration of a D2D communication-based offloading framework allows for collaborative task offloading between end devices and MEC servers, enhancing both resource utilization and computational efficiency. The MDP model is solved using the PPO algorithm in deep reinforcement learning to derive an optimal policy for offloading and resource allocation. Extensive comparative analysis with three benchmarked approaches has confirmed our scheme’s superior performance in latency, energy consumption, and algorithmic convergence, demonstrating its potential to improve MEC network operations in the context of emerging 5G and beyond technologies. Full article
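PPO's core ingredient is the clipped surrogate objective. The sketch below applies that objective to a deliberately tiny offloading decision, local versus MEC execution with hypothetical cost distributions, using a two-logit softmax policy in NumPy; the paper's scheme additionally models states, D2D cooperation, and resource allocation.

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Two actions: 0 = execute locally, 1 = offload to the MEC server.
# Hypothetical costs (latency + energy); offloading is cheaper on average.
def sample_cost(action):
    return rng.normal(1.0, 0.1) if action == 0 else rng.normal(0.6, 0.3)

theta = np.zeros(2)        # policy logits
clip_eps, lr = 0.2, 0.05   # PPO clip range and learning rate

for _ in range(200):
    pi_old = softmax(theta)
    acts = rng.choice(2, size=64, p=pi_old)
    adv = np.array([-sample_cost(a) for a in acts])     # reward = -cost
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)       # normalized advantages
    for _ in range(4):                                  # epochs per batch
        pi = softmax(theta)
        grad = np.zeros(2)
        for a, ad in zip(acts, adv):
            ratio = pi[a] / pi_old[a]
            clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps)
            if ratio * ad <= clipped * ad:   # unclipped term is active
                # d(ratio)/d(theta) = ratio * (onehot(a) - pi)
                grad += ad * ratio * (np.eye(2)[a] - pi)
        theta += lr * grad / len(acts)

print(softmax(theta))   # probability mass shifts toward offloading
```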
23 pages, 1647 KiB  
Article
Controllable Queuing System with Elastic Traffic and Signals for Resource Capacity Planning in 5G Network Slicing
by Irina Kochetkova, Kseniia Leonteva, Ibram Ghebrial, Anastasiya Vlaskina, Sofia Burtseva, Anna Kushchazli and Konstantin Samouylov
Future Internet 2024, 16(1), 18; https://doi.org/10.3390/fi16010018 - 31 Dec 2023
Viewed by 1673
Abstract
Fifth-generation (5G) networks provide network slicing capabilities, enabling the deployment of multiple logically isolated network slices on a single infrastructure platform to meet the specific requirements of users. This paper focuses on modeling and analyzing resource capacity planning and reallocation for network slicing, specifically between two providers transmitting elastic traffic, such as during web browsing. A controller determines the need for resource reallocation and plans new resource capacity accordingly. A Markov decision process is employed in a controllable queuing system to find the optimal resource capacity for each provider. The reward function incorporates three network slicing principles: maximum matching for equal resource partitioning, maximum share of signals resulting in resource reallocation, and maximum resource utilization. To efficiently compute the optimal resource capacity planning policy, we developed an iterative algorithm that begins with maximum resource utilization as the starting point. Through numerical demonstrations, we show the optimal policy and the resource reallocation metrics for two services: web browsing and bulk data transfer. The results highlight fast convergence within three iterations and the effectiveness of the balanced three-principle approach in resource capacity planning for 5G network slicing. Full article
(This article belongs to the Special Issue Performance and QoS Issues of 5G Wireless Networks and Beyond)
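The controller's computation can be pictured with a toy Markov decision process: states are the capacity units held by one provider, actions reallocate a unit or keep the split, and the reward trades served demand against a reallocation-signalling cost. The sketch below solves it by value iteration; the demands, costs, and reward shape are illustrative stand-ins for the paper's three-principle reward.

```python
import numpy as np

C = 10                     # total capacity units shared by two slices
demand1, demand2 = 3, 6    # hypothetical mean demands of the two providers
gamma = 0.9                # discount factor
actions = (-1, 0, 1)       # release, keep, or acquire one capacity unit

def step(s, a):
    return min(max(s + a, 1), C - 1)   # each provider keeps at least one unit

def reward(s, a):
    """Served demand minus a signalling cost per reallocation."""
    n = step(s, a)
    return min(n, demand1) + min(C - n, demand2) - 0.5 * abs(a)

V = np.zeros(C)
for _ in range(200):                   # value iteration
    for s in range(1, C):
        V[s] = max(reward(s, a) + gamma * V[step(s, a)] for a in actions)

policy = {s: max(actions, key=lambda a: reward(s, a) + gamma * V[step(s, a)])
          for s in range(1, C)}
print(policy)   # capacity drifts toward the demand ratio (3 vs. 6 units)
```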
24 pages, 2445 KiB  
Article
Internet-of-Things Traffic Analysis and Device Identification Based on Two-Stage Clustering in Smart Home Environments
by Mizuki Asano, Takumi Miyoshi and Taku Yamazaki
Future Internet 2024, 16(1), 17; https://doi.org/10.3390/fi16010017 - 31 Dec 2023
Viewed by 1996
Abstract
Smart home environments, which consist of various Internet of Things (IoT) devices that support and improve our daily lives, are expected to be widely adopted in the near future. Owing to a lack of awareness regarding the risks associated with IoT devices and the challenges of replacing or updating their firmware, adequate security measures have not been implemented. Instead, IoT device identification methods based on traffic analysis have been proposed. Since conventional methods process and analyze traffic data simultaneously, bias in the occurrence rate of traffic patterns has a negative impact on the analysis results. Therefore, this paper proposes an IoT traffic analysis and device identification method based on two-stage clustering in smart home environments. In the first stage, traffic patterns are extracted by clustering IoT traffic at a local gateway located in each smart home and are subsequently sent to a cloud server. In the second stage, the cloud server extracts common traffic units that represent IoT traffic by clustering the patterns obtained in the first stage. Two-stage clustering reduces the impact of data bias, because each cluster extracted in the first stage is summarized as one value and used as a single data point in the second stage, regardless of the occurrence rate of traffic patterns. Through the proposed two-stage clustering, IoT traffic is transformed into time series vector data consisting of common unit patterns, and devices can be identified from these time series representations. Experiments using public IoT traffic datasets indicated that the proposed method could identify 21 IoT devices with an accuracy of 86.9%. We therefore conclude that traffic analysis using two-stage clustering is effective for improving clustering quality, device identification, and implementation in distributed environments. Full article
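The two-stage structure can be sketched directly: each gateway clusters its own flows and forwards only the centroids, and the cloud clusters the pooled centroids into common traffic units, so each local pattern counts once regardless of how often it occurred. A minimal version with synthetic flows and illustrative cluster counts:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Stage 1 (at each home gateway): cluster a home's flow records into
# local traffic patterns and keep only the centroids.
def local_patterns(flows, k=4):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flows)
    return km.cluster_centers_

homes = [rng.normal(loc=i % 3, scale=0.3, size=(200, 5)) for i in range(10)]
centroids = np.vstack([local_patterns(f) for f in homes])

# Stage 2 (at the cloud): cluster the pooled centroids into common traffic
# units. Each local pattern contributes one point, so a pattern's raw
# frequency at any single home no longer biases the result.
units = KMeans(n_clusters=3, n_init=10, random_state=0).fit(centroids)
print(units.cluster_centers_)
```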
22 pages, 828 KiB  
Review
An Analysis of Methods and Metrics for Task Scheduling in Fog Computing
by Javid Misirli and Emiliano Casalicchio
Future Internet 2024, 16(1), 16; https://doi.org/10.3390/fi16010016 - 30 Dec 2023
Cited by 1 | Viewed by 1757
Abstract
The Internet of Things (IoT) uptake brought a paradigm shift in application deployment. Indeed, IoT applications are not centralized in cloud data centers, but the computation and storage are moved close to the consumers, creating a computing continuum between the edge of the network and the cloud. This paradigm shift is called fog computing, a concept introduced by Cisco in 2012. Scheduling applications in this decentralized, heterogeneous, and resource-constrained environment is challenging. The task scheduling problem in fog computing has been widely explored and addressed using many approaches, from traditional operational research to heuristics and machine learning. This paper aims to analyze the literature on task scheduling in fog computing published in the last five years to classify the criteria used for decision-making and the technique used to solve the task scheduling problem. We propose a taxonomy of task scheduling algorithms, and we identify the research gaps and challenges. Full article
27 pages, 2667 KiB  
Article
Resource Indexing and Querying in Large Connected Environments
by Fouad Achkouty, Richard Chbeir, Laurent Gallon, Elio Mansour and Antonio Corral
Future Internet 2024, 16(1), 15; https://doi.org/10.3390/fi16010015 - 30 Dec 2023
Viewed by 1285
Abstract
The proliferation of sensor and actuator devices in Internet of things (IoT) networks has garnered significant attention in recent years. However, the increasing number of IoT devices, and the corresponding resources, has introduced various challenges, particularly in indexing and querying. In essence, resource management has become more complex due to the non-uniform distribution of related devices and their limited capacity. Additionally, the diverse demands of users have further complicated resource indexing. This paper proposes a distributed resource indexing and querying algorithm for large connected environments, specifically designed to address the challenges posed by IoT networks. The algorithm considers both the limited device capacity and the non-uniform distribution of devices, acknowledging that devices cannot store information about the entire environment. Furthermore, it places special emphasis on uncovered zones, to reduce the response time of queries related to these areas. Moreover, the algorithm introduces different types of queries, to cater to various user needs, including fast queries and urgent queries suitable for different scenarios. The effectiveness of the proposed approach was evaluated through extensive experiments covering index creation, coverage, and query execution, yielding promising and insightful results. Full article
26 pages, 1226 KiB  
Article
1-D Convolutional Neural Network-Based Models for Cooperative Spectrum Sensing
by Omar Serghini, Hayat Semlali, Asmaa Maali, Abdelilah Ghammaz and Salvatore Serrano
Future Internet 2024, 16(1), 14; https://doi.org/10.3390/fi16010014 - 29 Dec 2023
Viewed by 1556
Abstract
Spectrum sensing is an essential function of cognitive radio technology that can enable the reuse of available radio resources by so-called secondary users without creating harmful interference with licensed users. The application of machine learning techniques to spectrum sensing has attracted considerable interest in the literature. In this contribution, we study cooperative spectrum sensing in a cognitive radio network where multiple secondary users cooperate to detect a primary user. We introduce multiple cooperative spectrum sensing schemes based on a deep neural network, which incorporate a one-dimensional convolutional neural network and a long short-term memory network. The primary objective of these schemes is to effectively learn the activity patterns of the primary user. The scenario of an imperfect transmission channel is considered for service messages to demonstrate the robustness of the proposed model. The performance of the proposed methods is evaluated with the receiver operating characteristic curve, the probability of detection for various SNR levels and the computational time. The simulation results confirm the effectiveness of the bidirectional long short-term memory-based method, surpassing the performance of the other proposed schemes and the current state-of-the-art methods in terms of detection probability, while ensuring a reasonable online detection time. Full article
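A model in the spirit of the best-performing scheme, a 1-D convolutional front end feeding a bidirectional LSTM, can be sketched in a few lines of PyTorch. The layer sizes, the I/Q input format, and the window length are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SpectrumSensingNet(nn.Module):
    """1-D CNN front end plus a bidirectional LSTM that outputs the
    probability that the primary user is active (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2),  # I/Q channels in
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):              # x: (batch, 2, n_samples)
        z = self.features(x)           # (batch, 32, n_samples / 4)
        z = z.transpose(1, 2)          # LSTM wants (batch, time, features)
        z, _ = self.lstm(z)
        return torch.sigmoid(self.head(z[:, -1]))

net = SpectrumSensingNet()
iq = torch.randn(8, 2, 128)            # a batch of received I/Q windows
print(net(iq).shape)                   # -> torch.Size([8, 1])
```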
20 pages, 892 KiB  
Article
Vnode: Low-Overhead Transparent Tracing of Node.js-Based Microservice Architectures
by Herve M. Kabamba, Matthew Khouzam and Michel R. Dagenais
Future Internet 2024, 16(1), 13; https://doi.org/10.3390/fi16010013 - 29 Dec 2023
Cited by 1 | Viewed by 1893
Abstract
Tracing serves as a key method for evaluating the performance of microservices-based architectures, which are renowned for their scalability, resource efficiency, and high availability. Despite their advantages, these architectures often pose unique debugging challenges that necessitate trade-offs, including the burden of instrumentation overhead. With Node.js emerging as a leading development environment recognized for its rapidly growing ecosystem, there is a pressing need for innovative performance debugging approaches that reduce the telemetry data collection efforts and the overhead incurred by the environment’s instrumentation. In response, we introduce a new approach designed for transparent tracing and performance debugging of microservices in cloud settings. This approach is centered around our newly developed Internal Transparent Tracing and Context Reconstruction (ITTCR) technique. ITTCR is adept at correlating internal metrics from various distributed trace files to reconstruct the intricate execution contexts of microservices operating in a Node.js environment. Our method achieves transparency by directly instrumenting the Node.js virtual machine, enabling the collection and analysis of trace events in a transparent manner. This process facilitates the creation of visualization tools, enhancing the understanding and analysis of microservice performance in cloud environments. Compared to other methods, our approach incurs an overhead of approximately 5% on the system for the trace collection infrastructure while exhibiting minimal utilization of system resources during analysis execution. Experiments demonstrate that our technique scales well with very large trace files containing huge numbers of events and performs analyses in very acceptable timeframes. Full article
21 pages, 1096 KiB  
Article
Evaluating Embeddings from Pre-Trained Language Models and Knowledge Graphs for Educational Content Recommendation
by Xiu Li, Aron Henriksson, Martin Duneld, Jalal Nouri and Yongchao Wu
Future Internet 2024, 16(1), 12; https://doi.org/10.3390/fi16010012 - 29 Dec 2023
Cited by 1 | Viewed by 2451
Abstract
Educational content recommendation is a cornerstone of AI-enhanced learning. In particular, to facilitate navigating the diverse learning resources available on learning platforms, methods are needed for automatically linking learning materials, e.g., in order to recommend textbook content based on exercises. Such methods are typically based on semantic textual similarity (STS) and the use of embeddings for text representation. However, it remains unclear what types of embeddings should be used for this task. In this study, we carry out an extensive empirical evaluation of embeddings derived from three different types of models: (i) static embeddings trained using a concept-based knowledge graph, (ii) contextual embeddings from a pre-trained language model, and (iii) contextual embeddings from a large language model (LLM). In addition to evaluating the models individually, various ensembles are explored based on different strategies for combining two models in an early vs. late fusion fashion. The evaluation is carried out using digital textbooks in Swedish for three different subjects and two types of exercises. The results show that using contextual embeddings from an LLM leads to superior performance compared to the other models, and that there is no significant improvement when combining these with static embeddings trained using a knowledge graph. When using embeddings derived from a smaller language model, however, it helps to combine them with knowledge graph embeddings. The performance of the best-performing model is high for both types of exercises, resulting in a mean Recall@3 of 0.96 and 0.95 and a mean MRR of 0.87 and 0.86 for quizzes and study questions, respectively, demonstrating the feasibility of using STS based on text embeddings for educational content recommendation. The ability to link digital learning materials in an unsupervised manner—relying only on readily available pre-trained models—facilitates the development of AI-enhanced learning. Full article
(This article belongs to the Special Issue Deep Learning in Recommender Systems)
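The recommendation step itself reduces to cosine similarity between exercise and textbook-section embeddings, evaluated with Recall@k and MRR as in the abstract. The sketch below uses random vectors in place of real pre-trained embeddings; the dimensionality and data are illustrative.

```python
import numpy as np

def rank_sections(exercise_vec, section_vecs):
    """Rank textbook sections by cosine similarity to an exercise."""
    s = section_vecs / np.linalg.norm(section_vecs, axis=1, keepdims=True)
    e = exercise_vec / np.linalg.norm(exercise_vec)
    return np.argsort(-(s @ e))

def recall_at_k_and_mrr(rankings, gold, k=3):
    """The two metrics reported in the abstract."""
    hits = sum(g in r[:k] for r, g in zip(rankings, gold))
    rr = sum(1.0 / (list(r).index(g) + 1) for r, g in zip(rankings, gold))
    return hits / len(gold), rr / len(gold)

# Random vectors stand in for real pre-trained embeddings.
rng = np.random.default_rng(6)
sections = rng.normal(size=(50, 384))                     # 50 textbook sections
exercise = sections[7] + rng.normal(scale=0.1, size=384)  # resembles section 7

ranking = rank_sections(exercise, sections)
print(ranking[:3], recall_at_k_and_mrr([ranking], [7]))   # section 7 ranks first
```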
18 pages, 2202 KiB  
Article
Smart Grid Security: A PUF-Based Authentication and Key Agreement Protocol
by Nasour Bagheri, Ygal Bendavid, Masoumeh Safkhani and Samad Rostampour
Future Internet 2024, 16(1), 9; https://doi.org/10.3390/fi16010009 - 28 Dec 2023
Viewed by 1487
Abstract
A smart grid is an electricity network that uses advanced technologies to facilitate the exchange of information and electricity between utility companies and customers. Although most of the technologies involved in such grids have reached maturity, smart meters—as connected devices—introduce new security challenges. To overcome this significant obstacle to grid modernization, safeguarding privacy has emerged as a paramount concern. In this paper, we begin by evaluating the security levels of recently proposed authentication methods for smart meters. Subsequently, we introduce an enhanced protocol named PPSG, designed for smart grids, which incorporates physical unclonable functions (PUF) and an elliptic curve cryptography (ECC) module to address the vulnerabilities identified in previous approaches. Our security analysis, utilizing a real-or-random (RoR) model, demonstrates that PPSG effectively mitigates the weaknesses found in prior methods. To assess the practicality of PPSG, we conduct simulations using an Arduino UNO board, measuring computation, communication, and energy costs. Our results, including a processing time of 153 ms, a communication cost of 1376 bits, and an energy consumption of 13.468 mJ, align with the requirements of resource-constrained devices within smart grids. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
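The protocol's building block is challenge-response authentication against unclonable hardware. The sketch below simulates that flow in software, with a keyed hash standing in for the physical PUF; it illustrates enrollment and verification only, not PPSG itself or its ECC component.

```python
import hashlib
import os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    """Software stand-in for a PUF: a keyed hash plays the role of the
    chip's unclonable physical function (a real PUF is hardware)."""
    return hashlib.sha256(device_secret + challenge).digest()

# Enrollment: the utility records challenge-response pairs per meter.
device_secret = os.urandom(32)   # stands in for silicon manufacturing variation
crp_db = {}
for _ in range(4):
    c = os.urandom(16)
    crp_db[c] = puf_response(device_secret, c)

# Authentication: the server issues a stored challenge; only the genuine
# meter reproduces the expected response, so cloned firmware fails.
challenge, expected = next(iter(crp_db.items()))
assert puf_response(device_secret, challenge) == expected
print("meter authenticated")
```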
17 pages, 459 KiB  
Article
Latent Autoregressive Student-t Prior Process Models to Assess Impact of Interventions in Time Series
by Patrick Toman, Nalini Ravishanker, Nathan Lally and Sanguthevar Rajasekaran
Future Internet 2024, 16(1), 8; https://doi.org/10.3390/fi16010008 - 28 Dec 2023
Viewed by 1266
Abstract
With the advent of the “Internet of Things” (IoT), insurers are increasingly leveraging remote sensor technology in the development of novel insurance products and risk management programs. For example, Hartford Steam Boiler’s (HSB) IoT freeze loss program uses IoT temperature sensors to monitor indoor temperatures in locations at high risk of water-pipe burst (freeze loss), with the goal of reducing insurance losses via real-time monitoring of the temperature data streams. In the event these monitoring systems detect a potentially risky temperature environment, an alert is sent to the end-insured (business manager, tenant, maintenance staff, etc.), prompting them to take remedial action by raising temperatures. If an alert is sent and freeze loss nevertheless occurs, the firm is not liable for any damages incurred by the event. For the program to be effective, there must be a reliable method of verifying whether customers took appropriate corrective action after receiving an alert. Due to the program’s scale, direct follow-up via text or phone calls is not possible for every alert event, and direct feedback from customers is not necessarily reliable. In this paper, we propose the use of a non-linear, autoregressive time series model, coupled with the time series intervention analysis method known as causal impact, to evaluate directly from the IoT temperature streams whether or not a customer took action. Our method offers several distinct advantages over alternatives, as it is (a) readily scalable with continued program growth, (b) entirely automated, and (c) inherently less biased than human labelers or direct customer responses. We demonstrate the efficacy of our method using a sample of actual freeze alert events from the freeze loss program. Full article
(This article belongs to the Special Issue Wireless Sensor Networks in the IoT)
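The paper's model is a latent autoregressive Student-t process, which the abstract does not specify in detail; the toy sketch below only illustrates the underlying intervention-analysis logic it describes: fit a simple AR(1) to the pre-alert temperature stream, forecast a no-action counterfactual, and flag "action taken" when post-alert readings rise above the forecast. All data and thresholds are synthetic assumptions.

```python
# Hedged sketch of the counterfactual-comparison idea behind causal impact
# analysis on an IoT temperature stream. Synthetic data; not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
pre = 2.0 + np.cumsum(rng.normal(-0.05, 0.2, 48))    # cooling pre-alert temps
post = pre[-1] + np.cumsum(rng.normal(0.4, 0.2, 24)) # warming after the alert

# Least-squares AR(1) fit on the pre-period: x_t = c + phi * x_{t-1} + eps.
X = np.column_stack([np.ones(len(pre) - 1), pre[:-1]])
c, phi = np.linalg.lstsq(X, pre[1:], rcond=None)[0]
sigma = np.std(pre[1:] - X @ np.array([c, phi]))

# Counterfactual forecast: what temperatures would have done with no action.
forecast = []
x = pre[-1]
for _ in range(len(post)):
    x = c + phi * x
    forecast.append(x)
forecast = np.array(forecast)

# Cumulative lift of observed over counterfactual, in residual-sd units.
lift = (post - forecast).sum() / (sigma * np.sqrt(len(post)))
print(f"standardized cumulative effect: {lift:.1f}")
print("action taken" if lift > 3.0 else "no clear action")
```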
18 pages, 1239 KiB  
Article
Utilizing User Bandwidth Resources in Information-Centric Networking through Blockchain-Based Incentive Mechanism
by Qiang Liu, Rui Han and Yang Li
Future Internet 2024, 16(1), 11; https://doi.org/10.3390/fi16010011 - 28 Dec 2023
Viewed by 1479
Abstract
Idle bandwidth resources are inefficiently distributed among different users. Currently, the utilization of user bandwidth resources mostly relies on traditional IP networks, implementing the relevant techniques at the application layer, which creates scalability issues and additional system overheads. Information-Centric Networking (ICN), based on the idea of separating identifiers and locators, offers the potential to aggregate idle bandwidth resources from a network-layer perspective. This paper proposes a method for utilizing user bandwidth resources in ICN. Specifically, we treat the use of user bandwidth resources as a service and assign it service IDs (identifiers); when network congestion occurs (i.e., when network nodes are overloaded), traffic can be routed to the user side for forwarding through the ID/NA (Network Address) cooperative routing mechanism of ICN, thereby improving the scalability of ICN transmission and the utilization of underlying network resources. To strengthen users' willingness to contribute idle bandwidth resources, we establish a secure and trustworthy bandwidth trading market using blockchain technology. We also design an incentive mechanism based on a Proof-of-Network-Contribution (PoNC) consensus algorithm, under which users can “mine” by forwarding packets. The experimental results show that utilizing idle bandwidth can significantly improve the scalability of ICN transmission under the experimental conditions, bringing a maximum throughput improvement of 19.4% and reducing the packet loss rate. Compared with existing methods, using ICN to aggregate idle bandwidth for network transmission achieves more stable and lower latency and brings a maximum utilization improvement of 13.7%. Full article
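The abstract does not define the PoNC algorithm itself, so the following is only a sketch of the incentive pattern it suggests: nodes earn credit for acknowledged forwarded traffic, and influence over block production is proportional to contribution. The function names, the acknowledgment check, and the weighted sampling rule are all illustrative assumptions.

```python
# Hedged sketch of a Proof-of-Network-Contribution-style incentive: nodes earn
# credit for verified forwarded bytes, and the next block proposer is drawn
# with probability proportional to contribution. Not the paper's algorithm.
import random
from collections import defaultdict

contribution = defaultdict(float)   # node id -> accumulated forwarding credit

def record_forwarding(node: str, bytes_forwarded: int, delivery_ack: bool) -> None:
    """Credit a node only for deliveries acknowledged by the receiver."""
    if delivery_ack:
        contribution[node] += bytes_forwarded

def select_proposer() -> str:
    """Sample the next block proposer, weighted by contribution."""
    nodes = list(contribution)
    weights = [contribution[n] for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

record_forwarding("user-a", 50_000, True)
record_forwarding("user-b", 120_000, True)
record_forwarding("user-c", 80_000, False)   # unacknowledged: no credit
print(select_proposer())                     # "user-b" roughly 70% of the time
```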
15 pages, 1605 KiB  
Article
Automotive Cybersecurity Application Based on CARDIAN
by Emanuele Santonicola, Ennio Andrea Adinolfi, Simone Coppola and Francesco Pascale
Future Internet 2024, 16(1), 10; https://doi.org/10.3390/fi16010010 - 28 Dec 2023
Viewed by 1457
Abstract
Nowadays, a vehicle can contain from 20 to 100 electronic control units (ECUs), which are responsible for commanding, controlling and monitoring all the components of the vehicle itself. Each of these units can also exchange information with other units on the network or with external systems. For most vehicles, the controller area network (CAN) is the main communication protocol and system used to build their internal network. Technological development, the growing integration of devices and the numerous advances in the field of connectivity have allowed the vehicle to become connected, and the flow of information exchanged between the various ECUs becomes increasingly important and varied. Furthermore, the vehicle itself is capable of exchanging information with other vehicles, with the surrounding environment and with the Internet. As shown by the CARDIAN project, this type of innovation offers the user an increasingly safe and varied driving experience, but at the same time it introduces a series of vulnerabilities and dangers due to the connection itself. Securing the vehicle therefore becomes a critical task. In recent years, it has been demonstrated in multiple ways how easy it is to compromise the safety of a vehicle and its passengers by injecting malicious messages into the CAN network inside the vehicle. The purpose of this article is the construction of a system that, integrated within the vehicle network, is able to effectively recognize any type of intrusion and tampering. Full article
(This article belongs to the Special Issue Anomaly Detection in Modern Networks)
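The abstract does not detail CARDIAN's detection method, so the sketch below shows one common lightweight CAN intrusion check rather than the paper's design: most CAN IDs are broadcast periodically, so injected frames appear as abnormally short inter-arrival times for a given ID. The class name, thresholds, and smoothing factor are illustrative assumptions.

```python
# Hedged sketch of inter-arrival-time anomaly detection on a CAN bus.
# Illustrates the general idea only; not the CARDIAN algorithm.
from collections import defaultdict

class InterArrivalDetector:
    def __init__(self, min_ratio: float = 0.5):
        self.last_seen = {}                     # CAN id -> last timestamp (s)
        self.period = defaultdict(lambda: None) # CAN id -> learned period (s)
        self.min_ratio = min_ratio              # alert below this fraction of period

    def observe(self, can_id: int, timestamp: float) -> bool:
        """Return True if the frame looks injected (too soon after the last one)."""
        alert = False
        if can_id in self.last_seen:
            gap = timestamp - self.last_seen[can_id]
            p = self.period[can_id]
            if p is None:
                self.period[can_id] = gap       # first gap seeds the period
            elif gap < self.min_ratio * p:
                alert = True                    # frame arrived far too early
            else:
                # Exponential moving average keeps the period estimate fresh.
                self.period[can_id] = 0.9 * p + 0.1 * gap
        self.last_seen[can_id] = timestamp
        return alert

det = InterArrivalDetector()
for t in (0.00, 0.10, 0.20, 0.21, 0.30):   # the 0.21 s frame breaks the 100 ms cadence
    print(t, det.observe(0x1A0, t))        # only 0.21 triggers an alert
```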
14 pages, 5946 KiB  
Article
Design and Implementation of a Digital Twin System for Log Rotary Cutting Optimization
by Yadi Zhao, Lei Yan, Jian Wu and Ximing Song
Future Internet 2024, 16(1), 7; https://doi.org/10.3390/fi16010007 - 25 Dec 2023
Viewed by 1328
Abstract
To address the low level of intelligence and the low utilization of logs in current rotary cutting equipment, this paper proposes a digital twin-based system for optimizing the rotary cutting of logs, built on the five-dimensional digital twin model. The system features a log perception platform that captures three-dimensional point cloud data outlining the logs’ contours. Using the Delaunay3D algorithm, the system performs a three-dimensional reconstruction of the log point cloud, constructing a precise digital twin. Feature information is extracted from the point cloud using the least squares method. Processing parameters, determined through the kinematic model, are verified in rotary cutting simulations via Boolean operations. The system’s efficacy has been substantiated through experimental validation, demonstrating its capability to output specific processing schemes for irregular logs and to verify them through simulation. This approach notably improves log recovery, decreasing the volume error from 12.8% to 2.7% and the recovery rate error from 23.5% to 5.7%. The results validate the efficacy of the proposed digital twin system in optimizing the rotary cutting process, demonstrating its capability not only to enhance the utilization rate of log resources but also to improve the economic efficiency of the factory, thereby facilitating industrial development. Full article
(This article belongs to the Special Issue Digital Twins in Intelligent Manufacturing)
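The abstract says feature information is extracted from the point cloud by least squares but does not specify the procedure; one standard instance is fitting a circle to a cross-section slice to estimate the log's center and radius. The sketch below uses the algebraic (Kåsa) least-squares circle fit on synthetic data; the function name and the slice-based setup are illustrative assumptions.

```python
# Hedged sketch of least-squares feature extraction from a log point cloud:
# a Kåsa circle fit on one cross-section slice. Synthetic data; one plausible
# reading of "the least squares method", not necessarily the paper's.
import numpy as np

def fit_circle(points: np.ndarray):
    """Kåsa fit: solve x^2 + y^2 = 2a*x + 2b*y + c in the least-squares sense,
    then recover radius R = sqrt(c + a^2 + b^2)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return (a, b), np.sqrt(c + a**2 + b**2)

# Synthetic cross-section: noisy ring around (0.30, -0.10), radius 0.25 m.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 400)
pts = np.column_stack([0.30 + 0.25 * np.cos(theta),
                       -0.10 + 0.25 * np.sin(theta)])
pts += rng.normal(0, 0.005, pts.shape)

center, radius = fit_circle(pts)
print(f"center = ({center[0]:.3f}, {center[1]:.3f}), radius = {radius:.3f} m")
```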
33 pages, 12844 KiB  
Article
TDLearning: Trusted Distributed Collaborative Learning Based on Blockchain Smart Contracts
by Jing Liu, Xuesong Hai and Keqin Li
Future Internet 2024, 16(1), 6; https://doi.org/10.3390/fi16010006 - 25 Dec 2023
Viewed by 1441
Abstract
Massive amounts of data drive the performance of deep learning models, but in practice, data resources are often highly dispersed and bound by data privacy and security concerns, making it difficult for multiple data sources to share their local data directly. Because data resources are difficult to aggregate effectively, model training lacks adequate support, so how data sources can collaborate to aggregate the value of their data is an important research question. However, existing distributed-collaborative-learning architectures still face serious challenges when collaborating between nodes that lack mutual trust, with security and trust issues seriously affecting the confidence and willingness of data sources to participate. Blockchain technology provides trusted distributed storage and computing, and combining it with collaboration between data sources to build trusted distributed-collaborative-learning architectures is a highly valuable applied research direction. We propose a trusted distributed-collaborative-learning mechanism based on blockchain smart contracts. Firstly, the mechanism uses blockchain smart contracts to define and encapsulate the collaborative behaviours, relationships and norms between distributed collaborative nodes. Secondly, we propose a model-fusion method based on feature fusion, which replaces the direct sharing of local data resources with distributed collaborative model training and organises distributed data resources to improve model performance. Finally, to verify the trustworthiness and usability of the proposed mechanism, on the one hand, we formally model and verify the smart contract using Coloured Petri Nets and prove that the mechanism satisfies the expected trustworthiness properties; on the other hand, we evaluate the feature-fusion-based model-fusion method on different datasets and collaboration scenarios and implement a typical collaborative-learning case for a comprehensive analysis and validation of the mechanism. The experimental results show that the proposed mechanism can provide a trusted and fair collaboration infrastructure for distributed-collaboration nodes that lack mutual trust and can organise decentralised data resources for collaborative model training to develop effective global models. Full article
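The abstract names a feature-fusion model-fusion method but not its architecture; the toy sketch below only illustrates the pattern it describes: each node shares a trained feature extractor rather than raw data, and a coordinator concatenates node features and fits a global head. The random-projection extractors, the least-squares head, and all names are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of feature-fusion-style model fusion: nodes publish feature
# extractors, never raw samples; a coordinator trains a head on fused features.
# Toy random-projection extractors; not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # synthetic shared task

# Each node publishes a feature extractor (here: a fixed random projection).
node_extractors = [rng.normal(size=(d, 4)) for _ in range(3)]

def fused_features(X: np.ndarray) -> np.ndarray:
    """Concatenate every node's feature map, with a nonlinearity per node."""
    return np.hstack([np.tanh(X @ W) for W in node_extractors])

# Global head: least-squares linear classifier on the fused representation.
F = fused_features(X)
w = np.linalg.lstsq(F, y, rcond=None)[0]
preds = (F @ w > 0.5).astype(float)
print(f"fused-model training accuracy: {(preds == y).mean():.2f}")
```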