
Future Internet, Volume 16, Issue 6 (June 2024) – 40 articles

Cover Story: The paper “Visual Data and Pattern Analysis for Smart Education: A Robust DRL-Based Early Warning System for Student Performance Prediction” presents a comprehensive framework to enhance student performance monitoring in e-learning. By leveraging Deep Reinforcement Learning (DRL), the study develops an early warning system that uses visual data and pattern analysis to predict student engagement and performance. The approach includes federated learning-based authentication, multi-objective clustering, and personalised recommendation generation, aiming to create a more responsive and adaptive educational environment. This system significantly improves prediction accuracy and personalises educational support, ensuring timely interventions for at-risk students.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF, click on the "PDF Full-text" link and open it with the free Adobe Reader.
12 pages, 1053 KiB  
Article
Adapting Self-Regulated Learning in an Age of Generative Artificial Intelligence Chatbots
by Joel Weijia Lai
Future Internet 2024, 16(6), 218; https://doi.org/10.3390/fi16060218 - 20 Jun 2024
Abstract
The increasing use of generative artificial intelligence (GenAI) has led to a rise in conversations about how teachers and students should adopt these tools to enhance the learning process. Self-regulated learning (SRL) research is important for addressing this question. A popular form of GenAI is the large language model chatbot, which allows users to seek answers to their queries. This article seeks to adapt current SRL models to understand student learning with these chatbots. This is achieved by classifying the prompts supplied by a learner to an educational chatbot into learning actions and processes using the process–action library. Subsequently, through process mining, we can analyze these data to provide valuable insights for learners, educators, instructional designers, and researchers into the possible applications of chatbots for SRL.
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
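The prompt-classification step described in the abstract can be sketched as a lookup against a small keyword library. The categories and keywords below are illustrative stand-ins, not the paper's actual process–action library:

```python
# Hypothetical keyword library mapping chatbot prompts to SRL processes.
# The real process-action library is far richer; this only shows the shape
# of the classification step that feeds process mining.
SRL_LIBRARY = {
    "planning":     ["plan", "outline", "goal", "schedule"],
    "monitoring":   ["check", "verify", "progress"],
    "help_seeking": ["explain", "how do i", "what is", "help"],
    "evaluation":   ["grade", "feedback", "review my"],
}

def classify_prompt(prompt: str) -> str:
    """Return the first SRL process whose keywords appear in the prompt."""
    text = prompt.lower()
    for process, keywords in SRL_LIBRARY.items():
        if any(kw in text for kw in keywords):
            return process
    return "other"

def to_event_log(prompts):
    """Turn an ordered prompt list into a process-mining-style event trace."""
    return [classify_prompt(p) for p in prompts]
```

The resulting event traces are what a process-mining tool would consume to surface SRL patterns across learners.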

18 pages, 2216 KiB  
Article
Optimizing Data Parallelism for FM-Based Short-Read Alignment on the Heterogeneous Non-Uniform Memory Access Architectures
by Shaolong Chen, Yunzi Dai, Liwei Liu and Xinting Yu
Future Internet 2024, 16(6), 217; https://doi.org/10.3390/fi16060217 - 19 Jun 2024
Abstract
Sequence alignment is a critical factor in the variant analysis of genomic research. Since the FM (Ferragina–Manzini) index was developed, it has proven to be a compact model with efficient pattern matching and high-speed query searching, which has attracted much research interest in the field of sequence alignment. Such characteristics make it a convenient tool for handling large-scale sequence alignment projects executed with a small memory footprint. In bioinformatics, the massive success of next-generation sequencing technology has led to exponential growth in genomic data, presenting a computational challenge for sequence alignment. In addition, the use of heterogeneous computing systems, composed of various types of nodes, is prevalent in the field of HPC (high-performance computing), which presents a promising solution for sequence alignment. However, conventional methodologies in short-read alignment are limited in performance on current heterogeneous computing infrastructures. Therefore, we developed a parallel sequence alignment approach to investigate its applicability on NUMA-based (Non-Uniform Memory Access) heterogeneous architectures against traditional alignment algorithms. The proposed work combines the LF (longest-first) distribution policy with the EP (enhanced partitioning) strategy for effective load balancing and efficient parallelization among heterogeneous architectures. The newly proposed LF-EP-based FM aligner shows excellent efficiency and a significant improvement on NUMA-based heterogeneous computing platforms, outperforming several popular FM aligners along many dimensions such as read length, sequence number, sequence distance, alignment speedup, and result quality; these evaluation metrics cover the quality assessment, complexity analysis, and speedup evaluation of our approach. Utilizing the capabilities of NUMA-based heterogeneous computing architectures, our approach provides a convenient solution for large-scale short-read alignment in heterogeneous systems.
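The FM-index pattern matching that the abstract builds on rests on backward search over the Burrows–Wheeler transform. A minimal single-threaded sketch, with naive Occ counting and none of the paper's parallelism or NUMA awareness:

```python
def bwt(text: str) -> str:
    """Burrows-Wheeler transform via full rotation sort (fine for a sketch)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def fm_count(text: str, pattern: str) -> int:
    """Count occurrences of pattern in text using FM-index backward search."""
    last = bwt(text)
    first = sorted(last)
    # C[c]: number of characters in the text strictly smaller than c.
    C = {c: first.index(c) for c in set(last)}

    def occ(c, i):
        # Occurrences of c in last[:i]; a real index precomputes checkpoints.
        return last[:i].count(c)

    lo, hi = 0, len(last)
    for c in reversed(pattern):          # extend the match one char at a time
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo
```

A production aligner replaces the rotation sort with suffix-array construction and the linear-scan `occ` with sampled checkpoint tables, which is what makes the structure memory-compact at genome scale.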

14 pages, 6422 KiB  
Article
Discovery of Cloud Applications from Logs
by Ashot Harutyunyan, Arnak Poghosyan, Tigran Bunarjyan, Andranik Haroyan, Marine Harutyunyan, Lilit Harutyunyan and Nelson Baloian
Future Internet 2024, 16(6), 216; https://doi.org/10.3390/fi16060216 - 18 Jun 2024
Abstract
Continuous, automatic discovery and update of applications, or of their boundaries, running in cloud environments is a highly required function of modern data center operation solutions. Prior attempts to address this problem within various products or projects applied rule-driven approaches or machine learning techniques to specific types of data, such as network traffic or the property/configuration data of infrastructure objects, all of which have drawbacks in effectively identifying the roles of those resources. The current proposal (ADLog) leverages log data of sources, which contain incomparably richer contextual information, and demonstrates a reliable way of discriminating application objects. Specifically, using native constructs of VMware Aria Operations for Logs in terms of event types and their distributions, we group those entities, which can then be enriched with indicative tags automatically and recommended for further management tasks and policies. Our methods differentiate not only diverse kinds of applications, but also their specific deployments, thus providing a hierarchical representation of the applications in time and topology. For several applications under Aria Ops management in our experimental test bed, we discover those in terms of the similarity behavior of their components with high accuracy. The validation of the proposal paves the path for an AI-driven solution in cloud management scenarios.
(This article belongs to the Special Issue Embracing Artificial Intelligence (AI) for Network and Service)

32 pages, 5130 KiB  
Article
Development of a Theoretical Model for Digital Risks Arising from the Implementation of Industry 4.0 (TMR-I4.0)
by Vitor Hugo dos Santos Filho, Luis Maurício Martins de Resende and Joseane Pontes
Future Internet 2024, 16(6), 215; https://doi.org/10.3390/fi16060215 - 17 Jun 2024
Abstract
This study aims to develop a theoretical model for digital risks arising from implementing Industry 4.0 (TMR-I4.0). To achieve this objective, a systematic literature review was first conducted using the Methodi Ordinatio methodology to map the principal dimensions and digital risks associated with Industry 4.0. After completing the nine steps of Methodi Ordinatio, a bibliographic portfolio of 118 articles was obtained. These articles were then subjected to content analysis using QSR Nvivo® version 10 software to categorize digital risks. The analysis identified 9 dimensions and 43 digital risks. The categorization of these risks allowed the construction of maps showing the digital risks and their impacts resulting from the implementation of Industry 4.0. This study advances the literature by proposing a comprehensive categorization of digital risks associated with Industry 4.0, grounded in an exhaustive literature review. Finally, based on the proposed model, a research agenda for future studies is presented, enabling other researchers to further explore the landscape of digital risks in Industry 4.0.

17 pages, 1149 KiB  
Article
Adaptive Framework for Maintenance Scheduling Based on Dynamic Preventive Intervals and Remaining Useful Life Estimation
by Pedro Nunes, Eugénio Rocha and José Santos
Future Internet 2024, 16(6), 214; https://doi.org/10.3390/fi16060214 - 17 Jun 2024
Abstract
Data-based prognostic methods exploit sensor data to forecast the remaining useful life (RUL) of industrial settings to optimize the scheduling of maintenance actions. However, implementing sensors may not be cost-effective or practical for all components. Traditional preventive approaches are not based on sensor data; instead, they schedule maintenance at equally spaced intervals, which is not a cost-effective approach since the distribution of the time between failures changes with the degradation state of other parts or changes in working conditions. This study introduces a novel framework comprising two maintenance scheduling strategies. In the absence of sensor data, we propose a novel dynamic preventive policy that adjusts intervention intervals based on the most recent failure data. When sensor data are available, a method for RUL prediction, designated k-LSTM-GFT, is enhanced to dynamically account for RUL prediction uncertainty. The results demonstrate that dynamic preventive maintenance can yield cost reductions of up to 51.8% compared to conventional approaches. The predictive approach optimizes the exploitation of RUL, achieving costs that are only 3–5% higher than the minimum cost achievable while ensuring the safety of critical systems, since all of the failures are avoided.
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
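A dynamic preventive policy of the kind described, one that adapts the intervention interval to the most recent failure data rather than keeping a fixed schedule, can be illustrated as follows. The empirical-quantile rule here is an assumption for the sketch, not the paper's exact policy:

```python
def dynamic_interval(recent_tbf, risk=0.1, window=10):
    """Pick the next preventive interval from recent times-between-failures.

    Schedules the intervention at the empirical `risk`-quantile of the last
    `window` observed times-between-failures, so that roughly a fraction
    `risk` of historical failures would have occurred before the action.
    Illustrative stand-in for the paper's dynamic preventive policy.
    """
    sample = sorted(recent_tbf[-window:])
    idx = max(0, int(risk * len(sample)) - 1)
    return sample[idx]
```

Because the interval is recomputed after every failure, it shortens automatically when failures cluster (degradation) and relaxes when the equipment runs reliably, which is the behavior a fixed-interval schedule cannot provide.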

17 pages, 4391 KiB  
Article
Learning Modality Consistency and Difference Information with Multitask Learning for Multimodal Sentiment Analysis
by Cheng Fang, Feifei Liang, Tianchi Li and Fangheng Guan
Future Internet 2024, 16(6), 213; https://doi.org/10.3390/fi16060213 - 17 Jun 2024
Abstract
The primary challenge in multimodal sentiment analysis (MSA) lies in developing robust joint representations that can effectively learn mutual information from diverse modalities. Previous research in this field tends to rely on feature concatenation to obtain joint representations. However, these approaches fail to fully exploit interactive patterns to ensure consistency and differentiation across different modalities. To address this limitation, we propose a novel framework for multimodal sentiment analysis, named CDML (Consistency and Difference using a Multitask Learning network). Specifically, CDML uses an attention mechanism to assign the attention weights of each modality efficiently. Adversarial training is used to obtain consistent information between modalities. Finally, the difference among the modalities is acquired by the multitask learning framework. Experiments on two benchmark MSA datasets, CMU-MOSI and CMU-MOSEI, show that our proposed method outperforms seven existing approaches by at least 1.3% for Acc-2 and 1.7% for F1.

13 pages, 1866 KiB  
Article
IMTIBOT: An Intelligent Mitigation Technique for IoT Botnets
by Umang Garg, Santosh Kumar and Aniket Mahanti
Future Internet 2024, 16(6), 212; https://doi.org/10.3390/fi16060212 - 17 Jun 2024
Abstract
The tremendous growth of the Internet of Things (IoT) has gained a lot of attention in the global market. The massive deployment of IoT, however, brings with it various security vulnerabilities, which become easy targets for hackers. IoT botnets are one type of critical malware that degrades the performance of the IoT network and is difficult for end-users to detect. Although there are several traditional IoT botnet mitigation techniques, such as access control, data encryption, and secured device configuration, they are difficult to apply because of the normal-looking traffic behavior, similar packet transmission, and repetitive nature of IoT network traffic. Motivated by botnet obfuscation, this article proposes an intelligent mitigation technique for IoT botnets, named IMTIBOT. Using this technique, we harnessed the stacking of ensemble classifiers to build an intelligent system. This stacking classifier technique was tested using an experimental testbed of IoT nodes and sensors. The system achieved an accuracy of 0.984, with low latency.
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)
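Stacking combines base classifiers through a meta-learning step trained on held-out predictions. A minimal sketch with accuracy-weighted voting as the meta-learner; the rule-based base classifiers and traffic features are hypothetical, not IMTIBOT's actual ensemble:

```python
def stack_fit(bases, X_val, y_val):
    """Meta-learning step of a stacking sketch: weight each base classifier
    by its accuracy on a held-out validation split. `bases` are callables
    mapping a feature dict to a binary label (0 = benign, 1 = botnet)."""
    weights = []
    for clf in bases:
        acc = sum(clf(x) == y for x, y in zip(X_val, y_val)) / len(y_val)
        weights.append(acc)
    return weights

def stack_predict(bases, weights, x):
    """Weighted vote of base predictions; positive if the weighted mass of
    'botnet' votes exceeds half the total weight."""
    score = sum(w for clf, w in zip(bases, weights) if clf(x) == 1)
    return 1 if score > sum(weights) / 2 else 0
```

A fuller stacking implementation would train a learned meta-model (e.g., logistic regression) on the out-of-fold base predictions; accuracy weighting is the simplest instance of that idea.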

14 pages, 7066 KiB  
Article
Improved Particle Filter in Machine Learning-Based BLE Fingerprinting Method to Reduce Indoor Location Estimation Errors
by Jingshi Qian, Jiahe Li, Nobuyoshi Komuro, Won-Suk Kim and Younghwan Yoo
Future Internet 2024, 16(6), 211; https://doi.org/10.3390/fi16060211 - 15 Jun 2024
Abstract
Indoor position fingerprint-based location estimation methods have been widely used by applications on smartphones. In these localization methods, it is very popular to use the RSSI (Received Signal Strength Indication) of signals to represent the position fingerprint. This paper proposes the design of a particle filter for reducing the estimation error of machine learning-based indoor BLE location fingerprinting. Unlike the general particle filter, the proposed system designs improved likelihood functions that take distance into account, considering the coordinates of fingerprint points using the mean and variance of RSSI values, and combines the particle filter with the k-NN (k-Nearest Neighbor) algorithm to reduce the indoor positioning error. The initial position is estimated by a position fingerprinting method based on machine learning. Comparing the k-NN fingerprint method with general particle filter processing against fingerprint estimation based on only k-NN or SVM (Support Vector Machine), experimental results showed that the proposed method has a smaller minimum error and a better average error than the conventional methods.
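The combination of k-NN fingerprinting with likelihood weighting can be sketched as below. The Gaussian-style weights stand in for the paper's improved likelihood functions, and the particle filter update itself is omitted:

```python
import math

def knn_estimate(fingerprints, rssi, k=3):
    """Likelihood-weighted k-NN over an RSSI fingerprint database.

    fingerprints: list of ((x, y), mean_rssi_vector) survey points.
    rssi: observed RSSI vector in the same beacon order.
    Returns the weighted mean position of the k nearest fingerprint points,
    with Gaussian-style weights from signal-space distance (a stand-in for
    the paper's improved likelihood functions).
    """
    def dist(v, w):
        return math.dist(v, w)

    nearest = sorted(fingerprints, key=lambda fp: dist(fp[1], rssi))[:k]
    weights = [math.exp(-dist(fp[1], rssi) ** 2 / 2.0) for fp in nearest]
    total = sum(weights) or 1.0
    x = sum(w * fp[0][0] for fp, w in zip(nearest, weights)) / total
    y = sum(w * fp[0][1] for fp, w in zip(nearest, weights)) / total
    return (x, y)
```

In the full system, an estimate like this would seed the particle filter, whose likelihood function then re-weights particles against subsequent RSSI observations.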

14 pages, 3202 KiB  
Article
Reversible Data Hiding in Encrypted 3D Mesh Models Based on Multi-Group Partition and Closest Pair Prediction
by Xu Wang, Jui-Chuan Liu, Ching-Chun Chang and Chin-Chen Chang
Future Internet 2024, 16(6), 210; https://doi.org/10.3390/fi16060210 - 15 Jun 2024
Abstract
The reversible data hiding scheme in the encrypted domain is a potential solution to the concerns regarding user privacy in cloud applications. The 3D mesh model is an emerging file format and is widely used in engineering modeling, special effects, and video games. However, studies on reversible data hiding in encrypted 3D mesh models are still in the preliminary stage. In this paper, two novel techniques, multi-group partition (MGP) and closest pair prediction (CPP), are proposed to improve performance. The MGP technique adaptively classifies vertices into reference and embeddable vertices, while the CPP technique efficiently predicts embeddable vertices and generates shorter recovery information to vacate more redundancy for additional data embedding. Experimental results indicate that the proposed scheme significantly improves the embedding rate compared to state-of-the-art schemes and can be used in real-time applications.

40 pages, 5898 KiB  
Article
Authentication and Key Agreement Protocol in Hybrid Edge–Fog–Cloud Computing Enhanced by 5G Networks
by Jiayi Zhang, Abdelkader Ouda and Raafat Abu-Rukba
Future Internet 2024, 16(6), 209; https://doi.org/10.3390/fi16060209 - 14 Jun 2024
Abstract
The Internet of Things (IoT) has revolutionized connected devices, with applications in healthcare, data analytics, and smart cities. For time-sensitive applications, 5G wireless networks provide ultra-reliable low-latency communication (URLLC) and fog computing offloads IoT processing. Integrating 5G and fog computing can address cloud computing’s deficiencies, but security challenges remain, especially in Authentication and Key Agreement aspects due to the distributed and dynamic nature of fog computing. This study presents an innovative mutual Authentication and Key Agreement protocol that is specifically tailored to meet the security needs of fog computing in the context of the edge–fog–cloud three-tier architecture, enhanced by the incorporation of the 5G network. This study improves security in the edge–fog–cloud context by introducing a stateless authentication mechanism and conducting a comparative analysis of the proposed protocol with well-known alternatives, such as TLS 1.3, 5G-AKA, and various handover protocols. The suggested approach has a total transmission cost of only 1280 bits in the authentication phase, which is approximately 30% lower than other protocols. In addition, the suggested handover protocol only involves two signaling expenses. The computational cost for handover authentication for the edge user is significantly low, measuring 0.243 ms, which is under 10% of the computing costs of other authentication protocols.
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks)

16 pages, 1641 KiB  
Article
Enabling End-User Development in Smart Homes: A Machine Learning-Powered Digital Twin for Energy Efficient Management
by Luca Cotti, Davide Guizzardi, Barbara Rita Barricelli and Daniela Fogli
Future Internet 2024, 16(6), 208; https://doi.org/10.3390/fi16060208 - 14 Jun 2024
Abstract
End-User Development has been proposed over the years to allow end users to control and manage their Internet of Things-based environments, such as smart homes. With End-User Development, end users are able to create trigger-action rules or routines to tailor the behavior of their smart homes. However, the scientific research proposed to date does not encompass methods that evaluate the suitability of user-created routines in terms of energy consumption. This paper proposes using Machine Learning to build a Digital Twin of a smart home that can predict the energy consumption of smart appliances. The Digital Twin will allow end users to simulate possible scenarios related to the creation of routines. Simulations will be used to assess the effects of activating the appliances involved in the routines under creation, and possibly to modify those routines to reduce energy consumption according to the Digital Twin’s suggestions.

18 pages, 1182 KiB  
Article
Towards a New Business Model for Streaming Platforms Using Blockchain Technology
by Rendrikson Soares and André Araújo
Future Internet 2024, 16(6), 207; https://doi.org/10.3390/fi16060207 - 13 Jun 2024
Abstract
Streaming platforms have revolutionized the digital entertainment industry, but challenges and research opportunities remain to be addressed. One current concern is the lack of transparency in the business model of video streaming platforms, which makes it difficult for content creators to access viewing metrics and receive payments without the intermediary of third parties. Additionally, there is no way to trace payment transactions. This article presents a computational architecture based on blockchain technology to enable transparency in audience management and payments in video streaming platforms. Smart contracts will define the business rules of the streaming services, while middleware will integrate the metadata of the streaming platforms with the proposed computational solution. The proposed solution has been validated through data transactions on different blockchain networks and interviews with content creators from video streaming platforms. The results confirm the viability of the proposed solution in enhancing transparency and auditability in the realm of audience control services and payments on video streaming platforms.

26 pages, 2516 KiB  
Article
Visual Data and Pattern Analysis for Smart Education: A Robust DRL-Based Early Warning System for Student Performance Prediction
by Wala Bagunaid, Naveen Chilamkurti, Ahmad Salehi Shahraki and Saeed Bamashmos
Future Internet 2024, 16(6), 206; https://doi.org/10.3390/fi16060206 - 11 Jun 2024
Abstract
Artificial Intelligence (AI) and Deep Reinforcement Learning (DRL) have revolutionised e-learning by creating personalised, adaptive, and secure environments. However, challenges such as privacy, bias, and data limitations persist. E-FedCloud aims to address these issues by providing more agile, personalised, and secure e-learning experiences. This study introduces E-FedCloud, an AI-assisted, adaptive e-learning system that automates personalised recommendations and tracking, thereby enhancing student performance. It employs federated learning-based authentication to ensure secure and private access for both course instructors and students. Intelligent Software Agents (ISAs) evaluate weekly student engagement using the Shannon Entropy method, classifying students into either engaged or not-engaged clusters. E-FedCloud utilises weekly engagement status, demographic information, and an innovative DRL-based early warning system, specifically ID2QN, to predict the performance of not-engaged students. Based on these predictions, the system categorises students into three groups: risk of dropping out, risk of scoring lower in the final exam, and risk of failing the end exam. It employs a multi-disciplinary ontology graph and an attention-based capsule network for automated, personalised recommendations. The system also integrates performance tracking to enhance student engagement. Data are securely stored on a blockchain using the LWEA encryption method.
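The Shannon entropy engagement measure is straightforward to compute from a student's weekly action log. The fixed threshold below is illustrative only, since the paper derives engaged/not-engaged clusters rather than using a hard cutoff:

```python
import math
from collections import Counter

def engagement_entropy(actions):
    """Shannon entropy (in bits) of a student's weekly action distribution.
    High entropy means activity is spread across varied action types."""
    counts = Counter(actions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def classify_engagement(actions, threshold=1.0):
    """Label a student engaged when weekly activity is present and varied.
    The 1.0-bit threshold is an assumption for this sketch; the paper
    clusters students instead of thresholding."""
    if not actions:
        return "not-engaged"
    return "engaged" if engagement_entropy(actions) >= threshold else "not-engaged"
```

Note that a student who repeats one action many times scores zero entropy, so the measure captures diversity of engagement, not raw activity volume.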

30 pages, 2075 KiB  
Article
Enhancing Efficiency and Security in Unbalanced PSI-CA Protocols through Cloud Computing and Homomorphic Encryption in Mobile Networks
by Wuzheng Tan, Shenglong Du and Jian Weng
Future Internet 2024, 16(6), 205; https://doi.org/10.3390/fi16060205 - 7 Jun 2024
Abstract
Private Set Intersection Cardinality (PSI-CA) is a cryptographic method in secure multi-party computation that allows entities to identify the cardinality of their intersection without revealing their private data. Traditional approaches assume similar-sized datasets and equal computational power, overlooking practical imbalances. In real-world applications, dataset sizes and computational capacities often vary, particularly in Internet of Things and mobile scenarios where device limitations restrict the types of computation available. Traditional PSI-CA protocols are inefficient here, as their computational and communication complexities correlate with the size of the larger dataset. Adapting PSI-CA protocols to these imbalances is therefore crucial. This paper explores unbalanced scenarios where one party (the receiver) has a relatively small dataset and limited computational power, while the other party (the sender) has a large amount of data and strong computational capabilities. Based on the concept of commutative encryption, and introducing Cuckoo filters, cloud computing technology, and homomorphic encryption, among other techniques, this paper constructs three novel solutions for unbalanced PSI-CA: an unbalanced PSI-CA protocol based on Cuckoo filters, an unbalanced PSI-CA protocol based on single-cloud assistance, and an unbalanced PSI-CA protocol based on dual-cloud assistance. Depending on performance and security requirements, different protocols can be employed for various applications.
(This article belongs to the Section Cybersecurity)
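The commutative-encryption idea underlying these PSI-CA protocols can be shown with a toy Diffie–Hellman-style construction: since (H(x)^a)^b = (H(x)^b)^a mod P, doubly encrypted values match exactly when the underlying items match, so comparing them reveals only the intersection cardinality. The modulus and keys below are toy parameters, unsafe for real use:

```python
import hashlib

P = 2**127 - 1  # toy Mersenne-prime modulus; real deployments need a proper group

def enc(item: str, key: int) -> int:
    """Commutative 'encryption' E_k(x) = H(x)^k mod P (illustration only)."""
    h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P
    return pow(h, key, P)

def psi_ca(receiver_set, sender_set, a=0x1234567, b=0x7654321):
    """Toy PSI-CA: each party exponentiates with its own secret key, the
    other party re-exponentiates, and only the count of matching doubly
    encrypted values is learned. Runs both roles locally for the sketch."""
    recv_once = {enc(x, a) for x in receiver_set}   # receiver -> sender
    send_once = {enc(y, b) for y in sender_set}     # sender -> receiver
    recv_twice = {pow(c, b, P) for c in recv_once}  # sender re-encrypts
    send_twice = {pow(c, a, P) for c in send_once}  # receiver re-encrypts
    return len(recv_twice & send_twice)
```

The unbalanced protocols in the paper restructure who does which exponentiations (offloading the heavy side to the sender or to cloud servers) so the weak receiver's work stays proportional to its own small set.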

23 pages, 853 KiB  
Article
Usability Evaluation of Wearable Smartwatches Using Customized Heuristics and System Usability Scale Score
by Majed A. Alshamari and Maha M. Althobaiti
Future Internet 2024, 16(6), 204; https://doi.org/10.3390/fi16060204 - 6 Jun 2024
Abstract
The mobile and wearable nature of smartwatches poses challenges in evaluating their usability. This paper presents a study employing customized heuristic evaluation and the System Usability Scale (SUS) on four smartwatches, along with their mobile applications. A total of 11 heuristics were developed and validated by experts by combining Nielsen’s heuristics with Motti and Caine’s heuristics. In this study, 20 participants used the watches and completed the SUS survey. A total of 307 usability issues were reported by the evaluators. The results show that the Galaxy Watch 5 scored highest in terms of efficiency, ease of use, features, and battery life compared to the other three smartwatches, had fewer usability issues, and received the highest SUS score of 87.375. The results indicate that ease of use, features, and flexibility are important usability attributes for future smartwatches. Both evaluation methods showed no significant differences in results, and customized heuristics were found to be useful for smartwatch evaluation.
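The SUS score referenced here follows the standard scoring rule for the 10-item questionnaire: odd items contribute (response − 1), even items contribute (5 − response), and the sum is scaled by 2.5 onto 0–100. The non-multiple-of-2.5 value 87.375 is presumably a mean across participants:

```python
def sus_score(responses):
    """System Usability Scale score for one completed questionnaire.

    responses: ten Likert answers (1-5) in questionnaire order.
    Odd-numbered items (index 0, 2, ...) score r - 1; even-numbered items
    score 5 - r; the total is scaled by 2.5 to the 0-100 range.
    """
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

def mean_sus(all_responses):
    """Average SUS score across participants, as typically reported."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)
```

Because the even items are negatively worded, this alternating scoring is essential; summing raw responses would be meaningless.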

18 pages, 3409 KiB  
Article
Evaluation of Radio Access Protocols for V2X in 6G Scenario-Based Models
by Héctor Orrillo, André Sabino and Mário Marques da Silva
Future Internet 2024, 16(6), 203; https://doi.org/10.3390/fi16060203 - 6 Jun 2024
Abstract
The expansion of mobile connectivity with the arrival of 6G paves the way for the new Internet of Verticals (6G-IoV), benefiting autonomous driving. This article highlights the importance of vehicle-to-everything (V2X) and vehicle-to-vehicle (V2V) communication in improving road safety. Current technologies such as IEEE 802.11p and LTE-V2X are being improved, while new radio access technologies promise more reliable, lower-latency communications. Moreover, 3GPP is developing NR-V2X to improve the performance of communications between vehicles, while IEEE proposes the 802.11bd protocol, aiming for greater interoperability and detection of transmissions between vehicles. Both new protocols are being developed and improved to make autonomous driving more efficient. The relevance of V2V communication has driven intense research in the scientific community; among its various applications are Cooperative Awareness, V2V Unicast Exchange, and V2V Decentralized Environmental Notification. This study analyzes and compares the Link Layer performance of the protocols mentioned, namely 802.11p, 802.11bd, LTE-V2X, and NR-V2X. The contribution of this study is to identify the most suitable protocol that meets the requirements of V2V communications in autonomous driving. Based on the analysis of the results, it can be concluded that NR-V2X outperforms IEEE 802.11bd in terms of transmission latency (L) and data rate (DR). In terms of the packet error rate (PER), both LTE-V2X and NR-V2X exhibit a lower PER than the IEEE protocols, especially as the distance between vehicles increases. This advantage becomes even more significant in scenarios with greater congestion and network interference.

29 pages, 2761 KiB  
Article
Metric Space Indices for Dynamic Optimization in a Peer to Peer-Based Image Classification Crowdsourcing Platform
by Fernando Loor, Veronica Gil-Costa and Mauricio Marin
Future Internet 2024, 16(6), 202; https://doi.org/10.3390/fi16060202 - 6 Jun 2024
Viewed by 478
Abstract
Large-scale computer platforms that process users’ online requests must be capable of handling unexpected spikes in arrival rates. These platforms, which are composed of distributed components, can be configured with parameters to ensure both the quality of the results obtained for each request and low response times. In this work, we propose a dynamic optimization engine based on metric space indexing to address this problem. The engine is integrated into the platform and periodically monitors performance metrics to determine whether new configuration parameter values need to be computed. Our case study focuses on a P2P platform designed for classifying crowdsourced images related to natural disasters. We evaluate our approach under scenarios with high and low workloads, comparing it against alternative methods based on deep reinforcement learning. The results show that our approach reduces processing time by an average of 40%. Full article
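Metric space indices of the kind the engine relies on avoid exhaustive distance computations by precomputing distances to reference points. The sketch below is a minimal, hedged illustration of this idea (a single-pivot index with triangle-inequality pruning), not the authors' optimization engine; the point set and function names are hypothetical.

```python
import math

def build_pivot_index(points, pivot):
    """Precompute each point's distance to a fixed pivot."""
    return [math.dist(p, pivot) for p in points]

def range_search(points, pivot, pivot_dists, query, radius):
    """Find points within `radius` of `query`, pruning candidates via the
    triangle inequality: |d(query, pivot) - d(p, pivot)| > radius
    implies d(query, p) > radius, so p can be skipped."""
    dq = math.dist(query, pivot)
    hits = []
    for p, dp in zip(points, pivot_dists):
        if abs(dq - dp) > radius:        # cheap rejection, no distance call
            continue
        if math.dist(query, p) <= radius:
            hits.append(p)
    return hits

points = [(0, 0), (1, 1), (5, 5), (6, 5)]
pivot = (0, 0)
idx = build_pivot_index(points, pivot)
print(range_search(points, pivot, idx, (1, 0), 1.5))  # [(0, 0), (1, 1)]
```

With more pivots (or a tree over them), the fraction of pruned candidates grows, which is what makes such indices attractive for low-latency lookups under load.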

32 pages, 1109 KiB  
Article
Impact, Compliance, and Countermeasures in Relation to Data Breaches in Publicly Traded U.S. Companies
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara, Robson de Oliveira Albuquerque and Georges Daniel Amvame Nze
Future Internet 2024, 16(6), 201; https://doi.org/10.3390/fi16060201 - 5 Jun 2024
Cited by 1 | Viewed by 922
Abstract
A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ. This dataset has been augmented with additional information regarding the targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences on the affected companies and individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the relevant laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impacts of data breaches on stock market prices. We also review U.S. guidelines for responding appropriately to data leaks, with a view to achieving compliance and reducing costs. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide. Full article
(This article belongs to the Collection Information Systems Security)

22 pages, 2903 KiB  
Article
Implementation of Lightweight Machine Learning-Based Intrusion Detection System on IoT Devices of Smart Homes
by Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi and Hadi Larijani
Future Internet 2024, 16(6), 200; https://doi.org/10.3390/fi16060200 - 5 Jun 2024
Viewed by 849
Abstract
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not been embedded in IoT devices. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for classification of attacks. The proposed IDS can detect the threats with an accuracy of 99.50% at cloud level (multiclassification). For real-time testing, we implemented the Raspberry Pi 4-based adversary to generate a dataset for man-in-the-middle (MITM) and denial of service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)

15 pages, 1885 KiB  
Article
Metaverse and Fashion: An Analysis of Consumer Online Interest
by Carmen Ruiz Viñals, Marta Gil Ibáñez and José Luis Del Olmo Arriaga
Future Internet 2024, 16(6), 199; https://doi.org/10.3390/fi16060199 - 4 Jun 2024
Viewed by 476
Abstract
Recent studies have demonstrated the value that the Internet and web applications bring to businesses. Among other tools are those that enable the analysis and monitoring of searches, such as Google Trends, which is currently used by the fashion industry to guide experiential practices in a context of augmented reality and/or virtual reality, and even to predict purchasing behaviours through the metaverse. Data from this tool provide insight into fashion consumer search patterns. Understanding and managing this digital tool is an essential factor in rethinking businesses’ marketing strategies. The aim of this study is to analyse online user search behaviour by analysing and monitoring the terms “metaverse” and “fashion” on Google Trends. A quantitative descriptive cross-sectional method was employed. The results show that there is growing consumer interest in both concepts on the Internet, despite the lack of homogeneity in the behaviour of the five Google search tools. Full article

17 pages, 2091 KiB  
Systematic Review
The Use of Artificial Intelligence in eParticipation: Mapping Current Research
by Zisis Vasilakopoulos, Theocharis Tavantzis, Rafail Promikyridis and Efthimios Tambouris
Future Internet 2024, 16(6), 198; https://doi.org/10.3390/fi16060198 - 3 Jun 2024
Viewed by 542
Abstract
Electronic Participation (eParticipation) enables citizens to engage in political and decision-making processes using information and communication technologies. As in many other fields, Artificial Intelligence (AI) has recently started to dictate some of the realities of eParticipation. As a result, an increasing number of studies are investigating the use of AI in eParticipation. The aim of this paper is to map current research on the use of AI in eParticipation. Following PRISMA methodology, the authors identified 235 relevant papers in Web of Science and Scopus and selected 46 studies for review. For analysis purposes, an analysis framework was constructed that combined eParticipation elements (namely actors, activities, effects, contextual factors, and evaluation) with AI elements (namely areas, algorithms, and algorithm evaluation). The results suggest that certain eParticipation actors and activities, as well as AI areas and algorithms, have attracted significant attention from researchers. However, many more remain largely unexplored. The findings can be of value to both academics looking for unexplored research fields and practitioners looking for empirical evidence on what works and what does not. Full article

18 pages, 1364 KiB  
Article
In-Home Evaluation of the Neo Care Artificial Intelligence Sound-Based Fall Detection System
by Carol Maher, Kylie A. Dankiw, Ben Singh, Svetlana Bogomolova and Rachel G. Curtis
Future Internet 2024, 16(6), 197; https://doi.org/10.3390/fi16060197 - 2 Jun 2024
Viewed by 615
Abstract
The Neo Care home monitoring system aims to detect falls and other events using artificial intelligence. This study evaluated Neo Care’s accuracy and explored user perceptions through a 12-week in-home trial with 18 households of adults aged 65+ years at risk of falls (mean age: 75.3 years; 67% female). Participants logged events that were cross-referenced with Neo Care logs to calculate sensitivity and specificity for fall detection and response. Qualitative interviews gathered in-depth user feedback. During the trial, 28 falls/events were documented, with 12 eligible for analysis as others occurred outside the home or when devices were offline. Neo Care was activated 4939 times: 4930 by everyday household sounds and 9 by actual falls. Fall detection sensitivity was 75.00% and specificity 6.80%. For responding to falls, sensitivity was 62.50% and specificity 17.28%. Users felt more secure with Neo Care but identified needs for further calibration to improve accuracy. Advantages included avoiding wearables, while key challenges were misinterpreting noises and occasional technical issues like going offline. Suggested improvements were visual indicators, trigger words, and outdoor capability. The study demonstrated Neo Care’s potential with modifications. Users found it beneficial, but highlighted areas for improvement. Real-world evaluations and user-centered design are crucial for healthcare technology development. Full article
(This article belongs to the Special Issue eHealth and mHealth)
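The sensitivity and specificity figures quoted above follow the standard confusion-matrix definitions. As a minimal illustration (not part of the study's own analysis code), the reported fall-detection sensitivity can be reproduced from the counts in the abstract, where 9 of the 12 eligible falls triggered an alert:

```python
def sensitivity(tp, fn):
    """True-positive rate: detected events / all real events."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly ignored non-events / all non-events."""
    return tn / (tn + fp)

# From the trial: 9 of the 12 eligible falls triggered an alert
print(round(100 * sensitivity(tp=9, fn=3), 2))  # 75.0
```

The very low specificity (6.80%) reflects the 4930 activations caused by everyday household sounds, i.e. a large false-positive count in the denominator of the true-negative rate.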

22 pages, 912 KiB  
Article
Efficiency of Federated Learning and Blockchain in Preserving Privacy and Enhancing the Performance of Credit Card Fraud Detection (CCFD) Systems
by Tahani Baabdullah, Amani Alzahrani, Danda B. Rawat and Chunmei Liu
Future Internet 2024, 16(6), 196; https://doi.org/10.3390/fi16060196 - 2 Jun 2024
Viewed by 433
Abstract
Increasing global credit card usage has elevated it to a preferred payment method for daily transactions, underscoring its significance in global financial cybersecurity. This paper introduces a credit card fraud detection (CCFD) system that integrates federated learning (FL) with blockchain technology. The experiment employs FL to establish a global learning model on the cloud server, which transmits initial parameters to individual local learning models on fog nodes. With three banks (fog nodes) involved, each bank trains its learning model locally, ensuring data privacy, and subsequently sends back updated parameters to the global learning model. Through the integration of FL and blockchain, our system ensures privacy preservation and data protection. We utilize three machine learning and deep learning algorithms, RF, CNN, and LSTM, alongside optimization techniques such as ADAM, SGD, and MSGD. The SMOTE oversampling technique is also employed to balance the dataset before model training. Our proposed framework has demonstrated efficiency and effectiveness in enhancing classification performance and prediction accuracy. Full article
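SMOTE-style oversampling balances a fraud dataset by synthesizing new minority-class (fraud) samples rather than duplicating existing ones. The sketch below is a hedged, simplified illustration of that interpolation idea, not the paper's pipeline (which would typically use a library implementation with k-nearest neighbours); the toy points and function name are hypothetical.

```python
import math
import random

def smote_like(minority, n_new, seed=0):
    """SMOTE-style sketch: synthesize minority-class points by
    interpolating between a randomly chosen sample and its nearest
    minority-class neighbour, at a random fraction of the segment."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbour = min((p for p in minority if p != x),
                        key=lambda p: math.dist(x, p))
        t = rng.random()  # position along the segment x -> neighbour
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, neighbour)))
    return synthetic

fraud = [(0.9, 0.1), (0.8, 0.2), (0.95, 0.15)]  # toy minority class
print(len(smote_like(fraud, n_new=5)))          # 5
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the genuine fraud cases occupy.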

19 pages, 1936 KiB  
Article
GreenLab, an IoT-Based Small-Scale Smart Greenhouse
by Cristian Volosciuc, Răzvan Bogdan, Bianca Blajovan, Cristina Stângaciu and Marius Marcu
Future Internet 2024, 16(6), 195; https://doi.org/10.3390/fi16060195 - 31 May 2024
Viewed by 444
Abstract
In an era of connectivity, the Internet of Things introduces smart solutions for smart and sustainable agriculture, bringing alternatives to overcome the food crisis. Among these solutions, smart greenhouses support crop and vegetable agriculture regardless of season and cultivated area by carefully controlling and managing parameters like temperature, air and soil humidity, and light. Smart technologies have proven to be successful tools for increasing agricultural production at both the macro and micro levels, which is an important step in streamlining small-scale agriculture. This paper presents an experimental Internet of Things-based small-scale greenhouse prototype as a proof of concept for the benefits of merging smart sensing, connectivity, IoT, and mobile-based applications for growing crops. Our proposed solution is cost-friendly and includes a photovoltaic panel and a buffer battery, which reduce energy costs while ensuring operation at night and in cloudy weather, as well as a mobile application for easy data visualization and monitoring of the greenhouse. Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)

14 pages, 3949 KiB  
Article
Research on Multi-Modal Pedestrian Detection and Tracking Algorithm Based on Deep Learning
by Rui Zhao, Jutao Hao and Huan Huo
Future Internet 2024, 16(6), 194; https://doi.org/10.3390/fi16060194 - 31 May 2024
Viewed by 386
Abstract
In the realm of intelligent transportation, pedestrian detection has witnessed significant advancements. However, it continues to grapple with challenging issues, notably the detection of pedestrians in complex lighting scenarios. Conventional visible light mode imaging is profoundly affected by varying lighting conditions. Under optimal daytime lighting, visibility is enhanced, leading to superior pedestrian detection outcomes. Conversely, under low-light conditions, visible light mode imaging falters due to the inadequate provision of pedestrian target information, resulting in a marked decline in detection efficacy. In this context, infrared light mode imaging emerges as a valuable supplement, bolstering pedestrian information provision. This paper delves into pedestrian detection and tracking algorithms within a multi-modal image framework grounded in deep learning methodologies. Leveraging the YOLOv4 algorithm as a foundation, augmented by a channel stack fusion module, a novel multi-modal pedestrian detection algorithm tailored for intelligent transportation is proposed. This algorithm capitalizes on the fusion of visible and infrared light mode image features to enhance pedestrian detection performance amidst complex road environments. Experimental findings demonstrate that compared to the Visible-YOLOv4 algorithm, renowned for its high performance, the proposed Double-YOLOv4-CSE algorithm exhibits a notable improvement, with a 5.0% gain in accuracy and a 6.9% reduction in the log-average miss rate. This research’s goal is to ensure that the algorithm can run smoothly even on a low-spec 1080 Ti GPU and to improve the algorithm’s coverage at the application layer, making it affordable and practical for both urban and rural areas. This addresses the broader research problem within the scope of smart cities and remote areas with limited computational power. Full article

16 pages, 335 KiB  
Article
Enhancing Sensor Data Imputation: OWA-Based Model Aggregation for Missing Values
by Muthana Al-Amidie, Laith Alzubaidi, Muhammad Aminul Islam and Derek T. Anderson
Future Internet 2024, 16(6), 193; https://doi.org/10.3390/fi16060193 - 31 May 2024
Viewed by 295
Abstract
Due to some limitations in the data collection process caused either by human-related errors or by collection electronics, sensors, and network connectivity-related errors, the important values at some points could be lost. However, a complete dataset is required for the desired performance of the subsequent applications in various fields like engineering, data science, statistics, etc. An efficient data imputation technique is desired to fill in the missing data values to achieve completeness within the dataset. The fuzzy integral is considered one of the most powerful techniques for multi-source information fusion. It has a wide range of applications in many real-world decision-making problems that often require decisions to be made with partially observable/available information. To address this problem, algorithms impute missing data with a representative sample or by predicting the most likely value given the observed data. In this article, we take a completely different approach to the information fusion task in the ordered weighted averaging (OWA) context. In particular, we empirically explore for different distributions how the weights/importance of the missing sources are distributed across the observed inputs/sources. The experimental results on the synthetic and real-world datasets demonstrate the applicability of the proposed methods. Full article
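The ordered weighted averaging (OWA) operator at the core of this approach assigns weights by the *rank* of each input rather than by its source: inputs are sorted first, then weighted by position. The snippet below is a minimal sketch of the plain OWA operator for intuition, not the paper's imputation method; the example values and weight vector are hypothetical.

```python
def owa(values, weights):
    """Ordered weighted average: sort the inputs in descending order,
    then apply position-based (not source-based) weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Three observed sources; this weight vector favours larger readings:
# 0.5*9 + 0.3*5 + 0.2*2 = 6.4
print(owa([2.0, 9.0, 5.0], [0.5, 0.3, 0.2]))
```

Choosing the weight vector moves the operator along the spectrum from max ([1, 0, 0]) through the arithmetic mean (uniform weights) to min ([0, 0, 1]), which is what makes OWA a flexible fusion rule when some sources are missing.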

16 pages, 5464 KiB  
Article
Prophet–CEEMDAN–ARBiLSTM-Based Model for Short-Term Load Forecasting
by Jindong Yang, Xiran Zhang, Wenhao Chen and Fei Rong
Future Internet 2024, 16(6), 192; https://doi.org/10.3390/fi16060192 - 31 May 2024
Viewed by 329
Abstract
Accurate short-term load forecasting (STLF) plays an essential role in sustainable energy development. Specifically, energy companies can efficiently plan and manage their generation capacity, lessening resource wastage and promoting the overall efficiency of power resource utilization. However, existing models cannot accurately capture the nonlinear features of electricity data, leading to a decline in the forecasting performance. To relieve this issue, this paper designs an innovative load forecasting method, named Prophet–CEEMDAN–ARBiLSTM, which consists of Prophet, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), and the residual Bidirectional Long Short-Term Memory (BiLSTM) network. Specifically, this paper firstly employs the Prophet method to learn cyclic and trend features from input data, aiming to discern the influence of these features on the short-term electricity load. Then, the paper adopts CEEMDAN to decompose the residual series and yield components with distinct modalities. In the end, this paper designs the advanced residual BiLSTM (ARBiLSTM) block as the input of the above extracted features to obtain the forecasting results. By conducting multiple experiments on the New England public dataset, it demonstrates that the Prophet–CEEMDAN–ARBiLSTM method can achieve better performance compared with the existing Prophet-based ones. Full article

22 pages, 2048 KiB  
Article
Harnessing the Cloud: A Novel Approach to Smart Solar Plant Monitoring
by Mohammad Imran Ali, Shahi Dost, Khurram Shehzad Khattak, Muhammad Imran Khan and Riaz Muhammad
Future Internet 2024, 16(6), 191; https://doi.org/10.3390/fi16060191 - 29 May 2024
Viewed by 626
Abstract
Renewable Energy Sources (RESs) such as hydro, wind, and solar are emerging as preferred alternatives to fossil fuels. Among these RESs, solar energy is an ideal solution; it is gaining extensive interest around the globe. However, due to solar energy’s intermittent nature and sensitivity to environmental parameters (e.g., irradiance, dust, temperature, aging and humidity), real-time solar plant monitoring is imperative. This paper’s contribution is to compare and analyze current IoT trends and propose future research directions. As a result, this will be instrumental in the development of low-cost, real-time, scalable, reliable, and power-optimized solar plant monitoring systems. In this work, a comparative analysis has been performed on proposed solutions using the existing literature. This comparative analysis has been conducted considering five aspects: computer boards, sensors, communication, servers, and architectural paradigms. IoT architectural paradigms employed have been summarized and discussed with respect to communication, application layers, and storage capabilities. To facilitate enhanced IoT-based solar monitoring, an edge computing paradigm has been proposed. Suggestions are presented for the fabrication of edge devices and nodes using optimum compute boards, sensors, and communication modules. Different cloud platforms have been explored, and it was concluded that the public cloud platform Amazon Web Services is the ideal solution. Artificial intelligence-based techniques, methods, and outcomes are presented, which can help in the monitoring, analysis, and management of solar PV systems. As an outcome, this paper can be used to help researchers and academics develop low-cost, real-time, effective, scalable, and reliable solar monitoring systems. Full article
(This article belongs to the Section Internet of Things)

15 pages, 1583 KiB  
Article
Tracing Student Activity Patterns in E-Learning Environments: Insights into Academic Performance
by Evgenia Paxinou, Georgios Feretzakis, Rozita Tsoni, Dimitrios Karapiperis, Dimitrios Kalles and Vassilios S. Verykios
Future Internet 2024, 16(6), 190; https://doi.org/10.3390/fi16060190 - 29 May 2024
Viewed by 892
Abstract
In distance learning educational environments like Moodle, students interact with their tutors, their peers, and the provided educational material through various means. Due to advancements in learning analytics, students’ transitions within Moodle generate digital trace data that outline learners’ self-directed learning paths and reveal information about their academic behavior within a course. These learning paths can be depicted as sequences of transitions between various states, such as completing quizzes, submitting assignments, downloading files, and participating in forum discussions, among others. Considering that a specific learning path summarizes the students’ trajectory in a course during an academic year, we analyzed data on students’ actions extracted from Moodle logs to investigate how the distribution of user actions within different Moodle resources can impact academic achievements. Our analysis was conducted using a Markov Chain Model, whereby transition matrices were constructed to identify steady states, and eigenvectors were calculated. Correlations were explored between specific states in users’ eigenvectors and their final grades, which were used as a proxy of academic performance. Our findings offer valuable insights into the relationship between student actions, link weight vectors, and academic performance, in an attempt to optimize students’ learning paths, tutors’ guidance, and course structures in the Moodle environment. Full article
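The steady-state analysis described above can be sketched concretely. The snippet below is a hedged illustration, not the authors' code: the 3-state transition matrix (quiz, assignment, forum) is hypothetical, and the stationary vector (the dominant left eigenvector of the transition matrix) is approximated by power iteration rather than an explicit eigendecomposition.

```python
def stationary_distribution(P, iters=500):
    """Approximate the steady state of a row-stochastic transition
    matrix P by power iteration: pi_{t+1}[j] = sum_i pi_t[i] * P[i][j]."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-state learning path: quiz, assignment, forum
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
pi = stationary_distribution(P)
print([round(x, 3) for x in pi])
```

Each component of the resulting vector estimates the long-run share of a student's actions spent in that state, which is the quantity the paper correlates with final grades.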

20 pages, 2832 KiB  
Article
Dynamic Spatial–Temporal Self-Attention Network for Traffic Flow Prediction
by Dong Wang, Hongji Yang and Hua Zhou
Future Internet 2024, 16(6), 189; https://doi.org/10.3390/fi16060189 - 25 May 2024
Viewed by 542
Abstract
Traffic flow prediction is considered to be one of the fundamental technologies in intelligent transportation systems (ITSs) with a tremendous application prospect. Unlike traditional time series analysis tasks, the key challenge in traffic flow prediction lies in effectively modelling the highly complex and dynamic spatiotemporal dependencies within the traffic data. In recent years, researchers have proposed various methods to enhance the accuracy of traffic flow prediction, but certain issues still persist. For instance, some methods rely on specific static assumptions, failing to adequately simulate the dynamic changes in the data, thus limiting their modelling capacity. On the other hand, some approaches inadequately capture the spatiotemporal dependencies, resulting in the omission of crucial information and leading to unsatisfactory prediction outcomes. To address these challenges, this paper proposes a model called the Dynamic Spatial–Temporal Self-Attention Network (DSTSAN). Firstly, this research enhances the interaction between different dimension features in the traffic data through a feature augmentation module, thereby improving the model’s representational capacity. Subsequently, the current investigation introduces two masking matrices based on the spatial self-attention module: one captures local spatial dependencies and the other captures global spatial dependencies. Finally, the methodology employs a temporal self-attention module to capture and integrate the dynamic temporal dependencies of traffic data. We designed experiments that use the previous hour of historical data to predict traffic flow in the hour ahead, and we extensively compared the DSTSAN model with 11 baseline methods on four real-world datasets. The results demonstrate the effectiveness and superiority of the proposed approach. Full article
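The masking matrices described above restrict which positions each query may attend to before the softmax is applied. The snippet below is a minimal, hedged sketch of that mechanism on precomputed attention scores, not the DSTSAN implementation; the score matrix and the local-neighbourhood mask are hypothetical.

```python
import math

def masked_softmax_attention(scores, mask):
    """Row-softmax over attention scores, where mask[i][j] = 0 blocks
    query i from attending to position j (its weight becomes 0)."""
    weights = []
    for row, mrow in zip(scores, mask):
        exps = [math.exp(s) if m else 0.0 for s, m in zip(row, mrow)]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights

# Raw scores for 3 sensors (e.g. query-key dot products)
scores = [[1.0, 0.5, 0.2],
          [0.5, 1.0, 0.8],
          [0.2, 0.8, 1.0]]
# Local mask: each sensor attends only to itself and adjacent sensors;
# a global mask would simply set every entry to 1
local = [[1, 1, 0],
         [1, 1, 1],
         [0, 1, 1]]
A = masked_softmax_attention(scores, local)
```

Swapping the local mask for an all-ones matrix recovers unrestricted (global) attention, which is how a single attention module can capture both kinds of spatial dependency.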
