Future Internet, Volume 15, Issue 5 (May 2023) – 35 articles

Cover Story (view full-size image): Industry 5.0 has sparked a transformative era of unprecedented collaboration between humans and machines. This partnership is revolutionizing industries, driving innovation, and reshaping traditional processes. Artificial Intelligence (AI) technologies have significantly transformed the competitive industrial landscape, and this shift highlights the emergence of a super-smart society, or Society 5.0. This topic therefore explores the concept of the Humachine, which envisions a future where humans and machines coexist, leveraging the strengths of both. The objective is to examine and differentiate the capabilities and unique qualities of humans and machines, with the goal of establishing a foundation for enhancing the interaction between them, known as Human–Machine Interaction (HMI). View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
27 pages, 12723 KiB  
Article
Attacks on IoT: Side-Channel Power Acquisition Framework for Intrusion Detection
by Dominic Lightbody, Duc-Minh Ngo, Andriy Temko, Colin C. Murphy and Emanuel Popovici
Future Internet 2023, 15(5), 187; https://doi.org/10.3390/fi15050187 - 21 May 2023
Cited by 10 | Viewed by 3576
Abstract
This study proposes the wider use of non-intrusive side-channel power data in cybersecurity for intrusion detection. An in-depth analysis of side-channel IoT power behaviour is performed on two well-known IoT devices—a Raspberry Pi 3 model B and a DragonBoard 410c—operating under normal conditions and under attack. Attacks from the categories of reconnaissance, brute force and denial of service are applied, and the side-channel power data of the IoT testbeds are then studied in detail. These attacks are used together to further compromise the IoT testbeds in a “capture-the-flag scenario”, where the attacker aims to infiltrate the device and retrieve a secret file. Some clear similarities in the side-channel power signatures of these attacks can be seen across the two devices. Furthermore, using the knowledge gained from studying the features of these attacks individually and the signatures witnessed in the “capture the flag scenario”, we show that security teams can reverse engineer attacks applied to their system to achieve a much greater understanding of the events that occurred during a breach. While this study presents behaviour signatures analysed visually, the acquired power series datasets will be instrumental for future human-centred AI-assisted intrusion detection. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT II)
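As a rough illustration of the kind of analysis described above, the sketch below flags anomalous windows in a side-channel power trace using a simple z-score test on windowed mean power; the synthetic trace, window length, and threshold are placeholder assumptions, not the authors' acquisition framework or datasets.

```python
import numpy as np

def flag_anomalous_windows(power_trace, window=500, threshold=3.0):
    """Split a 1-D power trace into fixed-size windows and flag windows whose
    mean power deviates strongly from the trace-wide baseline (z-score test)."""
    n_windows = len(power_trace) // window
    windows = power_trace[: n_windows * window].reshape(n_windows, window)
    window_means = windows.mean(axis=1)
    baseline = window_means.mean()
    spread = window_means.std() + 1e-12           # guard against division by zero
    z_scores = np.abs(window_means - baseline) / spread
    return np.where(z_scores > threshold)[0]      # indices of suspicious windows

# Synthetic example: ~1.2 W idle consumption with an injected burst of activity
rng = np.random.default_rng(0)
trace = rng.normal(1.2, 0.05, 100_000)
trace[40_000:45_000] += 0.4                       # hypothetical attack signature
print(flag_anomalous_windows(trace))              # windows 80-89 stand out
```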

20 pages, 10090 KiB  
Article
Design of an SoC Based on 32-Bit RISC-V Processor with Low-Latency Lightweight Cryptographic Cores in FPGA
by Khai-Minh Ma, Duc-Hung Le, Cong-Kha Pham and Trong-Thuc Hoang
Future Internet 2023, 15(5), 186; https://doi.org/10.3390/fi15050186 - 19 May 2023
Cited by 3 | Viewed by 4587
Abstract
The security of Internet of Things (IoT) devices in recent years has created interest in developing implementations of lightweight cryptographic algorithms for such systems. Additionally, open-source hardware and field-programmable gate arrays (FPGAs) are gaining traction via newly developed tools, frameworks, and HDLs. This enables new methods of creating hardware and systems faster, more simply, and more efficiently. In this paper, the implementation of a system-on-chip (SoC) based on a 32-bit RISC-V processor with lightweight cryptographic accelerator cores in FPGA and an open-source integrating framework is presented. The system consists of a 32-bit VexRiscv processor, written in SpinalHDL, and lightweight cryptographic accelerator cores for the PRINCE block cipher, the PRESENT-80 block cipher, the ChaCha stream cipher, and the SHA3-512 hash function, written in Verilog HDL and optimized for low latency with fewer clock cycles. The primary aim of this work was to develop a customized SoC platform with a register-controlled bus suitable for integrating lightweight cryptographic cores to become compact embedded systems that require encryption functionalities. Additionally, custom firmware was developed to verify the functionality of the SoC with all integrated accelerator cores, and to evaluate the speed of cryptographic processing. The proposed system was successfully implemented on a Xilinx Nexys4 DDR FPGA development board. The FPGA resource usage of the system was low: 11,830 LUTs and 9552 FFs. The proposed system is applicable to enhancing the security of Internet of Things systems. Full article
(This article belongs to the Section Internet of Things)

16 pages, 4564 KiB  
Article
Communication-Traffic-Assisted Mining and Exploitation of Buffer Overflow Vulnerabilities in ADASs
by Yufeng Li, Mengxiao Liu, Chenhong Cao and Jiangtao Li
Future Internet 2023, 15(5), 185; https://doi.org/10.3390/fi15050185 - 18 May 2023
Cited by 4 | Viewed by 1577
Abstract
Advanced Driver Assistance Systems (ADASs) are crucial components of intelligent vehicles, equipped with a vast code base. To enhance the security of ADASs, it is essential to mine their vulnerabilities and corresponding exploitation methods. However, mining buffer overflow (BOF) vulnerabilities in ADASs can be challenging since their code and data are not publicly available. In this study, we observed that ADAS devices commonly utilize unencrypted protocols for module communication, providing us with an opportunity to locate input stream and buffer data operations more efficiently. Based on the above observation, we proposed a communication-traffic-assisted ADAS BOF vulnerability mining and exploitation method. Our method includes firmware extraction, a firmware and system analysis, the locating of risk points with communication traffic, validation, and exploitation. To demonstrate the effectiveness of our proposed method, we applied our method to several commercial ADAS devices and successfully mined BOF vulnerabilities. By exploiting these vulnerabilities, we executed the corresponding commands and mapped the attack to the physical world, showing the severity of these vulnerabilities. Full article

19 pages, 3296 KiB  
Article
Deep Reinforcement Learning-Based Video Offloading and Resource Allocation in NOMA-Enabled Networks
by Siyu Gao, Yuchen Wang, Nan Feng, Zhongcheng Wei and Jijun Zhao
Future Internet 2023, 15(5), 184; https://doi.org/10.3390/fi15050184 - 18 May 2023
Cited by 9 | Viewed by 1810
Abstract
With the proliferation of video surveillance system deployment and related applications, real-time video analysis is critical to achieving intelligent monitoring, autonomous driving, etc. Analyzing video streams with high accuracy and low latency through traditional cloud computing is a non-trivial problem. In this paper, we propose a non-orthogonal multiple access (NOMA)-based edge real-time video analysis framework with one edge server (ES) and multiple user equipments (UEs). A cost minimization problem composed of delay, energy and accuracy is formulated to improve the quality of experience (QoE) of the UEs. In order to efficiently solve this problem, we propose the joint video frame resolution scaling, task offloading, and resource allocation algorithm based on the Deep Q-Learning Network (JVFRS-TO-RA-DQN), which effectively overcomes the sparsity of the single-layer reward function and accelerates the training convergence speed. JVFRS-TO-RA-DQN consists of two DQN networks to reduce the curse of dimensionality, which select the offloading and resource allocation action and the resolution scaling action, respectively. The experimental results show that JVFRS-TO-RA-DQN can effectively reduce the cost of edge computing and has better performance in terms of convergence compared to other baseline schemes. Full article
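To make the two-network idea concrete, the following sketch (a generic illustration assuming PyTorch, not the authors' JVFRS-TO-RA-DQN implementation) shows how a joint decision can be split across two small Q-networks, one for offloading and resource allocation and one for resolution scaling; the state and action dimensions are placeholders, and training, replay memory, and the reward model are omitted.

```python
import torch
import torch.nn as nn

STATE_DIM = 8            # e.g., queue length, channel gain, battery level (placeholders)
N_OFFLOAD_ACTIONS = 12   # offloading target x resource-allocation level (assumed)
N_RESOLUTION_ACTIONS = 4 # candidate frame resolutions (assumed)

def q_network(n_actions):
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

q_offload = q_network(N_OFFLOAD_ACTIONS)        # picks offloading + resource allocation
q_resolution = q_network(N_RESOLUTION_ACTIONS)  # picks the frame-resolution scaling

def select_actions(state, epsilon=0.1):
    """Epsilon-greedy selection over two factored action spaces.
    Keeping them separate avoids a single 12 x 4 joint output head."""
    if torch.rand(1).item() < epsilon:
        return (torch.randint(N_OFFLOAD_ACTIONS, (1,)).item(),
                torch.randint(N_RESOLUTION_ACTIONS, (1,)).item())
    with torch.no_grad():
        s = torch.as_tensor(state, dtype=torch.float32)
        return (int(q_offload(s).argmax()), int(q_resolution(s).argmax()))

print(select_actions([0.5] * STATE_DIM))
```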

24 pages, 3478 KiB  
Article
Distributed Average Consensus Algorithms in d-Regular Bipartite Graphs: Comparative Study
by Martin Kenyeres and Jozef Kenyeres
Future Internet 2023, 15(5), 183; https://doi.org/10.3390/fi15050183 - 16 May 2023
Viewed by 2127
Abstract
Consensus-based data aggregation in d-regular bipartite graphs poses a challenging task for the scientific community since some of these algorithms diverge in this critical graph topology. Nevertheless, one can see a lack of scientific studies dealing with this topic in the literature. Motivated by our recent research concerned with this issue, we provide a comparative study of frequently applied consensus algorithms for distributed averaging in d-regular bipartite graphs in this paper. More specifically, we examine the performance of these algorithms with bounded execution in this topology in order to identify which algorithms can achieve consensus without reconfiguration and to find the best-performing algorithm in these graphs. In the experimental part, we use the number of iterations required for consensus to evaluate the performance of the algorithms in randomly generated regular bipartite graphs with various connectivities and for three configurations of the applied stopping criterion, allowing us to identify the optimal distributed consensus algorithm for this graph topology. Moreover, the experimental results presented in this paper are compared to those of other scientific manuscripts, in which the analyzed algorithms are examined in non-regular, non-bipartite topologies. Full article
(This article belongs to the Special Issue Modern Trends in Multi-Agent Systems II)
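The divergence issue mentioned above can be reproduced in a few lines. On a d-regular bipartite graph, the maximum-degree consensus weights with step size 1/d reduce the iteration to averaging over the opposite partition, so the state oscillates instead of settling on the global average, while a slightly smaller step size converges. The sketch below is a simple numpy illustration on the complete bipartite graph K_{3,3} (3-regular), not the paper's experimental setup.

```python
import numpy as np

# K_{3,3}: nodes 0-2 in one partition, 3-5 in the other (3-regular, bipartite)
A = np.zeros((6, 6))
A[:3, 3:] = 1
A[3:, :3] = 1
d = 3
L = d * np.eye(6) - A                  # graph Laplacian of a d-regular graph

x0 = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
target = x0.mean()

for eps, label in [(1 / d, "eps = 1/d (oscillates)"),
                   (1 / (d + 1), "eps = 1/(d+1) (converges)")]:
    W = np.eye(6) - eps * L            # per-iteration mixing matrix
    x = x0.copy()
    for _ in range(200):
        x = W @ x
    print(label, "-> max error from average:", np.abs(x - target).max())
```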

17 pages, 1573 KiB  
Article
Efficient Mobile Sink Routing in Wireless Sensor Networks Using Bipartite Graphs
by Anas Abu Taleb, Qasem Abu Al-Haija and Ammar Odeh
Future Internet 2023, 15(5), 182; https://doi.org/10.3390/fi15050182 - 14 May 2023
Cited by 13 | Viewed by 2037
Abstract
Wireless sensor networks (W.S.N.s) are a critical research area with numerous practical applications. W.S.N.s are utilized in real-life scenarios, including environmental monitoring, healthcare, industrial automation, smart homes, and agriculture. As W.S.N.s advance and become more sophisticated, they offer limitless opportunities for innovative solutions in various fields. However, due to their unattended nature, it is essential to develop strategies to improve their performance without draining the battery power of the sensor nodes, which is their most valuable resource. This paper proposes a novel sink mobility model based on constructing a bipartite graph from a deployed wireless sensor network. The proposed model uses bipartite graph properties to derive a controlled mobility model for the mobile sink. As a result, visits to the stationary nodes are planned so as to reduce routing overhead and enhance the network’s performance. Using the bipartite graph’s properties, the mobile sink node can visit stationary sensor nodes in an optimal way to collect data and transmit it to the base station. We evaluated the proposed approach through simulations using the NS-2 simulator to investigate the performance of wireless sensor networks when adopting this mobility model. Our results show that using the proposed approach can significantly enhance the performance of wireless sensor networks while conserving the energy of the sensor nodes. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)

35 pages, 12618 KiB  
Article
Optimizing the Quality of Service of Mobile Broadband Networks for a Dense Urban Environment
by Agbotiname Lucky Imoize, Friday Udeji, Joseph Isabona and Cheng-Chi Lee
Future Internet 2023, 15(5), 181; https://doi.org/10.3390/fi15050181 - 12 May 2023
Cited by 3 | Viewed by 2433
Abstract
Mobile broadband (MBB) services in Lagos, Nigeria are marred with poor signal quality and inconsistent user experience, which can result in frustrated end-users and lost revenue for service providers. With the introduction of 5G, it is becoming more necessary for 4G LTE users to find ways of maximizing the technology while they await the installation and implementation of the new 5G networks. A comprehensive analysis of the quality of 4G LTE MBB services in three different locations in Lagos is performed. Minimal optimization techniques using particle swarm optimization (PSO) are used to propose solutions to the identified problems. A methodology that involves data collection, statistical analysis, and optimization techniques is adopted to measure key performance indicators (KPIs) for MBB services in the three locations: UNILAG, Ikorodu, and Oniru VI. The measured KPIs include reference signal received power (RSRP), reference signal received quality (RSRQ), received signal strength indicator (RSSI), and signal-to-interference-plus-noise ratio (SINR). Specific statistical analysis was performed, and the mean, standard deviation, skewness, and kurtosis were calculated for the measured KPIs. Additionally, the probability distribution functions for each KPI were plotted to infer the quality of MBB services in each location. Subsequently, the PSO algorithm was used to optimize the KPIs in each location, and the results were compared with the measured data to evaluate the effectiveness of the optimization. Generally, the optimization process results in an improvement in the quality of service (QoS) in the investigated environments. Findings also indicated that a single KPI, such as RSRP, is insufficient for assessing the quality of MBB services as perceived by end-users. Therefore, multiple KPIs should be considered instead, including RSRQ and RSSI. In order to improve MBB performance in Lagos, the recommendations include the mapping and replanning of network routes and hardware design. Additionally, it is clear that there is a significant difference in user experience between locations with good and poor reception and that consistency in signal values does not necessarily indicate a good user experience. Therefore, this study provides valuable insights and solutions for improving the quality of MBB services in Lagos and can help service providers better understand the needs and expectations of their end users. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)
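For readers unfamiliar with PSO, the sketch below shows the basic velocity and position update that such an optimization stage relies on, applied to a toy cost function standing in for a KPI objective; the inertia and acceleration coefficients are common textbook defaults, not values taken from the study.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Plain particle swarm optimization: each particle is pulled towards its own
    best position (c1 term) and the swarm-wide best position (c2 term)."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a KPI cost (e.g., distance of RSRP/SINR values from targets)
print(pso_minimize(lambda p: np.sum((p - 3.0) ** 2), dim=4))
```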

14 pages, 1196 KiB  
Article
A Hybrid Text Generation-Based Query Expansion Method for Open-Domain Question Answering
by Wenhao Zhu, Xiaoyu Zhang, Qiuhong Zhai and Chenyun Liu
Future Internet 2023, 15(5), 180; https://doi.org/10.3390/fi15050180 - 12 May 2023
Cited by 1 | Viewed by 2028
Abstract
In the two-stage open-domain question answering (OpenQA) systems, the retriever identifies a subset of relevant passages, which the reader then uses to extract or generate answers. However, the performance of OpenQA systems is often hindered by issues such as short and semantically ambiguous queries, making it challenging for the retriever to find relevant passages quickly. This paper introduces Hybrid Text Generation-Based Query Expansion (HTGQE), an effective method to improve retrieval efficiency. HTGQE combines large language models with Pseudo-Relevance Feedback techniques to enhance the input for generative models, improving text generation speed and quality. Building on this foundation, HTGQE employs multiple query expansion generators, each trained to provide query expansion contexts from distinct perspectives. This enables the retriever to explore relevant passages from various angles for complementary retrieval results. As a result, under an extractive and generative QA setup, HTGQE achieves promising results on both Natural Questions (NQ) and TriviaQA (Trivia) datasets for passage retrieval and reading tasks. Full article

47 pages, 1770 KiB  
Review
A Review on Deep-Learning-Based Cyberbullying Detection
by Md. Tarek Hasan, Md. Al Emran Hossain, Md. Saddam Hossain Mukta, Arifa Akter, Mohiuddin Ahmed and Salekul Islam
Future Internet 2023, 15(5), 179; https://doi.org/10.3390/fi15050179 - 11 May 2023
Cited by 20 | Viewed by 16668
Abstract
Bullying is described as an undesirable behavior by others that harms an individual physically, mentally, or socially. Cyberbullying is a virtual form (e.g., textual or image) of bullying or harassment, also known as online bullying. Cyberbullying detection is a pressing need in today’s world, as the prevalence of cyberbullying is continually growing, resulting in mental health issues. Conventional machine learning models were previously used to identify cyberbullying. However, current research demonstrates that deep learning surpasses traditional machine learning algorithms in identifying cyberbullying for several reasons, including handling extensive data, efficiently classifying text and images, extracting features automatically through hidden layers, and many others. This paper reviews the existing surveys and identifies the gaps in those studies. We also present a deep-learning-based defense ecosystem for cyberbullying detection, including data representation techniques and different deep-learning-based models and frameworks. We have critically analyzed the existing DL-based cyberbullying detection techniques and identified their significant contributions and the future research directions they have presented. We have also summarized the datasets being used, including the DL architecture being used and the tasks that are accomplished for each dataset. Finally, several challenges faced by the existing researchers and the open issues to be addressed in the future have been presented. Full article

28 pages, 889 KiB  
Article
Survey of Distributed and Decentralized IoT Securities: Approaches Using Deep Learning and Blockchain Technology
by Ayodeji Falayi, Qianlong Wang, Weixian Liao and Wei Yu
Future Internet 2023, 15(5), 178; https://doi.org/10.3390/fi15050178 - 11 May 2023
Cited by 18 | Viewed by 3653
Abstract
The Internet of Things (IoT) continues to attract attention in the context of computational resource growth. Various disciplines and fields have begun to employ IoT integration technologies in order to enable smart applications. The main difficulty in supporting industrial development in this scenario involves potential risks or malicious activities occurring in the network. However, there are tensions that are difficult to overcome at this stage in the development of IoT technology. In this situation, the future of security architecture development will involve enabling automatic and smart protection systems. Due to the vulnerability of current IoT devices, it is insufficient to ensure system security by implementing only traditional security tools such as encryption and access control. Deep learning and blockchain technology have now become crucial, as they provide distinct and secure approaches to IoT network security. The aim of this survey paper is to elaborate on the application of deep learning and blockchain technology in the IoT to ensure secure utility. We first provide an introduction to the IoT, deep learning, and blockchain technology, as well as a discussion of their respective security features. We then outline the main obstacles and problems of trusted IoT and how blockchain and deep learning may be able to help. Next, we present the future challenges in integrating deep learning and blockchain technology into the IoT. Finally, as a demonstration of the value of blockchain in establishing trust, we provide a comparison between conventional trust management methods and those based on blockchain. Full article
(This article belongs to the Special Issue Securing Big Data Analytics for Cyber-Physical Systems)

17 pages, 2434 KiB  
Article
Blockchain Solution for Buildings’ Multi-Energy Flexibility Trading Using Multi-Token Standards
by Oana Marin, Tudor Cioara and Ionut Anghel
Future Internet 2023, 15(5), 177; https://doi.org/10.3390/fi15050177 - 10 May 2023
Cited by 9 | Viewed by 2853
Abstract
Buildings can become a significant contributor to an energy system’s resilience if they are operated in a coordinated manner to exploit their flexibility in multi-carrier energy networks. However, research and innovation activities are focused on single-carrier optimization (i.e., electricity), aiming to achieve Zero Energy Buildings, and miss the significant flexibility that buildings may offer through multi-energy coupling. In this paper, we propose to use blockchain technology and ERC-1155 tokens to digitize the heat and electrical energy flexibility of buildings, transforming them into active flexibility assets within integrated multi-energy grids, allowing them to trade both heat and electricity within community-level marketplaces. The solution increases the level of interoperability and integration of the buildings with community multi-energy grids and brings advantages from a transactive perspective. It permits digitizing multi-carrier energy using the same token and a single transaction to transfer both types of energy, processing transaction batches between the sender and receiver addresses, and holding both fungible and non-fungible tokens in smart contracts to support energy markets’ financial payments and energy transactions’ settlement. The results show the potential of our solution to support buildings in trading heat and electricity flexibility in the same market session, increasing their interoperability with energy markets while decreasing the transactional overhead and gas consumption. Full article
(This article belongs to the Special Issue Artificial Intelligence and Blockchain Technology for Smart Cities)

14 pages, 7742 KiB  
Review
Securing UAV Flying Base Station for Mobile Networking: A Review
by Sang-Yoon Chang, Kyungmin Park, Jonghyun Kim and Jinoh Kim
Future Internet 2023, 15(5), 176; https://doi.org/10.3390/fi15050176 - 9 May 2023
Cited by 11 | Viewed by 2726
Abstract
A flying base station based on an unmanned aerial vehicle (UAV) uses its mobility to extend its connectivity coverage and improve its communication channel quality to achieve better communication rate and latency performance. While UAV flying base stations have been used in emergency events in 5G networking (sporadic and temporary), their use will significantly increase in 6G networking, as 6G expects reliable connectivity even in rural regions and requires high-performance communication channels and line-of-sight channels for millimeter wave (mmWave) communications. Securing the integrity and availability of the base station operations is critical because of the users’ increasing reliance on the connectivity provided by the base stations, e.g., the mobile user loses connectivity if the base station operation is disrupted. This paper identifies the security issues and research gaps of flying base stations, focusing on their unique properties, while building on the existing research in wireless communications for stationary ground base stations and embedded control for UAV drones. More specifically, the flying base station’s user-dependent positioning, its battery-constrained power, and the dynamic and distributed operations cause vulnerabilities that are distinct from those in 5G and previous-generation mobile networking with stationary ground base stations. This paper reviews the relevant security research from the perspectives of communications (mobile computing, 5G networking, and distributed computing) and embedded/control systems (UAV vehicular positioning and battery control) and then identifies the security gaps and new issues emerging for flying base stations. Through this review paper, we inform readers of flying base station research, development, and standardization for future mobile and 6G networking. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)

32 pages, 936 KiB  
Review
5G-MEC Testbeds for V2X Applications
by Prachi V. Wadatkar, Rosario G. Garroppo and Gianfranco Nencioni
Future Internet 2023, 15(5), 175; https://doi.org/10.3390/fi15050175 - 9 May 2023
Cited by 8 | Viewed by 4804
Abstract
Fifth-generation (5G) mobile networks fulfill the demands of critical applications, such as Ultra-Reliable Low-Latency Communication (URLLC), particularly in the automotive industry. Vehicular communication requires low latency and high computational capabilities at the network’s edge. To meet these requirements, ETSI standardized Multi-access Edge Computing (MEC), which provides cloud computing capabilities and addresses the need for low latency. This paper presents a generalized overview for implementing a 5G-MEC testbed for Vehicle-to-Everything (V2X) applications, as well as the analysis of some important testbeds and state-of-the-art implementations based on their deployment scenario, 5G use cases, and open source accessibility. The complexity of using the testbeds is also discussed, and the challenges researchers may face while replicating and deploying them are highlighted. Finally, the paper summarizes the tools used to build the testbeds and addresses open issues related to implementing the testbeds. Full article

18 pages, 3275 KiB  
Article
Predicting Football Team Performance with Explainable AI: Leveraging SHAP to Identify Key Team-Level Performance Metrics
by Serafeim Moustakidis, Spyridon Plakias, Christos Kokkotis, Themistoklis Tsatalas and Dimitrios Tsaopoulos
Future Internet 2023, 15(5), 174; https://doi.org/10.3390/fi15050174 - 5 May 2023
Cited by 13 | Viewed by 4803
Abstract
Understanding the performance indicators that contribute to the final score of a football match is crucial for directing the training process towards specific goals. This paper presents a pipeline for identifying key team-level performance variables in football using explainable ML techniques. The input data includes various team-specific features such as ball possession and pass behaviors, with the target output being the average scoring performance of each team over a season. The pipeline includes data preprocessing, sequential forward feature selection, model training, prediction, and explainability using SHapley Additive exPlanations (SHAP). Results show that 14 variables have the greatest contribution to the outcome of a match, with 12 having a positive effect and 2 having a negative effect. The study also identified the importance of certain performance indicators, such as shots, chances, passing, and ball possession, to the final score. This pipeline provides valuable insights for coaches and sports analysts to understand which aspects of a team’s performance need improvement and enable targeted interventions to improve performance. The use of explainable ML techniques allows for a deeper understanding of the factors contributing to the predicted average team score performance. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
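A minimal version of the explainability step described above could look as follows, assuming the scikit-learn and shap packages are available; the synthetic features and target are placeholders standing in for the team-level match statistics used in the study.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["possession", "passes", "shots", "chances"]   # illustrative only
X = rng.normal(size=(300, len(feature_names)))
# Hypothetical target: average scoring performance, driven mostly by shots and chances
y = 0.8 * X[:, 2] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=300)

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a global importance ranking
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```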

14 pages, 2477 KiB  
Article
Blockchain-Enabled NextGen Service Architecture for Mobile Internet Offload
by Raman Singh, Zeeshan Pervez and Hitesh Tewari
Future Internet 2023, 15(5), 173; https://doi.org/10.3390/fi15050173 - 5 May 2023
Cited by 1 | Viewed by 2169
Abstract
The amalgamation of heterogeneous generations of mobile cellular networks around the globe has resulted in diverse data speed experiences for end users. At present, there are no defined mechanisms in place for subscribers of a mobile network operator (MNO) to use the services of third-party WiFi providers. MNOs also have no standardized procedures to securely interact with each other, and allow their subscribers to use third-party services on a pay-as-you-go basis. This paper proposes a blockchain-enabled offloading framework that allows a subscriber of a mobile operator to temporarily use another MNO or WiFi provider’s higher-speed network. A smart contract is employed to allow diverse entities, such as MNOs, brokers and WiFi providers, to automatically execute mutual agreements, to enable the utilization of third-party infrastructure in a secure and controlled manner. The proposed framework is tested using Ethereum’s testnet on the Goerli network using Alchemy and Hardhat. The analysis of the results obtained shows that the proposed technique helps mobile operators to offer improved user experience in the form of average speed and latency. The experiments show that the average time taken to deliver a 500 MB file is reduced from 10.23 s to 0.91 s for the global average scenario, from 6.09 s to 0.50 s for 5G, from 13.50 s to 0.50 s for 4G-LTE, from 41.11 s to 0.49 s for 4G, and from 339.11 s to 0.49 s for the 3G scenario. The results also show that, with WiFi offloading, users from all cellular generations can enjoy a similar quality of services, because delivery time ranges from 0.49 s to 0.91 s for offloaded experiments whereas for the non-offloaded scenario it ranges from 6.09 s to 339.11 s. Full article

13 pages, 511 KiB  
Article
System Performance Analysis of Sensor Networks for RF Energy Harvesting and Information Transmission
by Kuncheng Lei and Zhenrong Zhang
Future Internet 2023, 15(5), 172; https://doi.org/10.3390/fi15050172 - 30 Apr 2023
Viewed by 1488
Abstract
This paper investigates the problem of RF energy harvesting in wireless sensor networks, with the aim of finding a suitable communication protocol by comparing the performance of the system under different protocols. The network is made up of two parts: first, at the beginning of each timeslot, the sensor nodes harvest energy from the base station (BS) and then send packets to the BS using the harvested energy. For the energy-harvesting part of the wireless sensor network, we consider two methods: point-to-point and multi-point-to-point energy harvesting. For each method, we use two independent control protocols, namely head harvesting energy of each timeslot (HHT) and head harvesting energy of dedicated timeslot (HDT). Additionally, for complex channel states, we derive the cumulative distribution function (CDF) of packet transmission time using selective combining (SC) and maximum ratio combining (MRC) techniques. Analytical expressions for system reliability and packet timeout probability are obtained. At the same time, we also utilize the Monte Carlo simulation method to simulate our system and have analyzed both the numerical and simulation solutions. Results show that the performance of the HHT protocol is better than that of the HDT protocol, and the MRC technology outperforms the SC technology for the HHT protocol in terms of the energy-harvesting efficiency coefficient, sensor positions, transmit signal-to-noise ratio (SNR), and length of energy harvesting time. Full article
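The SC-versus-MRC comparison in the abstract comes down to how branch SNRs are combined: selective combining keeps only the strongest branch, while maximum ratio combining sums them all. The Monte Carlo sketch below is a generic Rayleigh-fading illustration of that difference, not the paper's HHT/HDT system model; the branch count, mean SNR, and threshold are assumed values.

```python
import numpy as np

def outage_probability(n_branches, mean_snr, threshold_snr, trials=200_000, seed=0):
    """Estimate P(combined SNR < threshold) for SC and MRC over Rayleigh fading,
    where each branch SNR is exponentially distributed with the given mean."""
    rng = np.random.default_rng(seed)
    snr = rng.exponential(mean_snr, size=(trials, n_branches))
    sc = snr.max(axis=1)        # selective combining: best branch only
    mrc = snr.sum(axis=1)       # maximum ratio combining: coherent sum of branches
    return (sc < threshold_snr).mean(), (mrc < threshold_snr).mean()

p_sc, p_mrc = outage_probability(n_branches=3, mean_snr=10.0, threshold_snr=5.0)
print(f"SC outage:  {p_sc:.4f}")
print(f"MRC outage: {p_mrc:.4f}   (MRC should be lower)")
```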

17 pages, 7028 KiB  
Article
A Distributed Sensor System Based on Cloud-Edge-End Network for Industrial Internet of Things
by Mian Wang, Cong’an Xu, Yun Lin, Zhiyi Lu, Jinlong Sun and Guan Gui
Future Internet 2023, 15(5), 171; https://doi.org/10.3390/fi15050171 - 30 Apr 2023
Cited by 7 | Viewed by 3455
Abstract
The Industrial Internet of Things (IIoT) refers to the application of the IoT in the industrial field. The development of fifth-generation (5G) communication technology has accelerated the world’s entry into the era of the industrial revolution and has also promoted the overall optimization of the IIoT. In the IIoT environment, challenges such as complex operating conditions and diverse data transmission have become increasingly prominent. Therefore, studying how to collect and process a large amount of real-time data from various devices in a timely, efficient, and reasonable manner is a significant problem. To address these issues, we propose a three-level networking model based on distributed sensor self-networking and cloud server platforms. This model can collect monitoring data in a variety of industrial scenarios that require data collection. It enables the processing and storage of key information in a timely manner, reduces data transmission and storage costs, and improves data transmission reliability and efficiency. Additionally, we have designed a feature fusion network to further enhance the amount of feature information and improve the accuracy of industrial data recognition. The system also includes data preprocessing and data visualization capabilities. Finally, we discuss how to further preprocess and visualize the collected dataset and provide a specific algorithm analysis process using a large manipulator dataset as an example. Full article

15 pages, 591 KiB  
Article
Toward an SDN-Based Web Application Firewall: Defending against SQL Injection Attacks
by Fahad M. Alotaibi and Vassilios G. Vassilakis
Future Internet 2023, 15(5), 170; https://doi.org/10.3390/fi15050170 - 29 Apr 2023
Cited by 8 | Viewed by 3726
Abstract
Web attacks pose a significant threat to enterprises, as attackers often target web applications first. Various solutions have been proposed to mitigate and reduce the severity of these threats, such as web application firewalls (WAFs). On the other hand, software-defined networking (SDN) technology has significantly improved network management and operation by providing centralized control for network administrators. In this work, we investigated the possibility of using SDN to implement a firewall capable of detecting and blocking web attacks. As a proof of concept, we designed and implemented a WAF to detect a known web attack, specifically SQL injection. Our design utilized two detection methods: signatures and regular expressions. The experimental results demonstrate that the SDN controller can successfully function as a WAF and detect SQL injection attacks. Furthermore, we implemented and compared ModSecurity, a traditional WAF, with our proposed SDN-based WAF. The results reveal that our system is more efficient in terms of TCP ACK latency, while ModSecurity exhibits a slightly lower overhead on the controller. Full article
(This article belongs to the Special Issue The Multidimensional Network)
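The two detection methods mentioned in the abstract, signatures and regular expressions, can be prototyped independently of any SDN controller. The snippet below is an illustrative Python rule set, not the rule set used in the paper, showing how a decoded HTTP parameter might be screened before a flow is permitted.

```python
import re
from urllib.parse import unquote_plus

# Illustrative SQL injection signatures (far from exhaustive)
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),                # UNION-based extraction
    re.compile(r"(?i)\bor\b\s+\d+\s*=\s*\d+"),               # tautologies such as OR 1=1
    re.compile(r"(?i)['\"]\s*or\s*['\"]?\w+['\"]?\s*=\s*"),  # quote-breaking tautology
    re.compile(r"(?i)(--|#|/\*)"),                           # SQL comment tokens
    re.compile(r"(?i);\s*(drop|insert|update|delete)\b"),    # stacked queries
]

def is_sql_injection(raw_value: str) -> bool:
    """Decode the parameter once and test it against every signature."""
    value = unquote_plus(raw_value)
    return any(p.search(value) for p in SQLI_PATTERNS)

print(is_sql_injection("id=1%27%20OR%20%271%27%3D%271"))  # True: ' OR '1'='1
print(is_sql_injection("id=42"))                           # False: benign lookup
```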

17 pages, 456 KiB  
Article
Benchmarking Change Detector Algorithms from Different Concept Drift Perspectives
by Guilherme Yukio Sakurai, Jessica Fernandes Lopes, Bruno Bogaz Zarpelão and Sylvio Barbon Junior
Future Internet 2023, 15(5), 169; https://doi.org/10.3390/fi15050169 - 29 Apr 2023
Cited by 3 | Viewed by 2092
Abstract
The stream mining paradigm has become increasingly popular due to the vast number of algorithms and methodologies it provides to address the current challenges of Internet of Things (IoT) and modern machine learning systems. Change detection algorithms, which focus on identifying drifts in the data distribution during the operation of a machine learning solution, are a crucial aspect of this paradigm. However, selecting the best change detection method for different types of concept drift can be challenging. This work aimed to provide a benchmark for four drift detection algorithms (EDDM, DDM, HDDMW, and HDDMA) for abrupt, gradual, and incremental drift types. To shed light on the capacity and possible trade-offs involved in selecting a concept drift algorithm, we compare their detection capability, detection time, and detection delay. The experiments were carried out using synthetic datasets, where various attributes, such as stream size, the number of drifts, and drift duration, can be controlled and manipulated in our synthetic stream generator. Our results show that HDDMW provides the best trade-off among all performance indicators, demonstrating superior consistency in detecting abrupt drifts, but has suboptimal time consumption and a limited ability to detect incremental drifts. However, it outperforms the other algorithms in detection delay for both abrupt and gradual drifts, with efficient detection and detection time performance. Full article
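As a reference point for what these detectors do, here is a compact, self-contained version of the classic DDM test included in the benchmark: it tracks the streaming error rate, remembers the minimum of rate plus standard deviation, and signals a warning or a drift when the current value rises a few standard deviations above that minimum. This is a simplified sketch with assumed parameters, not the benchmark harness used in the paper.

```python
import math
import random

class SimpleDDM:
    """Drift Detection Method: monitor the error rate p and its std s,
    remember the minimum of p + s, and signal when p + s grows past it."""
    def __init__(self, warning_level=2.0, drift_level=3.0):
        self.warning_level, self.drift_level = warning_level, drift_level
        self.reset()

    def reset(self):
        self.n, self.p, self.min_p_plus_s = 0, 1.0, float("inf")
        self.min_p, self.min_s = None, None

    def update(self, error):            # error: 1 if the model misclassified, else 0
        self.n += 1
        self.p += (error - self.p) / self.n          # incremental error-rate estimate
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.n > 30 and self.p + s < self.min_p_plus_s:
            self.min_p_plus_s, self.min_p, self.min_s = self.p + s, self.p, s
        if self.min_p is None:
            return "stable"
        if self.p + s > self.min_p + self.drift_level * self.min_s:
            self.reset()
            return "drift"
        if self.p + s > self.min_p + self.warning_level * self.min_s:
            return "warning"
        return "stable"

# Abrupt drift: the error rate jumps from 10% to 45% at sample 1000
random.seed(0)
detector = SimpleDDM()
for t in range(2000):
    err = int(random.random() < (0.10 if t < 1000 else 0.45))
    if detector.update(err) == "drift":
        print("drift detected at sample", t)
        break
```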

20 pages, 5900 KiB  
Article
Mobile Application for Real-Time Food Plan Management for Alzheimer Patients through Design-Based Research
by Rui P. Duarte, Carlos A. S. Cunha and Valter N. N. Alves
Future Internet 2023, 15(5), 168; https://doi.org/10.3390/fi15050168 - 29 Apr 2023
Viewed by 2385
Abstract
Alzheimer’s disease is a type of dementia that affects many individuals, mainly in an older age group. Over time, it leads to other diseases that affect their autonomy and independence. The quality of food ingestion is a way to mitigate the disease and preserve the patient’s well-being, which substantially impacts their health. Many existing applications for food plan management focus on the prescription of food plans but do not provide feedback to the nutritionist on the real amount of ingested calories. This makes these applications inadequate for these diseases, where monitoring and control are most important. This paper proposes the design and development of a mobile application to monitor and control the food plans of Alzheimer’s patients, aimed at informal caregivers and their patients. It allows realistic visualization of the food plans and lets users adjust their consumption and register extra meals and water intake. The interface design process comprises a two-level approach: the user-centered design methodology that accounts for users’ needs and requirements, and the user experience questionnaire to measure user satisfaction. The results show that the interface is intuitive, visually appealing, and easy to use, adjusted for users that require a particular level of understanding regarding specific subjects. Full article
(This article belongs to the Special Issue Mobile Health Technology)

17 pages, 2054 KiB  
Article
NDN-BDA: A Blockchain-Based Decentralized Data Authentication Mechanism for Vehicular Named Data Networking
by Ahmed Benmoussa, Chaker Abdelaziz Kerrache, Carlos T. Calafate and Nasreddine Lagraa
Future Internet 2023, 15(5), 167; https://doi.org/10.3390/fi15050167 - 29 Apr 2023
Cited by 4 | Viewed by 2472
Abstract
Named Data Networking (NDN) is an implementation of Information-Centric Networking (ICN) that has emerged as a promising candidate for the Future Internet Architecture (FIA). In contrast to traditional networking protocols, NDN’s focus is on content, rather than the source of the content. NDN enables name-based routing and location-independent data retrieval, which gives NDN the ability to support the highly dynamic nature of mobile networks. Among other important features, NDN integrates security mechanisms and prioritizes protecting content over communication channels through cryptographic signatures. However, the data verification process that NDN employs may cause significant delays, especially in mobile networks and vehicular networks. This aspect makes it unsuitable for time-critical and sensitive applications such as the sharing of safety messages. Therefore, in this work, we propose NDN-BDA, a blockchain-based decentralized mechanism that provides a faster and more efficient data authenticity mechanism for NDN-based vehicular networks. Full article
(This article belongs to the Special Issue Recent Advances in Information-Centric Networks (ICNs))

28 pages, 2832 KiB  
Review
Future Internet Architectures on an Emerging Scale—A Systematic Review
by Sarfaraz Ahmed Mohammed and Anca L. Ralescu
Future Internet 2023, 15(5), 166; https://doi.org/10.3390/fi15050166 - 29 Apr 2023
Cited by 5 | Viewed by 7351
Abstract
Future Internet is a general term that is used to refer to the study of new Internet architectures that emphasize the advancements that are paving the way for the next generation of the internet. Today’s internet has become more complicated and arduous to manage due to its increased traffic. This traffic is a result of the transfer of 247 billion emails, the management of more than a billion websites and 735 active top-level domains, the viewing of at least one billion YouTube videos per day (which is the main source of traffic), and the uploading of more than 2.5 billion photos to Facebook every year. The internet was never anticipated to provide quality of service (QoS) support, but one can have a best-effort service that provides support for video streams and downloaded media applications. Therefore, the future architecture of the internet becomes crucial. Furthermore, the internet as a service has witnessed many evolving conflicts among its stakeholders, leading to extensive research. This article presents a systematic review of the internet’s evolution and discusses the ongoing research efforts towards new internet architectures, as well as the challenges that are faced in increasing the network’s performance and quality. Moreover, as part of these anticipated future developments, this article draws attention to the Metaverse, which combines the emerging areas of augmented reality, virtual reality, mixed reality, and extended reality, and is considered to be the next frontier for the future internet. This article examines the key role of the blockchain in organizing and advancing the applications and services within the Metaverse. It also discusses the potential benefits and challenges of future internet research. Finally, the article outlines certain directions for future internet research, particularly in the context of utilizing blockchains in the Metaverse. Full article
(This article belongs to the Collection Featured Reviews of Future Internet Research)

10 pages, 651 KiB  
Editorial
Theory and Applications of Web 3.0 in the Media Sector
by Charalampos A. Dimoulas and Andreas Veglis
Future Internet 2023, 15(5), 165; https://doi.org/10.3390/fi15050165 - 28 Apr 2023
Cited by 1 | Viewed by 2228
Abstract
We live in a digital era, with vast technological advancements, which, among others, have a major impact on the media domain. More specifically, progress in the last two decades led to the end-to-end digitalization of the media industry, resulting in a rapidly evolving media landscape. In addition to news digitization, User-Generated Content (UGC) is dominant in this new environment, also fueled by Social Media, which has become commonplace for news publishing, propagation, consumption, and interactions. However, the exponential increase in produced and distributed content, with the multiplied growth in the number of plenary individuals involved in the processes, created urgent needs and challenges that need careful treatment. Hence, intelligent processing and automation incorporated into the Semantic Web vision, also known as Web 3.0, aim at providing sophisticated data documentation, retrieval, and management solutions to meet the demands of the new digital world. Specifically, for the sensitive news and media domains, necessities are created both at the production and consumption ends, dealing with content production and validation, as well as tools empowering and engaging audiences (professionals and end users). In this direction, state-of-the-art works studying news detection, modeling, generation, recommendation, evaluation, and utilization are included in the current Special Issue, enlightening multiple contemporary journalistic practices and media perspectives. Full article
(This article belongs to the Special Issue Theory and Applications of Web 3.0 in the Media Sector)

31 pages, 2325 KiB  
Review
Online Privacy Fatigue: A Scoping Review and Research Agenda
by Karl van der Schyff, Greg Foster, Karen Renaud and Stephen Flowerday
Future Internet 2023, 15(5), 164; https://doi.org/10.3390/fi15050164 - 28 Apr 2023
Cited by 6 | Viewed by 4161
Abstract
Online users are responsible for protecting their online privacy themselves: the mantra is custodiat te (protect yourself). Even so, there is a great deal of evidence pointing to the fact that online users generally do not act to preserve the privacy of their personal information, consequently disclosing more than they ought to and unwisely divulging sensitive information. Such self-disclosure has many negative consequences, including the invasion of privacy and identity theft. This often points to a need for more knowledge and awareness but does not explain why even knowledgeable users fail to preserve their privacy. One explanation for this phenomenon may be attributed to online privacy fatigue. Given the importance of online privacy and the lack of integrative online privacy fatigue research, this scoping review aims to provide researchers with an understanding of online privacy fatigue, its antecedents and outcomes, as well as a critical analysis of the methodological approaches used. A scoping review based on the PRISMA-ScR checklist was conducted. Only empirical studies focusing on online privacy were included, with nontechnological studies being excluded. All studies had to be written in English. A search strategy encompassing six electronic databases resulted in eighteen eligible studies, and a backward search of the references resulted in an additional five publications. Of the 23 studies, the majority were quantitative (74%), with fewer than half being theory driven (48%). Privacy fatigue was mainly conceptualized as a loss of control (74% of studies). Five categories of privacy fatigue antecedents were identified: privacy risk, privacy control and management, knowledge and information, individual differences, and privacy policy characteristics. This study highlights the need for greater attention to be paid to the methodological design and theoretical underpinning of future research. Quantitative studies should carefully consider the use of CB-SEM or PLS-SEM, should aim to increase the sample size, and should improve on analytical rigor. In addition, to ensure that the field matures, future studies should be underpinned by established theoretical frameworks. This review reveals a notable absence of privacy fatigue research when modeling the influence of privacy threats and invasions and their relationship with privacy burnout, privacy resignation, and increased self-disclosure. In addition, this review provides insight into theoretical and practical research recommendations that future privacy fatigue researchers should consider going forward. Full article
(This article belongs to the Collection Information Systems Security)

15 pages, 373 KiB  
Article
Watermarking Protocols: A Short Guide for Beginners
by Franco Frattolillo
Future Internet 2023, 15(5), 163; https://doi.org/10.3390/fi15050163 - 28 Apr 2023
Cited by 2 | Viewed by 1651
Abstract
Watermarking protocols, in conjunction with digital watermarking technologies, make it possible to trace back digital copyright infringers by identifying who has legitimately purchased digital content and then illegally shared it on the Internet. Although they can act as an effective deterrent against copyright violations, their adoption in the current web context is made difficult due to unresolved usability and performance issues. This paper aims at providing researchers with the basics needed to design watermarking protocols suited to the web context. It is focused on two important aspects. The first concerns the basic requirements that make a protocol usable by both web users and content providers, whereas the second concerns the security primitives and how they have been used to implement the most relevant examples of watermarking protocols documented in the literature. In this way, researchers can rely on a quick guide to getting started in the field of watermarking protocols. Full article
(This article belongs to the Section Cybersecurity)

25 pages, 5686 KiB  
Review
The Future of the Human–Machine Interface (HMI) in Society 5.0
by Dimitris Mourtzis, John Angelopoulos and Nikos Panopoulos
Future Internet 2023, 15(5), 162; https://doi.org/10.3390/fi15050162 - 27 Apr 2023
Cited by 45 | Viewed by 16242
Abstract
The blending of human and mechanical capabilities has become a reality in the realm of Industry 4.0. Enterprises are encouraged to design frameworks capable of harnessing the power of human and technological resources to enhance the era of Artificial Intelligence (AI). Over the past decade, AI technologies have transformed the competitive landscape, particularly during the pandemic. Consequently, the international job market is shifting towards the integration of people suitably skilled in cutting-edge technologies, emphasizing the need to focus on the upcoming super-smart society known as Society 5.0. The concept of a Humachine builds on the notion that humans and machines have a common future that capitalizes on the strengths of both. Therefore, the aim of this paper is to identify the capabilities and distinguishing characteristics of both humans and machines, laying the groundwork for improving human–machine interaction (HMI). Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)
Show Figures

Figure 1

24 pages, 3987 KiB  
Article
Blockchain-Based Loyalty Management System
by André F. Santos, José Marinho and Jorge Bernardino
Future Internet 2023, 15(5), 161; https://doi.org/10.3390/fi15050161 - 27 Apr 2023
Cited by 6 | Viewed by 3902
Abstract
Loyalty platforms are designed to increase customer loyalty and thus increase consumers’ attraction to purchase. Although successful in increasing brand reach and sales, these platforms fail to meet their primary objective due to a lack of incentives and encouragement for customers to return. [...] Read more.
Loyalty platforms are designed to increase customer loyalty and thus increase consumers’ attraction to purchase. Although successful in increasing brand reach and sales, these platforms fail to meet their primary objective due to a lack of incentives and encouragement for customers to return. Along with the problem of generating sales, they bring excessive costs to brands due to the maintenance and infrastructure required to operate these systems. In that sense, recent blockchain technology can help to overcome some of these problems, providing capabilities such as smart contracts, which have the potential to reinvent the way loyalty systems work. Although blockchain is a relatively new technology, some brands are already investigating its usefulness and rebuilding their loyalty systems. However, these platforms are independent and linked directly to a brand. Thus, there is a need for a generic platform capable of creating and managing different loyalty programs, regardless of the size of the business. This paper explores the shortcomings of current loyalty programs identified through a literature review and proposes a loyalty management system with blockchain integration that allows any retailer to create and manage their loyalty programs and have customers interact directly with multiple programs in a single application. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT II)
Show Figures

Figure 1
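To make the smart-contract idea above concrete, here is a minimal Python sketch of the ledger logic a loyalty-point contract might enforce: points are issued only by the program owner and redeemed against a per-customer balance. It mirrors the behaviour of a simple token contract and is not the paper's implementation; consensus, signatures, and on-chain deployment are deliberately omitted.

```python
# Illustrative ledger logic for a loyalty program, loosely mirroring what a
# smart contract would enforce on-chain (issuance restricted to the owner,
# redemption limited by the customer's balance).
from dataclasses import dataclass, field

@dataclass
class LoyaltyProgram:
    owner: str                                    # retailer that created the program
    balances: dict[str, int] = field(default_factory=dict)

    def issue(self, caller: str, customer: str, points: int) -> None:
        if caller != self.owner:
            raise PermissionError("only the program owner can issue points")
        self.balances[customer] = self.balances.get(customer, 0) + points

    def redeem(self, customer: str, points: int) -> None:
        if self.balances.get(customer, 0) < points:
            raise ValueError("insufficient points")
        self.balances[customer] -= points

# One platform hosting many independent programs, one customer across them.
programs = {"coffee_shop": LoyaltyProgram(owner="coffee_shop"),
            "bookstore": LoyaltyProgram(owner="bookstore")}
programs["coffee_shop"].issue("coffee_shop", "alice", 120)
programs["coffee_shop"].redeem("alice", 50)
print(programs["coffee_shop"].balances)           # {'alice': 70}
```

In a system of the kind the paper proposes, this logic would plausibly live in smart contracts, so that balances are auditable and not tied to a single brand's infrastructure.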

27 pages, 1760 KiB  
Article
Extending Learning and Collaboration in Quantum Information with Internet Support: A Future Perspective on Research Education beyond Boundaries, Limitations, and Frontiers
by Francisco Delgado
Future Internet 2023, 15(5), 160; https://doi.org/10.3390/fi15050160 - 26 Apr 2023
Cited by 1 | Viewed by 2174
Abstract
Quantum information is an emerging scientific and technological discipline attracting a growing number of professionals from various related fields. Although it can potentially serve as a valuable source of skilled labor, the Internet provides a way to disseminate information about education, opportunities, and [...] Read more.
Quantum information is an emerging scientific and technological discipline attracting a growing number of professionals from various related fields. Although it can potentially serve as a valuable source of skilled labor, the Internet provides a way to disseminate information about education, opportunities, and collaboration. In this work, we analyzed, through a blended approach, the sustained effort over 12 years to involve science and engineering students in research education and collaboration, emphasizing the role played by the Internet. Three main spaces have been promoted: workshops, research stays, and a minor, all successfully developed through distance education in 2021–2022 and involving students from various locations in Mexico and the United States. The success of these efforts was measured by research-oriented indicators, the number of participants, and their surveyed opinions. The decisive inclusion of the Internet to facilitate the blended approach has accelerated the growth of human resources and research output. During the COVID-19 pandemic, the Internet played a crucial role in the digital transformation of this research education initiative, leading to effective educational and collaborative experiences in the “New Normal”. Full article
(This article belongs to the Special Issue E-Learning and Technology Enhanced Learning II)
Show Figures

Figure 1

16 pages, 515 KiB  
Article
Domain Adaptation Speech-to-Text for Low-Resource European Portuguese Using Deep Learning
by Eduardo Medeiros, Leonel Corado, Luís Rato, Paulo Quaresma and Pedro Salgueiro
Future Internet 2023, 15(5), 159; https://doi.org/10.3390/fi15050159 - 24 Apr 2023
Cited by 2 | Viewed by 2099
Abstract
Automatic speech recognition (ASR), commonly known as speech-to-text, is the process of transcribing audio recordings into text, i.e., transforming speech into the respective sequence of words. This paper presents a deep learning ASR system optimization and evaluation for the European Portuguese language. We [...] Read more.
Automatic speech recognition (ASR), commonly known as speech-to-text, is the process of transcribing audio recordings into text, i.e., transforming speech into the respective sequence of words. This paper presents a deep learning ASR system optimization and evaluation for the European Portuguese language. We present a pipeline composed of several stages for data acquisition, analysis, pre-processing, model creation, and evaluation. A transfer learning approach is proposed in which an English-optimized model serves as the starting point, European Portuguese is the target, and a source from a different domain, a multi-variant Portuguese dataset consisting essentially of Brazilian Portuguese, contributes to the transfer process. Domain adaptation was investigated between European Portuguese and mixed (mostly Brazilian) Portuguese. The proposed optimization was evaluated using the NVIDIA NeMo framework, implementing the QuartzNet15×5 architecture based on 1D time-channel separable convolutions. Following this data-centric transfer learning approach, the model was optimized, achieving a state-of-the-art word error rate (WER) of 0.0503. Full article
(This article belongs to the Special Issue Deep Learning and Natural Language Processing II)
Show Figures

Figure 1
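The transfer-learning setup described above can be sketched with NeMo's EncDecCTCModel API: load the English QuartzNet15×5 checkpoint, swap the CTC vocabulary to Portuguese characters, and fine-tune on target-language manifests. The dataset paths, character set, and hyperparameters below are placeholders, and the paper's exact training configuration may differ.

```python
# Sketch of a QuartzNet15x5 transfer-learning setup with NVIDIA NeMo.
# Manifests are NeMo JSON-lines files ({"audio_filepath", "duration", "text"}).
import nemo.collections.asr as nemo_asr
import pytorch_lightning as pl

# 1. English-optimized starting point.
model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")

# 2. Replace the output vocabulary with Portuguese characters (accents included).
pt_vocabulary = list(" abcdefghijklmnopqrstuvwxyzáâãàçéêíóôõú")
model.change_vocabulary(new_vocabulary=pt_vocabulary)

# 3. Point the model at the target-domain data.
model.setup_training_data({
    "manifest_filepath": "train_european_portuguese.json",
    "sample_rate": 16000,
    "labels": pt_vocabulary,
    "batch_size": 16,
    "shuffle": True,
})
model.setup_validation_data({
    "manifest_filepath": "dev_european_portuguese.json",
    "sample_rate": 16000,
    "labels": pt_vocabulary,
    "batch_size": 16,
    "shuffle": False,
})

# 4. Fine-tune; validation WER is reported during training.
trainer = pl.Trainer(max_epochs=50, accelerator="gpu", devices=1)
trainer.fit(model)
```

If a two-stage adaptation is wanted (mixed Portuguese first, then European Portuguese), the same pattern applies with a second round of setup_training_data and fit.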

20 pages, 1518 KiB  
Article
Chinese Short-Text Sentiment Prediction: A Study of Progressive Prediction Techniques and Attentional Fine-Tuning
by Jinlong Wang, Dong Cui and Qiang Zhang
Future Internet 2023, 15(5), 158; https://doi.org/10.3390/fi15050158 - 23 Apr 2023
Cited by 1 | Viewed by 1813
Abstract
With sentiment prediction technology, businesses can quickly look at user reviews to find ways to improve their products and services. We present the BertBilstm Multiple Emotion Judgment (BBMEJ) model for small-sample emotion prediction tasks to solve the difficulties of short emotion identification datasets [...] Read more.
With sentiment prediction technology, businesses can quickly look at user reviews to find ways to improve their products and services. We present the BertBilstm Multiple Emotion Judgment (BBMEJ) model for small-sample emotion prediction tasks to solve the difficulties of short emotion identification datasets and the high dataset annotation costs encountered by small businesses. The BBMEJ model is suitable for many datasets. When an insufficient quantity of relevant datasets prevents the model from achieving the desired training results, the prediction accuracy of the model can be enhanced by fine-tuning it with additional datasets prior to training. Due to the number of parameters in the Bert model, fine-tuning requires a lot of data, which drives up its cost. We present the Bert Tail Attention Fine-Tuning (BTAFT) method to make fine-tuning more effective. Our experimental findings demonstrate that the BTAFT approach yields better prediction performance than fine-tuning all parameters. Our model obtains a small-sample prediction accuracy of 0.636, which is better than the ideal baseline of 0.064. The Macro-F1 (F1) scores significantly exceed those of other models. Full article
Show Figures

Figure 1
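Since the abstract above names the components but not the wiring, the following is a generic sketch of a BERT + BiLSTM sentiment classifier in which only the tail BERT encoder layers remain trainable, loosely in the spirit of tail-oriented fine-tuning. The checkpoint name, layer split, hidden sizes, and class count are assumptions, not the paper's exact BBMEJ or BTAFT configuration.

```python
# Chinese BERT encoder + BiLSTM classifier with only the last BERT encoder
# layers left trainable; everything else is frozen to reduce fine-tuning cost.
import torch
from torch import nn
from transformers import BertModel

class BertBiLstmClassifier(nn.Module):
    def __init__(self, num_classes: int = 6, trainable_tail_layers: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        for param in self.bert.parameters():           # freeze the whole encoder...
            param.requires_grad = False
        for layer in self.bert.encoder.layer[-trainable_tail_layers:]:
            for param in layer.parameters():           # ...then unfreeze the tail layers
                param.requires_grad = True
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, 128,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 128, num_classes)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(hidden)
        return self.classifier(lstm_out[:, 0, :])      # logits at the [CLS] position
```

Token IDs and attention masks produced by the transformers BertTokenizer for bert-base-chinese can be fed directly to forward, and only the tail layers, the BiLSTM, and the classifier head receive gradient updates.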

Previous Issue
Next Issue