Future Internet, Volume 13, Issue 3 (March 2021) – 27 articles

Cover Story: Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity, especially during the pandemic era. However, the best-effort nature of 802.11 networks hinders users from experiencing uninterrupted, high-quality video conferencing. This paper presents a novel methodology to predict the perceived quality of service (PQoS) of video conferencing services over Wi-Fi networks using machine learning techniques, with 92–98% accuracy across various networks. In effect, ISPs can use this methodology to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customer privacy while reducing the operational costs of monitoring and data analytics.
25 pages, 2870 KiB  
Article
Virtual Network Function Embedding under Nodal Outage Using Deep Q-Learning
by Swarna Bindu Chetty, Hamed Ahmadi, Sachin Sharma and Avishek Nag
Future Internet 2021, 13(3), 82; https://doi.org/10.3390/fi13030082 - 23 Mar 2021
Cited by 6 | Viewed by 3049
Abstract
With the emergence of various types of applications, such as delay-sensitive applications, future communication networks are expected to be increasingly complex and dynamic. Network Function Virtualization (NFV) provides the necessary support for the efficient management of such complex networks by virtualizing network functions and placing them on shared commodity servers. However, one of the critical issues in NFV is resource allocation for highly complex services; moreover, this problem is NP-hard. To solve it, our work investigates the potential of Deep Reinforcement Learning (DRL) as a swift yet accurate approach (compared to integer linear programming) for deploying Virtualized Network Functions (VNFs) under several Quality-of-Service (QoS) constraints, such as latency, memory, CPU, and failure-recovery requirements. More importantly, the failure-recovery requirements focus on the node-outage problem, where an outage can be due either to a disaster or to the unavailability of network-topology information (e.g., due to proprietary and ownership issues). For DRL, we adopt a Deep Q-Learning (DQL)-based algorithm in which a single primary network estimates both the action-value function Q and its prediction target, which causes the Q-value updates to diverge. This divergence grows with the size of the action and state spaces, making learning inconsistent and the output inaccurate. To overcome it, our work adopts the well-known approach of introducing Target Neural Networks and Experience Replay into DQL. The constructed model is simulated for two real network topologies (Netrail and BtEurope) with various node capacities (e.g., CPU cores, VNFs per core), link capacities (e.g., bandwidth and latency), several VNF Forwarding Graph (VNF-FG) complexities, and degrees of nodal outage from 0% to 50%. We conclude that, as network density, nodal capacity, or VNF-FG complexity increases, the model requires considerably more computation time to produce the desired results. Moreover, as VNF-FG complexity rises, resources are depleted much faster. In terms of nodal outage, our model provided a Service Acceptance Rate (SAR) of almost 70–90% even with a 50% nodal outage for certain combinations of scenarios. Full article
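The two stabilization mechanisms the abstract names, target networks and experience replay, can be sketched in a few lines. This is an illustrative Python toy (tabular rather than neural, with made-up hyperparameters), not the authors' implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state) transitions."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        # Sampling past transitions breaks the correlation of consecutive steps.
        return random.sample(self.buf, min(batch_size, len(self.buf)))

class TinyDQN:
    """Tabular Q-learning with a periodically synced target table.

    A stand-in for the paper's neural DQL agent: the same target-network and
    experience-replay mechanics apply, just with a lookup table instead of a
    neural network."""
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.9, sync_every=50):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.target_q = [row[:] for row in self.q]
        self.lr, self.gamma, self.sync_every = lr, gamma, sync_every
        self.steps = 0

    def update(self, batch):
        for s, a, r, s2 in batch:
            # Bootstrap from the frozen target table to damp divergence.
            td_target = r + self.gamma * max(self.target_q[s2])
            self.q[s][a] += self.lr * (td_target - self.q[s][a])
        self.steps += 1
        if self.steps % self.sync_every == 0:
            self.target_q = [row[:] for row in self.q]
```

Because the target values come from the frozen copy, the moving estimate no longer chases itself between syncs, which is exactly the divergence fix the abstract refers to.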
17 pages, 4290 KiB  
Article
Vehicular Communication Management Framework: A Flexible Hybrid Connectivity Platform for CCAM Services
by Dries Naudts, Vasilis Maglogiannis, Seilendria Hadiwardoyo, Daniel van den Akker, Simon Vanneste, Siegfried Mercelis, Peter Hellinckx, Bart Lannoo, Johann Marquez-Barja and Ingrid Moerman
Future Internet 2021, 13(3), 81; https://doi.org/10.3390/fi13030081 - 22 Mar 2021
Cited by 19 | Viewed by 3670
Abstract
In the upcoming decade and beyond, the Cooperative, Connected and Automated Mobility (CCAM) initiative will play a major role in increasing road safety, traffic efficiency, and driving comfort in Europe. While several individual vehicular wireless communication technologies exist, there is still a lack of truly flexible and modular platforms that can support the need for hybrid communication. In this paper, we propose a novel vehicular communication management framework (CAMINO), which incorporates flexible support for both short-range direct and long-range cellular technologies and offers built-in Cooperative Intelligent Transport Systems (C-ITS) services for experimental validation in real-life settings. Moreover, integration with vehicle and infrastructure sensors/actuators and external services is enabled using a Distributed Uniform Streaming (DUST) framework. The framework is implemented and evaluated in the Smart Highway test site for two targeted use cases, proving its functional operation in realistic environments. The flexibility and modular architecture of the hybrid CAMINO framework offer valuable research potential in the field of vehicular communications and CCAM services and can enable cross-technology vehicular connectivity. Full article
(This article belongs to the Special Issue Vehicular Networks and Mobility as Service)
18 pages, 5942 KiB  
Article
A Web Interface for Analyzing Hate Speech
by Lazaros Vrysis, Nikolaos Vryzas, Rigas Kotsakis, Theodora Saridou, Maria Matsiola, Andreas Veglis, Carlos Arcila-Calderón and Charalampos Dimoulas
Future Internet 2021, 13(3), 80; https://doi.org/10.3390/fi13030080 - 22 Mar 2021
Cited by 33 | Viewed by 7989
Abstract
Social media services make it possible for an increasing number of people to express their opinions publicly. In this context, large amounts of hateful comments are published daily. The PHARM project aims at monitoring and modeling hate speech against refugees and migrants in Greece, Italy, and Spain. To this end, a web interface for creating and querying a multi-source database of hate speech-related content is implemented and evaluated. The selected sources include Twitter, YouTube, and Facebook comments and posts, as well as comments and articles from a selected list of websites. The interface allows users to search the existing database, scrape social media using keywords, annotate records through a dedicated platform, and contribute new content to the database. Furthermore, functionality for hate speech detection and sentiment analysis of texts is provided, making use of novel methods and machine learning models. The interface can be accessed online through a graphical user interface compatible with modern internet browsers. To evaluate the interface, a multifactor questionnaire was formulated to record users' opinions about the web interface and its functionality. Full article
(This article belongs to the Special Issue Theory and Applications of Web 3.0 in the Media Sector)
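The multi-source record-and-keyword-search workflow described above might look like the following minimal Python sketch; the record fields and function names are hypothetical, not the PHARM project's actual schema or code:

```python
import re
from dataclasses import dataclass, field

@dataclass
class HateSpeechRecord:
    """A minimal record for a multi-source hate-speech database (illustrative fields)."""
    source: str                       # e.g. "twitter", "youtube", "web"
    text: str
    keywords: list = field(default_factory=list)
    label: str = "unlabeled"          # set later via the annotation platform

def keyword_filter(records, keywords):
    """Return records whose text contains any of the search keywords (case-insensitive)."""
    pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)
    return [r for r in records if pattern.search(r.text)]
```

A real deployment would back this with a database and per-platform scrapers; the sketch only shows the shape of the shared record and the keyword query.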
14 pages, 3246 KiB  
Article
RecPOID: POI Recommendation with Friendship Aware and Deep CNN
by Sadaf Safavi and Mehrdad Jalali
Future Internet 2021, 13(3), 79; https://doi.org/10.3390/fi13030079 - 22 Mar 2021
Cited by 19 | Viewed by 3033
Abstract
In location-based social networks (LBSNs), exploiting key features of points of interest (POIs) and users is essential for precise POI recommendation. In this work, a novel POI recommendation pipeline based on a convolutional neural network, named RecPOID, is proposed; it recommends an accurate sequence of top-k POIs and considers only the user's most similar friends rather than all of them. Fuzzy c-means clustering is used to find this similarity. The temporal and spatial features of similar friends are fed to our deep CNN model. The 10-layer convolutional neural network predicts the longitude, latitude, and ID of the next suitable locations; the locations closest in travel time to those of similar friends are then selected. The proposed structure uses six input features: the user's ID and the month, day, hour, minute, and second of each user's visit. RecPOID is evaluated on two publicly available LBSN datasets. Experimental results show that considering only the most similar friends improves recommendation accuracy, and the proposed RecPOID outperforms state-of-the-art POI recommendation approaches. Full article
(This article belongs to the Special Issue Social Networks Analysis and Mining)
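The fuzzy c-means step the abstract mentions assigns each user a soft membership in every cluster rather than a hard label. A minimal one-dimensional Python sketch of the standard algorithm (deterministic spread initialization, not the paper's multi-feature setup):

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: returns (centers, memberships).

    Illustrates the similarity-clustering idea behind RecPOID's friend
    selection; the paper clusters richer friendship features, not scalars."""
    pts = sorted(points)
    # Spread the initial centers across the data range (deterministic start).
    centers = [pts[round(i * (len(pts) - 1) / (c - 1))] for i in range(c)]
    u = [[0.0] * len(points) for _ in range(c)]
    for _ in range(iters):
        # Membership update: closer centers get higher (soft) membership.
        for j, x in enumerate(points):
            dists = [abs(x - ck) + 1e-12 for ck in centers]
            for i in range(c):
                u[i][j] = 1.0 / sum((dists[i] / d) ** (2.0 / (m - 1.0)) for d in dists)
        # Center update: membership-weighted mean of all points.
        for i in range(c):
            w = [u[i][j] ** m for j in range(len(points))]
            centers[i] = sum(wj * x for wj, x in zip(w, points)) / sum(w)
    return centers, u
```

The fuzziness exponent m > 1 controls how soft the memberships are; m close to 1 approaches hard k-means.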
13 pages, 1861 KiB  
Article
An Adaptive Throughput-First Packet Scheduling Algorithm for DPDK-Based Packet Processing Systems
by Chuanhong Li, Lei Song and Xuewen Zeng
Future Internet 2021, 13(3), 78; https://doi.org/10.3390/fi13030078 - 19 Mar 2021
Cited by 1 | Viewed by 2943
Abstract
The continuous increase in network traffic has sharply increased the demand for high-performance packet processing systems. For a high-performance packet processing system based on multi-core processors, the packet scheduling algorithm is critical because of the significant role it plays in load distribution, which in turn determines system throughput; it has therefore attracted intensive research attention. However, scheduling is not an easy task, since the canonical flow-level packet scheduling algorithm is vulnerable to traffic locality, while packet-level scheduling algorithms fail to maintain cache affinity. In this paper, we propose an adaptive throughput-first packet scheduling algorithm for DPDK-based packet processing systems. Exploiting DPDK's burst-oriented packet receiving and transmitting, we propose using the subflow as both the scheduling unit and the adjustment unit, so that the proposed algorithm retains the advantages of flow-level scheduling when no adjustment occurs, while avoiding packet loss as much as possible when the target core may be overloaded. Experimental results show that the proposed method outperforms Round-Robin, HRW (Highest Random Weight), and CRC32 in terms of system throughput and packet loss rate. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
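One of the baselines named above, HRW (rendezvous) hashing, pins each flow to the core with the highest hash weight, which preserves cache affinity. The sketch below adds a simple overload fallback to echo, very loosely, the paper's idea of adjusting only when the target core may be overloaded; it is an illustration, not the paper's algorithm:

```python
import hashlib

def hrw_core(flow_id, cores):
    """Highest-Random-Weight hashing: pick the core with the largest
    hash(flow, core) weight, so the same flow always lands on the same core."""
    def weight(core):
        h = hashlib.sha256(f"{flow_id}:{core}".encode()).hexdigest()
        return int(h, 16)
    return max(cores, key=weight)

def dispatch(flow_id, cores, load, capacity):
    """Flow-level pinning with an overload fallback (hypothetical policy):
    keep the HRW target unless its queue is full, then divert to the
    least-loaded core."""
    core = hrw_core(flow_id, cores)
    if load[core] >= capacity:
        core = min(cores, key=lambda c: load[c])
    load[core] += 1
    return core
```

Removing a core only remaps the flows that were pinned to it, which is why HRW is a common flow-level baseline.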
13 pages, 3113 KiB  
Article
The Role of Mobile Application Acceptance in Shaping E-Customer Service
by Laith T. Khrais and Abdullah M. Alghamdi
Future Internet 2021, 13(3), 77; https://doi.org/10.3390/fi13030077 - 19 Mar 2021
Cited by 20 | Viewed by 8737
Abstract
Most retailers are integrating their practices with modern technologies to enhance the effectiveness of their operations. The adoption of technology aims to enable businesses to accurately meet customer needs and expectations. This study examined the role of mobile application (app) acceptance in shaping the electronic customer experience. A mixed-methods approach was adopted, in which qualitative data were collected using interviews and quantitative data were gathered using questionnaires. The results indicate that mobile app acceptance contributes to a positive customer experience when purchasing products and services from online retailers. Mobile apps are associated with benefits such as convenience, ease of use, and access to a variety of products and services. Given the rapid development of technology, e-commerce retailers should leverage such innovations to meet customer needs. Full article
(This article belongs to the Section Techno-Social Smart Systems)
20 pages, 1218 KiB  
Article
Realistic Aspects of Simulation Models for Fake News Epidemics over Social Networks
by Quintino Francesco Lotito, Davide Zanella and Paolo Casari
Future Internet 2021, 13(3), 76; https://doi.org/10.3390/fi13030076 - 17 Mar 2021
Cited by 13 | Viewed by 5925
Abstract
The pervasiveness of online social networks has reshaped the way people access information. Online social networks make it common for users to inform themselves online and share news among their peers, but they also favor the spreading of reliable and fake news alike. Because fake news may have a profound impact on society at large, realistically simulating its spreading process helps evaluate the most effective countermeasures to adopt. It is customary to model the spreading of fake news via the same epidemic models used for common diseases; however, these models often miss concepts and dynamics that are peculiar to fake news spreading. In this paper, we fill this gap by enriching typical epidemic models for fake news spreading with network topologies and dynamics that are typical of realistic social networks. Specifically, we introduce agents with the role of influencers and bots in the model and consider the effects of dynamic network access patterns, time-varying engagement, and different degrees of trust in the sources of circulating information. These factors together make the simulations more realistic. Among other results, we show that influencers who share fake news help the spreading process reach nodes that would otherwise remain unaffected. Moreover, we emphasize that bots dramatically speed up the spreading process and that time-varying engagement and network access change the effectiveness of fake news spreading. Full article
(This article belongs to the Special Issue Social Networks Analysis and Mining)
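The bot effect described above can be illustrated with a toy SIR-style cascade on a random graph: ordinary users eventually stop sharing (recover), while bot nodes keep sharing forever. All parameters below are made up for illustration; this is not the paper's simulator:

```python
import random

def simulate_spread(n=200, p_edge=0.05, n_bots=5, p_share=0.3,
                    p_lose_interest=0.1, steps=50, seed=1):
    """Toy SIR-style fake-news cascade; returns how many nodes were ever
    'infected' (reached by the fake news). Bots start infected and never
    recover, so they keep reseeding the cascade."""
    rng = random.Random(seed)
    # Erdos-Renyi-style random graph.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                adj[i].add(j)
                adj[j].add(i)
    bots = set(range(n_bots))
    state = {i: "I" if i in bots else "S" for i in range(n)}
    ever_infected = set(bots)
    for _ in range(steps):
        newly = []
        for u in [x for x in range(n) if state[x] == "I"]:
            for v in adj[u]:
                if state[v] == "S" and rng.random() < p_share:
                    newly.append(v)
            if u not in bots and rng.random() < p_lose_interest:
                state[u] = "R"        # ordinary users eventually stop sharing
        for v in newly:
            state[v] = "I"
            ever_infected.add(v)
    return len(ever_infected)
```

Extending this toy with influencers (high-degree seeds), time-varying engagement (a step-dependent p_share), and access patterns (nodes active only on some steps) gives the kind of enriched model the abstract describes.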
14 pages, 2715 KiB  
Article
Dirichlet Process Prior for Student’s t Graph Variational Autoencoders
by Yuexuan Zhao and Jing Huang
Future Internet 2021, 13(3), 75; https://doi.org/10.3390/fi13030075 - 16 Mar 2021
Viewed by 2486
Abstract
The graph variational auto-encoder (GVAE) is a model that combines neural networks with Bayesian methods and is capable of exploring more deeply the latent features that influence graph reconstruction. However, several GVAE-based studies employ a plain prior distribution for the latent variables, for instance, the standard normal distribution (N(0,1)). Although such a simple distribution is convenient to compute, it also leaves the latent variables with relatively little useful information. This inadequate expression of the nodes inevitably affects the graph-generation process, ultimately leading to the discovery of only external relations and the neglect of some complex internal correlations. In this paper, we present a novel prior distribution for GVAE: a Dirichlet process (DP) construction for the Student's t (St) distribution. The DP allows the latent variables to adapt their complexity during learning and then cooperates with the heavy-tailed St distribution to achieve a sufficiently expressive node representation. Experimental results show that this method achieves better performance than the baselines. Full article
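The standard way to realize a Dirichlet process prior in practice is the (truncated) stick-breaking construction: draw beta_k ~ Beta(1, alpha) and set w_k = beta_k * prod_{j<k}(1 - beta_j). A small Python sketch of that construction (not the authors' model code, which places these weights over Student's t components):

```python
import random

def stick_breaking(alpha, n_atoms, seed=0):
    """Truncated stick-breaking weights for a Dirichlet process.

    Repeatedly break off a Beta(1, alpha)-sized fraction of the remaining
    stick; larger alpha spreads mass over more atoms."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        beta = rng.betavariate(1.0, alpha)
        weights.append(beta * remaining)
        remaining *= 1.0 - beta
    return weights
```

The truncation level n_atoms caps how many mixture components the latent prior can use; the DP lets the effective number adapt during learning, which is the flexibility the abstract appeals to.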
24 pages, 376 KiB  
Article
Characterization of the Digital Identity of Chilean University Students Considering Their Personal Learning Environments
by Marisol Hernández-Orellana, Adolfina Pérez-Garcias and Ángel Roco-Videla
Future Internet 2021, 13(3), 74; https://doi.org/10.3390/fi13030074 - 16 Mar 2021
Cited by 3 | Viewed by 3617
Abstract
At present, our online activity is almost constant, either producing or consuming information, in both social and academic contexts. The spaces in which people move every day, innocently divided between the face-to-face and the virtual, affect the way we communicate and perceive ourselves. In this paper, a characterization of the academic digital identity of Chilean university students is proposed, and teachers are invited to redefine learning spaces so as to integrate the technological tools that students actually use. This study was developed within the logic of pragmatism, based on a mixed methodology, a non-experimental design, and a descriptive-quantitative cross-sectional approach. A non-probabilistic sample was made up of 509 students, who participated voluntarily via an online questionnaire. Stata Version 14 was used, applying the Mann–Whitney–Wilcoxon and Kruskal–Wallis U tests. To develop the characterizations, a conglomerate analysis was performed with a hierarchical dissociative method. In general, Chilean university students are highly truthful on the Internet, drawing little distinction between face-to-face and digital interactions; they have low awareness of their digital identity and are easily recognizable on the Web. They manage their educational process by mixing analogue/face-to-face approaches with formal and informal technological tools to optimize their learning. These students manifest a hybrid academic digital identity, with no gender difference in the deployment of their PLEs, but they maintain stereotypical gender behaviors in the construction of their digital identity on the Web, showing a human-technological development similar to that of young Asians and Europeans. Full article
14 pages, 2616 KiB  
Article
Deep Model Poisoning Attack on Federated Learning
by Xingchen Zhou, Ming Xu, Yiming Wu and Ning Zheng
Future Internet 2021, 13(3), 73; https://doi.org/10.3390/fi13030073 - 14 Mar 2021
Cited by 95 | Viewed by 10530
Abstract
Federated learning is a novel distributed learning framework that enables thousands of participants to collaboratively construct a deep learning model. To protect the confidentiality of the training data, the information shared between the server and the participants is limited to the model parameters. However, this setting is vulnerable to model poisoning attacks, since the participants have permission to modify the model parameters. In this paper, we systematically investigate such threats in federated learning and propose a novel optimization-based model poisoning attack. Unlike existing methods, we focus primarily on the effectiveness, persistence, and stealth of attacks. Numerical experiments demonstrate that the proposed method not only achieves a high attack success rate but is also stealthy enough to bypass two existing defense methods. Full article
(This article belongs to the Special Issue Distributed Systems and Artificial Intelligence)
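To see why parameter sharing opens this attack surface, consider plain federated averaging with one malicious participant. The textbook scaling attack below (which assumes the attacker can estimate the benign mean) cancels the honest contributions and steers the average to a target; the paper's own attack is optimization-based and stealthier, so this is only a threat-model illustration:

```python
def fed_avg(updates):
    """Server step: average per-parameter updates from all participants."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

def poisoned_update(benign_mean, target, n_participants):
    """Craft a malicious update so that averaging with (n-1) benign updates
    lands exactly on the attacker's target parameters.

    Solves: (sum of benign + malicious) / n == target, assuming the attacker
    knows the benign mean. Purely illustrative; real attacks must also evade
    anomaly detection, which is the paper's focus."""
    return [n_participants * t - (n_participants - 1) * b
            for t, b in zip(target, benign_mean)]
```

Because the server sees only parameter vectors, nothing distinguishes this update from an honest one without an explicit defense, which is what the bypassed detection methods attempt.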
15 pages, 2495 KiB  
Article
Person Re-Identification Based on Attention Mechanism and Context Information Fusion
by Shengbo Chen, Hongchang Zhang and Zhou Lei
Future Internet 2021, 13(3), 72; https://doi.org/10.3390/fi13030072 - 13 Mar 2021
Cited by 4 | Viewed by 3675
Abstract
Person re-identification (ReID) plays a significant role in video surveillance analysis. In the real world, because of illumination, occlusion, and deformation, pedestrian feature extraction is the key to person ReID. Considering the shortcomings of existing feature extraction methods, a method based on an attention mechanism and context-information fusion is proposed. A lightweight attention module with few parameters is introduced into the ResNet50 backbone network; it enhances the salient characteristics of persons and suppresses irrelevant information. To address the loss of person context information caused by excessive network depth, a context-information fusion module is designed that samples the shallow pedestrian feature map and cascades it with the high-level feature map. To improve robustness, the model is trained with a combination of margin sample mining loss and cross-entropy loss. Experiments are carried out on the Market1501 and DukeMTMC-reID datasets; our method achieves rank-1 accuracy of 95.9% on Market1501 and 90.1% on DukeMTMC-reID, outperforming current mainstream methods that use only global features. Full article
13 pages, 2000 KiB  
Article
Transfer Learning for Multi-Premise Entailment with Relationship Processing Module
by Pin Wu, Rukang Zhu and Zhidan Lei
Future Internet 2021, 13(3), 71; https://doi.org/10.3390/fi13030071 - 13 Mar 2021
Viewed by 2451
Abstract
Using a single-premise entailment (SPE) model to accomplish the multi-premise entailment (MPE) task can alleviate the problem that the neural network cannot be effectively trained due to the lack of labeled multi-premise training data. Moreover, the abundant methods for judging the relationship between sentence pairs can also be applied to this task. However, a single-premise pre-trained model has no structure for processing multi-premise relationships, and such a structure is crucial for solving MPE problems. This paper proposes adding a multi-premise relationship processing module, without changing the structure of the pre-trained model, to compensate for this deficiency. Moreover, we propose a three-step training method built around this module, which ensures that the module focuses on the multi-premise relationship during matching, thus applying the single-premise model to multi-premise tasks. This paper also proposes a specific structure for the relationship processing module, which we call the attention-backtracking mechanism. Experiments show that this structure can fully consider the multi-premise context, and that the structure combined with three-step training achieves better accuracy on the MPE test set than other transfer methods. Full article
23 pages, 4398 KiB  
Article
Joint Offloading and Energy Harvesting Design in Multiple Time Blocks for FDMA Based Wireless Powered MEC
by Zhiyan Yu, Gaochao Xu, Yang Li, Peng Liu and Long Li
Future Internet 2021, 13(3), 70; https://doi.org/10.3390/fi13030070 - 12 Mar 2021
Cited by 5 | Viewed by 2358
Abstract
The combination of mobile edge computing (MEC) and wireless power transfer (WPT) is recognized as a promising technology for overcoming the limited battery capacities and insufficient computation capabilities of mobile devices. In wireless powered mobile edge computing, energy is transferred to users by radio frequency (RF); each user harvests the energy, stores it in its battery, and uses it to execute local computing and offloading tasks. This paper adopts the Frequency Division Multiple Access (FDMA) technique to offload tasks from multiple mobile devices to the MEC server simultaneously. Our objective is to study the multiuser dynamic joint optimization of computation and wireless resource allocation over multiple time blocks so as to maximize the residual energy. To this end, we formalize it as a nonconvex problem that jointly optimizes the number of offloaded bits, the energy harvesting time, and the transmission bandwidth. Applying convex optimization techniques together with the Karush–Kuhn–Tucker (KKT) conditions, we transform the problem into a univariate constrained convex optimization problem. To solve it, we propose a method combining bisection with sequential unconstrained minimization based on the Reformulation-Linearization Technique (RLT). Numerical results demonstrate that our joint optimization method outperforms other benchmark schemes on the residual energy maximization problem. Moreover, the algorithm maximizes the residual energy while reducing computational complexity and improving computational efficiency. Full article
(This article belongs to the Special Issue Fog and Mobile Edge Computing)
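Once a problem is reduced to a univariate convex one, bisection on the sign of the derivative finds the minimizer cheaply. A generic Python sketch (using a finite-difference slope on a placeholder objective; the paper's actual objective and its RLT relaxation are not reproduced here):

```python
def bisect_convex_min(f, lo, hi, tol=1e-8, h=1e-6):
    """Bisection for a univariate convex function on [lo, hi].

    Halve the bracket based on the sign of a central finite-difference slope:
    positive slope means the minimizer lies to the left, negative to the right."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        slope = (f(mid + h) - f(mid - h)) / (2.0 * h)
        if slope > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0
```

Each iteration halves the interval, so reaching tolerance tol takes about log2((hi - lo) / tol) function evaluations, which is the low complexity the abstract alludes to for the one-dimensional subproblem.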
17 pages, 3941 KiB  
Article
Effects of Transport Network Slicing on 5G Applications
by Yi-Bing Lin, Chien-Chao Tseng and Ming-Hung Wang
Future Internet 2021, 13(3), 69; https://doi.org/10.3390/fi13030069 - 11 Mar 2021
Cited by 18 | Viewed by 4487
Abstract
Network slicing is considered a key technology in enabling the underlying 5G mobile network infrastructure to meet diverse service requirements. In this article, we demonstrate how transport network slicing accommodates the various network service requirements of Massive IoT (MIoT), Critical IoT (CIoT), and Mobile Broadband (MBB) applications. Given that most of the research conducted previously to measure 5G network slicing is done through simulations, we utilized SimTalk, an IoT application traffic emulator, to emulate large amounts of realistic traffic patterns in order to study the effects of transport network slicing on IoT and MBB applications. Furthermore, we developed several MIoT, CIoT, and MBB applications that operate sustainably on several campuses and directed both real and emulated traffic into a Programming Protocol-Independent Packet Processors (P4)-based 5G testbed. We then examined the performance in terms of throughput, packet loss, and latency. Our study indicates that applications with different traffic characteristics need different corresponding Committed Information Rate (CIR) ratios. The CIR ratio is the CIR setting for a P4 meter in physical switch hardware over the aggregated data rate of applications of the same type. A low CIR ratio adversely affects the application’s performance because P4 switches will dispatch application packets to the low-priority queue if the packet arrival rate exceeds the CIR setting for the same type of applications. In our testbed, both exemplar MBB applications required a CIR ratio of 140% to achieve, respectively, a near 100% throughput percentage with a 0.0035% loss rate and an approximate 100% throughput percentage with a 0.0017% loss rate. However, the exemplar CIoT and MIoT applications required a CIR ratio of 120% and 100%, respectively, to reach a 100% throughput percentage without any packet loss. 
With the proper CIR settings for the P4 meters, the proposed transport network slicing mechanism can enforce the committed rates and fulfill the latency and reliability requirements of 5G MIoT, CIoT, and MBB applications over both TCP and UDP. Full article
(This article belongs to the Special Issue Intelligent 5G Network Slicing)
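The CIR behaviour described above follows token-bucket metering: packets within the committed rate are marked conforming and go to the high-priority queue, while the excess is demoted. A simplified single-rate software analogue in Python (the actual P4 meters are two-rate, three-color hardware externs, so this is only a sketch):

```python
class CirMeter:
    """Single-rate token-bucket meter.

    Tokens refill at the Committed Information Rate up to a burst cap; a
    packet that finds enough tokens is 'high' (conforming), otherwise 'low'
    (sent to the low-priority queue, mirroring the P4 behaviour above)."""
    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0          # token refill rate, bytes/second
        self.burst = burst_bytes
        self.tokens = burst_bytes          # start with a full bucket
        self.last = 0.0

    def classify(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "high"
        return "low"
```

Setting the CIR above the aggregate application rate (the ratios over 100% reported above) keeps the bucket from emptying under bursts, which is why the low-priority demotions, and the resulting loss, disappear.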
24 pages, 767 KiB  
Article
Investigating and Modeling of Cooperative Vehicle-to-Vehicle Safety Stopping Distance
by Steven Knowles Flanagan, Zuoyin Tang, Jianhua He and Irfan Yusoff
Future Internet 2021, 13(3), 68; https://doi.org/10.3390/fi13030068 - 10 Mar 2021
Cited by 11 | Viewed by 3807
Abstract
Dedicated Short-Range Communication (DSRC), or IEEE 802.11p/OCB (Outside the Context of a Basic service set), is widely considered a primary technology for Vehicle-to-Vehicle (V2V) communication, aimed at increasing the safety of road users by sharing information between vehicles. The requirements of DSRC are to maintain real-time communication with low latency and high reliability. In this paper, we investigate how communication can be used to improve stopping-distance performance, based on fieldwork results. In addition, we assess the impact of reduced reliability in terms of distance-independent, distance-dependent, and density-based consecutive packet losses. A model is developed from empirical measurement results as a function of distance, data rate, and traveling speed. With this model, we show that cooperative V2V communication can effectively reduce reaction time and increase the safety stopping distance, highlighting the importance of high reliability. The obtained results can be further used in the design of cooperative V2V-based driving and safety applications. Full article
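The underlying kinematics are simple: total stopping distance is the distance covered during the reaction time plus the braking distance v^2 / (2a), so cutting the reaction time (e.g., because a V2V warning arrives before the driver sees brake lights) saves speed x reaction-time-reduction metres. A sketch with generic assumed numbers, not the paper's measured model:

```python
def stopping_distance(speed_mps, reaction_s, decel_mps2=7.0):
    """Kinematic stopping distance: reaction travel plus braking distance.

    The 7 m/s^2 deceleration is a generic dry-road assumption, not a value
    taken from the paper's fieldwork."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)
```

For example, at 30 m/s (108 km/h), replacing a 1.5 s human reaction with a 0.1 s communication-triggered one shortens the stop by 30 x 1.4 = 42 m, which is the kind of gain cooperative V2V braking targets.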

19 pages, 22081 KiB  
Article
Implementation of IoT Framework with Data Analysis Using Deep Learning Methods for Occupancy Prediction in a Building
by Eric Hitimana, Gaurav Bajpai, Richard Musabe, Louis Sibomana and Jayavel Kayalvizhi
Future Internet 2021, 13(3), 67; https://doi.org/10.3390/fi13030067 - 9 Mar 2021
Cited by 31 | Viewed by 4848
Abstract
Many countries worldwide face challenges in enforcing fire-prevention measures in buildings. The most critical issues are the localization, identification, and detection of room occupants. The Internet of Things (IoT), combined with machine learning, has been shown to increase the smartness of buildings by providing real-time data acquisition from sensors and actuators for prediction mechanisms. This paper proposes the implementation of an IoT framework that captures indoor environmental parameters as multivariate time-series data for occupancy prediction. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment is conducted in an office room using the multivariate time series as predictors in a regression forecasting problem. The results demonstrate that the developed system can obtain, process, and store environmental information. The collected information was fed to the LSTM algorithm and compared with other machine learning algorithms: the Support Vector Machine, the Naïve Bayes Network, and the Multilayer Perceptron Feed-Forward Network. The outcomes, based on parametric calibrations, demonstrate that the LSTM performs better in the context of the proposed application. Full article
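As a framework-agnostic illustration of this setup, the preprocessing step of turning multivariate sensor readings into supervised (window, target) pairs for an LSTM-style forecaster can be sketched as follows; the feature layout and values are hypothetical, not the paper's dataset:

```python
def make_windows(series, window, horizon=1):
    """Turn a multivariate time series (a list of feature vectors) into
    (input-window, target) pairs for supervised occupancy forecasting.
    The target is the last feature (occupancy) `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])                  # past `window` readings
        y.append(series[i + window + horizon - 1][-1])  # future occupancy value
    return X, y

# Toy readings: [temperature_C, co2_ppm, occupancy]
readings = [[21.0, 400, 0], [21.2, 420, 0], [22.0, 600, 1],
            [22.4, 650, 1], [22.1, 520, 0], [21.8, 480, 0]]
X, y = make_windows(readings, window=3)
```

Each `X[i]` is a block of three consecutive readings and `y[i]` the occupancy one step later; a deep-learning framework would consume such windows as its (samples, timesteps, features) input.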
(This article belongs to the Section Internet of Things)

12 pages, 654 KiB  
Article
Data Protection Impact Assessment (DPIA) for Cloud-Based Health Organizations
by Dimitra Georgiou and Costas Lambrinoudakis
Future Internet 2021, 13(3), 66; https://doi.org/10.3390/fi13030066 - 7 Mar 2021
Cited by 6 | Viewed by 4254
Abstract
The General Data Protection Regulation (GDPR) harmonizes personal data protection laws across the European Union, affecting all sectors, including the healthcare industry. For processing operations that pose a high risk to data subjects, a Data Protection Impact Assessment (DPIA) has been mandatory since May 2018. Taking into account the criticality of the process and the importance of its results for the protection of patients' health data, as well as the complexity involved and the lack of past experience in applying such methodologies in healthcare environments, this paper presents the main steps of a DPIA study and provides guidelines on how to carry them out effectively. In this respect, the Privacy Impact Assessment, Commission Nationale de l'Informatique et des Libertés (PIA-CNIL) methodology has been employed, which is also compliant with the privacy impact assessment tasks described in ISO/IEC 29134:2017. The work presented in this paper focuses on the first two steps of the DPIA methodology, and more specifically on the identification of the purposes of processing and of the data categories involved in each of them, as well as on the evaluation of the organization's GDPR compliance level and of the gaps (gap analysis) that must be filled in. The main contribution of this work is the identification of the main organizational and legal requirements that must be fulfilled by the healthcare organization. This research sets the legal grounds for data processing according to the GDPR and is highly relevant to any processing of personal data, as it helps to structure the process and raises awareness of data protection issues and the relevant legislation. Full article
(This article belongs to the Special Issue Security and Privacy in Social Networks and Solutions)

21 pages, 23980 KiB  
Article
The Effect of Thickness-Based Dynamic Matching Mechanism on a Hyperledger Fabric-Based TimeBank System
by Jhan-Jia Lin, Yu-Tse Lee and Ja-Ling Wu
Future Internet 2021, 13(3), 65; https://doi.org/10.3390/fi13030065 - 6 Mar 2021
Cited by 4 | Viewed by 2666
Abstract
In a community with an aging population, mutual assistance is an essential social function. The lack of mutual trust puts the need for a fair and transparent service-exchange platform at the top of the public service administration's list. In this work, we present an efficient blockchain-based TimeBank realization with a newly proposed dynamic service matching algorithm (DSMA). Hyperledger Fabric (or Fabric for short), one of the well-known consortium blockchains, is chosen as our system realization platform; it provides an identity-certification mechanism and has an extendable network structure. The performance of a DSMA is measured by the waiting time for a service to get a match, called the service-matching waiting time (SMWT). In our DSMA, the decision as to whether a service is matched immediately or waits for a later chance depends dynamically on the total number of contemporaneously available services (i.e., the thickness of the service market). To improve the proposed TimeBank system's service quality, a Dynamic Tuning Strategy (DTS) is designed to thicken the market. Experimental results show that a thicker market gives on-chain nodes more links, which in turn lets them find a match more easily (i.e., experience a shorter SMWT). Full article
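The core decision in a thickness-based matcher, whether to match a request now or hold it until the market is thick enough, can be caricatured with a toy queue. The threshold logic below is purely illustrative, not the paper's DSMA:

```python
def run_matching(arrivals, thickness_threshold):
    """Toy dynamic matching: a pending service request is matched only
    once the pool of available services (the market thickness) reaches
    the threshold. Returns the waiting time of each matched request,
    in arrival ticks."""
    pool, waits = [], []
    for tick, service in enumerate(arrivals):
        pool.append((tick, service))
        if len(pool) >= thickness_threshold:
            # Match the oldest pending request against the current pool.
            arrived, _ = pool.pop(0)
            waits.append(tick - arrived)
    return waits

waits_strict = run_matching(["a", "b", "c", "d"], thickness_threshold=4)
waits_loose = run_matching(["a", "b", "c", "d"], thickness_threshold=2)
```

The two runs expose the trade-off the paper tunes: a high threshold keeps the market thick (more candidates per match) at the cost of longer waits, which is why the DTS adjusts thickness dynamically rather than fixing it.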
(This article belongs to the Section Smart System Infrastructure and Applications)

16 pages, 12428 KiB  
Article
A Classification Method for Academic Resources Based on a Graph Attention Network
by Jie Yu, Yaliu Li, Chenle Pan and Junwei Wang
Future Internet 2021, 13(3), 64; https://doi.org/10.3390/fi13030064 - 4 Mar 2021
Cited by 1 | Viewed by 3115
Abstract
Classification of resources can effectively reduce the work of filtering massive academic resources, such as selecting relevant papers and focusing on the latest research by scholars in the same field. However, existing graph neural networks do not take into account the associations between academic resources, leading to unsatisfactory classification results. In this paper, we propose an Association Content Graph Attention Network (ACGAT), which is based on the association features and content attributes of academic resources. Semantic relevance and academic relevance are introduced into the model. The ACGAT makes full use of the association commonality and influence information of resources and introduces an attention mechanism to improve the accuracy of academic resource classification. We conducted experiments on a self-built scholar network and two public citation networks. Experimental results show that the ACGAT is more effective than existing classification methods. Full article
(This article belongs to the Special Issue Natural Language Engineering: Methods, Tasks and Applications)

18 pages, 4605 KiB  
Article
Estimating PQoS of Video Conferencing on Wi-Fi Networks Using Machine Learning
by Maghsoud Morshedi and Josef Noll
Future Internet 2021, 13(3), 63; https://doi.org/10.3390/fi13030063 - 3 Mar 2021
Cited by 4 | Viewed by 3093
Abstract
Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity among Internet users as multi-platform solutions enabling interactive communication from anywhere, especially during this pandemic era. Meanwhile, Internet service providers (ISPs) have deployed fiber links and customer premises equipment that operate according to recent 802.11ac/ax standards and promise users the ability to establish uninterrupted video conferencing calls with ultra-high-definition video and audio quality. However, the best-effort nature of 802.11 networks and the high variability of wireless medium conditions hinder users from experiencing uninterrupted high-quality video conferencing. This paper presents a novel approach to estimate the perceived quality of service (PQoS) of video conferencing using only 802.11-specific network performance parameters collected from Wi-Fi access points (APs) on customer premises. This study produced datasets comprising 802.11-specific network performance parameters collected from off-the-shelf Wi-Fi APs operating under the 802.11g/n/ac/ax standards on both the 2.4 and 5 GHz frequency bands to train machine learning algorithms. In this way, we achieved classification accuracies of 92–98% in estimating the level of PQoS of video conferencing services on various Wi-Fi networks. To efficiently troubleshoot wireless issues, we further analyzed the machine learning model to correlate features in the model with the root cause of quality degradation. Thus, ISPs can utilize the approach presented in this study to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customers' privacy while reducing the operational costs of monitoring and data analytics. Full article
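As a stand-in for the trained models reported here, the inference step, mapping AP-side 802.11 counters to a PQoS class, can be sketched with a nearest-centroid rule. The feature names and centroid values below are hypothetical, not taken from the paper's datasets:

```python
import math

# Hypothetical per-AP features: (retry_rate, mean_phy_rate_mbps, rssi_dbm).
# The centroids stand in for a model trained on labeled Wi-Fi measurements.
CENTROIDS = {
    "good": (0.05, 400.0, -50.0),
    "poor": (0.35, 60.0, -78.0),
}

def classify_pqos(sample):
    """Nearest-centroid stand-in for a trained classifier: map AP-side
    802.11 counters to a perceived-quality class without touching any
    user traffic (the non-intrusive, privacy-preserving property)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda c: dist(sample, CENTROIDS[c]))

label = classify_pqos((0.08, 350.0, -55.0))
```

Because only AP-side counters are used, such a classifier can run on the access point itself as edge computing, which is the deployment model the abstract proposes for ISPs.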

32 pages, 2102 KiB  
Review
Distributed Ledger Technology Review and Decentralized Applications Development Guidelines
by Claudia Antal, Tudor Cioara, Ionut Anghel, Marcel Antal and Ioan Salomie
Future Internet 2021, 13(3), 62; https://doi.org/10.3390/fi13030062 - 27 Feb 2021
Cited by 74 | Viewed by 11606
Abstract
Distributed Ledger Technology (DLT) provides an infrastructure for developing decentralized applications with no central authority for registering, sharing, and synchronizing transactions on digital assets. In recent years, it has drawn high interest from the academic community, technology developers, and startups, mostly through the advent of its most popular type, blockchain technology. In this paper, we provide a comprehensive overview of DLT, analyzing the challenges, the solutions or alternatives provided, and their usage for developing decentralized applications. We define a three-tier architecture for DLT applications to systematically classify the technology solutions described in over 100 papers and startup initiatives. The Protocol and Network Tier contains solutions for digital asset registration, transactions, data structures, privacy, and business-rule implementation, as well as for the creation of peer-to-peer networks, ledger replication, and consensus-based state validation. The Scalability and Interoperability Tier addresses scalability and interoperability issues, with a focus on blockchain technology, where they manifest most often, slowing down large-scale adoption. The paper closes with a discussion of challenges and opportunities for developing decentralized applications, providing a multi-step guideline for decentralizing the design and implementation of traditional systems. Full article
(This article belongs to the Special Issue Blockchain: Applications, Challenges, and Solutions)

13 pages, 867 KiB  
Article
A Cloud-Based Data Collaborative to Combat the COVID-19 Pandemic and to Solve Major Technology Challenges
by Max Cappellari, John Belstner, Bryan Rodriguez and Jeff Sedayao
Future Internet 2021, 13(3), 61; https://doi.org/10.3390/fi13030061 - 27 Feb 2021
Cited by 5 | Viewed by 3495
Abstract
The XPRIZE Foundation designs and operates multi-million-dollar global competitions to incentivize the development of technological breakthroughs that accelerate humanity toward a better future. To combat the COVID-19 pandemic, the foundation coordinated with several organizations to make datasets about different facets of the disease available and to provide the computational resources needed to analyze those datasets. This paper is a case study of the requirements, design, and implementation of the XPRIZE Data Collaborative, a Cloud-based infrastructure that enables the XPRIZE to meet its COVID-19 mission and to host future data-centric competitions. We examine how a Cloud Native Application can use an unexpectedly wide variety of Cloud technologies, ranging from containers and serverless computing to older ones such as Virtual Machines. We also examine and document the effects that the pandemic had on application development in the Cloud. We include our experiences of having users successfully exercise the Data Collaborative, detailing the challenges encountered and areas for improvement and future work. Full article
(This article belongs to the Special Issue Cloud-Native Applications and Services)

18 pages, 1551 KiB  
Article
Learning How to Separate Fake from Real News: Scalable Digital Tutorials Promoting Students’ Civic Online Reasoning
by Carl-Anton Werner Axelsson, Mona Guath and Thomas Nygren
Future Internet 2021, 13(3), 60; https://doi.org/10.3390/fi13030060 - 27 Feb 2021
Cited by 25 | Viewed by 9239
Abstract
With the rise of misinformation, there is a great need for scalable educational interventions supporting students’ abilities to determine the trustworthiness of digital news. We address this challenge in our study by developing an online intervention tool based on tutorials in civic online reasoning that aims to teach adolescents how to critically assess online information comprising text, videos and images. Our findings from an online intervention with 209 upper secondary students highlight how observational learning and feedback support their ability to read laterally and improve their performance in determining the credibility of digital news and social media posts. Full article
(This article belongs to the Special Issue Social Network and Sustainable Distance Education)

22 pages, 1464 KiB  
Article
Adapting Data-Driven Research to the Fields of Social Sciences and the Humanities
by Albert Weichselbraun, Philipp Kuntschik, Vincenzo Francolino, Mirco Saner, Urs Dahinden and Vinzenz Wyss
Future Internet 2021, 13(3), 59; https://doi.org/10.3390/fi13030059 - 26 Feb 2021
Cited by 6 | Viewed by 3141
Abstract
Recent developments in computer science, such as advances in the areas of big data, knowledge extraction, and deep learning, have triggered the application of data-driven research methods to disciplines such as the social sciences and humanities. This article presents a collaborative, interdisciplinary process for adapting data-driven research to research questions within other disciplines, which considers the methodological background required to obtain a significant impact on the target discipline and guides the systematic collection and formalization of domain knowledge, as well as the selection of appropriate data sources and methods for analyzing, visualizing, and interpreting the results. Finally, we present a case study that applies the described process to the domain of communication science by creating approaches that aid domain experts in locating, tracking, analyzing, and, finally, better understanding the dynamics of media criticism. The study clearly demonstrates the potential of the presented method, but also shows that data-driven research approaches require tighter integration with the methodological framework of the target discipline in order to have a truly significant impact on it. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)

35 pages, 1758 KiB  
Article
A Multi-Tier Security Analysis of Official Car Management Apps for Android
by Efstratios Chatzoglou, Georgios Kambourakis and Vasileios Kouliaridis
Future Internet 2021, 13(3), 58; https://doi.org/10.3390/fi13030058 - 25 Feb 2021
Cited by 11 | Viewed by 7360
Abstract
Using automotive smartphone applications (apps) provided by car manufacturers may offer numerous advantages to the vehicle owner, including improved safety, fuel efficiency, anytime monitoring of vehicle data, and timely over-the-air delivery of software updates. On the other hand, the continuous tracking of vehicle data by such apps may also pose a risk to the car owner if, say, sensitive pieces of information are leaked to third parties or the app is vulnerable to attacks. This work contributes, to our knowledge, the first full-fledged security assessment of all the official single-vehicle management apps offered by major car manufacturers operating in Europe. The apps are scrutinised statically with the purpose of identifying not only surfeits, say, in terms of the permissions requested, but also weaknesses from a vulnerability assessment viewpoint. On top of that, we run each app to identify possible weak security practices in the owner-to-app registration process. The results reveal a multitude of issues, ranging from the over-claiming of sensitive permissions and the use of possibly privacy-invasive API calls to numerous potentially exploitable CWE- and CVE-identified weaknesses and vulnerabilities, the in some cases excessive employment of third-party trackers, and a number of other flaws related to the use of third-party software libraries, unsanitised input, and weak user password policies, to mention just a few. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Cybersecurity Section)

18 pages, 751 KiB  
Review
Investigation of Degradation and Upgradation Models for Flexible Unit Systems: A Systematic Literature Review
by Thirupathi Samala, Vijaya Kumar Manupati, Maria Leonilde R. Varela and Goran Putnik
Future Internet 2021, 13(3), 57; https://doi.org/10.3390/fi13030057 - 25 Feb 2021
Cited by 5 | Viewed by 2808
Abstract
Research on flexible unit systems (FUS) in the context of descriptive, predictive, and prescriptive analysis has progressed remarkably in recent times, and it is now reinforced in the current Industry 4.0 era by the increased focus on the integration of distributed and digitalized systems. In the existing literature, most of the work has focused on the individual contributions of the three above-mentioned analyses. Moreover, the current literature is unclear with respect to the integration of degradation and upgradation models for FUS. In this paper, a systematic literature review considers degradation, residual life distribution, workload adjustment strategy, upgradation, and predictive maintenance as the major performance measures for investigating the performance of FUS. In order to identify the key issues and research gaps in the existing literature, the 59 most relevant papers from 2009 to 2020 have been sorted and analyzed. Finally, we identify promising research opportunities that could expand the scope and depth of FUS. Full article

22 pages, 2288 KiB  
Article
Online Professional Learning in Response to COVID-19—Towards Robust Evaluation
by Alireza Ahadi, Matt Bower, Abhay Singh and Michael Garrett
Future Internet 2021, 13(3), 56; https://doi.org/10.3390/fi13030056 - 24 Feb 2021
Cited by 10 | Viewed by 4480
Abstract
As COVID-19 continues to impact education worldwide, systems and organizations are rapidly transitioning their professional learning to online mode. This raises concerns, not simply about whether online professional learning can result in outcomes equivalent to face-to-face learning, but more importantly about how best to evaluate online professional learning so that we can iteratively improve our approaches. This case study analyses the evaluation of an online teacher professional development workshop for the purpose of critically reflecting upon the efficacy of workshop evaluation techniques. The evaluation approach was theoretically based on a synthesis of six seminal workshop evaluation models and structured around eight critical dimensions of educational technology evaluation. The approach, involving the collection of pre-workshop participant background information, pre-/post-workshop teacher perception data, and post-workshop focus-group perceptions, enabled changes in teacher knowledge, skills, and beliefs to be objectively evaluated, while also providing qualitative information for effectively improving future iterations of the workshops along a broad range of dimensions. The evaluation demonstrated that the professional learning that was shifted into online mode in response to COVID-19 could unequivocally result in significant improvements to professional learning outcomes. More importantly, the evaluation approach is critically contrasted with previous evaluation models, and a series of recommendations for the evaluation of technology-enhanced teacher professional development workshops are proposed. Full article
(This article belongs to the Special Issue E-Learning and Technology Enhanced Learning)
