Future Internet, Volume 14, Issue 3 (March 2022) – 29 articles

Cover Story: Authentication relies on detecting inconsistencies that indicate malicious editing in audiovisual files. However, automation does not guarantee robustness. A computer-supported toolbox is presented that can assist inexperienced users in visually investigating the consistency of audio streams. Several algorithms are incorporated, including a convolutional network model for Signal-to-Reverberation-Ratio (SRR) estimation. The user can access the application through a web browser and can upload an audio/video file or YouTube link. The application outputs a set of interactive visualizations that help the user investigate the authenticity of the file. Following a crowdsourcing methodology, users can contribute by uploading or annotating files from the dataset to determine their authenticity.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 6021 KiB  
Article
Time Series Surface Temperature Prediction Based on Cyclic Evolutionary Network Model for Complex Sea Area
by Jiahao Shi, Jie Yu, Jinkun Yang, Lingyu Xu and Huan Xu
Future Internet 2022, 14(3), 96; https://doi.org/10.3390/fi14030096 - 21 Mar 2022
Cited by 4 | Viewed by 2294
Abstract
The prediction of marine elements has become increasingly important in the field of marine research. However, time series data in a complex environment vary significantly because they are composed of dynamic changes with multiple mechanisms, causes, and laws. For example, sea surface temperature (SST) can be influenced by ocean currents. Conventional models often focus on capturing the impact of historical data but ignore the spatio-temporal relationships in sea areas, and they cannot predict such widely varying data effectively. In this work, we propose a cyclic evolutionary network model (CENS), an error-driven network group composed of multiple network node units. Different regions of data can be automatically matched to a suitable network node unit for prediction, so that the model can cluster the data based on their characteristics and, therefore, be more practical. Experiments were performed on the Bohai Sea and the South China Sea. First, we performed an ablation experiment to verify the effectiveness of the framework. Second, we tested the model on sea surface temperature prediction, and the results verified the accuracy of CENS. Lastly, a meaningful finding was that the clustering results of the model in the South China Sea matched the actual characteristics of its continental shelf, and the clusters showed spatial continuity. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)
Show Figures

Figure 1
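The error-driven routing idea in the abstract above (each region of data is matched to the network node unit that predicts it best) can be sketched as follows. This is a hypothetical illustration: the unit names, error history, and argmin selection rule are invented and far simpler than the actual CENS model.

```python
# Toy sketch of error-driven routing: each sea region is assigned to whichever
# predictor unit has the lowest historical error on that region.

def assign_regions(regions, predictors, history):
    """Map each region to the predictor unit with minimal past error."""
    assignment = {}
    for region in regions:
        errors = {name: history[(name, region)] for name in predictors}
        assignment[region] = min(errors, key=errors.get)
    return assignment

# Invented error history: (unit, region) -> mean prediction error
history = {
    ("unit_a", "bohai"): 0.8, ("unit_b", "bohai"): 0.3,
    ("unit_a", "south_china"): 0.2, ("unit_b", "south_china"): 0.9,
}
print(assign_regions(["bohai", "south_china"], ["unit_a", "unit_b"], history))
# {'bohai': 'unit_b', 'south_china': 'unit_a'}
```

Units that specialize on a region accumulate lower error there, so the routing and the clustering of regions emerge together.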

30 pages, 2168 KiB  
Review
Self-Organizing Networks for 5G and Beyond: A View from the Top
by Andreas G. Papidas and George C. Polyzos
Future Internet 2022, 14(3), 95; https://doi.org/10.3390/fi14030095 - 17 Mar 2022
Cited by 17 | Viewed by 8285
Abstract
We describe self-organizing network (SON) concepts and architectures and their potential to play a central role in 5G deployment and next-generation networks. Our focus is on the basic SON use case applied to radio access networks (RAN), which is self-optimization. We analyze SON applications’ rationale and operation, the design and dimensioning of SON systems, possible deficiencies and conflicts that occur through the parallel operation of functions, and describe the strong reliance on machine learning (ML) and artificial intelligence (AI). Moreover, we present and comment on very recent proposals for SON deployment in 5G networks. Typical examples include the binding of SON systems with techniques such as Network Function Virtualization (NFV), Cloud RAN (C-RAN), Ultra-Reliable Low Latency Communications (URLLC), massive Machine-Type Communication (mMTC) for IoT, and automated backhauling, which lead the way towards the adoption of SON techniques in Beyond 5G (B5G) networks. Full article
(This article belongs to the Special Issue 5G Enabling Technologies and Wireless Networking)
Show Figures

Figure 1

19 pages, 1239 KiB  
Article
A Data-Driven Approach to Improve Customer Churn Prediction Based on Telecom Customer Segmentation
by Tianyuan Zhang, Sérgio Moro and Ricardo F. Ramos
Future Internet 2022, 14(3), 94; https://doi.org/10.3390/fi14030094 - 16 Mar 2022
Cited by 26 | Viewed by 6857
Abstract
Numerous valuable clients can be lost to competitors in the telecommunication industry, leading to profit loss. Thus, understanding the reasons for client churn is vital for telecommunication companies. This study aimed to develop a churn prediction model to predict telecom client churn through customer segmentation. Data were collected from three major Chinese telecom companies, and Fisher discriminant equations and logistic regression analysis were used to build a telecom customer churn prediction model. According to the results, the churn model constructed by regression analysis achieved higher prediction accuracy (93.94%). This study will help telecom companies efficiently predict the likelihood of customer churn and take targeted measures to avoid it, thereby increasing their profits. Full article
(This article belongs to the Special Issue Big Data Analytics, Privacy and Visualization)
Show Figures

Figure 1
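The logistic-regression side of the churn model above can be sketched in a few lines. The features (tenure, monthly fee) and toy labels below are invented for illustration; the paper's actual dataset and the Fisher discriminant step are not reproduced here.

```python
import math

# Minimal logistic regression for churn prediction, trained by stochastic
# gradient descent on invented toy data: [tenure, monthly_fee] -> churned.

def train_logreg(X, y, lr=0.1, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                      # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))       # churn probability

# Short tenure + high fee -> churn (label 1); long tenure + low fee -> stay.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
print(predict(w, b, [0.15, 0.85]) > 0.5)    # likely churner
```

In practice the model would be fitted on segmented customer groups, with one set of coefficients per segment.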

19 pages, 4672 KiB  
Review
Survey on Videos Data Augmentation for Deep Learning Models
by Nino Cauli and Diego Reforgiato Recupero
Future Internet 2022, 14(3), 93; https://doi.org/10.3390/fi14030093 - 16 Mar 2022
Cited by 21 | Viewed by 5515
Abstract
In most Computer Vision applications, Deep Learning models achieve state-of-the-art performance. One drawback of Deep Learning is the large amount of data needed to train the models. Unfortunately, in many applications, data are difficult or expensive to collect. Data augmentation can alleviate the problem, generating new data from a smaller initial dataset. Geometric and color space image augmentation methods can increase the accuracy of Deep Learning models but are often not enough. More advanced solutions are Domain Randomization methods or the use of simulation to artificially generate the missing data. Data augmentation algorithms are usually designed for single images. More recently, Deep Learning models have been applied to the analysis of video sequences. The aim of this paper is to perform an exhaustive study of novel video data augmentation techniques for Deep Learning models and to point out future directions of research on this topic. Full article
(This article belongs to the Special Issue Big Data Analytics, Privacy and Visualization)
Show Figures

Graphical abstract
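Two of the simplest temporal augmentations the survey's domain deals with, clip reversal and random frame dropping, can be sketched on a clip represented as a plain list of frames. These are generic techniques, not the survey's contribution; the frame counts and keep probability are illustrative.

```python
import random

# Toy temporal augmentations for video clips, where a clip is a list of frames.

def reverse_clip(frames):
    """Play the clip backwards (temporal reversal)."""
    return frames[::-1]

def drop_frames(frames, keep_prob=0.8, rng=None):
    """Randomly drop frames to perturb the clip's temporal sampling."""
    rng = rng or random.Random(0)
    kept = [f for f in frames if rng.random() < keep_prob]
    return kept or frames[:1]        # never return an empty clip

clip = list(range(8))                # stand-in for 8 frames
print(reverse_clip(clip))            # [7, 6, 5, 4, 3, 2, 1, 0]
print(len(drop_frames(clip)) <= len(clip))
```

Unlike single-image augmentation, both operations change the temporal structure the model sees, which is exactly what video-specific augmentation targets.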

20 pages, 616 KiB  
Article
The Time Machine in Columnar NoSQL Databases: The Case of Apache HBase
by Chia-Ping Tsai, Che-Wei Chang, Hung-Chang Hsiao and Haiying Shen
Future Internet 2022, 14(3), 92; https://doi.org/10.3390/fi14030092 - 15 Mar 2022
Cited by 2 | Viewed by 2500
Abstract
Not Only SQL (NoSQL) is a critical technology that is scalable and provides flexible schemas, thereby complementing existing relational database technologies. Although NoSQL is flourishing, present solutions lack the features required by enterprises for critical missions. In this paper, we explore solutions to the data recovery issue in NoSQL. Data recovery for any database table entails restoring the table to a prior state or replaying (insert/update) operations over the table given a time period in the past. Recovery of NoSQL database tables enables applications such as failure recovery, analysis for historical data, debugging, and auditing. Particularly, our study focuses on columnar NoSQL databases. We propose and evaluate two solutions to address the data recovery problem in columnar NoSQL and implement our solutions based on Apache HBase, a popular NoSQL database in the Hadoop ecosystem widely adopted across industries. Our implementations are extensively benchmarked with an industrial NoSQL benchmark under real environments. Full article
(This article belongs to the Special Issue Advances in High Performance Cloud Computing)
Show Figures

Figure 1
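The "restore the table to a prior state" operation described in the abstract can be illustrated with a toy versioned table, mimicking how a columnar store such as HBase keeps multiple timestamped versions per cell. This is a hypothetical model of the idea, not HBase's actual implementation.

```python
# Toy "time machine": each (row, column) keeps a list of (timestamp, value)
# versions, and a snapshot reconstructs the table state at a past timestamp.

class VersionedTable:
    def __init__(self):
        self.cells = {}                      # (row, col) -> [(ts, value), ...]

    def put(self, row, col, ts, value):
        self.cells.setdefault((row, col), []).append((ts, value))

    def snapshot(self, as_of_ts):
        """Restore the table state as of a past timestamp."""
        state = {}
        for key, versions in self.cells.items():
            past = [(ts, v) for ts, v in versions if ts <= as_of_ts]
            if past:
                state[key] = max(past)[1]    # newest version not after as_of_ts
        return state

t = VersionedTable()
t.put("r1", "temp", 100, "cold")
t.put("r1", "temp", 200, "warm")
print(t.snapshot(150))   # {('r1', 'temp'): 'cold'}
print(t.snapshot(250))   # {('r1', 'temp'): 'warm'}
```

Replaying operations over a time window is the same scan restricted to `ts` inside the window instead of `ts <= as_of_ts`.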

14 pages, 522 KiB  
Article
CPU-GPU-Memory DVFS for Power-Efficient MPSoC in Mobile Cyber Physical Systems
by Somdip Dey, Samuel Isuwa, Suman Saha, Amit Kumar Singh and Klaus McDonald-Maier
Future Internet 2022, 14(3), 91; https://doi.org/10.3390/fi14030091 - 14 Mar 2022
Cited by 6 | Viewed by 3238
Abstract
Most modern mobile cyber-physical systems such as smartphones come equipped with multi-processor systems-on-chip (MPSoCs) with varying computing capacity, both to cater to performance requirements and to reduce power consumption when executing an application. In this paper, we propose a novel approach to dynamic voltage and frequency scaling (DVFS) on the CPU, GPU and RAM of a mobile MPSoC, which caters to the performance requirements of the executing application while consuming low power. We evaluate our methodology on a real hardware platform, Odroid XU4, and the experimental results show the approach to be 26% more power-efficient and 21% more thermal-efficient than the state-of-the-art system. Full article
Show Figures

Figure 1
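The reason DVFS saves power is that dynamic power scales roughly as P = C * V^2 * f, so lowering voltage and frequency together gives a super-linear saving. The capacitance and operating points below are invented numbers for a back-of-the-envelope illustration, not measurements from the Odroid XU4.

```python
# Back-of-the-envelope DVFS arithmetic: dynamic power P = C * V^2 * f.

def dynamic_power(capacitance, voltage, freq_hz):
    return capacitance * voltage ** 2 * freq_hz

C = 1e-9                                   # effective switched capacitance (F), invented
p_high = dynamic_power(C, 1.1, 2.0e9)      # high-performance operating point
p_low  = dynamic_power(C, 0.9, 1.4e9)      # scaled-down operating point
print(round(1 - p_low / p_high, 2))        # 0.53: over half the dynamic power saved
```

A 30% frequency cut plus an 18% voltage cut saves about 53% of dynamic power here, which is why a DVFS governor that coordinates CPU, GPU and RAM operating points can beat per-component heuristics.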

20 pages, 1483 KiB  
Article
A Density-Based Random Forest for Imbalanced Data Classification
by Jia Dong and Quan Qian
Future Internet 2022, 14(3), 90; https://doi.org/10.3390/fi14030090 - 14 Mar 2022
Cited by 12 | Viewed by 2850
Abstract
Many machine learning problem domains, such as the detection of fraud, spam, outliers, and anomalies, tend to involve inherently imbalanced class distributions of samples. However, most classification algorithms assume equivalent sample sizes for each class. Therefore, imbalanced classification datasets pose a significant challenge in prediction modeling. Herein, we propose a density-based random forest algorithm (DBRF) to improve prediction performance, especially for minority classes. DBRF is designed to recognize boundary samples as the most difficult to classify and then uses a density-based method to augment them. Subsequently, two different random forest classifiers are constructed to model the augmented boundary samples and the original dataset separately, and the final output is determined using a bagging technique. A real-world material classification dataset and 33 open public imbalanced datasets were used to evaluate the performance of DBRF. On the 34 datasets, DBRF achieved improvements of 2–15% over random forest in terms of the F1-measure and G-mean. The experimental results proved the ability of DBRF to solve the problem of classifying objects located on the class boundary, including objects of minority classes, by taking into account the density of objects in space. Full article
(This article belongs to the Special Issue AI, Machine Learning and Data Analytics for Wireless Communications)
Show Figures

Graphical abstract
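The core intuition behind boundary-sample recognition can be sketched with a nearest-neighbour test: a sample whose neighbours come from mixed classes sits near the class boundary. The 1-D points, labels, and k below are invented; DBRF's actual density-based criterion is richer than this.

```python
# Sketch of boundary detection: a point is a boundary sample if its k nearest
# neighbours are not all from one class.

def is_boundary(idx, points, labels, k=3):
    order = sorted(range(len(points)),
                   key=lambda j: abs(points[j] - points[idx]))
    neighbours = [labels[j] for j in order[1:k + 1]]   # skip the point itself
    return len(set(neighbours)) > 1                    # mixed classes nearby

points = [0.0, 0.1, 0.2, 0.45, 0.5, 0.55, 0.9, 1.0]
labels = [0,   0,   0,   0,    1,   1,    1,   1]
print([i for i in range(len(points)) if is_boundary(i, points, labels)])
# [3, 4, 5]: exactly the points straddling the 0/1 boundary
```

Once identified, these are the samples DBRF augments before training its second forest.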

27 pages, 1227 KiB  
Article
A Survey on Intrusion Detection Systems for Fog and Cloud Computing
by Victor Chang, Lewis Golightly, Paolo Modesti, Qianwen Ariel Xu, Le Minh Thao Doan, Karl Hall, Sreeja Boddu and Anna Kobusińska
Future Internet 2022, 14(3), 89; https://doi.org/10.3390/fi14030089 - 13 Mar 2022
Cited by 33 | Viewed by 8250
Abstract
The rapid advancement of internet technologies has dramatically increased the number of connected devices. This has created a huge attack surface that requires the deployment of effective and practical countermeasures to protect network infrastructures from the harm that cyber-attacks can cause. Hence, there is an absolute need to differentiate boundaries in personal information and cloud and fog computing globally and to adopt specific information security policies and regulations. The goal of the security policy and framework for cloud and fog computing is to protect end-users and their information, reduce task-based operations, aid in compliance, and create standards for expected user actions, all of which are based on the use of established rules for cloud computing. Moreover, intrusion detection systems are widely adopted solutions to monitor and analyze network traffic and detect anomalies that can help identify ongoing adversarial activities, trigger alerts, and automatically block traffic from hostile sources. This survey paper analyzes factors, including the application of technologies and techniques, that can enable the successful deployment of security policy on fog and cloud computing. The paper focuses on Software-as-a-Service (SaaS) and intrusion detection, which provide an effective and resilient system structure for users and organizations. Our survey aims to provide a framework for a cloud and fog computing security policy, while addressing the required security tools, policies, and services, particularly for cloud and fog environments, for organizational adoption. While developing the essential linkage between requirements, legal aspects, and analysis techniques and systems for intrusion detection, we recommend strategies for cloud and fog computing security policies. The paper develops structured guidelines for ways in which organizations can adopt and audit the security of their systems, as security is an essential component of those systems, and presents a current state-of-the-art review of intrusion detection systems and their principles. Functionalities and techniques for developing these defense mechanisms are considered, along with concrete products utilized in operational systems. Finally, we discuss evaluation criteria and open-ended challenges in this area. Full article
Show Figures

Figure 1

14 pages, 7919 KiB  
Article
Neural Network-Based Price Tag Data Analysis
by Pavel Laptev, Sergey Litovkin, Sergey Davydenko, Anton Konev, Evgeny Kostyuchenko and Alexander Shelupanov
Future Internet 2022, 14(3), 88; https://doi.org/10.3390/fi14030088 - 13 Mar 2022
Cited by 3 | Viewed by 3487
Abstract
This paper compares neural networks, specifically Unet, MobileNetV2, VGG16 and YOLOv4-tiny, for image segmentation as part of a study aimed at finding an optimal solution for price tag data analysis. The neural networks considered were trained on an individual dataset collected by the authors. Additionally, this paper covers the automatic image text recognition approach using the EasyOCR API. The research revealed that the optimal network for segmentation is YOLOv4-tiny, with a cross-validation accuracy of 96.92%. The accuracy of EasyOCR was also calculated, at 95.22%. Full article
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia)
Show Figures

Figure 1

42 pages, 5579 KiB  
Article
Handover Management in 5G Vehicular Networks
by Ioannis Kosmopoulos, Emmanouil Skondras, Angelos Michalas, Emmanouel T. Michailidis and Dimitrios D. Vergados
Future Internet 2022, 14(3), 87; https://doi.org/10.3390/fi14030087 - 13 Mar 2022
Cited by 7 | Viewed by 3176
Abstract
Fifth-Generation (5G) vehicular networks support novel services with increased Quality of Service (QoS) requirements. Vehicular users need to be continuously connected to networks that fulfil the constraints of their services. Thus, the implementation of optimal Handover (HO) mechanisms for 5G vehicular architectures is deemed necessary. This work describes a scheme for performing HOs in 5G vehicular networks using the functionalities of the Media-Independent Handover (MIH) and Fast Proxy Mobile IPv6 (FPMIPv6) standards. The scheme supports both predictive and reactive HO scenarios. A velocity and alternative network monitoring process prepares each vehicle for both HO cases. In the predictive case, the HO is initiated each time the satisfaction grade of the vehicular user drops below a predefined threshold. In the reactive case, the vehicle loses connectivity to its serving network and connects to the available network that obtained the highest ranking in the network selection process. Furthermore, the HO implementation is based on an improved version of the FPMIPv6 protocol. For the evaluation of the described methodology, a 5G vehicular network architecture in which multiple network access technologies coexist was simulated. The experimental results showed that the proposed scheme outperforms existing HO methods. Full article
(This article belongs to the Special Issue Future Intelligent Vehicular Networks toward 6G)
Show Figures

Graphical abstract
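The trigger logic described in the abstract, predictive handover on a satisfaction threshold and reactive handover on connectivity loss, can be sketched as a small decision function. The grades, network names, rankings, and threshold below are invented values, not the paper's actual selection algorithm.

```python
# Sketch of the HO trigger: reactive on connectivity loss, predictive when the
# user's satisfaction grade drops below a threshold; either way, the handover
# target is the top-ranked alternative network.

def handover_decision(satisfaction, connected, rankings, threshold=0.6):
    if not connected:
        return ("reactive", max(rankings, key=rankings.get))
    if satisfaction < threshold:
        return ("predictive", max(rankings, key=rankings.get))
    return ("stay", None)

rankings = {"5g_cell_a": 0.7, "wifi_b": 0.9}     # from a network selection process
print(handover_decision(0.8, True, rankings))    # ('stay', None)
print(handover_decision(0.4, True, rankings))    # ('predictive', 'wifi_b')
print(handover_decision(0.9, False, rankings))   # ('reactive', 'wifi_b')
```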

17 pages, 3624 KiB  
Article
Unsupervised Anomaly Detection and Segmentation on Dirty Datasets
by Jiahao Guo, Xiaohuo Yu and Lu Wang
Future Internet 2022, 14(3), 86; https://doi.org/10.3390/fi14030086 - 13 Mar 2022
Cited by 1 | Viewed by 3176
Abstract
Industrial quality control is an important task. Most of the existing vision-based unsupervised industrial anomaly detection and segmentation methods require that the training set consists only of normal samples, which is difficult to ensure in practice. This paper proposes an unsupervised framework to solve the industrial anomaly detection and segmentation problem when the training set contains anomaly samples. Our framework uses a model pretrained on ImageNet as a feature extractor to extract patch-level features. We then propose a trimming method to estimate a robust Gaussian distribution based on the patch features at each position. Then, with an iterative filtering process, we iteratively filter out the anomaly samples in the training set and re-estimate the Gaussian distribution at each position. In the prediction phase, the Mahalanobis distance between a patch feature vector and the center of the Gaussian distribution at the corresponding position is used as the anomaly score of that patch. The subsequent anomaly region segmentation is performed based on the patch anomaly scores. We tested the proposed method on three datasets containing anomaly samples and obtained state-of-the-art performance. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)
Show Figures

Graphical abstract
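The scoring step, the Mahalanobis distance from a patch feature to the Gaussian estimated at that position, can be sketched in a few lines. A diagonal covariance is assumed here so the inverse is trivial, and the toy statistics and feature vectors are invented for illustration.

```python
import math

# Anomaly score of a patch = Mahalanobis distance to the per-position Gaussian.
# With a diagonal covariance, the distance reduces to a scaled Euclidean norm.

def mahalanobis_diag(x, mean, var):
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

mean, var = [0.0, 0.0], [1.0, 4.0]          # invented per-position statistics
normal_patch  = [0.5, 1.0]
anomaly_patch = [3.0, -6.0]
print(mahalanobis_diag(normal_patch, mean, var))   # small score
print(mahalanobis_diag(anomaly_patch, mean, var))  # large score -> anomalous
```

Thresholding these per-patch scores, and doing so at every spatial position, yields the anomaly segmentation map.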

18 pages, 892 KiB  
Article
Addressing Syntax-Based Semantic Complementation: Incorporating Entity and Soft Dependency Constraints into Metonymy Resolution
by Siyuan Du and Hao Wang
Future Internet 2022, 14(3), 85; https://doi.org/10.3390/fi14030085 - 12 Mar 2022
Cited by 2 | Viewed by 2221
Abstract
State-of-the-art methods for metonymy resolution (MR) consider the sentential context by modeling the entire sentence. However, informative cues such as entity representations and syntactic structure may be beneficial for identifying metonymy. Approaches that use only deep neural networks fail to capture such information. To leverage both entity and syntax constraints, this paper proposes a robust model, EBAGCN, for metonymy resolution. First, this work extracts syntactic dependency relations under the guidance of syntactic knowledge. Then it constructs a neural network to incorporate both entity representations and syntactic structure into better resolution representations. In this way, the proposed model alleviates the impact of noisy information from the entire sentence and improves performance on complicated texts. Experiments on the SemEval and ReLocaR datasets show that the proposed model significantly outperforms the state-of-the-art BERT baseline by more than 4%. Ablation tests demonstrate that leveraging these two types of constraints benefits pre-trained language models in the MR task. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
Show Figures

Figure 1

15 pages, 725 KiB  
Article
Weighted-CAPIC Caching Algorithm for Priority Traffic in Named Data Network
by Leanna Vidya Yovita, Nana Rachmana Syambas and Ian Joseph Matheus Edward
Future Internet 2022, 14(3), 84; https://doi.org/10.3390/fi14030084 - 12 Mar 2022
Cited by 4 | Viewed by 2267
Abstract
Today, the internet requires many additional mechanisms or protocols to support various ever-growing applications. As a future internet architecture candidate, the Named Data Network (NDN) offers a solution that naturally fulfills this need. One of the critical components in NDN is the cache. Caching in NDN reduces bandwidth usage, server load, and service time. Some research on caching has been conducted, but improvements can still be made. In this research, we derived the utility function of multiclass content to obtain the relationship between a class's weight and its cache hit ratio. We then formulated it into the Weighted-CAPIC caching algorithm. Our research shows that Weighted-CAPIC provides a higher cache hit ratio both for the priority class and for the whole system, while still providing the same path-stretch value as Dynamic-CAPIC. Weighted-CAPIC is suitable for use in mobile nodes due to its ability to work individually, without coordinating with other nodes. Full article
(This article belongs to the Special Issue Recent Advances in Information-Centric Networks (ICNs))
Show Figures

Figure 1
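The effect of class weights on caching can be illustrated with a generic weighted eviction rule: when the cache is full, evict the entry with the lowest weight-times-popularity product, so high-weight (priority) classes keep a larger share of the cache. This is a hypothetical illustration of weighted caching in general, not the Weighted-CAPIC algorithm itself, and all names and numbers are invented.

```python
# Generic class-weighted eviction sketch: priority classes get a higher weight,
# so their content is evicted last.

def evict_victim(cache, class_weight):
    """cache: dict name -> (cls, popularity). Returns the name to evict."""
    return min(cache, key=lambda n: class_weight[cache[n][0]] * cache[n][1])

cache = {
    "video_a": ("priority", 5),
    "doc_b":   ("normal",   5),
    "doc_c":   ("normal",   2),
}
weights = {"priority": 3.0, "normal": 1.0}
print(evict_victim(cache, weights))   # 'doc_c': low-weight class, low popularity
```

Raising a class's weight shifts cache capacity toward it, which is the weight/hit-ratio relationship the derived utility function captures.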

23 pages, 41814 KiB  
Article
High-Performance Computing and ABMS for High-Resolution COVID-19 Spreading Simulation
by Mattia Pellegrino, Gianfranco Lombardo, Stefano Cagnoni and Agostino Poggi
Future Internet 2022, 14(3), 83; https://doi.org/10.3390/fi14030083 - 11 Mar 2022
Cited by 6 | Viewed by 2623
Abstract
This paper presents an approach for the modeling and the simulation of the spreading of COVID-19 based on agent-based modeling and simulation (ABMS). Our goal is not only to support large-scale simulations but also to increase the simulation resolution. Moreover, we do not assume an underlying network of contacts, and the person-to-person contacts responsible for the spreading are modeled as a function of the geographical distance among the individuals. In particular, we defined a commuting mechanism combining radiation-based and gravity-based models, and we exploited the commuting properties at different resolution levels (municipalities and provinces). Finally, we exploited high-performance computing (HPC) facilities to simulate millions of concurrent agents, each mapping an individual's behavior. To perform such simulations, we developed a spreading simulator and validated it through the simulation of the spreading in two of the most populated Italian regions: Lombardy and Emilia-Romagna. Our main achievement is the effective modeling of 10 million concurrent agents, each mapping an individual's behavior with high resolution in terms of social contacts, mobility and contribution to the virus spreading. Moreover, we analyzed the forecasting ability of our framework to predict the number of infections when initialized with only a few days of real data. We validated our model against the statistical data from the serological analysis conducted in Lombardy; our model makes a smaller error than other state-of-the-art models, with a final root mean squared error (RMSE) of 56,009 when simulating the entire first pandemic wave in spring 2020. For the Emilia-Romagna region, we simulated the second pandemic wave during autumn 2020 and reached a final RMSE of 10,730.11. Full article
(This article belongs to the Special Issue Modern Trends in Multi-Agent Systems)
Show Figures

Figure 1
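The gravity-based half of the commuting mechanism can be sketched with its classic form: the flow between two municipalities grows with their populations and decays with the square of their distance, T_ij ~ m_i * m_j / d_ij^2. The populations, distances, and scale factor below are invented numbers; the paper combines this with a radiation-based model.

```python
# Toy gravity commuting model: flow ~ pop_i * pop_j / distance^2.

def gravity_flow(pop_i, pop_j, dist_km, scale=1e-6):
    return scale * pop_i * pop_j / dist_km ** 2

near = gravity_flow(100_000, 50_000, 10.0)   # nearby towns
far  = gravity_flow(100_000, 50_000, 100.0)  # same towns, 10x farther apart
print(near / far)                            # 100.0: flow drops with distance^2
```

In the simulator, such flows decide which agents commute between which municipalities, and therefore who can come into contact with whom.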

21 pages, 4234 KiB  
Article
Investigation of Using CAPTCHA Keystroke Dynamics to Enhance the Prevention of Phishing Attacks
by Emtethal K. Alamri, Abdullah M. Alnajim and Suliman A. Alsuhibany
Future Internet 2022, 14(3), 82; https://doi.org/10.3390/fi14030082 - 10 Mar 2022
Cited by 5 | Viewed by 2925
Abstract
Phishing is a cybercrime that is increasing exponentially day by day. In phishing, a phisher employs social engineering and technology to misdirect victims towards revealing their personal information, which can then be exploited. Despite ongoing research to find effective anti-phishing solutions, phishing remains a serious security problem for Internet users. In this paper, an investigation of using CAPTCHA keystroke dynamics to enhance the prevention of phishing attacks was presented. A controlled laboratory experiment was conducted, with the results indicating the proposed approach as highly effective in protecting online services from phishing attacks. The results showed a 0% false-positive rate and 17.8% false-negative rate. Overall, the proposed solution provided a practical and effective way of preventing phishing attacks. Full article
(This article belongs to the Section Cybersecurity)
Show Figures

Figure 1
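Keystroke dynamics rest on two timing features per typed sequence: dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). The sketch below computes these and compares a sample against an enrolled profile with a simple distance threshold; the timings, threshold, and matching rule are invented for illustration and are much cruder than the paper's approach.

```python
# Keystroke-dynamics sketch: dwell/flight features plus a naive threshold match.

def features(events):
    """events: list of (press_t, release_t) per key, in ms."""
    dwell = [r - p for p, r in events]
    flight = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return dwell + flight

def matches(profile, sample, threshold=30.0):
    """Accept if no feature deviates from the profile by more than threshold ms."""
    dist = max(abs(a - b) for a, b in zip(profile, sample))
    return dist <= threshold

enrolled = features([(0, 90), (150, 240), (300, 380)])   # CAPTCHA typed at enrollment
genuine  = features([(0, 95), (155, 250), (310, 395)])   # same user, similar rhythm
print(matches(enrolled, genuine))                        # True: timing pattern is close
```

A phisher replaying stolen credentials types the CAPTCHA with a different rhythm, so the timing check fails even when the text is correct.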

22 pages, 4133 KiB  
Article
Bot-Based Emergency Software Applications for Natural Disaster Situations
by Gabriel Ovando-Leon, Luis Veas-Castillo, Veronica Gil-Costa and Mauricio Marin
Future Internet 2022, 14(3), 81; https://doi.org/10.3390/fi14030081 - 9 Mar 2022
Cited by 3 | Viewed by 3029
Abstract
Upon a serious emergency situation such as a natural disaster, people quickly try to call their friends and family with the software they use every day. People also tend to participate as volunteers for rescue purposes. It is unlikely and impractical for these people to download and learn to use an application specially designed for aid processes. In this work, we investigate the feasibility of using bots, which provide a mechanism to work inside the software that people use daily, to develop emergency software applications designed to be used by victims and volunteers during stressful situations. In such situations, it is necessary to achieve efficiency, scalability, fault tolerance, elasticity, and mobility between data centers. We evaluate three bot-based applications. The first, named Jayma, sends information about affected people during the natural disaster to a network of contacts. The second, Ayni, manages and assigns tasks to volunteers. The third, named Rimay, registers volunteers and manages campaigns and emergency tasks. The applications are built using common practice for distributed software architecture design. Most of the components forming the architecture are existing public domain software, and some components are even consumed as external services, as in the case of Telegram. Moreover, the applications are executed on commodity hardware usually available at universities. We evaluate the applications to detect critical tasks, bottlenecks, and the most critical resource. Results show that Ayni and Rimay tend to saturate the CPU faster than other resources, while RAM tends to reach the highest utilization level in the Jayma application. Full article
(This article belongs to the Special Issue Machine Learning for Software Engineering)
17 pages, 9320 KiB  
Article
Deep Anomaly Detection Based on Variational Deviation Network
by Junwen Lu, Jinhui Wang, Xiaojun Wei, Keshou Wu and Guanfeng Liu
Future Internet 2022, 14(3), 80; https://doi.org/10.3390/fi14030080 - 8 Mar 2022
Viewed by 2695
Abstract
There is relatively little research on deep learning for anomaly detection. Existing deep anomaly detection methods focus on learning feature reconstructions, but the new feature representations they learn do not fully reflect the original features, leading to inaccurate anomaly scores. End-to-end deep anomaly detection algorithms exist, but they cannot accurately obtain a reference score that matches the data themselves. Moreover, in most practical scenarios the data are unlabeled, and where labeled datasets do exist, the confidence and accuracy of the labels are often very low, producing inaccurate results when fed into a model. Methods are therefore typically designed for unsupervised learning, and so the prior knowledge of known anomalous data is often not used to optimize the anomaly scores. To address these two problems, this paper proposes a new anomaly detection model that learns anomaly scores through a variational deviation network, i.e., not by learning the reconstruction error of new features, distance metrics, or random generation, but by learning the normal distribution of normal data. In this model, we force the anomaly scores to deviate significantly from those of normal data using a small amount of anomalous data and a reference score generated by variational self-encoding. Experimental results on multiple classes of data show that the proposed variational deviation network achieves higher accuracy than mainstream anomaly detection algorithms. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
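The deviation-based scoring described above, in which anomaly scores of labeled anomalies are pushed away from a reference score derived from normal data, can be illustrated with a minimal numpy sketch. The Gaussian reference statistics and the margin value are assumptions for illustration, not the paper's exact variational formulation.

```python
import numpy as np

def deviation_loss(scores, labels, ref_scores, margin=5.0):
    """Contrastive deviation loss: normal points (label 0) stay near the
    reference score distribution; anomalies (label 1) are pushed at least
    `margin` standard deviations away."""
    mu, sigma = ref_scores.mean(), ref_scores.std()
    dev = (scores - mu) / (sigma + 1e-8)          # z-score deviation
    loss = (1 - labels) * np.abs(dev) + labels * np.maximum(0.0, margin - dev)
    return float(loss.mean())

# Reference scores stand in for samples drawn from the learned normal prior
ref = np.array([-1.0, 1.0])                       # mean 0, std 1
scores = np.array([0.1, -0.2, 0.05, 8.0, 9.5])    # 3 normal, 2 anomalous
labels = np.array([0, 0, 0, 1, 1])
print(deviation_loss(scores, labels, ref))        # small: anomalies are beyond the margin
```

Normal points contribute their absolute deviation, while anomalies already beyond the margin contribute nothing, which is what drives the score separation.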
24 pages, 4791 KiB  
Article
Solar Radiation Forecasting by Pearson Correlation Using LSTM Neural Network and ANFIS Method: Application in the West-Central Jordan
by Hossam Fraihat, Amneh A. Almbaideen, Abdullah Al-Odienat, Bassam Al-Naami, Roberto De Fazio and Paolo Visconti
Future Internet 2022, 14(3), 79; https://doi.org/10.3390/fi14030079 - 5 Mar 2022
Cited by 22 | Viewed by 4201
Abstract
Solar energy is one of the most important renewable energies, with many advantages over other sources. Many parameters affect the electricity generated by solar plants. This paper studies the influence of these parameters on the prediction of solar radiation and the electric energy produced in the Salt-Jordan region (Middle East) using long short-term memory (LSTM) and Adaptive Network-based Fuzzy Inference System (ANFIS) models. Data for 24 meteorological parameters covering nearly the past five years were downloaded from the MeteoBleu database. The results show that the influence of parameters on solar radiation varies with the season. Forecasting with ANFIS provides better results when the parameter's correlation with solar radiation is high (Pearson correlation coefficient (PCC) between 0.95 and 1), whereas the LSTM neural network performs better when the correlation is low (PCC in the range 0.5–0.8). The obtained RMSE varies from 0.04 to 0.8 depending on the season and the parameters used; new meteorological parameters influencing solar radiation are also investigated. Full article
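The correlation-driven model choice reported in this abstract can be sketched as follows: compute the PCC between a meteorological parameter and solar radiation, then route to ANFIS or LSTM by the reported regimes. The selector function and the toy series are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def pick_model(pcc):
    """Illustrative selector following the reported regimes: ANFIS for
    strongly correlated parameters, LSTM for moderately correlated ones."""
    if 0.95 <= abs(pcc) <= 1.0:
        return "ANFIS"
    if 0.5 <= abs(pcc) <= 0.8:
        return "LSTM"
    return "either"

# Hypothetical daily means: air temperature vs. solar radiation (W/m^2)
temperature = [21.0, 23.5, 26.1, 28.4, 30.2]
radiation = [410.0, 455.0, 510.0, 560.0, 600.0]
r = pearson_r(temperature, radiation)
print(r, pick_model(r))
```

For a nearly linear pair like this the PCC is close to 1, so the selector routes to ANFIS; a noisier parameter (e.g., wind speed) would fall into the LSTM regime.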
29 pages, 1440 KiB  
Article
Graphol: A Graphical Language for Ontology Modeling Equivalent to OWL 2
by Domenico Lembo, Valerio Santarelli, Domenico Fabio Savo and Giuseppe De Giacomo
Future Internet 2022, 14(3), 78; https://doi.org/10.3390/fi14030078 - 28 Feb 2022
Cited by 6 | Viewed by 3724
Abstract
In this paper we study Graphol, a fully graphical language inspired by standard formalisms for conceptual modeling, similar to the UML class diagram and the ER model, but equipped with formal semantics. We formally prove that Graphol is equivalent to OWL 2, i.e., it can capture every OWL 2 ontology and vice versa. We also present usability studies indicating that Graphol is suitable for quick adoption by conceptual modelers who are familiar with UML and ER. This is further attested by the adoption of Graphol for ontology representation in several industrial projects. Full article
(This article belongs to the Special Issue Modern Trends in Multi-Agent Systems)
17 pages, 1106 KiB  
Article
Utilizing Blockchain for IoT Privacy through Enhanced ECIES with Secure Hash Function
by Yurika Pant Khanal, Abeer Alsadoon, Khurram Shahzad, Ahmad B. Al-Khalil, Penatiyana W. C. Prasad, Sabih Ur Rehman and Rafiqul Islam
Future Internet 2022, 14(3), 77; https://doi.org/10.3390/fi14030077 - 28 Feb 2022
Cited by 6 | Viewed by 3166
Abstract
Blockchain technology has been widely advocated for security and privacy in IoT systems. However, a major impediment to its successful implementation is the lack of privacy protection regarding user access policy while accessing personal data in the IoT system. This work aims to preserve the privacy of user access policy by protecting the confidentiality and authenticity of the transmitted message while obtaining the necessary consents for data access. We consider a Modified Elliptic Curve Integrated Encryption Scheme (ECIES) to improve the security strength of the transmitted message. A secure hash function is used in conjunction with a key derivation function to modify the encryption procedure, which enhances the efficiency of the encryption and decryption by generating multiple secure keys through one master key. The proposed solution eliminates user-dependent variables by including transaction generation and verification in the calculation of computation time, resulting in increased system reliability. In comparison to previously established work, the security of the transmitted message is improved through a reduction of more than 12% in the correlation coefficient between the constructed request transaction and encrypted transaction, coupled with a decrease of up to 7% in computation time. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT)
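The key-generation idea in this abstract, deriving multiple secure keys from one master key via a secure hash function and a key derivation function, can be sketched with the standard HKDF-Expand step (RFC 5869) from the Python standard library. The context labels and key lengths are illustrative assumptions; the paper's modified ECIES defines its own exact procedure.

```python
import hashlib
import hmac

def hkdf_expand(master_key: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869) over SHA-256: derive `length` bytes of key
    material for a given context label from a single master key."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(master_key, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# Hypothetical master key, e.g. the ECIES shared secret hashed once
master = hashlib.sha256(b"shared-secret-from-ecies").digest()
enc_key = hkdf_expand(master, b"encryption", 32)   # e.g. AES-256 key
mac_key = hkdf_expand(master, b"mac", 32)          # e.g. HMAC key
print(enc_key.hex() != mac_key.hex())              # independent keys from one master
```

Distinct `info` labels yield cryptographically independent keys, which is what lets one master key serve both encryption and authentication.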
14 pages, 519 KiB  
Article
Forecasting Students Dropout: A UTAD University Study
by Diogo E. Moreira da Silva, Eduardo J. Solteiro Pires, Arsénio Reis, Paulo B. de Moura Oliveira and João Barroso
Future Internet 2022, 14(3), 76; https://doi.org/10.3390/fi14030076 - 28 Feb 2022
Cited by 7 | Viewed by 3333
Abstract
In Portugal, the dropout rate of university courses is around 29%. Understanding the reasons behind such a high desertion rate can drastically improve the success of students and universities. This work applies existing data mining techniques to predict academic dropout, mainly using academic grades. Four different machine learning techniques are presented and analyzed. The dataset consists of 331 students who were previously enrolled in the Computer Engineering degree at the Universidade de Trás-os-Montes e Alto Douro (UTAD). The study aims to detect students who may drop out prematurely using existing methods. The most relevant data features were identified using the Permutation Feature Importance technique. In the second phase, several methods to predict the dropouts were applied. The results of each machine learning technique were then compared to select the best approach for predicting academic dropout. The methods achieved good results, reaching an F1-score of 81% on the final test set, suggesting that students' marks somehow reflect their living conditions. Full article
(This article belongs to the Special Issue Smart Objects and Technologies for Social Good)
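Permutation Feature Importance, which the abstract names as the feature-selection step, can be sketched generically: shuffle one feature column at a time and measure the drop in a score. The toy model and data below are hypothetical, only there to make the mechanics visible.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Importance of each feature = drop in the metric when that feature's
    column is randomly shuffled, averaged over several shuffles."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # break the feature/target link
            drops.append(baseline - metric(y, model.predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy model that predicts from the first feature only, so shuffling
# column 0 hurts accuracy while shuffling column 1 changes nothing.
class FirstColumnModel:
    def predict(self, X):
        return (X[:, 0] > 0.5).astype(int)

accuracy = lambda y, p: float(np.mean(y == p))
X = np.array([[0.0, 9.0], [1.0, 9.0], [0.0, 9.0], [1.0, 9.0]] * 10)
y = (X[:, 0] > 0.5).astype(int)
imp = permutation_importance(FirstColumnModel(), X, y, accuracy)
print(imp)   # column 0 scores higher than the irrelevant column 1
```

Features whose shuffling barely moves the metric can be dropped before training the dropout predictors.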
17 pages, 1806 KiB  
Article
A Prototype Web Application to Support Human-Centered Audiovisual Content Authentication and Crowdsourcing
by Nikolaos Vryzas, Anastasia Katsaounidou, Lazaros Vrysis, Rigas Kotsakis and Charalampos Dimoulas
Future Internet 2022, 14(3), 75; https://doi.org/10.3390/fi14030075 - 27 Feb 2022
Cited by 6 | Viewed by 2855
Abstract
Media authentication relies on the detection of inconsistencies that may indicate malicious editing in audio and video files. Traditionally, authentication is performed by forensics professionals using dedicated tools. There is rich research on automating this procedure, but the results do not yet guarantee the feasibility of fully automated tools. In the current approach, a computer-supported toolbox is presented, providing online functionality that assists technically inexperienced users (journalists or the public) in visually investigating the consistency of audio streams. Several algorithms based on previous research have been incorporated into the backend of the proposed system, including a novel CNN model that performs Signal-to-Reverberation-Ratio (SRR) estimation with a mean square error of 2.9%. The user can access the web application online through a web browser. After providing an audio/video file or a YouTube link, the application returns as output a set of interactive visualizations that allow the user to investigate the authenticity of the file. The visualizations are generated from the outcomes of Digital Signal Processing and Machine Learning models. The files are stored in a database, along with their analysis results and annotations. Following a crowdsourcing methodology, users can contribute by annotating files from the dataset concerning their authenticity. The evaluation version of the web application is publicly available online. Full article
(This article belongs to the Special Issue Theory and Applications of Web 3.0 in the Media Sector)
18 pages, 731 KiB  
Review
Business Models for the Internet of Services: State of the Art and Research Agenda
by Jacqueline Zonichenn Reis, Rodrigo Franco Gonçalves, Marcia Terra da Silva and Nikolai Kazantsev
Future Internet 2022, 14(3), 74; https://doi.org/10.3390/fi14030074 - 25 Feb 2022
Cited by 3 | Viewed by 3802
Abstract
The relevance of the Internet of Services (IoS) comes from the global reach of the Internet into everyone's home and daily activities and from the move from a manufacturing-based economy to a service-based economy. The IoS is seen as a new ecosystem where service providers and consumers explore their business networks for service provision and consumption. The scientific literature refers to the IoS as an important cornerstone of Industry 4.0 and the Future Internet; thus, it becomes relevant to study how the IoS interacts with business models. Nevertheless, there is a lack of clarity on this intersection, and a systematic review of IoS-based business models is still missing. This paper presents a systematic review of IoS-based business models and their fields of application. We included studies from the Scopus and Web of Science databases, excluding duplicated papers and short conference versions superseded by later full journal publications. Twenty-three different studies are presented, categorized first by the sub-areas of the IoS and then by field of application. The main finding highlights the opportunities for IoS applications in different fields, offering directions for future research in this new arena. Full article
28 pages, 1252 KiB  
Review
Quantum Key Distribution for 5G Networks: A Review, State of Art and Future Directions
by Mohd Hirzi Adnan, Zuriati Ahmad Zukarnain and Nur Ziadah Harun
Future Internet 2022, 14(3), 73; https://doi.org/10.3390/fi14030073 - 25 Feb 2022
Cited by 13 | Viewed by 7202
Abstract
In recent years, 5G networks and services have become progressively popular among telecommunication providers. Simultaneously, growth in the usage and deployment of smartphone platforms and mobile applications has been phenomenal. Therefore, this paper discusses the current state of the art of 5G technology merged with the unconditional security requirements referred to as Quantum Cryptography. The various domains of Quantum Cryptography are illustrated, including the available protocols, their functionality, and previous implementations in real networks. This paper further identifies research gaps covering critical aspects of how Quantum Cryptography can be realized and effectively utilized in 5G networks, including improving current Quantum Cryptography techniques through efficient key distribution and message sharing between users in 5G networks. Full article
17 pages, 662 KiB  
Article
A Vote-Based Architecture to Generate Classified Datasets and Improve Performance of Intrusion Detection Systems Based on Supervised Learning
by Diogo Teixeira, Silvestre Malta and Pedro Pinto
Future Internet 2022, 14(3), 72; https://doi.org/10.3390/fi14030072 - 25 Feb 2022
Cited by 3 | Viewed by 3323
Abstract
An intrusion detection system (IDS) is an important tool to prevent potential threats to systems and data. Anomaly-based IDSs may deploy machine learning algorithms to classify events as either normal or anomalous and trigger an adequate response. When using supervised learning, these algorithms require classified, rich, and recent datasets. Thus, to foster the performance of these machine learning models, datasets can be generated from different sources in a collaborative approach and trained with multiple algorithms. This paper proposes a vote-based architecture to generate classified datasets and improve the performance of supervised learning-based IDSs. On a regular basis, multiple IDSs in different locations send their logs to a central system that combines and classifies them using different machine learning models and a majority vote system. It then generates a new, classified dataset, on which multiple algorithms are trained to obtain the best updated model to be integrated into the IDSs of the companies involved. To shorten the overall runtimes, the proposed architecture was deployed in Fed4FIRE+ with Ray to distribute the tasks across the available resources. A set of machine learning algorithms and the proposed architecture were assessed. Compared with a baseline scenario, the proposed architecture increased accuracy by 11.5% and precision by 11.2%. Full article
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)
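The majority-vote step at the heart of this architecture, combining per-model labels for each log event into one classification, can be sketched as below. The model names, label strings, and the conservative tie-break are illustrative assumptions, not the paper's exact rules.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model labels for each event. Ties fall back to
    'anomalous' so suspicious events are never silently dropped
    (a conservative choice assumed here)."""
    labels = []
    for event_votes in zip(*predictions.values()):
        top = Counter(event_votes).most_common()
        if len(top) > 1 and top[0][1] == top[1][1]:
            labels.append("anomalous")        # tie-break conservatively
        else:
            labels.append(top[0][0])
    return labels

# Three hypothetical IDS models voting on four events
preds = {
    "random_forest": ["normal", "anomalous", "normal", "anomalous"],
    "svm":           ["normal", "anomalous", "anomalous", "normal"],
    "naive_bayes":   ["normal", "normal", "anomalous", "anomalous"],
}
print(majority_vote(preds))
```

The voted labels then become the ground truth of the newly generated dataset on which the next round of models is trained.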
22 pages, 1487 KiB  
Article
Design of Relay Switching to Combat an Eavesdropper in IoT-NOMA Wireless Networks
by Thanh-Nam Tran, Van-Cuu Ho, Thoai Phu Vo, Khanh Ngo Nhu Tran and Miroslav Voznak
Future Internet 2022, 14(3), 71; https://doi.org/10.3390/fi14030071 - 24 Feb 2022
Cited by 2 | Viewed by 2339
Abstract
The requirements of low latency, low cost, low energy consumption, high flexibility, high network capacity, and high data safety are crucial challenges for future Internet of Things (IoT) wireless networks. Motivated by these challenges, this study presents a novel design of a green cooperative IoT network that employs coupled relays: one IoT relay is selected to forward signals to multiple IoT devices, while another transmits jamming signals to an eavesdropper. For flexibility, all IoT nodes are powered by solar energy sufficient to sustain themselves, so they consume less energy. To reach low latency, the study adopts the emerging non-orthogonal multiple access technique to serve multiple IoT devices simultaneously, along with the simultaneous wireless information and power transfer technique, which transmits wireless data for information processing and energy for energy harvesting. The study sketches a novel transmission block time period framework that plots how a signal travels through an individual IoT model. Maximizing the achievable bit-rate of IoT devices is considered to improve network capacity as well as data safety. To enhance secrecy performance, the remaining IoT relay acts as a friendly jammer, transmitting a jamming signal to the eavesdropper using energy harvested via the power splitting protocol. The results show that the proposed model satisfies the requirements of future green IoT wireless networks. Derivations leading to closed-form expressions are presented and verified by simulation results. The investigated results demonstrate that a friendly jammer based on radio frequency energy harvesting forces the intercept probability of the eavesdropper towards one, while the outage probability of the IoT devices tends towards zero, as the signal-to-noise ratio tends to infinity. Full article
(This article belongs to the Special Issue 6G Wireless Channel Measurements and Models: Trends and Challenges)
19 pages, 1349 KiB  
Article
A Survey on the Use of Graph Convolutional Networks for Combating Fake News
by Iraklis Varlamis, Dimitrios Michail, Foteini Glykou and Panagiotis Tsantilas
Future Internet 2022, 14(3), 70; https://doi.org/10.3390/fi14030070 - 24 Feb 2022
Cited by 14 | Viewed by 5121
Abstract
The combat against fake news and disinformation is an ongoing, multi-faceted task for researchers in the social media and social networks domains. It comprises not only the detection of false facts in published content but also the development of accountability mechanisms that keep a record of the trustworthiness of sources that generate news and, lately, of the networks that deliberately distribute fake information. Toward detecting and handling organized disinformation networks, major social media and social networking sites are currently developing strategies and mechanisms to block such attempts. The role of machine learning techniques, especially neural networks, is crucial in this task. The current work focuses on the popular and promising graph representation techniques and surveys the works that apply Graph Convolutional Networks (GCNs) to the tasks of detecting fake news, fake accounts, and rumors that spread in social networks. It also highlights the benchmark datasets employed in current research for validating the performance of the proposed methods. This work is a comprehensive survey of the use of GCNs in the combat against fake news and aims to be an ideal starting point for future researchers in the field. Full article
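The GCN propagation rule that the surveyed works build on (Kipf and Welling's symmetrically normalized graph convolution) can be shown in a few lines of numpy; the tiny graph and identity weights are illustrative only.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
    mixing each node's features with those of its neighbors."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))         # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Tiny 3-node path graph (e.g., accounts in a sharing network), 2 features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)                                      # identity weights for clarity
out = gcn_layer(A, H, W)
print(out)
```

Stacking a few such layers lets node classifiers (fake account vs. genuine, rumor vs. news) exploit the propagation structure rather than only the content.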
25 pages, 2453 KiB  
Article
Transformer-Based Abstractive Summarization for Reddit and Twitter: Single Posts vs. Comment Pools in Three Languages
by Ivan S. Blekanov, Nikita Tarasov and Svetlana S. Bodrunova
Future Internet 2022, 14(3), 69; https://doi.org/10.3390/fi14030069 - 23 Feb 2022
Cited by 10 | Viewed by 5916
Abstract
Abstractive summarization is a technique for extracting condensed meanings from long texts, with a variety of potential practical applications. Nonetheless, today's abstractive summarization research is limited to testing the models on various types of data, which brings only marginal improvements and does not lead to massive practical employment of the method. In particular, abstractive summarization is not used for social media research, where it would be very useful for opinion and topic mining given the complications that social media data create for other methods of textual analysis. Of all social media, Reddit is most frequently used for testing new neural models of text summarization on large-scale datasets in English, without further testing on real-world, smaller-size data in other languages or from other platforms. Moreover, for social media, summarizing pools of texts (one-author posts, comment threads, discussion cascades, etc.) may bring crucial results relevant for social studies, which have not yet been tested. However, the existing methods of abstractive summarization are not fine-tuned for social media data and have almost never been applied to data from platforms beyond Reddit, nor to comments or non-English user texts. We address these research gaps by fine-tuning the newest Transformer-based neural network models, LongFormer and T5, testing them against BART on real-world data from Reddit, with improvements of up to 2%. Then, we apply the best model (fine-tuned T5) to pools of comments from Reddit and assess the similarity of post and comment summarizations. Further, to overcome the 500-token limitation of T5 for analyzing social media pools that are usually bigger, we apply LongFormer Large and T5 Large to pools of tweets from a large-scale discussion of the Charlie Hebdo massacre in three languages and show that pool summarizations may be used for detecting micro-shifts in the agendas of networked discussions. Our results show, however, that additional training is definitely needed for German and French, as the results for these languages are unsatisfactory, and more fine-tuning is needed even in English for Twitter data. Thus, we show that a 'one-for-all' neural-network summarization model is still out of reach, while fine-tuning for platform affordances works well. We also show that fine-tuned T5 works best for small-scale social media data, but LongFormer is helpful for larger-scale pool summarizations. Full article
22 pages, 2317 KiB  
Article
Adjacency-Information-Entropy-Based Cooperative Name Resolution Approach in ICN
by Jiaqi Li, Jiali You and Haojiang Deng
Future Internet 2022, 14(3), 68; https://doi.org/10.3390/fi14030068 - 23 Feb 2022
Cited by 2 | Viewed by 2242
Abstract
Information-centric networking (ICN) is an emerging network architecture with the potential to address the low-latency and high-reliability requirements of fifth-generation and beyond communication networks (5G/B5G). In ICN architectures that use the identifier–locator separation mode, a name resolution system (NRS) is an important infrastructure for managing and maintaining the mappings between identifiers and locators. To meet the demands of time-sensitive applications, researchers have developed a distributed local NRS that can provide name resolution service within a deterministic latency, meaning it can respond to a name resolution request within a latency upper bound. However, processing name resolution requests only locally cannot take full advantage of the potential of the distributed local NRS. In this paper, we propose a name resolution approach called adjacency-information-entropy-based cooperative name resolution (ACNR). In ACNR, when a name resolution node receives a name resolution request from a user, it can use neighboring name resolution nodes to respond to the request in a parallel processing manner. For this purpose, ACNR uses an information entropy that takes into account the adjacency and latency between name resolution nodes to describe the local structure of nodes efficiently. The proposed approach is extensively validated on simulated networks. Compared with several other approaches, the experimental results show that ACNR can discover more cooperative neighbors with reasonable communication overhead and achieve a higher name resolution success rate. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
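An entropy over a node's neighborhood of the kind this abstract describes can be sketched with Shannon entropy over latency-derived weights. The exact ACNR formula is not given in the abstract, so the inverse-latency weighting below is an assumption for illustration only.

```python
import math

def adjacency_entropy(neighbor_latencies):
    """Shannon entropy of a node's neighborhood, weighting each neighbor by
    inverse latency: a node with many similarly fast neighbors scores high,
    which favors parallel cooperative resolution. Illustrative only -
    the ACNR paper defines its own adjacency information entropy."""
    weights = [1.0 / max(lat, 1e-9) for lat in neighbor_latencies.values()]
    total = sum(weights)
    probs = [w / total for w in weights]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Node A: three neighbors at an equal 2 ms latency -> maximal entropy log2(3)
print(adjacency_entropy({"B": 2.0, "C": 2.0, "D": 2.0}))
# Node E: one dominant fast neighbor -> much lower entropy
print(adjacency_entropy({"F": 0.1, "G": 50.0, "H": 50.0}))
```

A node could prefer high-entropy neighborhoods when fanning out a request, since the work splits evenly across comparably fast cooperators.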