Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

27 pages, 5489 KiB  
Article
A New AI-Based Semantic Cyber Intelligence Agent
by Fahim Sufi
Future Internet 2023, 15(7), 231; https://doi.org/10.3390/fi15070231 - 29 Jun 2023
Cited by 8 | Viewed by 2212
Abstract
The surge in cybercrime has emerged as a pressing concern in contemporary society due to its far-reaching financial, social, and psychological repercussions on individuals. Beyond inflicting monetary losses, cyber-attacks exert adverse effects on the social fabric and psychological well-being of the affected individuals. To mitigate the deleterious consequences of cyber threats, the adoption of an intelligent agent-based solution to enhance the speed and comprehensiveness of cyber intelligence is advocated. In this paper, a novel cyber intelligence solution is proposed, employing four semantic agents that interact autonomously to acquire crucial cyber intelligence pertaining to any given country. The solution leverages a combination of techniques, including a convolutional neural network (CNN), sentiment analysis, exponential smoothing, latent Dirichlet allocation (LDA), term frequency-inverse document frequency (TF-IDF), Porter stemming, and others, to analyse data from both social media and web sources. The proposed method was evaluated from 13 October 2022 to 6 April 2023, using a dataset comprising 37,386 tweets generated by 30,706 users across 54 languages. To handle non-English content, a total of 8199 HTTP requests were made for translation. Additionally, the system processed 238,220 cyber threat records from the web. Within a remarkably brief duration of 6 s, the system autonomously generated a comprehensive cyber intelligence report covering 7 critical dimensions of cyber intelligence for countries such as Russia, Ukraine, China, Iran, India, and Australia.
(This article belongs to the Special Issue Semantic Web Services for Multi-Agent Systems)
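
A minimal sketch of the kind of text-mining chain this abstract names (Porter stemming, TF-IDF features, LDA topic extraction), using NLTK and scikit-learn; the sample tweets, topic count, and all parameters are illustrative assumptions rather than the authors' actual pipeline:

```python
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical sample tweets standing in for the collected social media data.
tweets = [
    "Ransomware group claims attack on hospital network",
    "New phishing campaign targets banking customers",
    "DDoS attack disrupts government web services",
]

stemmer = PorterStemmer()
stemmed = [" ".join(stemmer.stem(tok) for tok in doc.split()) for doc in tweets]

# TF-IDF features could feed a downstream classifier (e.g., a CNN agent).
tfidf = TfidfVectorizer(stop_words="english").fit_transform(stemmed)

# LDA works on raw term counts; extract two latent cyber-threat topics.
counts_vec = CountVectorizer(stop_words="english")
counts = counts_vec.fit_transform(stemmed)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = counts_vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}: {top}")
```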

17 pages, 18103 KiB  
Article
RSSI and Device Pose Fusion for Fingerprinting-Based Indoor Smartphone Localization Systems
by Imran Moez Khan, Andrew Thompson, Akram Al-Hourani, Kandeepan Sithamparanathan and Wayne S. T. Rowe
Future Internet 2023, 15(6), 220; https://doi.org/10.3390/fi15060220 - 20 Jun 2023
Cited by 3 | Viewed by 1283
Abstract
Complementing RSSI measurements at anchors with onboard smartphone accelerometer measurements is a popular research direction for improving the accuracy of indoor localization systems. This fusion can be performed at different levels; for example, many studies have used pedestrian dead reckoning (PDR) and a filtering method at the algorithm level. In this study, a novel conceptual framework was developed and applied at the data level that first uses accelerometer measurements to classify the smartphone’s device pose and then combines this with RSSI measurements. The framework was explored using neural networks with room-scale experimental data obtained from a Bluetooth Low Energy (BLE) setup. Consistent accuracy improvement was obtained for the output localization classes (zones), with an average overall accuracy improvement of 10.7 percentage points for the RSSI-and-device-pose framework over RSSI-only localization.
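
To illustrate what data-level fusion of RSSI and device pose could look like, here is a hedged sketch that appends a one-hot pose class to the RSSI vector before zone classification; the anchor count, pose classes, and synthetic data are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, n_anchors, n_poses, n_zones = 500, 4, 3, 5

rssi = rng.uniform(-90, -40, size=(n, n_anchors))  # dBm at each BLE anchor
pose = rng.integers(0, n_poses, size=n)            # e.g., in-hand/at-ear/in-pocket
zone = rng.integers(0, n_zones, size=n)            # ground-truth zone labels

pose_onehot = np.eye(n_poses)[pose]
X = np.hstack([rssi, pose_onehot])                 # fused feature vector per sample

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, zone)
print("training accuracy:", clf.score(X, zone))    # meaningless on random labels
```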

19 pages, 3172 KiB  
Article
BERT4Loc: BERT for Location—POI Recommender System
by Syed Raza Bashir, Shaina Raza and Vojislav B. Misic
Future Internet 2023, 15(6), 213; https://doi.org/10.3390/fi15060213 - 12 Jun 2023
Cited by 3 | Viewed by 1860
Abstract
Recommending points of interest (POI) is a challenging task that requires extracting comprehensive location data from location-based social media platforms. To provide effective location-based recommendations, it is important to analyze users’ historical behavior and preferences. In this study, we present a sophisticated location-aware recommendation system that uses Bidirectional Encoder Representations from Transformers (BERT) to offer personalized location-based suggestions. Our model combines location information and user preferences to provide more relevant recommendations than models that simply predict the next POI in a sequence. In experiments on two benchmark datasets, our BERT-based model surpasses baseline models in terms of HR by a significant margin of 6% over the second-best performing baseline. Furthermore, our model demonstrates a gain of 1–2% in NDCG over the second-best baseline. These results indicate the superior performance and effectiveness of our BERT-based approach when evaluated on the HR and NDCG metrics. Additional experiments further confirm the quality of the model’s recommendations.
(This article belongs to the Section Techno-Social Smart Systems)
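
The two reported metrics, HR (hit ratio) and NDCG, can be computed as follows for a single ranked recommendation list; this is a generic sketch of the standard definitions, with made-up POI identifiers:

```python
import math

def hit_ratio_at_k(ranked, target, k=10):
    # 1 if the ground-truth POI appears in the top-k recommendations.
    return 1.0 if target in ranked[:k] else 0.0

def ndcg_at_k(ranked, target, k=10):
    # Single relevant item: DCG = 1/log2(rank + 1), and IDCG = 1.
    if target in ranked[:k]:
        rank = ranked.index(target) + 1
        return 1.0 / math.log2(rank + 1)
    return 0.0

ranked_pois = ["cafe_12", "museum_3", "park_7", "gym_1"]   # hypothetical output
print(hit_ratio_at_k(ranked_pois, "museum_3", k=3))        # 1.0
print(ndcg_at_k(ranked_pois, "museum_3", k=3))             # 1/log2(3) ≈ 0.631
```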

45 pages, 2869 KiB  
Review
Securing Wireless Sensor Networks Using Machine Learning and Blockchain: A Review
by Shereen Ismail, Diana W. Dawoud and Hassan Reza
Future Internet 2023, 15(6), 200; https://doi.org/10.3390/fi15060200 - 30 May 2023
Cited by 12 | Viewed by 4383
Abstract
As a key technological enabler of the Internet of Things (IoT), Wireless Sensor Networks (WSNs) are prone to different kinds of cyberattacks. WSNs have unique characteristics and several limitations, which complicate the design of effective attack prevention and detection techniques. This paper aims to provide a comprehensive understanding of the fundamental principles underlying cybersecurity in WSNs. In addition to current and envisioned solutions that have been studied in detail, this review primarily focuses on state-of-the-art Machine Learning (ML) and Blockchain (BC) security techniques by studying and analyzing 164 up-to-date publications highlighting security aspects in WSNs. The paper then discusses integrating BC and ML towards developing a lightweight security framework that consists of two lines of defence, i.e., cyberattack detection and cyberattack prevention in WSNs, emphasizing the relevant design insights and challenges. The paper concludes by presenting a proposed integrated BC and ML solution highlighting potential BC and ML algorithms underpinning a less computationally demanding solution.
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT II)

17 pages, 7028 KiB  
Article
A Distributed Sensor System Based on Cloud-Edge-End Network for Industrial Internet of Things
by Mian Wang, Cong’an Xu, Yun Lin, Zhiyi Lu, Jinlong Sun and Guan Gui
Future Internet 2023, 15(5), 171; https://doi.org/10.3390/fi15050171 - 30 Apr 2023
Cited by 7 | Viewed by 1930
Abstract
The Industrial Internet of Things (IIoT) refers to the application of the IoT in the industrial field. The development of fifth-generation (5G) communication technology has accelerated the world’s entry into the era of the industrial revolution and has also promoted the overall optimization of the IIoT. In the IIoT environment, challenges such as complex operating conditions and diverse data transmission have become increasingly prominent, so collecting and processing large amounts of real-time data from various devices in a timely, efficient, and reasonable manner is a significant problem. To address these issues, we propose a three-level networking model based on distributed self-organizing sensor networks and cloud server platforms. This model can collect monitoring data for a variety of industrial scenarios, enables the processing and storage of key information in a timely manner, reduces data transmission and storage costs, and improves data transmission reliability and efficiency. Additionally, we have designed a feature fusion network to further enrich the feature information and improve the accuracy of industrial data recognition. The system also includes data preprocessing and data visualization capabilities. Finally, we discuss how to further preprocess and visualize the collected dataset and provide a specific algorithm analysis process using a large manipulator dataset as an example.
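
A hedged sketch of the general feature-fusion idea, concatenating features from two sensor branches before classification; the layer sizes and inputs are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Toy two-branch fusion: each branch embeds one sensor stream,
    and the concatenated features feed a shared classification head."""
    def __init__(self, dim_a=16, dim_b=8, n_classes=4):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Linear(dim_a, 32), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Linear(dim_b, 32), nn.ReLU())
        self.head = nn.Linear(64, n_classes)   # fused features -> class logits

    def forward(self, xa, xb):
        fused = torch.cat([self.branch_a(xa), self.branch_b(xb)], dim=1)
        return self.head(fused)

model = FusionNet()
logits = model(torch.randn(2, 16), torch.randn(2, 8))
print(logits.shape)   # torch.Size([2, 4])
```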

18 pages, 432 KiB  
Review
Opportunities for Early Detection and Prediction of Ransomware Attacks against Industrial Control Systems
by Mazen Gazzan and Frederick T. Sheldon
Future Internet 2023, 15(4), 144; https://doi.org/10.3390/fi15040144 - 7 Apr 2023
Cited by 8 | Viewed by 3057
Abstract
Industrial control systems (ICS) and supervisory control and data acquisition (SCADA) systems, which control critical infrastructure such as power plants and water treatment facilities, have unique characteristics that make them vulnerable to ransomware attacks. These systems are often outdated and run on proprietary software, making them difficult to protect with traditional cybersecurity measures. The limited visibility into these systems and the lack of effective threat intelligence pose significant challenges to the early detection and prediction of ransomware attacks, which have become a growing concern in recent years: they can cause major disruptions to critical infrastructure and result in substantial financial losses. Despite the increasing threat, predicting ransomware attacks on ICS remains a significant challenge for the cybersecurity community, precisely because of the proprietary software and limited operational visibility of these systems. In this review paper, we examine the challenges associated with predicting ransomware attacks on industrial systems and the existing approaches for mitigating these risks. We also discuss the need for a multi-disciplinary approach involving close collaboration between the cybersecurity and ICS communities. We aim to provide a comprehensive overview of the current state of ransomware prediction for industrial systems and to identify opportunities for future research and development in this area.
(This article belongs to the Special Issue Cyber Security Challenges in the New Smart Worlds)

15 pages, 5002 KiB  
Article
Future of Drug Discovery: The Synergy of Edge Computing, Internet of Medical Things, and Deep Learning
by Mohammad (Behdad) Jamshidi, Omid Moztarzadeh, Alireza Jamshidi, Ahmed Abdelgawad, Ayman S. El-Baz and Lukas Hauer
Future Internet 2023, 15(4), 142; https://doi.org/10.3390/fi15040142 - 7 Apr 2023
Cited by 12 | Viewed by 2700
Abstract
The global spread of COVID-19 highlights the urgency of quickly finding drugs and vaccines and suggests that similar challenges will arise in the future. This underscores the need for ongoing efforts to overcome the obstacles involved in the development of potential treatments. Although some progress has been made in the use of Artificial Intelligence (AI) in drug discovery, virologists, pharmaceutical companies, and investors seek more long-term solutions and greater investment in emerging technologies. One potential solution to aid in the drug-development process is to combine the capabilities of the Internet of Medical Things (IoMT), edge computing (EC), and deep learning (DL). Some practical frameworks and techniques utilizing EC, IoMT, and DL have been proposed for the monitoring and tracking of infected individuals or high-risk areas. However, these technologies have not been widely utilized in drug clinical trials. Given the time-consuming nature of traditional drug- and vaccine-development methods, there is a need for a new AI-based platform that can revolutionize the industry. One approach involves utilizing smartphones equipped with medical sensors to collect and transmit real-time physiological and healthcare information on clinical-trial participants to the nearest edge nodes (EN). This allows the verification of a vast amount of medical data for a large number of individuals in a short time frame, without the restrictions of latency, bandwidth, or security constraints. The collected information can be monitored by physicians and researchers to assess a vaccine’s performance.

18 pages, 995 KiB  
Review
Influential Factors in the Design and Development of a Sustainable Web3/Metaverse and Its Applications
by Reza Aria, Norm Archer, Moein Khanlari and Bharat Shah
Future Internet 2023, 15(4), 131; https://doi.org/10.3390/fi15040131 - 30 Mar 2023
Cited by 9 | Viewed by 4247
Abstract
This paper summarizes the work of many different authors, industries, and countries by introducing important and influential factors that will help in the development, successful adoption, and sustainable use of the Web3/metaverse and its applications. From the current state-of-the-art literature, we derive four essential elements required for the appropriate implementation of a metaverse and its applications: (1) appropriate decentralization, (2) a good user experience, (3) appropriate translation and synchronization to the real world, and (4) a viable economy. The future of Web3 is all about decentralization, and blockchain can play a significant part in the development of the metaverse. This paper also sheds light on some of the most relevant open issues and challenges currently facing the Web3/metaverse and its applications, with the hope that this discourse will help to encourage the development of appropriate solutions.

36 pages, 5618 KiB  
Review
Quantum Computing for Healthcare: A Review
by Raihan Ur Rasool, Hafiz Farooq Ahmad, Wajid Rafique, Adnan Qayyum, Junaid Qadir and Zahid Anwar
Future Internet 2023, 15(3), 94; https://doi.org/10.3390/fi15030094 - 27 Feb 2023
Cited by 27 | Viewed by 19048
Abstract
In recent years, the interdisciplinary field of quantum computing has rapidly developed and garnered substantial interest from both academia and industry due to its ability to process information in fundamentally different ways, leading to hitherto unattainable computational capabilities. However, despite its potential, the full extent of quantum computing’s impact on healthcare remains largely unexplored. This survey paper presents the first systematic analysis of the various capabilities of quantum computing in enhancing healthcare systems, with a focus on its potential to revolutionize compute-intensive healthcare tasks such as drug discovery, personalized medicine, DNA sequencing, medical imaging, and operational optimization. Through a comprehensive analysis of existing literature, we have developed taxonomies across different dimensions, including background and enabling technologies, applications, requirements, architectures, security, open issues, and future research directions, providing a panoramic view of the quantum computing paradigm for healthcare. Our survey aims to aid both new and experienced researchers in quantum computing and healthcare by helping them understand the current research landscape, identifying potential opportunities and challenges, and making informed decisions when designing new architectures and applications for quantum computing in healthcare.
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)

34 pages, 2792 KiB  
Article
BPMNE4IoT: A Framework for Modeling, Executing and Monitoring IoT-Driven Processes
by Yusuf Kirikkayis, Florian Gallik, Michael Winter and Manfred Reichert
Future Internet 2023, 15(3), 90; https://doi.org/10.3390/fi15030090 - 22 Feb 2023
Cited by 11 | Viewed by 3744
Abstract
The Internet of Things (IoT) enables a variety of smart applications, including the smart home, smart manufacturing, and the smart city. By enhancing Business Process Management Systems with IoT capabilities, the execution and monitoring of business processes can be significantly improved. Providing holistic support for modeling, executing and monitoring IoT-driven processes, however, constitutes a challenge. Existing process modeling and process execution languages, such as BPMN 2.0, are unable to fully capture the IoT characteristics (e.g., asynchronicity and parallelism) of IoT-driven processes. In this article, we present BPMNE4IoT, a holistic framework for modeling, executing and monitoring IoT-driven processes. We introduce various artifacts and events based on the BPMN 2.0 metamodel that enable the desired IoT awareness of business processes. The framework is evaluated in two real-world scenarios from two different domains. Moreover, we present a user study comparing BPMNE4IoT and BPMN 2.0. In particular, this study confirmed that the BPMNE4IoT framework facilitates the support of IoT-driven processes.
(This article belongs to the Special Issue IoT-Based BPM for Smart Environments)

17 pages, 2570 KiB  
Article
Machine Learning for Data Center Optimizations: Feature Selection Using Shapley Additive exPlanation (SHAP)
by Yibrah Gebreyesus, Damian Dalton, Sebastian Nixon, Davide De Chiara and Marta Chinnici
Future Internet 2023, 15(3), 88; https://doi.org/10.3390/fi15030088 - 21 Feb 2023
Cited by 13 | Viewed by 4921
Abstract
The need for artificial intelligence (AI) and machine learning (ML) models to optimize data center (DC) operations grows as the volume of operations management data increases tremendously. These strategies can assist operators in better understanding their DC operations and help them make informed decisions upfront to maintain service reliability and availability. The strategies include developing models that optimize energy efficiency, identify inefficient resource utilization and scheduling policies, and predict outages. In addition to model hyperparameter tuning, feature subset selection (FSS) is critical for identifying the relevant features for effectively modeling DC operations, providing insight into the data, optimizing model performance, and reducing computational expense. Hence, this paper introduces the Shapley Additive exPlanation (SHAP) values method, a class of additive feature attribution methods that is rarely discussed in the literature as a feature selection tool, and compares its effectiveness with several commonly used, importance-based feature selection methods. The methods were tested on real DC operations data streams obtained from the ENEA CRESCO6 cluster with 20,832 cores. To demonstrate the effectiveness of SHAP compared to the other methods, we selected the top ten most important features from each method, retrained the predictive models, and evaluated their performance using the MAE, RMSE, and MAPE evaluation criteria. The results presented in this paper demonstrate that the predictive models trained using features selected with the SHAP-assisted method performed well, with lower error and a reasonable execution time compared to the other methods.
(This article belongs to the Special Issue Machine Learning Perspective in the Convolutional Neural Network Era)
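
A minimal sketch of SHAP-assisted feature subset selection as the abstract describes it: rank features by mean absolute SHAP value, keep the top ten, and retrain. Synthetic regression data stands in for the CRESCO6 telemetry:

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=300, n_features=25, noise=0.1, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)           # shape: (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)    # mean |SHAP| per feature
top10 = np.argsort(importance)[-10:]             # indices of the top-10 features

# Retrain on the reduced feature subset and evaluate (training MAE shown).
reduced = RandomForestRegressor(n_estimators=50, random_state=0)
reduced.fit(X[:, top10], y)
print("MAE, top-10 SHAP features:",
      mean_absolute_error(y, reduced.predict(X[:, top10])))
```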

23 pages, 2561 KiB  
Article
HealthBlock: A Framework for a Collaborative Sharing of Electronic Health Records Based on Blockchain
by Leina Abdelgalil and Mohamed Mejri
Future Internet 2023, 15(3), 87; https://doi.org/10.3390/fi15030087 - 21 Feb 2023
Cited by 10 | Viewed by 2544
Abstract
Electronic health records (EHRs) play an important role in our lives. However, most of the time they are scattered across different databases belonging to distinct institutions (hospitals, laboratories, clinics, etc.) geographically distributed across one or many countries. Due to this decentralization and the heterogeneity of the systems involved, medical staff face difficulties in collaborating correctly by sharing, protecting, and tracking their patients’ electronic health-record histories to provide them with the best care. Additionally, patients have no control over their private EHRs. Blockchain has many promising future uses in the healthcare domain because, in contrast to the classical client–server architectures used to manage EHRs, it provides a better solution for sharing data while preserving integrity, interoperability, and availability. This paper proposes a framework called HealthBlock for collaboratively sharing EHRs while preserving their privacy. Different technologies have been combined to achieve this goal: the InterPlanetary File System (IPFS) stores and shares patients’ EHRs in distributed off-chain storage and ensures record immutability; Hyperledger Indy gives patients full control over their EHRs; and Hyperledger Fabric stores the patient-access control policy and delegations.

13 pages, 2344 KiB  
Article
Forest Fire Detection and Notification Method Based on AI and IoT Approaches
by Kuldoshbay Avazov, An Eui Hyun, Alabdulwahab Abrar Sami S, Azizbek Khaitov, Akmalbek Bobomirzaevich Abdusalomov and Young Im Cho
Future Internet 2023, 15(2), 61; https://doi.org/10.3390/fi15020061 - 31 Jan 2023
Cited by 24 | Viewed by 6465
Abstract
There is a high risk of bushfire in spring and autumn, when the air is dry. Do not bring any flammable substances, such as matches or cigarettes; cooking and wood fires are permitted only in designated areas. These are some of the regulations that are enforced when hiking in a vegetated forest. However, humans tend to disobey or disregard guidelines and the law. Therefore, to preemptively stop people from accidentally starting a fire, we created a technique that allows early fire detection and classification to ensure the utmost safety of the living things in the forest. Some relevant studies on forest fire detection have been conducted in the past few years, but there are still insufficient studies on early fire detection and notification systems that monitor fire disasters in real time using advanced approaches. Therefore, we came up with a solution based on the convergence of the Internet of Things (IoT) and You Only Look Once Version 5 (YOLOv5). The experimental results show that IoT devices were able to validate some of the falsely detected or undetected fires that YOLOv5 reported; each such report is recorded and sent to the fire department for further verification and validation. Finally, we compared the performance of our method with that of recently reported fire detection approaches, employing widely used performance metrics to test the achieved fire classification results.
(This article belongs to the Special Issue Machine Learning Perspective in the Convolutional Neural Network Era)
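
A hedged sketch of the detection-plus-IoT cross-check idea: YOLOv5 flags candidate fires and an IoT sensor reading validates the alert before notification. The fire-trained weights file, frame path, and sensor stub are assumptions, not the authors' code:

```python
import torch

# Assumes a YOLOv5 model fine-tuned on fire imagery ("fire_best.pt" is hypothetical).
model = torch.hub.load("ultralytics/yolov5", "custom", path="fire_best.pt")

def read_smoke_sensor():
    # Placeholder for a real IoT sensor query (e.g., over MQTT or HTTP).
    return 0.82  # normalized smoke level

results = model("forest_cam_frame.jpg")          # hypothetical camera frame
detections = results.pandas().xyxy[0]            # bounding boxes as a DataFrame
fire_boxes = detections[detections["name"] == "fire"]

if len(fire_boxes) > 0 and read_smoke_sensor() > 0.5:
    print("Validated fire alert -> notify fire department")
elif len(fire_boxes) > 0:
    print("Vision-only detection; awaiting sensor confirmation")
```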

29 pages, 3716 KiB  
Article
Analysis of Lightweight Cryptographic Algorithms on IoT Hardware Platform
by Mohammed El-hajj, Hussien Mousawi and Ahmad Fadlallah
Future Internet 2023, 15(2), 54; https://doi.org/10.3390/fi15020054 - 30 Jan 2023
Cited by 12 | Viewed by 4757
Abstract
Highly constrained devices that are interconnected and interact to complete a task are being used in a diverse range of new fields. The Internet of Things (IoT), cyber-physical systems, distributed control systems, vehicular systems, wireless sensor networks, tele-medicine, and the smart grid are a few examples of these fields. In any of these contexts, security and privacy might be essential aspects, and research on secure communication in IoT networks is a highly contested topic. One method for ensuring secure data transmission is cryptography. Because IoT devices have limited resources, such as power, memory, and batteries, IoT networks have popularized the term “lightweight cryptography”: algorithms designed to efficiently protect data while using minimal resources. In this research, we evaluated and benchmarked lightweight symmetric ciphers for resource-constrained devices. The evaluation was performed on two widely used platforms: Arduino and Raspberry Pi. In the first part, we implemented 39 block ciphers on an ATMEGA328p microcontroller and analyzed them in terms of speed, cost, and energy efficiency during encryption and decryption for different block and key sizes. In the second part, the 2nd-round NIST candidates (80 stream and block cipher algorithms) were added to the first-part ciphers in a comprehensive analysis for equivalent block and key sizes in terms of latency and energy efficiency.
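
A minimal sketch of the kind of benchmarking harness such an evaluation implies, timing one symmetric cipher over many runs on a board such as a Raspberry Pi; PyCryptodome's AES stands in for the evaluated lightweight ciphers, and the run count is arbitrary:

```python
import time
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)      # 128-bit key
block = get_random_bytes(16)    # one 128-bit plaintext block
runs = 10_000

start = time.perf_counter()
for _ in range(runs):
    # ECB on a single block isolates raw cipher cost (not a secure mode choice).
    AES.new(key, AES.MODE_ECB).encrypt(block)
elapsed = time.perf_counter() - start

print(f"avg encryption latency: {elapsed / runs * 1e6:.2f} µs/block")
```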

18 pages, 968 KiB  
Article
The Cloud-to-Edge-to-IoT Continuum as an Enabler for Search and Rescue Operations
by Leonardo Militano, Adriana Arteaga, Giovanni Toffetti and Nathalie Mitton
Future Internet 2023, 15(2), 55; https://doi.org/10.3390/fi15020055 - 30 Jan 2023
Cited by 10 | Viewed by 4050
Abstract
When a natural or human disaster occurs, time is critical and often of vital importance. Data from the incident area containing the information needed to guide search and rescue (SAR) operations and improve intervention effectiveness should be collected as quickly and as accurately as possible. Nowadays, rescuers are assisted by different robots able to fly, climb or crawl, equipped with different sensors and wireless communication means. However, the heterogeneity of devices and data, together with strict low-delay requirements, means that these technologies are not yet used to their full potential. Cloud and edge technologies have shown the capability to support the Internet of Things (IoT), complementing it with additional resources and functionalities. Nonetheless, building a continuum from the IoT to the edge and to the cloud is still an open challenge, and SAR operations would benefit strongly from such a continuum. Distributed applications and advanced resource orchestration solutions over the continuum, in combination with proper software stacks reaching out to the edge of the network, may enhance the response time and effectiveness of SAR interventions. This paper discusses the challenges facing SAR operations and the technologies and solutions for the cloud-to-edge-to-IoT continuum.
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)

20 pages, 2842 KiB  
Article
Envisioning Architecture of Metaverse Intensive Learning Experience (MiLEx): Career Readiness in the 21st Century and Collective Intelligence Development Scenario
by Eman AbuKhousa, Mohamed Sami El-Tahawy and Yacine Atif
Future Internet 2023, 15(2), 53; https://doi.org/10.3390/fi15020053 - 30 Jan 2023
Cited by 16 | Viewed by 4640
Abstract
The metaverse presents a new opportunity to construct personalized learning paths and to promote practices that scale the development of future skills and collective intelligence. The attitudes, knowledge and skills that are necessary to face the challenges of the 21st century should be developed through iterative cycles of continuous learning, where learners are enabled to experience, reflect, and produce new ideas while participating in a collective creativity process. In this paper, we propose an architecture for a metaverse-intensive learning experience (MiLEx) platform, with an illustrative scenario that reinforces the development of 21st century career practices and collective intelligence. The learning ecosystem of MiLEx integrates four key elements: (1) key players that define the main actors and their roles in the learning process; (2) a learning context that defines the learning space and the networks of expected interactions among human and non-human objects; (3) experiential learning instances that deliver education via a real-life–virtual merge; and (4) technology support for building practice communities online, developing experiential cycles and transforming knowledge between human and non-human objects within the community. The proposed MiLEx architecture incorporates sets of technological and data components to (1) discover/profile learners and design learner-centric, theoretically grounded and immersive learning experiences; (2) create elements and experiential learning scenarios; (3) analyze learners’ interactive and behavioral patterns; (4) support the emergence of collective intelligence; (5) assess learning outcomes and monitor the learners’ maturity process; and (6) evaluate experienced learning and recommend future experiences. We also present the MiLEx continuum as a cyclic flow of information to promote immersive learning. Finally, we discuss some open issues for increasing the learning value and propose some future work suggestions to further shape the transformative potential of metaverse-based learning environments.
(This article belongs to the Special Issue Software Engineering and Data Science II)

22 pages, 3265 KiB  
Article
Engineering Resource-Efficient Data Management for Smart Cities with Apache Kafka
by Theofanis P. Raptis, Claudio Cicconetti, Manolis Falelakis, Grigorios Kalogiannis, Tassos Kanellos and Tomás Pariente Lobo
Future Internet 2023, 15(2), 43; https://doi.org/10.3390/fi15020043 - 22 Jan 2023
Cited by 8 | Viewed by 3558
Abstract
In terms of the calibre and variety of services offered to end users, smart city management is undergoing a dramatic transformation. The parties involved in delivering pervasive applications can now solve key issues in the big data value chain, including data gathering, analysis, processing, storage, curation, and real-world data visualisation. This trend is being driven by Industry 4.0, which calls for the servitisation of data and products across all industries, including the field of smart cities, where people, sensors, and technology work closely together. In order to implement reactive services such as situational awareness, video surveillance, and geo-localisation while constantly preserving the safety and privacy of affected persons, the data generated by omnipresent devices needs to be processed quickly. This paper proposes a modular architecture to (i) leverage cutting-edge technologies for data acquisition, management, and distribution (such as Apache Kafka and Apache NiFi); (ii) develop a multi-layer engineering solution for revealing valuable and hidden societal knowledge in smart cities processing multi-modal, real-time, and heterogeneous data flows; and (iii) address the key challenges in tasks involving complex data flows and offer general guidelines to solve them. From an experimental setting performed in collaboration with leading industrial technical departments, we derived guidelines for creating an effective system for the monitoring and servitisation of smart city assets, with a scalable platform that has proven its usefulness in numerous smart city use cases with various needs. Ultimately, when deployed in production, the proposed data platform will contribute toward the goal of revealing valuable and hidden societal knowledge in the context of smart cities.
(This article belongs to the Special Issue Network Cost Reduction in Cloud and Fog Computing Environments)
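
A hedged sketch of the data-acquisition layer with Apache Kafka: a gateway publishing smart-city events to a topic via kafka-python. The broker address, topic name, and event schema are illustrative assumptions:

```python
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                     # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Hypothetical smart-city event from a geo-localisation sensor.
event = {
    "sensor_id": "cam-042",
    "type": "geo-localisation",
    "lat": 43.72,
    "lon": 10.40,
    "ts": time.time(),
}
producer.send("smartcity.events", value=event)   # asynchronous publish
producer.flush()                                 # block until delivered
```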

35 pages, 13088 KiB  
Review
From 5G to 6G—Challenges, Technologies, and Applications
by Ahmed I. Salameh and Mohamed El Tarhuni
Future Internet 2022, 14(4), 117; https://doi.org/10.3390/fi14040117 - 12 Apr 2022
Cited by 49 | Viewed by 9079
Abstract
As the deployment of 5G mobile radio networks gains momentum across the globe, the wireless research community is already planning the successor of 5G. In this paper, we highlight the shortcomings of 5G in meeting the needs of more data-intensive, low-latency, and ultra-high-reliability applications. We then discuss the salient characteristics of the 6G network following a hierarchical approach including the social, economic, and technological aspects. We also discuss some of the key technologies expected to support the move towards 6G. Finally, we quantify and summarize the research work related to beyond 5G and 6G networks through an extensive search of publications and research groups and present a possible timeline for 6G activities.

28 pages, 1891 KiB  
Review
ML-Based 5G Network Slicing Security: A Comprehensive Survey
by Ramraj Dangi, Akshay Jadhav, Gaurav Choudhary, Nicola Dragoni, Manas Kumar Mishra and Praveen Lalwani
Future Internet 2022, 14(4), 116; https://doi.org/10.3390/fi14040116 - 8 Apr 2022
Cited by 37 | Viewed by 8713
Abstract
Fifth-generation networks efficiently support and fulfill the demands of mobile broadband and communication services. There has been a continuing advancement from 4G to 5G networks, with 5G mainly providing three services: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communications (URLLC). Since it is difficult to provide all of these services on one physical network, the 5G network is partitioned into multiple virtual networks called “slices”. These slices customize these unique services and enable the network to be reliable and to fulfill the needs of its users. This phenomenon is called network slicing. Security is a critical concern in network slicing as adversaries have evolved to become more competent and often employ new attack strategies. This study focused on the security issues that arise during the network slice lifecycle. Machine learning and deep learning algorithm solutions were applied in the planning and design, construction and deployment, monitoring, fault detection, and security phases of the slices. This paper outlines the 5G network slicing concept, its layers and architectural framework, and the prevention of attacks, threats, and issues that represent how network slicing influences the 5G network. This paper also provides a comparison of existing surveys and maps out taxonomies to illustrate various machine learning solutions for different application parameters and network functions, along with significant contributions to the field.
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

18 pages, 3711 KiB  
Article
Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection
by João Vitorino, Nuno Oliveira and Isabel Praça
Future Internet 2022, 14(4), 108; https://doi.org/10.3390/fi14040108 - 29 Mar 2022
Cited by 14 | Viewed by 8105
Abstract
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
(This article belongs to the Topic Cyber Security and Critical Infrastructures)
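
A deliberately simplified sketch of the core idea stated in the abstract (perturbations adapted to each class's characteristics so the perturbed tabular samples stay coherent); this is an illustrative reduction, not the published A2PM algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # tabular network-traffic features
y = rng.integers(0, 2, size=200)         # benign / attack labels (synthetic)

def class_conditional_perturb(X, y, eps=0.05):
    """Perturb each sample using a scale learned from its own class,
    so the shift stays within that class's typical feature spread."""
    X_adv = X.copy()
    for c in np.unique(y):
        mask = y == c
        scale = X[mask].std(axis=0)              # per-class feature spread
        noise = rng.uniform(-1, 1, size=X[mask].shape)
        X_adv[mask] += eps * scale * noise       # class-coherent perturbation
    return X_adv

X_adv = class_conditional_perturb(X, y)
print("max per-feature shift:", np.abs(X_adv - X).max().round(3))
```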

30 pages, 2168 KiB  
Review
Self-Organizing Networks for 5G and Beyond: A View from the Top
by Andreas G. Papidas and George C. Polyzos
Future Internet 2022, 14(3), 95; https://doi.org/10.3390/fi14030095 - 17 Mar 2022
Cited by 19 | Viewed by 8542
Abstract
We describe self-organizing network (SON) concepts and architectures and their potential to play a central role in 5G deployment and next-generation networks. Our focus is on the basic SON use case applied to radio access networks (RAN), which is self-optimization. We analyze SON applications’ rationale and operation, the design and dimensioning of SON systems, possible deficiencies and conflicts that occur through the parallel operation of functions, and describe the strong reliance on machine learning (ML) and artificial intelligence (AI). Moreover, we present and comment on very recent proposals for SON deployment in 5G networks. Typical examples include the binding of SON systems with techniques such as Network Function Virtualization (NFV), Cloud RAN (C-RAN), Ultra-Reliable Low Latency Communications (URLLC), massive Machine-Type Communication (mMTC) for IoT, and automated backhauling, which lead the way towards the adoption of SON techniques in Beyond 5G (B5G) networks.
(This article belongs to the Special Issue 5G Enabling Technologies and Wireless Networking)

27 pages, 1227 KiB  
Article
A Survey on Intrusion Detection Systems for Fog and Cloud Computing
by Victor Chang, Lewis Golightly, Paolo Modesti, Qianwen Ariel Xu, Le Minh Thao Doan, Karl Hall, Sreeja Boddu and Anna Kobusińska
Future Internet 2022, 14(3), 89; https://doi.org/10.3390/fi14030089 - 13 Mar 2022
Cited by 38 | Viewed by 8584
Abstract
The rapid advancement of internet technologies has dramatically increased the number of connected devices. This has created a huge attack surface that requires the deployment of effective and practical countermeasures to protect network infrastructures from the harm that cyber-attacks can cause. Hence, there is an absolute need to differentiate the boundaries of personal information in cloud and fog computing globally and to adopt specific information security policies and regulations. The goal of a security policy and framework for cloud and fog computing is to protect end-users and their information, reduce task-based operations, aid in compliance, and create standards for expected user actions, all based on established rules for cloud computing. Moreover, intrusion detection systems are widely adopted solutions to monitor and analyze network traffic and detect anomalies, which can help identify ongoing adversarial activities, trigger alerts, and automatically block traffic from hostile sources. This survey paper analyzes factors, including the application of technologies and techniques, that can enable the successful deployment of security policy on fog and cloud computing. The paper focuses on Software-as-a-Service (SaaS) and intrusion detection, which provide an effective and resilient system structure for users and organizations. Our survey aims to provide a framework for a cloud and fog computing security policy, while addressing the required security tools, policies, and services, particularly for cloud and fog environments, for organizational adoption. While developing the essential linkage between requirements, legal aspects, and techniques and systems for reducing intrusions, we recommend strategies for cloud and fog computing security policies. The paper develops structured guidelines for ways in which organizations can adopt and audit the security of their systems, as security is an essential component of those systems, and presents a current state-of-the-art review of intrusion detection systems and their principles. Functionalities and techniques for developing these defense mechanisms are considered, along with concrete products utilized in operational systems. Finally, we discuss evaluation criteria and open-ended challenges in this area.

25 pages, 2331 KiB  
Review
Digital Twin—Cyber Replica of Physical Things: Architecture, Applications and Future Research Directions
by Cheng Qian, Xing Liu, Colin Ripley, Mian Qian, Fan Liang and Wei Yu
Future Internet 2022, 14(2), 64; https://doi.org/10.3390/fi14020064 - 21 Feb 2022
Cited by 60 | Viewed by 11508
Abstract
The Internet of Things (IoT) connects massive numbers of smart devices to collect big data and carry out the monitoring and control of numerous things in cyber-physical systems (CPS). By leveraging machine learning (ML) and deep learning (DL) techniques to analyze the collected data, physical systems can be monitored and controlled effectively. Along with the development of IoT and data analysis technologies, a number of CPS (smart grid, smart transportation, smart manufacturing, smart cities, etc.) have adopted IoT and data analysis technologies to improve their performance and operations. Nonetheless, directly manipulating or updating the real system carries inherent risks. Thus, creating a digital clone of a real physical system, denoted as a Digital Twin (DT), is a viable strategy. Generally speaking, a DT is a data-driven software and hardware emulation platform: a cyber replica of a physical system that describes that system and aims to reproduce its functions and use cases. Since a DT is a complex digital system, finding ways to effectively represent a variety of things in a timely and efficient manner poses numerous challenges to networking, computing, and data analytics for the IoT. Furthermore, the design of a DT for IoT systems must consider numerous exceptional requirements (e.g., latency, reliability, safety, scalability, security, and privacy). To address such challenges, the thoughtful design of DTs offers opportunities for novel and interdisciplinary research efforts. In this paper, we first review the architectures of DTs, data representation, and communication protocols. We then review existing efforts to apply DTs to IoT data-driven smart systems, including the smart grid, smart transportation, smart manufacturing, and smart cities. Further, we summarize the existing challenges from the CPS, data science, optimization, and security and privacy perspectives. Finally, we outline possible future research directions from the perspectives of performance, new DT-driven services, models and learning, and security and privacy.
(This article belongs to the Special Issue Towards Convergence of Internet of Things and Cyber-Physical Systems)

22 pages, 2136 KiB  
Article
Open-Source MQTT-Based End-to-End IoT System for Smart City Scenarios
by Cristian D’Ortona, Daniele Tarchi and Carla Raffaelli
Future Internet 2022, 14(2), 57; https://doi.org/10.3390/fi14020057 - 15 Feb 2022
Cited by 19 | Viewed by 8991
Abstract
Many innovative services based on Internet of Things (IoT) technology are emerging with the aim of fostering better sustainability in our cities. New solutions integrating Information and Communications Technologies (ICTs) with sustainable transport media are encouraged by several public administrations in the so-called Smart City scenario, where heterogeneous users on city roads call for safer mobility. Among several possible applications, there has recently been a lot of attention on so-called Vulnerable Road Users (VRUs), such as pedestrians or bikers. They can be equipped with wearable sensors that communicate their data through a chain of devices towards the cloud for agile and effective control of their mobility. This work describes a complete end-to-end IoT system implemented through the integration of different complementary technologies, whose main purpose is to monitor the information related to road users generated by wearable sensors. The system has been implemented using an ESP32 micro-controller connected to the sensors and communicating through a Bluetooth Low Energy (BLE) interface with an Android device, which is assumed to always be carried by any road user. Based on this, the Android device is used as a gateway node, acting as a real-time asynchronous publisher in a Message Queue Telemetry Transport (MQTT) protocol chain. The MQTT broker is configured on a Raspberry Pi device and collects sensor data to be sent to a web-based control panel that performs data monitoring and processing. All the architecture modules have been implemented with open-source technologies, and the BLE packet exchange has been analyzed with the Wireshark packet analyzer. In addition, a feasibility analysis has been carried out, demonstrating the capability of the proposed solution to display the values gathered by the sensors on a remote dashboard. The developed system is publicly available to allow the possible integration of other modules for additional Smart City services or extension to further ICT applications.
(This article belongs to the Special Issue Mobility and Cyber-Physical Intelligence)
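
A hedged sketch of the MQTT leg of the described chain using paho-mqtt 2.x: one client plays the role of the Android gateway (publisher) and another the web-based control panel (subscriber). The broker host, topic, and payload are illustrative assumptions:

```python
import json
import time
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

BROKER, TOPIC = "raspberrypi.local", "vru/sensors"   # assumed host and topic

def on_message(client, userdata, msg):
    # Role of the web-based control panel: receive and display readings.
    print("dashboard received:", json.loads(msg.payload))

panel = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
panel.on_message = on_message
panel.connect(BROKER)
panel.subscribe(TOPIC)
panel.loop_start()                                     # handle messages in background

# Role of the Android gateway: publish one wearable-sensor reading.
publish.single(TOPIC, json.dumps({"heart_rate": 96, "speed_kmh": 18.4}),
               hostname=BROKER)
time.sleep(1)                                          # let the callback fire
panel.loop_stop()
```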

39 pages, 1220 KiB  
Review
Network Function Virtualization and Service Function Chaining Frameworks: A Comprehensive Review of Requirements, Objectives, Implementations, and Open Research Challenges
by Haruna Umar Adoga and Dimitrios P. Pezaros
Future Internet 2022, 14(2), 59; https://doi.org/10.3390/fi14020059 - 15 Feb 2022
Cited by 31 | Viewed by 8967
Abstract
Network slicing has become a fundamental property for next-generation networks, especially because an inherent part of 5G standardisation is the ability for service providers to migrate some or all of their network services to a virtual network infrastructure, thereby reducing both capital and operational costs. With network function virtualisation (NFV), network functions (NFs) such as firewalls, traffic load balancers, content filters, and intrusion detection systems (IDS) are either instantiated on virtual machines (VMs) or lightweight containers, often chained together to create a service function chain (SFC). In this work, we review the state-of-the-art NFV and SFC implementation frameworks and present a taxonomy of the current proposals. Our taxonomy comprises three major categories based on the primary objectives of each of the surveyed frameworks: (1) resource allocation and service orchestration, (2) performance tuning, and (3) resilience and fault recovery. We also identify some key open research challenges that require further exploration by the research community to achieve scalable, resilient, and high-performance NFV/SFC deployments in next-generation networks.
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

24 pages, 2977 KiB  
Review
Research on Progress of Blockchain Consensus Algorithm: A Review on Recent Progress of Blockchain Consensus Algorithms
by Huanliang Xiong, Muxi Chen, Canghai Wu, Yingding Zhao and Wenlong Yi
Future Internet 2022, 14(2), 47; https://doi.org/10.3390/fi14020047 - 30 Jan 2022
Cited by 54 | Viewed by 9787
Abstract
Blockchain technology can solve the problem of trust in open networks in a decentralized way. It has broad application prospects and has attracted extensive attention from academia and industry. The blockchain consensus algorithm ensures that the nodes in the chain reach consensus in a complex network environment and that the state of the nodes ultimately remains consistent. The consensus algorithm is one of the core technologies of blockchain and plays a pivotal role in blockchain research. This article introduces the basic concepts of the blockchain, summarizes its key technologies with a particular focus on consensus algorithms, expounds the general principles of the consensus process, and classifies the mainstream consensus algorithms. Then, focusing on the improvement of consensus algorithm performance, it reviews the research progress of consensus algorithms in detail, analyzes and compares the characteristics, suitable scenarios, and possible shortcomings of different consensus algorithms, and, on this basis, examines future development trends of consensus algorithms for reference.
(This article belongs to the Special Issue Distributed Systems for Emerging Computing: Platform and Application)
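For readers new to the topic, the following minimal proof-of-work sketch illustrates one classic way that nodes can agree on the next block; it is a generic textbook example, not a specific algorithm from this review:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest of
    (block_data + nonce) starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Any node can verify the winning nonce with a single hash, which is what
# lets the network converge on one block without a central authority.
nonce = proof_of_work("block#42:tx-list-hash")
print("valid nonce:", nonce)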

19 pages, 3481 KiB  
Article
Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT
by Youpeng Tu, Haiming Chen, Linjie Yan and Xinyan Zhou
Future Internet 2022, 14(2), 30; https://doi.org/10.3390/fi14020030 - 18 Jan 2022
Cited by 32 | Viewed by 7137
Abstract
In IoT (Internet of Things) edge computing, task offloading can introduce additional transmission delays and transmission energy consumption. To reduce the resource cost of task offloading and improve the utilization of server resources, in this paper we model task offloading as a joint cost-minimization decision problem that integrates processing latency, processing energy consumption, and the drop rate of latency-sensitive tasks. An Online Predictive Offloading (OPO) algorithm based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks is proposed to solve this decision problem. In the training phase, the algorithm predicts the load of the edge server in real time with the LSTM, which effectively improves the convergence accuracy and convergence speed of the DRL algorithm during offloading. In the testing phase, the LSTM network predicts the characteristics of the next task, and the DRL decision model then allocates computational resources for the task in advance, further reducing the response delay of the task and enhancing the offloading performance of the system. The experimental evaluation shows that the algorithm reduces the average latency by 6.25%, the offloading cost by 25.6%, and the task drop rate by 31.7%. Full article
(This article belongs to the Special Issue Machine Learning for Wireless Communications)
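As a rough illustration of the prediction step, the sketch below (PyTorch; the window length, hidden size, and single-feature input are assumptions, and the DRL decision model is omitted) shows an LSTM that maps a window of past edge-server load readings to a predicted next load:

```python
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    """Toy LSTM that predicts the next edge-server load value from a
    window of past loads, in the spirit of the paper's prediction step."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next load value

model = LoadPredictor()
window = torch.rand(8, 16, 1)         # 8 samples, 16 past load readings each
next_load = model(window)             # (8, 1): predicted next load per sample
print(next_load.shape)
```

In the paper's setup such a forecast would be fed to the DRL agent so that resources can be reserved before the task actually arrives.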

19 pages, 2039 KiB  
Review
IoT for Smart Cities: Machine Learning Approaches in Smart Healthcare—A Review
by Taher M. Ghazal, Mohammad Kamrul Hasan, Muhammad Turki Alshurideh, Haitham M. Alzoubi, Munir Ahmad, Syed Shehryar Akbar, Barween Al Kurdi and Iman A. Akour
Future Internet 2021, 13(8), 218; https://doi.org/10.3390/fi13080218 - 23 Aug 2021
Cited by 308 | Viewed by 20434
Abstract
Smart city is a collective term for technologies and concepts directed toward making cities efficient, technologically more advanced, greener, and more socially inclusive; these concepts include technical, economic, and social innovations. The term has been used by various actors in politics, business, administration, and urban planning since the 2000s to frame tech-based changes and innovations in urban areas. The idea of the smart city is tied to the use of digital technologies and, at the same time, represents a response to the economic, social, and political challenges that post-industrial societies have confronted since the start of the new millennium. The key focus is on challenges faced by urban society, such as environmental pollution, demographic change, population growth, healthcare, financial crises, and scarcity of resources. In a broader sense, the term also includes non-technical innovations that make urban life more sustainable. The idea of using IoT-based sensor networks for healthcare applications is promising, with the potential to minimize inefficiencies in the existing infrastructure. A machine learning approach is key to successfully implementing IoT-powered wireless sensor networks for this purpose, since there is a large amount of data to be handled intelligently. This paper discusses in detail how AI-powered IoT and WSNs are applied in the healthcare sector, and serves as a baseline study for future research on the role of the IoT in smart cities, in particular in the healthcare sector. Full article
(This article belongs to the Special Issue AI and IoT technologies in Smart Cities)

26 pages, 3426 KiB  
Review
Survey of Localization for Internet of Things Nodes: Approaches, Challenges and Open Issues
by Sheetal Ghorpade, Marco Zennaro and Bharat Chaudhari
Future Internet 2021, 13(8), 210; https://doi.org/10.3390/fi13080210 - 16 Aug 2021
Cited by 42 | Viewed by 7054
Abstract
With the exponential growth in the deployment of Internet of Things (IoT) devices, many new and innovative real-life applications are being developed. IoT supports such applications with resource-constrained fixed as well as mobile nodes, which can be placed in anything from vehicles to the human body to smart homes and smart factories; mobility of the nodes enhances network coverage and connectivity. One of the crucial requirements in IoT systems is the accurate and fast localization of nodes with high energy efficiency and low cost. The localization process faces several challenges, which change depending on the location and movement of the nodes (outdoor, indoor, with or without obstacles, and so on), and the performance of localization techniques greatly depends on the scenarios and conditions through which the nodes move. Precise localization of nodes is essential in many applications, and although several localization techniques and algorithms are available, many challenges remain for precise and efficient localization. This paper classifies and discusses the state-of-the-art techniques proposed for IoT node localization in detail, covering approaches such as centralized, distributed, iterative, range-based, range-free, device-based, and device-free methods and their subtypes. Furthermore, the performance metrics that can be used for localization, a comparison of the different techniques, some prominent applications in smart cities, and future directions are also covered. Full article
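As one concrete example of the range-based family discussed in the survey, the following sketch estimates a node's 2D position from anchor ranges by least-squares trilateration (the anchor layout and ranges are invented for illustration):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position estimate from anchor coordinates and ranges.
    Linearizes the circle equations by subtracting the last anchor's one."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    xn, yn = anchors[-1]
    A = 2 * (anchors[-1] - anchors[:-1])          # rows: [2(xn-xi), 2(yn-yi)]
    b = (d[:-1] ** 2 - d[-1] ** 2
         - (anchors[:-1] ** 2).sum(axis=1) + xn ** 2 + yn ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (10, 0), (0, 10)]              # hypothetical anchor positions
true = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true - a) for a in anchors]
print(trilaterate(anchors, ranges))               # ~ [3. 4.]
```

With noisy RSSI-derived ranges the same least-squares form still applies; the estimate simply degrades with the ranging error.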

18 pages, 516 KiB  
Article
Designing a Network Intrusion Detection System Based on Machine Learning for Software Defined Networks
by Abdulsalam O. Alzahrani and Mohammed J. F. Alenazi
Future Internet 2021, 13(5), 111; https://doi.org/10.3390/fi13050111 - 28 Apr 2021
Cited by 125 | Viewed by 10251
Abstract
Software-defined networking (SDN) has recently been put forward as a promising and encouraging solution for future Internet architecture: with centralized management and control, networks become more flexible and visible. On the other hand, these advantages bring a more vulnerable environment and dangerous threats, causing network breakdowns, system paralysis, and online banking fraud and robbery. Such issues have a significantly destructive impact on organizations, companies, and even economies, and countering them demands accuracy, high performance, and real-time operation. Extending intelligent machine learning algorithms into a network intrusion detection system (NIDS) through a software-defined network has therefore attracted considerable attention in the last decade. Big data availability, the diversity of data analysis techniques, and the massive improvement in machine learning algorithms enable the building of effective, reliable, and dependable systems for detecting the different types of attacks that frequently target networks. This study demonstrates the use of machine learning algorithms for traffic monitoring to detect malicious behavior in the network as part of a NIDS in the SDN controller. Classical and advanced tree-based machine learning techniques (Decision Tree, Random Forest, and XGBoost) are chosen to demonstrate attack detection. The NSL-KDD dataset, considered a benchmark for several state-of-the-art NIDS approaches, is used for training and testing the proposed methods. Several advanced preprocessing techniques are applied to the dataset to extract the best form of the data, which produces outstanding results compared with other systems. Using just 5 of the 41 NSL-KDD features, a multi-class classification task is conducted that detects whether there is an attack and classifies its type (DDoS, PROBE, R2L, or U2R), achieving an accuracy of 95.95%. Full article
(This article belongs to the Special Issue Mobile and Wireless Network Security and Privacy)
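The abstract does not name the five selected features, so the sketch below uses random stand-in data purely to illustrate the tree-based multi-class pipeline described (scikit-learn; the feature values and labels are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data: 5 numeric features per flow, mirroring the paper's reduced
# NSL-KDD feature set. Real NSL-KDD records would replace these values.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = rng.integers(0, 5, 1000)  # 0 = normal; 1-4 = DDoS/PROBE/R2L/U2R (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

On random data the accuracy hovers near chance; the point is only the shape of the pipeline, in which the same fit/predict steps apply to preprocessed NSL-KDD flows.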

20 pages, 846 KiB  
Article
Privacy Preserving Machine Learning with Homomorphic Encryption and Federated Learning
by Haokun Fang and Quan Qian
Future Internet 2021, 13(4), 94; https://doi.org/10.3390/fi13040094 - 8 Apr 2021
Cited by 149 | Viewed by 15069
Abstract
Privacy protection has become an important concern with the great success of machine learning. This paper proposes a multi-party privacy-preserving machine learning framework, named PFMLP, based on partially homomorphic encryption and federated learning. The core idea is that the learning parties transmit only gradients encrypted with homomorphic encryption. In experiments, the model trained by PFMLP achieves almost the same accuracy as a conventionally trained model, with a deviation of less than 1%. To offset the computational overhead of homomorphic encryption, an improved Paillier algorithm is used, which speeds up training by 25–28%. Comparisons of encryption key length, learning network structure, number of learning clients, and other factors are also discussed in detail in the paper. Full article
(This article belongs to the Section Cybersecurity)
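A minimal sketch of the encrypted-gradient exchange idea, assuming the open-source python-paillier (`phe`) package rather than the paper's accelerated Paillier variant:

```python
# Parties share only Paillier-encrypted gradients; the aggregator sums the
# ciphertexts without ever decrypting an individual party's values.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

grads_party_a = [0.12, -0.40, 0.05]   # local gradients, party A (toy values)
grads_party_b = [0.08, -0.35, 0.10]   # local gradients, party B (toy values)

enc_a = [public_key.encrypt(g) for g in grads_party_a]
enc_b = [public_key.encrypt(g) for g in grads_party_b]

# Additive homomorphism: ciphertexts can be summed directly.
enc_sum = [ea + eb for ea, eb in zip(enc_a, enc_b)]

avg_grads = [private_key.decrypt(c) / 2 for c in enc_sum]
print(avg_grads)                       # ~ [0.10, -0.375, 0.075]
```

In a full federated setup, key management and the placement of decryption differ by threat model; this sketch only shows why additive homomorphism suffices for gradient aggregation.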

17 pages, 2916 KiB  
Article
Characterization of the Teaching Profile within the Framework of Education 4.0
by María Soledad Ramírez-Montoya, María Isabel Loaiza-Aguirre, Alexandra Zúñiga-Ojeda and May Portuguez-Castro
Future Internet 2021, 13(4), 91; https://doi.org/10.3390/fi13040091 - 1 Apr 2021
Cited by 52 | Viewed by 7795
Abstract
The authors of the Education 4.0 concept postulated a flexible combination of digital literacy, critical thinking, and problem-solving in educational environments linked to real-world scenarios. Teachers have therefore been challenged to develop new methods and resources to integrate into their planning in order to help students develop these desirable and necessary skills; hence the question: what are the characteristics of a teacher within the framework of Education 4.0? This study was conducted in a higher education institution in Ecuador, with the aim of identifying the teaching profile required in new undergraduate programs within the framework of Education 4.0, in order to contribute to decision-making about teacher recruitment, professional training and evaluation, human talent management, and institutional policies interested in connecting competencies with the needs of society. A descriptive and exploratory design was used: quantitative and qualitative instruments (surveys) were applied to 337 undergraduate students in education programs and 313 graduates, complemented by interviews with 20 experts in the educational field and five focus groups with 32 chancellors, school principals, university professors, and specialists in the educational area. The data were triangulated, and the results were organized into the categories of (a) processes as facilitators, (b) soft skills, (c) human sense, and (d) the use of technologies. The results outlined the profile of a professor as a specialized professional with competencies for innovation, complex problem solving, entrepreneurship, collaboration, international perspective, leadership, and connection with the needs of society. This study may be of value to administrators, educational and social entrepreneurs, trainers, and policy-makers interested in implementing innovative training programs and in supporting management and policy decisions. Full article

17 pages, 4205 KiB  
Article
Research on the Impacts of Generalized Preceding Vehicle Information on Traffic Flow in V2X Environment
by Xiaoyuan Wang, Junyan Han, Chenglin Bai, Huili Shi, Jinglei Zhang and Gang Wang
Future Internet 2021, 13(4), 88; https://doi.org/10.3390/fi13040088 - 30 Mar 2021
Cited by 10 | Viewed by 2549
Abstract
With the application of vehicle-to-everything (V2X) technologies, drivers can obtain massive amounts of traffic information and adjust their car-following behavior according to that information. The macro-characteristics of traffic flow are essentially the overall expression of the micro-behavior of drivers. Previous research on traffic flow in the V2X environment has shortcomings that make it difficult to employ the related models or methods to explore the characteristics of traffic flow affected by information from generalized preceding vehicles (GPV). To address this, a simulation framework based on a car-following model and cellular automata (CA) is proposed in this work, and the traffic flow affected by GPV information is simulated and analyzed within this framework. The results suggest that traffic flow affected by GPV information in the V2X environment operates with higher velocity, volume, and jamming density, and can maintain the free-flow state at a much higher vehicle density. The simulation framework constructed in this work can serve as a reference for further research on the characteristics of traffic flow affected by various kinds of information in the V2X environment. Full article
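The paper's own model couples car-following dynamics with GPV information; as a generic stand-in, the sketch below runs the classic Nagel-Schreckenberg cellular automaton on a ring road (parameters are illustrative, and the GPV information term is not modeled):

```python
import random

def nasch_step(road, length=100, vmax=5, p_slow=0.3):
    """One Nagel-Schreckenberg cellular-automaton update on a ring road.
    `road` maps cell index -> velocity for each occupied cell."""
    cells = sorted(road)
    new_road = {}
    for i, x in enumerate(cells):
        v = road[x]
        gap = (cells[(i + 1) % len(cells)] - x - 1) % length  # cells to car ahead
        v = min(v + 1, vmax, gap)            # accelerate, but keep a safe gap
        if v > 0 and random.random() < p_slow:
            v -= 1                           # random slowdown
        new_road[(x + v) % length] = v
    return new_road

road = {x: 0 for x in random.sample(range(100), 20)}  # 20 cars on 100 cells
for _ in range(50):
    road = nasch_step(road)
print(sum(road.values()) / len(road), "cells/step mean speed")
```

Varying the car count in such a loop traces out the density-flow relation; the paper's contribution is, roughly, to let the velocity update also depend on information from several preceding vehicles rather than only the immediate gap.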

32 pages, 2102 KiB  
Review
Distributed Ledger Technology Review and Decentralized Applications Development Guidelines
by Claudia Antal, Tudor Cioara, Ionut Anghel, Marcel Antal and Ioan Salomie
Future Internet 2021, 13(3), 62; https://doi.org/10.3390/fi13030062 - 27 Feb 2021
Cited by 65 | Viewed by 9676
Abstract
Distributed Ledger Technology (DLT) provides an infrastructure for developing decentralized applications with no central authority for registering, sharing, and synchronizing transactions on digital assets. In recent years, it has drawn high interest from the academic community, technology developers, and startups, mostly through the advent of its most popular type, blockchain technology. In this paper, we provide a comprehensive overview of DLT, analyzing the challenges, the solutions or alternatives provided, and their usage for developing decentralized applications. We define a three-tier architecture for DLT applications to systematically classify the technology solutions described in over 100 papers and startup initiatives. The Protocol and Network Tier contains solutions for digital asset registration, transactions, data structures, privacy, and business-rule implementation, as well as for the creation of peer-to-peer networks, ledger replication, and consensus-based state validation. The Scalability and Interoperability Tier addresses the scalability and interoperability issues that manifest most often in blockchain technology, slowing its large-scale adoption. The paper closes with a discussion of challenges and opportunities for developing decentralized applications, providing a multi-step guideline for decentralizing the design and implementation of traditional systems. Full article
(This article belongs to the Special Issue Blockchain: Applications, Challenges, and Solutions)
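To ground the registering-and-synchronizing idea, here is a toy hash-linked ledger in which each block commits to its predecessor (a generic illustration, not a specific DLT from the review):

```python
import hashlib
import json
import time

def add_block(chain, transactions):
    """Append a block whose hash commits to the previous block, making
    retroactive tampering with earlier transactions detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev_hash, "time": time.time(), "txs": transactions}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

ledger = []
add_block(ledger, [{"asset": "token-1", "from": "A", "to": "B"}])
add_block(ledger, [{"asset": "token-2", "from": "B", "to": "C"}])
print(ledger[1]["prev_hash"] == ledger[0]["hash"])   # True: blocks are linked
```

Replicating this structure across peers and validating appends by consensus is exactly what the Protocol and Network Tier solutions in the taxonomy address.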

40 pages, 620 KiB  
Review
A Systematic Review of Cybersecurity Risks in Higher Education
by Joachim Bjørge Ulven and Gaute Wangen
Future Internet 2021, 13(2), 39; https://doi.org/10.3390/fi13020039 - 2 Feb 2021
Cited by 56 | Viewed by 25498
Abstract
The demands for information security in higher education will continue to increase. Serious data breaches have already occurred and are likely to happen again without proper risk management. This paper applies the Comprehensive Literature Review (CLR) Model to synthesize research on cybersecurity risk by reviewing the existing literature on known assets, threat events, threat actors, and vulnerabilities in higher education. The review includes published studies from the last twelve years and aims to expand our understanding of cybersecurity’s critical risk areas. The primary finding is that empirical research on cybersecurity risks in higher education is scarce and that there are large gaps in the literature; despite this, our analysis found a high level of agreement regarding cybersecurity issues among the reviewed sources. The paper synthesizes an overview of mission-critical assets and everyday threat events, proposes a generic threat model, and summarizes common cybersecurity vulnerabilities. It concludes with nine strategic cyber risks, with descriptions of their frequencies in the compiled dataset and of their consequences. The results will serve as input for security practitioners in higher education and as a starting point for security researchers in the sector, and the research contains multiple paths for future work. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Cybersecurity Section)

20 pages, 1172 KiB  
Article
Using Machine Learning for Web Page Classification in Search Engine Optimization
by Goran Matošević, Jasminka Dobša and Dunja Mladenić
Future Internet 2021, 13(1), 9; https://doi.org/10.3390/fi13010009 - 2 Jan 2021
Cited by 31 | Viewed by 12523
Abstract
This paper presents a novel approach that uses machine learning algorithms, based on experts’ knowledge, to classify web pages into three predefined classes according to the degree of content adjustment to search engine optimization (SEO) recommendations. In this study, classifiers were built and trained to classify an unknown sample (web page) into one of the three predefined classes and to identify important factors that affect the degree of page adjustment; the training data were manually labeled by domain experts. The experimental results show that machine learning can be used to predict the degree of adjustment of web pages to the SEO recommendations: classifier accuracy ranges from 54.59% to 69.67%, which is higher than the baseline accuracy of classifying samples into the majority class (48.83%). The practical significance of the proposed approach lies in providing the core for building software agents and expert systems that automatically detect web pages, or parts of web pages, that need improvement to comply with the SEO guidelines and, therefore, potentially gain higher rankings from search engines. The results of this study also contribute to the field of detecting optimal values of the ranking factors that search engines use to rank web pages. The experiments suggest that important factors to consider when preparing a web page are the page title, meta description, H1 tag (heading), and body text, which aligns with the findings of previous research. Another result of this research is a new dataset of manually labeled web pages that can be used in further research. Full article
(This article belongs to the Special Issue Digital Marketing and App-based Marketing)
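A minimal sketch of the classification idea, assuming TF-IDF features over the page elements the study highlights (title, meta description, H1, body text); the pages, labels, and pipeline choices are illustrative, not the paper's exact setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Toy corpus: each "document" concatenates the page elements the study found
# most informative (title, meta description, H1, body text). Real training
# data would be expert-labeled pages, as in the paper.
pages = [
    "buy cheap shoes | best shoe shop online shoes shoes",
    "shoe care guide: how to clean leather footwear step by step",
    "contact us opening hours address map",
]
labels = ["low", "high", "medium"]   # illustrative SEO-adjustment classes

clf = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(random_state=0))
clf.fit(pages, labels)
print(clf.predict(["leather shoe cleaning guide and tips"]))
```

With expert-labeled pages and richer per-element features, the same fit/predict skeleton yields the kind of three-class SEO-adjustment classifier the abstract describes.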
