Future Internet doi: 10.3390/fi16030103
Authors: Rafael Moreno-Vozmediano, Rubén S. Montero, Eduardo Huedo, Ignacio M. Llorente
The adoption of edge infrastructure in 5G environments stands out as a transformative technology aimed at meeting the increasing demands of latency-sensitive and data-intensive applications. This research paper presents a comprehensive study on the intelligent orchestration of 5G edge computing infrastructures. The proposed Smart 5G Edge-Cloud Management Architecture, built upon an OpenNebula foundation, incorporates ONEedge5G, an experimental component that offers intelligent workload forecasting and infrastructure orchestration and automation capabilities for the optimal allocation of virtual resources across diverse edge locations. The research evaluated different forecasting models, based on both traditional statistical techniques and machine learning techniques, and compared their accuracy in CPU usage prediction for a dataset of virtual machines (VMs). Additionally, an integer linear programming formulation was proposed to solve the optimization problem of mapping VMs to physical servers in a distributed edge infrastructure. Different optimization criteria, such as minimizing server usage, load balancing, and reducing latency violations, were considered, along with mapping constraints. Comprehensive tests and experiments were conducted to evaluate the efficacy of the proposed architecture.
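As a concrete illustration of the kind of integer linear program described above, the following sketch maps VMs to edge servers with the PuLP library, using only the server-usage-minimization criterion; the demands, capacities, and variable names are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical VM-to-server mapping ILP in the spirit of the abstract above.
# Demands, capacities, and the single objective are illustrative assumptions.
import pulp

vms = {"vm1": 2, "vm2": 4, "vm3": 1}      # predicted CPU demand per VM
servers = {"s1": 4, "s2": 8}              # CPU capacity per edge server

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (vms, servers), cat="Binary")  # x[v][s]: VM v on server s
y = pulp.LpVariable.dicts("y", servers, cat="Binary")         # y[s]: server s powered on

prob += pulp.lpSum(y[s] for s in servers)                     # minimize servers in use

for v in vms:                                                 # each VM placed exactly once
    prob += pulp.lpSum(x[v][s] for s in servers) == 1
for s in servers:                                             # capacity ties placement to activation
    prob += pulp.lpSum(vms[v] * x[v][s] for v in vms) <= servers[s] * y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(v, s) for v in vms for s in servers if x[v][s].value() > 0.5])
```

Load-balancing or latency-violation criteria would enter this same template as additional objective terms or constraints.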
Future Internet doi: 10.3390/fi16030102
Authors: Lijun Zu, Wenyu Qi, Hongyi Li, Xiaohua Men, Zhihui Lu, Jiawei Ye, Liang Zhang
The digital transformation of banks has led to a paradigm shift, promoting the open sharing of data and services with third-party providers through APIs, SDKs, and other technological means. While data sharing brings personalized, convenient, and enriched services to users, it also introduces security risks, including sensitive data leakage and misuse, highlighting the importance of data classification and grading as the foundational pillar of security. This paper presents a cloud-edge collaborative banking data open application scenario, focusing on the critical need for an accurate and automated sensitive data classification and categorization method. The regulatory outpost module addresses this requirement, aiming to enhance the precision and efficiency of data classification. Firstly, regulatory policies impose strict requirements concerning data protection. Secondly, the sheer volume of business and the complexity of the work situation make it impractical to rely on manual experts, as they incur high labor costs and are unable to guarantee significant accuracy. Therefore, we propose UP-SDCG, a scheme for automatically classifying and grading financially sensitive structured data. We developed a financial data hierarchical classification library. Additionally, we employed library augmentation technology and implemented a synonym discrimination model. We conducted an experimental analysis using simulation datasets, where UP-SDCG achieved precision surpassing 95%, outperforming the three comparison models. Moreover, we performed real-world testing in financial institutions, achieving good detection results on customer data, supervisory data, and personally sensitive information, in line with the application goals. Our ongoing work will extend the model’s capabilities to encompass unstructured data classification and grading, broadening the scope of application.
Future Internet doi: 10.3390/fi16030101
Authors: Alessandro Pozzebon
Over the last few years, the number of interconnected devices within the context of the Internet of Things (IoT) has rapidly grown; some statistics state that the total number of IoT-connected devices in 2023 reached the groundbreaking figure of 17 billion [...]
Future Internet doi: 10.3390/fi16030100
Authors: Yogeswaranathan Kalyani, Liam Vorster, Rebecca Whetton, Rem Collier
In the last decade, digital twin (DT) technology has received considerable attention across various domains, such as manufacturing, smart healthcare, and smart cities. A digital twin is a digital representation of a physical entity, object, system, or process. Although DT technology is relatively new in the agricultural domain, it has gained increasing attention recently. Recent reviews of DTs show that this technology has the potential to revolutionise agriculture management and activities. It can also provide numerous benefits to all agricultural stakeholders, including farmers, agronomists, researchers, and others, in terms of making decisions on various agricultural processes. In smart crop farming, DTs help simulate various farming tasks like irrigation, fertilisation, nutrient management, and pest control, as well as access real-time data and guide farmers through ‘what-if’ scenarios. By utilising the latest technologies, such as cloud–fog–edge computing, multi-agent systems, and the semantic web, farmers can access real-time data and analytics. This enables them to make accurate decisions about optimising their processes and improving efficiency. This paper presents a proposed architectural framework for DTs, exploring various potential application scenarios that integrate this architecture. It also analyses the benefits and challenges of implementing this technology in agricultural environments. Additionally, we investigate how cloud–fog–edge computing contributes to developing decentralised, real-time systems essential for effective management and monitoring in agriculture.
Future Internet doi: 10.3390/fi16030099
Authors: Alfonso Quarati, Riccardo Albertoni
Linked Data (LD) principles, when applied to Open Government Data (OGD), aim to make government data accessible and interconnected, unlocking its full potential and facilitating widespread reuse. As a modular and scalable solution to fragmented government data, Linked Open Government Data (LOGD) improve citizens’ understanding of government functions while promoting greater data interoperability, ultimately leading to more efficient government processes. However, despite promising developments in the early 2010s, including the release of LOGD datasets by some government agencies, and studies and methodological proposals by numerous scholars, a cursory examination of government websites and portals suggests that interest in this technology has gradually waned. Given the initial expectations surrounding LOGD, this paper goes beyond a superficial analysis and provides a deeper insight into the evolution of interest in LOGD by raising questions about the extent to which the dream of LD has influenced the reality of OGD and whether it remains sustainable.
Future Internet doi: 10.3390/fi16030098
Authors: Muhammad Bin Saif, Sara Migliorini, Fausto Spoto
Blockchain technology has been successfully applied in recent years to promote the immutability, traceability, and authenticity of previously collected and stored data. However, the amount of data stored in the blockchain is usually limited due to economic and technological constraints. Namely, the blockchain usually stores only a fingerprint of data, such as the hash of the data, while the full, raw information is stored off-chain. This is generally enough to guarantee immutability and traceability, but fails to support another important property, namely, data availability. This is particularly true when a traditional, centralized database is chosen for off-chain storage. For this reason, many proposals try to properly combine blockchain with decentralized IPFS storage. However, the storage of data on IPFS could pose some privacy problems. This paper proposes a solution that properly combines blockchain, IPFS, and encryption techniques to guarantee immutability, traceability, availability, and data privacy.
Future Internet doi: 10.3390/fi16030097
Authors: Ancilon Leuch Alencar, Marcelo Dornbusch Lopes, Anita Maria da Rocha Fernandes, Julio Cesar Santos dos Anjos, Juan Francisco De Paz Santana, Valderi Reis Quietinho Leithardt
In the current era of social media, the proliferation of images sourced from unreliable origins underscores the pressing need for robust methods to detect forged content, particularly amidst the rapid evolution of image manipulation technologies. The existing literature delineates two primary approaches to image manipulation detection: active and passive. Active techniques intervene preemptively, embedding structures into images to facilitate subsequent authenticity verification, whereas passive methods analyze image content for traces of manipulation. This study presents a novel solution to image manipulation detection by leveraging a multi-stream neural network architecture. Our approach harnesses three convolutional neural networks (CNNs) operating on distinct data streams extracted from the original image. We have developed a solution based on two passive detection methodologies. The system utilizes two separate streams to extract specific data subsets, while a third stream processes the unaltered image. Each network independently processes its respective data stream, capturing diverse facets of the image. The outputs from these networks are then fused through concatenation to ascertain whether the image has undergone manipulation, yielding a comprehensive detection framework that surpasses the efficacy of its constituent methods. Our work introduces a unique dataset derived from the fusion of four publicly available datasets, featuring organically manipulated images that closely resemble real-world scenarios. This dataset offers a more authentic representation than other state-of-the-art methods that use algorithmically generated datasets based on image patches. By encompassing genuine manipulation scenarios, our dataset enhances the model’s ability to generalize across varied manipulation techniques, thereby improving its performance in real-world settings. After training, the merged approach obtained an accuracy of 89.59% on the validation image set, significantly higher than the model trained with only unaltered images, which obtained 78.64%, and the two other models trained using images with a feature selection method applied to enhance inconsistencies, which obtained 68.02% for Error-Level Analysis images and 50.70% for the method using the Discrete Wavelet Transform. Moreover, our proposed approach exhibits reduced accuracy variance compared to alternative models, underscoring its stability and robustness across diverse datasets. The approach outlined in this work does not yet provide information about the specific location or type of tampering, which limits its practical applications.
Future Internet doi: 10.3390/fi16030096
Authors: Mengchi Xing, Haojiang Deng, Rui Han
The 5G core network adopts a Control and User Plane Separation (CUPS) architecture to meet the challenges of low-latency business requirements. In this architecture, a balance between management costs and user experience is achieved by moving the User Plane Function (UPF) to the edge of the network. However, cross-UPF handover during communication between the User Equipment (UE) and the remote server will cause TCP/IP session interruption and affect the continuity of delay-sensitive real-time communication. Information-Centric Networks (ICNs) separate identity and location, and their ability to route based on identity can effectively handle mobility. Therefore, based on the 5G-ICN architecture, we propose a seamless mobility support method based on router buffered data (BDMM), making full use of the ICN’s identity-based routing capabilities to solve the problem of UE cross-UPF handover affecting business continuity. BDMM also uses the ICN routers’ data buffering capabilities to reduce packet loss during handovers. We design a dynamic buffer resource allocation strategy (DBRAS) that can adjust the buffer resource allocation results in time according to network traffic changes and business types, to solve the problem of unreasonable buffer resource allocation. Finally, experimental results show that our method outperforms other methods in terms of average packet delay, weighted average packet loss rate, and network overhead. In addition, our method also performs well in terms of average handover delay.
Future Internet doi: 10.3390/fi16030095
Authors: Ying-Hsun Lai, Shin-Yeh Chen, Wen-Chi Chou, Hua-Yang Hsu, Han-Chieh Chao
Federated learning trains a neural network model using clients’ data to retain the benefits of centralized model training while preserving data privacy. However, if the client data are not independently and identically distributed (non-IID) because of different environments, the accuracy of the model may suffer from client drift during training owing to discrepancies in each client’s data. This study proposes a personalized federated learning algorithm based on the concept of multitask learning that divides each client model into two layers: a feature extraction layer and a category prediction layer. The feature extraction layer maps the input data to a low-dimensional feature vector space, and its neural network parameters are aggregated with those of other clients using an adaptive method. The category prediction layer maps low-dimensional feature vectors to the label sample space, with its parameters remaining unaffected by other clients to maintain client uniqueness. The proposed personalized federated learning method produces faster learning model convergence rates and higher accuracy rates for the non-IID datasets in our experiments.
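A minimal sketch of the split described above, assuming plain FedAvg-style averaging as a stand-in for the paper's adaptive aggregation: only parameters under a shared feature-extractor prefix are averaged, while each client's prediction head stays local. All names and shapes are illustrative.

```python
# Hypothetical two-layer personalization sketch: shared feature-extractor
# weights are averaged across clients; prediction heads remain local.
import numpy as np

def aggregate_shared(client_states, prefix="feat."):
    """Mean of all parameters whose names start with the shared prefix."""
    keys = [k for k in client_states[0] if k.startswith(prefix)]
    return {k: np.mean([cs[k] for cs in client_states], axis=0) for k in keys}

clients = [
    {"feat.w": np.random.randn(4, 8), "head.w": np.random.randn(8, 3)},
    {"feat.w": np.random.randn(4, 8), "head.w": np.random.randn(8, 3)},
]
shared = aggregate_shared(clients)
for cs in clients:             # broadcast shared layers; heads untouched
    cs.update(shared)
print(shared["feat.w"].shape)  # (4, 8)
```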
Future Internet doi: 10.3390/fi16030094
Authors: Yu Yao, Quan Qian
We develop the online process parameter design (OPPD) framework for efficiently handling streaming data collected from industrial automation equipment. This framework integrates online machine learning, concept drift detection and Bayesian optimization techniques. Initially, concept drift detection mitigates the impact of anomalous data on model updates. Data without concept drift are used for online model training and updating, enabling accurate predictions for the next processing cycle. Bayesian optimization is then employed for inverse optimization and process parameter design. Within OPPD, we introduce the online accelerated support vector regression (OASVR) algorithm for enhanced computational efficiency and model accuracy. OASVR simplifies support vector regression, boosting both speed and durability. Furthermore, we incorporate a dynamic window mechanism to regulate the training data volume for adapting to real-time demands posed by diverse online scenarios. Concept drift detection uses the EI-kMeans algorithm, and the Bayesian inverse design employs an upper confidence bound approach with an adaptive learning rate. Applied to single-crystal fabrication, the OPPD framework outperforms other models, with an RMSE of 0.12, meeting precision demands in production.
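To illustrate the dynamic window idea, the sketch below refits a stock scikit-learn SVR on a bounded window of recent samples; it is a simplified stand-in for the paper's OASVR algorithm, and the window size and simulated stream are illustrative assumptions.

```python
# Hypothetical sliding-window online regression loop; scikit-learn's SVR
# stands in for OASVR, and the process signal is simulated.
from collections import deque

import numpy as np
from sklearn.svm import SVR

window = deque(maxlen=50)        # bounded window of recent (x, y) samples
model = SVR(kernel="rbf")

def observe(x, y):
    """Append one sample and refit the model on the current window."""
    window.append((x, y))
    X = np.array([w[0] for w in window]).reshape(-1, 1)
    t = np.array([w[1] for w in window])
    model.fit(X, t)

for step in range(200):          # simulated stream of process readings
    x = step * 0.1
    observe(x, np.sin(x) + 0.05 * np.random.randn())
print(model.predict([[20.1]]))   # forecast for the next processing cycle
```

A truly dynamic variant would grow or shrink the window in response to detected drift; here the bound is fixed for brevity.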
Future Internet doi: 10.3390/fi16030093
Authors: Ziyad Almudayni, Ben Soh, Alice Li
The advent of the Internet of Things (IoT) has revolutionised our interaction with the environment, facilitating seamless connections among sensors, actuators, and humans. Efficient task scheduling stands as a cornerstone in maximising resource utilisation and ensuring timely task execution in IoT systems. The implementation of efficient task scheduling methodologies can yield substantial enhancements in productivity and cost-effectiveness for IoT infrastructures. To that end, this paper presents the IoT-mist bat-inspired algorithm (IMBA), designed specifically to optimise resource allocation in IoT environments. IMBA’s efficacy lies in its ability to elevate user service quality through enhancements in task completion rates, load distribution, network utilisation, processing time, and power efficiency. Through comparative analysis, IMBA demonstrates superiority over traditional methods, such as fuzzy logic and round-robin algorithms, across all performance metrics.
Future Internet doi: 10.3390/fi16030092
Authors: Andreas Giannakoulopoulos, Minas Pergantis, Aristeidis Lamprogeorgos
The present study focuses on using qualitative and quantitative data to evaluate the functionality, user experience (UX), and aesthetic approach offered by an academic multi-site Web ecosystem consisting of multiple interconnected websites. Large entities in various industry fields often have the need for an elaborate Web presence. In an effort to address the challenges posed by this need specifically in the field of academia, the authors developed, over a period of many years, a multi-site ecosystem within the Ionian University, which focuses on interconnectivity and a collaborative approach to academic content management. This system, known as “Publish@Ionio”, uses a singular content management infrastructure to allow for the creation of content for different websites that share both information and resources while at the same time allowing for individual variations in both functionality and aesthetics. The ecosystem was evaluated through quantitative data from its operation and qualitative feedback from a focus-group interview with experts, including website editors and administrative staff. The collected data were used to assess the strengths and weaknesses of the multi-site approach based on the actions and needs of the individuals in charge of generating content. The study led to conclusions on the advantages that interoperability offers in terms of digital and human resource management, the benefits of a unified aesthetic approach that allows for variability, and the necessity of collaborative content management tools that are tailored to the content’s nature.
Future Internet doi: 10.3390/fi16030091
Authors: Mauro Femminella, Gianluca Reali
The need for adaptivity and scalability in telecommunication systems has led to the introduction of a software-based approach to networking, in which network functions are virtualized and implemented in software modules, based on network function virtualization (NFV) technologies. The growing demand for low latency, efficiency, flexibility, and security has placed some limitations on the adoption of these technologies, due to some problems of traditional virtualization solutions. However, the introduction of lightweight virtualization approaches is paving the way for new and better infrastructures for implementing network functions. This article discusses these new virtualization solutions and shows a proposal, based on serverless computing, that uses them to implement container-based virtualized network functions for the delivery of advanced Internet of Things (IoT) services. It includes open source software components for both the virtualization layer, implemented through Firecracker, and the runtime environment, based on Kata Containers. A set of experiments shows that the proposed approach is fast at booting new network functions and more efficient than some baseline solutions, with a minimal resource footprint. Therefore, it is an excellent candidate to implement NFV functions in the edge deployment of serverless services for the IoT.
Future Internet doi: 10.3390/fi16030090
Authors: Heidi Toivonen, Francesco Lelli
This paper investigates how users of smart devices attribute agency both to themselves and to their devices. Statistical analyses, tag cloud analysis, and sentiment analysis were applied to survey data collected from 587 participants. As a result of a preliminary factorial analysis, two independent constructs of agency emerged: (i) user agency and (ii) device agency. These two constructs received further support from a sentiment analysis and a tag cloud analysis conducted on the written responses provided in the survey. We also studied how user agency and device agency relate to various background variables, such as the user’s professional knowledge of smart devices. We present a new preliminary model in which the two agency constructs are used to conceptualize agency in human–smart device relationships in a matrix composed of four positions: controller, collaborator, detached, and victim. Our model, with its constructs of user agency and device agency, fosters a richer understanding of users’ experiences in their interactions with devices. The results could facilitate designing interfaces that better take into account users’ views of their own capabilities as well as the capacities of their devices; the findings can assist in tackling challenges such as the feeling of lacking agency experienced by technologically savvy users.
Future Internet doi: 10.3390/fi16030089
Authors: Mostafa El Debeiki, Saba Al-Rubaye, Adolfo Perrusquía, Christopher Conrad, Juan Alejandro Flores-Campos
The use of unmanned aerial vehicles (UAVs) is increasing in transportation applications due to their high versatility and maneuverability in complex environments. Search and rescue is one of the most challenging applications of UAVs due to the non-homogeneous nature of the environmental and communication landscapes. In particular, mountainous areas pose difficulties due to the loss of connectivity caused by large valleys and volumes of hazardous weather. In this paper, the connectivity issue in mountainous areas is addressed using a path planning algorithm for UAV relay. The approach is based on two main phases: (1) the detection of areas of interest where the connectivity signal is poor, and (2) an energy-aware and resilient path planning algorithm that maximizes the coverage links. The approach uses a viewshed analysis to identify areas of visibility between the areas of interest and the cell towers. This allows the construction of a blockage map that prevents the UAV from passing through areas with no coverage, whilst maximizing the coverage area under energy constraints and hazardous weather. The proposed approach is validated under open-access datasets of mountainous zones, and the obtained results confirm the benefits of the proposed approach for communication networks in remote and challenging environments.
Future Internet doi: 10.3390/fi16030088
Authors: Dominic Lightbody, Duc-Minh Ngo, Andriy Temko, Colin C. Murphy, Emanuel Popovici
The growth of the Internet of Things (IoT) has led to a significant rise in cyber attacks and an expanded attack surface for the average consumer. In order to protect consumers and infrastructure, research into detecting malicious IoT activity must be of the highest priority. Security research in this area has two key issues: the lack of datasets for training artificial intelligence (AI)-based intrusion detection models and the fact that most existing datasets concentrate only on one type of network traffic. Thus, this study introduces Dragon_Pi, an intrusion detection dataset designed for IoT devices based on side-channel power consumption data. Dragon_Pi comprises a collection of normal and under-attack power consumption traces from separate testbeds featuring a DragonBoard 410c and a Raspberry Pi. Dragon_Slice is trained on this dataset; it is an unsupervised convolutional autoencoder (CAE) trained exclusively on held-out normal slices from Dragon_Pi for anomaly detection. The Dragon_Slice network has two iterations in this study. The original achieves 0.78 AUC without post-processing and 0.876 AUC with post-processing. A second iteration of Dragon_Slice, utilising dropout to further impede the CAE’s ability to reconstruct anomalies, outperforms the original network with a raw AUC of 0.764 and a post-processed AUC of 0.89.
Future Internet doi: 10.3390/fi16030087
Authors: Dominik Warch, Patrick Stellbauer, Pascal Neis
In the digital transformation era, video media libraries’ untapped potential is immense, restricted primarily by their non-machine-readable nature and basic search functionalities limited to standard metadata. This study presents a novel multimodal methodology that utilizes advances in artificial intelligence, including neural networks, computer vision, and natural language processing, to extract and geocode geospatial references from videos. Leveraging the geospatial information from videos enables semantic searches, enhances search relevance, and allows for targeted advertising, particularly on mobile platforms. The methodology involves a comprehensive process, including data acquisition from ARD Mediathek, image and text analysis using advanced machine learning models, and audio and subtitle processing with state-of-the-art linguistic models. Despite challenges like model interpretability and the complexity of geospatial data extraction, this study’s findings indicate significant potential for advancing the precision of spatial data analysis within video content, promising to enrich media libraries with more navigable, contextually rich content. This advancement has implications for user engagement, targeted services, and broader urban planning and cultural heritage applications.
Future Internet doi: 10.3390/fi16030086
Authors: Peter K. K. Loh, Aloysius Z. Y. Lee, Vivek Balachandran
The rise of generative Artificial Intelligence (AI) has led to the development of more sophisticated phishing email attacks, as well as an increase in research on using AI to aid the detection of these advanced attacks. Successful phishing email attacks severely impact businesses, as employees are usually the vulnerable targets. Defense against such attacks, therefore, requires realizing defense along both technological and human vectors. Security hardening research along the technological vector is scarce and focuses mainly on the use of machine learning and natural language processing to distinguish between machine- and human-generated text. Common existing approaches to harden security along the human vector consist of third-party organized training programmes, the content of which needs to be updated over time. There is, to date, no reported approach that provides both phishing attack detection and progressive end-user training. In this paper, we present our contribution, which includes the design and development of an integrated approach that employs AI-assisted and generative AI platforms for phishing attack detection and continuous end-user education in a hybrid security framework. This framework supports scenario-customizable and evolving user education in dealing with increasingly advanced phishing email attacks. The technological design and functional details for both platforms are presented and discussed. Performance tests showed that the phishing attack detection sub-system using the Convolutional Neural Network (CNN) deep learning model architecture achieved the best overall results: above 94% accuracy, above 95% precision, and above 94% recall.
Future Internet doi: 10.3390/fi16030084
Authors: Håkon Harnes, Donn Morrison
WebAssembly is a low-level bytecode language that enables high-level languages like C, C++, and Rust to be executed in the browser at near-native performance. In recent years, WebAssembly has gained widespread adoption and is now natively supported by all modern browsers. Despite its benefits, WebAssembly has introduced significant security challenges, primarily due to vulnerabilities inherited from memory-unsafe source languages. Moreover, the use of WebAssembly extends beyond traditional web applications to smart contracts on blockchain platforms, where vulnerabilities have led to significant financial losses. WebAssembly has also been used for malicious purposes, like cryptojacking, where website visitors’ hardware resources are used for crypto mining without their consent. To address these issues, several analysis techniques for WebAssembly binaries have been proposed. This paper presents a systematic review of these analysis techniques, focusing on vulnerability analysis, cryptojacking detection, and smart contract security. The analysis techniques are categorized into static, dynamic, and hybrid methods, evaluating their strengths and weaknesses based on quantitative data. Our findings reveal that static techniques are efficient but may struggle with complex binaries, while dynamic techniques offer better detection at the cost of increased overhead. Hybrid approaches, which merge the strengths of static and dynamic methods, are not extensively used in the literature and emerge as a promising direction for future research. Lastly, this paper identifies potential future research directions based on the state of the current literature.
Future Internet doi: 10.3390/fi16030085
Authors: Hadeel Alrubayyi, Moudy Sharaf Alshareef, Zunaira Nadeem, Ahmed M. Abdelmoniem, Mona Jaber
The hype of the Internet of Things as an enabler for intelligent applications, and the related promise of ushering in accessibility, efficiency, and quality of service, is met with hindering security and data privacy concerns. It follows that such IoT systems, which are empowered by artificial intelligence, need to be investigated with cognisance of security threats and mitigation schemes that are tailored to their specific constraints and requirements. In this work, we present a comprehensive review of security threats in IoT and emerging countermeasures, with a particular focus on malware and man-in-the-middle attacks. Next, we elaborate on two use cases: the Internet of Energy Things and the Internet of Medical Things. Innovative artificial intelligence methods for automating the detection of energy theft and stress levels are first detailed, followed by an examination of contextual security threats and privacy breach concerns. An artificial immune system is employed to mitigate the risk of malware attacks, differential privacy is proposed for data protection, and federated learning is harnessed to reduce data exposure.
Future Internet doi: 10.3390/fi16030083
Authors: Gulshan Saleem, Usama Ijaz Bajwa, Rana Hammad Raza, Fan Zhang
Surveillance video analytics encounters unprecedented challenges in 5G and IoT environments, including complex intra-class variations, short-term and long-term temporal dynamics, and variable video quality. This study introduces Edge-Enhanced TempoFuseNet, a cutting-edge framework that strategically reduces spatial resolution to allow the processing of low-resolution images. A dual upscaling methodology based on bicubic interpolation and an encoder–bank–decoder configuration is used for anomaly classification. The two-stream architecture combines the power of a pre-trained Convolutional Neural Network (CNN) for spatial feature extraction from RGB imagery in the spatial stream, while the temporal stream focuses on learning short-term temporal characteristics, reducing the computational burden of optical flow. To analyze long-term temporal patterns, the extracted features from both streams are combined and routed through a Gated Recurrent Unit (GRU) layer. The proposed framework (TempoFuseNet) outperforms the encoder–bank–decoder model in terms of performance metrics, achieving a multiclass macro average accuracy of 92.28%, an F1-score of 69.29%, and a false positive rate of 4.41%. This study presents a significant advancement in the field of video anomaly recognition and provides a comprehensive solution to the complex challenges posed by real-world surveillance scenarios in the context of 5G and IoT.
Future Internet doi: 10.3390/fi16030082
Authors: Hanyue Xu, Kah Phooi Seng, Jeremy Smith, Li Minn Ang
In the context of smart cities, the integration of artificial intelligence (AI) and the Internet of Things (IoT) has led to the proliferation of AIoT systems, which handle vast amounts of data to enhance urban infrastructure and services. However, the collaborative training of deep learning models within these systems encounters significant challenges, chiefly due to data privacy concerns and dealing with communication latency from large-scale IoT devices. To address these issues, multi-level split federated learning (multi-level SFL) has been proposed, merging the benefits of split learning (SL) and federated learning (FL). This framework introduces a novel multi-level aggregation architecture that reduces communication delays, enhances scalability, and addresses system and statistical heterogeneity inherent in large AIoT systems with non-IID data distributions. The architecture leverages the Message Queuing Telemetry Transport (MQTT) protocol to cluster IoT devices geographically and employs edge and fog computing layers for initial model parameter aggregation. Simulation experiments validate that the multi-level SFL outperforms traditional SFL by improving model accuracy and convergence speed in large-scale, non-IID environments. This paper delineates the proposed architecture, its workflow, and its advantages in enhancing the robustness and scalability of AIoT systems in smart cities while preserving data privacy.
Future Internet doi: 10.3390/fi16030081
Authors: Yushan Li, Satoshi Fujita
This paper proposes a novel event-driven architecture for enhancing edge-based vehicular systems within smart transportation. Leveraging the inherent real-time, scalable, and fault-tolerant nature of the Elixir language, we present an innovative architecture tailored for edge computing. This architecture employs MQTT for efficient event transport and utilizes Elixir’s lightweight concurrency model for distributed processing. Robustness and scalability are further ensured through the EMQX broker. We demonstrate the effectiveness of our approach through two smart transportation case studies: a traffic light system for dynamically adjusting signal timing, and a cab dispatch prototype designed for high concurrency and real-time data processing. Evaluations on an Apple M1 chip reveal consistently low latency responses below 5 ms and efficient multicore utilization under load. These findings showcase the system’s robust throughput and multicore programming capabilities, confirming its suitability for real-time, distributed edge computing applications in smart transportation. Therefore, our work suggests that integrating Elixir with an event-driven model represents a promising approach for developing scalable, responsive applications in edge computing. This opens avenues for further exploration and adoption of Elixir in addressing the evolving demands of edge-based smart transportation systems.
Future Internet doi: 10.3390/fi16030080
Authors: Haedam Kim, Suhyun Park, Hyemin Hong, Jieun Park, Seongmin Kim
As the IoT solutions and services market grows, the industrial fields utilizing IoT devices are also diversifying. However, the proliferation of IoT devices, often intertwined with users’ personal information and privacy, has led to a continuous surge in attacks targeting these devices. Meanwhile, conventional network-level intrusion detection systems with pre-defined rulesets are gradually losing their efficacy due to the heterogeneous environments of IoT ecosystems. To address such security concerns, researchers have utilized ML-based network-level intrusion detection techniques. Specifically, transfer learning has been dedicated to identifying unforeseen malicious traffic in IoT environments based on knowledge distillation from rich source domain data sets. Nevertheless, since most IoT devices operate in heterogeneous but small-scale environments, such as home networks, selecting adequate source domains for learning proves challenging. This paper introduces a framework designed to tackle this issue. In instances where assessing an adequate data set through pre-learning using transfer learning is non-trivial, our proposed framework advocates the selection of a data set as the source domain for transfer learning. This selection process aims to determine the appropriateness of implementing transfer learning, offering the best practice in such scenarios. Our evaluation demonstrates that the proposed framework successfully chooses a fitting source domain data set, delivering the highest accuracy.
Future Internet doi: 10.3390/fi16030079
Authors: Mattia Pellegrino, Gianfranco Lombardo, George Adosoglou, Stefano Cagnoni, Panos M. Pardalos, Agostino Poggi
With the recent advances in machine learning (ML), several models have been successfully applied to financial and accounting data to predict the likelihood of companies’ bankruptcy. However, time series have received little attention in the literature, with a lack of studies on the application of deep learning sequence models such as Recurrent Neural Networks (RNNs) and the recent Attention-based models in general. In this research work, we investigated the application of Long Short-Term Memory (LSTM) networks to exploit time series of accounting data for bankruptcy prediction. The main contributions of our work are the following: (a) We proposed a multi-head LSTM that models each financial variable in a time window independently and compared it with a single-input LSTM and other traditional ML models. The multi-head LSTM outperformed all the other models. (b) We identified the optimal time series length for bankruptcy prediction to be equal to 4 years of accounting data. (c) We made public the dataset we used for the experiments which includes data from 8262 different public companies in the American stock market generated in the period between 1999 and 2018. Furthermore, we proved the efficacy of the multi-head LSTM model in terms of fewer false positives and the better division of the two classes.
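A minimal PyTorch sketch of the multi-head idea above: one small LSTM per accounting variable over a 4-year window, with the final hidden states concatenated for binary bankruptcy classification. The number of variables and the hidden sizes are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical multi-head LSTM: each financial variable gets its own LSTM.
import torch
import torch.nn as nn

class MultiHeadLSTM(nn.Module):
    def __init__(self, n_vars=18, hidden=8):
        super().__init__()
        # one independent LSTM head per accounting variable
        self.heads = nn.ModuleList(
            nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            for _ in range(n_vars)
        )
        self.classifier = nn.Linear(n_vars * hidden, 1)

    def forward(self, x):                          # x: (batch, time=4, n_vars)
        finals = []
        for i, lstm in enumerate(self.heads):
            _, (h, _) = lstm(x[:, :, i : i + 1])   # each head sees one variable
            finals.append(h[-1])                   # final hidden state: (batch, hidden)
        return torch.sigmoid(self.classifier(torch.cat(finals, dim=1)))

model = MultiHeadLSTM()
scores = model(torch.randn(32, 4, 18))             # 32 companies, 4 years, 18 variables
print(scores.shape)                                # torch.Size([32, 1])
```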
Future Internet doi: 10.3390/fi16030078
Authors: Mohammad Javad Salariseddigh, Ons Dabbabi, Christian Deppe, Holger Boche
Numerous applications of the Internet of Things (IoT) feature an event recognition behavior where the established Shannon capacity is not authorized to be the central performance measure. Instead, the identification capacity for such systems is considered to be an alternative metric, and has been developed in the literature. In this paper, we develop deterministic K-identification (DKI) for the binary symmetric channel (BSC), with and without a Hamming weight constraint imposed on the codewords. This channel may be of use for IoT in the context of smart system technologies, where sophisticated communication models can be reduced to a BSC for the aim of studying basic information theoretical properties. We derive inner and outer bounds on the DKI capacity of the BSC when the size of the goal message set K may grow in the codeword length n. As a major observation, we find that, for deterministic encoding, assuming that K grows exponentially in n, i.e., K = 2^{nκ}, where κ is the identification goal rate, the number of messages that can be accurately identified also grows exponentially in n, i.e., as 2^{nR}, where R is the DKI coding rate. Furthermore, the established inner and outer bound regions reflect the impact of the input constraint (Hamming weight) and the channel statistics, i.e., the cross-over probability.
Future Internet doi: 10.3390/fi16030077
Authors: Dimah Almani, Tim Muller, Xavier Carpent, Takahito Yoshizawa, Steven Furnell
This research investigates the deployment and effectiveness of the novel Pre-Signature scheme, developed to make up-to-date reputation available in Vehicle-to-Vehicle (V2V) communications in rural landscapes, where the communications infrastructure is limited. We discuss how existing standards and specifications can be adjusted to incorporate the Pre-Signature scheme to disseminate reputation. Addressing the unique challenges posed by sparse or irregular Roadside Unit (RSU) coverage in these areas, the study investigates the implications of such environmental factors on the integrity and reliability of V2V communication networks. Using the widely used SUMO traffic simulation tool, we create and simulate real-world rural scenarios. We have conducted an in-depth performance evaluation of the Pre-Signature scheme under the typical infrastructural limitations encountered in rural scenarios. Our findings demonstrate the scheme’s usefulness in scenarios with variable or constrained RSU access. Furthermore, the relationships between three variables, namely communication range, number of RSUs, and degree of home-to-vehicle connectivity overnight, are studied, offering an exhaustive analysis of the determinants influencing V2V communication efficiency in rural contexts. The important findings are (1) that access to accurate Reputation Values increases with all three variables and (2) that the necessity of Pre-Signatures decreases if the number and range of RSUs increase to high values. Together, these findings imply that areas with a low degree of adoption of RSUs (typically rural areas) benefit the most from our approach.
Future Internet doi: 10.3390/fi16030076
Authors: Andry Alamsyah, Gede Natha Wijaya Kusuma, Dian Puteri Ramadhani
The future of the internet is moving toward decentralization, with decentralized networks and blockchain technology playing essential roles in different sectors. Decentralized networks offer equality, accessibility, and security at a societal level, while blockchain technology guarantees security, authentication, and openness. Integrating blockchain technology with decentralized characteristics has become increasingly significant in finance; we call this “decentralized finance” (DeFi). As of January 2023, the DeFi crypto market had a capitalization of USD 46.21 billion and served over 6.6 million users. As DeFi continues to outperform traditional finance (TradFi), it provides reduced fees, increased inclusivity, faster transactions, enhanced security, and improved accessibility, transparency, and programmability; it also eliminates intermediaries. For end users, DeFi presents asset custody options, peer-to-peer transactions, programmable control features, and innovative financial solutions. Despite its rapid growth in recent years, there is limited comprehensive research on mapping DeFi’s benefits and risks alongside its role as an enabling technology within the financial services sector. This research addresses these gaps by developing a DeFi classification system, organizing information, and clarifying connections among its various aspects. The research goal is to improve the understanding of DeFi in both academic and industrial circles and to promote comprehension of the DeFi taxonomy. This well-organized DeFi taxonomy aids experts, regulators, and decision-makers in making informed and strategic decisions, thereby fostering responsible integration into TradFi for effective risk management. This study enhances DeFi security by providing users with clear guidance on existing mechanisms and risks in DeFi, reducing susceptibility to misinformation, and promoting secure participation. Additionally, it offers an overview of DeFi’s role in shaping the future of the internet.
Future Internet doi: 10.3390/fi16030075
Authors: Feng Zhou, Shijing Hu, Xin Du, Xiaoli Wan, Jie Wu
In the current field of disease risk prediction research, many methods rely on servers for centralized computing to train and infer prediction models. However, this centralized computing method increases storage space requirements, the load on network bandwidth, and the computing pressure on the central server. In this article, we design an image preprocessing method and propose a lightweight neural network model called Linge (Lightweight Neural Network Models for the Edge). We propose a distributed intelligent edge computing technology based on the federated learning algorithm for disease risk prediction. The intelligent edge computing method we propose for disease risk prediction performs prediction model training and inference directly at the edge without increasing storage space. It also reduces the load on network bandwidth and the computing pressure on the server. The lightweight neural network model we designed has only 7.63 MB of parameters and takes up only 155.28 MB of memory. In experiments comparing the Linge model with the EfficientNetV2 model, the accuracy and precision increased by 2%, the recall rate increased by 1%, the specificity increased by 4%, the F1 score increased by 3%, and the AUC (Area Under the Curve) value increased by 2%.
Future Internet doi: 10.3390/fi16030074
Authors: Reeva Lederman, Esther Brainin, Ofir Ben-Assuli
Electronic medical record (EMR) systems possess the potential to enable smart healthcare by serving as a hub for the transformation of medical data into meaningful information, knowledge, and wisdom in the health care sector [...]
Future Internet doi: 10.3390/fi16030072
Authors: Adel Belkhiri, Michel Dagenais
The graphics processing unit (GPU) plays a crucial role in boosting application performance and enhancing computational tasks. Thanks to its parallel architecture and energy efficiency, the GPU has become essential in many computing scenarios. On the other hand, the advent of GPU virtualization has been a significant breakthrough, as it provides scalable and adaptable GPU resources for virtual machines. However, this technology faces challenges in debugging and analyzing the performance of GPU-accelerated applications. Most current performance tools do not support virtual GPUs (vGPUs), highlighting the need for more advanced tools. Thus, this article introduces a novel performance analysis tool that is designed for systems using vGPUs. Our tool is compatible with the Intel GVT-g virtualization solution, although its underlying principles can apply to many vGPU-based systems. Our tool uses software tracing techniques to gather detailed runtime data and generate relevant performance metrics. It also offers many synchronized graphical views, which gives practitioners deep insights into GVT-g operations and helps them identify potential performance bottlenecks in vGPU-enabled virtual machines.
Future Internet doi: 10.3390/fi16030073
Authors: Konstantinos Psychogyios, Andreas Papadakis, Stavroula Bourou, Nikolaos Nikolaou, Apostolos Maniatis, Theodore Zahariadis
The advent of computer networks and the internet has drastically altered the means by which we share information and interact with each other. However, this technological advancement has also created opportunities for malevolent behavior, with individuals exploiting vulnerabilities to gain access to confidential data, obstruct activity, etc. To this end, intrusion detection systems (IDSs) are needed to filter malicious traffic and prevent common attacks. In the past, these systems relied on a fixed set of rules or comparisons with previous attacks. However, with the increased availability of computational power and data, machine learning has emerged as a promising solution for this task. While many systems now use this methodology in real-time for a reactive approach to mitigation, we explore the potential of configuring it as a proactive time series prediction. In this work, we delve into this possibility further. More specifically, we convert a classic IDS dataset to a time series format and use predictive models to forecast forthcoming malign packets. We propose a new architecture combining convolutional neural networks, long short-term memory networks, and attention. The findings indicate that our model performs strongly, exhibiting an F1 score and AUC that are within margins of 1% and 3%, respectively, when compared to conventional real-time detection. Also, our architecture achieves an ∼8% F1 score improvement compared to an LSTM (long short-term memory) model.
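The sketch below shows one plausible CNN + LSTM + attention layout for forecasting the next value of a traffic time series, in the spirit of the architecture described; the layer sizes and single-feature input are illustrative assumptions rather than the authors' exact design.

```python
# Hypothetical CNN + LSTM + attention forecaster for a traffic time series.
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    def __init__(self, channels=16, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each time step
        self.out = nn.Linear(hidden, 1)         # next-step prediction

    def forward(self, x):                       # x: (batch, time, 1)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(h)                     # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                # weighted context vector
        return self.out(ctx)

model = CNNLSTMAttention()
print(model(torch.randn(8, 24, 1)).shape)       # torch.Size([8, 1])
```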
Future Internet doi: 10.3390/fi16030071
Authors: Emma Fitzgerald, Michał Pióro
Industry 4.0, with its focus on flexibility and customizability, is pushing in the direction of wireless communication in future smart factories, in particular, massive multiple-input-multiple-output (MIMO) and its future evolution of large intelligent surfaces (LIS), which provide more reliable channel quality than previous technologies. At the same time, network slicing in 5G and beyond systems provides easier management of different categories of users and traffic, and a better basis for providing quality of service, especially for demanding use cases such as industrial control. In previous works, we have presented solutions for scheduling industrial control traffic in LIS and massive MIMO systems. We now consider the case of dynamic slicing in the radio access network, where we need to not only meet the stringent latency and reliability requirements of industrial control traffic, but also minimize the radio resources occupied by the network slice serving the control traffic, ensuring resources are available for lower-priority traffic slices. In this paper, we provide mixed-integer programming optimization formulations for radio resource usage minimization for dynamic network slicing. We tested our formulations in numerical experiments with varying traffic profiles and numbers of nodes, up to a maximum of 32 nodes. For all problem instances tested, we were able to calculate an optimal schedule within 1 s, making our approach feasible for use in real deployment scenarios.
Future Internet doi: 10.3390/fi16030070
Authors: Mikael Sabuhi, Petr Musilek, Cor-Paul Bezemer
As the number of machine learning applications increases, growing concerns about data privacy expose the limitations of traditional cloud-based machine learning methods that rely on centralized data collection and processing. Federated learning emerges as a promising alternative, offering a novel approach to training machine learning models that safeguards data privacy. Federated learning facilitates collaborative model training across various entities. In this approach, each user trains models locally and shares only the local model parameters with a central server, which then generates a global model based on these individual updates. This approach ensures data privacy since the training data itself is never directly shared with a central entity. However, existing federated machine learning frameworks are not without challenges. In terms of server design, these frameworks exhibit limited scalability with an increasing number of clients and are highly vulnerable to system faults, particularly as the central server becomes a single point of failure. This paper introduces Micro-FL, a federated learning framework that uses a microservices architecture to implement the federated learning system. It demonstrates that the framework is fault-tolerant and scalable, showing its ability to handle an increasing number of clients. A comprehensive performance evaluation confirms that Micro-FL proficiently handles component faults, enabling a smooth and uninterrupted operation.
Future Internet doi: 10.3390/fi16030069
Authors: Davy Preuveneers, Wouter Joosen
Ontologies have the potential to play an important role in the cybersecurity landscape as they are able to provide a structured and standardized way to semantically represent and organize knowledge about a domain of interest. They help in unambiguously modeling the complex relationships between various cybersecurity concepts and properties. Leveraging this knowledge, they provide a foundation for designing more intelligent and adaptive cybersecurity systems. In this work, we propose an ontology-based cybersecurity framework that extends well-known cybersecurity ontologies to specifically model and manage threats imposed on applications, systems, and services that rely on artificial intelligence (AI). More specifically, our efforts focus on documenting prevalent machine learning (ML) threats and countermeasures, including the mechanisms by which emerging attacks circumvent existing defenses as well as the arms race between them. In the ever-expanding AI threat landscape, the goal of this work is to systematically formalize a body of knowledge intended to complement existing taxonomies and threat-modeling approaches of applications empowered by AI and to facilitate their automated assessment by leveraging enhanced reasoning capabilities.
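As a flavor of what such an ontology-driven representation looks like in practice, the toy rdflib sketch below encodes one ML threat, one countermeasure, and a mitigatedBy relation; the namespace, class names, and instances are invented for illustration and are not the authors' ontology.

```python
# Hypothetical mini-ontology relating an ML attack to its countermeasure.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

SEC = Namespace("http://example.org/ai-threats#")   # illustrative namespace
g = Graph()
g.bind("sec", SEC)

g.add((SEC.ModelEvasion, RDF.type, RDFS.Class))         # threat class
g.add((SEC.AdversarialTraining, RDF.type, RDFS.Class))  # countermeasure class
g.add((SEC.mitigatedBy, RDF.type, RDF.Property))        # relation between them

g.add((SEC.fgsmAttack, RDF.type, SEC.ModelEvasion))
g.add((SEC.advTrainingDefense, RDF.type, SEC.AdversarialTraining))
g.add((SEC.fgsmAttack, SEC.mitigatedBy, SEC.advTrainingDefense))

# automated assessment amounts to graph queries like this one:
for defense in g.objects(SEC.fgsmAttack, SEC.mitigatedBy):
    print(defense)   # http://example.org/ai-threats#advTrainingDefense
```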
Future Internet doi: 10.3390/fi16030068
Authors: Juliana Basulo-Ribeiro, Leonor Teixeira
With the advent of Industry 5.0 (I5.0), healthcare is undergoing a profound transformation, integrating human capabilities with advanced technologies to promote a patient-centered, efficient, and empathetic healthcare ecosystem. This study aims to examine the effects of Industry 5.0 on healthcare, emphasizing the synergy between human experience and technology. To this end, six specific objectives were defined and addressed in the results through an empirical study based on interviews with 11 healthcare professionals. This article thus outlines strategic and policy guidelines for the integration of I5.0 in healthcare, advocating policy-driven change, and contributes to the literature by offering a solid theoretical basis on I5.0 and its impact on the healthcare sector.
Future Internet doi: 10.3390/fi16030067
Authors: Paul Scalise, Matthew Boeding, Michael Hempel, Hamid Sharif, Joseph Delloiacovo, John Reed
With the rapid rollout and growing adoption of 3GPP 5th Generation (5G) cellular services, including in critical infrastructure sectors, it is important to review security mechanisms, risks, and potential vulnerabilities within this vital technology. Numerous security capabilities need to work together to ensure and maintain a sufficiently secure 5G environment that places user privacy and security at the forefront. Confidentiality, integrity, and availability are all pillars of a privacy and security framework that define major aspects of 5G operations. They are incorporated and considered in the design of the 5G standard by the 3rd Generation Partnership Project (3GPP) with the goal of providing a highly reliable network operation for all. Through a comprehensive review, we aim to analyze the ever-evolving landscape of 5G, including any potential attack vectors and proposed measures to mitigate or prevent these threats. This paper presents a comprehensive survey of the state-of-the-art research that has been conducted in recent years regarding 5G systems, focusing on the main components in a systematic approach: the Core Network (CN), Radio Access Network (RAN), and User Equipment (UE). Additionally, we investigate the utilization of 5G in time-dependent, ultra-confidential, and private communications built around a Zero Trust approach. In today’s world, where everything is more connected than ever, Zero Trust policies and architectures can be highly valuable in operations containing sensitive data. Realizing a Zero Trust Architecture entails continuous verification of all devices, users, and requests, regardless of their location within the network, and grants permission only to authorized entities. Finally, developments and proposed methods of new 5G and future 6G security approaches, such as Blockchain technology, post-quantum cryptography (PQC), and Artificial Intelligence (AI) schemes, are also discussed to better understand the full landscape of current and future research within this telecommunications domain.
Future Internet doi: 10.3390/fi16020066
Authors: Muhammad Nafees Ulfat Khan, Weiping Cao, Zhiling Tang, Ata Ullah, Wanghua Pan
The rapid development of the Internet of Things (IoT) has opened the way for transformative advances in numerous fields, including healthcare. IoT-based healthcare systems provide unprecedented opportunities to gather patients’ real-time data and make appropriate decisions at the right time. Yet, the deployed sensors generate normal readings most of the time, which are transmitted to Cluster Heads (CHs). Handling these voluminous duplicated data is quite challenging. The existing techniques have high energy consumption, storage costs, and communication costs. To overcome these problems, an innovative Energy-Efficient Fuzzy Data Aggregation System (EE-FDAS) is presented in this paper. At the first level, EE-FDAS checks whether a sensor generates a normal or a critical reading. In the first case, the reading is converted to the Boolean digit 0; this reduced representation takes only a single digit, which considerably reduces energy consumption. In the second scenario, sensors generating irregular readings transmit them in their original 16- or 32-bit form. The data are then aggregated and transmitted to the respective CHs. Afterwards, the data are further transmitted to Fog servers, from where doctors have access. Lastly, for later usage, the data are stored in the cloud server. To evaluate the performance of the proposed EE-FDAS scheme, extensive simulations are performed using NS-2.35. The results show that EE-FDAS performs well in terms of aggregation factor, energy consumption, packet drop rate, communication cost, and storage cost.
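A toy sketch of the two-level encoding described above, assuming a 1-byte marker for normal readings and a 32-bit float for critical ones; the normal range and packing format are illustrative assumptions.

```python
# Hypothetical sensor-side encoding: normal readings collapse to one byte,
# critical readings keep full 32-bit precision.
import struct

NORMAL_RANGE = (36.0, 37.5)   # e.g., body temperature in degrees Celsius

def encode(reading: float) -> bytes:
    lo, hi = NORMAL_RANGE
    if lo <= reading <= hi:
        return b"\x00"                    # single-byte "normal" marker
    return struct.pack(">f", reading)     # full 32-bit critical value

packets = [encode(r) for r in [36.8, 39.4, 37.0, 35.1]]
print([len(p) for p in packets])          # [1, 4, 1, 4] bytes sent to the CH
```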
Future Internet doi: 10.3390/fi16020065
Authors: Paolo Bellavista, Giuseppe Di Modica
A Digital Twin (DT) refers to a virtual representation or digital replica of a physical object, system, process, or entity. This concept involves creating a detailed, real-time digital counterpart that mimics the behavior, characteristics, and attributes of its physical counterpart. DTs have the potential to improve efficiency, reduce costs, and enhance decision-making by providing a detailed, real-time understanding of the physical systems they represent. While this technology is finding application in numerous fields, such as energy, healthcare, and transportation, it appears to be a key component of the digital transformation of industries fostered by the Fourth Industrial Revolution (Industry 4.0). In this paper, we present the research results achieved by IoTwins, a European research project aimed at investigating the opportunities and issues of adopting DTs in the fields of industrial manufacturing and facility management. In particular, we discuss a DT model and a reference architecture for use by the research community to implement a platform for the development and deployment of industrial DTs in the cloud continuum. Guided by the devised architecture’s principles, we implemented an open platform and a development methodology to help companies build DT-based industrial applications and deploy them in the so-called Edge/Cloud continuum. To prove the research value and the usability of the implemented platform, we discuss a simple yet practical development use case.
]]>Future Internet doi: 10.3390/fi16020063
Authors: Haohan Shi Xiyu Shi Safak Dogan
Audio inpainting plays an important role in addressing incomplete, damaged, or missing audio signals, contributing to improved quality of service and overall user experience in multimedia communications over the Internet and mobile networks. This paper presents an innovative solution for speech inpainting, i.e., a restoration task in which the missing parts of speech signals are recovered from the preceding information in the time domain, using Long Short-Term Memory (LSTM) networks. The lost or corrupted speech signals are also referred to as gaps. We regard the speech inpainting task as a time-series prediction problem in this research work. To address this problem, we designed multi-layer LSTM networks and trained them on different speech datasets. Our study aims to investigate the inpainting performance of the proposed models on different datasets and with varying LSTM layers and to explore the effect of multi-layer LSTM networks on the prediction of speech samples in terms of perceived audio quality. The inpainted speech quality is evaluated through the Mean Opinion Score (MOS) and a frequency analysis of the spectrogram. Our proposed multi-layer LSTM models are able to restore up to 1 s of gaps with high perceptual audio quality using the features captured from the time domain only. Specifically, for gap lengths under 500 ms, the MOS can reach up to 3~4, and for gap lengths ranging between 500 ms and 1 s, the MOS can reach up to 2~3. In the time domain, the proposed models can proficiently restore the envelope and trend of lost speech signals. In the frequency domain, the proposed models can restore spectrogram blocks with higher similarity to the original signals at frequencies below 2.0 kHz and comparatively lower similarity at frequencies in the range of 2.0 kHz~8.0 kHz.
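Read as a time-series prediction problem, the gap filling can be sketched with an autoregressive multi-layer LSTM. The PyTorch fragment below is a minimal illustration under assumed dimensions (frame length, hidden size, layer count), not the authors’ trained architecture:

```python
# Minimal PyTorch sketch of multi-layer LSTM gap prediction (dims are assumptions).
import torch
import torch.nn as nn

class InpaintLSTM(nn.Module):
    def __init__(self, frame=64, hidden=256, layers=3):
        super().__init__()
        self.lstm = nn.LSTM(frame, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, frame)  # predict the next frame of samples

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)
        return self.head(out), state

def inpaint(model, context, gap_frames):
    """Autoregressively predict `gap_frames` frames following `context`."""
    model.eval()
    with torch.no_grad():
        _, state = model.lstm(context)          # warm up on the pre-gap signal
        frame = context[:, -1:, :]
        filled = []
        for _ in range(gap_frames):
            out, state = model.lstm(frame, state)
            frame = model.head(out)
            filled.append(frame)
    return torch.cat(filled, dim=1)

model = InpaintLSTM()
context = torch.randn(1, 50, 64)   # ~50 frames of known speech before the gap
print(inpaint(model, context, gap_frames=8).shape)  # torch.Size([1, 8, 64])
```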
]]>Future Internet doi: 10.3390/fi16020064
Authors: Ryo Matsuoka Koichi Kobayashi Yuh Yamashita
The pickup and delivery problem with multiple agents has many applications, such as food delivery services and disaster rescue. In this problem, there are cases where fuel must be considered (e.g., when using drones as agents). There are also cases where demand forecasting should be considered (e.g., when a large number of orders are carried by a small number of agents). In this paper, we consider an online pickup and delivery problem that accounts for fuel and demand forecasting. First, the pickup and delivery problem with fuel constraints is formulated, with the information on demand forecasting included in the cost function. Based on the orders, the agents’ paths (e.g., the paths from stores to customers) are calculated. We suppose that the target area is given by an undirected graph. Using this graph, several constraints, such as those on the moves and fuel of the agents, are introduced. This problem is reduced to a mixed integer linear programming (MILP) problem. Next, in online optimization, the MILP problem is solved depending on the acceptance of orders; owing to new orders, the calculated future paths may change. Finally, a numerical example demonstrates the effectiveness of the proposed method.
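The reduction to a MILP can be illustrated on a toy instance. The sketch below, using the PuLP solver, assigns orders to fuel-limited agents at minimum travel cost; the costs, budgets, and single-assignment structure are simplifying assumptions and omit the paper’s graph-based movement constraints and demand-forecast term:

```python
# Toy MILP: assign orders to agents subject to fuel budgets (PuLP sketch).
import pulp

agents = ["a1", "a2"]
orders = ["o1", "o2", "o3"]
cost = {("a1", "o1"): 4, ("a1", "o2"): 6, ("a1", "o3"): 9,
        ("a2", "o1"): 7, ("a2", "o2"): 3, ("a2", "o3"): 5}  # assumed path costs
fuel = {"a1": 10, "a2": 8}                                  # assumed fuel budgets

prob = pulp.LpProblem("pickup_delivery", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(cost), cat="Binary")    # x[a,o] = 1 if a serves o

prob += pulp.lpSum(cost[k] * x[k] for k in cost)            # total travel cost
for o in orders:                                            # every order served once
    prob += pulp.lpSum(x[(a, o)] for a in agents) == 1
for a in agents:                                            # fuel used <= budget
    prob += pulp.lpSum(cost[(a, o)] * x[(a, o)] for o in orders) <= fuel[a]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([k for k in cost if x[k].value() > 0.5])  # chosen (agent, order) pairs
```

In the online setting, a problem like this would be re-solved whenever a new order is accepted, possibly changing the previously planned paths.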
]]>Future Internet doi: 10.3390/fi16020062
Authors: Salvatore Calcagno Andrea Calvagna Emiliano Tramontana Gabriella Verga
The Electronic Health Record (EHR) is a system for collecting and storing patient medical records as data that can be mechanically accessed, hence facilitating and assisting the medical decision-making process. EHRs exist in several formats, and each format lists thousands of keywords to classify patients’ data. The keywords are specific and are medical jargon; hence, data classification is very accurate. As the keywords constituting the formats of medical records express concepts by means of specific jargon without definitions or references, their proper use is left to clinicians and could be affected by their background, hence the interpretation of data can become slower or less accurate than desired. This article presents an approach that accurately relates data in EHRs to ontologies in the medical realm. Thanks to ontologies, clinicians can be assisted when writing or analysing health records, e.g., our solution promptly suggests rigorous definitions for scientific terms, and automatically connects data spread over several parts of EHRs. The first step of our approach converts selected data and keywords from several EHR formats into a format that is easier to parse; the second step merges the extracted data with specialised medical ontologies. Finally, enriched versions of the medical data are made available to professionals. The proposed approach was validated on real-world samples of medical records and ontologies. The results show versatility in handling data, precision in query results, and appropriate suggestions for relations among medical records.
]]>Future Internet doi: 10.3390/fi16020061
Authors: Dennis Papenfuß Bennet Gerlach Stefan Fischer Mohamed Ahmed Hail
The IoT encompasses objects, sensors, and everyday items not typically considered computers. IoT devices are subject to severe energy, memory, and computation power constraints. Employing Named Data Networking (NDN) for the IoT is a recent approach to accommodate these issues. To gain a deeper insight into how different network parameters affect energy consumption, analyzing a range of parameters using hyperparameter optimization seems reasonable. The experiments from this work’s ndnSIM-based hyperparameter setup indicate that the data packet size has the most significant impact on energy consumption, followed by the caching scheme, caching strategy, and finally, the forwarding strategy. The energy footprint of these parameters is orders of magnitude apart. Surprisingly, the packet request sequence influences the caching parameters’ energy footprint more than the graph size and topology. Regarding energy consumption, the results indicate that data compression may be more relevant than expected, and caching may be more significant than the forwarding strategy. The framework for ndnSIM developed in this work can be used to simulate NDN networks more efficiently. Furthermore, the work presents a valuable basis for further research on the effect of specific parameter combinations not examined before.
]]>Future Internet doi: 10.3390/fi16020060
Authors: Qian Qu Mohsen Hatami Ronghua Xu Deeraj Nagothu Yu Chen Xiaohua Li Erik Blasch Erika Ardiles-Cruz Genshe Chen
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse.
]]>Future Internet doi: 10.3390/fi16020059
Authors: Tianjie Fu Peiyu Li Chenke Shi Youzhu Liu
The growing demand across various industries has led to an increasing need for superior-grade steel. The quality of slab ingots is a pivotal factor influencing the final quality of steel production. However, the current level of intelligence in the steelmaking industry’s processes is relatively insufficient. Consequently, slab ingot quality inspection is characterized by high-temperature risks and imprecision. The positional accuracy of quality detection is inadequate, and the precise quantification of slab ingot production and quality remains challenging. This paper proposes a digital twin (DT)-based monitoring system for the slab ingot production process that integrates DT technology with slab ingot process detection. A neural network is introduced for defect identification to ensure precise defect localization and efficient recognition. Concurrently, environmental production factors are considered, leading to the introduction of a defect prediction module. The effectiveness of this system is validated through experimental verification.
]]>Future Internet doi: 10.3390/fi16020058
Authors: Adedamola Adesokan Rowan Kinney Eirini Eleni Tsiropoulou
This paper tackles the challenges inherent in crowdsourcing dynamics by introducing the CROWDMATCH mechanism. Aimed at enabling crowdworkers to strategically select suitable crowdsourcers while contributing information to crowdsourcing tasks, CROWDMATCH considers incentives, information availability and cost, and the decisions of fellow crowdworkers to model the utility functions for both the crowdworkers and the crowdsourcers. Specifically, the paper presents an initial Approximate CROWDMATCH mechanism grounded in matching theory principles, eliminating externalities from crowdworkers’ decisions and enabling each entity to maximize its utility. Subsequently, the Accurate CROWDMATCH mechanism is introduced, which is initiated by the outcome of the Approximate CROWDMATCH mechanism, and coalition game-theoretic principles are employed to refine the matching process by accounting for externalities. The paper’s contributions include the introduction of the CROWDMATCH system model, the development of both Approximate and Accurate CROWDMATCH mechanisms, and a demonstration of their superior performance through comprehensive simulation results. The mechanisms’ scalability in large-scale crowdsourcing systems and their operational advantages distinguish them from existing methods and demonstrate their efficacy in empowering crowdworkers in crowdsourcer selection.
]]>Future Internet doi: 10.3390/fi16020057
Authors: Mohammed Bellaj Najib Naja Abdellah Jamali
Named Data Networking (NDN) has emerged as a promising architecture to overcome the limitations of the conventional Internet Protocol (IP) architecture, particularly in terms of mobility, security, and data availability. However, despite the advantages it offers, producer mobility management remains a significant challenge for NDN, especially for moving vehicles and emerging technologies such as Unmanned Aerial Vehicles (UAVs), known for their high-speed and unpredictable movements, which make it difficult for NDN to maintain seamless communication. To solve this mobility problem, we propose a Distributed Mobility Management Scheme (DMMS) to support UAV mobility and ensure low-latency content delivery in the NDN architecture. DMMS utilizes decentralized Anchors to proactively forward the consumer’s Interest packets toward the producer’s predicted location when a handoff occurs. Moreover, it introduces a new forwarding approach that combines the standard and location-based forwarding strategies to improve forwarding efficiency under producer mobility without changing the network structure. Using a realistic scenario, DMMS is evaluated and compared against two well-known solutions, namely MAP-ME and Kite, using ndnSIM simulations. We demonstrate that DMMS achieves better results compared to the Kite and MAP-ME solutions in terms of network cost and consumer quality-of-service metrics.
]]>Future Internet doi: 10.3390/fi16020056
Authors: Ayman Khalil Besma Zeddini
The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, and sustainable networking solutions has never been more pressing. Opportunistic networks, characterized by intermittent connectivity and dynamic network conditions, present unique challenges that necessitate innovative approaches for optimal performance and sustainability. This paper introduces a groundbreaking paradigm that integrates the principles of cybersecurity with opportunistic networks. At its core, this study presents a novel routing protocol meticulously designed to significantly outperform existing solutions concerning key metrics such as delivery probability, overhead ratio, and communication delay. Leveraging cybersecurity’s inherent strengths, our protocol not only fortifies the network’s security posture but also provides a foundation for enhancing efficiency and sustainability in opportunistic networks. The overarching goal of this paper is to address the inherent limitations of conventional opportunistic network protocols. By proposing an innovative routing protocol, we aim to optimize data delivery, minimize overhead, and reduce communication latency. These objectives are crucial for ensuring seamless and timely information exchange, especially in scenarios where traditional networking infrastructures fall short. Through large-scale simulations, the new model proves its effectiveness in different scenarios, especially in terms of message delivery probability, while ensuring reasonable overhead and latency.
]]>Future Internet doi: 10.3390/fi16020055
Authors: Meng Li Jiqiang Liu Yeping Yang
Data governance is an extremely important protection and management measure throughout the entire life cycle of data. However, there are still data governance issues, such as data security risks, data privacy breaches, and difficulties in data management and access control. These problems lead to a risk of data breaches and abuse. Therefore, the security classification and grading of data has become an important task to accurately identify sensitive data and adopt appropriate maintenance and management measures for different sensitivity levels. This work starts from the problems in current data security classification and grading practice, such as inconsistent classification and grading standards, difficult data acquisition and sorting, and weak semantic information in data fields, to identify the limitations of current methods and the direction for improvement. The automatic identification method for sensitive financial data proposed in this paper is based on topic analysis and incorporates Jieba word segmentation, word frequency statistics, the skip-gram model, K-means clustering, and other techniques. Expert assistance was sought to select appropriate keywords for enhanced accuracy. The descriptive text library and real business data of a Chinese financial institution were used for training and testing to further demonstrate the method’s effectiveness and usefulness. The evaluation indicators illustrate the effectiveness of this method for data security classification. The proposed method addresses the challenge of sensitivity-level division in texts with limited semantic information, overcomes the limitations on model expansion across different domains, and provides an optimized application model. These results also point the way toward real-time updating of the method.
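The pipeline the authors assemble (Jieba segmentation, skip-gram embeddings, K-means clustering) can be sketched end to end. In the fragment below, the corpus, vector size, and cluster count are toy assumptions:

```python
# Sketch of the field-description pipeline: Jieba tokens -> skip-gram -> K-means.
# Corpus, vector size, and cluster count are illustrative assumptions.
import jieba
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

descriptions = ["客户手机号码", "客户身份证号", "账户余额", "交易金额"]  # toy field texts
tokenized = [jieba.lcut(d) for d in descriptions]

w2v = Word2Vec(tokenized, vector_size=32, sg=1,  # sg=1 selects skip-gram
               min_count=1, window=2, epochs=50)

def embed(tokens):
    """Mean of the token vectors as a crude field-description embedding."""
    return np.mean([w2v.wv[t] for t in tokens if t in w2v.wv], axis=0)

X = np.stack([embed(t) for t in tokenized])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for desc, lab in zip(descriptions, labels):
    print(lab, desc)   # fields grouped into candidate sensitivity topics
```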
]]>Future Internet doi: 10.3390/fi16020054
Authors: Martin Kenyeres Ivana Budinská Ladislav Hluchý Agostino Poggi
The term “multi-agent system” is generally understood as an interconnected set of independent entities that can effectively solve complex and time-consuming problems exceeding the individual abilities of common problem solvers [...]
]]>Future Internet doi: 10.3390/fi16020053
Authors: Massimo Cafaro Italo Epicoco Marco Pulimeno
This Special Issue aims to provide a comprehensive overview of the current state of the art in Future Internet Technology in Italy [...]
]]>Future Internet doi: 10.3390/fi16020052
Authors: Adwitiya Mukhopadhyay Aryadevi Remanidevi Devidas Venkat P. Rangan Maneesha Vinodini Ramesh
Addressing the inadequacy of medical facilities in rural communities and the high number of patients affected by ailments that need to be treated immediately is of prime importance for all countries. The various recent healthcare emergency situations bring out the importance of telemedicine and demand rapid transportation of patients to nearby hospitals with available resources to provide the required medical care. Many current healthcare facilities and ambulances are not equipped to provide real-time risk assessment for each patient and dynamically provide the required medical interventions. This work proposes an IoT-based mobile medical edge (IM2E) node to be integrated with wearable and portable devices for the continuous monitoring of emergency patients transported via ambulances. It delves deeper into the existing challenges, such as (a) a lack of a simplified patient risk scoring system, (b) the need for an architecture that enables seamless communication under dynamically varying QoS requirements, and (c) the need for context-aware knowledge regarding the effect of end-to-end delay and the packet loss ratio (PLR) on the real-time monitoring of health risks in emergency patients. The proposed work builds a data path selection model to identify the most effective path through which to route the data packets. The signal-to-noise interference ratio and the fading in the path are chosen to analyze the suitable path for data transmission.
]]>Future Internet doi: 10.3390/fi16020051
Authors: Ming-Yen Lin Ping-Chun Wu Sue-Chen Hsueh
This study introduces session-aware recommendation models, leveraging GRU (Gated Recurrent Unit) and attention mechanisms for advanced latent interaction data integration. A primary advancement is enhancing latent context, a critical factor for boosting recommendation accuracy. We address the existing models’ rigidity by dynamically blending short-term (most recent) and long-term (historical) preferences, moving beyond static period definitions. Our approaches, pre-combination (LCII-Pre) and post-combination (LCII-Post), with fixed (Fix) and flexible learning (LP) weight configurations, are thoroughly evaluated. We conducted extensive experiments to assess our models’ performance on public datasets such as Amazon and MovieLens 1M. Notably, on the MovieLens 1M dataset, LCII-PreFix achieved a 1.85% and 2.54% higher Recall@20 than II-RNN and BERT4Rec+st+TSA, respectively. On the Steam dataset, LCII-PostLP outperformed these models by 18.66% and 5.5%. Furthermore, on the Amazon dataset, LCII showed a 2.59% and 1.89% improvement in Recall@20 over II-RNN and CAII. These results affirm the significant enhancement our models bring to session-aware recommendation systems, showcasing their potential for both academic and practical applications in the field.
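The core idea of blending short-term and long-term preferences with a learnable weight, as in the LP configurations, can be sketched in PyTorch. The dimensions, the mean-pooled long-term encoder, and the single blend parameter are illustrative assumptions, not the exact LCII architecture:

```python
# Sketch of blending short- and long-term session preferences (dims assumed).
import torch
import torch.nn as nn

class BlendedSessionRec(nn.Module):
    def __init__(self, n_items=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable blend weight ("LP")
        self.out = nn.Linear(dim, n_items)

    def forward(self, recent, historical):
        # short-term preference: last GRU state over the most recent clicks
        _, h_short = self.gru(self.emb(recent))
        # long-term preference: mean-pooled embedding of historical interactions
        h_long = self.emb(historical).mean(dim=1, keepdim=True).transpose(0, 1)
        blended = self.alpha * h_short + (1 - self.alpha) * h_long
        return self.out(blended.squeeze(0))            # scores over all items

model = BlendedSessionRec()
recent = torch.randint(0, 1000, (8, 5))       # last 5 clicks per session
historical = torch.randint(0, 1000, (8, 40))  # older interactions
print(model(recent, historical).shape)        # torch.Size([8, 1000])
```

A fixed ("Fix") configuration would simply freeze alpha instead of learning it.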
]]>Future Internet doi: 10.3390/fi16020050
Authors: Pradeep Kumar Guo-Liang Shih Bo-Lin Guo Siva Kumar Nagi Yibeltal Chanie Manie Cheng-Kai Yao Michael Augustine Arockiyadoss Peng-Chun Peng
Violent attacks have been a pressing issue in recent years. In the presence of closed-circuit televisions (CCTVs) in smart cities, there is an emerging challenge in apprehending criminals, leading to a need for innovative solutions. In this paper, we propose a model aimed at enhancing real-time emergency response capabilities and swiftly identifying criminals. This initiative aims to foster a safer environment and better manage criminal activity within smart cities. The proposed architecture combines an image-to-image stable diffusion model with violence detection and pose estimation approaches. The diffusion model generates synthetic data, while the object detection approach uses YOLO v7 to identify violent objects like baseball bats, knives, and pistols, complemented by MediaPipe for action detection. Further, a long short-term memory (LSTM) network classifies action attacks involving violent objects. Subsequently, the entire proposed model is deployed onto an edge device for real-time data testing using a dash camera. Thus, this study can handle violent attacks and send alerts in emergencies. As a result, our proposed YOLO model achieves a mean average precision (MAP) of 89.5% for violent attack detection, and the LSTM classifier model achieves an accuracy of 88.33% for violent action classification. The results highlight the model’s enhanced capability to accurately detect violent objects, particularly in effectively identifying violence through the implemented artificial intelligence system.
]]>Future Internet doi: 10.3390/fi16020048
Authors: Azizah Assiri Hassen Sallay
Opportunistic mobile social networks (OMSNs) have become increasingly popular in recent years due to the rise of social media and smartphones. However, message forwarding and sharing social information through intermediary nodes on OMSNs raises privacy concerns as personal data and activities become more exposed. Therefore, maintaining privacy without limiting efficient social interaction is a challenging task. This paper addresses this specific problem of safeguarding user privacy during message forwarding by integrating a privacy layer on the state-of-the-art OMSN routing decision models that empowers users to control their message dissemination. Mainly, we present three user-centric privacy-aware forwarding modes guiding the selection of the next hop in the forwarding path based on social metrics such as common friends and exchanged messages between OMSN nodes. More specifically, we define different social relationship strengths approximating real-world scenarios (familiar, weak tie, stranger) and trust thresholds to give users choices on trust levels for different social contexts and guide the routing decisions. We evaluate the privacy enhancement and network performance through extensive simulations using the ONE simulator for several routing schemes (Epidemic, Prophet, and Spray and Wait) and different movement models (random waypoint, bus, and working day). We demonstrate that our modes can enhance privacy by up to 45% in various network scenarios, as measured by the reduction in the likelihood of unintended message propagation, while keeping the message-delivery process effective and efficient.
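A minimal sketch of the privacy layer’s next-hop filtering follows; the tie-strength formula and the per-mode trust thresholds are illustrative assumptions, not the paper’s calibrated values:

```python
# Sketch of a privacy-aware next-hop filter (weights/thresholds are assumptions).

MODES = {"familiar": 0.6, "weak_tie": 0.3, "stranger": 0.0}  # assumed thresholds

def tie_strength(common_friends: int, exchanged_msgs: int) -> float:
    """Crude social-tie score in [0, 1] from two of the paper's social metrics."""
    return min(1.0, 0.1 * common_friends + 0.02 * exchanged_msgs)

def eligible_next_hops(neighbors, mode):
    """Keep only encountered nodes whose tie strength meets the mode's threshold."""
    thr = MODES[mode]
    return [n for n, (friends, msgs) in neighbors.items()
            if tie_strength(friends, msgs) >= thr]

neighbors = {"nodeA": (8, 30), "nodeB": (1, 4), "nodeC": (0, 0)}
print(eligible_next_hops(neighbors, "familiar"))  # only strong ties may relay
print(eligible_next_hops(neighbors, "stranger"))  # any encountered node may relay
```

Under this reading, the underlying routing scheme (Epidemic, Prophet, Spray and Wait) only ever sees the filtered neighbor set.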
]]>Future Internet doi: 10.3390/fi16020049
Authors: Hamid Saadatfar Hamid Gholampour Ahangar Javad Hassannataj Joloudari
Resource pricing in cloud computing has become one of the main challenges for cloud providers. The challenge is determining a fair and appropriate price to satisfy users and resource providers. To establish a justifiable price, it is imperative to take into account the circumstances and requirements of both the provider and the user. This research tries to provide a pricing mechanism for cloud computing based on game theory. The suggested approach considers three aspects: the likelihood of faults, the interplay among virtual machines, and the amount of energy used, in order to determine a justifiable price. In the game that is being proposed, the provider is responsible for determining the price of the virtual machine that can be made available to the user on each physical machine. The user, on the other hand, has the authority to choose between the virtual machines that are offered in order to run their application. The whole game is implemented as a function of the resource broker component. The proposed mechanism is simulated and evaluated using the CloudSim simulator. Its performance is compared with several previous recent mechanisms. The results indicate that the suggested mechanism has successfully identified a more rational price for both the user and the provider, consequently enhancing the overall profitability of the cloud system.
]]>Future Internet doi: 10.3390/fi16020047
Authors: Andreas F. Gkontzis Sotiris Kotsiantis Georgios Feretzakis Vassilios S. Verykios
Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of the urban environment, fostering real-time monitoring, simulation, and analysis of urban systems. These capabilities support test scenarios that identify bottlenecks and enhance smart city efficiency. This paper delves into the crucial roles of citizen report analytics, prediction, and digital twin technologies at the neighborhood level. The study integrates extract, transform, load (ETL) processes, artificial intelligence (AI) techniques, and a digital twin methodology to process and interpret urban data streams derived from citizen interactions with the city’s coordinate-based problem mapping platform. Using an interactive GeoDataFrame within the digital twin methodology, dynamic entities facilitate simulations based on various scenarios, allowing users to visualize, analyze, and predict the response of the urban system at the neighborhood level. This approach reveals antecedent and predictive patterns, trends, and correlations at the physical level of each city area, leading to improvements in urban functionality, resilience, and resident quality of life.
]]>Future Internet doi: 10.3390/fi16020046
Authors: Erica Corda Silvia M. Massa Daniele Riboni
As several studies demonstrate, good sleep quality is essential for individuals’ well-being, as a lack of restoring sleep may disrupt different physical, mental, and social dimensions of health. For this reason, there is increasing interest in tools for the monitoring of sleep based on personal sensors. However, there are currently few context-aware methods to help individuals to improve their sleep quality through behavior change tips. In order to tackle this challenge, in this paper, we propose a system that couples machine learning algorithms and large language models to forecast the next night’s sleep quality, and to provide context-aware behavior change tips to improve sleep. In order to encourage adherence and to increase trust, our system includes the use of large language models to describe the conditions that the machine learning algorithm finds harmful to sleep health, and to explain why the behavior change tips are generated as a consequence. We develop a prototype of our system, including a smartphone application, and perform experiments with a set of users. Results show that our system’s forecast is correlated to the actual sleep quality. Moreover, a preliminary user study suggests that the use of large language models in our system is useful in increasing trust and engagement.
]]>Future Internet doi: 10.3390/fi16020045
Authors: Marcin Aftowicz Ievgen Kabin Zoya Dyka Peter Langendörfer
While IoT technology makes industries, cities, and homes smarter, it also opens the door to security risks. With the right equipment and physical access to the devices, the attacker can leverage side-channel information, like timing, power consumption, or electromagnetic emanation, to compromise cryptographic operations and extract the secret key. This work presents a side-channel analysis of a cryptographic hardware accelerator for the Elliptic Curve Scalar Multiplication operation, implemented in a Field-Programmable Gate Array and as an Application-Specific Integrated Circuit. The presented framework consists of an initial key extraction using a state-of-the-art statistical horizontal attack, followed by regularized Artificial Neural Networks, which take, as input, the partially incorrect key guesses from the horizontal attack and correct them iteratively. The initial correctness of the horizontal attack, measured as the fraction of correctly extracted bits of the secret key, was improved from 75% to 98% by applying the iterative learning.
]]>Future Internet doi: 10.3390/fi16020044
Authors: Arturs Kempelis Inese Polaka Andrejs Romanovs Antons Patlins
Urban agriculture presents unique challenges, particularly in the context of microclimate monitoring, which is increasingly important in food production. This paper explores the application of convolutional neural networks (CNNs) to forecast key sensor measurements from thermal images within this context. This research focuses on using thermal images to forecast sensor measurements of relative air humidity, soil moisture, and light intensity, which are integral to plant health and productivity in urban farming environments. The results indicate a higher accuracy in forecasting relative air humidity and soil moisture levels, with Mean Absolute Percentage Errors (MAPEs) within the range of 10–12%. These findings correlate with the strong dependency of these parameters on thermal patterns, which are effectively extracted by the CNNs. In contrast, the forecasting of light intensity proved to be more challenging, yielding lower accuracy. The reduced performance is likely due to the more complex and variable factors that affect light in urban environments. The insights gained from the higher predictive accuracy for relative air humidity and soil moisture may inform targeted interventions for urban farming practices, while the lower accuracy in light intensity forecasting highlights the need for further research into the integration of additional data sources or hybrid modeling approaches. The conclusion suggests that the integration of these technologies can significantly enhance the predictive maintenance of plant health, leading to more sustainable and efficient urban farming practices. However, the study also acknowledges the challenges in implementing these technologies in urban agricultural models.
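A minimal PyTorch sketch of the regression setup, mapping a thermal frame to the three sensor targets, is shown below; the input resolution, network depth, and MAE-style loss are assumptions rather than the paper’s exact configuration:

```python
# Sketch: CNN regressing humidity, soil moisture, and light from a thermal frame.
# Input size and architecture are assumptions, not the paper's exact network.
import torch
import torch.nn as nn

class ThermalToSensors(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 3)  # [humidity, soil moisture, light]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ThermalToSensors()
frames = torch.randn(8, 1, 120, 160)       # batch of single-channel thermal images
targets = torch.randn(8, 3)
loss = nn.functional.l1_loss(model(frames), targets)  # MAE-style objective
loss.backward()
print(loss.item())
```

The paper’s finding that light intensity is the hardest target would show up here as a persistently larger per-target error on the third output.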
]]>Future Internet doi: 10.3390/fi16020043
Authors: Sergio Jesús González-Ambriz Rolando Menchaca-Méndez Sergio Alejandro Pinacho-Castellanos Mario Eduardo Rivero-Ángeles
This paper presents the spectral gap-based topology control algorithm (SGTC) for wireless backhaul networks, a novel approach that employs the Laplacian Spectral Gap (LSG) to find expander-like graphs that optimize the topology of the network in terms of robustness, diameter, energy cost, and network entropy. The latter measures the network’s ability to promote seamless traffic offloading from the Macro Base Stations to smaller cells by providing a high diversity of shortest paths connecting all the stations. Given the practical constraints imposed by cellular technologies, the proposed algorithm uses simulated annealing to search for feasible network topologies with a large LSG. Then, it computes the Pareto front of the set of feasible solutions found during the annealing process when considering robustness, diameter, and entropy as objective functions. The algorithm’s result is the Pareto efficient solution that minimizes energy cost. A set of experimental results shows that by optimizing the LSG, the proposed algorithm simultaneously optimizes the set of desirable topological properties mentioned above. The results also revealed that generating networks with good spectral expansion is possible even under the restrictions imposed by current wireless technologies. This is a desirable feature because these networks have strong connectivity properties even if they do not have a large number of links.
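The annealing objective, the Laplacian Spectral Gap, is straightforward to compute. The sketch below scores two small topologies with networkx/numpy; reading the LSG as the gap between the two smallest Laplacian eigenvalues (the algebraic connectivity for a connected graph) is our assumption of the paper’s definition:

```python
# Sketch: score candidate backhaul topologies by Laplacian spectral gap.
import networkx as nx
import numpy as np

def laplacian_spectral_gap(G):
    """Gap between the two smallest Laplacian eigenvalues; for a connected graph
    this is the algebraic connectivity, and larger values indicate expander-like
    robustness."""
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    return lam[1] - lam[0]

ring = nx.cycle_graph(10)
expander = nx.random_regular_graph(4, 10, seed=1)
print(f"ring LSG     = {laplacian_spectral_gap(ring):.3f}")
print(f"expander LSG = {laplacian_spectral_gap(expander):.3f}")  # typically larger
```

In the annealing loop, a candidate rewiring would be accepted or rejected based on how it changes this score, subject to the cellular-technology constraints.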
]]>Future Internet doi: 10.3390/fi16020042
Authors: Aristeidis Karras Anastasios Giannaros Christos Karras Leonidas Theodorakopoulos Constantinos S. Mammassis George A. Krimpas Spyros Sioutas
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms on varying numbers of Raspberry Pi devices, ranging from one to ten. The experimental results are promising, as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI.
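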
]]>Future Internet doi: 10.3390/fi16020041
Authors: Melania Nitu Mihai Dascalu
Machine-generated content reshapes the landscape of digital information; hence, ensuring the authenticity of texts within digital libraries has become a paramount concern. This work introduces a corpus of approximately 60 k Romanian documents, including human-written samples as well as generated texts using six distinct Large Language Models (LLMs) and three different generation methods. Our robust experimental dataset covers five domains, namely books, news, legal, medical, and scientific publications. The exploratory text analysis revealed differences between human-authored and artificially generated texts, exposing the intricacies of lexical diversity and textual complexity. Since Romanian is a less-resourced language requiring dedicated detectors on which out-of-the-box solutions do not work, this paper introduces two techniques for discerning machine-generated texts. The first method leverages a Transformer-based model to categorize texts as human- or machine-generated, while the second method extracts and examines linguistic features, such as the top textual complexity indices identified via Kruskal–Wallis mean rank and burstiness, which are further fed into a machine-learning model leveraging an extreme gradient-boosting decision tree. The methods show competitive performance, with the first technique’s results outperforming the second one in two out of five domains, reaching an F1 score of 0.96. Our study also includes a text similarity analysis between human-authored and artificially generated texts, coupled with a SHAP analysis to understand which linguistic features contribute more to the classifier’s decision.
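The second detection method can be sketched as a two-step recipe: rank features by the Kruskal–Wallis H statistic, then classify on the top-ranked ones with gradient-boosted trees. The fragment below uses synthetic "linguistic features", and sklearn’s gradient boosting stands in for the paper’s extreme gradient-boosting model:

```python
# Sketch of the feature-based detector: Kruskal-Wallis ranking feeding boosted trees.
# Data is synthetic; sklearn's booster stands in for the paper's XGBoost-style model.
import numpy as np
from scipy.stats import kruskal
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# toy "linguistic features": columns 0-1 differ between classes, column 2 is noise
human = np.column_stack([rng.normal(1.0, 1, n), rng.normal(2.0, 1, n),
                         rng.normal(0.0, 1, n)])
machine = np.column_stack([rng.normal(0.2, 1, n), rng.normal(2.6, 1, n),
                           rng.normal(0.0, 1, n)])
X = np.vstack([human, machine])
y = np.array([0] * n + [1] * n)

# rank features by Kruskal-Wallis H statistic across the two classes
scores = [kruskal(human[:, j], machine[:, j]).statistic for j in range(X.shape[1])]
top = np.argsort(scores)[::-1][:2]
print("selected feature indices:", top)

Xtr, Xte, ytr, yte = train_test_split(X[:, top], y, random_state=0)
clf = GradientBoostingClassifier().fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```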
]]>Future Internet doi: 10.3390/fi16020040
Authors: Mahmud Hossain Golam Kayas Ragib Hasan Anthony Skjellum Shahid Noor S. M. Riazul Islam
Driven by the rapid escalation of its utilization, as well as ramping commercialization, Internet of Things (IoT) devices increasingly face security threats. Apart from denial of service, privacy, and safety concerns, compromised devices can be used as enablers for committing a variety of crimes and e-crimes. Despite ongoing research and study, there remains a significant gap in the thorough analysis of security challenges, feasible solutions, and open security problems for IoT. To bridge this gap, we provide a comprehensive overview of the state of the art in IoT security with a critical investigation-based approach. This includes a detailed analysis of vulnerabilities in IoT-based systems and potential attacks. We present a holistic review of the security properties required to be adopted by IoT devices, applications, and services to mitigate IoT vulnerabilities and, thus, successful attacks. Moreover, we identify challenges to the design of security protocols for IoT systems in which constituent devices vary markedly in capability (such as storage, computation speed, hardware architecture, and communication interfaces). Next, we review existing research and feasible solutions for IoT security. We highlight a set of open problems not yet addressed among existing security solutions. We provide a set of new perspectives for future research on such issues, including secure service discovery, on-device credential security, and network anomaly detection. We also provide directions for designing a forensic investigation framework for IoT infrastructures to inspect relevant criminal cases, execute a cyber forensic process, and determine the facts about a given incident. This framework offers a means to better capture information on successful attacks as part of a feedback mechanism to thwart future vulnerabilities and threats. This systematic holistic review will both inform on current challenges in IoT security and ideally motivate their future resolution.
]]>Future Internet doi: 10.3390/fi16020039
Authors: Ricardo Lopes Marcello Trovati Ella Pereira
Industry 4.0 has become a crucial part of the majority of processes, components, and related modelling, as well as of predictive tools that allow a more efficient, automated, and sustainable approach to industry. The availability of large quantities of data, and the advances in IoT, AI, and data-driven frameworks, have led to enhanced data gathering, assessment, and extraction of actionable information, resulting in a better decision-making process. Product picking and its subsequent packing constitute an important area that has drawn increasing attention from the research community. However, depending on the context, some of the related approaches tend to be either highly mathematical or applied only to a specific context. This article aims to provide a survey of the main methods, techniques, and frameworks relevant to product packing and to highlight the main properties and features that should be further investigated to ensure a more efficient and optimised approach.
]]>Future Internet doi: 10.3390/fi16020038
Authors: Min Ma Shanrong Liu Shufei Wang Shengnan Shi
Automatic modulation classification (AMC) plays a crucial role in wireless communication by identifying the modulation scheme of received signals, bridging signal reception and demodulation. Its main challenge lies in performing accurate signal processing without prior information. While deep learning has been applied to AMC, its effectiveness largely depends on the availability of labeled samples. To address the scarcity of labeled data, we introduce a novel semi-supervised AMC approach combining consistency regularization and pseudo-labeling. This method capitalizes on the inherent data distribution of unlabeled data to supplement the limited labeled data. Our approach involves a dual-component objective function for model training: one part focuses on the loss from labeled data, while the other addresses the regularized loss for unlabeled data, enhanced through two distinct levels of data augmentation. These combined losses concurrently refine the model parameters. Our method demonstrates superior performance over established benchmark algorithms, such as decision trees (DTs), support vector machines (SVMs), pi-models, and virtual adversarial training (VAT). It exhibits a marked improvement in the recognition accuracy, particularly when the proportion of labeled samples is as low as 1–4%.
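The dual-component objective can be sketched in a FixMatch-like form: a supervised loss on labeled samples plus a pseudo-label consistency loss between weakly and strongly augmented unlabeled samples. The noise-based augmentations, confidence threshold, and toy classifier below are our assumptions, not the paper’s exact design:

```python
# Sketch of the dual-component semi-supervised loss (a FixMatch-style reading of
# the paper's consistency + pseudo-labeling; augmentations/threshold assumed).
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, xl, yl, xu, tau=0.95, lam=1.0):
    sup = F.cross_entropy(model(xl), yl)                 # loss on labeled I/Q data

    weak = xu + 0.01 * torch.randn_like(xu)              # weak augmentation (noise)
    strong = xu + 0.10 * torch.randn_like(xu)            # strong augmentation

    with torch.no_grad():
        probs = F.softmax(model(weak), dim=1)
        conf, pseudo = probs.max(dim=1)                  # pseudo-labels, weak view
    mask = (conf >= tau).float()                         # keep confident ones only

    unsup = (F.cross_entropy(model(strong), pseudo, reduction="none") * mask).mean()
    return sup + lam * unsup                             # combined objective

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(2 * 128, 11))
xl, yl = torch.randn(16, 2, 128), torch.randint(0, 11, (16,))  # small labeled set
xu = torch.randn(112, 2, 128)                                   # unlabeled majority
print(semi_supervised_loss(model, xl, yl, xu).item())
```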
]]>Future Internet doi: 10.3390/fi16020037
Authors: Shiva Raj Pokhrel Jonathan Kua Deol Satish Sebnem Ozer Jeff Howe Anwar Walid
We introduce a novel multipath data transport approach at the transport layer referred to as ‘Deep Deterministic Policy Gradient for Multipath Performance-oriented Congestion Control’ (DDPG-MPCC), which leverages deep reinforcement learning to enhance congestion management in multipath networks. Our method combines DDPG with online convex optimization to optimize fairness and performance in simultaneously challenging multipath internet congestion control scenarios. Through experiments by developing kernel implementation, we show how DDPG-MPCC performs compared to the state-of-the-art solutions.
]]>Future Internet doi: 10.3390/fi16020036
Authors: Viktor Masalskyi Dominykas Čičiurėnas Andrius Dzedzickis Urtė Prentice Gediminas Braziulis Vytautas Bučinskas
This paper addresses the challenge of synchronizing data acquisition from independent sensor systems in a local network. The network comprises microcontroller-based systems that collect data from physical sensors used for monitoring human gait. The synchronized data are transmitted to a PC or cloud storage through a central controller. The research proposes a solution for effectively synchronizing data acquisition using two alternative synchronization approaches. Additionally, it explores techniques to handle varying amounts of data from different sensor types. The experimental research validates the proposed solution by providing trial results and stability evaluations and comparing them to the human-gait-monitoring system requirements. The alternative data-transmission method was used to compare the data-transmission quality and data-loss rate. The developed algorithm allows data acquisition from six pressure sensors and two accelerometer/gyroscope modules, ensuring a 24.6 Hz sampling rate and 1 ms synchronization accuracy. The obtained results prove the algorithm’s suitability for human-gait monitoring during regular activity. The paper concludes with discussions and key insights derived from the obtained results.
]]>Future Internet doi: 10.3390/fi16010035
Authors: Muhammad Sher Ramzan Anees Asghar Ata Ullah Fawaz Alsolami Iftikhar Ahmad
The Internet of Things (IoT) consists of complex and dynamically aggregated elements or smart entities that need decentralized supervision for data exchange across different networks. The artificial bee colony (ABC) algorithm is utilized in optimization problems for big data in IoT, cloud, and central repositories. The main limitation during the searching mechanism is that every single food site is compared with every other food site to find the best solution in the neighboring regions. In this way, an extensive number of redundant comparisons is required, which results in a slower convergence rate, greater time consumption, and increased delays. This paper presents a solution to optimize search operations with an enhanced ABC (E-ABC) approach. The proposed algorithm compares the best food sites with neighboring sites to exclude poor sources. It achieves an efficient mechanism, where the number of redundant comparisons is decreased during the searching mechanism of the employed bee phase and the onlooker bee phase. The proposed algorithm is implemented in a replication scenario to validate its performance in terms of the mean objective function values for different functions, as well as the probability of availability and the response time. The results prove the superiority of the E-ABC in contrast to its counterparts.
]]>Future Internet doi: 10.3390/fi16010034
Authors: Jui-Chuan Liu Heng-Xiao Chi Ching-Chun Chang Chin-Chen Chang
Information has been uploaded and downloaded through the Internet, day in and day out, ever since we immersed ourselves in it. Data security has become an area demanding high attention, and one of the most efficient techniques for protecting data is data hiding. Recent studies have shown that the indices of a codebook can be reordered to hide secret bits. The hiding capacity of the codeword index reordering scheme increases when the size of the codebook increases. Since the codewords in the codebook are not modified, the visual performance of compressed images is retained. We propose a novel scheme that uses the fundamental principle of the codeword index reordering technique to hide secret data in encrypted images. Our experimental results show that the obtained embedding capacity of 197,888 bits is larger than that of other state-of-the-art schemes. Secret data can be extracted when a receiver owns the data hiding key, and the image can be recovered when a receiver owns the encryption key.
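The bit-carrying principle behind codeword index reordering can be illustrated with index pairs: which member of a pair appears in the stream encodes one secret bit. Note that in the actual scheme the codebook itself is reordered, so the decoded pixels are unchanged; the toy sketch below only shows the pairing logic on an index stream and is not the authors’ exact embedding:

```python
# Toy illustration of the index-reordering principle: each VQ index is paired
# with a partner (2k, 2k+1), and which member of the pair is emitted encodes
# one secret bit. Simplified reading, not the authors' exact scheme.

def embed(indices, bits):
    stego, bit_iter = [], iter(bits)
    for idx in indices:
        b = next(bit_iter, None)
        if b is None:
            stego.append(idx)          # payload exhausted: keep index unchanged
        else:
            base = idx - (idx % 2)     # start of this index's pair
            stego.append(base + b)     # chosen pair member = secret bit
    return stego

def extract(stego, n_bits):
    return [idx % 2 for idx in stego[:n_bits]]

indices = [12, 7, 40, 3, 25, 25]       # toy VQ index stream
secret = [1, 0, 1, 1]
stego = embed(indices, secret)
print(stego)                           # [13, 6, 41, 3, 25, 25]
print(extract(stego, len(secret)))     # [1, 0, 1, 1]
```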
]]>Future Internet doi: 10.3390/fi16010033
Authors: Spyridon Daousis Nikolaos Peladarinos Vasileios Cheimaras Panagiotis Papageorgas Dimitrios D. Piromalis Radu Adrian Munteanu
This paper highlights the crucial role of wireless sensor networks (WSNs) in the surveillance and administration of critical infrastructures (CIs), contributing to their reliability, security, and operational efficiency. It starts by detailing the international significance and structural aspects of these infrastructures, notes the market trend in recent years toward the gradual development of wireless networks for industrial applications, and proceeds to categorize WSNs and examine the protocols and standards of WSNs in demanding environments like critical infrastructures, drawing on the recent literature. This review concentrates on the protocols and standards utilized in WSNs for critical infrastructures, and it concludes by identifying a notable gap in the literature concerning quality standards for equipment used in such infrastructures.
]]>Future Internet doi: 10.3390/fi16010032
Authors: Hassan Khazane Mohammed Ridouani Fatima Salahdine Naima Kaabouch
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks.
]]>Future Internet doi: 10.3390/fi16010031
Authors: Zhengyang Fan Wanru Li Kathryn Blackmond Laskey Kuo-Chu Chang
Phishing attacks represent a significant and growing threat in the digital world, affecting individuals and organizations globally. Understanding the various factors that influence susceptibility to phishing is essential for developing more effective strategies to combat this pervasive cybersecurity challenge. Machine learning has become a prevalent method in the study of phishing susceptibility. Most studies in this area have taken one of two approaches: either they explore statistical associations between various factors and susceptibility, or they use complex models such as deep neural networks to predict phishing behavior. However, these approaches have limitations in terms of providing practical insights for individuals to avoid future phishing attacks and delivering personalized explanations regarding their susceptibility to phishing. In this paper, we propose a machine-learning approach that leverages explainable artificial intelligence techniques to examine the influence of human and demographic factors on susceptibility to phishing attacks. The machine learning model yielded an accuracy of 78%, with a recall of 71%, and a precision of 57%. Our analysis reveals that psychological factors such as impulsivity and conscientiousness, as well as appropriate online security habits, significantly affect an individual’s susceptibility to phishing attacks. Furthermore, our individualized case-by-case approach offers personalized recommendations on mitigating the risk of falling prey to phishing exploits, considering the specific circumstances of each individual.
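The explainability angle can be sketched by training a classifier on human-factor features and attributing its predictions. In the fragment below, the features and data are synthetic assumptions, and sklearn’s permutation importance stands in for the paper’s explainable-AI tooling:

```python
# Sketch: classifier on human factors + feature attribution (synthetic data;
# permutation importance stands in for the paper's XAI analysis).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 600
impulsivity = rng.normal(0, 1, n)
conscientiousness = rng.normal(0, 1, n)
security_habits = rng.normal(0, 1, n)
logit = 1.2 * impulsivity - 0.9 * conscientiousness - 1.0 * security_habits
y = (logit + rng.normal(0, 1, n) > 0).astype(int)     # 1 = fell for the phish
X = np.column_stack([impulsivity, conscientiousness, security_habits])

clf = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in zip(["impulsivity", "conscientiousness", "security habits"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")   # larger = more influence on susceptibility
```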
]]>Future Internet doi: 10.3390/fi16010030
Authors: Jiantao Qu Chunyu Qi He Meng
Within the Shuo Huang Railway Company (Suning, China), the Long-Term Evolution for Railways (LTE-R) network carries core wireless communication services for trains. The communication performance of LTE-R cells directly affects the operational safety of the trains. Therefore, this paper proposes a novel detection method for LTE-R cells with degraded communication performance. Considering that the number of LTE-R cells with degraded communication performance and that of normal cells are extremely imbalanced and that the communication performance indicator data for each cell are sequence data, we propose a feature extraction neural network structure for imbalanced sequences, based on shapelet transformation and a convolutional neural network (CNN). Then, to train the network, we set the optimization objective based on the Fisher criterion. Finally, using a two-stage training method, we obtain a neural network model that can distinguish LTE-R cells with degraded communication performance from normal cells at the feature level. Experiments on a real-world dataset show that the proposed method can realize the accurate detection of LTE-R cells with degraded communication performance and has high practical application value.
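The shapelet-transformation front end can be sketched directly: each sequence becomes a vector of minimum sliding-window distances to a set of shapelets, which a downstream network can then consume. In the toy below the shapelets are hand-picked assumptions; the paper learns features jointly with a Fisher-criterion objective and two-stage training, which the sketch omits:

```python
# Sketch of a shapelet transform: min sliding-window distance to each shapelet.
# Shapelets are hand-picked here; the paper learns features with a CNN.
import numpy as np

def shapelet_distance(series, shapelet):
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

def shapelet_transform(series_batch, shapelets):
    return np.array([[shapelet_distance(s, sh) for sh in shapelets]
                     for s in series_batch])

rng = np.random.default_rng(1)
normal = rng.normal(0, 0.1, (4, 50))                 # healthy KPI sequences
degraded = rng.normal(0, 0.1, (4, 50))
degraded[:, 20:25] -= 1.0                            # injected "drop" pattern

shapelets = [np.full(5, -1.0), np.zeros(5)]          # candidate subsequences
X = shapelet_transform(np.vstack([normal, degraded]), shapelets)
print(X.round(2))   # degraded rows sit close to the "drop" shapelet
```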
]]>Future Internet doi: 10.3390/fi16010029
Authors: Valeriy Ivanov Maxim Tereshonok
The OSI model served as the common network model for years. In ad hoc networks with dynamic topology and difficult radio communication conditions, a gradual departure is taking place from the classical OSI network model, with its clear delineation of layers (physical, data link, network, transport, application), toward the cross-layer approach. The layers of the network model in ad hoc networks strongly influence each other. Thus, the cross-layer approach can improve the performance of an ad hoc network by jointly developing protocols through interaction and collaborative optimization of multiple layers. The existing classification of cross-layer methods is overly complicated because it is based on the whole manifold of network-model layer combinations, regardless of their importance. In this work, we review ad hoc network cross-layer methods, propose a new, useful classification of cross-layer methods, and show future research directions in the development of ad hoc network cross-layer methods. The proposed classification can help to simplify goal-oriented cross-layer protocol development.
]]>Future Internet doi: 10.3390/fi16010028
Authors: Kyle DeMedeiros Chan Young Koh Abdeltawab Hendawi
The Chicago Array of Things (AoT) is a robust dataset taken from over 100 nodes over four years. Each node contains over a dozen sensors. The array contains a series of Internet of Things (IoT) devices with multiple heterogeneous sensors connected to a processing and storage backbone to collect data from across Chicago, IL, USA. The data collected include meteorological data such as temperature, humidity, and heat, as well as chemical data like CO2 concentration, PM2.5, and light intensity. The AoT sensor network is one of the largest open IoT systems available for researchers to utilize its data. Anomaly detection (AD) in IoT and sensor networks is an important tool to ensure that the ever-growing IoT ecosystem is protected from faulty data and sensors, as well as from attacking threats. Interestingly, an in-depth analysis of the Chicago AoT for anomaly detection is rare. Here, we study the viability of the Chicago AoT dataset for anomaly detection by utilizing clustering techniques. We utilized K-Means, DBSCAN, and Hierarchical DBSCAN (HDBSCAN) to determine the viability of labeling an unlabeled dataset at the sensor level. The results show that the clustering algorithm best suited for this task varies based on the density of the anomalous readings and the variability of the data points being clustered; however, at the sensor level, the K-Means algorithm, though simple, is better suited for the task of determining specific, at-a-glance anomalies than the more complex DBSCAN and HDBSCAN algorithms, though it comes with drawbacks.
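At the sensor level, the K-Means-based labeling can be sketched by clustering a univariate reading stream and flagging the minority cluster. The synthetic data, k = 2, and minority-cluster rule below are illustrative assumptions:

```python
# Sketch: per-sensor K-means anomaly flagging via the minority cluster.
# Synthetic readings stand in for an AoT sensor stream; k and the rule assumed.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
normal = rng.normal(20, 2, size=(500, 1))     # e.g. temperature readings
spikes = rng.normal(45, 1, size=(5, 1))       # faulty sensor spikes
X = np.vstack([normal, spikes])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
counts = np.bincount(km.labels_)
minority = np.argmin(counts)                  # the sparse cluster = anomalies
flags = km.labels_ == minority
print("flagged readings:", np.sort(X[flags].ravel()).round(1))
```

Density-based alternatives like DBSCAN/HDBSCAN would instead mark such points as noise, which is where the paper finds the trade-offs lie.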
]]>Future Internet doi: 10.3390/fi16010027
Authors: Xu Feng Mengyang He Lei Zhuang Yanrui Song Rumeng Peng
The space–air–ground integrated network (SAGIN) is formed by the fusion of ground networks and aircraft networks. It breaks through the coverage limitations of terrestrial communication, bringing new opportunities for network communication in remote areas. However, the many heterogeneous devices in SAGIN pose significant challenges in terms of end-to-end resource management, and the limited regional heterogeneous resources also threaten the QoS for users. In this regard, this paper proposes a hierarchical resource management structure for SAGIN, named SAGIN-MEC, based on software-defined networking (SDN), network function virtualization (NFV), and multi-access edge computing (MEC), aiming to facilitate the systematic management of heterogeneous network resources. Furthermore, to minimize operator deployment costs while ensuring the QoS, this paper formulates a resource scheduling optimization model tailored to SAGIN scenarios to minimize energy consumption. Additionally, we propose a deployment algorithm, named DRL-G, based on heuristics and deep reinforcement learning (DRL), aiming to allocate heterogeneous network resources within SAGIN effectively. Experimental results showed that SAGIN-MEC can reduce the end-to-end delay by 6–15 ms compared to the terrestrial edge network and that, compared to other algorithms, the DRL-G algorithm can improve the service request reception rate by up to 20%. In terms of energy consumption, it reduces the average energy consumption by 4.4% compared to the PG algorithm.
]]>Future Internet doi: 10.3390/fi16010026
Authors: Andrea Moreno-Cabanillas Elizabet Castillero-Ostio Antonio Castillo-Esparcia
The communication of organizations with their audiences has undergone changes thanks to the Internet. Non-Governmental Organizations (NGOs), as influential groups, are no exception, as much of their activism takes place through grassroots digital lobbying. The consolidation of Web 2.0 has not only provided social organizations with a new and powerful tool for disseminating information but also brought about significant changes in the relationship between nonprofit organizations and their diverse audiences. This has facilitated and improved interaction between them. The purpose of this article is to analyze the level of interactivity implemented on the websites of leading NGOs worldwide and their presence on social networks, with the aim of assessing whether these influential groups are moving towards more dialogic systems in relation to their audience. The results reveal that NGOs have a high degree of interactivity in the tools used to present and disseminate information on their websites. However, not all maintain the same level of interactivity in the resources available for interaction with Internet users, as very few have high interactivity regarding bidirectional resources. It was concluded that international non-governmental organizations still suffer from certain shortcomings in the strategic management of digital communication on their web platforms, while, on the other hand, a strong presence can be noted on the most popular social networks.
]]>Future Internet doi: 10.3390/fi16010025
Authors: Chinyang Henry Tseng Woei-Jiunn Tsaur Yueh-Mao Shen
In detecting large-scale attacks, deep neural networks (DNNs) are an effective approach when based on high-quality training data samples. Feature selection and feature extraction are the primary approaches to data quality enhancement for high-accuracy intrusion detection. However, the root causes of their enhancements are usually only weakly related to the differences between normal and attack behaviors in the data samples. Thus, we propose a Classification Tendency Difference Index (CTDI) model for feature selection and extraction in intrusion detection. The CTDI model consists of three indexes: Classification Tendency Frequency Difference (CTFD), Classification Tendency Membership Difference (CTMD), and Classification Tendency Distance Difference (CTDD). In the dataset, each feature has many feature values (FVs). In each FV, the normal and attack samples indicate the FV's classification tendency, and CTDI captures the classification tendency differences between the normal and attack samples. CTFD is the frequency difference between the normal and attack samples. By employing fuzzy C-means (FCM) to establish the normal and attack clusters, CTMD is the membership difference between the clusters, and CTDD is the distance difference between the cluster centers. CTDI calculates the index score for each FV and sums the scores of all FVs in a feature to obtain the feature score for each of the three indexes. CTDI adopts an autoencoder for feature extraction to generate new features from the dataset and calculates the three index scores for the new features. CTDI sorts the original and new features for each of the three indexes to select the best features. The selected CTDI features exhibit the best classification tendency differences between normal and attack samples. The experimental results demonstrate that the CTDI features achieve better detection accuracy, as classified by a DNN, for the Aegean WiFi Intrusion Dataset than related works, and the detection enhancements stem from the improved classification tendency differences in the CTDI features.
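[Editor's note] The frequency-difference idea behind CTFD can be sketched as follows. This reflects our reading of the abstract only: per feature value (FV), compare how often the FV occurs among normal versus attack samples, and sum over FVs to score the feature. The toy data and the normalization are assumptions and may differ from the paper's definition.

# Sketch of the CTFD frequency-difference idea, as read from the abstract.
import pandas as pd

df = pd.DataFrame({                      # toy samples (assumed data)
    "proto": ["tcp", "tcp", "udp", "udp", "icmp", "tcp"],
    "label": ["normal", "attack", "normal", "normal", "attack", "attack"],
})

def ctfd_score(df, feature, label_col="label"):
    # Relative frequency of each FV within the normal and attack classes.
    norm = df[df[label_col] == "normal"][feature].value_counts(normalize=True)
    atk = df[df[label_col] == "attack"][feature].value_counts(normalize=True)
    fvs = norm.index.union(atk.index)
    # Feature score: sum of per-FV frequency differences.
    return sum(abs(norm.get(fv, 0.0) - atk.get(fv, 0.0)) for fv in fvs)

print(ctfd_score(df, "proto"))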
]]>Future Internet doi: 10.3390/fi16010024
Authors: Sana Rasheed Soulla Louca
A national population census is instrumental in offering a holistic view of a country’s progress, directly influencing policy formulation and strategic planning. Potential flaws in the census system can have detrimental impacts on national development. Our prior research has pinpointed various deficiencies in current census methodologies, including inadequate population coverage, racial and ethnic discrimination, and challenges related to data privacy, security, and distribution. This study aims to address the “missing persons” challenge in the national population and housing census system. The integration of blockchain technology emerges as a promising solution for addressing these identified issues, enhancing the integrity and efficacy of census processes. Building upon our earlier research, which examined the national census system of Pakistan, we propose an architecture design incorporating Hyperledger Fabric and perform system sizing for the entire national count. The Blockchain-Based Implementation of National Census as a Supplementary Instrument for Enhanced Transparency, Accountability, Privacy, and Security (BINC-TAPS) seeks to provide a robust, transparent, scalable, immutable, and tamper-proof solution for conducting national population and housing censuses, while also fostering socio-economic advancements. This paper presents a comprehensive overview of our research, with a primary focus on the implementation of the blockchain-based proposed solution, including prototype testing and the resulting outcomes.
]]>Future Internet doi: 10.3390/fi16010023
Authors: Alex Galis
This paper presents a comprehensive set of design methods for making future Internet networking fully energy-aware and for sustainably minimizing and managing its energy footprint. It includes (a) 41 energy-aware design methods, grouped into Service Operations Support, Management Operations Support, Compute Operations Support, Connectivity/Forwarding Operations Support, Traffic Engineering Methods, Architectural Support for Energy Instrumentation, and Network Configuration; and (b) energy consumption models and energy metrics, which are identified and specified. It also specifies the requirements for energy-defined network compliance, which include energy-measurable network devices with support for several control messages: registration, discovery, provisioning, discharge, monitoring, synchronization, flooding, performance, and pushback.
]]>Future Internet doi: 10.3390/fi16010022
Authors: Mahmoud Elkhodr Samiya Khan Ergun Gide
In the modern digital landscape of the Internet of Things (IoT), data interoperability and heterogeneity present critical challenges, particularly with the increasing complexity of IoT systems and networks. Addressing these challenges, while ensuring data security and user trust, is pivotal. This paper proposes a novel Semantic IoT Middleware (SIM) for healthcare. The architecture of this middleware comprises four main processes: data generation, semantic annotation, security encryption, and semantic operations. The data generation module facilitates seamless data and event sourcing, while the semantic annotation component assigns structured vocabulary for uniformity. SIM adopts blockchain technology to provide enhanced data security, and its layered approach ensures robust interoperability and intuitive user-centric operations for IoT systems. The security encryption module offers data protection, and the semantic operations module underpins data processing and integration. A distinctive feature of this middleware is its proficiency in service integration, leveraging semantic descriptions augmented by user feedback. Additionally, SIM integrates artificial intelligence (AI) feedback mechanisms to continuously refine and optimise the middleware’s operational efficiency.
]]>Future Internet doi: 10.3390/fi16010021
Authors: G. G. Md. Nawaz Ali Mohammad Nazmus Sadat Md Suruz Miah Sameer Ahmed Sharief Yun Wang
Recently, the Third Generation Partnership Project (3GPP) introduced new radio (NR) technology for vehicle-to-everything (V2X) communication to enable delay-sensitive and bandwidth-hungry applications in vehicular communication. The NR system is strategically crafted to complement the existing long-term evolution (LTE) cellular-vehicle-to-everything (C-V2X) infrastructure, particularly to support advanced services such as the operation of automated vehicles. It is widely anticipated that the fifth-generation (5G) NR system will surpass LTE C-V2X by achieving superior performance in scenarios characterized by high throughput, low latency, and enhanced reliability, especially in the context of congested traffic conditions and a diverse range of vehicular applications. This article provides a comprehensive literature review on vehicular communications from dedicated short-range communication (DSRC) to NR V2X. It then delves into a detailed examination of the challenges and opportunities inherent in NR V2X technology. Finally, we elucidate the process of creating and analyzing an open-source 5G NR V2X module in network simulator-3 (ns-3) and demonstrate NR V2X performance in terms of different key performance indicators across diverse operational scenarios.
]]>Future Internet doi: 10.3390/fi16010020
Authors: Lidong Liu Shidang Li Mingsheng Wei Jinsong Xu Bencheng Yu
Network energy resources are limited in communication systems, which may cause energy shortages in mobile devices at the user end. Active Reconfigurable Intelligent Surfaces (A-RIS) not only have phase-modulation properties but also enhance signal strength; thus, they are expected to solve the energy shortage problem experienced at the user end in 6G communications. In this paper, a resource allocation algorithm that maximizes the sum of harvested energy is proposed for an active RIS-assisted Simultaneous Wireless Information and Power Transfer (SWIPT) system, to address the low harvested-energy performance experienced by users due to multiplicative fading. First, in the active RIS-assisted SWIPT system, which uses a power-splitting architecture to achieve simultaneous information and energy transmission, the joint resource allocation problem is constructed with the objective of maximizing the sum of the energy collected by all users, under constraints on the signal-to-noise ratio, the active RIS and base station transmit powers, and the power-splitting factors. Second, the considered non-convex problem is transformed into a standard convex problem by using alternating optimization, semi-definite relaxation, successive convex approximation, and penalty function methods, and an alternating iterative algorithm for energy harvesting is proposed. The proposed algorithm splits the problem into two sub-problems, optimizes each iteratively, and then alternates between them to obtain the overall optimal solution. Simulation results show that the proposed algorithm improves performance by 45.2% and 103.7% compared to the passive RIS algorithm and the traditional without-RIS algorithm, respectively, at the maximum permissible transmit power of 45 dBm at the base station.
]]>Future Internet doi: 10.3390/fi16010019
Authors: Chen Zhang Celimuge Wu Min Lin Yangfei Lin William Liu
In advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. However, the inherent complexity and variability of MEC networks pose significant challenges for computational offloading decisions. To tackle this problem, we propose a proximal policy optimization (PPO)-based Device-to-Device (D2D)-assisted computation offloading and resource allocation scheme. We construct a realistic MEC network environment and develop a Markov decision process (MDP) model that minimizes time loss and energy consumption. The integration of a D2D communication-based offloading framework allows for collaborative task offloading between end devices and MEC servers, enhancing both resource utilization and computational efficiency. The MDP model is solved using the PPO algorithm in deep reinforcement learning to derive an optimal policy for offloading and resource allocation. Extensive comparative analysis with three benchmark approaches confirms our scheme’s superior performance in latency, energy consumption, and algorithmic convergence, demonstrating its potential to improve MEC network operations in the context of emerging 5G and beyond technologies.
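[Editor's note] A minimal sketch of an offloading MDP in this spirit is given below. The environment, action set, state, and cost weights are illustrative assumptions rather than the paper's model; a PPO implementation such as stable-baselines3 could then be trained on it.

# Toy offloading MDP: the action chooses local execution, a D2D neighbor,
# or the MEC server; the reward is a weighted negative sum of latency and
# energy. All numbers are assumptions for illustration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class OffloadEnv(gym.Env):
    def __init__(self):
        self.observation_space = spaces.Box(0.0, 1.0, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Discrete(3)     # 0=local, 1=D2D, 2=MEC
        self._latency = np.array([0.9, 0.5, 0.3])  # assumed per-action latency
        self._energy = np.array([0.8, 0.4, 0.6])   # assumed per-action energy

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._task = self.np_random.random(2).astype(np.float32)  # size, deadline
        return self._task, {}

    def step(self, action):
        cost = 0.6 * self._latency[action] + 0.4 * self._energy[action]
        obs, _ = self.reset()                      # draw the next task
        return obs, -float(cost), False, False, {}

# A PPO agent, e.g. stable_baselines3.PPO("MlpPolicy", OffloadEnv()),
# could then learn the offloading policy on this environment.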
]]>Future Internet doi: 10.3390/fi16010018
Authors: Irina Kochetkova Kseniia Leonteva Ibram Ghebrial Anastasiya Vlaskina Sofia Burtseva Anna Kushchazli Konstantin Samouylov
Fifth-generation (5G) networks provide network slicing capabilities, enabling the deployment of multiple logically isolated network slices on a single infrastructure platform to meet the specific requirements of users. This paper focuses on modeling and analyzing resource capacity planning and reallocation for network slicing, specifically between two providers transmitting elastic traffic, such as during web browsing. A controller determines the need for resource reallocation and plans new resource capacity accordingly. A Markov decision process is employed in a controllable queuing system to find the optimal resource capacity for each provider. The reward function incorporates three network slicing principles: maximum matching for equal resource partitioning, maximum share of signals resulting in resource reallocation, and maximum resource utilization. To efficiently compute the optimal resource capacity planning policy, we developed an iterative algorithm that begins with maximum resource utilization as the starting point. Through numerical demonstrations, we show the optimal policy and metrics of resource reallocation for two services: web browsing and bulk data transfer. The results highlight fast convergence within three iterations and the effectiveness of the balanced three-principle approach in resource capacity planning for 5G network slicing.
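[Editor's note] To illustrate the kind of computation behind such optimal planning, the sketch below runs textbook value iteration on a toy MDP. The states, actions, transitions, and rewards are random assumptions, not the paper's queueing model.

# Value iteration on a small random MDP: V(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.random((n_states, n_actions))                             # R[s, a]

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * (P @ V)        # Q[s, a] for all state-action pairs
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)          # greedy action per state
print("optimal actions per state:", policy)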
]]>Future Internet doi: 10.3390/fi16010017
Authors: Mizuki Asano Takumi Miyoshi Taku Yamazaki
Smart home environments, which consist of various Internet of Things (IoT) devices to support and improve our daily lives, are expected to be widely adopted in the near future. Owing to a lack of awareness regarding the risks associated with IoT devices and the challenges of replacing or updating their firmware, adequate security measures have not been implemented. Instead, IoT device identification methods based on traffic analysis have been proposed. Since conventional methods process and analyze traffic data simultaneously, bias in the occurrence rate of traffic patterns has a negative impact on the analysis results. Therefore, this paper proposes an IoT traffic analysis and device identification method based on two-stage clustering in smart home environments. In the first stage, traffic patterns are extracted by clustering IoT traffic at a local gateway located in each smart home and subsequently sent to a cloud server. In the second stage, the cloud server extracts common traffic units to represent IoT traffic by clustering the patterns obtained in the first stage. Two-stage clustering can reduce the impact of data bias, because each cluster extracted in the first clustering is summarized as one value and used as a single data point in the second clustering, regardless of the occurrence rate of traffic patterns. Through the proposed two-stage clustering method, IoT traffic is transformed into time series vector data that consist of common unit patterns and can be identified based on time series representations. Experiments using public IoT traffic datasets indicated that the proposed method could identify 21 IoT devices with an accuracy of 86.9%. Therefore, we conclude that traffic analysis using two-stage clustering is effective for improving clustering quality, device identification, and implementation in distributed environments.
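[Editor's note] The two-stage idea can be sketched as follows: cluster traffic locally per home, keep only the centroids, and cluster those centroids on the server so that each local pattern counts once regardless of its occurrence rate. Feature dimensions and cluster counts below are assumptions for illustration.

# Two-stage clustering sketch with simulated per-home traffic features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
homes = [rng.random((200, 4)) for _ in range(5)]   # per-home feature vectors

# Stage 1: local clustering at each gateway; only centroids are uploaded.
centroids = np.vstack([
    KMeans(n_clusters=8, n_init=10, random_state=0).fit(h).cluster_centers_
    for h in homes
])

# Stage 2: server-side clustering of the centroids into common traffic units.
units = KMeans(n_clusters=6, n_init=10, random_state=0).fit(centroids)
print("common traffic units:", units.cluster_centers_.shape)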
]]>Future Internet doi: 10.3390/fi16010016
Authors: Javid Misirli Emiliano Casalicchio
The Internet of Things (IoT) uptake has brought a paradigm shift in application deployment. Indeed, IoT applications are not centralized in cloud data centers; rather, computation and storage are moved close to the consumers, creating a computing continuum between the edge of the network and the cloud. This paradigm shift is called fog computing, a concept introduced by Cisco in 2012. Scheduling applications in this decentralized, heterogeneous, and resource-constrained environment is challenging. The task scheduling problem in fog computing has been widely explored and addressed using many approaches, from traditional operational research to heuristics and machine learning. This paper analyzes the literature on task scheduling in fog computing published in the last five years to classify the criteria used for decision-making and the techniques used to solve the task scheduling problem. We propose a taxonomy of task scheduling algorithms, and we identify the research gaps and challenges.
]]>Future Internet doi: 10.3390/fi16010015
Authors: Fouad Achkouty Richard Chbeir Laurent Gallon Elio Mansour Antonio Corral
The proliferation of sensor and actuator devices in Internet of Things (IoT) networks has garnered significant attention in recent years. However, the increasing number of IoT devices and the corresponding resources has introduced various challenges, particularly in indexing and querying. In essence, resource management has become more complex due to the non-uniform distribution of related devices and their limited capacity. Additionally, the diverse demands of users have further complicated resource indexing. This paper proposes a distributed resource indexing and querying algorithm for large connected environments, specifically designed to address the challenges posed by IoT networks. The algorithm considers both the limited device capacity and the non-uniform distribution of devices, acknowledging that devices cannot store information about the entire environment. Furthermore, it places special emphasis on uncovered zones to reduce the response time of queries related to these areas. Moreover, the algorithm introduces different types of queries to cater to various user needs, including fast queries and urgent queries suitable for different scenarios. The effectiveness of the proposed approach was evaluated through extensive experiments covering index creation, coverage, and query execution, yielding promising and insightful results.
]]>Future Internet doi: 10.3390/fi16010014
Authors: Omar Serghini Hayat Semlali Asmaa Maali Abdelilah Ghammaz Salvatore Serrano
Spectrum sensing is an essential function of cognitive radio technology that can enable the reuse of available radio resources by so-called secondary users without creating harmful interference with licensed users. The application of machine learning techniques to spectrum sensing has attracted considerable interest in the literature. In this contribution, we study cooperative spectrum sensing in a cognitive radio network where multiple secondary users cooperate to detect a primary user. We introduce multiple cooperative spectrum sensing schemes based on a deep neural network, incorporating a one-dimensional convolutional neural network and a long short-term memory network. The primary objective of these schemes is to effectively learn the activity patterns of the primary user. The scenario of an imperfect transmission channel is considered for service messages to demonstrate the robustness of the proposed model. The performance of the proposed methods is evaluated using the receiver operating characteristic curve, the probability of detection at various SNR levels, and the computational time. The simulation results confirm the effectiveness of the bidirectional long short-term memory-based method, which surpasses the other proposed schemes and current state-of-the-art methods in terms of detection probability while ensuring a reasonable online detection time.
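[Editor's note] A rough sketch of a 1-D CNN plus bidirectional LSTM detector of the kind described is shown below. The layer sizes, input shape, and training setup are assumptions, not the paper's architecture.

# Conv1D + BiLSTM binary detector: outputs P(primary user active).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 2)),               # assumed: 128 I/Q samples
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),      # temporal activity patterns
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()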
]]>Future Internet doi: 10.3390/fi16010013
Authors: Herve M. Kabamba Matthew Khouzam Michel R. Dagenais
Tracing serves as a key method for evaluating the performance of microservices-based architectures, which are renowned for their scalability, resource efficiency, and high availability. Despite their advantages, these architectures often pose unique debugging challenges that necessitate trade-offs, including the burden of instrumentation overhead. With Node.js emerging as a leading development environment recognized for its rapidly growing ecosystem, there is a pressing need for innovative performance debugging approaches that reduce the telemetry data collection effort and the overhead incurred by the environment’s instrumentation. In response, we introduce a new approach designed for transparent tracing and performance debugging of microservices in cloud settings. This approach is centered around our newly developed Internal Transparent Tracing and Context Reconstruction (ITTCR) technique. ITTCR is adept at correlating internal metrics from various distributed trace files to reconstruct the intricate execution contexts of microservices operating in a Node.js environment. Our method achieves transparency by directly instrumenting the Node.js virtual machine, enabling the collection and analysis of trace events in a transparent manner. This process facilitates the creation of visualization tools, enhancing the understanding and analysis of microservice performance in cloud environments. Compared to other methods, our approach incurs an overhead of approximately 5% on the system for the trace collection infrastructure, while exhibiting minimal utilization of system resources during analysis execution. Experiments demonstrate that our technique scales well to very large trace files containing huge numbers of events and performs analyses within acceptable timeframes.
]]>Future Internet doi: 10.3390/fi16010012
Authors: Xiu Li Aron Henriksson Martin Duneld Jalal Nouri Yongchao Wu
Educational content recommendation is a cornerstone of AI-enhanced learning. In particular, to facilitate navigating the diverse learning resources available on learning platforms, methods are needed for automatically linking learning materials, e.g., in order to recommend textbook content based on exercises. Such methods are typically based on semantic textual similarity (STS) and the use of embeddings for text representation. However, it remains unclear what types of embeddings should be used for this task. In this study, we carry out an extensive empirical evaluation of embeddings derived from three different types of models: (i) static embeddings trained using a concept-based knowledge graph, (ii) contextual embeddings from a pre-trained language model, and (iii) contextual embeddings from a large language model (LLM). In addition to evaluating the models individually, various ensembles are explored based on different strategies for combining two models in an early vs. late fusion fashion. The evaluation is carried out using digital textbooks in Swedish for three different subjects and two types of exercises. The results show that using contextual embeddings from an LLM leads to superior performance compared to the other models, and that there is no significant improvement when combining these with static embeddings trained using a knowledge graph. When using embeddings derived from a smaller language model, however, it helps to combine them with knowledge graph embeddings. The performance of the best-performing model is high for both types of exercises, resulting in a mean Recall@3 of 0.96 and 0.95 and a mean MRR of 0.87 and 0.86 for quizzes and study questions, respectively, demonstrating the feasibility of using STS based on text embeddings for educational content recommendation. The ability to link digital learning materials in an unsupervised manner—relying only on readily available pre-trained models—facilitates the development of AI-enhanced learning.
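[Editor's note] A minimal sketch of STS-based linking and its Recall@3 evaluation follows. The embeddings here are random stand-ins for the outputs of the models the paper compares, and the ground-truth set is assumed.

# Rank textbook passages by cosine similarity to an exercise embedding and
# check whether a known-relevant passage appears in the top 3 (Recall@3).
import numpy as np

rng = np.random.default_rng(0)
passages = rng.normal(size=(100, 384))            # assumed embedding dim
exercise = rng.normal(size=(384,))
relevant = {17}                                   # assumed ground truth

def cosine(a, B):
    # Cosine similarity of vector a against each row of B.
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a))

top3 = np.argsort(-cosine(exercise, passages))[:3]
recall_at_3 = len(relevant & set(top3.tolist())) / len(relevant)
print("Recall@3 =", recall_at_3)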
]]>Future Internet doi: 10.3390/fi16010011
Authors: Qiang Liu Rui Han Yang Li
Idle bandwidth resources are inefficiently distributed among different users. Currently, the utilization of user bandwidth resources mostly relies on traditional IP networks, implementing the relevant techniques at the application layer, which creates scalability issues and additional system overheads. Information-Centric Networking (ICN), based on the idea of separating identifiers from locators, offers the potential to aggregate idle bandwidth resources from a network-layer perspective. This paper proposes a method for utilizing user bandwidth resources in ICN; specifically, we treat the use of user bandwidth resources as a service and assign it service IDs (identifiers), and when network congestion occurs (i.e., when network nodes are overloaded), traffic can be routed to the user side for forwarding through the ID/NA (Network Address) cooperative routing mechanism of ICN, thereby improving the scalability of ICN transmission and the utilization of underlying network resources. To enhance users' willingness to contribute idle bandwidth resources, we establish a secure and trustworthy bandwidth trading market using blockchain technology. We also design an incentive mechanism based on the Proof-of-Network-Contribution (PoNC) consensus algorithm; users can “mine” by forwarding packets. The experimental results show that utilizing idle bandwidth can significantly improve the scalability of ICN transmission under the experimental conditions, bringing a maximum throughput improvement of 19.4% and reducing the packet loss rate. Compared with existing methods, using ICN technology to aggregate idle bandwidth for network transmission achieves more stable and lower latency and brings a maximum utilization improvement of 13.7%.
]]>Future Internet doi: 10.3390/fi16010010
Authors: Emanuele Santonicola Ennio Andrea Adinolfi Simone Coppola Francesco Pascale
Nowadays, a vehicle can contain from 20 to 100 electronic control units (ECUs), which are responsible for ordering, controlling, and monitoring all the components of the vehicle itself. Each of these units can also send and receive information to and from other units on the network or externally. For most vehicles, the controller area network (CAN) is the main communication protocol and system used to build their internal network. Technological development, the growing integration of devices, and the numerous advances in the field of connectivity have allowed the vehicle to become connected, and the flow of information exchanged between the various ECUs becomes increasingly important and varied. Furthermore, the vehicle itself is capable of exchanging information with other vehicles, with the surrounding environment, and with the Internet. As shown by the CARDIAN project, this type of innovation allows the user an increasingly safe and varied driving experience, but at the same time, it introduces a series of vulnerabilities and dangers due to the connection itself. The job of making the vehicle safe therefore becomes critical. In recent years, it has been demonstrated in multiple ways how easy it is to compromise the safety of a vehicle and its passengers by injecting malicious messages into the CAN network present inside the vehicle itself. The purpose of this article is the construction of a system that, integrated within the vehicle network, is able to effectively recognize any type of intrusion and tampering.
]]>Future Internet doi: 10.3390/fi16010009
Authors: Nasour Bagheri Ygal Bendavid Masoumeh Safkhani Samad Rostampour
A smart grid is an electricity network that uses advanced technologies to facilitate the exchange of information and electricity between utility companies and customers. Although most of the technologies involved in such grids have reached maturity, smart meters—as connected devices—introduce new security challenges. To overcome this significant obstacle to grid modernization, safeguarding privacy has emerged as a paramount concern. In this paper, we begin by evaluating the security levels of recently proposed authentication methods for smart meters. Subsequently, we introduce an enhanced protocol named PPSG, designed for smart grids, which incorporates physical unclonable functions (PUFs) and an elliptic curve cryptography (ECC) module to address the vulnerabilities identified in previous approaches. Our security analysis, utilizing a real-or-random (RoR) model, demonstrates that PPSG effectively mitigates the weaknesses found in prior methods. To assess the practicality of PPSG, we conduct simulations using an Arduino UNO board, measuring computation, communication, and energy costs. Our results, including a processing time of 153 ms, a communication cost of 1376 bits, and an energy consumption of 13.468 mJ, align with the requirements of resource-constrained devices within smart grids.
]]>Future Internet doi: 10.3390/fi16010008
Authors: Patrick Toman Nalini Ravishanker Nathan Lally Sanguthevar Rajasekaran
With the advent of the “Internet of Things” (IoT), insurers are increasingly leveraging remote sensor technology in the development of novel insurance products and risk management programs. For example, Hartford Steam Boiler’s (HSB) IoT freeze loss program uses IoT temperature sensors to monitor indoor temperatures in locations at high risk of water-pipe burst (freeze loss), with the goal of reducing insurance losses via real-time monitoring of the temperature data streams. In the event these monitoring systems detect a potentially risky temperature environment, an alert is sent to the end-insured (business manager, tenant, maintenance staff, etc.), prompting them to take remedial action by raising temperatures. In the event that an alert is sent and freeze loss occurs, the firm is not liable for any damages incurred by the event. For the program to be effective, there must be a reliable method of verifying whether customers took appropriate corrective action after receiving an alert. Due to the program’s scale, direct follow-up via text or phone calls is not possible for every alert event. In addition, direct feedback from customers is not necessarily reliable. In this paper, we propose the use of a non-linear, auto-regressive time series model, coupled with the time series intervention analysis method known as causal impact, to evaluate, directly from the IoT temperature streams, whether or not a customer took action. Our method offers several distinct advantages over other methods, as it is (a) readily scalable with continued program growth, (b) entirely automated, and (c) inherently less biased than human labelers or direct customer responses. We demonstrate the efficacy of our method using a sample of actual freeze alert events from the freeze loss program.
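[Editor's note] The intervention-analysis idea can be sketched as fitting an autoregressive model on the pre-alert series, forecasting the post-alert window, and reading a large observed-minus-predicted gap as evidence of corrective action. The model order and data below are illustrative assumptions, not HSB's production model.

# Fit an AR model on pre-alert temperatures, forecast the post-alert
# window, and estimate the intervention effect as the mean gap.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
pre = 5 + np.cumsum(rng.normal(0, 0.1, 96))       # cooling pre-alert series
post = pre[-1] + np.cumsum(np.full(24, 0.5))      # warming after the alert

res = AutoReg(pre, lags=4).fit()
pred = res.predict(start=len(pre), end=len(pre) + len(post) - 1)
effect = (post - pred).mean()                     # average causal effect
print("estimated temperature lift:", round(effect, 2))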
]]>Future Internet doi: 10.3390/fi16010007
Authors: Yadi Zhao Lei Yan Jian Wu Ximing Song
To address the low level of intelligence and the low utilization of logs in current rotary cutting equipment, this paper proposes a digital twin-based system for optimizing the rotary cutting of logs, built on a five-dimensional digital twin model. The system features a log perception platform that captures three-dimensional point cloud data outlining the logs’ contours. Using the Delaunay3D algorithm, the system performs a three-dimensional reconstruction of the log point cloud, constructing a precise digital twin. Feature information is extracted from the point cloud using the least squares method. Processing parameters, determined through the kinematic model, are verified in rotary cutting simulations via Boolean operations. The system’s efficacy has been substantiated through experimental validation, demonstrating its capability to output specific processing schemes for irregular logs and to verify these through simulation. This approach notably improves log recovery rates, decreasing the volume error from 12.8% to 2.7% and the recovery rate error from 23.5% to 5.7%. The results validate the efficacy of the proposed digital twin system in optimizing the rotary cutting process, demonstrating its capability not only to enhance the utilization rate of log resources but also to improve the economic efficiency of the factory, thereby facilitating industrial development.
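[Editor's note] As an illustration of least-squares feature extraction from a cross-section of such a point cloud, the sketch below fits a circle using the standard linearized formulation x² + y² = 2ax + 2by + c. This is a generic technique under assumed data, not necessarily the paper's exact method.

# Least-squares circle fit to a noisy log cross-section.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
x = 1.5 + 0.3 * np.cos(theta) + rng.normal(0, 0.005, 200)
y = -0.7 + 0.3 * np.sin(theta) + rng.normal(0, 0.005, 200)

# Linear system: x^2 + y^2 = 2ax + 2by + c, with c = r^2 - a^2 - b^2.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
rhs = x**2 + y**2
(a, b0, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
radius = np.sqrt(c + a**2 + b0**2)
print(f"center=({a:.3f}, {b0:.3f}), radius={radius:.3f}")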
]]>Future Internet doi: 10.3390/fi16010006
Authors: Jing Liu Xuesong Hai Keqin Li
Massive amounts of data drive the performance of deep learning models, but in practice, data resources are often highly dispersed and bound by data privacy and security concerns, making it difficult for multiple data sources to share their local data directly. Because data resources are difficult to aggregate effectively, model training lacks support. How data sources can collaborate to aggregate the value of data resources is therefore an important research question. However, existing distributed-collaborative-learning architectures still face serious challenges in collaborating between nodes that lack mutual trust, with security and trust issues seriously affecting the confidence and willingness of data sources to participate in collaboration. Blockchain technology provides trusted distributed storage and computing, and combining it with collaboration between data sources to build trusted distributed-collaborative-learning architectures is a valuable application-oriented research direction. We propose a trusted distributed-collaborative-learning mechanism based on blockchain smart contracts. Firstly, the mechanism uses blockchain smart contracts to define and encapsulate collaborative behaviours, relationships, and norms between distributed collaborative nodes. Secondly, we propose a model-fusion method based on feature fusion, which replaces the direct sharing of local data resources with distributed-model collaborative training and organises distributed data resources for distributed collaboration to improve model performance. Finally, in order to verify the trustworthiness and usability of the proposed mechanism, on the one hand, we implement formal modelling and verification of the smart contract using Coloured Petri Nets and prove that the mechanism satisfies the expected trustworthiness properties by verifying the formal model of the smart contract associated with the mechanism. On the other hand, the model-fusion method based on feature fusion is evaluated on different datasets and collaboration scenarios, and a typical collaborative-learning case is implemented for a comprehensive analysis and validation of the mechanism. The experimental results show that the proposed mechanism can provide a trusted and fair collaboration infrastructure for distributed-collaboration nodes that lack mutual trust and can organise decentralised data resources for collaborative model training to develop effective global models.
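[Editor's note] The feature-fusion step can be sketched as follows: each party trains a local encoder, shares only the learned feature vectors rather than raw data, and a fusion model is trained on their concatenation. The encoders (PCA stand-ins) and data here are toy assumptions, not the paper's models.

# Feature-level fusion across two parties without sharing raw data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_a, X_b = rng.random((300, 20)), rng.random((300, 12))  # two parties' views
y = (X_a[:, 0] + X_b[:, 0] > 1.0).astype(int)            # toy shared label

f_a = PCA(n_components=5).fit_transform(X_a)   # party A's local features
f_b = PCA(n_components=5).fit_transform(X_b)   # party B's local features

fused = np.hstack([f_a, f_b])                  # feature fusion
clf = LogisticRegression().fit(fused, y)
print("fusion model accuracy:", clf.score(fused, y))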
]]>Future Internet doi: 10.3390/fi16010005
Authors: Ryunosuke Masaoka Gia Khanh Tran Jin Nakazato Kei Sakaguchi
Nowadays, wireless communications are ubiquitously available. However, as pervasive as this technology is, there are distinct situations, such as during substantial public events, catastrophic disasters, or unexpected malfunctions of base stations (BSs), where the reliability of these communications may be jeopardized. Such scenarios highlight the vulnerabilities inherent in our current infrastructure. As a result, there is growing interest in establishing temporary networks that offer high-capacity communications and can adaptively shift service locations. To address this gap, this paper investigates the promising avenue of merging two powerful technologies: Unmanned Aerial Vehicles (UAVs) and millimeter-wave (mmWave) transmissions. UAVs, with their ability to be operated remotely and to take flight without being constrained by terrestrial limitations, present a compelling case for becoming the cellular BSs of the future. When integrated with the high-speed data transfer capabilities of mmWave technology, the potential is considerable. To provide a tangible foundation for this hypothesis, we take a hands-on approach and carry out comprehensive experiments using an actual UAV equipped with an mmWave device. Our main objective is to study its radio wave propagation attributes while the UAV is in flight. We then contrast our experimental findings with a rigorous numerical analysis to refine our understanding. This comparative study aims to shed light on the intricacies of wave propagation behaviors within the atmosphere.
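[Editor's note] For context, mmWave propagation measurements are often compared against the free-space path loss (FSPL) baseline, FSPL(dB) = 20 log10(4πdf/c), which is straightforward to compute. The 28 GHz carrier and distances below are assumptions for illustration, not the paper's measurement setup.

# Free-space path loss at an assumed mmWave carrier frequency.
import numpy as np

def fspl_db(distance_m, freq_hz):
    c = 3e8  # speed of light (m/s)
    return 20 * np.log10(4 * np.pi * distance_m * freq_hz / c)

for d in (10, 50, 100):
    print(f"{d:>4} m: {fspl_db(d, 28e9):.1f} dB")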
]]>Future Internet doi: 10.3390/fi16010004
Authors: Goran Bubaš Antonela Čižmešija Andreja Kovačić
After the introduction of the ChatGPT conversational artificial intelligence (CAI) tool in November 2022, there has been rapidly growing interest in the use of such tools in higher education. While the educational uses of some other information technology (IT) tools (including collaboration and communication tools, learning management systems, chatbots, and videoconferencing tools) have frequently been evaluated with respect to technology acceptance and usability, similar evaluations of CAI tools and services like ChatGPT, Bing Chat, and Bard have only recently started to appear in the scholarly literature. In our study, we present a newly developed set of assessment scales related to the usability and user experience of CAI tools when used by university students, as well as the results of an evaluation of these scales specifically for the CAI Bing Chat tool (i.e., Microsoft Copilot). The following scales were developed and evaluated using a convenience sample (N = 126) of higher education students: Perceived Usefulness, General Usability, Learnability, System Reliability, Visual Design and Navigation, Information Quality, Information Display, Cognitive Involvement, Design Appeal, Trust, Personification, Risk Perception, and Intention to Use. For most of these scales, internal consistency (Cronbach's alpha) was in the range from satisfactory to good, which implies their potential usefulness for further studies of related attributes of CAI tools. A stepwise linear regression revealed that the most influential predictors of the Intention to Use Bing Chat (or ChatGPT) in the future were the usability variable Perceived Usefulness and two user experience variables, Trust and Design Appeal. Our study also revealed that students' perceptions of various specific usability and user experience characteristics of Bing Chat were predominantly positive. The evaluated assessment scales could be beneficial in further research that includes other CAI tools like ChatGPT/GPT-4 and Bard.
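[Editor's note] Cronbach's alpha, the internal-consistency statistic reported above, is simple to compute: alpha = k/(k-1) * (1 - sum of item variances / variance of total score). The sketch below uses simulated item responses; the 5-item scale is an assumption, while the N = 126 sample size matches the study.

# Cronbach's alpha for one scale, computed from an (N respondents x k items) matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(126, 1))                       # N = 126, as in the study
responses = latent + rng.normal(0, 0.7, size=(126, 5))   # assumed 5-item scale
print("alpha =", round(cronbach_alpha(responses), 3))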
]]>