Future Internet, Volume 17, Issue 4 (April 2025) – 47 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
26 pages, 2006 KiB  
Article
Edge AI for Real-Time Anomaly Detection in Smart Homes
by Manuel J. C. S. Reis and Carlos Serôdio
Future Internet 2025, 17(4), 179; https://doi.org/10.3390/fi17040179 (registering DOI) - 18 Apr 2025
Abstract
The increasing adoption of smart home technologies has intensified the demand for real-time anomaly detection to improve security, energy efficiency, and device reliability. Traditional cloud-based approaches introduce latency, privacy concerns, and network dependency, making Edge AI a compelling alternative for low-latency, on-device processing. This paper presents an Edge AI-based anomaly detection framework that combines Isolation Forest (IF) and Long Short-Term Memory Autoencoder (LSTM-AE) models to identify anomalies in IoT sensor data. The system is evaluated on both synthetic and real-world smart home datasets, including temperature, motion, and energy consumption signals. Experimental results show that LSTM-AE achieves higher detection accuracy (up to 93.6%) and recall but requires more computational resources. In contrast, IF offers faster inference and lower power consumption, making it suitable for constrained environments. A hybrid architecture integrating both models is proposed to balance accuracy and efficiency, achieving sub-50 ms inference latency on embedded platforms such as Raspberry Pi and NVIDIA Jetson Nano. Optimization strategies such as quantization reduced LSTM-AE inference time by 76% and power consumption by 35%. Adaptive learning mechanisms, including federated learning, are also explored to minimize cloud dependency and enhance data privacy. These findings demonstrate the feasibility of deploying real-time, privacy-preserving, and energy-efficient anomaly detection directly on edge devices. The proposed framework can be extended to other domains such as smart buildings and industrial IoT. Future work will investigate self-supervised learning, transformer-based detection, and deployment in real-world operational settings. Full article
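As a minimal illustration of the lightweight branch of such a hybrid detector (not the authors' implementation), the sketch below applies scikit-learn's Isolation Forest to per-interval smart-home sensor features; the feature layout and contamination value are assumed for demonstration only.

```python
# Minimal sketch (not the paper's code): Isolation Forest as the lightweight
# branch of a hybrid anomaly detector for smart-home sensor readings.
# The feature layout [temp_C, motion_count, energy_Wh] is a hypothetical example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[21.0, 2.0, 120.0], scale=[1.0, 1.0, 15.0], size=(500, 3))
spikes = rng.normal(loc=[35.0, 20.0, 400.0], scale=[2.0, 3.0, 30.0], size=(10, 3))
X = np.vstack([normal, spikes])

# contamination = expected fraction of anomalous windows; kept small for rare events
forest = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
forest.fit(X)

scores = forest.decision_function(X)   # lower score -> more anomalous
labels = forest.predict(X)             # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(labels == -1)} of {len(X)} windows as anomalous")

# In a hybrid setup, windows flagged here could be re-scored by a heavier
# LSTM autoencoder (reconstruction error) before raising an alert.
```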

27 pages, 3907 KiB  
Article
Detecting Disinformation in Croatian Social Media Comments
by Igor Ljubi, Zdravko Grgić, Marin Vuković and Gordan Gledec
Future Internet 2025, 17(4), 178; https://doi.org/10.3390/fi17040178 - 17 Apr 2025
Abstract
The frequency with which fake news or misinformation is published on social networks is constantly increasing. Users of social networks are confronted with many different posts every day, often with sensationalist titles and content of dubious veracity. The problem is particularly common during sensitive social or political situations, such as epidemics of contagious diseases or elections. As such messages can have an impact on democratic processes or cause panic among the population, many countries and the European Commission itself have recently stepped up their activities to combat disinformation campaigns on social networks. Since previous research has shown that there are no tools available to combat disinformation in the Croatian language, we proposed a framework to detect potentially misinforming content in social media comments. The case study was conducted with real public comments published on Croatian Facebook pages. The initial results of this framework are encouraging, as it can successfully detect and classify disinformation content. Full article
(This article belongs to the Collection Information Systems Security)
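The sketch below is an illustrative baseline only, not the paper's framework: a TF-IDF plus logistic-regression classifier for flagging potentially misinforming comments, with a tiny invented labeled sample standing in for the real Facebook data.

```python
# Illustrative baseline (not the paper's pipeline): TF-IDF + logistic regression
# over Croatian social-media comments. The labeled examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Cjepivo je sigurno i testirano.",          # neutral, factual comment
    "Vlada skriva pravu istinu o bolesti!",     # sensationalist claim
    "Rezultati izbora su objavljeni na DIP-u.",
    "Tajni dokumenti dokazuju veliku prevaru!",
]
labels = [0, 1, 0, 1]  # 0 = regular comment, 1 = potentially misinforming

# Character n-grams cope better with Croatian morphology and informal spelling
# than plain word tokens on very small samples.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(comments, labels)
print(model.predict(["Nitko vam ne govori istinu o ovome!"]))
```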

47 pages, 2579 KiB  
Systematic Review
Enhancing Transplantation Care with eHealth: Benefits, Challenges, and Key Considerations for the Future
by Ilaisaane Falevai and Farkhondeh Hassandoust
Future Internet 2025, 17(4), 177; https://doi.org/10.3390/fi17040177 - 17 Apr 2025
Abstract
eHealth has transformed transplantation care by enhancing communication between patients and clinics, supporting self-management, and improving adherence to medical advice. However, existing research on eHealth in transplantation remains fragmented, lacking a comprehensive understanding of its diverse users, associated benefits and challenges, and key considerations for intervention development. This systematic review, conducted following the PRISMA guidelines, analyzed the literature on eHealth in transplantation published between 2018 and September 2023 across multiple databases. A total of 60 studies were included, highlighting benefits such as improved patient engagement, accessibility, empowerment, and cost-efficiency. Three primary categories of barriers were identified: knowledge and access barriers, usability and implementation challenges, and trust issues. Additionally, patient-centered design and readiness were found to be crucial factors in developing effective eHealth solutions. These findings underscore the need for tailored, patient-centric interventions to maximize the potential of eHealth in transplantation care. Moreover, the success of eHealth interventions in transplantation is increasingly dependent on robust networking infrastructure, cloud-based telemedicine systems, and secure data-sharing platforms. These technologies facilitate real-time communication between transplant teams and patients, ensuring continuous care and monitoring. Full article
(This article belongs to the Section Techno-Social Smart Systems)

26 pages, 3269 KiB  
Article
Augmentation and Classification of Requests in Moroccan Dialect to Improve Quality of Public Service: A Comparative Study of Algorithms
by Hajar Zaidani, Rim Koulali, Abderrahim Maizate and Mohamed Ouzzif
Future Internet 2025, 17(4), 176; https://doi.org/10.3390/fi17040176 - 17 Apr 2025
Abstract
Moroccan Law 55.19 aims to streamline administrative procedures, fostering trust between citizens and public administrations. To implement this law effectively and enhance public service quality, it is essential to use the Moroccan dialect to involve a wide range of people by leveraging Natural Language Processing (NLP) techniques customized to its specific linguistic characteristics. It is worth noting that the Moroccan dialect presents a unique linguistic landscape, marked by the coexistence of multiple texts. Though it has emerged as the preferred medium of communication on social media, reaching wide audiences, its perceived difficulty of comprehension remains unaddressed. This article introduces a new approach to addressing these challenges. First, we compiled and processed a dataset of Moroccan dialect requests for public administration documents, employing a new augmentation technique to enhance its size and diversity. Second, we conducted text classification experiments using various machine learning algorithms, ranging from traditional methods to advanced large language models (LLMs), to categorize the requests into three classes. The results indicate promising outcomes, with an accuracy of more than 80% for LLMs. Finally, we propose a chatbot system architecture for deploying the most efficient classification algorithm. This solution also contains a voice assistant system that can contribute to the social inclusion of illiterate people. The article concludes by outlining potential avenues for future research. Full article
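To make the augmentation idea concrete, the sketch below shows simple text augmentation by random word swap and deletion; it only illustrates the general notion of enlarging a small request dataset, and the paper's own augmentation technique may differ. The sample request is a hypothetical Latin-script Darija sentence.

```python
# Sketch of simple text augmentation (random word swap/deletion), shown only to
# illustrate enlarging a small dataset of dialect requests; not the paper's method.
import random

def augment(text: str, n_copies: int = 3, p_delete: float = 0.1, seed: int = 0) -> list[str]:
    """Return n_copies noisy variants of a request by swapping and dropping words."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_copies):
        words = text.split()
        if len(words) > 3:
            i, j = rng.sample(range(len(words)), 2)   # swap two random positions
            words[i], words[j] = words[j], words[i]
        words = [w for w in words if rng.random() > p_delete or len(words) <= 3]
        variants.append(" ".join(words))
    return variants

# Hypothetical Moroccan-dialect request (Latin-script) asking for a birth certificate.
print(augment("bghit nakhod wra9a dyal l3a9d dyal lizdiyada men lbaladiya"))
```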

54 pages, 5836 KiB  
Review
A Survey on Edge Computing (EC) Security Challenges: Classification, Threats, and Mitigation Strategies
by Abdul Manan Sheikh, Md. Rafiqul Islam, Mohamed Hadi Habaebi, Suriza Ahmad Zabidi, Athaur Rahman Bin Najeeb and Adnan Kabbani
Future Internet 2025, 17(4), 175; https://doi.org/10.3390/fi17040175 - 16 Apr 2025
Abstract
Edge computing (EC) is a distributed computing approach to processing data at the network edge, either by the device or a local server, instead of centralized data centers or the cloud. EC proximity to the data source can provide faster insights, shorter response times, and more efficient bandwidth utilization. However, the distributed architecture of EC makes it vulnerable to data security breaches and diverse attack vectors. The edge paradigm has limited availability of resources such as memory and battery power, and it must also contend with heterogeneous hardware, diverse communication protocols, and the difficulty of applying security patches in a timely manner. A significant number of researchers have presented countermeasures for the detection and mitigation of data security threats in an EC paradigm. However, an approach that differs from traditional data security and privacy-preserving mechanisms already used in cloud computing is required. Artificial Intelligence (AI) greatly improves EC security through advanced threat detection, automated responses, and optimized resource management. When combined with Physical Unclonable Functions (PUFs), AI further strengthens data security by leveraging PUFs’ unique and unclonable attributes alongside AI’s adaptive and efficient management features. This paper investigates various edge security strategies and cutting-edge solutions. It presents a comparison between existing strategies, highlighting their benefits and limitations. Additionally, the paper offers a detailed discussion of EC security threats, including their characteristics and the classification of different attack types. The paper also provides an overview of the security and privacy needs of EC, detailing the technological methods employed to address threats. Its goal is to assist future researchers in pinpointing potential research opportunities. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)

40 pages, 6881 KiB  
Article
Distributed Reputation for Accurate Vehicle Misbehavior Reporting (DRAMBR)
by Dimah Almani, Tim Muller and Steven Furnell
Future Internet 2025, 17(4), 174; https://doi.org/10.3390/fi17040174 - 15 Apr 2025
Abstract
Vehicle-to-Vehicle (V2V) communications technology offers enhanced road safety, traffic efficiency, and connectivity. In V2V, vehicles cooperate by broadcasting safety messages to detect and avoid dangerous situations in time and to reduce congestion. However, vehicles might misbehave by creating false information and sharing it with neighboring vehicles, for example, failing to report an observed accident or falsely reporting one when none exists. If other vehicles detect such misbehavior, they can report it. However, false accusations also constitute misbehavior. In disconnected areas with limited infrastructure, the potential for misbehavior increases due to the scarcity of Roadside Units (RSUs) necessary for verifying the truthfulness of communications. In such a situation, identifying malicious behavior using a standard misbehavior management system is ineffective in areas with limited connectivity. This paper presents a novel mechanism, Distributed Reputation for Accurate Misbehavior Reporting (DRAMBR), offering a fully integrated solution that utilizes reputation to enhance the accuracy of the reporting system by identifying misbehavior in rural networks. The system operates in two phases: offline, using the Local Misbehavior Detection Mechanism (LMDM), where vehicles detect misbehavior and store reports locally, and online, where these reports are sent to a central reputation server. DRAMBR aggregates the reports and integrates DBSCAN for clustering spatial and temporal misbehavior reports, Isolation Forest for anomaly detection, and Gaussian Mixture Models for probabilistic classification of reports. Additionally, Random Forest and XGBoost models are combined to improve decision accuracy. DRAMBR distinguishes between honest mistakes, intentional deception, and malicious reporting. Using an existing mechanism, the updated reputation is available even in an offline environment. Through simulations, we evaluate our proposed reputation system’s performance, demonstrating its effectiveness in achieving a reporting accuracy of approximately 98%. The findings highlight the potential of reputation-based strategies to minimize misbehavior and improve the reliability and security of V2V communications, particularly in rural areas with limited infrastructure, ultimately contributing to safer and more reliable transportation systems. Full article
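The sketch below illustrates only the report-aggregation step of such a pipeline (not the DRAMBR implementation): DBSCAN groups misbehavior reports by location and time so that reports about the same incident can be clustered before reputation scoring. Coordinates, timestamps, and the eps value are invented.

```python
# Minimal sketch: clustering misbehavior reports by position and time with DBSCAN
# so reports about the same incident are aggregated before reputation scoring.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Each report: [x_metres, y_metres, time_seconds] (invented values)
reports = np.array([
    [100.0, 205.0,  10.0],
    [102.0, 203.0,  12.0],   # same incident as above
    [ 98.0, 207.0,  11.0],   # same incident
    [950.0, 400.0, 300.0],   # unrelated report elsewhere
    [955.0, 398.0, 305.0],
])

# Scale features so a few metres and a few seconds become comparable distances.
X = StandardScaler().fit_transform(reports)
cluster_ids = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
print(cluster_ids)  # e.g., [0 0 0 1 1]; -1 would mark an isolated (possibly false) report
```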

22 pages, 2235 KiB  
Article
Multimodal Fall Detection Using Spatial–Temporal Attention and Bi-LSTM-Based Feature Fusion
by Jungpil Shin, Abu Saleh Musa Miah, Rei Egawa, Najmul Hassan, Koki Hirooka and Yoichi Tomioka
Future Internet 2025, 17(4), 173; https://doi.org/10.3390/fi17040173 - 15 Apr 2025
Abstract
Human fall detection is a significant healthcare concern, particularly among the elderly, due to its links to muscle weakness, cardiovascular issues, and locomotive syndrome. Accurate fall detection is crucial for timely intervention and injury prevention, which has led many researchers to work on developing effective detection systems. However, existing unimodal systems that rely solely on skeleton or sensor data face challenges such as poor robustness, computational inefficiency, and sensitivity to environmental conditions. While some multimodal approaches have been proposed, they often struggle to capture long-range dependencies effectively. In order to address these challenges, we propose a multimodal fall detection framework that integrates skeleton and sensor data. The system uses a Graph-based Spatial-Temporal Convolutional and Attention Neural Network (GSTCAN) to capture spatial and temporal relationships from skeleton and motion data in stream 1, while a Bi-LSTM with Channel Attention (CA) processes sensor data in stream 2, extracting both spatial and temporal features. The GSTCAN model uses AlphaPose for skeleton extraction, calculates motion between consecutive frames, and applies a graph convolutional network (GCN) with a CA mechanism to focus on relevant features while suppressing noise. In parallel, the Bi-LSTM with CA processes inertial signals, with Bi-LSTM capturing long-range temporal dependencies and CA refining feature representations. The features from both branches are fused and passed through a fully connected layer for classification, providing a comprehensive understanding of human motion. The proposed system was evaluated on the Fall Up and UR Fall datasets, achieving a classification accuracy of 99.09% and 99.32%, respectively, surpassing existing methods. This robust and efficient system demonstrates strong potential for accurate fall detection and continuous healthcare monitoring. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)
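As a rough sketch of the sensor branch only, the PyTorch module below combines a Bi-LSTM with a simple channel-attention gate; layer sizes and the exact attention formulation are assumptions for illustration, not the paper's architecture.

```python
# Rough sketch of a Bi-LSTM with a simple channel-attention gate for inertial
# windows; dimensions and the attention form are illustrative assumptions.
import torch
import torch.nn as nn

class SensorBiLSTMCA(nn.Module):
    def __init__(self, n_channels: int = 6, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        # Channel attention: one weight per inertial channel, from its mean activity.
        self.channel_gate = nn.Sequential(nn.Linear(n_channels, n_channels), nn.Sigmoid())
        self.bilstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) inertial window
        gate = self.channel_gate(x.mean(dim=1))   # (batch, channels)
        x = x * gate.unsqueeze(1)                 # re-weight channels
        out, _ = self.bilstm(x)                   # (batch, time, 2*hidden)
        return self.head(out[:, -1, :])           # classify from last time step

model = SensorBiLSTMCA()
window = torch.randn(8, 100, 6)   # 8 windows, 100 steps, 6 IMU channels
print(model(window).shape)        # torch.Size([8, 2])
```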

18 pages, 620 KiB  
Article
C3: Leveraging the Native Messaging Application Programming Interface for Covert Command and Control
by Efstratios Chatzoglou and Georgios Kambourakis
Future Internet 2025, 17(4), 172; https://doi.org/10.3390/fi17040172 - 14 Apr 2025
Abstract
Traditional command and control (C2) frameworks struggle with evasion, automation, and resilience against modern detection techniques. This paper introduces covert C2 (C3), a novel C2 framework designed to enhance operational security and minimize detection. C3 employs a decentralized architecture, enabling independent victim communication with the C2 server for covert persistence. Its adaptable design supports diverse post-exploitation and lateral movement techniques for optimized results across various environments. Through optimized performance and the use of the native messaging API, C3 agents achieve a demonstrably low detection rate against prevalent Endpoint Detection and Response (EDR) solutions. A publicly available proof-of-concept implementation demonstrates C3’s effectiveness in real-world adversarial simulations, specifically in direct code execution for privilege escalation and lateral movement. Our findings indicate that integrating novel techniques, such as the native messaging API, and a decentralized architecture significantly improves the stealth, efficiency, and reliability of offensive operations. The paper further analyzes C3’s post-exploitation behavior, explores relevant defense strategies, and compares it with existing C2 solutions, offering practical insights for enhancing network security. Full article
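For readers unfamiliar with the transport the paper builds on, the sketch below shows the browser native messaging wire format (each message is UTF-8 JSON preceded by a 4-byte length in native byte order, exchanged over stdin/stdout with an extension). It is a benign echo host for illustration, not the C3 agent.

```python
#!/usr/bin/env python3
# Minimal native-messaging host skeleton: length-prefixed JSON over stdin/stdout.
# Shown only to illustrate the transport; this is a benign echo host.
import json
import struct
import sys

def read_message():
    raw_len = sys.stdin.buffer.read(4)
    if len(raw_len) < 4:
        return None                      # extension closed the pipe
    msg_len = struct.unpack("=I", raw_len)[0]
    return json.loads(sys.stdin.buffer.read(msg_len).decode("utf-8"))

def send_message(obj):
    data = json.dumps(obj).encode("utf-8")
    sys.stdout.buffer.write(struct.pack("=I", len(data)))
    sys.stdout.buffer.write(data)
    sys.stdout.buffer.flush()

if __name__ == "__main__":
    while (msg := read_message()) is not None:
        send_message({"echo": msg})      # echo whatever the extension sent
```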

37 pages, 3696 KiB  
Article
Design Analysis for a Distributed Business Innovation System Employing Generated Expert Profiles, Matchmaking, and Blockchain Technology
by Adrian Alexandrescu, Delia-Elena Bărbuță, Cristian Nicolae Buțincu, Alexandru Archip, Silviu-Dumitru Pavăl, Cătălin Mironeanu and Gabriel-Alexandru Scînteie
Future Internet 2025, 17(4), 171; https://doi.org/10.3390/fi17040171 - 14 Apr 2025
Abstract
Innovation ecosystems often face challenges such as inadequate coordination, insufficient protection of intellectual property, limited access to quality expertise, and inefficient matchmaking between innovators and experts. This paper provides an in-depth design analysis of SPARK-IT, a novel business innovation platform specifically addressing these challenges. The platform leverages advanced AI to precisely match innovators with suitable mentors, supported by a distributed web scraper that constructs expert profiles from reliable sources (e.g., LinkedIn and BrainMap). Data privacy and security are prioritized through robust encryption that restricts sensitive content exclusively to innovators and mentors, preventing unauthorized access even by platform administrators. Additionally, documents are stored encrypted on decentralized storage, with their cryptographic hashes anchored on blockchain to ensure transparency, traceability, non-repudiation, and immutability. To incentivize active participation, SPARK-IT utilizes a dual-token approach comprising reward and reputation tokens. The reward tokens, SparkCoins, are wrapped stablecoins with tangible monetary value, enabling seamless internal transactions and external exchanges. Finally, the paper discusses key design challenges and critical architectural trade-offs and evaluates the socio-economic impacts of implementing this innovative solution. Full article
(This article belongs to the Section Internet of Things)

37 pages, 3006 KiB  
Article
Employing Streaming Machine Learning for Modeling Workload Patterns in Multi-Tiered Data Storage Systems
by Edson Ramiro Lucas Filho, George Savva, Lun Yang, Kebo Fu, Jianqiang Shen and Herodotos Herodotou
Future Internet 2025, 17(4), 170; https://doi.org/10.3390/fi17040170 - 11 Apr 2025
Abstract
Modern multi-tiered data storage systems optimize file access by managing data across a hybrid composition of caches and storage tiers while using policies whose decisions can severely impact the storage system’s performance. Recently, different Machine-Learning (ML) algorithms have been used to model access patterns from complex workloads. Yet, current approaches train their models offline in a batch-based approach, even though storage systems are processing a stream of file requests with dynamic workloads. In this manuscript, we advocate the streaming ML paradigm for modeling access patterns in multi-tiered storage systems as it introduces various advantages, including high efficiency, high accuracy, and high adaptability. Moreover, representative file access patterns, including temporal, spatial, length, and frequency patterns, are identified for individual files, directories, and file formats, and used as features. Streaming ML models are developed, trained, and tested on different file system traces for making two types of predictions: the next offset to be read in a file and the future file hotness. An extensive evaluation is performed with production traces provided by Huawei Technologies, showing that the models are practical, with low memory consumption (<1.3 MB) and low training delay (<1.8 ms per training instance), and can make accurate predictions online (0.98 F1 score and 0.07 MAE on average). Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
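The tiny hand-rolled example below illustrates the streaming-ML idea used here (learn from one request at a time, predict before updating) with an exponentially decayed file-hotness estimate; the paper's actual models and feature set are much richer, and the trace below is invented.

```python
# Hand-rolled illustration of the streaming paradigm: predict, then learn, one
# file request at a time. File names, times, and the half-life are invented.
from collections import defaultdict

class StreamingHotness:
    """Exponentially decayed access-rate estimate per file, updated online."""
    def __init__(self, half_life_s: float = 300.0):
        self.decay = 0.5 ** (1.0 / half_life_s)       # per-second decay factor
        self.score = defaultdict(float)
        self.last_seen = {}

    def predict_one(self, path: str, now_s: float) -> float:
        elapsed = now_s - self.last_seen.get(path, now_s)
        return self.score[path] * (self.decay ** elapsed)

    def learn_one(self, path: str, now_s: float) -> None:
        self.score[path] = self.predict_one(path, now_s) + 1.0
        self.last_seen[path] = now_s

model = StreamingHotness()
trace = [("/a.log", 1.0), ("/a.log", 2.0), ("/b.dat", 3.0), ("/a.log", 600.0)]
for path, t in trace:
    hot_before = model.predict_one(path, t)   # test-then-train, as in streaming ML
    model.learn_one(path, t)
    print(f"{path} at t={t:>5}: hotness before update = {hot_before:.3f}")
```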

29 pages, 1763 KiB  
Article
Energy-Efficient Secure Cell-Free Massive MIMO for Internet of Things: A Hybrid CNN–LSTM-Based Deep-Learning Approach
by Ali Vaziri, Pardis Sadatian Moghaddam, Mehrdad Shoeibi and Masoud Kaveh
Future Internet 2025, 17(4), 169; https://doi.org/10.3390/fi17040169 - 11 Apr 2025
Abstract
The Internet of Things (IoT) has revolutionized modern communication systems by enabling seamless connectivity among low-power devices. However, the increasing demand for high-performance wireless networks necessitates advanced frameworks that optimize both energy efficiency (EE) and security. Cell-free massive multiple-input multiple-output (CF m-MIMO) has emerged as a promising solution for IoT networks, offering enhanced spectral efficiency, low-latency communication, and robust connectivity. Nevertheless, balancing EE and security in such systems remains a significant challenge due to the stringent power and computational constraints of IoT devices. This study employs secrecy energy efficiency (SEE) as a key performance metric to evaluate the trade-off between power consumption and secure communication efficiency. By jointly considering energy consumption and secrecy rate, our analysis provides a comprehensive assessment of security-aware energy efficiency in CF m-MIMO-based IoT networks. To enhance SEE, we introduce a hybrid deep-learning (DL) framework that integrates convolutional neural networks (CNN) and long short-term memory (LSTM) networks for joint EE and security optimization. The CNN extracts spatial features, while the LSTM captures temporal dependencies, enabling a more robust and adaptive modeling of dynamic IoT communication patterns. Additionally, a multi-objective improved biogeography-based optimization (MOIBBO) algorithm is utilized to optimize hyperparameters, ensuring an improved balance between convergence speed and model performance. Extensive simulation results demonstrate that the proposed MOIBBO-CNN–LSTM framework achieves superior SEE performance compared to benchmark schemes. Specifically, MOIBBO-CNN–LSTM attains an SEE gain of up to 38% compared to LSTM and 22% over CNN while converging significantly faster at early training epochs. Furthermore, our results reveal that SEE improves with increasing AP transmit power up to a saturation point (approximately 9.5 Mb/J at P_AP^max = 500 mW), beyond which excessive power consumption limits efficiency gains. Additionally, SEE decreases as the number of APs increases, underscoring the need for adaptive AP selection strategies to mitigate static power consumption in backhaul links. These findings confirm that MOIBBO-CNN–LSTM offers an effective solution for optimizing SEE in CF m-MIMO-based IoT networks, paving the way for more energy-efficient and secure IoT communications. Full article
(This article belongs to the Special Issue Moving Towards 6G Wireless Technologies—2nd Edition)
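The sketch below shows a generic CNN–LSTM regressor of the kind described (a Conv1d stage for spatial structure followed by an LSTM for temporal dependencies), written in PyTorch; input shapes and the SEE target are placeholders, and the MOIBBO hyperparameter search is not reproduced.

```python
# Generic CNN-LSTM regressor sketch: Conv1d for spatial features, LSTM for
# temporal ones, a linear head for the (placeholder) SEE prediction.
import torch
import torch.nn as nn

class CNNLSTMRegressor(nn.Module):
    def __init__(self, n_features: int = 16, conv_ch: int = 32, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_ch, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted secrecy energy efficiency

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features), e.g. per-slot channel/power statistics
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)   # back to (batch, time, conv_ch)
        out, _ = self.lstm(z)
        return self.head(out[:, -1, :]).squeeze(-1)

model = CNNLSTMRegressor()
batch = torch.randn(4, 50, 16)   # 4 samples, 50 time slots, 16 features each
print(model(batch).shape)        # torch.Size([4])
```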

14 pages, 522 KiB  
Article
NUDIF: A Non-Uniform Deployment Framework for Distributed Inference in Heterogeneous Edge Clusters
by Peng Li, Chen Qing and Hao Liu
Future Internet 2025, 17(4), 168; https://doi.org/10.3390/fi17040168 - 11 Apr 2025
Abstract
Distributed inference in resource-constrained heterogeneous edge clusters is fundamentally limited by disparities in device capabilities and load imbalance issues. Existing methods predominantly focus on optimizing single-pipeline allocation schemes for partitioned sub-models. However, such approaches often lead to load imbalance and suboptimal resource utilization under concurrent batch processing scenarios. To address these challenges, we propose a non-uniform deployment inference framework (NUDIF), which achieves high-throughput distributed inference service by adapting to heterogeneous resources and balancing inter-stage processing capabilities. Formulated as a mixed-integer nonlinear programming (MINLP) problem, NUDIF is responsible for planning the number of instances for each sub-model and determining the specific devices for deploying these instances, while considering computational capacity, memory constraints, and communication latency. This optimization minimizes inter-stage processing discrepancies and maximizes resource utilization. Experimental evaluations demonstrate that NUDIF enhances system throughput by an average of 9.95% compared to traditional single-pipeline optimization methods under various scales of cluster device configurations. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
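To convey the flavor of the optimization (without the full MINLP), the sketch below poses a heavily simplified stage-to-device assignment as a small integer program with PuLP: minimize the slowest stage time under memory limits. All numbers are invented, and the real NUDIF formulation also plans instance counts and models communication latency.

```python
# Heavily simplified sketch of the placement problem: assign pipeline stages to
# heterogeneous devices under memory limits, minimizing the bottleneck stage time.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

stages = ["s0", "s1", "s2"]
devices = ["devA", "devB"]
compute_ms = {("s0", "devA"): 10, ("s0", "devB"): 20,
              ("s1", "devA"): 30, ("s1", "devB"): 15,
              ("s2", "devA"): 25, ("s2", "devB"): 25}
mem_mb = {"s0": 200, "s1": 400, "s2": 300}
mem_cap = {"devA": 512, "devB": 512}

prob = LpProblem("placement_sketch", LpMinimize)
x = {(s, d): LpVariable(f"x_{s}_{d}", cat=LpBinary) for s in stages for d in devices}
slowest = LpVariable("slowest_stage_ms", lowBound=0)

prob += slowest                                               # minimize bottleneck stage time
for s in stages:
    prob += lpSum(x[s, d] for d in devices) == 1              # each stage placed exactly once
    prob += lpSum(compute_ms[s, d] * x[s, d] for d in devices) <= slowest
for d in devices:
    prob += lpSum(mem_mb[s] * x[s, d] for s in stages) <= mem_cap[d]

prob.solve()
for (s, d), var in x.items():
    if value(var) == 1:
        print(f"{s} -> {d}")
print("bottleneck stage time:", value(slowest), "ms")
```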

24 pages, 687 KiB  
Article
Analyzing Impact and Systemwide Effects of the SlowROS Attack in an Industrial Automation Scenario
by Ivan Cibrario Bertolotti, Luca Durante and Enrico Cambiaso
Future Internet 2025, 17(4), 167; https://doi.org/10.3390/fi17040167 - 11 Apr 2025
Abstract
The ongoing adoption of Robot Operating Systems (ROSs) not only for research-oriented projects but also for industrial applications demands a more thorough assessment of their security than in the past. This paper highlights that a key ROS component—the ROS Master—is indeed vulnerable to a novel kind of Slow Denial of Service (slow DoS) attack, the root cause of this vulnerability being an extremely high idle connection timeout. The effects of vulnerability exploitation have been evaluated in detail by means of a realistic test bed, showing how it leads to a systemwide and potentially dangerous disruption of ROS system operations. Moreover, it has been shown how some basic forms of built-in protection of the Linux kernel can be easily circumvented and are therefore ineffective against this kind of threat. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)

22 pages, 2477 KiB  
Article
Reinforcement Learning-Based Dynamic Fuzzy Weight Adjustment for Adaptive User Interfaces in Educational Software
by Christos Troussas, Akrivi Krouska, Phivos Mylonas and Cleo Sgouropoulou
Future Internet 2025, 17(4), 166; https://doi.org/10.3390/fi17040166 - 9 Apr 2025
Abstract
Adaptive educational systems are essential for addressing the diverse learning needs of students by dynamically adjusting instructional content and user interfaces (UI) based on real-time performance. Traditional adaptive learning environments often rely on static fuzzy logic rules, which lack the flexibility to evolve with learners’ changing behaviors. To address this limitation, this paper presents an adaptive UI system for educational software in Java programming, integrating fuzzy logic and reinforcement learning (RL) to personalize learning experiences. The system consists of two main modules: (a) the Fuzzy Inference Module, which classifies learners into Fast, Moderate, or Slow categories based on triangular membership functions, and (b) the Reinforcement Learning Optimization Module, which dynamically adjusts the fuzzy membership function thresholds to enhance personalization over time. By refining the timing and necessity of UI modifications, the system optimizes hints, difficulty levels, and structured guidance, ensuring interventions are neither premature nor delayed. The system was evaluated in educational software for Java programming, with 100 postgraduate students. The evaluation, based on learning efficiency, engagement, and usability metrics, demonstrated promising results, particularly for slow and moderate learners, confirming that reinforcement learning-driven fuzzy weight adjustments significantly improve adaptive UI effectiveness. Full article
(This article belongs to the Special Issue Advances and Perspectives in Human-Computer Interaction—2nd Edition)
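The small sketch below covers the fuzzy-inference side only: triangular membership functions classify a learner as Fast, Moderate, or Slow from a normalized performance score. The breakpoints are illustrative; in the paper they correspond to the parameters the reinforcement-learning module adjusts over time.

```python
# Triangular fuzzy membership sketch: classify a normalized performance score
# into Slow / Moderate / Fast. Breakpoints are illustrative, tunable parameters.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership with feet at a and c and peak at b (shoulders allowed)."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return max(0.0, min(left, right, 1.0))

def classify(score: float, thresholds=None) -> dict[str, float]:
    # thresholds: (a, b, c) per category; these are the tunable fuzzy parameters.
    thresholds = thresholds or {
        "Slow":     (0.0, 0.0, 0.5),
        "Moderate": (0.3, 0.5, 0.7),
        "Fast":     (0.5, 1.0, 1.0),
    }
    return {name: round(tri(score, *abc), 3) for name, abc in thresholds.items()}

print(classify(0.42))   # e.g. {'Slow': 0.16, 'Moderate': 0.6, 'Fast': 0.0}
```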

24 pages, 2548 KiB  
Article
CPCROK: A Communication-Efficient and Privacy-Preserving Scheme for Low-Density Vehicular Ad Hoc Networks
by Junchao Wang, Honglin Li, Yan Sun, Chris Phillips, Alexios Mylonas and Dimitris Gritzalis
Future Internet 2025, 17(4), 165; https://doi.org/10.3390/fi17040165 - 9 Apr 2025
Abstract
The mix-zone method is effective in preserving real-time vehicle identity and location privacy in Vehicular Ad Hoc Networks (VANETs). However, it has limitations in low-vehicle-density scenarios, where adversaries can still identify the real trajectories of the victim vehicle. To address this issue, researchers often generate numerous fake beacons to deceive attackers, but this increases transmission overhead significantly. Therefore, we propose the Communication-Efficient Pseudonym-Changing Scheme within the Restricted Online Knowledge Scheme (CPCROK) to protect vehicle privacy without causing significant communication overhead in low-density VANETs by generating highly authentic fake beacons to form a single fabricated trajectory. Specifically, the CPCROK consists of three main modules: firstly, a special Kalman filter module that provides real-time, coarse-grained vehicle trajectory estimates to reduce the need for real-time vehicle state information; secondly, a Recurrent Neural Network (RNN) module that enhances predictions within the mix zone by incorporating offline data engineering and considering online vehicle steering angles; and finally, a trajectory generation module that collaborates with the first two to generate highly convincing fake trajectories outside the mix zone. The experimental results confirm that CPCROK effectively reduces the attack success rate by over 90%, outperforming the plain mix-zone scheme and beating other fake beacon schemes by more than 60%. Additionally, CPCROK effectively minimizes transmission overhead by 67%, all while ensuring a high level of protection. Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)
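The sketch below is a minimal constant-velocity Kalman filter, shown only to illustrate the coarse trajectory-estimation step; the noise values are illustrative, and the paper's specialized Kalman filter and RNN refinement are not reproduced here.

```python
# Minimal constant-velocity Kalman filter over position beacons.
# Matrices and noise levels are illustrative assumptions.
import numpy as np

dt = 0.1                                   # assumed beacon interval in seconds
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                       # process noise
R = np.eye(2) * 0.5                        # measurement noise

x, P = np.zeros(4), np.eye(4)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with position measurement z = [x, y]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([0.0, 0.0]), np.array([1.0, 0.1]), np.array([2.1, 0.2])]:
    x, P = kf_step(x, P, z)
print("estimated state [x, y, vx, vy]:", np.round(x, 2))
```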

12 pages, 386 KiB  
Article
A Transformer-Based Autoencoder with Isolation Forest and XGBoost for Malfunction and Intrusion Detection in Wireless Sensor Networks for Forest Fire Prediction
by Ahshanul Haque and Hamdy Soliman
Future Internet 2025, 17(4), 164; https://doi.org/10.3390/fi17040164 - 9 Apr 2025
Abstract
Wireless Sensor Networks (WSNs) play a critical role in environmental monitoring and early forest fire detection. However, they are susceptible to sensor malfunctions and network intrusions, which can compromise data integrity and lead to false alarms or missed detections. This study presents a hybrid anomaly detection framework that integrates a Transformer-based Autoencoder, Isolation Forest, and XGBoost to effectively classify normal sensor behavior, malfunctions, and intrusions. The Transformer Autoencoder models spatiotemporal dependencies in sensor data, while adaptive thresholding dynamically adjusts sensitivity to anomalies. Isolation Forest provides unsupervised anomaly validation, and XGBoost further refines classification, enhancing detection precision. Experimental evaluation using real-world sensor data demonstrates that our model achieves 95% accuracy, with high recall for intrusion detection, minimizing false negatives. The proposed approach improves the reliability of WSN-based fire monitoring by reducing false alarms, adapting to dynamic environmental conditions, and distinguishing between hardware failures and security threats. Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
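A small sketch of the adaptive-thresholding idea follows: flag a window when its reconstruction error exceeds a rolling mean plus k standard deviations, then let an Isolation Forest give an unsupervised second opinion. The Transformer autoencoder itself is not reproduced; the reconstruction errors below are simulated.

```python
# Adaptive thresholding over (simulated) autoencoder reconstruction errors,
# with Isolation Forest as an unsupervised cross-check on the same series.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
recon_error = rng.normal(0.05, 0.01, size=200)     # stand-in for AE errors
recon_error[150] = 0.4                              # injected anomaly

k, window = 3.0, 50
flags = []
for t, err in enumerate(recon_error):
    history = recon_error[max(0, t - window):t]
    if len(history) >= 10:
        threshold = history.mean() + k * history.std()   # adapts to recent conditions
        flags.append(err > threshold)
    else:
        flags.append(False)
print("adaptive threshold flagged windows:", np.where(flags)[0])

labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(recon_error.reshape(-1, 1))
print("isolation forest flagged windows:", np.where(labels == -1)[0])
```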

40 pages, 470 KiB  
Systematic Review
A Systematic Review on the Combination of VR, IoT and AI Technologies, and Their Integration in Applications
by Dimitris Kostadimas, Vlasios Kasapakis and Konstantinos Kotis
Future Internet 2025, 17(4), 163; https://doi.org/10.3390/fi17040163 (registering DOI) - 7 Apr 2025
Abstract
The convergence of Virtual Reality (VR), Artificial Intelligence (AI), and the Internet of Things (IoT) offers transformative potential across numerous sectors. However, existing studies often examine these technologies independently or in limited pairings, which overlooks the synergistic possibilities of their combined usage. This systematic review adheres to the PRISMA guidelines in order to critically analyze peer-reviewed literature from highly recognized academic databases related to the intersection of VR, AI, and IoT, and identify application domains, methodologies, tools, and key challenges. By focusing on real-life implementations and working prototypes, this review highlights state-of-the-art advancements and uncovers gaps that hinder practical adoption, such as data collection issues, interoperability barriers, and user experience challenges. The findings reveal that digital twins (DTs), AIoT systems, and immersive XR environments are promising as emerging technologies (ET), but require further development to achieve scalability and real-world impact, while certain fields have seen only limited research to date. This review bridges theory and practice, providing a targeted foundation for future interdisciplinary research aimed at advancing practical, scalable solutions across domains such as healthcare, smart cities, industry, education, cultural heritage, and beyond. The study found that the integration of VR, AI, and IoT holds significant potential across various domains, with DTs, IoT systems, and immersive XR environments showing promising applications, but challenges such as data interoperability, user experience limitations, and scalability barriers hinder widespread adoption. Full article
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)

14 pages, 274 KiB  
Article
Multi-Class Intrusion Detection in Internet of Vehicles: Optimizing Machine Learning Models on Imbalanced Data
by Ágata Palma, Mário Antunes, Jorge Bernardino and Ana Alves
Future Internet 2025, 17(4), 162; https://doi.org/10.3390/fi17040162 (registering DOI) - 7 Apr 2025
Abstract
The Internet of Vehicles (IoV) presents complex cybersecurity challenges, particularly against Denial-of-Service (DoS) and spoofing attacks targeting the Controller Area Network (CAN) bus. This study leverages the CICIoV2024 dataset, comprising six distinct classes of benign traffic and various types of attacks, to evaluate advanced machine learning techniques for intrusion detection systems (IDS). The models XGBoost, Random Forest, AdaBoost, Extra Trees, Logistic Regression, and Deep Neural Network were tested under realistic, imbalanced data conditions, ensuring that the evaluation reflects real-world scenarios where benign traffic dominates. Using hyperparameter optimization with Optuna, we achieved significant improvements in detection accuracy and robustness. Ensemble methods such as XGBoost and Random Forest consistently demonstrated superior performance, achieving perfect accuracy and macro-average F1-scores, even when detecting minority attack classes, in contrast to previous results for the CICIoV2024 dataset. The integration of optimized hyperparameter tuning and a broader methodological scope culminated in an IDS framework capable of addressing diverse attack scenarios with exceptional precision. Full article
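The sketch below illustrates Optuna-driven hyperparameter tuning of a Random Forest with a macro-averaged F1 objective on imbalanced data; the synthetic dataset and the small search space are placeholders (the study tunes several models on CICIoV2024).

```python
# Optuna tuning sketch with a macro-F1 objective on an imbalanced, synthetic
# stand-in dataset; the search space is deliberately small for illustration.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced multi-class stand-in: mostly benign traffic, few attack samples.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.9, 0.07, 0.03], random_state=0)

def objective(trial):
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 300),
        max_depth=trial.suggest_int("max_depth", 3, 20),
        class_weight="balanced",        # counteract the class imbalance
        random_state=0,
    )
    # Macro-averaged F1 weights minority attack classes equally with benign traffic.
    return cross_val_score(clf, X, y, cv=3, scoring="f1_macro").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best macro-F1:", round(study.best_value, 3), "with", study.best_params)
```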

39 pages, 4156 KiB  
Review
Enabling Green Cellular Networks: A Review and Proposal Leveraging Software-Defined Networking, Network Function Virtualization, and Cloud-Radio Access Network
by Radheshyam Singh, Line M. P. Larsen, Eder Ollora Zaballa, Michael Stübert Berger, Christian Kloch and Lars Dittmann
Future Internet 2025, 17(4), 161; https://doi.org/10.3390/fi17040161 - 5 Apr 2025
Abstract
The increasing demand for enhanced communication systems, driven by applications such as real-time video streaming, online gaming, critical operations, and Internet-of-Things (IoT) services, has necessitated the optimization of cellular networks to meet evolving requirements while addressing power consumption challenges. In this context, various initiatives undertaken by industry, academia, and researchers to reduce the power consumption of cellular network systems are comprehensively reviewed. Particular attention is given to emerging technologies, including Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Cloud-Radio Access Network (C-RAN), which are identified as key enablers for reshaping cellular infrastructure. Their collective potential to enhance energy efficiency while addressing convergence challenges is analyzed, and solutions for sustainable network evolution are proposed. A conceptual architecture based on SDN, NFV, and C-RAN is presented as an illustrative example of integrating these technologies to achieve significant power savings. The proposed framework outlines an approach to developing energy-efficient cellular networks, capable of reducing power consumption by approximately 40 to 50% through the optimal placement of virtual network functions. Full article

26 pages, 9869 KiB  
Article
Comparative Feature-Guided Regression Network with a Model-Eye Pretrained Model for Online Refractive Error Screening
by Jiayi Wang, Tianyou Zheng, Yang Zhang, Tianli Zheng and Weiwei Fu
Future Internet 2025, 17(4), 160; https://doi.org/10.3390/fi17040160 - 3 Apr 2025
Abstract
With the development of the internet, the incidence of myopia is showing a trend towards younger ages, making routine vision screening increasingly essential. This paper designs an online refractive error screening solution centered on the CFGN (Comparative Feature-Guided Network), a refractive error screening network based on the eccentric photorefraction method. Additionally, a training strategy incorporating an objective model-eye pretraining model is introduced to enhance screening accuracy. Specifically, we obtain six-channel infrared eccentric photorefraction pupil images to enrich image information and design a comparative feature-guided module and a multi-channel information fusion module based on the characteristics of each channel image to enhance network performance. Experimental results show that CFGN achieves an accuracy exceeding 92% within a ±1.00 D refractive error range across datasets from two regions, with mean absolute errors (MAEs) of 0.168 D and 0.108 D, outperforming traditional models and meeting vision screening requirements. The pretrained model helps achieve better performance with small samples. The vision screening scheme proposed in this study is more efficient and accurate than existing networks, and the cost-effectiveness of the pretrained model with transfer learning provides a technical foundation for subsequent rapid online screening and routine tracking via networking. Full article

30 pages, 3565 KiB  
Systematic Review
Internet of Things and Deep Learning for Citizen Security: A Systematic Literature Review on Violence and Crime
by Chrisbel Simisterra-Batallas, Pablo Pico-Valencia, Jaime Sayago-Heredia and Xavier Quiñónez-Ku
Future Internet 2025, 17(4), 159; https://doi.org/10.3390/fi17040159 - 3 Apr 2025
Abstract
This study conducts a systematic literature review following the PRISMA framework and the guidelines of Kitchenham and Charters to analyze the application of Internet of Things (IoT) technologies and deep learning models in monitoring violent actions and criminal activities in smart cities. A total of 45 studies published between 2010 and 2024 were selected, revealing that most research, primarily from India and China, focuses on cybersecurity in IoT networks (76%), while fewer studies address the surveillance of physical violence and crime-related events (17%). Advanced neural network models, such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and hybrid approaches, have demonstrated high accuracy rates, averaging over 97.44%, in detecting suspicious behaviors. These models perform well in identifying anomalies in IoT security; however, they have primarily been tested in simulation environments (91% of analyzed studies), most of which incorporate real-world data. From a legal perspective, existing proposals mainly emphasize security and privacy. This study contributes to the development of smart cities by promoting IoT-based security methodologies that enhance surveillance and crime prevention in cities in developing countries. Full article
(This article belongs to the Special Issue Internet of Things (IoT) in Smart City)

17 pages, 2956 KiB  
Article
A3C-R: A QoS-Oriented Energy-Saving Routing Algorithm for Software-Defined Networks
by Sunan Wang, Rong Song, Xiangyu Zheng, Wanwei Huang and Hongchang Liu
Future Internet 2025, 17(4), 158; https://doi.org/10.3390/fi17040158 - 3 Apr 2025
Abstract
With the rapid growth of Internet applications and network traffic, existing routing algorithms usually find it difficult to guarantee quality of service (QoS) indicators such as delay, bandwidth, and packet loss rate, as well as network energy consumption, for the various data flows with distinct business characteristics. They suffer from problems such as unbalanced traffic scheduling and unreasonable network resource allocation. To address these problems, this paper proposes A3C-R, a QoS-oriented energy-saving routing algorithm for the software-defined network (SDN) environment. Building on the asynchronous updates of the asynchronous advantage Actor-Critic (A3C) algorithm and the independent interaction of multiple agents with the environment, the A3C-R algorithm can effectively improve the convergence of the routing algorithm. The A3C-R algorithm first takes QoS indicators such as delay, bandwidth, and packet loss rate, together with the energy consumption of each link, as input. It then creates multiple agents for asynchronous training, continuously updating the Actor and Critic in each agent and periodically synchronizing the model parameters to the global model. Once training converges, the algorithm can output the link weights of the network topology to facilitate the calculation of intelligent routing strategies that meet QoS requirements and lower network energy consumption. The experimental results indicate that the A3C-R algorithm, compared to the baseline algorithms ECMP, I-DQN, and DDPG-EEFS, reduces delay by approximately 9.4%, increases throughput by approximately 7.0%, decreases the packet loss rate by approximately 9.5%, and improves energy-saving percentage by approximately 10.8%. Full article
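The sketch below illustrates only the final step of such a pipeline: once an agent has produced per-link weights, routing paths can be computed over them with a standard shortest-path algorithm. The A3C training loop is not reproduced, and the topology and the weights shown are invented.

```python
# From learned link weights to routes: a standard shortest-path computation
# with networkx. Topology and weights are invented placeholders.
import networkx as nx

# Hypothetical learned link weights: lower weight = better QoS/energy trade-off.
learned_weights = {
    ("s1", "s2"): 0.4, ("s1", "s3"): 0.9,
    ("s2", "s4"): 0.3, ("s3", "s4"): 0.2,
    ("s2", "s3"): 0.5,
}

G = nx.Graph()
for (u, v), w in learned_weights.items():
    G.add_edge(u, v, weight=w)

path = nx.shortest_path(G, source="s1", target="s4", weight="weight")
cost = nx.shortest_path_length(G, source="s1", target="s4", weight="weight")
print("selected route:", " -> ".join(path), "| total weight:", round(cost, 2))
```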

24 pages, 3782 KiB  
Article
The New CAP Theorem on Blockchain Consensus Systems
by Aristidis G. Anagnostakis and Euripidis Glavas
Future Internet 2025, 17(4), 157; https://doi.org/10.3390/fi17040157 - 2 Apr 2025
Abstract
One of the most emblematic theorems in the theory of distributed databases is Eric Brewer’s CAP theorem. It stresses the tradeoffs between Consistency, Availability, and Partition tolerance and states that it is impossible to guarantee all three of them simultaneously. Inspired by this, we introduce the new CAP theorem for autonomous consensus systems, and we demonstrate that, at most, two of the three elementary properties, Consensus achievement (C), Autonomy (A), and entropic Performance (P), can be optimized simultaneously in the generic case. This provides a theoretical limit to Blockchain systems’ decentralization, impacting their scalability, security, and real-world adoption. To formalize and analyze this tradeoff, we utilize the IoT micro-Blockchain as a universal, minimal, consensus-enabling framework. We define a set of quantitative functions relating each of the properties to the number of event witnesses in the system. We identify the existing mutual exclusions and formally prove, for a homogeneous system, that (A), (C), and (P) cannot be optimized simultaneously. This suggests that a requirement for concurrent optimization of the three properties cannot be satisfied in the generic case and reveals an intrinsic limitation on the design and the optimization of distributed Blockchain consensus mechanisms. Our findings are formally proved utilizing the IoT micro-Blockchain framework and validated through the empirical data benchmarking of large-scale Blockchain systems, i.e., Bitcoin, Ethereum, and Hyperledger Fabric. Full article

23 pages, 2670 KiB  
Article
Database Security and Performance: A Case of SQL Injection Attacks Using Docker-Based Virtualisation and Its Effect on Performance
by Ade Dotun Ajasa, Hassan Chizari and Abu Alam
Future Internet 2025, 17(4), 156; https://doi.org/10.3390/fi17040156 - 2 Apr 2025
Abstract
Modern database systems are critical for storing sensitive information but are increasingly targeted by cyber threats, including SQL injection (SQLi) attacks. This research proposes a robust security framework leveraging Docker-based virtualisation to enhance database security and mitigate the impact of SQLi attacks. A controlled experimental methodology evaluated the framework’s effectiveness using Damn Vulnerable Web Application (DVWA) and Acunetix databases. The findings reveal that Docker significantly reduces the vulnerability to SQLi attacks by isolating database instances, thereby safeguarding user data and system integrity. While Docker introduces a significant increase in CPU utilisation during high-traffic scenarios, the trade-off ensures enhanced security and reliability for real-world applications. This study highlights Docker’s potential as a practical solution for addressing evolving database security challenges in distributed and cloud environments. Full article
(This article belongs to the Collection Information Systems Security)

19 pages, 4613 KiB  
Article
Balancing Prediction Accuracy and Explanation Power of Path Loss Modeling in a University Campus Environment via Explainable AI
by Hamed Khalili, Hannes Frey and Maria A. Wimmer
Future Internet 2025, 17(4), 155; https://doi.org/10.3390/fi17040155 - 31 Mar 2025
Abstract
For efficient radio network planning, empirical path loss (PL) prediction models are utilized to predict signal attenuation in different environments. Alternatively, machine learning (ML) models are proposed to predict path loss. While empirical models are transparent and require less computational capacity, they are unable to produce accurate predictions in complex environments. While ML models are precise and can cope with complex terrains, their opaque nature makes it difficult to trust and rely on their predictions. To bridge the gap between transparency and accuracy, in this paper we utilize glass box ML using Microsoft Research’s explainable boosting machines (EBM) together with the PL data measured for a university campus environment. Moreover, a polar coordinate transformation is applied in our paper, which reveals that the transmitting-angle feature has greater explanatory power than the distance feature. PL predictions of glass box ML are compared with predictions of black box ML models as well as those generated by empirical models. The glass box EBM exhibits the highest performance. The glass box ML, furthermore, sheds light on the important explanatory features and the magnitude of their effects on signal attenuation in the underlying propagation environment. Full article
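The sketch below shows the two ingredients in miniature, assuming the interpret package's glass-box EBM: converting transmitter–receiver offsets into polar features (distance, angle) and fitting an ExplainableBoostingRegressor. The path-loss data are synthetic; this is not the paper's measurement pipeline.

```python
# Polar feature transform + glass-box EBM sketch on synthetic path-loss data.
# Assumes the `interpret` package; feature names and data are placeholders.
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
dx, dy = rng.uniform(-200, 200, 1000), rng.uniform(-200, 200, 1000)

# Polar coordinate transformation of receiver position relative to the transmitter.
distance = np.hypot(dx, dy)
angle = np.degrees(np.arctan2(dy, dx))

# Synthetic stand-in for measured path loss (log-distance trend + angle effect + noise).
pl = 40 + 30 * np.log10(distance + 1) + 3 * np.cos(np.radians(angle)) + rng.normal(0, 2, 1000)

X = np.column_stack([distance, angle])
ebm = ExplainableBoostingRegressor(feature_names=["distance_m", "angle_deg"])
ebm.fit(X, pl)

global_exp = ebm.explain_global()   # per-feature shape functions and importances
print(ebm.predict(X[:3]))
```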

19 pages, 3479 KiB  
Article
Generative AI-Enhanced Intelligent Tutoring System for Graduate Cybersecurity Programs
by Madhav Mukherjee, John Le and Yang-Wai Chow
Future Internet 2025, 17(4), 154; https://doi.org/10.3390/fi17040154 - 31 Mar 2025
Viewed by 95
Abstract
Owing to its broad applicability, generative artificial intelligence has been adopted across many areas of education, providing universities with new opportunities, particularly in cybersecurity education. With the industry facing a skills shortage, this paper explores the use of generative artificial intelligence as an intelligent tutoring system in higher cybersecurity education to enhance factors that lead to positive student outcomes. Despite its success in content generation and assessment within cybersecurity, the field's multidisciplinary nature presents additional challenges to scalability and generalisability. We propose a solution that uses agents to orchestrate specialised large language models and demonstrate its applicability to graduate-level cybersecurity topics offered at a leading Australian university. We aim to present a generalisable and scalable solution for diverse educational paradigms, highlighting its relevant features, together with a method to evaluate content quality and the overall effectiveness of the intelligent tutoring system on subjective factors aligned with positive student outcomes. We further identify areas for future research in model efficiency, privacy, security, and scalability. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
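The paper's implementation is not included in this listing; as a purely hypothetical sketch of the orchestration pattern it describes, the snippet below routes a student question to one of several specialised tutoring "agents", each of which would wrap a large language model behind its own system prompt. The call_llm function, agent names, keywords, and prompts are placeholders, not the authors' design.

```python
# Hypothetical sketch of an orchestrator dispatching to specialised agents.
# call_llm is a stand-in for a real LLM client; nothing here reflects the
# authors' actual prompts, models, or routing logic.
from dataclasses import dataclass


def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a call to a large language model endpoint."""
    return f"[answer generated with persona: {system_prompt[:40]}...]"


@dataclass
class TutorAgent:
    name: str
    keywords: tuple[str, ...]
    system_prompt: str

    def answer(self, question: str) -> str:
        return call_llm(self.system_prompt, question)


AGENTS = [
    TutorAgent("cryptography", ("cipher", "aes", "rsa", "hash"),
               "You are a graduate-level cryptography tutor. Explain step by step."),
    TutorAgent("network_security", ("firewall", "tls", "packet", "ids"),
               "You are a network security tutor. Use concrete protocol examples."),
    TutorAgent("governance", ("policy", "compliance", "risk", "iso"),
               "You are a security governance tutor. Tie answers to frameworks."),
]


def route(question: str) -> TutorAgent:
    """Pick the agent whose keywords best match the question (orchestrator)."""
    scores = [sum(k in question.lower() for k in a.keywords) for a in AGENTS]
    return AGENTS[scores.index(max(scores))]


if __name__ == "__main__":
    q = "How does AES key expansion differ from RSA key generation?"
    agent = route(q)
    print(agent.name, "->", agent.answer(q))
```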
26 pages, 430 KiB  
Article
Practical Comparison Between the CI/CD Platforms Azure DevOps and GitHub
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2025, 17(4), 153; https://doi.org/10.3390/fi17040153 - 31 Mar 2025
Viewed by 158
Abstract
Continuous integration and delivery are essential for modern software development, enabling teams to automate testing, streamline deployments, and deliver high-quality software more efficiently. As DevOps adoption grows, selecting the right CI/CD platform is essential for optimizing workflows. Azure DevOps and GitHub, both under Microsoft, are leading solutions with distinct features and target audiences. This paper compares Azure DevOps and GitHub, evaluating their CI/CD capabilities, scalability, security, pricing, and usability. It explores their integration with cloud environments, automation workflows, and suitability for teams of varying sizes. Security features, including access controls, vulnerability scanning, and compliance, are analyzed to assess their suitability for organizational needs. Cost-effectiveness is also examined through licensing models and total ownership costs. This study leverages real-world case studies and industry trends to guide organizations in selecting the right CI/CD tools. Whether seeking a fully managed DevOps suite or a flexible, Git-native platform, understanding the strengths and limitations of Azure DevOps and GitHub is crucial for optimizing development and meeting long-term scalability goals. Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)
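As a small, hypothetical illustration of the integration differences discussed above (not material from the paper), the sketch below lists recent pipeline runs through each platform's public REST API: GitHub Actions workflow runs on one side and Azure DevOps build runs on the other. The owner, repository, organisation, project, and token values are placeholders.

```python
# Illustrative sketch: listing recent CI runs via each platform's REST API.
# Endpoints are the public GitHub and Azure DevOps REST APIs; the owner,
# repo, organisation, project, and tokens below are placeholders.
import requests

GITHUB_TOKEN = "ghp_..."    # placeholder personal access token
AZDO_PAT = "azdo_pat_..."   # placeholder Azure DevOps PAT


def github_workflow_runs(owner: str, repo: str, limit: int = 5) -> list[str]:
    """Recent GitHub Actions workflow runs for a repository."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/actions/runs",
        headers={"Authorization": f"Bearer {GITHUB_TOKEN}",
                 "Accept": "application/vnd.github+json"},
        params={"per_page": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return [f"{run['name']}: {run['conclusion'] or 'in progress'}"
            for run in resp.json()["workflow_runs"]]


def azdo_build_runs(org: str, project: str, limit: int = 5) -> list[str]:
    """Recent Azure DevOps (Azure Pipelines) build runs for a project."""
    resp = requests.get(
        f"https://dev.azure.com/{org}/{project}/_apis/build/builds",
        params={"$top": limit, "api-version": "7.0"},
        auth=("", AZDO_PAT),   # basic auth: empty username + PAT
        timeout=30,
    )
    resp.raise_for_status()
    return [f"{b['definition']['name']}: {b.get('result', 'in progress')}"
            for b in resp.json()["value"]]


# Example usage (requires valid tokens):
# print(github_workflow_runs("octocat", "hello-world"))
# print(azdo_build_runs("my-org", "my-project"))
```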
2 pages, 133 KiB  
Editorial
eHealth and mHealth
by Bernhard Neumayer, Stefan Sauermann and Sten Hanke
Future Internet 2025, 17(4), 152; https://doi.org/10.3390/fi17040152 - 31 Mar 2025
Viewed by 74
Abstract
eHealth (electronic health) and mHealth (mobile health) have been rapidly evolving in recent years, offering innovative solutions to healthcare challenges [...] Full article
(This article belongs to the Special Issue eHealth and mHealth)
26 pages, 587 KiB  
Article
GDPR and Large Language Models: Technical and Legal Obstacles
by Georgios Feretzakis, Evangelia Vagena, Konstantinos Kalodanis, Paraskevi Peristera, Dimitris Kalles and Athanasios Anastasiou
Future Internet 2025, 17(4), 151; https://doi.org/10.3390/fi17040151 - 28 Mar 2025
Viewed by 165
Abstract
Large Language Models (LLMs) have revolutionized natural language processing but present significant technical and legal challenges when confronted with the General Data Protection Regulation (GDPR). This paper examines the complexities involved in reconciling the design and operation of LLMs with GDPR requirements. In particular, we analyze how key GDPR provisions—including the Right to Erasure, Right of Access, Right to Rectification, and restrictions on Automated Decision-Making—are challenged by the opaque and distributed nature of LLMs. We discuss issues such as the transformation of personal data into non-interpretable model parameters, difficulties in ensuring transparency and accountability, and the risks of bias and data over-collection. Moreover, the paper explores potential technical solutions such as machine unlearning, explainable AI (XAI), differential privacy, and federated learning, alongside strategies for embedding privacy-by-design principles and automated compliance tools into LLM development. The analysis is further enriched by considering the implications of emerging regulations like the EU’s Artificial Intelligence Act. In addition, we propose a four-layer governance framework that addresses data governance, technical privacy enhancements, continuous compliance monitoring, and explainability and oversight, thereby offering a practical roadmap for GDPR alignment in LLM systems. Through this comprehensive examination, we aim to bridge the gap between the technical capabilities of LLMs and the stringent data protection standards mandated by GDPR, ultimately contributing to more responsible and ethical AI practices. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
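Differential privacy is only one of several mitigations the paper surveys; as a generic, hypothetical illustration (not the authors' method), the sketch below shows the core DP-SGD step of clipping per-example gradients and adding Gaussian noise before a parameter update, which bounds how much any single individual's data can influence the trained model. The clipping norm and noise multiplier are illustrative values.

```python
# Generic differential-privacy sketch: the DP-SGD step of per-example
# gradient clipping plus Gaussian noise, implemented with plain NumPy.
import numpy as np


def dp_sgd_step(per_example_grads: np.ndarray,
                clip_norm: float = 1.0,
                noise_multiplier: float = 1.1,
                lr: float = 0.1) -> np.ndarray:
    """Return a privatised parameter update from per-example gradients
    of shape (batch_size, n_params)."""
    # 1. Clip each example's gradient to an L2 norm of at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # 2. Sum the clipped gradients and add noise calibrated to the bound.
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        0.0, noise_multiplier * clip_norm, size=clipped.shape[1])

    # 3. Average over the batch and take a gradient step.
    return -lr * noisy_sum / per_example_grads.shape[0]


# Example: 32 per-example gradients over 10 parameters.
update = dp_sgd_step(np.random.randn(32, 10))
print(update.shape)
```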
18 pages, 3210 KiB  
Article
GraphDBSCAN: Optimized DBSCAN for Noise-Resistant Community Detection in Graph Clustering
by Danial Ahmadzadeh, Mehrdad Jalali, Reza Ghaemi and Maryam Kheirabadi
Future Internet 2025, 17(4), 150; https://doi.org/10.3390/fi17040150 - 28 Mar 2025
Viewed by 137
Abstract
Community detection in complex networks remains a significant challenge due to noise, outliers, and the dependency on predefined clustering parameters. This study introduces GraphDBSCAN, an adaptive community detection framework that integrates an optimized density-based clustering method with an enhanced graph partitioning approach. The proposed method refines clustering accuracy through three key innovations: (1) a K-nearest neighbor (KNN)-based strategy for automatic parameter tuning in density-based clustering, eliminating the need for manual selection; (2) a proximity-based feature extraction technique that enhances node representations while preserving network topology; and (3) an improved edge removal strategy in graph partitioning, incorporating additional centrality measures to refine community structures. GraphDBSCAN is evaluated on real-world and synthetic datasets, demonstrating improvements in modularity, noise reduction, and clustering robustness. Compared to existing methods, GraphDBSCAN consistently enhances structural coherence, reduces sensitivity to outliers, and improves community separation without requiring fixed parameter assumptions. The proposed method offers a scalable, data-driven approach to community detection, making it suitable for large-scale and heterogeneous networks. Full article
(This article belongs to the Topic Social Computing and Social Network Analysis)
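As a simplified, hypothetical illustration of the KNN-based parameter tuning summarised above (not the GraphDBSCAN implementation itself), the sketch below embeds a toy graph's nodes with a few Laplacian eigenvectors, estimates eps from the k-distance distribution, and runs standard DBSCAN from scikit-learn. The spectral embedding and the percentile heuristic are choices made only for this example.

```python
# Simplified illustration of KNN-based eps selection for DBSCAN on graph
# node features. Assumes networkx and scikit-learn are installed; the
# spectral features and the 90th-percentile heuristic are example choices.
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

# Toy graph with two planted communities.
G = nx.planted_partition_graph(2, 50, p_in=0.3, p_out=0.01, seed=42)

# Node features: a few eigenvectors of the normalised graph Laplacian.
L = nx.normalized_laplacian_matrix(G).toarray()
eigvals, eigvecs = np.linalg.eigh(L)
X = eigvecs[:, 1:4]          # skip the trivial first eigenvector

# KNN-based parameter tuning: distance to the k-th nearest neighbour for
# every node, with a high percentile of that distribution used as eps.
k = 5
knn_dist, _ = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
eps = np.percentile(knn_dist[:, -1], 90)

labels = DBSCAN(eps=eps, min_samples=k).fit_predict(X)
print("estimated eps:", round(float(eps), 4),
      "| communities:", len(set(labels) - {-1}),
      "| noise nodes:", int((labels == -1).sum()))
```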