Future Internet, Volume 17, Issue 10 (October 2025) – 47 articles

27 pages, 1438 KB  
Article
Towards Proactive Domain Name Security: An Adaptive System for .ro Domains Reputation Analysis
by Carmen Ionela Rotună, Ioan Ștefan Sacală and Adriana Alexandru
Future Internet 2025, 17(10), 478; https://doi.org/10.3390/fi17100478 - 18 Oct 2025
Abstract
In a digital landscape marked by the exponential growth of cyber threats, the development of automated domain reputation systems is extremely important. Emerging technologies such as artificial intelligence and machine learning now enable proactive and scalable approaches to the early identification of malicious or suspicious domains. This paper presents an adaptive domain name reputation system that integrates advanced machine learning to enhance cybersecurity resilience. The proposed framework uses domain data from the .ro domain Registry and several other sources (blacklists, whitelists, DNS, SSL certificates), detects anomalies using machine learning techniques, and scores domain security risk levels. A supervised XGBoost model is trained and assessed through five-fold stratified cross-validation and a held-out 80/20 split. On an example dataset of 25,000 domains, the system attains an accuracy of 0.993 and an F1 score of 0.993, and is exposed through a lightweight Flask service that performs asynchronous feature collection for near real-time scoring. The contribution is a blueprint that links list supervision with registry/DNS/TLS features and deployable inference to support proactive domain abuse mitigation in ccTLD environments. Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
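The evaluation protocol described in the abstract above (five-fold stratified cross-validation plus a held-out 80/20 split scored with F1) can be sketched as follows. This is a hypothetical illustration on synthetic data: the features and labels are invented, and scikit-learn's LogisticRegression stands in for the paper's XGBoost model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                   # stand-ins for registry/DNS/TLS features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "malicious" label

# Five-fold stratified cross-validation on the full set (accuracy by default).
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
cv_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv).mean()

# Held-out 80/20 split, reporting F1 as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))
print(round(cv_acc, 3), round(f1, 3))
```

Stratification keeps the class ratio identical in every fold and in the held-out split, which matters when malicious domains are a minority class.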
35 pages, 574 KB  
Article
Uncensored AI in the Wild: Tracking Publicly Available and Locally Deployable LLMs
by Bahrad A. Sokhansanj
Future Internet 2025, 17(10), 477; https://doi.org/10.3390/fi17100477 - 18 Oct 2025
Abstract
Open-weight generative large language models (LLMs) can be freely downloaded and modified. Yet, little empirical evidence exists on how these models are systematically altered and redistributed. This study provides a large-scale empirical analysis of safety-modified open-weight LLMs, drawing on 8608 model repositories and evaluating 20 representative modified models on unsafe prompts designed to elicit, for example, election disinformation, criminal instruction, and regulatory evasion. The study demonstrates that modified models exhibit substantially higher compliance: while unmodified models complied with an average of only 19.2% of unsafe requests, modified variants complied at an average rate of 80.0%. Modification effectiveness was independent of model size, with smaller, 14-billion-parameter variants sometimes matching or exceeding the compliance levels of 70-billion-parameter versions. The ecosystem is highly concentrated yet structurally decentralized; for example, the top 5% of providers account for over 60% of downloads and the top 20 for nearly 86%. Moreover, more than half of the identified models use GGUF packaging, optimized for consumer hardware, and 4-bit quantization methods proliferate widely, though full-precision and lossless 16-bit models remain the most downloaded. These findings demonstrate how locally deployable, modified LLMs represent a paradigm shift for Internet safety governance, calling for new regulatory approaches suited to decentralized AI. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
25 pages, 580 KB  
Article
Integrating Large Language Models into Automated Software Testing
by Yanet Sáez Iznaga, Luís Rato, Pedro Salgueiro and Javier Lamar León
Future Internet 2025, 17(10), 476; https://doi.org/10.3390/fi17100476 - 18 Oct 2025
Abstract
This work investigates the use of LLMs to enhance automation in software testing, with a particular focus on generating high-quality, context-aware test scripts from natural language descriptions, while addressing both text-to-code and text+code-to-code generation tasks. The Codestral Mamba model was fine-tuned by proposing a way to integrate LoRA matrices into its architecture, enabling efficient domain-specific adaptation and positioning Mamba as a viable alternative to Transformer-based models. The model was trained and evaluated on two benchmark datasets: CONCODE/CodeXGLUE and the proprietary TestCase2Code dataset. Through structured prompt engineering, the system was optimized to generate syntactically valid and semantically meaningful code for test cases. Experimental results demonstrate that the proposed methodology successfully enables the automatic generation of code-based test cases using large language models. In addition, this work reports secondary benefits, including improvements in test coverage, automation efficiency, and defect detection when compared to traditional manual approaches. The integration of LLMs into the software testing pipeline also showed potential for reducing time and cost while enhancing developer productivity and software quality. The findings suggest that LLM-driven approaches can be effectively aligned with continuous integration and deployment workflows. This work contributes to the growing body of research on AI-assisted software engineering and offers practical insights into the capabilities and limitations of current LLM technologies for testing automation. Full article
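The LoRA adaptation mentioned above can be illustrated with a minimal numpy sketch of the standard low-rank update: a frozen weight W is adapted by (α/r)·B·A, with B zero-initialized so training starts from the pretrained behavior. The dimensions and scaling follow the generic LoRA formulation, not Codestral Mamba's actual architecture.

```python
import numpy as np

d_out, d_in, r, alpha = 64, 128, 8, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus low-rank adapter path, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapter contributes nothing before training.
assert np.allclose(lora_forward(x), W @ x)

# After training, the adapter can be merged into W for zero-overhead inference.
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))
```

Only A and B (r·(d_in + d_out) parameters) are trained, which is what makes domain-specific adaptation of a large model cheap.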
30 pages, 3405 KB  
Article
Decentralized Federated Learning for IoT Malware Detection at the Multi-Access Edge: A Two-Tier, Privacy-Preserving Design
by Mohammed Asiri, Maher A. Khemakhem, Reemah M. Alhebshi, Bassma S. Alsulami and Fathy E. Eassa
Future Internet 2025, 17(10), 475; https://doi.org/10.3390/fi17100475 - 17 Oct 2025
Abstract
Botnet attacks on Internet of Things (IoT) devices are escalating at the 5G/6G multi-access edge, yet most federated learning frameworks for IoT malware detection (FL-IMD) still hinge on a central aggregator, enlarging the attack surface, weakening privacy, and creating a single point of failure. We propose a two-tier, fully decentralized FL architecture aligned with MEC’s Proximal Edge Server (PES)/Supplementary Edge Server (SES) hierarchy. PES nodes train locally and encrypt updates with the Cheon–Kim–Kim–Song (CKKS) scheme; SES nodes verify ECDSA-signed provenance, homomorphically aggregate ciphertexts, and finalize each round via an Algorand-style committee that writes a compact, tamper-evident record (update digests/URIs and a global-model hash) to an append-only ledger. Using the N-BaIoT benchmark with an unsupervised autoencoder, we evaluate known-device and leave-one-device-out regimes against a classical centralized baseline and a cryptographically hardened but server-centric variant. With the heavier CKKS profile, attack sensitivity is preserved (TPR 0.99), and specificity (TNR) declines by only 0.20 percentage points relative to plaintext in both regimes; a lighter profile maintains TPR while trading 3.5–4.8 percentage points of TNR for about 71% smaller payloads. Decentralization adds only a negligible per-round overhead for committee finality, while homomorphic aggregation dominates latency. Overall, our FL-IMD design removes the trusted aggregator and provides verifiable, ledger-backed provenance suitable for trustless MEC deployments. Full article
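The core property of the aggregation step above (the aggregator learns only the sum of client updates, never an individual update) can be illustrated without a homomorphic-encryption library. In this plaintext sketch, pairwise additive masking stands in for CKKS: each pair of clients shares a random mask that cancels in the aggregate. All values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 6
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Each unordered pair (i, j), i < j, agrees on a shared random mask.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    # Client i adds masks shared with higher-indexed peers and subtracts
    # masks shared with lower-indexed peers; individually this looks random.
    m = updates[i].copy()
    for (a, b), r in masks.items():
        if a == i:
            m += r
        if b == i:
            m -= r
    return m

# The aggregator sums masked updates; every mask appears once with each sign.
aggregate = sum(masked_update(i) for i in range(n_clients)) / n_clients
assert np.allclose(aggregate, np.mean(updates, axis=0))   # masks cancel exactly
```

CKKS achieves the same end (aggregation over hidden inputs) cryptographically and additionally tolerates dropouts differently; the sketch only conveys why the server never sees a single client's update in the clear.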
26 pages, 784 KB  
Article
Bi-Scale Mahalanobis Detection for Reactive Jamming in UAV OFDM Links
by Nassim Aich, Zakarya Oubrahim, Hachem Ait Talount and Ahmed Abbou
Future Internet 2025, 17(10), 474; https://doi.org/10.3390/fi17100474 - 17 Oct 2025
Abstract
Reactive jamming remains a critical threat to low-latency telemetry of Unmanned Aerial Vehicles (UAVs) using Orthogonal Frequency Division Multiplexing (OFDM). In this paper, a Bi-scale Mahalanobis approach is proposed to detect and classify reactive jamming attacks on UAVs; it jointly exploits window-level energy and the Sevcik fractal dimension and employs self-adapting thresholds to detect any drift in additive white Gaussian noise (AWGN), fading effects, or Radio Frequency (RF) gain. The simulations were conducted on 5000 frames of OFDM signals, which were distorted by Rayleigh fading, a ±10 kHz frequency drift, and log-normal power shadowing. The simulation results achieved a precision of 99.4%, a recall of 100%, an F1 score of 99.7%, an area under the receiver operating characteristic curve (AUC) of 0.9997, and a mean alarm latency of 80 μs. The method used reinforces jam resilience in low-power commercial UAVs, yet it needs no extra RF hardware and avoids heavy deep learning computation. Full article
(This article belongs to the Special Issue Intelligent IoT and Wireless Communication)
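The two features named in the abstract above, window-level energy and the Sevcik fractal dimension, can be combined into a Mahalanobis distance against jam-free reference statistics as sketched below. Window sizes, signal models, and thresholds are illustrative, not the paper's simulation setup.

```python
import numpy as np

def sevcik_fd(y):
    # Sevcik fractal dimension: normalize the waveform into the unit square,
    # measure the curve length L, then FD = 1 + ln(L) / ln(2(N - 1)).
    n = len(y)
    ys = (y - y.min()) / (y.max() - y.min() + 1e-12)
    xs = np.linspace(0.0, 1.0, n)
    L = np.sum(np.hypot(np.diff(xs), np.diff(ys)))
    return 1.0 + np.log(L) / np.log(2 * (n - 1))

def features(window):
    # Bi-scale feature vector: mean energy and fractal dimension.
    return np.array([np.mean(window ** 2), sevcik_fd(window)])

rng = np.random.default_rng(0)
clean = np.array([features(rng.normal(size=256)) for _ in range(200)])  # jam-free
mu, cov_inv = clean.mean(axis=0), np.linalg.inv(np.cov(clean.T))

def mahalanobis(f):
    d = f - mu
    return float(np.sqrt(d @ cov_inv @ d))

# A reactive jammer inflates window energy far beyond the reference statistics.
jammed = features(5.0 * rng.normal(size=256))
print(mahalanobis(jammed) > mahalanobis(features(rng.normal(size=256))))
```

In the paper the detection thresholds are self-adapting so the reference statistics track drift in noise floor, fading, and RF gain; here they are estimated once from a fixed batch of clean windows.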
23 pages, 2648 KB  
Article
QL-AODV: Q-Learning-Enhanced Multi-Path Routing Protocol for 6G-Enabled Autonomous Aerial Vehicle Networks
by Abdelhamied A. Ateya, Nguyen Duc Tu, Ammar Muthanna, Andrey Koucheryavy, Dmitry Kozyrev and János Sztrik
Future Internet 2025, 17(10), 473; https://doi.org/10.3390/fi17100473 - 16 Oct 2025
Abstract
With the arrival of sixth-generation (6G) wireless systems comes radical potential for the deployment of autonomous aerial vehicle (AAV) swarms in mission-critical applications, ranging from disaster rescue to intelligent transportation. However, 6G-supporting AAV environments present challenges such as dynamic three-dimensional topologies, highly restrictive energy constraints, and extremely low latency demands, which substantially degrade the efficiency of conventional routing protocols. To this end, this work presents a Q-learning-enhanced ad hoc on-demand distance vector (QL-AODV) protocol. This intelligent routing protocol uses reinforcement learning within the AODV protocol to support adaptive, data-driven route selection in highly dynamic aerial networks. QL-AODV offers four novelties: a multipath route set collection methodology that retains up to ten candidate routes per destination using an extended route reply (RREP) waiting mechanism; a more detailed RREP message format carrying cumulative node buffer usage, enabling informed decision-making; a normalized 3D state space model recording hop count, average buffer occupancy, and peak buffer saturation, tailored to aerial network dynamics; and a lightweight distributed Q-learning approach at the source node that uses an ε-greedy policy to balance exploration and exploitation. Large-scale simulations conducted with NS-3.34 across various node densities and mobility conditions confirm the superior performance of QL-AODV over conventional AODV. In high-mobility environments, QL-AODV offers up to a 9.8% improvement in packet delivery ratio and up to a 12.1% increase in throughput, while remaining consistently scalable across network sizes. The results show that QL-AODV is a reliable, scalable, and intelligent routing method for next-generation AAV networks operating in the demanding environments expected of 6G. Full article
(This article belongs to the Special Issue Moving Towards 6G Wireless Technologies—2nd Edition)
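The source-side ε-greedy route selection described above can be reduced to a toy sketch: each candidate route carries a Q-value updated from observed delivery reward, and the policy mostly exploits the best known route while occasionally exploring. The route set, reward model, and parameters are invented stand-ins for the paper's NS-3 simulation.

```python
import random

random.seed(0)
routes = ["r0", "r1", "r2"]               # candidate routes from RREP collection
Q = {r: 0.0 for r in routes}
true_quality = {"r0": 0.4, "r1": 0.9, "r2": 0.6}   # hidden delivery probability
alpha, epsilon = 0.1, 0.1

def select_route():
    if random.random() < epsilon:          # explore a random candidate
        return random.choice(routes)
    return max(Q, key=Q.get)               # exploit the best known route

for _ in range(2000):
    r = select_route()
    reward = 1.0 if random.random() < true_quality[r] else 0.0
    Q[r] += alpha * (reward - Q[r])        # exponential moving average of reward

print(sorted(Q.items()))
```

In QL-AODV the state additionally encodes hop count and buffer-occupancy features carried in the extended RREP; the sketch keeps only the exploration/exploitation mechanics.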
25 pages, 366 KB  
Article
Security Analysis and Designing Advanced Two-Party Lattice-Based Authenticated Key Establishment and Key Transport Protocols for Mobile Communication
by Mani Rajendran, Dharminder Chaudhary, S. A. Lakshmanan and Cheng-Chi Lee
Future Internet 2025, 17(10), 472; https://doi.org/10.3390/fi17100472 - 16 Oct 2025
Abstract
In this paper, we propose two-party authenticated key establishment (AKE) and authenticated key transport protocols based on lattice-based cryptography, aiming to provide security against quantum attacks for secure communication. The AKE protocol enables two parties, who may share long-term public keys, to securely establish a shared session key while achieving mutual authentication; the transport protocol delivers the session key from the server. Our construction leverages the hardness of the Ring Learning With Errors (Ring-LWE) lattice problem, ensuring robustness against quantum and classical adversaries. Unlike traditional schemes whose security depends upon number-theoretic assumptions that are vulnerable to quantum attacks, our protocols remain secure in the post-quantum era. The proposed AKE protocol ensures forward secrecy and provides security even if the long-term key is compromised; it also provides the essential property of key freshness and resistance against man-in-the-middle, impersonation, replay, and key mismatch attacks. The proposed key transport protocol likewise provides key freshness, anonymity, and resistance against man-in-the-middle, impersonation, replay, and key mismatch attacks. A two-party key transport protocol is a cryptographic protocol in which one party (typically a trusted key distribution center or sender) securely generates and sends a session key to another party. Unlike key exchange protocols (where both parties contribute to key generation), key transport protocols rely on one party to generate the key and deliver it securely. Both protocols use a minimal number of exchanged messages and reduce the number of communication rounds, helping to minimize communication overhead. Full article
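The Ring-LWE assumption underlying the protocols above works in the polynomial ring R_q = Z_q[x]/(xⁿ + 1): given a public polynomial a and a sample b = a·s + e with small secret s and small error e, recovering s is assumed hard. The toy sketch below only demonstrates the ring arithmetic with deliberately tiny, insecure parameters; real schemes use much larger n and q and careful error sampling.

```python
import numpy as np

n, q = 8, 257   # toy parameters, far too small for security

def poly_mul(a, b):
    # Schoolbook multiply, then reduce modulo x^n + 1 (using x^n = -1) and mod q.
    full = np.convolve(a, b)
    res = full[:n].copy()
    res[: len(full) - n] -= full[n:]   # fold the high coefficients back with sign flip
    return res % q

rng = np.random.default_rng(0)
a = rng.integers(0, q, n)              # uniform public polynomial
s = rng.integers(-1, 2, n)             # small secret in {-1, 0, 1}
e = rng.integers(-1, 2, n)             # small error in {-1, 0, 1}
b = (poly_mul(a, s) + e) % q           # Ring-LWE sample: (a, b = a*s + e)

# Without s, b looks pseudorandom; knowing s, the small error is recoverable.
assert np.array_equal((b - poly_mul(a, s)) % q, e % q)
```

The "smallness" of s and e is what lets honest parties cancel the error during key agreement while an adversary faces the full hardness of the lattice problem.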
25 pages, 8678 KB  
Article
Explainable AI-Based Semantic Retrieval from an Expert-Curated Oncology Knowledge Graph for Clinical Decision Support
by Sameer Mushtaq, Marcello Trovati and Nik Bessis
Future Internet 2025, 17(10), 471; https://doi.org/10.3390/fi17100471 - 16 Oct 2025
Abstract
The modern oncology landscape is characterised by a deluge of high-dimensional data from genomic sequencing, medical imaging, and electronic health records, negatively impacting the analytical capacity of clinicians and health practitioners. This field is not new and it has drawn significant attention from the research community. However, one of the main limiting issues is the data itself. Despite the vast amount of available data, most of it lacks scalability, quality, and semantic information. This work is motivated by the data platform provided by OncoProAI, an AI-driven clinical decision support platform designed to address this challenge by enabling highly personalised, precision cancer care. The platform is built on a comprehensive knowledge graph, formally modelled as a directed acyclic graph, which has been manually populated, assessed and maintained to provide a unique data ecosystem. This enables targeted and bespoke information extraction and assessment. Full article
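Retrieval over a directed acyclic knowledge graph like the one described above can be sketched as reachability from a query node along typed edges. The node and relation names below are invented for illustration and are not taken from the OncoProAI graph.

```python
# Adjacency list: node -> list of (relation, target) edges. Acyclic by design.
graph = {
    "breast_cancer": [("has_subtype", "her2_positive"), ("has_biomarker", "brca1")],
    "her2_positive": [("first_line", "trastuzumab")],
    "brca1": [("suggests", "parp_inhibitor")],
    "trastuzumab": [],
    "parp_inhibitor": [],
}

def retrieve(start):
    # Depth-first traversal collecting every (subject, relation, object) fact
    # reachable from the query node; the DAG property guarantees termination,
    # and the visited set avoids re-expanding shared descendants.
    seen, stack, facts = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        for relation, target in graph.get(node, []):
            facts.append((node, relation, target))
            stack.append(target)
    return facts

for fact in retrieve("breast_cancer"):
    print(fact)
```

An explainable decision-support answer can then cite exactly the chain of edges it traversed, which is the advantage of an expert-curated graph over opaque retrieval.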
23 pages, 1965 KB  
Article
Multifractality and Its Sources in the Digital Currency Market
by Stanisław Drożdż, Robert Kluszczyński, Jarosław Kwapień and Marcin Wątorek
Future Internet 2025, 17(10), 470; https://doi.org/10.3390/fi17100470 - 13 Oct 2025
Abstract
Multifractality in time series analysis characterizes the presence of multiple scaling exponents, indicating heterogeneous temporal structures and complex dynamical behaviors beyond simple monofractal models. In the context of digital currency markets, multifractal properties arise due to the interplay of long-range temporal correlations and heavy-tailed distributions of returns, reflecting intricate market microstructure and trader interactions. Incorporating multifractal analysis into the modeling of cryptocurrency price dynamics enhances the understanding of market inefficiencies. It may also improve volatility forecasting and facilitate the detection of critical transitions or regime shifts. Based on multifractal cross-correlation analysis (MFCCA), whose special case, multifractal detrended fluctuation analysis (MFDFA), is the most commonly used practical tool for quantifying multifractality, we applied a recently proposed method of disentangling sources of multifractality in time series to the most representative instruments from the digital market. They include Bitcoin (BTC), Ethereum (ETH), decentralized exchanges (DEX) and non-fungible tokens (NFT). The results indicate the significant role of heavy tails in generating a broad multifractal spectrum. However, they also clearly demonstrate that the primary source of multifractality lies in the temporal correlations in the series, and without them, multifractality fades out. It appears characteristic that these temporal correlations, to a large extent, do not depend on the thickness of the tails of the fluctuation distribution. These observations, made here in the context of the digital currency market, provide a further strong argument for the validity of the proposed methodology of disentangling sources of multifractality in time series. Full article
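The MFDFA procedure named above follows a standard recipe: build the profile (cumulative sum of the demeaned series), detrend fixed-size segments with a polynomial fit, and form the q-th order fluctuation function F_q(s). The compact sketch below implements that recipe for a single scale s; all parameters are illustrative.

```python
import numpy as np

def mfdfa_fq(x, s, q, order=1):
    # Step 1: profile of the demeaned series.
    Y = np.cumsum(x - np.mean(x))
    # Step 2: segment-wise polynomial detrending and squared fluctuations.
    n_seg = len(Y) // s
    t = np.arange(s)
    F2 = np.empty(n_seg)
    for v in range(n_seg):
        seg = Y[v * s:(v + 1) * s]
        trend = np.polyval(np.polyfit(t, seg, order), t)
        F2[v] = np.mean((seg - trend) ** 2)
    # Step 3: q-th order average; q = 0 is the logarithmic limit case.
    if q == 0:
        return float(np.exp(0.5 * np.mean(np.log(F2))))
    return float(np.mean(F2 ** (q / 2)) ** (1 / q))

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                  # uncorrelated (monofractal) noise
for q in (-2, 0, 2):
    # For white noise, F_q(s) depends only weakly on q.
    print(q, round(mfdfa_fq(x, s=64, q=q), 3))
```

Multifractality shows up when the scaling exponent h(q), fitted from log F_q(s) versus log s over a range of scales, varies with q; negative q weights the small fluctuations and positive q the large ones, which is how heavy tails and temporal correlations leave distinct signatures.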
26 pages, 930 KB  
Article
Modular Microservices Architecture for Generative Music Integration in Digital Audio Workstations via VST Plugin
by Adriano N. Raposo and Vasco N. G. J. Soares
Future Internet 2025, 17(10), 469; https://doi.org/10.3390/fi17100469 - 12 Oct 2025
Abstract
This paper presents the design and implementation of a modular cloud-based architecture that enables generative music capabilities in Digital Audio Workstations through a MIDI microservices backend and a user-friendly VST plugin frontend. The system comprises a generative harmony engine deployed as a standalone service, a microservice layer that orchestrates communication and exposes an API, and a VST plugin that interacts with the backend to retrieve harmonic sequences and MIDI data. Among the microservices is a dedicated component that converts textual chord sequences into MIDI files. The VST plugin allows the user to drag and drop the generated chord progressions directly into a DAW’s MIDI track timeline. This architecture prioritizes modularity, cloud scalability, and seamless integration into existing music production workflows, while abstracting away technical complexity from end users. The proposed system demonstrates how microservice-based design and cross-platform plugin development can be effectively combined to support generative music workflows, offering both researchers and practitioners a replicable and extensible framework. Full article
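The dedicated microservice that converts textual chord sequences into MIDI data can be sketched as a pure mapping from chord symbols to MIDI note numbers. The chord vocabulary and voicings below are invented for illustration; the paper's service emits complete MIDI files for drag-and-drop into a DAW track.

```python
# Root pitches around middle C (MIDI note 60) and a few chord qualities
# expressed as semitone offsets from the root.
NOTE_BASE = {"C": 60, "D": 62, "E": 64, "F": 65, "G": 67, "A": 69, "B": 71}
QUALITY = {"": (0, 4, 7), "m": (0, 3, 7), "7": (0, 4, 7, 10), "maj7": (0, 4, 7, 11)}

def chord_to_midi(symbol):
    # Split a symbol like "Am" into root "A" and quality "m",
    # then build the chord as absolute MIDI note numbers.
    root, quality = symbol[0], symbol[1:]
    base = NOTE_BASE[root]
    return [base + offset for offset in QUALITY[quality]]

# A progression as it might arrive from the generative harmony engine.
progression = ["C", "Am", "F", "G7"]
print([chord_to_midi(c) for c in progression])
```

Keeping the conversion stateless is what makes it a natural microservice: the harmony engine, the converter, and the VST plugin frontend only exchange plain data over the API.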
23 pages, 3251 KB  
Article
Intelligent Control Approaches for Warehouse Performance Optimisation in Industry 4.0 Using Machine Learning
by Ádám Francuz and Tamás Bányai
Future Internet 2025, 17(10), 468; https://doi.org/10.3390/fi17100468 - 11 Oct 2025
Abstract
In conventional logistics optimization problems, an objective function describes the relationship between parameters. However, in many industrial practices, such a relationship is unknown, and only observational data is available. The objective of the research is to use machine learning-based regression models to uncover patterns in the warehousing dataset and use them to generate an accurate objective function. The models are not only suitable for prediction, but also for interpreting the effect of input variables. This data-driven approach is consistent with the automated, intelligent systems of Industry 4.0, while Industry 5.0 provides opportunities for sustainable, flexible, and collaborative development. In this research, machine learning (ML) models were tested on a fictional dataset using Automated Machine Learning (AutoML), through which Light Gradient Boosting Machine (LightGBM) was selected as the best method (R2 = 0.994). Feature Importance and Partial Dependence Plots revealed the key factors influencing storage performance and their functional relationships. Defining performance as a cost indicator allowed us to interpret optimization as cost minimization, demonstrating that ML-based methods can uncover hidden patterns and support efficiency improvements in warehousing. The proposed approach not only achieves outstanding predictive accuracy, but also transforms model outputs into actionable, interpretable insights for warehouse optimization. By combining automation, interpretability, and optimization, this research advances the practical realization of intelligent warehouse systems in the era of Industry 4.0. Full article
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)
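The partial dependence analysis mentioned above has a simple mechanical core: clamp one feature to each value on a grid for every row, average the model's predictions, and plot the averages. The sketch below computes one-dimensional partial dependence against a hand-written stand-in for the trained LightGBM model; the "warehouse" features and coefficients are invented.

```python
import numpy as np

def predict(X):
    # Hypothetical warehouse-performance (cost) model: cost rises with
    # order volume (feature 0) and falls with automation level (feature 1).
    return 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1]

def partial_dependence(X, j, grid):
    # For each grid value g, clamp feature j to g in every row and average
    # the predictions; the result shows feature j's marginal effect.
    pd_vals = []
    for g in grid:
        Xg = X.copy()
        Xg[:, j] = g
        pd_vals.append(predict(Xg).mean())
    return np.array(pd_vals)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
grid = np.linspace(0, 1, 5)
pd0 = partial_dependence(X, 0, grid)
# A monotone increase confirms feature 0's positive effect on predicted cost.
assert np.all(np.diff(pd0) > 0)
```

Interpreting cost as the quantity to minimize, the minimum of such a curve directly suggests where to move a controllable input, which is how the paper turns model outputs into actionable recommendations.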
37 pages, 5895 KB  
Article
Beyond Accuracy: Benchmarking Machine Learning Models for Efficient and Sustainable SaaS Decision Support
by Efthimia Mavridou, Eleni Vrochidou, Michail Selvesakis and George A. Papakostas
Future Internet 2025, 17(10), 467; https://doi.org/10.3390/fi17100467 - 11 Oct 2025
Abstract
Machine learning (ML) methods have been successfully employed to support decision-making for Software as a Service (SaaS) providers. While most of the published research primarily emphasizes prediction accuracy, other important aspects, such as cloud deployment efficiency and environmental impact, have received comparatively less attention. It is also critical to account for factors such as training time, prediction time, and carbon footprint in production. SaaS decision support systems use the output of ML models to provide actionable recommendations, such as running reactivation campaigns for users who are likely to churn. To this end, in this paper, we present a benchmarking comparison of 17 different ML models for churn prediction in SaaS, which includes cloud deployment efficiency metrics (e.g., latency, prediction time), sustainability metrics (e.g., CO2 emissions, consumed energy), and predictive performance metrics (e.g., AUC, Log Loss). Two public datasets are employed, experiments are repeated on four different machines, locally and on the cloud, and a new Green Efficiency Weighted Score (GEWS) is introduced as a step towards choosing the simpler, greener, and more efficient ML model. Experimental results indicated XGBoost and LightGBM as the models capable of offering a good balance of predictive performance, fast training and inference times, and limited emissions, while the importance of region selection in minimizing the carbon footprint of ML models was confirmed. Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
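A weighted efficiency score in the spirit of the GEWS metric introduced above can be sketched as: normalize each metric to [0, 1] across the candidate models (inverting "lower is better" metrics such as training time and CO2), then combine with weights. The weights, metrics, and example values below are invented; the paper defines the actual GEWS formulation.

```python
models = {
    "xgboost":  {"auc": 0.91, "train_s": 12.0, "co2_g": 0.8},
    "lightgbm": {"auc": 0.90, "train_s": 8.0,  "co2_g": 0.5},
    "mlp":      {"auc": 0.88, "train_s": 95.0, "co2_g": 6.2},
}
weights = {"auc": 0.5, "train_s": 0.25, "co2_g": 0.25}
lower_is_better = {"train_s", "co2_g"}

def score(name):
    # Min-max normalize each metric across models, flip cost-like metrics,
    # and take the weighted sum.
    total = 0.0
    for metric, w in weights.items():
        vals = [m[metric] for m in models.values()]
        lo, hi = min(vals), max(vals)
        norm = (models[name][metric] - lo) / (hi - lo)
        if metric in lower_is_better:
            norm = 1.0 - norm
        total += w * norm
    return round(total, 3)

ranking = sorted(models, key=score, reverse=True)
print([(name, score(name)) for name in ranking])
```

With these invented numbers the slower, more energy-hungry model ranks last even though its accuracy gap is small, which is exactly the trade-off such a composite score is meant to surface.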
45 pages, 4909 KB  
Review
Building Trust in Autonomous Aerial Systems: A Review of Hardware-Rooted Trust Mechanisms
by Sagir Muhammad Ahmad, Mohammad Samie and Barmak Honarvar Shakibaei Asli
Future Internet 2025, 17(10), 466; https://doi.org/10.3390/fi17100466 - 10 Oct 2025
Abstract
Unmanned aerial vehicles (UAVs) are redefining both civilian and defense operations, with swarm-based architectures unlocking unprecedented scalability and autonomy. However, these advancements introduce critical security challenges, particularly in location verification and authentication. This review provides a comprehensive synthesis of hardware security primitives (HSPs)—including Physical Unclonable Functions (PUFs), Trusted Platform Modules (TPMs), and blockchain-integrated frameworks—as foundational enablers of trust in UAV ecosystems. We systematically analyze communication architectures, cybersecurity vulnerabilities, and deployment constraints, followed by a comparative evaluation of HSP-based techniques in terms of energy efficiency, scalability, and operational resilience. The review further identifies unresolved research gaps and highlights transformative trends such as AI-augmented environmental PUFs, post-quantum secure primitives, and RISC-V-based secure control systems. By bridging current limitations with emerging innovations, this work underscores the pivotal role of hardware-rooted security in shaping the next generation of autonomous aerial networks. Full article
(This article belongs to the Special Issue Security and Privacy Issues in the Internet of Cloud—2nd Edition)
28 pages, 3474 KB  
Article
OptoBrain: A Wireless Sensory Interface for Optogenetics
by Rodrigo de Albuquerque Pacheco Andrade, Helder Eiki Oshiro, Gabriel Augusto Ginja, Eduardo Colombari, Maria Celeste Dias, José A. Afonso and João Paulo Pereira do Carmo
Future Internet 2025, 17(10), 465; https://doi.org/10.3390/fi17100465 - 9 Oct 2025
Abstract
Optogenetics leverages light to control neural circuits, but traditional systems are often bulky and tethered, limiting their use. This work introduces OptoBrain, a novel, portable wireless system for optogenetics designed to overcome these challenges. The system integrates modules for multichannel data acquisition, smart neurostimulation, and continuous processing, with a focus on low-power and low-voltage operation. OptoBrain features up to eight neuronal acquisition channels with a low input-referred noise (e.g., 0.99 µVRMS at 250 sps with 1 V/V gain), and reliably streams data via a Bluetooth 5.0 link at a measured throughput of up to 400 kbps. Experimental results demonstrate robust performance, highlighting its potential as a simple, practical, and low-cost solution for emerging optogenetics research centers and enabling new avenues in neuroscience. Full article
17 pages, 1076 KB  
Article
Adaptive Cyber Defense Through Hybrid Learning: From Specialization to Generalization
by Muhammad Omer Farooq
Future Internet 2025, 17(10), 464; https://doi.org/10.3390/fi17100464 - 9 Oct 2025
Abstract
This paper introduces a hybrid learning framework that synergistically combines Reinforcement Learning (RL) and Supervised Learning (SL) to train autonomous cyber-defense agents capable of operating effectively in dynamic and adversarial environments. The proposed approach leverages RL for strategic exploration and policy development, while incorporating SL to distill high-reward trajectories into refined policy updates, enhancing sample efficiency, learning stability, and robustness. The framework first targets specialized agent training, where each agent is optimized against a specific adversarial behavior. Subsequently, it is extended to enable the training of a generalized agent that learns to counter multiple, diverse attack strategies through multi-task and curriculum learning techniques. Comprehensive experiments conducted in the CybORG simulation environment demonstrate that the hybrid RL–SL framework consistently outperforms pure RL baselines across both specialized and generalized settings, achieving higher cumulative rewards. Specifically, hybrid-trained agents achieve up to 23% higher cumulative rewards in specialized defense tasks and approximately 18% improvements in generalized defense scenarios compared to RL-only agents. Moreover, incorporating temporal context into the observation space yields a further 4–6% performance gain in policy robustness. Furthermore, we investigate the impact of augmenting the observation space with historical actions and rewards, revealing consistent, albeit incremental, gains in SL-based learning performance. Key contributions of this work include: (i) a novel hybrid learning paradigm that integrates RL and SL for effective cyber-defense policy learning, (ii) a scalable extension for training generalized agents across heterogeneous threat models, and (iii) empirical analysis on the role of temporal context in agent observability and decision-making. 
Collectively, the results highlight the promise of hybrid learning strategies for building intelligent, resilient, and adaptable cyber-defense systems in evolving threat landscapes. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
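The SL distillation step described above — keeping only high-reward trajectories and fitting the policy to them — can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation; the episode data and the `top_frac` parameter are invented for the example.

```python
def distill_high_reward(trajectories, top_frac=0.2):
    """Keep the highest-return episodes as a behavior-cloning dataset.

    trajectories: list of (states, actions, episode_return) tuples.
    The SL phase would then fit the policy to these (state, action)
    pairs, refining what RL exploration discovered.
    """
    ranked = sorted(trajectories, key=lambda t: t[2], reverse=True)
    keep = max(1, int(len(ranked) * top_frac))
    dataset = []
    for states, actions, _ in ranked[:keep]:
        dataset.extend(zip(states, actions))
    return dataset

# Toy episodes: (observed states, chosen actions, cumulative reward).
episodes = [([0, 1], ["a", "b"], 1.0),
            ([0, 2], ["a", "c"], 5.0),
            ([1, 2], ["b", "c"], 3.0),
            ([2, 0], ["c", "a"], 0.5),
            ([1, 0], ["b", "a"], 4.0)]
dataset = distill_high_reward(episodes, top_frac=0.4)  # two best episodes
```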

19 pages, 1318 KB  
Article
Quantifying Website Privacy Posture Through Technical and Policy-Based Assessment
by Ioannis Fragkiadakis, Stefanos Gritzalis and Costas Lambrinoudakis
Future Internet 2025, 17(10), 463; https://doi.org/10.3390/fi17100463 - 9 Oct 2025
Viewed by 257
Abstract
With the rapid growth of digital interactions, safeguarding user privacy on websites has become a critical concern. This paper introduces a comprehensive framework that integrates both technical and policy-based factors to assess a website’s level of privacy protection. The framework employs a scoring system that evaluates key technical elements, such as HTTP security headers, email authentication protocols (SPF, DKIM, DMARC), SSL/TLS certificate usage, domain reputation, DNSSEC, and cookie practices. In parallel, it examines the clarity and GDPR compliance of privacy policies. The resulting score reflects not only the technical strength of a website’s defenses but also the transparency with which data processing practices are communicated to users. To demonstrate its effectiveness, the framework was applied to two similarly sized private hospitals, generating comparative privacy scores under a unified metric. The results confirm the framework’s value in producing measurable insights that enable cross-organizational privacy benchmarking. By combining policy evaluation with technical analysis, this work addresses a significant gap in existing research and offers a reproducible, extensible methodology for assessing website privacy posture from a visitor’s perspective. Full article
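A weighted scoring scheme of the kind described can be sketched as below. The weights and per-factor results are illustrative assumptions, not the paper's actual rubric; the point is how technical checks and the privacy-policy assessment combine into a single comparable score.

```python
# Hypothetical weights over the factors named in the abstract; the
# paper's actual scoring system may weight them differently.
WEIGHTS = {
    "security_headers": 0.20,    # HSTS, CSP, X-Frame-Options, ...
    "email_auth": 0.15,          # SPF, DKIM, DMARC
    "tls_certificate": 0.15,
    "domain_reputation": 0.10,
    "dnssec": 0.10,
    "cookie_practices": 0.10,
    "privacy_policy_gdpr": 0.20, # clarity and GDPR compliance
}

def privacy_score(checks: dict) -> float:
    """Combine per-factor results (each in 0.0-1.0) into a 0-100 score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return round(100 * sum(WEIGHTS[k] * checks.get(k, 0.0) for k in WEIGHTS), 1)

# Example assessment of one site (invented values).
site_a = {"security_headers": 1.0, "email_auth": 0.5, "tls_certificate": 1.0,
          "domain_reputation": 1.0, "dnssec": 0.0, "cookie_practices": 0.5,
          "privacy_policy_gdpr": 0.8}
score = privacy_score(site_a)
```

Two sites scored under the same weights yield directly comparable numbers, which is what enables the cross-organizational benchmarking the paper describes.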

14 pages, 427 KB  
Article
Performance Modeling of Cloud Systems by an Infinite-Server Queue Operating in Rarely Changing Random Environment
by Svetlana Moiseeva, Evgeny Polin, Alexander Moiseev and Janos Sztrik
Future Internet 2025, 17(10), 462; https://doi.org/10.3390/fi17100462 - 8 Oct 2025
Viewed by 261
Abstract
This paper considers a heterogeneous queuing system with an unlimited number of servers, where the parameters are determined by a random environment. A distinctive feature is that the parameters of the exponential distribution of the request processing time do not change their values until the end of service. Thus, the devices in the system under consideration are heterogeneous. For the study, a method of asymptotic analysis is proposed under the condition of extremely rare changes in the states of the random environment. We consider the following problem. A cloud node accepts requests of one type that have a similar intensity of arrival and duration of processing. Sometimes an input scheduler switches to accept requests of another type with a different arrival intensity and processing duration. We model the system as an infinite-server queue in a random environment, which influences the arrival intensity and service time of new requests. The random environment is modeled by a Markov chain with a finite number of states. Arrivals follow a Poisson process whose intensity depends on the state of the random environment. Service times are exponentially distributed with rates that depend on the state of the random environment at the moment the request arrived. When the environment changes its state, requests already in the system do not change their service times, so requests of different types (serviced at different rates) are present in the system at the same time. The asymptotic condition of a rarely changing random environment (the entries of the generator of the corresponding Markov chain tend to zero) is used. A multi-dimensional joint steady-state probability distribution of the number of requests of different types present in the system is obtained.
Several numerical examples illustrate the comparisons of asymptotic results to simulations. Full article
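The rare-switching limit has a concrete numerical reading: conditional on the environment being in state k, the number of in-service requests behaves approximately as a Poisson variable with mean λ_k/μ_k, mixed over the environment's stationary distribution. The sketch below, with invented rates and generator (not the paper's examples), computes that mixture.

```python
# Hedged sketch: in the rare-switching limit, the number of type-k
# requests in an M/M/inf system is approximately Poisson with mean
# rho_k = lambda_k / mu_k, mixed over the environment's stationary
# distribution pi (pi @ Q = 0). Rates and Q are illustrative.
import numpy as np

Q = np.array([[-0.01, 0.01],       # generator entries tend to zero
              [0.02, -0.02]])      # (rarely changing environment)
lam = np.array([10.0, 4.0])        # arrival rates per environment state
mu = np.array([2.0, 1.0])          # service rates fixed at arrival time

# Stationary distribution of the environment: pi @ Q = 0, sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.concatenate([np.zeros(len(Q)), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

rho = lam / mu                      # conditional Poisson means per type
mean_in_system = pi @ rho           # long-run mean number of requests
```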

16 pages, 4740 KB  
Article
Measuring Inter-Bias Effects and Fairness-Accuracy Trade-Offs in GNN-Based Recommender Systems
by Nikzad Chizari, Keywan Tajfar and María N. Moreno-García
Future Internet 2025, 17(10), 461; https://doi.org/10.3390/fi17100461 - 8 Oct 2025
Viewed by 337
Abstract
Bias in artificial intelligence is a critical issue because these technologies increasingly influence decision-making in a wide range of areas. The recommender system field is one of them, where biases can lead to unfair or skewed outcomes. The origin usually lies in data biases coming from historical inequalities or irregular sampling. Recommendation algorithms using such data contribute to a greater or lesser extent to amplify and perpetuate those imbalances. On the other hand, different types of biases can be found in the outputs of recommender systems, and they can be evaluated by a variety of metrics specific to each of them. However, biases should not be treated independently, as they are interrelated and can potentiate or mask each other. Properly assessing the biases is crucial for ensuring fair and equitable recommendations. This work focuses on analyzing the interrelationship between different types of biases and proposes metrics designed to jointly evaluate multiple interrelated biases, with particular emphasis on those biases that tend to mask or obscure discriminatory treatment against minority or protected demographic groups, evaluated in terms of disparities in recommendation quality outcomes. This approach enables a more comprehensive assessment of algorithmic performance in terms of both fairness and predictive accuracy. Special attention is given to Graph Neural Network-based recommender systems, due to their strong performance in this application domain. Full article
(This article belongs to the Special Issue Deep Learning in Recommender Systems)
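The disparity-in-quality idea can be made concrete with a small sketch: compute a per-group mean of a recommendation-quality metric and the gap between groups, so that high aggregate accuracy cannot mask poor service to a protected group. The metric values and group labels below are invented, and the paper's joint metrics are richer than this single gap.

```python
def group_quality_gap(per_user_quality, group_of):
    """Disparity in recommendation quality between demographic groups.

    per_user_quality: {user_id: quality value, e.g. NDCG@10}
    group_of: {user_id: group label}
    Returns per-group means and the max-min gap; a small gap alongside
    high overall quality suggests accuracy is not masking unfairness.
    """
    sums, counts = {}, {}
    for u, q in per_user_quality.items():
        g = group_of[u]
        sums[g] = sums.get(g, 0.0) + q
        counts[g] = counts.get(g, 0) + 1
    means = {g: sums[g] / counts[g] for g in sums}
    gap = max(means.values()) - min(means.values())
    return means, gap

# Invented example values.
quality = {"u1": 0.9, "u2": 0.8, "u3": 0.4, "u4": 0.6}
groups = {"u1": "majority", "u2": "majority",
          "u3": "protected", "u4": "protected"}
means, gap = group_quality_gap(quality, groups)
```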

24 pages, 1582 KB  
Article
Future Internet Applications in Healthcare: Big Data-Driven Fraud Detection with Machine Learning
by Konstantinos P. Fourkiotis and Athanasios Tsadiras
Future Internet 2025, 17(10), 460; https://doi.org/10.3390/fi17100460 - 8 Oct 2025
Viewed by 377
Abstract
Hospital fraud detection has often relied on periodic audits that miss evolving, internet-mediated patterns in electronic claims. We develop an artificial intelligence and machine learning pipeline that is leakage-safe, imbalance-aware, and aligned with operational capacity for large healthcare datasets. The preprocessing stack integrates four tables, engineers 13 features, and applies imputation, categorical encoding, power transformation, Boruta feature selection, and denoising autoencoder representations, with class balancing via SMOTE-ENN evaluated inside cross-validation folds. Eight algorithms are compared under a fraud-oriented composite productivity index that weighs recall, precision, MCC, F1, ROC-AUC, and G-Mean, with per-fold threshold calibration and explicit reporting of Type I and Type II errors. A multilayer perceptron attains the highest composite index, while CatBoost offers the strongest control of false positives with high accuracy. SMOTE-ENN provides limited gains once representations regularize class geometry. The calibrated scores support prepayment triage, postpayment audit, and provider-level profiling, linking alert volume to expected recovery and protecting investigator workload. Situated in the Future Internet context, this work targets internet-mediated claim flows and web-accessible provider registries. Governance procedures for drift monitoring, fairness assessment, and change control complete an internet-ready deployment path. The results indicate that disciplined preprocessing and evaluation, more than classifier choice alone, translate AI improvements into measurable economic value and sustainable fraud prevention in digital health ecosystems. Full article
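A composite index over the six metrics named in the abstract can be sketched as a weighted mean, with MCC rescaled from [-1, 1] to [0, 1] so all terms share a scale. The weights and metric values below are illustrative assumptions (recall is emphasized, as the fraud orientation suggests); the paper's exact weighting is not reproduced here.

```python
def composite_index(m: dict, weights=None) -> float:
    """Weighted mean of classification metrics; MCC rescaled to [0, 1].

    m: metric values, e.g. {"recall": ..., "precision": ..., "mcc": ...,
       "f1": ..., "roc_auc": ..., "g_mean": ...}.
    """
    weights = weights or {"recall": 0.25, "precision": 0.15, "mcc": 0.15,
                          "f1": 0.15, "roc_auc": 0.15, "g_mean": 0.15}
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    vals = dict(m)
    vals["mcc"] = (vals["mcc"] + 1) / 2   # MCC in [-1, 1] -> [0, 1]
    return sum(w * vals[k] for k, w in weights.items())

# Invented fold-level metrics for one model.
model = {"recall": 0.92, "precision": 0.70, "mcc": 0.64,
         "f1": 0.80, "roc_auc": 0.95, "g_mean": 0.88}
score = composite_index(model)
```

Ranking models by such an index, rather than by accuracy alone, is what lets the pipeline trade false positives against missed fraud in line with investigator capacity.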

22 pages, 2631 KB  
Article
Adversarial Robustness Evaluation for Multi-View Deep Learning Cybersecurity Anomaly Detection
by Min Li, Yuansong Qiao and Brian Lee
Future Internet 2025, 17(10), 459; https://doi.org/10.3390/fi17100459 - 8 Oct 2025
Viewed by 333
Abstract
In the evolving cyberthreat landscape, a critical challenge for intrusion detection systems (IDSs) lies in defending against meticulously crafted adversarial attacks. Traditional single-view detection frameworks, constrained by their reliance on limited and unidimensional feature representations, are often inadequate for identifying maliciously manipulated samples. To address these limitations, this study proposes a key hypothesis: a detection architecture that adopts a multi-view fusion strategy can significantly enhance the system’s resilience to attacks. To validate the proposed hypothesis, this study developed a multi-view fusion architecture and conducted a series of comparative experiments. A two-pronged validation framework was employed. First, we examined whether the multi-view fusion model demonstrates superior robustness compared to a single-view model in intrusion detection tasks, thereby providing empirical evidence for the effectiveness of multi-view strategies. Second, we evaluated the generalization capability of the multi-view model under varying levels of attack intensity and coverage, assessing its stability in complex adversarial scenarios. Methodologically, a dual-axis training assessment scheme was introduced, comprising (i) continuous gradient testing of perturbation intensity, with the ε parameter increasing from 0.01 to 0.2, and (ii) variation in attack density, with sample contamination rates ranging from 80% to 90%. Adversarial test samples were generated using the Fast Gradient Sign Method (FGSM) on the TON_IoT and UNSW-NB15 datasets. Furthermore, we propose a validation mechanism that integrates both performance and robustness testing. The model is evaluated on clean and adversarial test sets, respectively. By analyzing performance retention and adversarial robustness, we provide a comprehensive assessment of the stability of the multi-view model under varying evaluation conditions. 
The experimental results provide clear support for the research hypothesis: The multi-view fusion model is more robust than the single-view model under adversarial scenarios. Even under high-intensity attack scenarios, the multi-view model consistently demonstrates superior robustness and stability. More importantly, the multi-view model, through its architectural feature diversity, effectively resists targeted attacks to which the single-view model is vulnerable, confirming the critical role of feature space redundancy in enhancing adversarial robustness. Full article
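The FGSM perturbation used to generate the adversarial test samples has a compact closed form: each feature moves by ε in the sign of the loss gradient. The sketch below applies it to a toy logistic-regression detector (weights and input are invented), illustrating the ε-bounded L-infinity attack swept from 0.01 to 0.2 in the experiments.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression detector.

    Perturbs x by eps * sign(d loss / d x), the worst-case direction
    under an L-infinity budget of eps.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))        # predicted probability of class 1
    grad_x = (p - y) * w                # cross-entropy gradient w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy detector and a "malicious" flow with label y = 1 (invented values).
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.2])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.1)
```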

13 pages, 748 KB  
Article
Lattice-Based Identity Authentication Protocol with Enhanced Privacy and Scalability for Vehicular Ad Hoc Networks
by Kuo-Yu Tsai and Ying-Hsuan Yang
Future Internet 2025, 17(10), 458; https://doi.org/10.3390/fi17100458 - 7 Oct 2025
Viewed by 355
Abstract
Vehicular ad hoc networks (VANETs) demand authentication mechanisms that are both secure and privacy-preserving, particularly in light of emerging quantum-era threats. In this work, we propose a lattice-based identity authentication protocol that leverages pseudo-IDs to safeguard user privacy, while allowing the Trusted Authority (TA) to trace misbehaving vehicles when necessary. Compared with existing approaches, the proposed scheme strengthens accountability, improves scalability, and offers resistance against quantum attacks. A comprehensive complexity analysis is presented, addressing computational, communication, and storage overhead. Analysis results under practical parameter settings demonstrate that the protocol delivers robust security with manageable overhead, maintaining authentication latency within the real-time requirements of VANET applications. Full article

19 pages, 1327 KB  
Article
An IoT Architecture for Sustainable Urban Mobility: Towards Energy-Aware and Low-Emission Smart Cities
by Manuel J. C. S. Reis, Frederico Branco, Nishu Gupta and Carlos Serôdio
Future Internet 2025, 17(10), 457; https://doi.org/10.3390/fi17100457 - 4 Oct 2025
Viewed by 357
Abstract
The rapid growth of urban populations intensifies congestion, air pollution, and energy demand. Green mobility is central to sustainable smart cities, and the Internet of Things (IoT) offers a means to monitor, coordinate, and optimize transport systems in real time. This paper presents an Internet of Things (IoT)-based architecture integrating heterogeneous sensing with edge–cloud orchestration and AI-driven control for green routing and coordinated Electric Vehicle (EV) charging. The framework supports adaptive traffic management, energy-aware charging, and multimodal integration through standards-aware interfaces and auditable Key Performance Indicators (KPIs). We hypothesize that, relative to a static shortest-path baseline, the integrated green routing and EV-charging coordination reduce (H1) mean travel time per trip by ≥7%, (H2) CO2 intensity (g/km) by ≥6%, and (H3) station peak load by ≥20% under moderate-to-high demand conditions. These hypotheses are tested in Simulation of Urban MObility (SUMO) with Handbook Emission Factors for Road Transport (HBEFA) emission classes, using 10 independent random seeds and reporting means with 95% confidence intervals and formal significance testing. The results confirm the hypotheses: average travel time decreases by approximately 9.8%, CO2 intensity by approximately 8%, and peak load by approximately 25% under demand multipliers ≥1.2 and EV shares ≥20%. Gains are attenuated under light demand, where congestion effects are weaker. We further discuss scalability, interoperability, privacy/security, and the simulation-to-deployment gap, and outline priorities for reproducible field pilots. 
In summary, a pragmatic edge–cloud IoT stack has the potential to lower congestion, reduce per-kilometer emissions, and smooth charging demand, provided it is supported by reliable data integration, resilient edge services, and standards-compliant interoperability, thereby contributing to sustainable urban mobility in line with the objectives of SDG 11 (Sustainable Cities and Communities). Full article
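The reporting protocol — means with 95% confidence intervals over 10 seeds, plus percent improvement against the baseline — can be sketched as below. The travel-time samples are made-up illustrative numbers, not the paper's SUMO results; the t multiplier 2.262 is the two-sided 95% value for 9 degrees of freedom.

```python
import statistics

def mean_ci95(xs):
    """Mean and half-width of a 95% t-interval (t = 2.262 for n = 10)."""
    n = len(xs)
    m = statistics.mean(xs)
    half = 2.262 * statistics.stdev(xs) / n ** 0.5
    return m, half

# Invented per-seed mean travel times (seconds per trip), 10 seeds each.
baseline = [620, 635, 610, 640, 625, 630, 615, 645, 628, 622]
routed   = [560, 572, 549, 580, 565, 570, 553, 583, 566, 562]

mb, hb = mean_ci95(baseline)
mr, hr = mean_ci95(routed)
improvement = 100 * (mb - mr) / mb   # percent travel-time reduction
```

A hypothesis such as H1 (≥7% reduction) is then supported when the improvement clears the threshold and the intervals do not overlap it away.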

25 pages, 666 KB  
Article
Continual Learning for Intrusion Detection Under Evolving Network Threats
by Chaoqun Guo, Xihan Li, Jubao Cheng, Shunjie Yang and Huiquan Gong
Future Internet 2025, 17(10), 456; https://doi.org/10.3390/fi17100456 - 4 Oct 2025
Viewed by 346
Abstract
In the face of ever-evolving cyber threats, modern intrusion detection systems (IDS) must achieve long-term adaptability without sacrificing performance on previously encountered attacks. Traditional IDS approaches often rely on static training assumptions, making them prone to forgetting old patterns, underperforming in label-scarce conditions, and struggling with imbalanced class distributions as new attacks emerge. To overcome these limitations, we present a continual learning framework tailored for adaptive intrusion detection. Unlike prior methods, our approach is designed to operate under real-world network conditions characterized by high-dimensional, sparse traffic data and task-agnostic learning sequences. The framework combines three core components: a clustering-based memory strategy that selectively retains informative historical samples using DP-Means; multi-level knowledge distillation that aligns current and previous model states at output and intermediate feature levels; and a meta-learning-driven class reweighting mechanism that dynamically adjusts to shifting attack distributions. Empirical evaluations on benchmark intrusion detection datasets demonstrate the framework’s ability to maintain high detection accuracy while effectively mitigating forgetting. Notably, it delivers reliable performance in continually changing environments where the availability of labeled data is limited, making it well-suited for real-world cybersecurity systems. Full article
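The DP-Means step behind the clustering-based memory can be sketched in a few lines: unlike k-means, it spawns a new cluster whenever a point lies farther than a threshold from every existing centroid, so the number of clusters adapts to the traffic. The 2-D toy points and the penalty λ below are illustrative, not the paper's features.

```python
def dp_means(points, lam, iters=10):
    """DP-Means: spawn a new cluster when a point's squared distance to
    every centroid exceeds lam. A replay memory can then keep one
    representative sample per cluster.
    """
    centroids = [points[0]]
    assign = []
    for _ in range(iters):
        assign = []
        for p in points:
            d2 = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            j = min(range(len(d2)), key=d2.__getitem__)
            if d2[j] > lam:               # too far from everything: new cluster
                centroids.append(p)
                j = len(centroids) - 1
            assign.append(j)
        for j in range(len(centroids)):    # recompute centroids as means
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, assign

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
cents, assign = dp_means(pts, lam=1.0)     # discovers two clusters
```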

27 pages, 1588 KB  
Article
Toward the Theoretical Foundations of Industry 6.0: A Framework for AI-Driven Decentralized Manufacturing Control
by Andrés Fernández-Miguel, Susana Ortíz-Marcos, Mariano Jiménez-Calzado, Alfonso P. Fernández del Hoyo, Fernando E. García-Muiña and Davide Settembre-Blundo
Future Internet 2025, 17(10), 455; https://doi.org/10.3390/fi17100455 - 3 Oct 2025
Viewed by 534
Abstract
This study advances toward establishing the theoretical foundations of Industry 6.0 by developing a comprehensive framework that integrates artificial intelligence (AI), decentralized control systems, and cyber–physical production environments for intelligent, sustainable, and adaptive manufacturing. The research employs a tri-modal methodology (deductive, inductive, and abductive reasoning) to construct a theoretical architecture grounded in five interdependent constructs: advanced technology integration, decentralized organizational structures, mass customization and sustainability strategies, cultural transformation, and innovation enhancement. Unlike prior conceptualizations of Industry 6.0, the proposed framework explicitly emphasizes the cyclical feedback between innovation and organizational design, as well as the role of cultural transformation as a binding element across technological, organizational, and strategic domains. The resulting framework demonstrates that AI-driven decentralized control systems constitute the cornerstone of Industry 6.0, enabling autonomous real-time decision-making, predictive zero-defect manufacturing, and strategic organizational agility through distributed intelligent control architectures. This work contributes foundational theory and actionable guidance for transitioning from centralized control paradigms to AI-driven distributed intelligent manufacturing control systems, establishing a conceptual foundation for the emerging Industry 6.0 paradigm. Full article
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)

15 pages, 1705 KB  
Article
Enhancing Two-Step Random Access in LEO Satellite Internet with an Attack-Aware Adaptive Backoff Indicator (AA-BI)
by Jiajie Dong, Yong Wang, Qingsong Zhao, Ruiqian Ma and Jiaxiong Yang
Future Internet 2025, 17(10), 454; https://doi.org/10.3390/fi17100454 - 1 Oct 2025
Viewed by 230
Abstract
Low-Earth-Orbit Satellite Internet (LEO SI), with its capability for seamless global coverage, is a key solution for connecting IoT devices in areas beyond terrestrial network reach, playing a vital role in building a future ubiquitous IoT system. Inspired by the IEEE 802.15.4 Improved Adaptive Backoff Algorithm (I-ABA), this paper proposes an Attack-Aware Adaptive Backoff Indicator (AA-BI) mechanism to enhance the security and robustness of the two-step random access process in LEO SI. The mechanism constructs a composite threat intensity indicator that incorporates collision probability, Denial-of-Service (DoS) attack strength, and replay attack intensity. This quantified threat level is smoothly mapped to a dynamic backoff window to achieve adaptive backoff adjustment. Simulation results demonstrate that, with 200 user equipment (UE) terminals, the AA-BI mechanism significantly improves the access success rate (ASR) and jamming resistance rate (JRR) under various attack scenarios compared to the I-ABA and Binary Exponential Backoff (BEB) algorithms. Notably, under high-attack conditions, AA-BI improves ASR by up to 25.1% and 56.6% over I-ABA and BEB, respectively. Moreover, under high-load conditions with 800 users, AA-BI still maintains superior performance, achieving an ASR of 0.42 and a JRR of 0.68, thereby effectively ensuring the access performance and reliability of satellite Internet in malicious environments. Full article
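The core AA-BI idea — fuse the three attack signals into one threat level, then map it smoothly onto a backoff window — can be sketched as below. The fusion weights and window bounds are illustrative assumptions, not the paper's parameters.

```python
def threat_level(p_collision, dos_intensity, replay_intensity,
                 w=(0.4, 0.35, 0.25)):
    """Composite threat indicator in [0, 1] from normalized inputs.

    Weights w are hypothetical; the paper defines its own fusion.
    """
    return w[0] * p_collision + w[1] * dos_intensity + w[2] * replay_intensity

def backoff_window(threat, bi_min=4, bi_max=64):
    """Linearly interpolate the backoff indicator between its bounds."""
    return round(bi_min + (bi_max - bi_min) * threat)

quiet  = backoff_window(threat_level(0.1, 0.0, 0.0))   # benign: small window
attack = backoff_window(threat_level(0.6, 0.9, 0.5))   # attacked: large window
```

Under attack the window stretches toward its upper bound, spreading retries in time, which is the mechanism behind the improved ASR and JRR.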

37 pages, 5285 KB  
Article
Assessing Student Engagement: A Machine Learning Approach to Qualitative Analysis of Institutional Effectiveness
by Abbirah Ahmed, Martin J. Hayes and Arash Joorabchi
Future Internet 2025, 17(10), 453; https://doi.org/10.3390/fi17100453 - 1 Oct 2025
Viewed by 309
Abstract
In higher education, institutional quality is traditionally assessed through metrics such as academic programs, research output, educational resources, and community services. However, it is important that institutional activities align with student expectations, particularly in relation to interactive learning environments, learning management system interaction, curricular and co-curricular activities, accessibility, support services and other learning resources that ensure academic success and, jointly, career readiness. The growing popularity of student engagement metrics as one of the key measures to evaluate institutional efficacy is now a feature across higher education. By monitoring student engagement, institutions assess the impact of existing resources and make necessary improvements or interventions to ensure student success. This study presents a comprehensive analysis of student feedback from the StudentSurvey.ie dataset (2016–2022), which consists of approximately 275,000 student responses, focusing on student self-perception of engagement in the learning process. By using classical topic modelling techniques such as Latent Dirichlet Allocation (LDA) and Bi-term Topic Modelling (BTM), along with the advanced transformer-based BERTopic model, we identify key themes in student responses that can impact institutional performance metrics. BTM proved more effective than LDA for short-text analysis, whereas BERTopic offered greater semantic coherence and uncovered hidden themes using deep learning embeddings. Moreover, a custom Named Entity Recognition (NER) model successfully extracted entities such as university personnel, digital tools, and educational resources, with improved performance as the training data size increased. To surface actionable feedback that suggests areas for improvement, an n-gram and bigram network analysis was used to examine common modifiers such as “more” and “better” and trends across student groups.
This study introduces a fully automated, scalable pipeline that integrates topic modelling, NER, and n-gram analysis to interpret student feedback, offering reportable insights and supporting structured enhancements to the student learning experience. Full article
(This article belongs to the Special Issue Machine Learning and Natural Language Processing)
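The modifier-focused bigram analysis has a simple core: count what follows the modifiers "more" and "better" in free-text suggestions. The sketch below shows that step on invented responses (not StudentSurvey.ie data); the paper additionally builds a network over these pairs and splits them by student group.

```python
from collections import Counter
import re

# Invented example responses, standing in for survey free text.
responses = [
    "More feedback on assignments would help",
    "Better wifi and more study spaces",
    "We need more feedback from tutors",
    "Better communication about timetables",
]

MODIFIERS = {"more", "better"}
pairs = Counter()
for text in responses:
    tokens = re.findall(r"[a-z]+", text.lower())
    for a, b in zip(tokens, tokens[1:]):   # consecutive-token bigrams
        if a in MODIFIERS:
            pairs[(a, b)] += 1

top = pairs.most_common(2)   # most frequent modifier-noun suggestions
```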

19 pages, 944 KB  
Article
Robust Optimization for IRS-Assisted SAGIN Under Channel Uncertainty
by Xu Zhu, Litian Kang and Ming Zhao
Future Internet 2025, 17(10), 452; https://doi.org/10.3390/fi17100452 - 1 Oct 2025
Viewed by 207
Abstract
With the widespread adoption of space–air–ground integrated networks (SAGINs) in next-generation wireless communications, intelligent reflecting surfaces (IRSs) have emerged as a key technology for enhancing system performance through passive link reinforcement. This paper addresses the prevalent issue of channel state information (CSI) uncertainty in practical systems by constructing an IRS-assisted multi-hop SAGIN communication model. To capture the performance degradation caused by channel estimation errors, a norm-bounded uncertainty model is introduced. A simulated annealing (SA)-based phase optimization algorithm is proposed to enhance system robustness and improve worst-case communication quality. Simulation results demonstrate that the proposed method significantly outperforms traditional multiple access strategies (SDMA and NOMA) under various user densities and perturbation levels, highlighting its stability and scalability in complex environments. Full article
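The simulated-annealing phase search can be sketched on a toy single-reflection model: maximize the combined reflected-channel magnitude over the IRS phase shifts, accepting occasional worse moves with a temperature-controlled probability. The channels below are random unit-modulus toys, and this sketch omits the paper's worst-case optimization over the norm-bounded CSI error.

```python
import math
import random

N = 16                                  # number of IRS elements (toy size)
rng = random.Random(0)
unit = lambda a: complex(math.cos(a), math.sin(a))
g = [unit(rng.uniform(0, 2 * math.pi)) for _ in range(N)]  # BS -> IRS
r = [unit(rng.uniform(0, 2 * math.pi)) for _ in range(N)]  # IRS -> user

def gain(theta):
    """Combined channel magnitude |sum_i g_i * e^{j theta_i} * r_i|."""
    return abs(sum(gi * unit(t) * ri for gi, t, ri in zip(g, theta, r)))

def sa_phases(iters=3000, T0=1.0, cooling=0.998):
    theta = [0.0] * N
    cur = gain(theta)
    best, best_theta = cur, theta[:]
    T = T0
    for _ in range(iters):
        i = rng.randrange(N)
        old = theta[i]
        theta[i] = rng.uniform(0, 2 * math.pi)   # propose one new phase
        new = gain(theta)
        # Accept improvements always, worse moves with Boltzmann probability.
        if new >= cur or rng.random() < math.exp((new - cur) / T):
            cur = new
            if new > best:
                best, best_theta = new, theta[:]
        else:
            theta[i] = old                        # reject: roll back
        T *= cooling                              # geometric cooling
    return best, best_theta

initial = gain([0.0] * N)
best, best_theta = sa_phases()
# Perfect phase alignment would give gain N; SA should close most of the gap.
```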

18 pages, 2048 KB  
Article
TwinP2G: A Software Application for Optimal Power-to-Gas Planning
by Eugenia Skepetari, Sotiris Pelekis, Hercules Koutalidis, Alexandros Menelaos Tzortzis, Georgios Kormpakis, Christos Ntanos and Dimitris Askounis
Future Internet 2025, 17(10), 451; https://doi.org/10.3390/fi17100451 - 30 Sep 2025
Viewed by 207
Abstract
This paper presents TwinP2G, a software application for optimal planning of investments in power-to-gas (PtG) systems. TwinP2G provides simulation and optimization services for the techno-economic analysis of user-customized energy networks. The core of TwinP2G is based on power flow simulation; however, it supports energy sector coupling, including electricity, green hydrogen, natural gas, and synthetic methane. The framework provides a user-friendly interface (UI) suitable for various user roles, including data scientists and energy experts, using visualizations and metrics on the assessed investments. An identity and access management mechanism also serves the security and authorization needs of the framework. Finally, TwinP2G revolutionizes the concept of data availability and data sharing by granting its users access to distributed energy datasets available in the EnerShare Data Space. These data are available to TwinP2G users for conducting their experiments and extracting useful insights on optimal PtG investments for the energy grid. Full article

18 pages, 654 KB  
Article
Trustworthy Face Recognition as a Service: A Multi-Layered Approach for Mitigating Spoofing and Ensuring System Integrity
by Mostafa Kira, Zeyad Alajamy, Ahmed Soliman, Yusuf Mesbah and Manuel Mazzara
Future Internet 2025, 17(10), 450; https://doi.org/10.3390/fi17100450 - 30 Sep 2025
Viewed by 413
Abstract
Facial recognition systems are increasingly used for authentication across domains such as finance, e-commerce, and public services, but their growing adoption raises significant concerns about spoofing attacks enabled by printed photos, replayed videos, or AI-generated deepfakes. To address this gap, we introduce a multi-layered Face Recognition-as-a-Service (FRaaS) platform that integrates passive liveness detection with active challenge–response mechanisms, thereby defending against both low-effort and sophisticated presentation attacks. The platform is designed as a scalable cloud-based solution, complemented by an open-source SDK for seamless third-party integration, and guided by ethical AI principles of fairness, transparency, and privacy. A comprehensive evaluation validates the system’s logic and implementation: (i) Frontend audits using Lighthouse consistently scored above 96% in performance, accessibility, and best practices; (ii) SDK testing achieved over 91% code coverage with reliable OAuth flow and error resilience; (iii) Passive liveness layer employed the DeepPixBiS model, which achieves an Average Classification Error Rate (ACER) of 0.4 on the OULU–NPU benchmark, outperforming prior state-of-the-art methods; and (iv) Load simulations confirmed high throughput (276 req/s), low latency (95th percentile at 1.51 ms), and zero error rates. Together, these results demonstrate that the proposed platform is robust, scalable, and trustworthy for security-critical applications. Full article
35 pages, 2055 KB  
Article
Evaluating Learning Success, Engagement, and Usability of Moalemy: An Arabic Rule-Based Chatbot
by Dalal Al Faia and Khalid Alomar
Future Internet 2025, 17(10), 449; https://doi.org/10.3390/fi17100449 - 30 Sep 2025
Abstract
A rule-based chatbot responds by matching users’ queries against pre-defined rules. In e-learning, chatbots can enhance the learning experience by assisting teachers in delivering learning materials in an engaging way. This research introduces Moalemy, an Arabic rule-based chatbot designed to provide a personalized learning experience by tailoring educational content to each learner’s prior knowledge. This empirical study evaluates learning outcomes, user engagement, and system usability using both subjective and objective metrics. It compares the effectiveness of the proposed adaptive, personalized Arabic rule-based chatbot with that of a static, non-personalized chatbot. The comparison was conducted across three levels of task difficulty (easy, medium, and hard) using a 2 × 3 within-subject experimental design with 34 participants. Descriptive statistics revealed higher mean usability and engagement values for the adaptive method. Although the analysis revealed no significant differences in learning outcomes or SUS scores, it showed a statistically significant difference in user satisfaction in favor of the adaptive method (p = 0.003). Inferential analyses showed no significant differences between the two learning methods in overall effectiveness, efficiency, and engagement. Across difficulty levels, however, the adaptive method outperformed the static method in efficiency and effectiveness at the medium level, and in engagement at the easy level. Full article
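The matching behavior described in the first sentence of this abstract can be sketched in a few lines. The rules, patterns, and responses below are hypothetical illustrations of the general technique, not Moalemy’s actual rule set:

```python
import re

# Hypothetical rule table: each rule pairs a pattern with a canned response.
# A real educational chatbot would hold many such rules, here in Arabic.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.IGNORECASE),
     "Hello! What topic would you like to study?"),
    (re.compile(r"\bloops?\b", re.IGNORECASE),
     "A loop repeats a block of code. Shall we start with for-loops?"),
]
FALLBACK = "Sorry, I did not understand. Could you rephrase?"

def reply(query: str) -> str:
    """Return the response of the first rule whose pattern matches the query,
    or a fallback when no rule matches."""
    for pattern, response in RULES:
        if pattern.search(query):
            return response
    return FALLBACK
```

An adaptive variant, as evaluated in the paper, would additionally select which rules or materials to serve based on a stored model of the learner’s prior knowledge, rather than answering every user from the same static table.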