Future Internet, Volume 17, Issue 11 (November 2025) – 7 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
30 pages, 1236 KB  
Article
TRIDENT-DE: Triple-Operator Differential Evolution with Adaptive Restarts and Greedy Refinement
by Vasileios Charilogis, Ioannis G. Tsoulos and Anna Maria Gianni
Future Internet 2025, 17(11), 488; https://doi.org/10.3390/fi17110488 - 24 Oct 2025
Abstract
This paper introduces TRIDENT-DE, a novel ensemble-based variant of Differential Evolution (DE) designed to tackle complex continuous global optimization problems. The algorithm leverages three complementary trial vector generation strategies (best/1/bin, current-to-best/1/bin, and pbest/1/bin) executed within a self-adaptive framework that employs jDE parameter control. To prevent stagnation and premature convergence, TRIDENT-DE incorporates adaptive micro-restart mechanisms, which periodically reinitialize a fraction of the population around the elite solution using Gaussian perturbations, thereby sustaining exploration even in rugged landscapes. Additionally, the algorithm integrates a greedy line-refinement operator that accelerates convergence by projecting candidate solutions along promising base-to-trial directions. These mechanisms are coordinated within a mini-batch update scheme, enabling aggressive iteration cycles while preserving population diversity. Experimental results across a diverse set of benchmark problems, including molecular potential energy surfaces and engineering design tasks, show that TRIDENT-DE consistently outperforms or matches state-of-the-art optimizers in terms of both best-found and mean performance. The findings highlight the potential of multi-operator, restart-aware DE frameworks as a powerful approach to advancing the state of the art in global optimization.
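As a rough illustration of the mechanisms described above, the sketch below combines the three mutation strategies, jDE-style self-adaptation, and a Gaussian micro-restart in plain NumPy. All names, defaults, and the minimization convention are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def jde_update(F, CR, tau=0.1):
    """jDE-style self-adaptation: each individual carries its own F and CR,
    resampled with probability tau before it produces a trial vector."""
    if rng.random() < tau:
        F = 0.1 + 0.9 * rng.random()
    if rng.random() < tau:
        CR = rng.random()
    return F, CR

def trial_vector(pop, fitness, i, F, CR, strategy, p=0.1):
    """Build one trial vector with binomial crossover using the three
    mutation strategies named in the abstract (minimization assumed)."""
    n, d = pop.shape
    best = pop[np.argmin(fitness)]
    r1, r2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    if strategy == "best/1/bin":
        mutant = best + F * (pop[r1] - pop[r2])
    elif strategy == "current-to-best/1/bin":
        mutant = pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
    else:  # "pbest/1/bin": base vector drawn at random from the top-p fraction
        top = np.argsort(fitness)[: max(1, int(p * n))]
        mutant = pop[rng.choice(top)] + F * (pop[r1] - pop[r2])
    mask = rng.random(d) < CR
    mask[rng.integers(d)] = True          # guarantee at least one mutated gene
    return np.where(mask, mutant, pop[i])

def micro_restart(pop, fitness, frac=0.2, sigma=0.1):
    """Reinitialize a fraction of the worst individuals around the elite
    solution with Gaussian perturbations, mirroring the adaptive restarts."""
    elite = pop[np.argmin(fitness)]
    k = max(1, int(frac * len(pop)))
    worst = np.argsort(fitness)[-k:]
    pop[worst] = elite + sigma * rng.standard_normal((k, pop.shape[1]))
    return pop
```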
28 pages, 1362 KB  
Article
Beyond the Polls: Quantifying Early Signals in Decentralized Prediction Markets with Cross-Correlation and Dynamic Time Warping
by Francisco Cordoba Otalora and Marinos Themistocleous
Future Internet 2025, 17(11), 487; https://doi.org/10.3390/fi17110487 - 24 Oct 2025
Abstract
In response to the persistent failures of traditional election polling, this study introduces the Decentralized Prediction Market Voter Framework (DPMVF), a novel tool to empirically test and quantify the predictive capabilities of Decentralized Prediction Markets (DPMs). We apply the DPMVF to Polymarket, analysing over 11 million on-chain transactions from 1 September to 5 November 2024 against aggregated polling in the 2024 U.S. Presidential Election across seven key swing states. By employing the Cross-Correlation Function (CCF) for linear analysis and Dynamic Time Warping (DTW) for non-linear pattern similarity, the framework provides a robust, multi-faceted measure of the lead-lag relationship between market sentiment and public opinion. Results reveal a striking divergence in predictive clarity across electoral contexts. In highly contested states like Arizona, Nevada, and Pennsylvania, the DPMVF identified statistically significant early signals. Using a non-parametric permutation test to validate the observed alignments, we found that Polymarket’s price trends preceded polling shifts by up to 14 days, a finding confirmed as non-spurious with high confidence (p < 0.01) and with exceptionally high correlation (up to 0.988) and shape similarity. By contrast, in states with low polling volatility such as North Carolina, the framework correctly diagnosed a weak signal, identifying a “low-signal environment” where the market had no significant polling trend to predict. This study’s primary contribution is a validated, descriptive tool for contextualizing DPM signals. The DPMVF moves beyond a simple “pass/fail” verdict on prediction markets, offering a systematic approach to differentiating between genuine early signals and market noise. It provides a foundational tool for researchers, journalists, and campaigns to understand not only whether DPMs are predictive but also when and why, thereby offering a more nuanced and reliable path forward for the future of election analysis.
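The two similarity measures the framework combines can be sketched compactly. The lag convention (a positive lag means the market leads) and the 14-day window follow the abstract; everything else here is an illustrative assumption rather than the paper's exact procedure.

```python
import numpy as np

def lagged_correlation(market, polls, max_lag=14):
    """Pearson correlation of the market series against the polling series
    shifted by each lag; a peak at a positive lag suggests the market leads."""
    market = (market - market.mean()) / market.std()
    polls = (polls - polls.mean()) / polls.std()
    corr = {}
    for lag in range(max_lag + 1):
        a = market[:-lag] if lag else market   # market at time t
        b = polls[lag:] if lag else polls      # polls at time t + lag
        corr[lag] = float(np.corrcoef(a, b)[0, 1])
    return corr

def dtw_distance(x, y):
    """Classic O(n*m) dynamic time warping distance between two 1-D series,
    used as a non-linear shape-similarity measure."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The permutation test mentioned in the abstract would then shuffle one of the two series many times and compare the observed peak correlation against that null distribution.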
17 pages, 38319 KB  
Article
Class-Level Feature Disentanglement for Multi-Label Image Classification
by Yingduo Tong, Zhenyu Lu, Yize Dong and Yonggang Lu
Future Internet 2025, 17(11), 486; https://doi.org/10.3390/fi17110486 - 23 Oct 2025
Abstract
Generally, the interpretability of deep neural networks is categorized into a priori and a posteriori interpretability. A priori interpretability involves improving model transparency through deliberate design prior to training. Feature disentanglement is a method for achieving a priori interpretability. Existing disentanglement methods mostly focus on semantic features, such as intrinsic and shared features. These methods distinguish between the background and the main subject, but overlook class-level features in images. To address this, we take a further step by advancing feature disentanglement to the class level. For multi-label image classification tasks, we propose a class-level feature disentanglement method. Specifically, we introduce a multi-head classifier within the feature extraction layer of the backbone network to disentangle features. Each head in this classifier corresponds to a specific class and generates independent predictions, thereby guiding the model to better leverage the intrinsic features of each class and improving multi-label classification precision. Experiments demonstrate that our method significantly enhances performance metrics across various benchmarks while simultaneously achieving a priori interpretability.
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision)
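A minimal sketch of the class-level multi-head idea, assuming pooled backbone features and PyTorch; the layer sizes and names are illustrative placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ClassLevelHeads(nn.Module):
    """One lightweight head per class on top of shared backbone features.
    Each head emits an independent binary logit, so class-level features are
    disentangled at the classifier rather than only at the semantic level."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(feat_dim, 1) for _ in range(num_classes))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim) pooled features from the backbone
        logits = [head(features) for head in self.heads]  # num_classes x (batch, 1)
        return torch.cat(logits, dim=1)                   # (batch, num_classes)
```

Training would pair the stacked logits with multi-label targets through nn.BCEWithLogitsLoss, keeping one independent prediction per class.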
26 pages, 890 KB  
Review
Understanding Security Vulnerabilities in Private 5G Networks: Insights from a Literature Review
by Jacinta Fue, Jairo A. Gutierrez and Yezid Donoso
Future Internet 2025, 17(11), 485; https://doi.org/10.3390/fi17110485 - 23 Oct 2025
Abstract
Private fifth generation (5G) networks have emerged as a cornerstone for ultra-reliable, low-latency connectivity across mission-critical domains such as industrial automation, healthcare, and smart cities. Compared to conventional technologies like 4G or Wi-Fi, they provide clear advantages, including enhanced service continuity, higher reliability, and customizable security controls. However, these benefits come with new security challenges, particularly regarding the confidentiality, integrity, and availability of data and services. This article presents a review of security vulnerabilities in private 5G networks. The review pursues four objectives: (i) to identify and categorize key vulnerabilities, (ii) to analyze threats that undermine core security principles, (iii) to evaluate mitigation strategies proposed in the literature, and (iv) to outline gaps that demand further investigation. The findings offer a structured perspective on the evolving threat landscape of private 5G networks, highlighting both well-documented risks and emerging concerns. By mapping vulnerabilities to mitigation approaches and identifying areas where current solutions fall short, this study provides critical insights for researchers, practitioners, and policymakers. Ultimately, the review underscores the urgent need for robust and adaptive security frameworks to ensure the resilience of private 5G deployments in increasingly complex and high-stakes environments.
21 pages, 1870 KB  
Article
SFC-GS: A Multi-Objective Optimization Service Function Chain Scheduling Algorithm Based on Matching Game
by Shi Kuang, Moshu Niu, Sunan Wang, Haoran Li, Siyuan Liang and Rui Chen
Future Internet 2025, 17(11), 484; https://doi.org/10.3390/fi17110484 - 22 Oct 2025
Abstract
Service Function Chain (SFC) is a framework that dynamically orchestrates Virtual Network Functions (VNFs) and is essential to enhancing resource scheduling efficiency. However, traditional scheduling methods face several limitations, such as low matching efficiency, suboptimal resource utilization, and limited global coordination capabilities. To this end, we propose a multi-objective scheduling algorithm for SFCs based on matching games (SFC-GS). First, a multi-objective cooperative optimization model is established that aims to reduce scheduling time, increase request acceptance rate, lower latency, and minimize resource consumption. Second, a matching model is developed through the construction of preference lists for service nodes and VNFs, followed by multi-round iterative matching. In each round, only the resource status of the current and neighboring nodes is evaluated, thereby reducing computational complexity and improving response speed. Finally, a hierarchical batch processing strategy is introduced, in which service requests are scheduled in priority-based batches, and subsequent allocations are dynamically adjusted based on feedback from previous batches. This establishes a low-overhead iterative optimization mechanism to achieve global resource optimization. Experimental results demonstrate that, compared to baseline methods, SFC-GS improves request acceptance rate and resource utilization by approximately 8%, reduces latency and resource consumption by around 10%, and offers clear advantages in scheduling time.
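The preference-list matching at the heart of the method resembles a capacity-constrained deferred-acceptance procedure. The sketch below uses assumed data structures (dicts of preference lists and integer node capacities) and is not the SFC-GS algorithm with its multi-objective preferences or batch feedback.

```python
def match_vnfs_to_nodes(vnf_prefs, node_prefs, capacity):
    """Deferred-acceptance style matching between VNFs and service nodes.
    vnf_prefs[v]  : nodes ordered by VNF v's preference
    node_prefs[n] : VNFs ordered by node n's preference
    capacity[n]   : number of VNFs node n can host"""
    rank = {n: {v: r for r, v in enumerate(prefs)} for n, prefs in node_prefs.items()}
    assigned = {n: [] for n in node_prefs}
    next_choice = {v: 0 for v in vnf_prefs}
    free = list(vnf_prefs)                      # VNFs that still need a node
    while free:
        v = free.pop(0)
        if next_choice[v] >= len(vnf_prefs[v]):
            continue                            # v exhausted its list; stays unplaced
        n = vnf_prefs[v][next_choice[v]]
        next_choice[v] += 1
        assigned[n].append(v)
        assigned[n].sort(key=lambda x: rank[n].get(x, float("inf")))
        if len(assigned[n]) > capacity[n]:
            free.append(assigned[n].pop())      # evict the least-preferred VNF
    return assigned
```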
18 pages, 4336 KB  
Article
Joint Optimization of Container Resource Defragmentation and Task Scheduling in Queueing Cloud Computing: A DRL-Based Approach
by Yan Guo, Lan Wei, Cunqun Fan, You Ma, Xiangang Zhao and Henghong He
Future Internet 2025, 17(11), 483; https://doi.org/10.3390/fi17110483 - 22 Oct 2025
Abstract
Container-based virtualization has become pivotal in cloud computing, and resource fragmentation is inevitable due to the frequency of container deployment/termination and the heterogeneous nature of IoT tasks. In queuing cloud systems, resource defragmentation and task scheduling are interdependent yet rarely co-optimized in existing research. This paper addresses this gap by investigating the joint optimization of resource defragmentation and task scheduling in a queuing cloud computing system. We first formulate the problem to minimize task completion time and maximize resource utilization, then transform it into an online decision problem. We propose a Deep Reinforcement Learning (DRL)-based two-layer iterative approach called DRL-RDG, which uses a Resource Defragmentation approach based on a Greedy strategy (RDG) to find the optimal container migration solution and a DRL algorithm to learn the optimal task-scheduling solution. Simulation results show that DRL-RDG achieves a low average task completion time and high resource utilization, demonstrating its effectiveness in queuing cloud environments.
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
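The greedy defragmentation step can be pictured as consolidating lightly loaded nodes onto fuller ones. The sketch below is a generic best-fit consolidation under an assumed data layout, not the RDG strategy or cost model from the paper.

```python
def greedy_defragmentation(nodes):
    """Greedy consolidation sketch: migrate containers off lightly loaded
    nodes onto the fullest nodes that still have room, reducing fragmentation.
    nodes: {node_id: {"capacity": float, "containers": {cid: demand}}}"""
    def used(n):
        return sum(nodes[n]["containers"].values())

    migrations = []
    for src in sorted(nodes, key=used):                       # lightest nodes first
        for cid, demand in list(nodes[src]["containers"].items()):
            targets = [t for t in nodes
                       if t != src and nodes[t]["capacity"] - used(t) >= demand]
            if not targets:
                continue
            dst = max(targets, key=used)                      # best fit: fullest target
            nodes[dst]["containers"][cid] = nodes[src]["containers"].pop(cid)
            migrations.append((cid, src, dst))
    return migrations
```

The DRL layer would then schedule queued tasks against the consolidated resource view; that part is not sketched here.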
24 pages, 3824 KB  
Article
BiTAD: An Interpretable Temporal Anomaly Detector for 5G Networks with TwinLens Explainability
by Justin Li Ting Lau, Ying Han Pang, Charilaos Zarakovitis, Heng Siong Lim, Dionysis Skordoulis, Shih Yin Ooi, Kah Yoong Chan and Wai Leong Pang
Future Internet 2025, 17(11), 482; https://doi.org/10.3390/fi17110482 - 22 Oct 2025
Abstract
The transition to 5G networks brings unprecedented speed, ultra-low latency, and massive connectivity. Nevertheless, it introduces complex traffic patterns and broader attack surfaces that render traditional intrusion detection systems (IDSs) ineffective. Existing rule-based methods and classical machine learning approaches struggle to capture the temporal and dynamic characteristics of 5G traffic, while many deep learning models lack interpretability, making them unsuitable for high-stakes security environments. To address these challenges, we propose the Bidirectional Temporal Anomaly Detector (BiTAD), a deep temporal learning architecture for anomaly detection in 5G networks. BiTAD leverages dual-direction temporal sequence modelling with attention to encode both past and future dependencies while focusing on critical segments within network sequences. Like many deep models, BiTAD faces interpretability challenges. To resolve its “black-box” nature, a dual-perspective explainability module, coined TwinLens, is proposed. This module integrates SHAP and TimeSHAP to provide global feature attribution and temporal relevance, delivering dual-perspective interpretability. Evaluated on the public 5G-NIDD dataset, BiTAD demonstrates superior detection performance compared to existing models. TwinLens enables transparent insights by identifying which features were most influential to anomaly predictions and when. By jointly addressing the limitations in temporal modelling and interpretability, our work contributes a practical IDS framework tailored to the demands of next-generation mobile networks.
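The dual-direction temporal model with attention that the abstract describes can be approximated by a compact PyTorch module. The hidden sizes and names below are assumptions, not the authors' BiTAD, and the SHAP/TimeSHAP explanations would be computed on the trained model afterwards.

```python
import torch
import torch.nn as nn

class BiTemporalDetector(nn.Module):
    """Bidirectional LSTM over traffic sequences with additive attention
    across time steps, followed by a classification head."""
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) flow-level features per time step
        h, _ = self.lstm(x)                           # (batch, time, 2 * hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        context = (weights * h).sum(dim=1)            # (batch, 2 * hidden)
        return self.head(context)                     # anomaly / benign logits
```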