Computers, Volume 14, Issue 10 (October 2025) – 8 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
27 pages, 2519 KB  
Article
Examining the Influence of AI on Python Programming Education: An Empirical Study and Analysis of Student Acceptance Through TAM3
by Manal Alanazi, Alice Li, Halima Samra and Ben Soh
Computers 2025, 14(10), 411; https://doi.org/10.3390/computers14100411 - 26 Sep 2025
Abstract
This study investigates the adoption of PyChatAI, a bilingual AI-powered chatbot for Python programming education, among female computer science students at Jouf University. Guided by the Technology Acceptance Model 3 (TAM3), it examines the determinants of user acceptance and usage behaviour. A Solomon Four-Group experimental design (N = 300) was used to control pre-test effects and isolate the impact of the intervention. PyChatAI provides interactive problem-solving, code explanations, and topic-based tutorials in English and Arabic. Measurement and structural models were validated via Confirmatory Factor Analysis (CFA) and Structural Equation Modelling (SEM), achieving excellent fit (CFI = 0.980, RMSEA = 0.039). Results show that perceived usefulness (β = 0.446, p < 0.001) and perceived ease of use (β = 0.243, p = 0.005) significantly influence intention to use, which in turn predicts actual usage (β = 0.406, p < 0.001). Trust, facilitating conditions, and hedonic motivation emerged as strong antecedents of ease of use, while social influence and cognitive factors had limited impact. These findings demonstrate that AI-driven bilingual tools can effectively enhance programming engagement in gender-specific, culturally sensitive contexts, offering practical guidance for integrating intelligent tutoring systems into computer science curricula. Full article
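The structural paths reported above (intention predicted by perceived usefulness and perceived ease of use) can be illustrated with a toy regression on synthetic data. This is only a sketch: the study uses latent-variable CFA/SEM, which this omits, and the data-generating betas below are assumptions that merely echo the reported magnitudes.

```python
import numpy as np

# Illustrative only: one structural path of a TAM-style model
# (intention ~ perceived usefulness + perceived ease of use),
# estimated by ordinary least squares on standardized synthetic data.
rng = np.random.default_rng(0)
n = 300  # matches the study's sample size

pu = rng.standard_normal(n)    # perceived usefulness (standardized)
peou = rng.standard_normal(n)  # perceived ease of use (standardized)
# Hypothetical data-generating betas echoing the reported magnitudes.
intention = 0.446 * pu + 0.243 * peou + rng.standard_normal(n) * 0.5

X = np.column_stack([pu - pu.mean(), peou - peou.mean()])
beta, *_ = np.linalg.lstsq(X, intention - intention.mean(), rcond=None)
print(dict(beta_pu=round(beta[0], 3), beta_peou=round(beta[1], 3)))
```

With enough samples the recovered coefficients cluster near the generating values, which is the intuition behind reading the reported β paths as effect sizes.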

76 pages, 904 KB  
Review
Theoretical Bases of Methods of Counteraction to Modern Forms of Information Warfare
by Akhat Bakirov and Ibragim Suleimenov
Computers 2025, 14(10), 410; https://doi.org/10.3390/computers14100410 - 26 Sep 2025
Abstract
This review is devoted to a comprehensive analysis of modern forms of information warfare in the context of digitalization and global interconnectedness. The work considers fundamental theoretical foundations—cognitive distortions, mass communication models, network theories and concepts of cultural code. The key tools of information influence are described in detail, including disinformation, the use of botnets, deepfakes, memetic strategies and manipulations in the media space. Particular attention is paid to methods of identifying and neutralizing information threats using artificial intelligence and digital signal processing, including partial digital convolutions, Fourier–Galois transforms, residue number systems and calculations in finite algebraic structures. The ethical and legal aspects of countering information attacks are analyzed, and geopolitical examples are given, demonstrating the peculiarities of applying various strategies. The review is based on a systematic analysis of 592 publications selected from the international databases Scopus, Web of Science and Google Scholar, covering research from fundamental works to modern publications of recent years (2015–2025). It is also based on regulatory legal acts, which ensures a high degree of relevance and representativeness. The results of the review can be used in the development of technologies for monitoring, detecting and filtering information attacks, as well as in the formation of national cybersecurity strategies. Full article
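Among the finite-algebraic tools the review names, residue number systems are simple to demonstrate: an integer is stored as its residues modulo pairwise-coprime moduli, arithmetic acts componentwise without carries, and the value is recovered via the Chinese Remainder Theorem. A minimal sketch; the moduli below are arbitrary illustrative choices, not ones from the review.

```python
from math import prod

MODULI = (3, 5, 7)  # pairwise coprime; dynamic range = 3 * 5 * 7 = 105

def to_rns(x):
    # Represent x by its residues modulo each channel's modulus.
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction.
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return x % M

a, b = 17, 23
# Addition happens independently in each residue channel.
rns_sum = tuple((ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI))
print(from_rns(rns_sum))  # 40, recovered with no carries between channels
```

The carry-free, parallel arithmetic is what makes RNS attractive for the high-throughput signal processing the review discusses.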

32 pages, 1432 KB  
Review
A Review of Multi-Microgrids Operation and Control from a Cyber-Physical Systems Perspective
by Ola Ali and Osama A. Mohammed
Computers 2025, 14(10), 409; https://doi.org/10.3390/computers14100409 - 25 Sep 2025
Abstract
Developing multi-microgrid (MMG) systems provides a new paradigm for power distribution systems with a higher degree of resilience, flexibility, and sustainability. The inclusion of communication networks as part of MMGs is critical for coordinating distributed energy resources (DERs) in real time and deploying energy management systems (EMS) efficiently. However, communication quality of service (QoS) parameters such as latency, jitter, packet loss, and throughput play an essential role in MMG control and stability, especially in highly dynamic and high-traffic situations. This paper presents a focused review of MMG systems from a cyber-physical viewpoint, particularly concerning the challenges and implications of communication network performance for energy management. The literature reviewed covers control strategies, models of communication infrastructure, cybersecurity challenges, and co-simulation platforms. We identify research gaps, including, but not limited to, the need for scalable, real-time cyber-physical systems; the limited study of communication QoS under realistic traffic conditions; and the lack of integrated cybersecurity strategies for MMGs. We suggest future research opportunities that address these gaps to enhance the resiliency, adaptability, and sustainability of modern cyber-physical MMGs. Full article
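The QoS parameters the review treats as first-class constraints can be made concrete with a small calculation over synthetic per-packet one-way delays. Illustrative only: the simplified jitter formula below (mean absolute change between consecutive delays, a rough stand-in for the RFC 3550 interarrival-jitter estimator) is an assumption, not a model taken from the paper.

```python
# Synthetic one-way delays in milliseconds; None marks a lost packet.
delays_ms = [12.0, 15.5, 11.8, None, 13.2, 40.1, 12.9]

received = [d for d in delays_ms if d is not None]
loss_rate = 1 - len(received) / len(delays_ms)    # ~14.3% packet loss
mean_latency = sum(received) / len(received)      # inflated by the 40.1 ms spike
# Jitter as mean absolute variation between consecutive delays.
jitter = sum(abs(b - a) for a, b in zip(received, received[1:])) / (len(received) - 1)

print(round(loss_rate, 3), round(mean_latency, 2), round(jitter, 2))
```

Even this toy trace shows why mean latency alone is misleading for control loops: one congested interval dominates both the mean and the jitter figure.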

20 pages, 2911 KB  
Article
Topological Machine Learning for Financial Crisis Detection: Early Warning Signals from Persistent Homology
by Ecaterina Guritanu, Enrico Barbierato and Alice Gatti
Computers 2025, 14(10), 408; https://doi.org/10.3390/computers14100408 - 24 Sep 2025
Abstract
We propose a strictly causal early-warning framework for financial crises based on topological signal extraction from multivariate return streams. Sliding windows of daily log-returns are mapped to point clouds, from which Vietoris–Rips persistence diagrams are computed and summarised by persistence landscapes. A single, interpretable indicator is obtained as the L2 norm of the landscape and passed through a causal decision rule (with thresholds α, β and run-length parameters s, t) that suppresses isolated spikes and collapses bursts to time-stamped warnings. On four major U.S. equity indices (S&P 500, NASDAQ, DJIA, Russell 2000) over 1999–2021, the method, at a fixed strictly causal operating point (α = β = 3.1, s = 57, t = 16), attains a balanced precision–recall trade-off (F1 ≈ 0.50) with an average lead time of about 34 days. It anticipates two of the four canonical crises and issues a contemporaneous signal for the 2008 global financial crisis. Sensitivity analyses confirm the qualitative robustness of the detector, while comparisons with permissive spike rules and volatility-based baselines demonstrate substantially fewer false alarms at comparable recall. The approach delivers interpretable topology-based warnings and provides a reproducible route to combining persistent homology with causal event detection in financial time series. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
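The causal decision rule can be sketched as a run-length filter over the indicator series. The single-threshold reading below is an assumption: a warning fires when the indicator stays above a threshold for at least `s` consecutive days, then the detector stays silent for `t` days so a burst collapses to one time-stamped warning. The exact roles of the paper's (α, β, s, t) parameters are not reproduced here.

```python
import numpy as np

def warnings_from_indicator(x, threshold, s, t):
    # Strictly causal: day `day` uses only values up to and including day `day`.
    out, run, cooldown = [], 0, 0
    for day, value in enumerate(x):
        cooldown = max(0, cooldown - 1)
        run = run + 1 if value > threshold else 0  # isolated spikes reset nothing lasting
        if run >= s and cooldown == 0:
            out.append(day)  # stamped on the day the run-length test first passes
            cooldown = t     # collapse the rest of the burst into this warning
    return out

# A 10-day burst inside a quiet series yields exactly one warning, at day 34.
indicator = np.concatenate([np.full(30, 1.0), np.full(10, 5.0), np.full(30, 1.0)])
print(warnings_from_indicator(indicator, threshold=3.0, s=5, t=16))  # [34]
```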

22 pages, 858 KB  
Systematic Review
Network Data Flow Collection Methods for Cybersecurity: A Systematic Literature Review
by Alessandro Carvalho Coutinho and Luciano Vieira de Araújo
Computers 2025, 14(10), 407; https://doi.org/10.3390/computers14100407 - 24 Sep 2025
Abstract
Network flow collection has become a cornerstone of cyber defence, yet the literature still lacks a consolidated view of which technologies are effective across different environments and conditions. We conducted a systematic review of 362 publications indexed in six digital libraries between January 2019 and July 2025, of which 51 met PRISMA 2020 eligibility criteria. All extraction materials are archived on OSF. NetFlow derivatives appear in 62.7% of the studies, IPFIX in 45.1%, INT/P4 or OpenFlow mirroring in 17.6%, and sFlow in 9.8%, with totals exceeding 100% because several papers evaluate multiple protocols. In total, 17 of the 51 studies (33.3%) tested production links of at least 40 Gbps, while others remained in laboratory settings. Fewer than half reported packet-loss thresholds or privacy controls, and none adopted a shared benchmark suite. These findings highlight trade-offs between throughput, fidelity, computational cost, and privacy, as well as gaps in encrypted-traffic support and GDPR-compliant anonymisation. Most importantly, our synthesis demonstrates that flow-collection methods directly shape what can be detected: some exporters are effective for volumetric attacks such as DDoS, while others enable visibility into brute-force authentication, botnets, or IoT malware. In other words, the choice of telemetry technology determines which threats and anomalous behaviours remain visible or hidden to defenders. By mapping technologies, metrics, and gaps, this review provides a single reference point for researchers, engineers, and regulators facing the challenges of flow-aware cybersecurity. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
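The NetFlow/IPFIX-style aggregation that the reviewed exporters perform can be sketched in a few lines: packets sharing a 5-tuple key are collapsed into one flow record with packet and byte counters. The record fields and sample packets below are illustrative, not drawn from any surveyed system.

```python
from collections import defaultdict

# Synthetic packets: (src IP, dst IP, IP protocol, src port, dst port, bytes).
packets = [
    ("10.0.0.1", "10.0.0.9", 6, 44321, 443, 1500),
    ("10.0.0.1", "10.0.0.9", 6, 44321, 443, 900),
    ("10.0.0.2", "10.0.0.9", 17, 5353, 53, 80),
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, proto, sport, dport, size in packets:
    key = (src, dst, proto, sport, dport)  # classic 5-tuple flow key
    flows[key]["packets"] += 1
    flows[key]["bytes"] += size

for key, rec in flows.items():
    print(key, rec)
```

This compression is also why the choice of exporter shapes detection: volumetric attacks survive aggregation well, while behaviours visible only at packet granularity may not.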

32 pages, 852 KB  
Article
Benchmarking the Responsiveness of Open-Source Text-to-Speech Systems
by Ha Pham Thien Dinh, Rutherford Agbeshi Patamia, Ming Liu and Akansel Cosgun
Computers 2025, 14(10), 406; https://doi.org/10.3390/computers14100406 - 23 Sep 2025
Abstract
Responsiveness—the speed at which a text-to-speech (TTS) system produces audible output—is critical for real-time voice assistants yet has received far less attention than perceptual quality metrics. Existing evaluations often touch on latency but do not establish reproducible, open-source standards that capture responsiveness as a first-class dimension. This work introduces a baseline benchmark designed to fill that gap. Our framework unifies latency distribution, tail latency, and intelligibility within a transparent and dataset-diverse pipeline, enabling a fair and replicable comparison across 13 widely used open-source TTS models. By grounding evaluation in structured input sets ranging from single words to sentence-length utterances and adopting a methodology inspired by standardized inference benchmarks, we capture both typical and worst-case user experiences. Unlike prior studies that emphasize closed or proprietary systems, our focus is on establishing open, reproducible baselines rather than ranking against commercial references. The results reveal substantial variability across architectures, with some models delivering near-instant responses while others fail to meet interactive thresholds. By centering evaluation on responsiveness and reproducibility, this study provides an infrastructural foundation for benchmarking TTS systems and lays the groundwork for more comprehensive assessments that integrate both fidelity and speed. Full article
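The benchmark's two headline dimensions, typical and worst-case responsiveness, reduce to median and tail percentiles over repeated time-to-first-audio measurements. The numbers below are synthetic; no model or result from the paper is reproduced.

```python
def percentile(values, q):
    # Nearest-rank percentile: the smallest value with at least q% of
    # samples at or below it (q given as an integer percentage).
    s = sorted(values)
    k = -(-len(s) * q // 100) - 1  # ceil(n * q / 100) - 1
    return s[k]

# Synthetic time-to-first-audio measurements (ms) for one model.
latencies_ms = [120, 135, 128, 900, 131, 126, 140, 122, 133, 127]

p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
print(p50, p95)  # 128 900: one slow synthesis dominates the tail
```

This is why the benchmark treats tail latency as a first-class metric: a model with an excellent median can still feel unresponsive if one request in twenty stalls.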

21 pages, 1229 KB  
Article
Eghatha: A Blockchain-Based System to Enhance Disaster Preparedness
by Ayoub Ghani, Ahmed Zinedine and Mohammed El Mohajir
Computers 2025, 14(10), 405; https://doi.org/10.3390/computers14100405 - 23 Sep 2025
Abstract
Natural disasters often strike unexpectedly, leaving thousands of victims and affected individuals each year. Effective disaster preparedness is critical to reducing these consequences and accelerating recovery. This paper presents Eghatha, a blockchain-based decentralized system designed to optimize humanitarian aid delivery during crises. By enabling secure and transparent transfers of donations and relief from donors to beneficiaries, the system enhances trust and operational efficiency. All transactions are immutably recorded and verified on a blockchain network, reducing fraud and misuse while adapting to local contexts. The platform is volunteer-driven, coordinated by civil society organizations with humanitarian expertise, and supported by government agencies involved in disaster response. Eghatha’s design accounts for disaster-related constraints—including limited mobility, varying levels of technological literacy, and resource accessibility—by offering a user-friendly interface, support for local currencies, and integration with locally available technologies. These elements ensure inclusivity for diverse populations. Aligned with Morocco’s “Digital Morocco 2030” strategy, the system contributes to both immediate crisis response and long-term digital transformation. Its scalable architecture and contextual sensitivity position the platform for broader adoption in similarly affected regions worldwide, offering a practical model for ethical, decentralized, and resilient humanitarian logistics. Full article
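The tamper-evidence such a system relies on can be shown in miniature: each donation record carries the hash of its predecessor, so altering any earlier entry invalidates every hash after it. A sketch of the general hash-chain idea, not Eghatha's actual implementation.

```python
import hashlib
import json

def add_block(chain, record):
    # Link each block to its predecessor via the predecessor's hash.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    # Recompute every hash and check every back-link.
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        linked = block["prev"] == (chain[i - 1]["hash"] if i else "0" * 64)
        if digest != block["hash"] or not linked:
            return False
    return True

ledger = []
add_block(ledger, {"donor": "D1", "amount": 100})
add_block(ledger, {"donor": "D2", "amount": 50})
print(verify(ledger))                     # True
ledger[0]["record"]["amount"] = 9999      # tamper with an earlier entry
print(verify(ledger))                     # False: the chain no longer verifies
```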

14 pages, 731 KB  
Article
Security-Aware Adaptive Video Streaming via Watermarking: Tackling Time-to-First-Byte Delays and QoE Issues in Live Video Delivery Systems
by Reza Kalan, Peren Jerfi Canatalay and Emre Karsli
Computers 2025, 14(10), 404; https://doi.org/10.3390/computers14100404 - 23 Sep 2025
Abstract
Illegal broadcasting is one of the primary challenges for Over the Top (OTT) service providers. Watermarking is a method used to trace illegal redistribution of video content. However, watermarking introduces processing overhead due to the embedding of unique patterns into the video content, which results in additional latency. End-to-end network latency, caused by network congestion or heavy load on the origin server, can slow data transmission, impacting the time it takes for the segment to reach the client. This paper addresses 5xx errors (e.g., 503, 504) at the Content Delivery Network (CDN) in real-world video streaming platforms, which can negatively impact Quality of Experience (QoE), particularly when watermarking techniques are employed. To address the performance issues caused by the integration of watermarking technology, we enhanced the system architecture by introducing and optimizing a shield cache in front of the packager at the origin server and fine-tuning the CDN configuration. These optimizations significantly reduced the processing load on the packager, minimized latency, and improved overall content delivery. As a result, we achieved a 6% improvement in the Key Performance Indicator (KPI), reflecting enhanced system stability and video quality. Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
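The core effect of the shield cache described above can be sketched as a toy: repeated CDN requests for the same segment are answered from the shield, so the origin packager (where watermarking adds latency) is hit once per segment. Real shields also collapse concurrent requests and honour TTLs, which this omits; all names here are illustrative.

```python
origin_hits = 0

def origin_fetch(segment):
    # Stands in for the expensive packaging + watermark-embedding work.
    global origin_hits
    origin_hits += 1
    return f"bytes-of-{segment}"

shield_cache = {}

def cdn_request(segment):
    if segment not in shield_cache:      # shield miss: one origin round-trip
        shield_cache[segment] = origin_fetch(segment)
    return shield_cache[segment]         # shield hit: the origin never sees it

for seg in ["seg1.m4s", "seg1.m4s", "seg2.m4s", "seg1.m4s"]:
    cdn_request(seg)
print(origin_hits)  # 2: four edge requests, only two origin fetches
```

Cutting origin load this way is what reduces the 5xx errors and time-to-first-byte delays the abstract describes.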
