Search Results (2,526)

Search Parameters:
Keywords = cloud service

49 pages, 1694 KB  
Review
Analysis of Deep Reinforcement Learning Algorithms for Task Offloading and Resource Allocation in Fog Computing Environments
by Endris Mohammed Ali, Jemal Abawajy, Frezewd Lemma and Samira A. Baho
Sensors 2025, 25(17), 5286; https://doi.org/10.3390/s25175286 (registering DOI) - 25 Aug 2025
Abstract
Fog computing is increasingly preferred over cloud computing for processing tasks from Internet of Things (IoT) devices with limited resources. However, placing tasks and allocating resources in distributed and dynamic fog environments remains a major challenge, especially when trying to meet strict Quality of Service (QoS) requirements. Deep reinforcement learning (DRL) has emerged as a promising solution to these challenges, offering adaptive, data-driven decision-making in real-time and uncertain conditions. While several surveys have explored DRL in fog computing, most focus on traditional centralized offloading approaches or emphasize reinforcement learning (RL) with limited integration of deep learning. To address this gap, this paper presents a comprehensive and focused survey on the full-scale application of DRL to the task offloading problem in fog computing environments involving multiple user devices and multiple fog nodes. We systematically analyze and classify the literature based on architecture, resource allocation methods, QoS objectives, offloading topology and control, optimization strategies, DRL techniques used, and application scenarios. We also introduce a taxonomy of DRL-based task offloading models and highlight key challenges, open issues, and future research directions. This survey serves as a valuable resource for researchers by identifying unexplored areas and suggesting new directions for advancing DRL-based solutions in fog computing. For practitioners, it provides insights into selecting suitable DRL techniques and system designs to implement scalable, efficient, and QoS-aware fog computing applications in real-world environments. Full article
(This article belongs to the Section Sensor Networks)

20 pages, 9068 KB  
Article
DDPG-Based Computation Offloading Strategy for Maritime UAV
by Ziyue Zhao, Yanli Xu and Qianlian Yu
Electronics 2025, 14(17), 3376; https://doi.org/10.3390/electronics14173376 (registering DOI) - 25 Aug 2025
Abstract
With the development of the maritime Internet of Things (MIoT), a large number of sensors are deployed, generating massive amounts of data. However, due to the limited data processing capabilities of the sensors and the constrained service capacity of maritime communication networks, the local and cloud data processing of MIoT are restricted. Thus, there is a pressing demand for efficient edge-based data processing solutions. In this paper, we investigate unmanned aerial vehicle (UAV)-assisted maritime edge computing networks. Under energy constraints of both UAV and MIoT devices, we propose a Deep Deterministic Policy Gradient (DDPG)-based maritime computation offloading and resource allocation algorithm to efficiently process MIoT tasks within the coverage of the UAV. The algorithm jointly optimizes task offloading ratios, UAV trajectory planning, and edge computing resource allocation to minimize total system task latency while satisfying energy consumption constraints. Simulation results validate its effectiveness and robustness in highly dynamic maritime environments. Full article
(This article belongs to the Special Issue Parallel, Distributed, Edge Computing in UAV Communication)
7 pages, 656 KB  
Proceeding Paper
Using Large Language Models for Ontology Development
by Darko Andročec
Eng. Proc. 2025, 104(1), 9; https://doi.org/10.3390/engproc2025104009 (registering DOI) - 22 Aug 2025
Abstract
This paper explores the application of Large Language Models (LLMs) for ontology development, focusing specifically on cloud service ontologies. We demonstrate how LLMs can streamline the ontology development process by following a modified Ontology Development 101 methodology using Perplexity AI. Our case study [...] Read more.
This paper explores the application of Large Language Models (LLMs) for ontology development, focusing specifically on cloud service ontologies. We demonstrate how LLMs can streamline the ontology development process by following a modified Ontology Development 101 methodology using Perplexity AI. Our case study shows that LLMs can effectively assist in defining scope, identifying existing ontologies, generating class hierarchies, creating properties, and populating instances. The resulting cloud service ontology integrates concepts from multiple standards and existing ontologies. While LLMs cannot fully automate ontology creation, they significantly reduce development time and complexity, serving as valuable assistants in the ontology engineering process. Full article

36 pages, 14083 KB  
Article
Workload Prediction for Proactive Resource Allocation in Large-Scale Cloud-Edge Applications
by Thang Le Duc, Chanh Nguyen and Per-Olov Östberg
Electronics 2025, 14(16), 3333; https://doi.org/10.3390/electronics14163333 - 21 Aug 2025
Abstract
Accurate workload prediction is essential for proactive resource allocation in large-scale Content Delivery Networks (CDNs), where traffic patterns are highly dynamic and geographically distributed. This paper introduces a CDN-tailored prediction and autoscaling framework that integrates statistical and deep learning models within an adaptive feedback loop. The framework is evaluated using 18 months of real traffic traces from a production multi-tier CDN, capturing realistic workload seasonality, cache–tier interactions, and propagation delays. Unlike generic cloud-edge predictors, our design incorporates CDN-specific features and model-switching mechanisms to balance prediction accuracy with computational cost. Seasonal ARIMA (S-ARIMA), Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Online Sequential Extreme Learning Machine (OS-ELM) are combined to support both short-horizon scaling and longer-term capacity planning. The predictions drive a queue-based resource-estimation model, enabling proactive cache–server scaling with low rejection rates. Experimental results demonstrate that the framework maintains high accuracy while reducing computational overhead through adaptive model selection. The proposed approach offers a practical, production-tested solution for predictive autoscaling in CDNs and can be extended to other latency-sensitive edge-cloud services with hierarchical architectures. Full article
(This article belongs to the Special Issue Next-Generation Cloud–Edge Computing: Systems and Applications)
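The queue-based resource-estimation step described in the abstract can be illustrated with a minimal sketch: given a predicted request rate, pick the smallest cache-server count that keeps per-server utilization below a target. The function name, service rate, and utilization threshold below are illustrative assumptions, not the paper's actual model.

```python
import math

def servers_needed(predicted_rate: float, service_rate: float,
                   max_utilization: float = 0.7) -> int:
    """Smallest server count c with utilization rho = rate / (c * mu) <= threshold."""
    if predicted_rate <= 0:
        return 1
    return max(1, math.ceil(predicted_rate / (service_rate * max_utilization)))

# e.g. a predicted 900 req/s against servers handling 100 req/s each:
# ceil(900 / (100 * 0.7)) = 13 servers keep utilization under 70%
print(servers_needed(900, 100))  # 13
```

In a proactive autoscaler, `predicted_rate` would come from the forecasting models (S-ARIMA, LSTM, etc.) rather than being a constant.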

20 pages, 1919 KB  
Article
Management of Virtualized Railway Applications
by Ivaylo Atanasov, Evelina Pencheva and Kamelia Nikolova
Information 2025, 16(8), 712; https://doi.org/10.3390/info16080712 - 21 Aug 2025
Abstract
Robust, reliable, and secure communications are essential for efficient railway operation and keeping employees and passengers safe. The Future Railway Mobile Communication System (FRMCS) is a global standard aimed at providing innovative, essential, and high-performance communication applications in railway transport. In comparison with the legacy communication system (GSM-R), it provides high data rates, ultra-high reliability, and low latency. The FRMCS architecture will also benefit from cloud computing, following the principles of the cloud-native 5G core network design based on Network Function Virtualization (NFV). In this paper, an approach to the management of virtualized FRMCS applications is presented. First, the key management functionality related to the virtualized FRMCS application is identified based on an analysis of the different use cases. Next, this functionality is synthesized as RESTful services. The communication between application management and the services is designed as Application Programming Interfaces (APIs). The APIs are formally verified by modeling the management states of an FRMCS application instance from different points of view, and it is mathematically proved that the management state models are synchronized in time. The latency introduced by the designed APIs, as a key performance indicator, is evaluated through emulation. Full article
(This article belongs to the Section Information Applications)

21 pages, 2657 KB  
Article
AI-Powered Adaptive Disability Prediction and Healthcare Analytics Using Smart Technologies
by Malak Alamri, Mamoona Humayun, Khalid Haseeb, Naveed Abbas and Naeem Ramzan
Diagnostics 2025, 15(16), 2104; https://doi.org/10.3390/diagnostics15162104 - 21 Aug 2025
Abstract
Background: By leveraging advanced wireless technologies, Healthcare Industry 5.0 promotes the continuous monitoring of real-time medical data acquired from the physical environment. These systems help identify diseases early by promptly collecting health records from patients’ bodies using biosensors. The dynamic nature of medical devices not only enhances data analysis in medical services and the prediction of chronic diseases, but also improves remote diagnostics with a latency-aware healthcare system. However, due to scalability and reliability limitations in data processing, most existing healthcare systems pose research challenges in the timely detection of personalized diseases, leading to inconsistent diagnoses, particularly when continuous monitoring is crucial. Methods: This work proposes an adaptive and secure framework for disability identification using the Internet of Medical Things (IoMT), integrating edge computing and artificial intelligence. To achieve the shortest response time for medical decisions, the proposed framework explores lightweight edge computing processes that collect physiological and behavioral data using biosensors. Furthermore, it offers a trusted mechanism using decentralized strategies to protect big data analytics from malicious activities and increase authentic access to sensitive medical data. Lastly, it provides personalized healthcare interventions while monitoring healthcare applications using realistic health records, thereby enhancing the system’s ability to identify diseases associated with chronic conditions. Results: The proposed framework is tested using simulations, and the results indicate the high accuracy of the healthcare system in detecting disabilities at the edges, while enhancing the prompt response of the cloud server and guaranteeing the security of medical data through lightweight encryption methods and federated learning techniques.
Conclusions: The proposed framework offers a secure and efficient solution for identifying disabilities in healthcare systems by leveraging IoMT, edge computing, and AI. It addresses critical challenges in real-time disease monitoring, enhancing diagnostic accuracy and ensuring the protection of sensitive medical data. Full article

25 pages, 2133 KB  
Article
Blockchain-Enabled Self-Autonomous Intelligent Transport System for Drone Task Workflow in Edge Cloud Networks
by Pattaraporn Khuwuthyakorn, Abdullah Lakhan, Arnab Majumdar and Orawit Thinnukool
Algorithms 2025, 18(8), 530; https://doi.org/10.3390/a18080530 - 20 Aug 2025
Abstract
In recent years, self-autonomous intelligent transportation applications such as drones and autonomous vehicles have seen rapid development and deployment across various countries. Within the domain of artificial intelligence, self-autonomous agents are defined as software entities capable of independently operating drones in an intelligent transport system (ITS) without human intervention. The integration of these agents into autonomous vehicles and their deployment across distributed cloud networks have increased significantly. These systems, which include drones, ground vehicles, and aircraft, are used to perform a wide range of tasks such as delivering passengers and packages within defined operational boundaries. Despite their growing utility, practical implementations face significant challenges stemming from the heterogeneity of network resources, as well as persistent issues related to security, privacy, and processing costs. To overcome these challenges, this study proposes a novel blockchain-enabled self-autonomous intelligent transport system designed for drone workflow applications. The proposed system architecture is based on a remote method invocation (RMI) client–server model and incorporates a serverless computing framework to manage processing costs. Termed the self-autonomous blockchain-enabled cost-efficient system (SBECES), the framework integrates a client and system agent mechanism governed by Q-learning and deep-learning-based policies. Furthermore, it incorporates a blockchain-based hash validation and fault-tolerant (HVFT) mechanism to ensure data integrity and operational reliability. A deep reinforcement learning (DRL)-enabled adaptive scheduler is utilized to manage drone workflow execution while meeting quality of service (QoS) constraints, including deadlines, cost-efficiency, and security. 
The overarching objective of this research is to minimize the total processing costs that comprise execution, communication, and security overheads, while maximizing operational rewards and ensuring the timely execution of drone-based tasks. Experimental results demonstrate that the proposed system achieves a 30% reduction in processing costs and a 29% improvement in security and privacy compared to existing state-of-the-art solutions. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

30 pages, 2110 KB  
Article
Navigating Cross-Border E-Commerce: Prioritizing Logistics Partners with Hybrid MCGDM
by Xingyu Ma and Chuanxu Wang
Entropy 2025, 27(8), 876; https://doi.org/10.3390/e27080876 - 19 Aug 2025
Abstract
As global e-commerce expands, efficient cross-border logistics services have become essential. To support the evaluation of logistics service providers (LSPs), we propose HD-CBDTOPSIS (Technique for Order Preference by Similarity to Ideal Solution with heterogeneous data and cloud Bhattacharyya distance), a hybrid multi-criteria group decision-making (MCGDM) model designed to handle complex, uncertain data. Our criteria system integrates traditional supplier evaluation with cross-border e-commerce characteristics, using heterogeneous data types—including exact numbers, intervals, digital datasets, multi-granularity linguistic terms, and linguistic expressions. These are unified using normal cloud models (NCMs), ensuring uncertainty is consistently represented. A novel algorithm, improved multi-step backward cloud transformation with sampling replacement (IMBCT-SR), is developed for converting dataset-type indicators into cloud models. We also introduce a new similarity measure, the Cloud Bhattacharyya Distance (CBD), which shows superior discrimination ability compared to traditional distances. Using the coefficient of variation (CV) based on CBD, we objectively determine criteria weights. A cloud-based TOPSIS approach is then applied to rank alternative LSPs, with all variables modeled using NCMs to ensure consistent uncertainty representation. An application case and comparative experiments demonstrate that HD-CBDTOPSIS is an effective, flexible, and robust tool for evaluating cross-border LSPs under uncertain and multi-dimensional conditions. Full article
(This article belongs to the Section Complexity)
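For orientation, the plain TOPSIS core that HD-CBDTOPSIS builds on can be sketched in a few lines: normalize the decision matrix, apply criteria weights, and rank alternatives by relative closeness to the ideal solution. The paper's cloud-model representation, CBD similarity measure, and CV-based weighting are not reproduced here, and the example scores are invented.

```python
import math

def topsis(matrix, weights, benefit):
    """Plain TOPSIS: closeness-to-ideal score per alternative.
    matrix[i][j] = score of alternative i on criterion j;
    benefit[j] is True if larger is better on criterion j."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) or 1.0 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.dist(v[i], ideal)   # distance to the ideal solution
        d_neg = math.dist(v[i], worst)   # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# three hypothetical LSPs scored on cost (lower is better) and reliability (higher is better)
scores = topsis([[8, 0.90], [5, 0.80], [6, 0.95]], [0.5, 0.5], [False, True])
best = max(range(3), key=lambda i: scores[i])
```

HD-CBDTOPSIS replaces the crisp entries with normal cloud models and the Euclidean distances with the Cloud Bhattacharyya Distance; the ranking skeleton stays the same.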

20 pages, 3592 KB  
Article
Federated Security for Privacy Preservation of Healthcare Data in Edge-Cloud Environments
by Rasanga Jayaweera, Himanshu Agrawal and Nickson M. Karie
Sensors 2025, 25(16), 5108; https://doi.org/10.3390/s25165108 - 17 Aug 2025
Abstract
Digital transformation in healthcare has introduced data privacy challenges, as hospitals struggle to protect patient information while adopting digital technologies such as AI, IoT, and cloud more rapidly than ever before. The adoption of powerful third-party Machine Learning as a Service (MLaaS) solutions for disease prediction has become a common practice. However, these solutions pose significant privacy risks when sensitive healthcare data are shared externally with a third-party server. This raises compliance concerns under regulations like HIPAA, GDPR, and Australia’s Privacy Act. To address these challenges, this paper explores a decentralized, privacy-preserving approach to training models among multiple healthcare stakeholders, integrating Federated Learning (FL) with Homomorphic Encryption (HE) to ensure model parameters remain protected throughout the learning process. This paper proposes a novel Homomorphic Encryption-based Adaptive Tuning for Federated Learning (HEAT-FL) framework to select encryption parameters based on model layer sensitivity. The proposed framework leverages the CKKS scheme to encrypt model parameters on the client side before sharing. This enables secure aggregation at the central server without requiring decryption, providing an additional layer of security through model-layer-wise parameter management. The proposed adaptive encryption approach significantly improves runtime efficiency while maintaining a balanced level of security. Compared to existing (non-adaptive) frameworks using 256-bit security settings, the proposed framework offers a 56.5% reduction in encryption time for 10 clients and 54.6% for four clients per epoch. Full article
(This article belongs to the Special Issue Privacy and Security in Sensor Networks)

21 pages, 21564 KB  
Article
Remote Visualization and Optimization of Fluid Dynamics Using Mixed Reality
by Sakshi Sandeep More, Brandon Antron, David Paeres and Guillermo Araya
Appl. Sci. 2025, 15(16), 9017; https://doi.org/10.3390/app15169017 - 15 Aug 2025
Abstract
This study presents an innovative pipeline for processing, compressing, and remotely visualizing large-scale numerical simulations of fluid dynamics in a virtual wind tunnel (VWT), leveraging virtual and augmented reality (VR/AR) for enhanced analysis and high-end visualization. The workflow addresses the challenges of handling massive databases generated using Direct Numerical Simulation (DNS) while maintaining visual fidelity and ensuring efficient rendering for user interaction. Fully immersive visualization of supersonic (Mach number 2.86) spatially developing turbulent boundary layers (SDTBLs) over strong concave and convex curvatures was achieved. The comprehensive DNS data provides insights on the transport phenomena inside turbulent boundary layers under strong deceleration or an Adverse Pressure Gradient (APG) caused by concave walls, as well as strong acceleration or a Favorable Pressure Gradient (FPG) caused by convex walls, under different wall thermal conditions (i.e., Cold, Adiabatic, and Hot walls). The process begins with a .vts file input from a DNS, which is visualized using ParaView software. These visualizations, representing different fluid behaviors based on a DNS with a high spatial/temporal resolution and employing millions of “numerical sensors”, are treated as individual time frames and exported in GL Transmission Format (GLTF), a widely used open-source file format designed for efficient transmission and loading of 3D scenes. To support the workflow, optimized Extract–Transform–Load (ETL) techniques were implemented for high-throughput data handling. Conversion of the exported GLTF files into binary GLB files reduced storage by 25% and improved load latency by 60%.
This research uses Unity’s Profile Analyzer and Memory Profiler to identify performance limitations during contour rendering, focusing on the GPU and CPU efficiency. Further, immersive VR/AR analytics are achieved by connecting the processed outputs to Unity engine software and Microsoft HoloLens Gen 2 via Azure Remote Rendering cloud services, enabling real-time exploration of fluid behavior in mixed-reality environments. This pipeline constitutes a significant advancement in the scientific visualization of fluid dynamics, particularly when applied to datasets comprising hundreds of high-resolution frames. Moreover, the methodologies and insights gleaned from this approach are highly transferable, offering potential applications across various other scientific and engineering disciplines. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

28 pages, 2462 KB  
Article
A Service Recommendation Model in Cloud Environment Based on Trusted Graph-Based Collaborative Filtering Recommender System
by Urvashi Rahul Saxena, Yogita Khatri, Rajan Kadel and Samar Shailendra
Network 2025, 5(3), 30; https://doi.org/10.3390/network5030030 - 13 Aug 2025
Abstract
Cloud computing has increasingly adopted multi-tenant infrastructures to enhance cost efficiency and resource utilization by enabling the shared use of computational resources. However, this shared model introduces several security and privacy concerns, including unauthorized access, data redundancy, and susceptibility to malicious activities. In such environments, the effectiveness of cloud-based recommendation systems largely depends on the trustworthiness of participating nodes. Traditional collaborative filtering techniques often suffer from limitations such as data sparsity and the cold-start problem, which significantly degrade rating prediction accuracy. To address these challenges, this study proposes a Trusted Graph-Based Collaborative Filtering Recommender System (TGBCF). The model integrates graph-based trust relationships with collaborative filtering to construct a trust-aware user network capable of generating reliable service recommendations. Each node’s reliability is quantitatively assessed using a trust metric, thereby improving both the accuracy and robustness of the recommendation process. Simulation results show that TGBCF achieves a rating prediction accuracy of 93%, outperforming the baseline collaborative filtering approach (82%). Moreover, the model reduces the influence of malicious nodes by 40–60%, demonstrating its applicability in dynamic and security-sensitive cloud service environments. Full article
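The trust-weighted prediction idea behind such recommenders can be reduced to a toy sketch: a node's rating estimate is the trust-weighted mean of its neighbors' ratings, so untrusted nodes contribute nothing. This is an illustrative simplification under assumed names and values, not the TGBCF model itself.

```python
def predict_rating(neighbor_ratings, trust):
    """Predict a rating as the trust-weighted mean of neighbors' ratings.
    neighbor_ratings: {user: rating}; trust: {user: trust score in [0, 1]}."""
    num = sum(trust.get(u, 0.0) * r for u, r in neighbor_ratings.items())
    den = sum(trust.get(u, 0.0) for u in neighbor_ratings)
    return num / den if den else None  # None when no trusted neighbor exists

# two trusted neighbors; node "c" has zero trust, so its outlier rating is ignored
r = predict_rating({"a": 4.0, "b": 5.0, "c": 1.0}, {"a": 0.9, "b": 0.6, "c": 0.0})
# (0.9 * 4 + 0.6 * 5) / 1.5 = 4.4
```

In the graph-based setting, the trust scores would themselves be derived from the trust network rather than supplied directly.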

29 pages, 919 KB  
Article
DDoS Defense Strategy Based on Blockchain and Unsupervised Learning Techniques in SDN
by Shengmin Peng, Jialin Tian, Xiangyu Zheng, Shuwu Chen and Zhaogang Shu
Future Internet 2025, 17(8), 367; https://doi.org/10.3390/fi17080367 - 13 Aug 2025
Abstract
With the rapid development of technologies such as cloud computing, big data, and the Internet of Things (IoT), Software-Defined Networking (SDN) is emerging as a new network architecture for the modern Internet. SDN separates the control plane from the data plane, allowing a central controller, the SDN controller, to quickly direct the routing devices within the topology to forward data packets, thus providing flexible traffic management for communication between information sources. However, traditional Distributed Denial of Service (DDoS) attacks still significantly impact SDN systems. This paper proposes a novel dual-layer strategy capable of detecting and mitigating DDoS attacks in an SDN network environment. The first layer of the strategy enhances security by using blockchain technology to replace the SDN flow table storage container in the northbound interface of the SDN controller. Smart contracts are then used to process the stored flow table information. We employ the time window algorithm and the token bucket algorithm to construct the first-layer strategy to defend against obvious DDoS attacks. To detect and mitigate less obvious DDoS attacks, we design a second-layer strategy that uses a composite data feature correlation coefficient calculation method and the Isolation Forest algorithm from unsupervised learning to perform binary classification, thereby identifying abnormal traffic. We conduct experimental validation using the publicly available DDoS dataset CIC-DDoS2019. The results show that using this strategy in the SDN network reduces the average deviation of round-trip time (RTT) by approximately 38.86% compared with the original SDN network without this strategy. Furthermore, the accuracy of DDoS attack detection reaches 97.66%, with an F1 score of 92.2%.
Compared with other similar methods, under comparable detection accuracy, the deployment of our strategy in small-scale SDN network topologies provides faster detection speeds for DDoS attacks and exhibits less fluctuation in detection time. This indicates that implementing this strategy can effectively identify DDoS attacks without affecting the stability of data transmission in the SDN network environment. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
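Of the two first-layer components, the token bucket is the simpler to sketch: requests consume tokens that refill at a fixed rate, so sustained floods are throttled while short bursts pass. The rate and capacity values below are illustrative, and timestamps are passed in explicitly to keep the sketch deterministic.

```python
class TokenBucket:
    """Token-bucket limiter: a request passes only if a token is available."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
# a 6-packet burst at t=0: the first 5 pass, the 6th is dropped
results = [bucket.allow(0.0) for _ in range(6)]
```

A deployed limiter would read the clock itself (e.g. a monotonic timer) and sit alongside the time-window check described in the abstract.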

10 pages, 724 KB  
Article
Real-Time Speech-to-Text on Edge: A Prototype System for Ultra-Low Latency Communication with AI-Powered NLP
by Stefano Di Leo, Luca De Cicco and Saverio Mascolo
Information 2025, 16(8), 685; https://doi.org/10.3390/info16080685 - 11 Aug 2025
Abstract
This paper presents a real-time speech-to-text (STT) system designed for edge computing environments requiring ultra-low latency and local processing. Differently from cloud-based STT services, the proposed solution runs entirely on a local infrastructure which allows the enforcement of user privacy and provides high performance in bandwidth-limited or offline scenarios. The designed system is based on a browser-native audio capture through WebRTC, real-time streaming with WebSocket, and offline automatic speech recognition (ASR) utilizing the Vosk engine. A natural language processing (NLP) component, implemented as a microservice, improves transcription results for spelling accuracy and clarity. Our prototype reaches sub-second end-to-end latency and strong transcription capabilities under realistic conditions. Furthermore, the modular architecture allows extensibility, integration of advanced AI models, and domain-specific adaptations. Full article
(This article belongs to the Section Information Applications)

21 pages, 1902 KB  
Article
Mobile Platform for Continuous Screening of Clear Water Quality Using Colorimetric Plasmonic Sensing
by Rima Mansour, Caterina Serafinelli, Rui Jesus and Alessandro Fantoni
Information 2025, 16(8), 683; https://doi.org/10.3390/info16080683 - 10 Aug 2025
Abstract
Effective water quality monitoring is essential for detecting pollution and protecting public health. However, traditional methods are slow, relying on costly equipment, central laboratories, and expert staffing, which delays real-time measurements. At the same time, plasmonic sensing technologies have advanced significantly, making them well suited to environmental monitoring; their reliance on large, expensive spectrometers, however, limits accessibility. This work bridges the gap between advanced plasmonic sensing and practical water monitoring needs by integrating plasmonic sensors with mobile technology. We present BioColor, a mobile platform comprising a plasmonic sensor setup, a mobile application, and cloud services. The platform processes captured colorimetric sensor images in real time using optimized image processing algorithms, including region-of-interest segmentation, color extraction (mean and dominant), and comparison via the CIEDE2000 metric. The results are visualized within the mobile app, providing instant, automated access to the sensing outcome. In our validation experiments, the system consistently measured color differences in sensor images captured under media with different refractive indices. A user experience test with 12 participants demonstrated excellent usability, yielding a System Usability Scale (SUS) score of 93. The BioColor platform moves advanced sensing capabilities from hardware into software, making environmental monitoring more accessible, efficient, and continuous. Full article
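The color-extraction step described above reduces a segmented region of interest to two summary colors: the per-channel mean and the most frequent (dominant) pixel value. A minimal sketch of that step in pure Python follows; the function names and the exact definition of "dominant" (most frequent exact RGB value) are assumptions, and the actual app presumably operates on camera images rather than pixel lists:

```python
from collections import Counter

def mean_color(pixels):
    """Per-channel arithmetic mean of an iterable of (R, G, B) tuples,
    rounded to the nearest integer channel value."""
    n = len(pixels)
    sums = [0, 0, 0]
    for r, g, b in pixels:
        sums[0] += r
        sums[1] += g
        sums[2] += b
    return tuple(round(s / n) for s in sums)

def dominant_color(pixels):
    """Most frequent exact (R, G, B) value in the region of interest."""
    return Counter(pixels).most_common(1)[0][0]
```

The two summary colors would then be compared against a reference via a color-difference metric such as CIEDE2000, which the platform uses for its final comparison step.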
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)

24 pages, 1640 KB  
Article
Digital Innovation, Business Models Transformations, and Agricultural SMEs: A PRISMA-Based Review of Challenges and Prospects
by Bingfeng Sun, Jianping Yu, Shoukat Iqbal Khattak, Sadia Tariq and Muhammad Zahid
Systems 2025, 13(8), 673; https://doi.org/10.3390/systems13080673 - 8 Aug 2025
Viewed by 912
Abstract
Digital innovation is rapidly transforming the agriculture sector, drawing attention from global development institutions, policymakers, technology firms, and scholars seeking to align food systems with international goals such as Zero Hunger and the FAO agendas. Small and medium-sized enterprises in agriculture (Agri-SMEs) account for a significant share of processing and production units, yet despite their importance they face substantial challenges in digital transformation. Technologies such as Artificial Intelligence (AI), blockchain, cloud services, IoT, and mobile platforms offer tools to improve efficiency, access, value creation, and traceability. However, the patterns and applications of these transformations in Agri-SMEs remain fragmented and under-theorized. This paper presents a systematic review of the interactions between digital transformation and innovation in Agri-SMEs, based on findings from ninety-five peer-reviewed studies. Key themes identified include AI-based decision support, blockchain traceability, cloud platforms, IoT precision agriculture, and mobile technologies for financial integration. The review maps these themes against business model values and highlights barriers, such as capacity gaps and infrastructure deficiencies, that hinder scalable adoption. It concludes with recommendations for future research, policy, and ecosystem coordination to promote the sustainable development of digitally robust Agri-SMEs. Full article
