Search Results (70)

Search Parameters:
Keywords = offline AI models

6 pages, 892 KB  
Proceeding Paper
Applying Model Context Protocol for Offline Small Language Models in Industrial Data Management
by Nian-Ze Hu, You-Xin Lin, Hao-Lun Huang, Po-Han Lu, Chih-Chen Lin, Yu-Tzu Hung, Sing-Cih Jhang and Pei-Yu Chou
Eng. Proc. 2026, 134(1), 31; https://doi.org/10.3390/engproc2026134031 - 7 Apr 2026
Abstract
In recent years, Large Language Models (LLMs) have demonstrated strong capabilities in contextual reasoning and knowledge retrieval. However, their application in industrial domains is limited by concerns regarding data security, reliance on cloud infrastructure, and high operational costs. To address these challenges, this study proposes the use of the Model Context Protocol (MCP) as a middleware framework that enables the deployment of offline-operable Small Language Models (SLMs) for industrial data processing. MCP facilitates structured interaction between SLMs and external resources (e.g., databases, APIs, and processors), allowing secure and controlled data access without exposing proprietary systems. As illustrated in the proposed framework, user input is first processed by the SLM (Qwen-7B) for intent determination. When external data is required, MCP coordinates the invocation of relevant resources and integrates the returned results into the model. The SLM then generates the final response. This approach enables SLMs to perform local computation for contextual analysis and decision support while maintaining low computational requirements and full data locality. The proposed system eliminates dependence on cloud-based LLM services and enhances security and cost efficiency. Experimental results demonstrate that the MCP-based architecture provides a practical and effective solution for deploying intelligent assistants in industrial environments without relying on large-scale external AI services.
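The coordination pattern this abstract describes (local intent determination, middleware-mediated tool invocation, then final generation by the SLM) can be illustrated with a short sketch. This is a minimal illustration only, assuming a generic callable model; the tool registry, function names, and stub SLM below are hypothetical stand-ins, not the authors' code or the MCP specification.

```python
# Minimal sketch of the SLM-plus-middleware pattern, with hypothetical names.

TOOL_REGISTRY = {
    # Stub resource: in the paper's setting this would be a database/API
    # reached through the middleware, never exposed to the cloud.
    "query_database": lambda args: {"rows": [("line_3_temp", 71.4)]},
}

def classify_intent(slm, user_input):
    # Step 1: the local SLM decides whether external data is needed.
    reply = slm(f"Pick a tool for: {user_input!r}. Reply 'none' or a tool name.")
    return reply.strip()

def answer(slm, user_input):
    tool = classify_intent(slm, user_input)
    context = ""
    if tool in TOOL_REGISTRY:
        # Step 2: middleware invokes the resource and returns results...
        result = TOOL_REGISTRY[tool]({"query": user_input})
        context = f"Tool result: {result}\n"
    # Step 3: ...which the SLM folds into its final, locally generated answer.
    return slm(context + f"Answer the user: {user_input}")

# Toy stand-in for an offline SLM, just to make the sketch executable.
slm = lambda prompt: "query_database" if prompt.startswith("Pick") else f"[SLM] {prompt}"
print(answer(slm, "What is the line 3 temperature?"))
```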

30 pages, 1286 KB  
Article
Large Language Model Recommendations for Empiric Antibiotics Versus Clinician Prescribing: A Non-Interventional Paired Retrospective Antimicrobial Stewardship Analysis
by Ninel Iacobus Antonie, Vlad Alexandru Ionescu, Gina Gheorghe, Loredana-Crista Tiucă and Camelia Cristina Diaconu
Antibiotics 2026, 15(4), 368; https://doi.org/10.3390/antibiotics15040368 - 2 Apr 2026
Viewed by 222
Abstract
Background/Objectives: Antimicrobial resistance (AMR) remains a major global health threat, strengthening the case for antimicrobial stewardship strategies that limit unnecessary broad-spectrum empiric therapy while preserving timely escalation when clinically warranted. Before any clinical deployment of large language model (LLM)-based antibiotic decision support can be considered, structured offline evaluation is needed to assess whether model outputs align with auditable stewardship constraints under real-world admission contexts. We therefore evaluated whether post hoc LLM-generated empiric antibiotic recommendations showed greater concordance with a pre-specified stewardship benchmarking framework than clinician-initiated regimens in a retrospective shadow-mode setting. Methods: Single-center retrospective paired evaluation at Clinical Emergency Hospital of Bucharest (Internal Medicine, 2020–2024). The unit of analysis was the admission (N = 493), with paired 24 h empiric regimens (clinician-prescribed vs. post hoc LLM-recommended via OpenAI API; not visible to clinicians; no influence on care). Local laboratory-derived epidemiology was precomputed from microbiology exports and provided as structured prompt context to approximate information parity with clinicians’ implicit local ecology knowledge. Primary (prespecified) endpoint: any contextual guardrail violation (unjustified carbapenem/antipseudomonal/anti-MRSA under prespecified structured severity/MDR-risk rules), exact McNemar. Key secondary (prespecified): Δ contextual guardrail penalty (LLM − Clin), sign test and Wilcoxon signed-rank (ties reported). Ethics committee approval was obtained. Results: Guardrail violations occurred in 17.0% of clinician regimens vs. 4.9% of LLM regimens (paired RD −12.2%; matched OR 0.216, 95% CI 0.127–0.367; McNemar exact p = 1.60 × 10⁻¹⁰). Δ penalty had median 0 with 398/493 ties; among non-ties, improvements (Δ < 0) exceeded adverse shifts (79 vs. 16; sign-test p = 3.47 × 10⁻¹¹). Conclusions: In this offline, non-interventional paired evaluation, LLM-generated empiric regimens showed greater concordance with a pre-specified stewardship benchmarking framework than clinician empiric regimens for the same admissions. These findings should not be interpreted as evidence of clinical superiority, patient safety, or causal effectiveness, but rather as process-level benchmarking within a rule-based stewardship construct. As such, reproducible guardrail-based benchmarking may serve as an early pre-implementation step to identify alignment and potential failure modes before prospective, safety-governed evaluation.
(This article belongs to the Section Antibiotics Use and Antimicrobial Stewardship)
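The paired statistics quoted in this abstract are standard and easy to reproduce in outline. A minimal sketch with scipy follows: the sign-test counts (79 improvements vs. 16 adverse shifts among non-ties) come from the abstract, while the McNemar discordant counts b and c are hypothetical placeholders, since the abstract reports only marginal violation rates.

```python
# Sketch of the paired tests named above; scipy.stats.binomtest is real API.
from scipy.stats import binomtest

# Sign test on the delta-penalty endpoint: under H0, non-tied pairs
# split 50/50 between improvements and adverse shifts.
p_sign = binomtest(16, 79 + 16, 0.5).pvalue

# Exact McNemar reduces to the same binomial form on discordant pairs.
b, c = 17, 77             # hypothetical: LLM-only vs. clinician-only violations
p_mcnemar = binomtest(min(b, c), b + c, 0.5).pvalue
matched_or = b / c        # conditional (matched-pairs) odds ratio

print(f"sign test p = {p_sign:.2e}")
print(f"McNemar p = {p_mcnemar:.2e}, matched OR = {matched_or:.3f}")
```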

23 pages, 2936 KB  
Article
Lightweight Transient-Source Detection Method for Edge Computing
by Jiahao Zhang, Yutian Fu, Feng Dong and Lingfeng Huang
Universe 2026, 12(4), 101; https://doi.org/10.3390/universe12040101 - 1 Apr 2026
Viewed by 193
Abstract
Transient-source detection without relying on difference images still faces challenges in achieving high accuracy, especially under practical space-based astronomical survey conditions where the data volume is enormous, on-orbit transmission bandwidth is limited, and real-time response is required for rapid follow-up observations. To address these issues, this paper proposes a lightweight detection network that integrates multi-scale feature fusion with contextual feature extraction, enabling efficient real-time processing on resource-constrained edge devices. The proposed model enhances robustness to point-spread-function variations across observation conditions and to complex background environments, while simultaneously improving detection accuracy. To evaluate performance comprehensively, lightweight VGG and ResNet architectures and other models commonly used as baselines for transient-source detection are adopted for comparison. Experimental results show that, with approximately the same number of parameters across models, the proposed network achieves the best accuracy, obtaining nearly a 1% improvement over the best-performing baseline. Based on this design, an ultra-lightweight version with only 7k parameters is further developed by incorporating a compact multi-scale module, improving accuracy by 1% over the version without the multi-scale structure. Moreover, through heterogeneous knowledge distillation and adaptive iterative training, the accuracy of the ultra-lightweight model is further increased from 93.3% to 94.0%. Finally, the model is deployed and validated on an AI hardware acceleration platform. The results demonstrate that the proposed method substantially improves inference throughput while maintaining high accuracy, providing a practical solution for real-time, low-latency, on-device transient-source detection under large data volumes and limited transmission conditions. Specifically, the proposed models are trained offline on a high-performance GPU and subsequently deployed on the Fudan Microelectronics 7100 AI board to evaluate their real-world inference efficiency on resource-constrained edge devices.
(This article belongs to the Special Issue Applications of Artificial Intelligence in Modern Astronomy)
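As a rough illustration of the multi-scale feature-fusion idea this abstract mentions, the sketch below runs parallel convolution branches with different receptive fields and fuses them with a 1×1 convolution. It is a minimal PyTorch sketch with illustrative channel counts, not the paper's architecture.

```python
# Compact multi-scale convolution block: parallel 3x3 and 5x5 branches,
# concatenated and fused by a 1x1 convolution. Sizes are illustrative.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.b3 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(out_ch, out_ch, kernel_size=1)  # channel fusion
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.b3(x), self.b5(x)], dim=1)
        return self.act(self.fuse(y))

x = torch.randn(1, 1, 64, 64)           # single-channel sky patch
print(MultiScaleBlock(1, 16)(x).shape)  # torch.Size([1, 16, 64, 64])
```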

7 pages, 1450 KB  
Proceeding Paper
BEMAX: A Leaf-Based Endangered Tree Classification System Using Convolutional Neural Network in Bohol Biodiversity Complex, the Philippines
by Bem Gumapac and Jocelyn Villaverde
Eng. Proc. 2026, 134(1), 14; https://doi.org/10.3390/engproc2026134014 - 30 Mar 2026
Viewed by 330
Abstract
Biodiversity monitoring in tropical ecosystems is constrained by limited infrastructure, insufficient localized datasets, and reliance on cloud-based tools. We introduce BEMAX, a lightweight convolutional neural network for offline classification of endangered tree species in the Bohol Biodiversity Complex, Philippines. A curated leaf-image dataset from five species and an unknown class was collected using a Raspberry Pi camera. The MobileNetV2-based model achieved a 93.89% validation accuracy and an 88.33% field accuracy. Deployed on a Raspberry Pi 4 with touchscreen and camera integration, BEMAX demonstrates embedded AI as a replicable framework for conservation in data-scarce environments.
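A transfer-learning setup consistent with this abstract (MobileNetV2 backbone, six output classes: five species plus "unknown") can be sketched in a few lines of torchvision code. The freezing policy and all training details below are assumptions, not the authors' recipe.

```python
# Minimal MobileNetV2 transfer-learning sketch for a 6-class leaf classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights="IMAGENET1K_V1")      # pretrained backbone
for p in model.features.parameters():
    p.requires_grad = False                               # freeze the backbone
model.classifier[1] = nn.Linear(model.last_channel, 6)    # new 6-class head

logits = model(torch.randn(1, 3, 224, 224))               # leaf photo placeholder
print(logits.shape)                                       # torch.Size([1, 6])
```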

20 pages, 3963 KB  
Article
CalcTutor: Multi-Agent LLM Grading of Handwritten Mathematics with RAG-Grounded Feedback for Adaptive Learning Support
by Le Ying Tan, Buyuan Zhu, Shiyu Hu, Ankit Mishra, Darren J. Yeo and Kang Hao Cheong
Mathematics 2026, 14(7), 1094; https://doi.org/10.3390/math14071094 - 24 Mar 2026
Viewed by 297
Abstract
Personalized instruction remains a major bottleneck in higher education, especially in large classes where timely, individualized feedback is difficult to achieve. Existing automation typically relies on rigid rule-based pipelines or computationally heavy deep learning models, making it difficult to simultaneously achieve interpretability, instructional usability, and scalable deployment. In this study, we present CalcTutor, a generative-AI-based assessment and feedback system designed to support open-ended handwritten calculus problem solving. The system organizes instructional support through three coordinated components: (1) a multi-agent large language model (LLM) mechanism that evaluates solution processes and produces diagnostic feedback, (2) a retrieval-augmented generation (RAG) pipeline that links diagnosed difficulties to aligned instructional materials, and (3) real-time learner analytics for both students and instructors, forming an integrated instructional support workflow rather than an automated answer-checking tool. In offline evaluation and a pilot classroom deployment, the multi-agent grader achieved a weighted agreement accuracy of 0.931 and an F1-score of 0.934 on 1055 handwritten solutions. Participant feedback and workflow testing indicated that CalcTutor can be stably integrated into routine classroom use and enables students to interpret and act upon the provided feedback. These results indicate that automated assessment, diagnostic feedback, and targeted review can operate coherently within a single instructional process that supports instructor-led assessment practices. Using undergraduate calculus as an application domain for open-ended handwritten mathematical assessment, the study demonstrates the operational feasibility of a closed-loop assessment–feedback–revision workflow and provides a deployable instructional infrastructure for formative instructional support in real classroom contexts.
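The RAG step described in component (2), linking a diagnosed difficulty to aligned instructional material, boils down to nearest-neighbor retrieval in an embedding space. Below is a toy sketch; the embed() function and the two-document corpus are hypothetical stand-ins for a real embedding model and knowledge base.

```python
# Toy retrieval step: map a diagnosed difficulty to the closest
# instructional resource by cosine similarity over embeddings.
import numpy as np

corpus = {
    "chain rule worked examples": np.array([0.9, 0.1, 0.0]),
    "u-substitution primer":      np.array([0.1, 0.8, 0.2]),
}

def embed(text):  # hypothetical embedding model (keyword counts as a stand-in)
    return np.array([text.count("chain"), text.count("substitution"), 1.0])

def retrieve(diagnosis):
    q = embed(diagnosis)
    score = lambda v: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    return max(corpus, key=lambda k: score(corpus[k]))

print(retrieve("student misapplies the chain rule"))  # chain rule worked examples
```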

11 pages, 5084 KB  
Article
AI-Assisted OCT Imaging for Core Needle Biopsy Guidance: The 1st in Humans Study
by Nicusor Iftimia, Poonam Yadav, Michael Primrose, Gopi Maguluri, Jack Jones, John Grimble and Rahul Anil Sheth
Diagnostics 2026, 16(5), 811; https://doi.org/10.3390/diagnostics16050811 - 9 Mar 2026
Viewed by 449
Abstract
Background: The heterogeneous nature of cancer, with varying degrees of fat, necrosis, fibrosis, and tissue repair, severely impacts the success of acquiring adequate tissue samples during percutaneous image-guided biopsy. Although ultrasound and CT fluoroscopy are used to identify tumor location and thus to guide biopsy needle insertion, these technologies do not provide the resolution necessary to determine tissue composition and enable the selection of the most appropriate location for biopsy specimen extraction. As a result, biopsies must often be repeated, at significant cost to the health care system. Methods: In this study, we introduce a combined optical imaging/artificial intelligence (OI/AI) methodology for the real-time assessment of tissue morphology at the tip of the biopsy needle, prior to the collection of a biopsy specimen. Addressing a significant clinical challenge, this approach aims to reduce the proportion of biopsy cores—currently as high as 40%—that yield low diagnostic value due to elevated adipose or low tumor content. Our methodology employs micron-scale optical coherence tomography (OCT) imaging to obtain detailed structural tissue information using a minimally invasive needle probe. The OCT images are automatically analyzed using convolutional neural network (CNN)-driven AI software developed by our team. A U-Net-style architecture was used to segment regions of tumor from the OCT scans. U-Net is a CNN architecture designed for fast, precise image segmentation, which classifies each pixel in an image to outline objects. This streamlined approach shows promise to provide clinicians with real-time results, supporting more accurate and informed decisions regarding biopsy site selection. To evaluate this technology, we conducted a clinical study using a custom-made OCT imager and recorded OCT images from patients diagnosed with liver cancers. Expert OCT interpreters supplied annotated reference images that were used to train a custom AI algorithm. Results: OCT imaging with ~10 µm axial and ~20 µm lateral resolution enabled the collection of high-quality images of the tissue. The AI analysis was performed offline. The U-Net achieved an AUC of ~0.877 on the validation dataset, indicating promising performance given the relatively small dataset used to train the model. The AI model matched human interpretations approximately 90% of the time, highlighting its promise for making biopsy procedures both more accurate and more efficient. Conclusions: A novel OCT instrument and AI software were evaluated for assessing tissue composition at the tip of the biopsy needle. The OCT instrument produced micron-scale resolution images of the tissue, enabling AI analysis and accurate real-time discrimination of tissue type. This preliminary study demonstrated the clinical potential of this technology for improving biopsy success.
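Since this abstract leans on the U-Net idea of per-pixel classification, a toy encoder–decoder with one skip connection may help make it concrete. Channel counts and depth are illustrative, not the paper's model.

```python
# Tiny U-Net-style sketch: downsample, upsample, concatenate the skip
# path, and emit a per-pixel probability map. Sizes are illustrative.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 8, 3, padding=1)
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Conv2d(8, 8, 3, padding=1)
        self.up = nn.ConvTranspose2d(8, 8, 2, stride=2)
        self.out = nn.Conv2d(16, 1, 1)   # 16 = skip (8) + upsampled (8)

    def forward(self, x):
        e = torch.relu(self.enc(x))
        m = torch.relu(self.mid(self.down(e)))
        u = self.up(m)
        # Per-pixel tumor probability from the fused skip + decoder features.
        return torch.sigmoid(self.out(torch.cat([e, u], dim=1)))

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```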

27 pages, 3000 KB  
Article
Response-Driven Optimal Emergency Control of Power Systems via Deep Learning-Based Sensitivity Embedded Optimization
by Lin Cheng, Han Wang, Yiwei Su and Gengfeng Li
Energies 2026, 19(5), 1284; https://doi.org/10.3390/en19051284 - 4 Mar 2026
Viewed by 269
Abstract
The transition towards high-renewable power systems introduces high-dimensional nonlinearity and uncertainty, rendering traditional offline look-up table schemes prone to control mismatch against "unseen" contingencies. Meanwhile, existing response-driven approaches face a dilemma between the computational latency of physics-based optimization and the safety risks of end-to-end AI. To bridge this gap, this paper proposes a Response-Driven Optimal Emergency Control Framework that ensures both millisecond-level speed and rigorous physical constraints. First, a deep learning-based predictor is employed to extract spatiotemporal features from real-time PMU data, enabling high-fidelity prediction of stability margins. Crucially, instead of direct black-box control, the data-driven model is utilized to derive linear control sensitivities via a batch-processing perturbation mechanism. This transforms the intractable Transient Stability Constrained Optimal Power Flow (TSC-OPF) problem into a real-time solvable Linear Programming model. Case studies on a regional AC/DC hybrid grid demonstrate that the proposed framework achieves high prediction accuracy and effectively restores stability in mismatch scenarios where traditional schemes fail. Furthermore, the decision speed of the proposed method is significantly improved compared to traditional time-domain simulations, thus strictly satisfying the real-time requirements of the second line of defense.
(This article belongs to the Section F1: Electrical Power System)
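The core trick this abstract describes, turning a black-box predictor into linear control sensitivities by batch perturbation and then solving a linear program, can be sketched compactly. The predict_margin() surrogate, the two-control setup, and the margin threshold below are hypothetical.

```python
# Sensitivity-embedded LP sketch: finite-difference sensitivities from a
# surrogate margin predictor feed a linear program over control actions.
import numpy as np
from scipy.optimize import linprog

def predict_margin(u):                  # hypothetical surrogate: margin vs. controls
    return 0.2 + np.array([0.05, 0.12]) @ u

u0, eps = np.zeros(2), 1e-3
base = predict_margin(u0)
# Batch perturbation: one unit vector per control gives a linear sensitivity.
sens = np.array([(predict_margin(u0 + eps * e) - base) / eps
                 for e in np.eye(2)])

# LP: minimize total control effort s.t. predicted margin >= 0.3, 0 <= u <= 1.
res = linprog(c=np.ones(2), A_ub=[-sens], b_ub=[base - 0.3], bounds=[(0, 1)] * 2)
print(res.x)   # cheapest action satisfying the linearized stability constraint
```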

19 pages, 3583 KB  
Article
Edge AI-Based Gait-Phase Detection for Closed-Loop Neuromodulation in SCI Mice
by Ahnsei Shon, Justin T. Vernam, Xiaolong Du and Wei Wu
Sensors 2026, 26(4), 1311; https://doi.org/10.3390/s26041311 - 18 Feb 2026
Viewed by 699
Abstract
Real-time detection of gait phase is a critical challenge for closed-loop neuromodulation systems aimed at restoring locomotion after spinal cord injury (SCI). However, many existing gait analysis approaches rely on offline processing or computationally intensive models that are unsuitable for low-latency, embedded deployment. In this study, we present a hybrid AI-based sensing architecture that enables real-time kinematic extraction and on-device gait-phase classification for closed-loop neuromodulation in SCI mice. A vision AI module performs marker-assisted, high-speed pose estimation to extract hindlimb joint angles during treadmill locomotion, while a lightweight edge AI model deployed on a microcontroller classifies gait phase and generates real-time phase-dependent stimulation triggers for closed-loop neuromodulation. The integrated system generalized to unseen SCI gait patterns without injury-specific retraining and enabled precise phase-locked biphasic stimulation in a bench-top closed-loop evaluation. This work demonstrates a low-latency, attachment-free sensing and control framework for gait-responsive neuromodulation, supporting future translation to wearable or implantable closed-loop neurorehabilitation systems.
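The on-device control idea reduces to a small classifier over joint angles gating a stimulation trigger. The sketch below uses a hypothetical two-feature decision rule in place of the paper's lightweight edge model, just to show the phase-locked gating shape.

```python
# Gait-phase gating sketch; the angle thresholds are illustrative only.
def gait_phase(hip_deg, ankle_deg):
    # Hypothetical rule: hip extension + ankle flexion taken as stance.
    return "stance" if hip_deg > 90 and ankle_deg < 70 else "swing"

def stim_trigger(phase, enabled=True):
    # Phase-locked stimulation delivered only during swing, for example.
    return enabled and phase == "swing"

for hip, ankle in [(95, 60), (80, 100)]:
    phase = gait_phase(hip, ankle)
    print(phase, "-> stimulate" if stim_trigger(phase) else "-> idle")
```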

39 pages, 2492 KB  
Systematic Review
Cloud, Edge, and Digital Twin Architectures for Condition Monitoring of Computer Numerical Control Machine Tools: A Systematic Review
by Mukhtar Fatihu Hamza
Information 2026, 17(2), 153; https://doi.org/10.3390/info17020153 - 3 Feb 2026
Viewed by 1021
Abstract
Condition monitoring has come to the forefront of intelligent manufacturing and is particularly important in Computer Numerical Control (CNC) machining processes, where reliability, precision, and productivity are crucial. Traditional monitoring methods, mostly premised on single sensors, localized data capture, and offline interpretation, are proving inadequate for modern machining processes. Limited in scale and computational power, and lacking real-time responsiveness, they fit poorly in dynamic, data-intensive production environments. Recent progress in the Industrial Internet of Things (IIoT), cloud computing, and edge intelligence has driven a shift toward distributed monitoring architectures capable of acquiring, processing, and interpreting large amounts of heterogeneous machining data. Such innovations have facilitated more adaptive decision-making approaches, which support predictive maintenance, enhance machining stability and tool lifespan, and enable data-driven optimization in manufacturing businesses. Following a structured literature search across major scientific databases, this systematic review synthesizes over 180 peer-reviewed studies selected using specific inclusion criteria and a PRISMA-guided screening process. It provides a comprehensive look at sensor technologies, data acquisition systems, cloud–edge–IoT frameworks, and digital twin implementations from an architectural perspective. At the same time, it identifies ongoing challenges related to industrial scalability, standardization, and deployment maturity. The combination of cloud platforms and edge intelligence is of particular interest, with emphasis on how the two balance computational load and latency and improve system reliability. The review synthesizes major advances in sensor technologies, data collection approaches, machine operations, machine learning and deep learning methods, and digital twins. The paper concludes with a comparative analysis of the current state of the art and reported industrial case applications, highlighting what can and cannot yet be achieved. Key issues, such as data inconsistency, lack of standardization, cyber threats, and legacy system integration, are critically analyzed. Lastly, emerging research directions are outlined, including hybrid cloud–edge intelligence, advanced AI models, and adaptive multisensory fusion oriented toward autonomous, self-evolving CNC monitoring systems in line with the Industry 4.0 and Industry 5.0 paradigms. Throughout, a PRISMA-guided approach to literature screening and qualitative synthesis keeps the review process transparent and repeatable.

48 pages, 798 KB  
Review
Utah FORGE: A Decade of Innovation—Comprehensive Review of Field-Scale Advances (Part 1)
by Amr Ramadan, Mohamed A. Gabry, Mohamed Y. Soliman and John McLennan
Processes 2026, 14(3), 512; https://doi.org/10.3390/pr14030512 - 2 Feb 2026
Viewed by 577
Abstract
Enhanced Geothermal Systems (EGS) extend geothermal energy beyond conventional hydrothermal resources but face challenges in creating sustainable heat exchangers in low-permeability formations. This review synthesizes achievements from the Utah Frontier Observatory for Research in Geothermal Energy (FORGE), a field laboratory advancing EGS readiness in 175–230 °C granitic basement. From 2017 to 2025, drilling, multi-stage hydraulic stimulation, and monitoring established feasibility and operating parameters for engineered reservoirs. Hydraulic connectivity was created between highly deviated wells with ~300 ft vertical separation via hydraulic and natural fracture networks, validated by sustained circulation tests achieving 10 bpm injection at 2–3 km depth. Advanced monitoring (DAS, DTS, and microseismic arrays) delivered fracture propagation diagnostics with ~1 m spatial resolution and temporal sampling up to 10 kHz. A data infrastructure of 300+ datasets (>133 TB) supports reproducible ML. Geomechanical analyses showed minimum horizontal stress gradients of 0.74–0.78 psi/ft and N–S to NNE–SSW fractures aligned with maximum horizontal stress. Near-wellbore tortuosity, driving treating pressures to 10,000 psi, underscores the need for completion design optimization, improved proppant transport in high-temperature conditions, and coupled thermo-hydro-mechanical models for long-term prediction, supported by AI platforms including an offline Small Language Model trained on Utah FORGE datasets.

32 pages, 2599 KB  
Article
Utilizing AIoT to Achieve Sustainable Agricultural Systems in a Climate-Change-Affected Environment
by Mohamed Naeem, Mohamed A. El-Khoreby, Hussein M. ELAttar and Mohamed Aboul-Dahab
Future Internet 2026, 18(2), 68; https://doi.org/10.3390/fi18020068 - 26 Jan 2026
Viewed by 992
Abstract
Smart agricultural systems are continually evolving to provide high-quality planting and defend against threats such as climate change, which necessitate improved adaptation and resource allocation. IoT technology offers a cost-effective approach to monitoring and managing system performance. However, this approach faces challenges, including connectivity issues and complex decision-making. While researchers have studied these problems individually, no fully automated solution has addressed them simultaneously. There is still a need for an offline solution that manages multiple processes and reduces human error. This paper introduces an AI-powered edge computing system that serves as an early-warning solution for climate impacts. This system enables autonomous management through an Agentic AI model that observes, predicts, decides, and adapts. It provides a low-cost AIoT platform for data forecasting, classification, and decision-making, converting sensor data into actionable insights. The system integrates forecast evaluation with real-time data comparisons to optimize scheduling, efficiency, sustainability, and yields. Moreover, the solution is fully autonomous and independent of internet connectivity. Demonstrating superior performance, it reduced errors by 50% and achieved an R-squared value of 0.985.
(This article belongs to the Topic Smart Edge Devices: Design and Applications)
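The observe–predict–decide–adapt loop described in this abstract can be sketched as a tiny offline control cycle; the sensor values, forecaster, and irrigation threshold below are illustrative stand-ins, not the paper's model.

```python
# Minimal agentic loop sketch: observe, predict, decide, all on-device.
def observe():
    return {"soil_moisture": 0.18, "temp_c": 34.0}   # stub sensor read

def predict(state):
    # Stand-in forecaster: assume moisture falls 0.02 per hot reading.
    return state["soil_moisture"] - 0.02 * (state["temp_c"] > 30)

def decide(forecast, threshold=0.20):
    return "irrigate" if forecast < threshold else "hold"

state = observe()
print(decide(predict(state)))   # acts without any cloud round-trip
```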

35 pages, 3075 KB  
Review
Agentic Artificial Intelligence for Smart Grids: A Comprehensive Review of Autonomous, Safe, and Explainable Control Frameworks
by Mahmoud Kiasari and Hamed Aly
Energies 2026, 19(3), 617; https://doi.org/10.3390/en19030617 - 25 Jan 2026
Viewed by 2302
Abstract
Agentic artificial intelligence (AI) is emerging as a paradigm for next-generation smart grids, enabling autonomous decision-making, adaptive coordination, and resilient control in complex cyber–physical environments. Unlike traditional AI models, which are typically static predictors or offline optimizers, agentic AI systems perceive grid states, reason about goals, plan multi-step actions, and interact with operators in real time. This review presents the latest advances in agentic AI for power systems, including architectures, multi-agent control strategies, reinforcement learning frameworks, digital twin optimization, and physics-based control approaches. The synthesis draws on recent literature to aggregate techniques that bridge the gap between theoretical development and practical implementation. The main application areas studied are voltage and frequency control, power quality improvement, fault detection and self-healing, coordination of distributed energy resources, electric vehicle aggregation, demand response, and grid restoration. We examine the most effective agentic AI techniques in each domain for achieving operational goals and enhancing system reliability. A systematic evaluation framework is proposed based on criteria such as stability, safety, interpretability, certification readiness, interoperability with grid codes, and readiness for field deployment. This framework is designed to help researchers and practitioners evaluate agentic AI solutions holistically and identify areas in which more research and development are needed. The analysis identifies important opportunities, such as hierarchical architectures for autonomous control, constraint-aware learning paradigms, and explainable supervisory agents, as well as challenges such as developing methodologies for formal verification, the availability of benchmark data, robustness to uncertainty, and building human operator trust. This study aims to provide a common point of reference for scholars and grid operators alike, giving detailed information on design patterns, system architectures, and potential research directions for pursuing the implementation of agentic AI in modern power systems.

48 pages, 8070 KB  
Article
ResQConnect: An AI-Powered Multi-Agentic Platform for Human-Centered and Resilient Disaster Response
by Savinu Aththanayake, Chemini Mallikarachchi, Janeesha Wickramasinghe, Sajeev Kugarajah, Dulani Meedeniya and Biswajeet Pradhan
Sustainability 2026, 18(2), 1014; https://doi.org/10.3390/su18021014 - 19 Jan 2026
Cited by 3 | Viewed by 1469
Abstract
Effective disaster management is critical for safeguarding lives, infrastructure and economies in an era of escalating natural hazards like floods and landslides. Despite advanced early-warning systems and coordination frameworks, a persistent “last-mile” challenge undermines response effectiveness: transforming fragmented and unstructured multimodal data into timely and accountable field actions. This paper introduces ResQConnect, a human-centered, AI-powered multimodal multi-agent platform that bridges this gap by directly linking incident intake to coordinated disaster response operations in hazard-prone regions. ResQConnect integrates three key components. It uses an agentic Retrieval-Augmented Generation (RAG) workflow in which specialized language-model agents extract metadata, refine queries, check contextual adequacy and generate actionable task plans using a curated, hazard-specific knowledge base. The contribution lies in structuring the RAG for correctness, safety and procedural grounding in high-risk settings. The platform introduces an Adaptive Event-Triggered (AET) multi-commodity routing algorithm that decides when to re-optimize routes, balancing responsiveness, computational cost and route stability under dynamic disaster conditions. Finally, ResQConnect deploys a compressed, domain-specific language model on mobile devices to provide policy-aligned guidance when cloud connectivity is limited or unavailable. Across realistic flood and landslide scenarios, ResQConnect improved overall task-quality scores from 61.4 to 82.9 (+21.5 points) over a standard RAG baseline, reduced solver calls by up to 85% compared to continuous re-optimization while remaining within 7–12% of optimal response time, and delivered fully offline mobile guidance with sub-500 ms response latency and 54 tokens/s throughput on commodity smartphones. Overall, ResQConnect demonstrates a practical and resilient approach to AI-augmented disaster response. From a sustainability perspective, the proposed system contributes to Sustainable Development Goal (SDG) 11 by improving the speed and coordination of disaster response. It also supports SDG 13 by strengthening adaptation and readiness for climate-driven hazards. ResQConnect is validated using real-world flood and landslide disaster datasets, ensuring realistic incidents, constraints and operational conditions.
(This article belongs to the Section Environmental Sustainability and Applications)
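The Adaptive Event-Triggered routing idea in this abstract, re-optimizing only when conditions have drifted enough to justify a solver call, can be shown as a simple gate. The drift metric and budget below are hypothetical; the paper's trigger logic is richer.

```python
# Event-triggered re-optimization gate: re-run the expensive routing
# solver only when observed cost drift exceeds a budget.
def should_reoptimize(cost_now, cost_at_last_solve, drift_budget=0.10):
    drift = abs(cost_now - cost_at_last_solve) / max(cost_at_last_solve, 1e-9)
    return drift > drift_budget   # otherwise keep the cached routes

print(should_reoptimize(118.0, 100.0))  # True: 18% drift, re-solve
print(should_reoptimize(104.0, 100.0))  # False: within budget, reuse routes
```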

16 pages, 328 KB  
Article
SemanticHPC: Semantics-Aware, Hardware-Conscious Workflows for Distributed AI Training on HPC Architectures
by Alba Amato
Information 2026, 17(1), 78; https://doi.org/10.3390/info17010078 - 12 Jan 2026
Viewed by 494
Abstract
High-Performance Computing (HPC) has become essential for training medium- and large-scale Artificial Intelligence (AI) models, yet two bottlenecks remain under-exploited: the semantic coherence of training data and the interaction between distributed deep learning runtimes and heterogeneous HPC architectures. Existing work tends to optimise multi-node, multi-GPU training in isolation from data semantics or to apply semantic technologies to data curation without considering the constraints of large-scale training on modern clusters. This paper introduces SemanticHPC, an experimental framework that integrates ontology and Resource Description Framework (RDF)-based semantic preprocessing with distributed AI training (Horovod/PyTorch Distributed Data Parallel) and hardware-aware optimisations for Non-Uniform Memory Access (NUMA), multi-GPU and high-speed interconnects. The framework has been evaluated on 1–8 node configurations (4–32 GPUs) on a production-grade cluster. Experiments on a medium-size Open Images V7 workload show that semantic enrichment improves validation accuracy by 3.5–4.4 absolute percentage points while keeping the additional end-to-end overhead below 8% and preserving strong scaling efficiency above 79% on eight nodes. We argue that bringing semantic technologies into the training workflow—rather than treating them as an offline, detached phase—is a promising direction for large-scale AI on HPC systems. We detail an implementation based on standard Python libraries, RDF tooling and widely adopted deep learning runtimes, and we discuss the limitations and practical hurdles that need to be addressed for broader adoption.
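The semantic-preprocessing step this abstract describes, filtering training items through RDF/ontology checks before they reach the distributed loader, might look like the following rdflib sketch. The tiny in-memory graph and the coherence rule are illustrative only, not the framework's actual pipeline.

```python
# RDF-based pre-filter sketch: keep only training items whose labels
# pass a (here trivially simple) semantic coherence check.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.img1, RDF.type, EX.Image))
g.add((EX.img1, EX.label, Literal("cat")))
g.add((EX.img2, RDF.type, EX.Image))
g.add((EX.img2, EX.label, Literal("???")))   # semantically incoherent label

keep = g.query("""
    SELECT ?img WHERE {
        ?img a <http://example.org/Image> ;
             <http://example.org/label> ?l .
        FILTER (?l != "???")
    }""")
coherent = [str(row.img) for row in keep]
print(coherent)   # only img1 survives; this list would feed the DataLoader
```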

27 pages, 3371 KB  
Article
An Airflow-Orchestrated AI Pipeline for Podcast Transcription, Topic Modeling, and Recommendation System
by Ioannis Kazlaris, Georgios Papadopoulos, Konstantinos Diamantaras, Marina Delianidi, Eftychia Touliou and Anagnostis Yenitzes
Multimedia 2026, 2(1), 1; https://doi.org/10.3390/multimedia2010001 - 9 Jan 2026
Viewed by 1325
Abstract
This study presents a production-ready AI pipeline for audio content processing, implemented within the Youth Radio platform, which serves as an extension of the European School Radio initiative. The system uses a multi-server architecture: an AI server that runs batch/offline jobs orchestrated by Apache Airflow, and two web servers that deliver the backend and frontend applications, configured with load balancing and redundancy to ensure high availability and fault tolerance. The AI pipeline includes tasks such as preprocessing, transcription, audio classification, and topic modeling. Processed podcasts are indexed in a Qdrant vector database to facilitate both dense and sparse retrieval, while a recommendation system enriches the user's experience. We summarize design choices and report system-level metrics and task-level indicators (ASR quality after correction, retrieval effectiveness) to guide similar deployments.
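The batch/offline pipeline shape described in this abstract maps naturally onto an Airflow DAG. The sketch below (Airflow 2.x API) uses stub callables for each stage; the task granularity and naming are assumptions, not the project's actual DAG.

```python
# Airflow DAG sketch for the described stages; bodies are stubs.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def stub(name):
    return lambda: print(f"running {name}")

with DAG("podcast_pipeline", start_date=datetime(2026, 1, 1),
         schedule="@daily", catchup=False) as dag:
    pre = PythonOperator(task_id="preprocess", python_callable=stub("preprocess"))
    asr = PythonOperator(task_id="transcribe", python_callable=stub("transcribe"))
    cls = PythonOperator(task_id="classify", python_callable=stub("classify"))
    top = PythonOperator(task_id="topic_model", python_callable=stub("topics"))
    idx = PythonOperator(task_id="index_qdrant", python_callable=stub("index"))
    # Transcription precedes classification and topic modeling; both must
    # finish before the podcast is indexed for retrieval.
    pre >> asr >> [cls, top] >> idx
```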
