Search Results (275)

Search Parameters:
Keywords = cloud–edge collaborative

29 pages, 11319 KB  
Article
Confidence-Aware Topology Identification in Low-Voltage Distribution Networks: A Multi-Source Fusion Method Based on Weakly Supervised Learning
by Siliang Liu, Can Deng, Zenan Zheng, Ying Zhu, Hongxin Lu and Wenze Liu
Energies 2026, 19(6), 1503; https://doi.org/10.3390/en19061503 - 18 Mar 2026
Abstract
The topology identification (TI) of low-voltage distribution networks (LVDNs) is the foundation for their intelligent operation and lean management. However, existing identification methods may produce inconsistent results under measurement noise, missing data, and heterogeneous load behaviors. Without principled fusion of multiple methods and meter-level confidence quantification, the reliability of the identification results is questionable in the absence of ground-truth topology. To address these challenges, a confidence-aware TI (Ca-TI) method for LVDNs based on weakly supervised learning (WSL) and Dempster–Shafer (D-S) evidence theory is proposed, aiming to infer each meter's latent topology connectivity label and quantify meter-level confidence without ground truth by fusing different identification methods. Specifically, within the data programming (DP) framework of WSL, different TI methods were modeled as labeling functions (LFs), and a weakly supervised label model (WSLM) was adopted to learn each method's error pattern and each meter's posterior responsibility. Within the framework of D-S evidence theory, an uncertainty-aware basic probability assignment (BPA) was constructed from each meter's posterior responsibility, with posterior uncertainty allocated to ignorance, and was further discounted according to the missing data rate. Subsequently, a consensus-calibrated conflict-gated (CCCG)-enhanced D-S fusion rule was proposed to aggregate the TI results of multiple methods, producing final TI decisions with meter-level confidence. Finally, tests were carried out in both simulated and actual low-voltage distribution transformer areas (LVDTAs), evaluating the robustness of the proposed method under various levels of measurement noise and missing data. The results indicate that the proposed method can effectively integrate the performances of various TI methods, is not adversely affected by extreme bias from any single method, and provides meter-level confidence for targeted on-site verification. Furthermore, an engineering deployment scheme with cloud–edge collaboration is discussed to support scalable implementation in utility environments.
(This article belongs to the Special Issue Application of Artificial Intelligence in Electrical Power Systems)
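For reference, the CCCG-enhanced fusion rule described above builds on Dempster's classical rule of combination, which can be sketched in a few lines. The hypothesis names and the two labeling-function mass assignments below are hypothetical, and the paper's consensus calibration and conflict gating are deliberately not reproduced:

```python
from itertools import product

# Frame of discernment for one meter's connectivity hypothesis.
# Mass may also sit on the full frame THETA, representing ignorance.
A, B = frozenset({"connected"}), frozenset({"disconnected"})
THETA = A | B

def ds_combine(m1, m2):
    """Classical Dempster's rule: fuse two basic probability assignments.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Returns the normalized combined BPA; raises on total conflict.
    """
    fused, conflict = {}, 0.0
    for (h1, w1), (h2, w2) in product(m1.items(), m2.items()):
        inter = h1 & h2
        if inter:
            fused[inter] = fused.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass landing on contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {h: w / (1.0 - conflict) for h, w in fused.items()}

# Two hypothetical TI methods voting on the same meter, each keeping
# some mass on ignorance (THETA):
m_voltage = {A: 0.7, B: 0.1, THETA: 0.2}
m_power   = {A: 0.6, B: 0.2, THETA: 0.2}
print(ds_combine(m_voltage, m_power))  # agreement sharpens mass on A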

25 pages, 1580 KB  
Article
A Study on the Cloud-Edge-Terminal Framework for Large Computing Models in New Power Systems
by Hualiang Fang, Ziyi Feng and Weibo Li
Energies 2026, 19(6), 1501; https://doi.org/10.3390/en19061501 - 18 Mar 2026
Abstract
With the rapid evolution of new power systems characterized by a high proportion of renewable energy, system operations have become increasingly random, variable, and uncertain. The system model exhibits features such as high dimensionality, multiple time scales, stochastic behavior, and nonlinearity. This paper proposes a large-scale computational power system model architecture based on cloud-edge-terminal collaboration. By defining functional roles within the cloud-edge-terminal structure and implementing a global model coordination mechanism, the approach enables an organic integration of global awareness, local adaptation, dynamic training, and online optimization for power system problem models. At the cloud level, various object models and the power grid topology are constructed. The edge generates typical problem models for the power system, while the terminal devices produce lightweight models adapted to local grids. This architecture supports collaborative modeling for key business scenarios such as power flow analysis, stability assessment, and reactive power optimization. The study focuses on training methods for distilled parameters within the terminal models to enhance their adaptability for real-world deployment in power systems. Simulation results demonstrate that the cloud-edge-terminal model offers excellent scalability, adaptability, and real-time performance for computations in new power systems, effectively supporting localized, intelligent operations and decision-making within the system.
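The abstract does not detail how the terminal models' distilled parameters are trained; one common formulation such a scheme might use is response-based knowledge distillation, sketched below with an assumed temperature and loss weighting:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend soft-target KL (teacher -> student) with hard-label cross-entropy.

    T scales the softmax ("temperature"); alpha weights the two terms.
    The T*T factor keeps soft-target gradients comparable across temperatures.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy check: a cloud-side "teacher" distilling into a terminal "student";
# the 3 classes here are hypothetical operating states.
teacher_logits = torch.randn(8, 3)
student_logits = torch.randn(8, 3, requires_grad=True)
labels = torch.randint(0, 3, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```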

14 pages, 4757 KB  
Article
Design and Implementation of an IoT-Based Low-Power Wearable EEG Sensing System for Home-Based Sleep Monitoring
by Ya Wang, Jun-Bo Chen and Yu-Ting Chen
Sensors 2026, 26(6), 1803; https://doi.org/10.3390/s26061803 - 12 Mar 2026
Abstract
Long-term home-based sleep monitoring requires wearable sensing devices that strictly balance signal precision with power constraints. This study presents the design and implementation of a low-noise, low-power wearable single-channel electroencephalography (EEG) system for automatic sleep staging. The hardware architecture integrates a TI ADS1298 analog front-end with an STM32F4 microcontroller, utilizing differential sampling and hardware-based filtering to effectively suppress power-line interference and baseline drift. System-level testing demonstrates an average power consumption of approximately 150.85 mW, enabling over 24.6 h of continuous operation on a 1000 mAh battery, which meets the requirements for overnight monitoring. To achieve accurate staging without draining the wearable's battery, we deployed a lightweight deep learning model, SleePyCo, which combines contrastive representation learning with temporal dependency modeling, on the cloud backend as part of an edge–cloud collaborative execution scheme. Validation on the ISRUC dataset yielded an overall accuracy of 79.3% ± 3.0%, with a notable F1-score of 88.3% for Deep Sleep (N3). Furthermore, practical field trials involving 10 healthy subjects verified the system's engineering stability, achieving a valid data rate exceeding 97% and a Bluetooth packet loss rate of only 0.8%. These results confirm that the proposed hardware–software co-designed system provides a robust, energy-efficient IoMT sensing solution for daily sleep health management.
(This article belongs to the Section Wearables)
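The quoted runtime is easy to sanity-check, assuming a typical 3.7 V nominal Li-ion cell voltage (the abstract gives only the 1000 mAh capacity):

```python
# Sanity check on the quoted runtime. The nominal cell voltage is an
# assumption; the abstract states only capacity (mAh) and average power.
capacity_mah = 1000.0
nominal_v = 3.7                   # assumed typical Li-ion nominal voltage
avg_power_mw = 150.85             # measured average power from the paper
energy_mwh = capacity_mah * nominal_v
print(energy_mwh / avg_power_mw)  # ~24.5 h, consistent with the quoted 24.6 h
```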

27 pages, 2344 KB  
Article
Cloud-Edge Resource Scheduling and Offloading Optimization Based on Deep Reinforcement Learning
by Lili Yin, Yunze Xie, Ze Zhao and Jie Gao
Sensors 2026, 26(5), 1704; https://doi.org/10.3390/s26051704 - 8 Mar 2026
Abstract
In the context of smart manufacturing, the widespread deployment of Industrial Internet of Things (IIoT) devices has produced a large number of latency-sensitive computation tasks with strict deadlines that require real-time processing. Effectively offloading tasks to address the increased latency and task dropouts caused by dynamic changes in edge node load has become a key challenge in the cloud–edge–end collaborative environment of smart manufacturing. To tackle the complex issues of unknown edge node loads and dynamic system state changes, this paper proposes a distributed algorithm based on deep reinforcement learning, utilizing convolutional neural networks (CNNs) and the Informer architecture. The proposed algorithm leverages the CNN to extract local features of edge node loads while utilizing Informer's self-attention mechanism to capture long-term load variation trends, thereby effectively handling the uncertainty and dynamics inherent in node loads. Furthermore, by integrating the Dueling Deep Q-Network (DQN) and Double DQN techniques, the algorithm achieves a precise approximation of the state–action value function, further enhancing its capability to perceive the system's temporal characteristics and adapt to heterogeneous tasks. Each mobile device can independently determine task offloading decisions and scheduling strategies based on its own observations, enabling dynamic task allocation and optimization of execution order. Simulation results show that, compared to various existing algorithms, the proposed method reduces task dropout rates by 82.3–94% and average latency by 28–39.2%. These results validate the significant advantages of the method in intelligent manufacturing scenarios with high-load, latency-sensitive tasks.
(This article belongs to the Section Internet of Things)
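As a point of reference for the value-function design mentioned above, here is a minimal sketch of the dueling decomposition combined with the Double DQN target. Feature dimensions, layer sizes, and the input features (standing in for the paper's CNN/Informer encoder) are placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling decomposition: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, feat_dim: int, n_actions: int):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.adv = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, feats):
        v, a = self.value(feats), self.adv(feats)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(reward, done, next_feats, online, target, gamma=0.99):
    """Double DQN: the online net picks the action, the target net evaluates it."""
    with torch.no_grad():
        best = online(next_feats).argmax(dim=1, keepdim=True)
        next_q = target(next_feats).gather(1, best).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

feats = torch.randn(4, 32)                 # placeholder encoder output
online, tgt = DuelingHead(32, 5), DuelingHead(32, 5)
r, d = torch.zeros(4), torch.zeros(4)
print(double_dqn_target(r, d, feats, online, tgt).shape)  # torch.Size([4])
```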

27 pages, 2849 KB  
Systematic Review
Intrusion Detection in Fog Computing: A Systematic Review of Security Advances and Challenges
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Computers 2026, 15(3), 169; https://doi.org/10.3390/computers15030169 - 5 Mar 2026
Abstract
Fog computing extends cloud services to the network edge to support low-latency IoT applications. However, since fog environments are distributed and resource-constrained, intrusion detection systems must be adapted to defend against cyberattacks while keeping computation and communication overhead minimal. This systematic review presents research on intrusion detection systems (IDSs) for fog computing and synthesizes advances and research gaps. The study was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. Scopus and Web of Science were searched in the title field using TITLE/TI = ("intrusion detection" AND "fog computing") for 2021–2025. The inclusion criteria were (i) 2021–2025 publications, (ii) journal or conference papers, (iii) English language, and (iv) open access availability; duplicates were removed programmatically using a DOI-first key with a title, year, and author alternative. The search identified 8560 records, of which 4905 were unique and included for qualitative grouping and bibliometric synthesis. Metadata (year, venue, authors, affiliations, keywords, and citations) were extracted and analyzed in Python to compute publication trends and collaboration patterns. Intrusion detection systems in fog networks were categorized as traditional/signature-based, machine learning, deep learning, or hybrid/ensemble. Hybrid and DL approaches reported accuracy ranging from 95 to 99% on benchmark datasets (such as NSL-KDD, UNSW-NB15, CIC-IDS2017, KDD99, and BoT-IoT). Notable bottlenecks included computational load relative to real-time latency on resource-constrained nodes, elevated false-positive rates for anomaly detection under concept drift, limited generalization to unseen attacks, privacy risks from centralizing data, and limited real-world validation. Bibliometric analyses highlighted the field's concentration in fast-turnaround, open-access journals such as IEEE Access and Sensors, as well as a small number of highly collaborative author clusters, alongside dominant terms such as "learning," "federated," "ensemble," "lightweight," and "explainability." Emerging directions include federated and distributed training to preserve privacy, as well as online/continual learning adaptation. Future work should include real-world evaluation on fog networks, ultra-lightweight yet adaptive hybrid IDSs, self-learning, and secure cooperative frameworks. These insights help researchers select appropriate IDS models for fog networks.
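The DOI-first deduplication key with a title/year/author fallback described above is simple to reproduce; here is a sketch under assumed field names and normalization rules (the review's exact implementation is not given):

```python
import re

def dedup_key(record: dict) -> str:
    """DOI-first dedup key with a title/year/first-author fallback.

    Mirrors the strategy described in the review; the field names and
    normalization (lowercasing, punctuation stripping) are assumptions.
    """
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return f"doi:{doi}"
    title = re.sub(r"[^a-z0-9]+", " ", (record.get("title") or "").lower()).strip()
    first_author = (record.get("authors") or [""])[0].lower()
    return f"tya:{title}|{record.get('year', '')}|{first_author}"

records = [
    {"doi": "10.3390/x1", "title": "A", "year": 2023, "authors": ["Smith"]},
    {"doi": "", "title": "Fog IDS survey", "year": 2024, "authors": ["Lee"]},
    {"doi": "10.3390/X1", "title": "A (reprint)", "year": 2023, "authors": ["Smith"]},
]
seen, unique = set(), []
for rec in records:
    key = dedup_key(rec)
    if key not in seen:
        seen.add(key)
        unique.append(rec)
print(len(unique))  # 2: the case-variant DOI duplicate is dropped
```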

15 pages, 1486 KB  
Review
Challenges of Space Debris Detection, Tracking, and Monitoring in Near-Earth Orbit: Overview of Current Status and Mitigation Strategies
by Motti Haridim, Assaf Shaked, Niv Cohen and Jacob Gavan
Information 2026, 17(3), 253; https://doi.org/10.3390/info17030253 - 3 Mar 2026
Abstract
The accumulation of space debris in near-Earth orbit, particularly in Low Earth Orbit (LEO), poses an increasing threat to satellite operations, communication infrastructures, and long-term space sustainability. As modern constellations expand and incorporate advanced satellite technologies, including sensing and wireless communications, artificial-intelligence-of-things (AIoT)-enabled payloads, and edge computing for on-orbit data processing, the risk profile grows. This paper reviews the current debris environment and existing sensing and monitoring techniques, highlights major collision events and deliberate debris-generating activities, and analyzes the role of both governmental and commercial satellite constellations in exacerbating and mitigating the challenges. Emerging space surveillance and tracking (SST) techniques, leveraging radar, optical sensors, and interferometric SAR for enhanced intelligence, surveillance, and reconnaissance (ISR), are highlighted alongside software-defined networking (SDN) approaches and cloud communication technology that enable coordinated debris-avoidance maneuvers. Key international regulatory frameworks, tracking architectures, and mitigation measures, including alignment with ISO 24113 standards, advanced TT&C capabilities, and evolving active debris removal technologies, are examined. The study emphasizes the necessity of a global, interoperable ecosystem that integrates AI/ML (artificial intelligence and machine learning)-driven situational awareness, secure SATCOM links with AJ/LPI/LPD (anti-jamming/low probability of interception/low probability of detection) characteristics, and collaborative protocols among space agencies, commercial operators, and regulatory bodies to ensure the sustainable use of orbital space for future generations.
(This article belongs to the Special Issue Sensing and Wireless Communications)

45 pages, 2170 KB  
Systematic Review
From Precision Agriculture to Intelligent Agricultural Ecosystems: A Systematic Review of Machine Learning and Big Data Applications
by Ania Cravero, Samuel Sepúlveda, Fernanda Gutiérrez and Lilia Muñoz
Agronomy 2026, 16(5), 516; https://doi.org/10.3390/agronomy16050516 - 27 Feb 2026
Abstract
This systematic review analyzes the evolution of Machine Learning and Big Data applications in agriculture from 2021 to 2025, with particular emphasis on how recent technological advances facilitate the transition from precision agriculture to Intelligent Agricultural Ecosystems (IAE). A comprehensive literature search was conducted across Scopus, Web of Science, IEEE Xplore, the ACM Digital Library, SpringerLink, and MDPI, following the PRISMA 2020 guidelines. After duplicate removal and a two-stage screening process (title/abstract screening followed by full-text assessment), eligible peer-reviewed studies were systematically extracted using a structured coding matrix encompassing six analytical domains: crops, soil, weather and water, land use, animal systems, and farmer decision-making. The findings reveal a substantial increase in ML-driven agricultural analytics. Although Random Forest and Convolutional Neural Networks remain widely adopted, recent studies demonstrate a marked shift toward advanced Deep Learning architectures, integrated cloud–edge–device infrastructures, Federated Learning frameworks for privacy-preserving collaboration, Explainable AI techniques to enhance transparency, and governance-oriented mechanisms to ensure interoperability. Notwithstanding these advances, several persistent challenges remain, including limited generalizability across diverse agroclimatic contexts, the high costs associated with high-quality data annotation, the integration of heterogeneous and multimodal datasets, and infrastructural constraints related to connectivity. These developments are synthesized within the IAE conceptual framework, underscoring governance- and lifecycle-aware MLOps orchestration as a critical differentiator that transcends purely technology-centric approaches.

19 pages, 1335 KB  
Article
Resource Allocation and Task Migration in DIMA-Oriented Mobile Edge Computing Systems
by Ning Wang, Liang Liu, Peng Wei, Meiyan Teng and Xiaolin Qin
Mathematics 2026, 14(5), 781; https://doi.org/10.3390/math14050781 - 26 Feb 2026
Abstract
Avionics systems are evolving from Integrated Modular Avionics (IMA) to Distributed Integrated Modular Avionics (DIMA), where distributed computing nodes are interconnected through real-time networks to support flexible resource sharing and latency-critical services. This architecture is highly consistent with the paradigm of Mobile Edge Computing (MEC), in which distributed edge resources collaboratively process computation workloads close to users to meet stringent real-time requirements. However, efficient task scheduling and migration remain key challenges in such distributed MEC platforms, since many existing approaches are designed for traditional centralized architectures and cannot effectively handle runtime workload dynamics and migration overheads. In this paper, we abstract the computing resource and task models for DIMA-oriented MEC systems and propose two algorithms: an Efficient Workload Scheduling Algorithm (EWSA) for workload placement and a Workload Migration Algorithm (WMA) for adaptive task relocation. CloudSim-based simulations show that the proposed methods significantly outperform the benchmark JIT-C approach in scheduling performance and migration efficiency, demonstrating their effectiveness for real-time distributed edge computing environments.
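EWSA and WMA themselves are not specified in the abstract; for orientation only, a generic earliest-finish-time greedy placement over heterogeneous nodes, with hypothetical node speeds and task sizes, might look like this:

```python
def greedy_place(tasks, nodes):
    """Assign each task to the node with the earliest estimated finish time.

    tasks: list of (task_id, cycles); nodes: dict node_id -> (speed_hz, ready_s),
    updated in place as tasks are queued. A generic baseline heuristic,
    NOT the paper's EWSA.
    """
    plan = {}
    for task_id, cycles in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        node_id = min(nodes, key=lambda n: nodes[n][1] + cycles / nodes[n][0])
        speed, ready = nodes[node_id]
        nodes[node_id] = (speed, ready + cycles / speed)  # node busy until finish
        plan[task_id] = node_id
    return plan

# Hypothetical DIMA nodes (clock speed in Hz, ready time in s) and workloads.
nodes = {"edge-A": (2.0e9, 0.0), "edge-B": (1.0e9, 0.0)}
tasks = [("flight-ctrl", 4e8), ("nav-fusion", 6e8), ("display", 2e8)]
print(greedy_place(tasks, nodes))
```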

18 pages, 14442 KB  
Review
5G Network Edge Intelligence for Smart Operation and Maintenance of Offshore Wind Power
by Yuqing Gao, Lingang Yang, Xialiang Zhu, Congxiao Jiang, Haoyu Wang, Shaonan You and Fangmin Xu
Sensors 2026, 26(4), 1390; https://doi.org/10.3390/s26041390 - 23 Feb 2026
Abstract
As global offshore wind power advances toward deeper, more distant waters, harsh Operation and Maintenance (O&M) environments, equipment heterogeneity, and the shortcomings of existing communication options (e.g., insufficient 4G bandwidth, high-latency and high-cost satellite communication) drive an urgent need for intelligent O&M. This paper reviews the development of far-reaching-sea smart wind farms and their intelligent service communication demands, studies 5G deployment schemes (hybrid networking, frequency selection, in-turbine coverage, and 5G custom networks) together with practical cases, discusses core edge intelligence applications (equipment monitoring, inspection, fault diagnosis, and digital twin integration), and constructs a "terminal-edge-cloud-network" integrated 5G edge intelligence architecture. It also summarizes the effects of key technologies, points out current challenges, and looks ahead to lightweight large language model deployment at the edge, providing references for implementing 5G edge intelligence in offshore wind power O&M.
(This article belongs to the Special Issue Artificial Intelligence and Edge Computing in IoT-Based Applications)

28 pages, 842 KB  
Review
AI-Driven Virtual Power Plants: A Comprehensive Review
by Jian Li, Chenxi Wang and Yonghe Liu
Energies 2026, 19(4), 1084; https://doi.org/10.3390/en19041084 - 20 Feb 2026
Abstract
The rapid proliferation of distributed energy resources (DERs), including photovoltaics, wind power, battery energy storage, and electric vehicles, has transformed traditional power systems into highly decentralized and data-rich environments. Virtual power plants (VPPs) have emerged as a key mechanism for aggregating these heterogeneous assets and enabling coordinated control, market participation, and grid-support functions. Recent advances in artificial intelligence (AI) have further elevated the scalability, autonomy, and responsiveness of VPP operations. This paper presents a comprehensive review of AI for VPPs, organized around a taxonomy of machine learning, deep learning, reinforcement learning, and hybrid approaches, and examines how these methods map to core VPP functions such as forecasting, scheduling, market bidding, aggregation, and ancillary services. In parallel, we analyze enabling architectural frameworks, including centralized cloud, distributed edge, hybrid cloud–edge collaboration, and emerging 5G/LEO satellite communication infrastructures, that support real-time data exchange and scalable deployment of intelligent control. By integrating methodological, functional, and architectural perspectives, this review highlights the evolution of VPPs from rule-based coordination to intelligent, autonomous energy ecosystems. Key research challenges are identified in data quality, model interpretability, multi-agent scalability, cyber-physical resilience, and the integration of AI with digital twins and edge-native computation. These findings outline promising directions for next-generation intelligent VPPs capable of delivering secure, flexible, and self-optimizing DER aggregation at scale.
(This article belongs to the Collection Review Papers in Energy and Environment)

27 pages, 10069 KB  
Article
Accelerating CNN Inference via In-Network Computing in Information-Centric Networking
by Kaiwei Hu, Haojiang Deng and Botao Ma
Electronics 2026, 15(4), 775; https://doi.org/10.3390/electronics15040775 - 11 Feb 2026
Abstract
Although Convolutional Neural Networks (CNNs) have achieved remarkable accuracy in intelligent tasks, their increasing complexity hinders low-latency execution. While edge computing mitigates the wide-area network delays typical of cloud-based inference, it remains constrained by limited computational resources when processing complex models under high concurrency. Collaborative inference has emerged as a promising paradigm to address these limitations; however, existing approaches often struggle with rigid routing, limited scalability, and inefficient resource utilization. In this paper, we propose a novel collaborative inference acceleration mechanism that integrates In-Network Computing (INC) within an Information-Centric Networking (ICN) framework. By leveraging the name-based resolution capability of ICN, our approach dynamically harnesses underutilized computational resources across distributed INC nodes, enabling flexible layer-wise offloading that transcends the limitations of static IP paths. Furthermore, a distributed decision-making and node-selection algorithm is designed to orchestrate CNN layer assignment based on real-time network conditions and node workloads. Extensive simulations on representative models demonstrate the effectiveness of the proposed method. Specifically, for the computationally intensive VGG16 model under high concurrency, the average task completion time is reduced by 43.3% and 60.2% relative to IP-based and Edge-Cloud baselines, respectively, with a load balancing fairness index maintained above 0.86.
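Layer-wise offloading decisions of this kind are typically driven by a per-split latency estimate: local compute up to the split point, transfer of the intermediate activation, then remote compute. A schematic sketch with illustrative timings and sizes (none taken from the paper, and ignoring the ICN routing layer):

```python
def best_split(layer_ms_local, layer_ms_remote, act_bytes, bw_bytes_per_s, rtt_s=0.0):
    """Pick the layer index k after which to hand inference to a remote node.

    Split at k: layers [0..k) run locally, [k..n) remotely; act_bytes[k] is the
    data shipped at split k (raw input for k=0, layer-k input activations
    otherwise). k=n means fully local. Illustrative cost model only.
    """
    n = len(layer_ms_local)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        local = sum(layer_ms_local[:k]) / 1e3
        remote = sum(layer_ms_remote[k:]) / 1e3
        tx = (act_bytes[k] / bw_bytes_per_s + rtt_s) if k < n else 0.0
        total = local + tx + remote
        if total < best_t:
            best_k, best_t = k, total
    return best_k, best_t

# Toy 4-layer CNN: early layers emit large activations, later ones small,
# and the remote node is ~10x faster than the local device.
local_ms  = [20, 30, 300, 200]
remote_ms = [2, 3, 30, 20]
acts = [600_000, 400_000, 80_000, 10_000]  # bytes shipped at each split point
print(best_split(local_ms, remote_ms, acts, bw_bytes_per_s=1_000_000, rtt_s=0.01))
```

In this toy profile the best split falls after the large early activations have shrunk, the usual pattern for convolutional backbones.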

22 pages, 3535 KB  
Article
Bridge Health Monitoring and Assessment in Industry 5.0: Lessons Learned from Long-Term Real-Time Field Monitoring of Highway Bridges
by Prakash Bhandari, Shinae Jang, Song Han and Ramesh B. Malla
Infrastructures 2026, 11(2), 55; https://doi.org/10.3390/infrastructures11020055 - 7 Feb 2026
Abstract
The rapid aging of bridges has increased interest in real-time, data-driven monitoring for predictive maintenance and safety management; however, practical deployment on in-service bridges remains limited. This paper presents lessons learned from long-term field deployment of real-time bridge joint monitoring systems on three in-service highway bridges and demonstrates how these insights can support the transition toward Industry 5.0. A unified framework is introduced to integrate key enabling technologies, including Internet of Things (IoT), digital twins, and artificial intelligence (AI), into a practical, human-centric monitoring architecture. Best practices for achieving durable, site-compliant, and cost-effective system design are summarized, with emphasis on sensor selection, wireless communication strategies, modular system development, and maintaining seamless operation. The development of a Docker-based analytics and visualization platform illustrates how interactive dashboards enhance human–machine collaboration and support informed decision-making. The role of advanced analytical tools, including digital twins, AI, and statistical modeling, in providing reliable structural assessments is highlighted, along with guidance on balancing cloud and edge computing for energy-efficient performance under constraints such as limited power, weather exposure, and site accessibility. Overall, the findings support the development of scalable, resilient, and human-centric real-time monitoring systems that advance data-driven decision-making and directly contribute to the realization of Industry 5.0 objectives in bridge health management.

26 pages, 12359 KB  
Review
On-Board Implementation of Thermal Runaway Detection in Lithium-Ion Battery Packs: Methods, Metrics, and Challenges
by Run-Yu Yu, Bing-Chuan Wang and Yong Wang
Energies 2026, 19(3), 858; https://doi.org/10.3390/en19030858 - 6 Feb 2026
Abstract
Effective thermal runaway (TR) detection is critical for the safety of lithium-ion battery packs, particularly in electric vehicles. However, deploying laboratory-validated methods in resource-constrained battery management systems (BMSs) presents significant engineering challenges. This review surveys the state of the art in on-board TR monitoring, with an emphasis on the practical constraints of automotive applications. We first examine available precursor signals, including thermal, electrical, gas, and acoustic emissions, and evaluate their trade-offs regarding response speed and integration complexity. Second, diagnostic algorithms, from threshold-based logic to deep learning, are assessed against key performance metrics such as computational latency, false alarm rates, and lead time. The review then discusses essential deployment considerations, including model compression techniques, inference hardware architectures, and compliance with functional safety standards, as well as the implementation challenges of multi-modal data fusion, with a particular focus on the constraints imposed by limited hardware resources and long-term sensor reliability. Future directions regarding data standardization and cloud-edge collaboration are also discussed.
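Of the surveyed approaches, threshold-based logic is the cheapest to run on a BMS. A minimal two-condition detector on cell temperature, with illustrative (not recommended) thresholds, shows the false-alarm/lead-time trade-off the review's metrics capture:

```python
def tr_alarm(temps_c, dt_s=1.0, t_abs=60.0, dtdt=2.0, consecutive=3):
    """Flag a possible thermal-runaway precursor from a temperature stream.

    Trips when temperature exceeds t_abs (deg C) OR its rate of rise exceeds
    dtdt (deg C/s) for `consecutive` samples in a row. Thresholds here are
    illustrative, not values recommended by the review.
    """
    hits = 0
    prev = temps_c[0]
    for i, t in enumerate(temps_c[1:], start=1):
        rate = (t - prev) / dt_s
        prev = t
        hits = hits + 1 if (t > t_abs or rate > dtdt) else 0
        if hits >= consecutive:
            return i  # sample index at which the alarm fires
    return None

stream = [25, 25.2, 25.1, 28, 33, 39, 46, 54, 63]  # deg C, sampled at 1 Hz
print(tr_alarm(stream))  # fires a few samples into the rapid rise
```

Raising `consecutive` suppresses false alarms from sensor noise at the cost of lead time, which is exactly the tension between the metrics discussed above.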

28 pages, 4967 KB  
Article
Sustainable and Reliable Smart Grids: An Abnormal Condition Diagnosis Method for Low-Voltage Distribution Nodes via Multi-Source Domain Deep Transfer Learning and Cloud-Edge Collaboration
by Dongli Jia, Tianyuan Kang, Xueshun Ye, Jun Zhou and Zhenyu Zhang
Sustainability 2026, 18(3), 1550; https://doi.org/10.3390/su18031550 - 3 Feb 2026
Abstract
The transition toward sustainable and resilient new-type power systems requires robust diagnostic frameworks for terminal power supply units to ensure continuous grid stability. To ensure the resilience of modern power systems, this paper proposes a multi-source domain deep transfer learning method for the abnormal condition diagnosis of low-voltage distribution nodes within a cloud-edge collaborative framework. This approach integrates feature selection based on the Categorical Boosting (CatBoost) algorithm with a hybrid architecture combining a Convolutional Neural Network (CNN) and a Residual Network (ResNet). Additionally, it utilizes a multi-loss adaptation strategy consisting of Multi-Kernel Maximum Mean Discrepancy (MK-MMD), Local Maximum Mean Discrepancy (LMMD), and Mean Squared Error (MSE) terms to effectively bridge domain gaps and ensure diagnostic consistency. By balancing global commonality with local adaptation, the framework optimizes resource efficiency, reducing collaborative training time by 19.3%. Experimental results confirm that the method effectively prevents equipment failure, achieving diagnostic accuracies of 98.29% for low-voltage anomalies and 88.96% for three-phase imbalance conditions.
(This article belongs to the Special Issue Microgrids, Electrical Power and Sustainable Energy Systems)
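MK-MMD measures the discrepancy between source- and target-domain feature distributions under a mixture of kernels. A biased-estimator sketch with assumed Gaussian bandwidths follows; the paper's kernel choices and the LMMD/MSE terms of its multi-loss strategy are not reproduced:

```python
import torch

def mk_mmd(x_src, x_tgt, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    """Biased MK-MMD^2 estimate between two feature batches, using a sum of
    Gaussian kernels. Bandwidths are assumptions, not the paper's settings.
    """
    x = torch.cat([x_src, x_tgt], dim=0)
    d2 = torch.cdist(x, x).pow(2)                     # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * b * b)) for b in bandwidths)
    n = x_src.size(0)
    k_ss, k_tt, k_st = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

src = torch.randn(64, 16)          # source-domain features
tgt = torch.randn(64, 16) + 1.0    # target domain with a mean shift
print(float(mk_mmd(src, src)), float(mk_mmd(src, tgt)))  # 0.0 vs clearly > 0
```

Minimizing such a term during training pulls the two feature distributions together, which is the role the MK-MMD loss plays in the multi-loss strategy above.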

39 pages, 2492 KB  
Systematic Review
Cloud, Edge, and Digital Twin Architectures for Condition Monitoring of Computer Numerical Control Machine Tools: A Systematic Review
by Mukhtar Fatihu Hamza
Information 2026, 17(2), 153; https://doi.org/10.3390/info17020153 - 3 Feb 2026
Abstract
Condition monitoring has come to the forefront of intelligent manufacturing and is particularly important in Computer Numerical Control (CNC) machining, where reliability, precision, and productivity are crucial. Traditional monitoring methods, mostly premised on single sensors, localized data capture, and offline interpretation, are proving inadequate for current machining processes. Limited in scale and computational power, and lacking real-time responsiveness, they fit poorly in dynamic, data-intensive production environments. Recent progress in the Industrial Internet of Things (IIoT), cloud computing, and edge intelligence has driven a shift toward distributed monitoring architectures capable of acquiring, processing, and interpreting large amounts of heterogeneous machining data. These innovations have facilitated more adaptive decision-making, supporting predictive maintenance and improving machining stability, tool lifespan, and data-driven optimization in manufacturing businesses. This systematic review synthesizes over 180 peer-reviewed studies identified through a structured search of major scientific databases, applying specific inclusion criteria and a PRISMA-guided screening and qualitative synthesis process that keeps the review transparent and repeatable. It provides a comprehensive look at sensor technologies, data acquisition systems, cloud–edge–IoT frameworks, and digital twin implementations from an architectural perspective, while identifying ongoing challenges related to industrial scalability, standardization, and deployment maturity. The combination of cloud platforms and edge intelligence is of particular interest, with emphasis on how the two balance computational load and latency and improve system reliability. The review synthesizes major advances in sensor technologies, data collection approaches, machine operations, machine learning and deep learning methods, and digital twins, and offers a comparative analysis of what is currently achievable, drawing on reported industrial case applications. Key issues, such as data inconsistency, lack of standardization, cyber threats, and legacy system integration, are critically analyzed. Finally, new research directions are outlined, including hybrid cloud–edge intelligence, advanced AI models, and adaptive multisensory fusion oriented toward autonomous, self-evolving CNC monitoring systems in line with the Industry 4.0 and Industry 5.0 paradigms.
