Search Results (65)

Search Parameters:
Keywords = self-adaptive thresholding approach

20 pages, 1721 KB  
Review
Type A Aortic Dissection: From Diagnosis to Cardiac Rehabilitation
by Monica Loguercio, Maria Grazia Romeo, Buket Akinci, Cristina Andreea Adam, Irfan Ullah, Marta Supervía, Giancarlo Trimarchi, Natalia Świątoniowska-Lonc, Federica Fogacci and Francesco Perone
J. Clin. Med. 2026, 15(7), 2749; https://doi.org/10.3390/jcm15072749 - 5 Apr 2026
Viewed by 207
Abstract
Acute type A aortic dissection is a life-threatening condition requiring emergency surgery and complex postoperative management. Although survival rates have improved, many patients experience long-term functional impairments, reduced quality of life, and an elevated risk of complications. Despite strong evidence supporting cardiac rehabilitation in other cardiovascular populations, structured programs remain underutilized in patients with surgically resolved acute type A aortic dissection. Exercise-based cardiac rehabilitation appears feasible and can be delivered safely in carefully selected patients when appropriately adapted to individual needs and conducted under close supervision. Postoperative patients are often physically deconditioned, prone to hospital-acquired disability, and may misjudge exercise intensity. Therefore, individualized exercise prescription, guided by exercise testing when available, is important to support safe training thresholds. Early and gradual introduction of physical activity may help prevent complications associated with immobility, support blood pressure control, and contribute to improvements in functional capacity. However, training volume should be purposefully lower than in conventional program settings to reduce hemodynamic stress. Education on safe exercise parameters and self-monitoring plays a central role in enabling long-term adherence and promoting patient autonomy. Cardiac rehabilitation programs should incorporate dietary, nutritional, and psychological support. Although evidence specific to this patient population remains limited, available data suggest the feasibility and potential benefits of cardiac rehabilitation when delivered with appropriate precautions. Our review underscores the need for a tailored, multidisciplinary cardiac rehabilitation approach aimed at enhancing physical recovery, supporting cardiovascular stability, and improving overall quality of life in patients following surgery. Further research is required to define optimal program protocols.
(This article belongs to the Special Issue Diagnosis and Treatment of Aortic Dissection: Experts' Views)

29 pages, 3794 KB  
Article
Coupling Coordination and Driving Mechanisms Between Digital Productivity and High-Quality Development of the Energy Industry: Evidence from Guizhou, China
by Chengbin Yu, Ke Ding and Langang Feng
Sustainability 2026, 18(7), 3490; https://doi.org/10.3390/su18073490 - 2 Apr 2026
Viewed by 273
Abstract
In the context of the global dual-carbon goals and China’s digital productivity (DP) strategy, strengthening the coupling between DP and the high-quality development of the energy industry (HQDEI) is essential for resource-based regions. Doing so can help these regions overcome transition constraints and advance green, low-carbon development. Using panel data for nine prefecture-level cities in Guizhou Province from 2014 to 2023, we construct composite indices for DP and HQDEI with an improved entropy-weight TOPSIS approach. We then characterize their spatiotemporal evolution using a coupling coordination degree (CCD) model and kernel density estimation. Finally, we examine the determinants of coupling coordination through panel regression and threshold models. The results show that: (1) The CCD between DP and HQDEI continues to increase, with regional differences displaying a periodic convergence–divergence pattern and a spatial structure characterized by core agglomeration and outward diffusion. Gradient disparities in coordinated development are evident between central and peripheral areas. (2) Consumption upgrading and fiscal self-sufficiency significantly promote coupling coordination, whereas a traditional resource-dependent growth model significantly suppresses it. Constrained by short-term adaptation and integration costs, digital innovation currently exerts a negative effect, and its enabling potential has not yet been fully realized. (3) Nonlinear tests identify a single digital-infrastructure threshold: the enabling effect of digital innovation turns positive only once infrastructure surpasses a critical level, revealing pronounced interval heterogeneity. This study advances the theoretical understanding of the bidirectional coupling between DP and HQDEI, provides empirical guidance for energy digital transformation and high-quality development in resource-based regions of western China, and offers transferable insights for green, low-carbon transitions in traditional energy regions worldwide.
(This article belongs to the Section Energy Sustainability)
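The coupling coordination degree model used above has a standard two-system form. Below is a minimal sketch of that common formulation, assuming the DP and HQDEI composite indices are already scaled to [0, 1] with equal contribution weights; the paper's exact specification may differ.

```python
import numpy as np

def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    """Standard two-system coupling coordination degree (CCD).

    u1, u2: composite indices (e.g., DP and HQDEI) scaled to [0, 1].
    alpha, beta: contribution weights; equal weighting assumed here.
    """
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    c = 2.0 * np.sqrt(u1 * u2) / (u1 + u2 + 1e-12)  # coupling degree C
    t = alpha * u1 + beta * u2                      # comprehensive index T
    return np.sqrt(c * t)                           # coordination degree D

# Example: DP = 0.62, HQDEI = 0.48 -> D is roughly 0.74
print(coupling_coordination(0.62, 0.48))
```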

23 pages, 5616 KB  
Article
Informer–UNet: A Hybrid Deep Learning Framework for Multi-Point Soil Moisture Prediction and Precision Irrigation in Winter Wheat
by Dingkun Zheng, Chenghan Yang, Gang Zheng, Baurzhan Belgibaev, Madina Mansurova, Sholpan Jomartova and Baidong Zhao
Agriculture 2026, 16(6), 648; https://doi.org/10.3390/agriculture16060648 - 12 Mar 2026
Viewed by 404
Abstract
Soil moisture prediction is essential for precision irrigation in water-limited agricultural systems. This study presents a deep learning-driven irrigation framework for winter wheat, integrating a novel Informer–UNet model with a Comprehensive Irrigation Index for adaptive water management. The Informer–UNet combines ProbSparse self-attention mechanisms with UNet’s multi-scale feature fusion, enabling simultaneous prediction of soil moisture at 27 monitoring points across three depths (10, 30, and 50 cm), while quantifying prediction uncertainty through Monte Carlo Dropout. A Comprehensive Irrigation Index incorporating moisture deviation, spatial variance, and confidence interval width was developed, with weights optimized via genetic algorithm. Field experiments were conducted in Chengdu, China, over two winter wheat growing seasons. The Informer–UNet achieved superior prediction accuracy (R² > 0.98, RMSE < 0.65) compared to LSTM, Transformer, and standard Informer models, with the fastest convergence and lowest validation loss. The proposed DeepIndexIrr strategy maintained soil moisture within the target range (55% to 75%) for over 81% of the irrigation period, reducing water consumption by 38.2% compared to fixed-threshold control and 19.2% compared to expert manual scheduling. These results demonstrate that integrating spatially distributed deep learning predictions with uncertainty-informed decision rules offers a promising approach for sustainable precision irrigation.
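The Monte Carlo Dropout step mentioned here has a compact generic form: keep dropout active at inference and aggregate repeated stochastic passes. A minimal PyTorch sketch, assuming a trained `model` whose stochasticity comes only from `torch.nn.Dropout` layers (the paper's Informer–UNet specifics are not reproduced):

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo Dropout: run several stochastic forward passes with
    dropout active and summarize them as a mean prediction plus an
    uncertainty estimate (the std feeds a confidence-interval width)."""
    model.eval()
    for m in model.modules():          # re-enable only dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```

The std output is what an index like the Comprehensive Irrigation Index above could consume as its confidence-interval-width term.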

26 pages, 7392 KB  
Article
A CLIP-Based Zero-Shot Photovoltaic Segmentation Framework for Remote Sensing Imagery
by Hailong Li, Man Zhao, Lu Bai, Yan Liu, Xiaoqing He, Liangfu Chen, Jinhua Tao, Guangyan He and Zhibao Wang
Remote Sens. 2026, 18(6), 865; https://doi.org/10.3390/rs18060865 - 11 Mar 2026
Viewed by 376
Abstract
In photovoltaic remote sensing image segmentation tasks, fully supervised methods can achieve high accuracy. However, the high cost of pixel-level annotation significantly limits their scalability in large-scale scenarios. To overcome this annotation bottleneck, this paper proposes a zero-shot cross-modal segmentation framework based on the visual-language pre-trained foundation model (CLIP). This approach harnesses CLIP’s cross-modal knowledge transfer capabilities to achieve precise extraction of photovoltaic targets without requiring any downstream training. This paper first introduces the Layer-wise Augmented Residual Attention (LARA) mechanism to enhance fine-grained detail representation in the feature space. Subsequently, a Cross-modal Semantic Attribution Module (CMSA) is designed to generate precise activation maps by leveraging image-text alignment gradient information. Finally, the Confidence-Aware Refinement Strategy (CARS) replaces the conventional training-based denoising process, directly producing high-quality binary segmentation masks through adaptive thresholding. Comparative experiments were conducted to evaluate the proposed method against various baselines using several public datasets with varying resolutions over Jiangsu Province, including Unmanned Aerial Vehicle (UAV) imagery, Beijing-2, and Gaofen-2, as well as a self-created Sentinel-2 dataset covering multiple countries. Notably, the proposed method achieved an IoU of 70.3% on the Gaofen-2 PV03 dataset with a spatial resolution of approximately 0.3 m and 50.8% on the self-created PV_Sentinel-2 dataset with a spatial resolution of 10 m. Experimental results demonstrate that our proposed approach maintains excellent cross-domain generalisation capabilities while reducing annotation costs, thereby providing an efficient and viable technical pathway for the automated monitoring of large-scale photovoltaic facilities.
(This article belongs to the Section AI Remote Sensing)
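The paper's CARS module is not reproduced here, but a common training-free stand-in for turning an activation map into a binary mask is Otsu's adaptive threshold. A minimal sketch, assuming an activation map already normalized to [0, 1]:

```python
import numpy as np

def otsu_mask(act_map, bins=256):
    """Generic adaptive threshold (Otsu's method) that picks the cut
    maximizing between-class variance, then binarizes the map. A
    stand-in illustration, not the paper's CARS rule."""
    hist, edges = np.histogram(act_map.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability mass
    mu = np.cumsum(p * edges[:-1])          # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    t = edges[np.nanargmax(sigma_b)]        # best threshold
    return act_map >= t
```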

20 pages, 8389 KB  
Article
SREF: Semantics-Refined Feature Extraction for Long-Term Visual Localization
by Danfeng Wu, Kaifeng Zhu, Heng Shi, Fenfen Zhou and Minchi Kuang
J. Imaging 2026, 12(2), 85; https://doi.org/10.3390/jimaging12020085 - 18 Feb 2026
Viewed by 417
Abstract
Accurate and robust visual localization under changing environments remains a fundamental challenge in autonomous driving and mobile robotics. Traditional handcrafted features often degrade under long-term illumination and viewpoint variations, while recent CNN-based methods, although more robust, typically rely on coarse semantic cues and remain vulnerable to dynamic objects. In this paper, we propose a fine-grained semantics-guided feature extraction framework that adaptively selects stable keypoints while suppressing dynamic disturbances. A fine-grained semantic refinement module subdivides coarse semantic categories into stability-homogeneous sub-classes, and a dual-attention mechanism enhances local repeatability and semantic consistency. By integrating physical priors with self-supervised clustering, the proposed framework learns discriminative and reliable feature representations. Extensive experiments on the Aachen and RobotCar-Seasons benchmarks demonstrate that the proposed approach achieves state-of-the-art accuracy and robustness while maintaining real-time efficiency, effectively bridging coarse semantic guidance with fine-grained stability estimation. Quantitatively, our method achieves strong localization performance on Aachen (up to 88.1% at night under the (0.2°, 0.25 m) threshold) and on RobotCar-Seasons (up to 57.2%/28.4% under the same threshold for day/night), demonstrating improved robustness to seasonal and illumination changes.
(This article belongs to the Section Computer Vision and Pattern Recognition)
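The percentages quoted above are pose recalls: the fraction of query images localized within a rotation/translation error pair. A minimal sketch of that evaluation, assuming per-query errors against ground truth are already available:

```python
import numpy as np

def pose_recall(rot_err_deg, trans_err_m, rot_thresh=0.2, trans_thresh=0.25):
    """Fraction of queries whose estimated pose falls within both the
    rotation and translation thresholds, e.g., the (0.2 deg, 0.25 m)
    bucket quoted in the abstract above."""
    rot = np.asarray(rot_err_deg)
    trans = np.asarray(trans_err_m)
    return ((rot <= rot_thresh) & (trans <= trans_thresh)).mean()
```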

25 pages, 2045 KB  
Article
A Comparative Analysis of Self-Aware Reinforcement Learning Models for Real-Time Intrusion Detection in Fog Networks
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Future Internet 2026, 18(2), 100; https://doi.org/10.3390/fi18020100 - 14 Feb 2026
Viewed by 478
Abstract
Fog computing extends cloud services to the network edge, enabling low-latency processing for Internet of Things (IoT) applications. However, this distributed approach is vulnerable to a wide range of attacks, necessitating advanced intrusion detection systems (IDSs) that operate under resource constraints. This study proposes integrating self-awareness (online learning and concept drift adaptation) into a lightweight reinforcement learning (RL)-based IDS for fog networks and quantitatively comparing it with non-RL static-threshold and bandit-based approaches in real time. Two novel self-aware RL models, the Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (HATS-RL) model and the Federated Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (F-HATS-RL) model, were proposed for real-time intrusion detection in a fog network. These self-aware RL policies integrated online uncertainty estimation and concept-drift detection to adapt to evolving attacks. The RL models were benchmarked against the static threshold (ST) model and a widely adopted linear bandit (Linear Upper Confidence Bound/LinUCB). A realistic fog network simulator with heterogeneous nodes and streaming traffic, including multi-type attack bursts and gradual concept drift, was established. The models’ detection performance was compared using metrics including latency, energy consumption, detection accuracy, and the area under the precision–recall curve (AUPR) and the area under the receiver operating characteristic curve (AUROC). Notably, the federated self-aware agent (F-HATS-RL) achieved the best AUROC (0.933) and AUPR (0.857), with a latency of 0.27 ms and the lowest energy consumption of 0.0137 mJ, indicating its ability to detect intrusions in fog networks with minimal energy. The findings suggest that self-aware RL agents can detect attack methods that evolve with the traffic dynamics and adapt accordingly, resulting in more stable long-term performance. By contrast, a static model’s accuracy degrades under drift.
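Thompson sampling, the primitive underlying HATS-RL, is simple to sketch. A minimal Beta–Bernoulli version choosing among K candidate detection actions (the paper's hierarchical and federated layers are not reproduced; names and defaults here are illustrative):

```python
import numpy as np

class ThompsonDetector:
    """Beta-Bernoulli Thompson sampling over K candidate detection
    actions (e.g., alert-threshold levels). Posterior sampling trades
    off exploring uncertain actions against exploiting known-good ones."""
    def __init__(self, k):
        self.alpha = np.ones(k)   # pseudo-counts of correct decisions
        self.beta = np.ones(k)    # pseudo-counts of incorrect decisions

    def select(self):
        # Sample one plausible success rate per arm; act greedily on it.
        return int(np.argmax(np.random.beta(self.alpha, self.beta)))

    def update(self, arm, reward):
        # reward: 1 if the decision was later judged correct, else 0.
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward
```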

25 pages, 1563 KB  
Article
BERT-LogAnom: Enhancing Log Anomaly Detection with Gated Residual BiLSTM and Dynamic Thresholding
by Xi Lu, Shufan An, Jingmei Chen, Zhan Shu, Weiping Wang, Runyi Qi and Yapeng Diao
Electronics 2026, 15(4), 806; https://doi.org/10.3390/electronics15040806 - 13 Feb 2026
Viewed by 412
Abstract
As modern software systems continue to grow in scale and structural complexity, log anomaly detection has become an essential component of system monitoring and fault diagnosis. However, existing approaches often struggle to adequately capture sequential dependencies in log data and to remain robust under distributional changes. To mitigate these issues, this paper presents BERT-LogAnom, an unsupervised framework for log anomaly detection that combines contextual representation learning, sequential modeling, and adaptive decision mechanisms. Specifically, a BERT-based encoder is employed to learn global contextual semantics from log sequences, while a gated residual bidirectional Long Short-Term Memory (GR-BiLSTM) network is introduced to model bidirectional temporal dependencies without disrupting the learned contextual information. To characterize normal system behavior from unlabeled logs, two self-supervised objectives—masked log key prediction and volume hypersphere minimization—are jointly optimized during training. Furthermore, a Dynamic Thresholding Prediction Module (DTPM) is incorporated to adjust anomaly decision boundaries in response to short-term statistical fluctuations and longer-term distribution drift. Experiments conducted on three public benchmark datasets (HDFS, BGL, and Thunderbird) show that BERT-LogAnom achieves consistently superior performance compared with representative baseline methods across precision, recall, and F1-score. Additional ablation studies further confirm the contribution of each major component in the proposed framework.
(This article belongs to the Section Computer Science & Engineering)
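As a rough illustration of what a dynamic thresholding module of this kind does, the sketch below maintains a decision boundary from a rolling window of scores (short-term fluctuations) plus a slow exponential moving average (long-term drift). It is an assumption-laden stand-in, not the paper's DTPM:

```python
from collections import deque

class DynamicThreshold:
    """Streaming anomaly threshold: a slow EMA baseline tracks drift
    while a rolling window supplies the short-term spread."""
    def __init__(self, window=200, k=3.0, drift_rate=0.001):
        self.scores = deque(maxlen=window)
        self.k, self.drift_rate, self.baseline = k, drift_rate, 0.0

    def update(self, score):
        self.scores.append(score)
        self.baseline += self.drift_rate * (score - self.baseline)  # slow EMA
        n = len(self.scores)
        mean = sum(self.scores) / n
        var = sum((s - mean) ** 2 for s in self.scores) / max(n - 1, 1)
        return self.baseline + self.k * var ** 0.5   # current boundary

    def is_anomaly(self, score):
        return score > self.update(score)
```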

25 pages, 5962 KB  
Article
YOLO-FC: A Lightweight Fish Detection Model for High-Density Aquaculture Counting Scenarios
by Luowei Pei, Haodong Zhou, Guoxing Lu, Jian Zhao, Zequn Peng, Songming Zhu, Zhangying Ye and Jialong Zhou
Fishes 2026, 11(2), 114; https://doi.org/10.3390/fishes11020114 - 12 Feb 2026
Viewed by 595
Abstract
High-precision fish detection is the fundamental prerequisite for automated counting in aquaculture. However, current research lacks lightweight yet highly accurate detection models specifically designed to address occlusion challenges in high-density scenarios within controlled environments. To address this deficit, a novel lightweight fish detection model, named YOLO-FC (YOLO for Fish Counting), was constructed as an adaptation of the YOLO (You Only Look Once) framework optimized specifically for detection under counting-oriented conditions. In YOLO-FC, the backbone network is significantly streamlined through the integration of a new feature extraction module and the use of SAC (Switchable Atrous Convolution). Simultaneously, the neck network’s feature fusion approach is revamped with a weighted feature fusion method. Additionally, the model introduces an improved EIOU (Efficient Intersection over Union) loss into the BBR (Bounding Box Regression) loss function. Following the evaluation of different detection head combinations and feature extraction modules, the final model utilizes a single detection head, with parameter count and computational demands of only 14.7% and 73.2%, respectively, of those of YOLOv5 nano. Experimental results on the self-built fish dataset showed that the nano YOLO-FC achieved a precision (P) of 97.9%, a recall (R) of 97.2%, and an AP50 (average precision at an Intersection over Union threshold of 0.50) of 98.8%. These metrics surpass those of mainstream object detection models and existing fish detection models. Furthermore, to verify generalizability, the model was evaluated on a shrimp larvae dataset, demonstrating robust detection capabilities across different aquatic species. The proposed model provides a solid technological foundation for the detection stage in high-density counting systems.
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)
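The base EIOU loss that YOLO-FC builds on decomposes box regression into an IoU term, a center-distance term, and separate width/height penalties. A minimal PyTorch sketch of that standard formulation for (x1, y1, x2, y2) boxes (the paper's own improvements on top are not reproduced):

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """Efficient IoU (EIOU) loss: 1 - IoU + normalized center distance
    + separate width and height difference penalties."""
    # intersection and union
    lt = torch.max(pred[..., :2], target[..., :2])
    rb = torch.min(pred[..., 2:], target[..., 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)
    # smallest enclosing box (normalizes the penalty terms)
    c_wh = torch.max(pred[..., 2:], target[..., 2:]) - torch.min(pred[..., :2], target[..., :2])
    cw, ch = c_wh[..., 0], c_wh[..., 1]
    # squared center distance
    rho2 = (((pred[..., :2] + pred[..., 2:]) - (target[..., :2] + target[..., 2:])) ** 2).sum(-1) / 4
    # width / height differences
    dw = (pred[..., 2] - pred[..., 0]) - (target[..., 2] - target[..., 0])
    dh = (pred[..., 3] - pred[..., 1]) - (target[..., 3] - target[..., 1])
    return 1 - iou + rho2 / (cw**2 + ch**2 + eps) + dw**2 / (cw**2 + eps) + dh**2 / (ch**2 + eps)
```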

24 pages, 1724 KB  
Article
P3CL: Pseudo-Label Confidence-Calibrated Curriculum Learning for Weakly Supervised Urban Airborne Laser Scanning Point Cloud Classification
by Ziwei Luo, Tao Zeng, Jun Jiang, Ziyang Cai, Wanru Wu, Zhong Xie and Yongyang Xu
Remote Sens. 2026, 18(4), 552; https://doi.org/10.3390/rs18040552 - 9 Feb 2026
Cited by 3 | Viewed by 456
Abstract
Urban airborne laser scanning (ALS) point clouds cover extensive geographical areas, rendering dense point-level annotation economically prohibitive and limiting the feasibility of fully supervised learning. In weakly supervised settings for urban ALS data, the natural long-tailed class distribution—where ground and building points dominate and smaller objects are rare—combined with the use of fixed pseudo-label thresholds under sparse annotations exacerbates confirmation bias and increases prediction uncertainty. This ultimately restricts the effective utilization of unlabeled data during training. To overcome these challenges, we propose a pseudo-label confidence-calibrated curriculum learning framework designed for weakly supervised ALS point cloud classification. The framework introduces a confidence-aware self-adaptive soft gating (CSS) mechanism that dynamically adjusts category-specific thresholds online using exponential moving average statistics and scene-aware normalization, eliminating the need for manual scheduling while improving pseudo-label quality. In addition, a reliability-driven soft selection (RSS) constraint is incorporated, in which each point is assigned a comprehensive reliability score that integrates prediction confidence, entropy clarity, and cross-augmentation consistency, enabling adaptive soft weighting to replace hard pseudo-label selection and achieve more balanced sample utilization. These components are further integrated into a unified pseudo-label confidence-calibrated curriculum learning framework (P3CL) that progressively shifts the model’s focus from high-certainty samples to more ambiguous ones, effectively mitigating confirmation bias. Extensive experiments on three public ALS benchmarks demonstrate that the proposed method consistently outperforms existing weakly supervised approaches and achieves competitive performance compared with several fully supervised models.
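The core idea of the CSS mechanism, per-class pseudo-label thresholds tracked online with exponential moving averages, can be sketched compactly. A minimal version, assuming softmax outputs for unlabeled points and omitting the paper's scene-aware normalization (class and parameter names are illustrative):

```python
import numpy as np

class ClassAdaptiveThreshold:
    """Per-class pseudo-label gates adapted online: an EMA tracks each
    class's mean confidence, and per-class thresholds scale with it so
    rare, low-confidence classes are not starved of pseudo-labels."""
    def __init__(self, num_classes, base_tau=0.9, momentum=0.99):
        self.conf = np.full(num_classes, 1.0 / num_classes)
        self.base_tau, self.m = base_tau, momentum

    def update(self, probs):
        """probs: (N, C) softmax outputs for a batch of unlabeled points."""
        cls = probs.argmax(1)
        for c in np.unique(cls):
            self.conf[c] = self.m * self.conf[c] + (1 - self.m) * probs[cls == c, c].mean()
        tau = self.base_tau * self.conf / self.conf.max()   # per-class thresholds
        keep = probs.max(1) >= tau[cls]                     # gate mask
        return keep, tau
```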

18 pages, 3369 KB  
Article
3D Local Feature Learning and Analysis on Point Cloud Parts via Momentum Contrast
by Xuanmeng Sha, Tomohiro Mashita, Naoya Chiba and Liyun Zhang
Sensors 2026, 26(3), 1007; https://doi.org/10.3390/s26031007 - 3 Feb 2026
Viewed by 514
Abstract
Self-supervised contrastive learning has demonstrated remarkable effectiveness in learning visual representations without labeled data, yet its application to 3D local feature learning from point clouds remains underexplored. Existing methods predominantly focus on complete object shapes, neglecting the critical challenge of recognizing partial observations commonly encountered in real-world 3D perception. We propose a momentum contrastive learning framework specifically designed to learn discriminative local features from randomly sampled point cloud regions. By adapting the MoCo architecture with PointNet++ as the feature backbone, our method treats local parts of point clouds as fundamental contrastive learning units, combined with carefully designed augmentation strategies including random dropout and translation. Experiments on ShapeNet demonstrate that our approach effectively learns transferable local features, that approximately 30% of an object’s local parts represents a practical threshold for effective learning when simulating real-world occlusion scenarios, and that it achieves comparable downstream classification accuracy while reducing training time by 16%.
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
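The defining move of a MoCo-style framework is the momentum (EMA) update that keeps the key encoder a slowly trailing copy of the query encoder. A minimal PyTorch sketch, assuming two structurally identical backbones (here they would be PointNet++ networks fed with sampled local parts):

```python
import torch

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """MoCo momentum update: the key encoder's weights become an
    exponential moving average of the query encoder's weights, so the
    queue of keys stays consistent across training steps."""
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1 - m)
```

In MoCo, keys produced by the slowly moving encoder populate a queue of negatives for the InfoNCE loss, which is what makes large negative sets affordable without huge batches.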

28 pages, 7850 KB  
Article
A Systematic Approach for the Conservation and Sustainable Activation of Traditional Military Settlements Using TRIZ Theory: A Case Study of Zhenjing Village, Arid Northern China
by Hubing Li, Feng Zhao and Haitao Ren
Buildings 2026, 16(2), 420; https://doi.org/10.3390/buildings16020420 - 19 Jan 2026
Viewed by 438
Abstract
This study aims to examine the methodological applicability of the Theory of Inventive Problem Solving (TRIZ) in the conservation and revitalization of traditional military settlements. Using Zhenjing Village in Jingbian County as a case, the research constructs a systematic framework for contradiction identification and strategy generation. Through preliminary surveys, data integration, and system modeling, the study identifies major conflicts among authenticity preservation, ecological carrying capacity, and community vitality in Zhenjing Village. Technical contradiction matrices, separation principles, and the Algorithm of Inventive Problem Solving (ARIZ) are employed for structured analysis. Further, system dynamics modeling is used to simulate the effectiveness of strategies and to evaluate the dynamic impacts of various conservation interventions on authenticity maintenance, ecological stress, and community vitality. The research identifies three categories of core technical contradictions and translates the 39 engineering parameters into an indicator system adapted to the cultural heritage conservation context. ARIZ is used to derive the Ideal Final Result (IFR) for Zhenjing Village, which includes self-maintaining authenticity, self-regulating ecology, and self-activating community development, forming a systematic strategy. System dynamics simulations indicate that, compared with “inertial development,” TRIZ-oriented strategies reduce the decline in heritage authenticity by approximately 40%, keep ecological pressure indices below threshold levels, and significantly enhance the sustainability of community vitality. TRIZ enables a shift in the conservation of traditional military settlements from experience-driven approaches toward systematic problem solving. It strengthens conflict-identification capacity and improves the logical rigor of strategy generation, providing a structured and scalable innovative method for heritage conservation in arid and ecologically fragile regions in northern China and similar contexts worldwide.
(This article belongs to the Special Issue Built Heritage Conservation in the Twenty-First Century: 2nd Edition)

29 pages, 4094 KB  
Article
Hybrid LSTM–DNN Architecture with Low-Discrepancy Hypercube Sampling for Adaptive Forecasting and Data Reliability Control in Metallurgical Information-Control Systems
by Jasur Sevinov, Barnokhon Temerbekova, Gulnora Bekimbetova, Ulugbek Mamanazarov and Bakhodir Bekimbetov
Processes 2026, 14(1), 147; https://doi.org/10.3390/pr14010147 - 1 Jan 2026
Cited by 1 | Viewed by 656
Abstract
The study focuses on the design of an intelligent information-control system (ICS) for metallurgical production, aimed at robust forecasting of technological parameters and automatic self-adaptation under noise, anomalies, and data drift. The proposed architecture integrates a hybrid LSTM–DNN model with low-discrepancy hypercube sampling using Sobol and Halton sequences to ensure uniform coverage of operating conditions and the hyperparameter space. The processing pipeline includes preprocessing and temporal synchronization of measurements, a parameter identification module, anomaly detection and correction using an ε-threshold scheme, and a decision-making and control loop. In simulation scenarios modeling the dynamics of temperature, pressure, level, and flow (1 min sampling interval, injected anomalies, and measurement noise), the hybrid model outperformed GRU and CNN architectures: a determination coefficient of R2 > 0.92 was achieved for key indicators, MAE and RMSE improved by 7–15%, and the proportion of unreliable measurements after correction decreased to <2% (compared with 8–12% without correction). The experiments also demonstrated accelerated adaptation during regime changes. The scientific novelty lies in combining recurrent memory and deep nonlinear approximation with deterministic experimental design in the hypercube of states and hyperparameters, enabling reproducible self-adaptation of the ICS and increased noise robustness without upgrading the measurement hardware. Modern metallurgical information-control systems operate under non-stationary regimes and limited measurement reliability, which reduces the robustness of conventional forecasting and decision-support approaches. To address this issue, a hybrid LSTM–DNN architecture combined with low-discrepancy hypercube probing and anomaly-aware data correction is proposed. The proposed approach is distinguished by the integration of hybrid neural forecasting, deterministic hypercube-based adaptation, and anomaly-aware data correction within a unified information-control loop for non-stationary industrial processes.
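Low-discrepancy hypercube sampling of the kind described is available directly in SciPy. A minimal sketch covering a three-dimensional hyperparameter hypercube with a scrambled Sobol sequence (the dimensions and bounds are illustrative assumptions, not the paper's):

```python
from scipy.stats import qmc

# Scrambled Sobol sequence over an assumed 3-D hyperparameter cube,
# e.g., (learning rate, LSTM units, dropout). qmc.Halton is a
# drop-in alternative generator.
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit = sampler.random_base2(m=6)               # 2**6 = 64 points in [0, 1)^3
lo, hi = [1e-4, 32.0, 0.0], [1e-2, 256.0, 0.5]  # assumed bounds
configs = qmc.scale(unit, lo, hi)               # round integer dims afterwards
```

Compared with uniform random draws, the low-discrepancy sequence fills the cube evenly, which is what gives the reproducible coverage of operating conditions the abstract emphasizes.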

21 pages, 5825 KB  
Article
Adaptive Dynamic Thresholds for Unsupervised Joint Anomaly Detection and Trend Prediction
by Fenglin Ding, Yilin Zhao, Zongliang Li, Haibin Tang, Yizhuo Liu and Danhuai Guo
Sensors 2026, 26(1), 257; https://doi.org/10.3390/s26010257 - 31 Dec 2025
Cited by 1 | Viewed by 1025
Abstract
Anomaly detection and degradation trend prediction are two pivotal tasks in system health management. However, most existing approaches treat them as independent problems and fail to exploit their intrinsic interdependence. In addition, the scarcity of labeled data in real-world scenarios limits the applicability of supervised learning methods. To address these challenges, we propose an adaptive dynamic-thresholding framework for unsupervised joint anomaly detection and trend prediction. Our framework derives self-adaptive thresholds from historical data distributions and dynamically updates them in response to evolving system behavior. The anomaly detection results are integrated to enhance degradation trend forecasting, while the predicted degradation trends, in turn, refine the anomaly thresholds through a feedback mechanism. Experiments on both public and real-world industrial datasets demonstrate that the proposed framework achieves superior detection accuracy, robust trend prediction, and high computational efficiency under diverse operational conditions.
(This article belongs to the Section Fault Diagnosis & Sensors)
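The feedback coupling described above, thresholds drawn from the historical score distribution and then nudged by the predicted trend, can be sketched in a few lines. A hypothetical stand-in under stated assumptions (quantile-based baseline, linear trend feedback), not the authors' implementation:

```python
import numpy as np

class FeedbackThreshold:
    """Self-adaptive anomaly threshold: initialized from a quantile of
    historical scores, re-estimated on a rolling basis, and lowered
    when the predicted degradation trend worsens so anomalies are
    caught earlier (the feedback loop in the abstract)."""
    def __init__(self, history, q=0.99, gain=0.1):
        self.scores = list(history)
        self.q, self.gain = q, gain
        self.tau = float(np.quantile(history, q))

    def step(self, score, predicted_trend=0.0):
        self.scores.append(score)
        base = np.quantile(self.scores[-1000:], self.q)  # rolling quantile
        self.tau = base - self.gain * predicted_trend    # trend feedback
        return score > self.tau                          # anomaly flag
```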

23 pages, 3559 KB  
Article
From Static Prediction to Mindful Machines: A Paradigm Shift in Distributed AI Systems
by Rao Mikkilineni and W. Patrick Kelly
Computers 2025, 14(12), 541; https://doi.org/10.3390/computers14120541 - 10 Dec 2025
Cited by 1 | Viewed by 1830
Abstract
A special class of complex adaptive systems—biological and social—thrive not by passively accumulating patterns, but by engineering coherence, i.e., the deliberate alignment of prior knowledge, real-time updates, and teleonomic purposes. By contrast, today’s AI stacks—Large Language Models (LLMs) wrapped in agentic toolchains—remain rooted in a Turing-paradigm architecture: statistical world models (opaque weights) bolted onto brittle, imperative workflows. They excel at pattern completion, but they externalize governance, memory, and purpose, thereby accumulating coherence debt—a structural fragility manifested as hallucinations, shallow and siloed memory, ad hoc guardrails, and costly human oversight. The shortcoming of current AI relative to human-like intelligence is therefore less about raw performance or scaling, and more about an architectural limitation: knowledge is treated as an after-the-fact annotation on computation, rather than as an organizing substrate that shapes computation. This paper introduces Mindful Machines, a computational paradigm that operationalizes coherence as an architectural property rather than an emergent afterthought. A Mindful Machine is specified by a Digital Genome (encoding purposes, constraints, and knowledge structures) and orchestrated by an Autopoietic and Meta-Cognitive Operating System (AMOS) that runs a continuous Discover–Reflect–Apply–Share (D-R-A-S) loop. Instead of a static model embedded in a one-shot ML pipeline or deep learning neural network, the architecture separates (1) a structural knowledge layer (Digital Genome and knowledge graphs), (2) an autopoietic control plane (health checks, rollback, and self-repair), and (3) meta-cognitive governance (critique-then-commit gates, audit trails, and policy enforcement). We validate this approach on the classic Credit Default Prediction problem by comparing a traditional, static Logistic Regression pipeline (monolithic training, fixed features, external scripting for deployment) with a distributed Mindful Machine implementation whose components can reconfigure logic, update rules, and migrate workloads at runtime. The Mindful Machine not only matches the baseline’s predictive performance, but also achieves autopoiesis (self-healing services and live schema evolution), explainability (causal, event-driven audit trails), and dynamic adaptation (real-time logic and threshold switching driven by knowledge constraints), thereby reducing the coherence debt that characterizes contemporary ML- and LLM-centric AI architectures. The case study demonstrates “a hybrid, runtime-switchable combination of machine learning and rule-based simulation, orchestrated by AMOS under knowledge and policy constraints”.
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
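Of the mechanisms listed, the critique-then-commit gate is the easiest to make concrete. A minimal illustrative sketch (class and field names are invented for illustration; the paper's AMOS additionally provides health checks, rollback, and policy enforcement):

```python
class CritiqueGate:
    """Critique-then-commit: a proposed action is committed only after
    every policy check passes, and every decision, accepted or not, is
    appended to an audit trail."""
    def __init__(self, checks):
        self.checks, self.audit = checks, []

    def submit(self, proposal):
        failures = [c.__name__ for c in self.checks if not c(proposal)]
        verdict = not failures
        self.audit.append({"proposal": proposal,
                           "committed": verdict,
                           "failed": failures})
        return verdict

# usage: approve a runtime threshold switch only inside policy bounds
gate = CritiqueGate([lambda p: 0.0 < p["threshold"] < 1.0])
print(gate.submit({"threshold": 0.35}))  # True, and logged in gate.audit
```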

15 pages, 16847 KB  
Article
High-Power Laser Coherent Beam Combination Through Self-Imaging in Plasma Waveguides
by Yixuan Huang, Haitao Zhang, Zhuoyi Yang, Yanwei Wang, Yihang Huang, Xiaozheng Liu and Junyu Chen
Appl. Sci. 2025, 15(22), 12141; https://doi.org/10.3390/app152212141 - 16 Nov 2025
Viewed by 759
Abstract
A novel approach for laser coherent beam combination (CBC) utilizing the self-imaging effect in plasma waveguides is presented in this study, which enables the transmission of ultrashort laser pulses at intensities above the bulk damage threshold of conventional solid optical waveguides. The feasibility of self-imaging-based CBC in plasma waveguides was simulated and verified, demonstrating favorable combining efficiency and beam quality. This work explores the adaptive tuning of waveguide length via dynamic adjustment of plasma density, addressing the critical issue of fabrication tolerances in traditional waveguide systems. By realizing CBC in plasma waveguides, this study supports the development of robust, high-power laser systems with enhanced beam quality and operational stability.
(This article belongs to the Special Issue Advances in Fiber Lasers and Their Applications)