Search Results (91)

Search Parameters:
Keywords = belief entropy

25 pages, 753 KB  
Article
A Dual-Source Evidence–Driven Semi-Supervised Belief Rule Base for Fault Diagnosis
by Xin Zhang, Zhiying Fan, Wei He and Huafeng He
Sensors 2026, 26(8), 2444; https://doi.org/10.3390/s26082444 - 16 Apr 2026
Viewed by 36
Abstract
In the fault diagnosis of complex industrial systems, labeled samples are expensive to obtain, which leads to insufficient training data for the belief rule base (BRB) model. Although unlabeled samples are abundant, the uncertainty of their pseudo-labels may undermine semi-supervised learning and hinder accurate parameter optimization of the BRB model. To address these issues, a dual-source evidence-driven semi-supervised BRB method (SS-BRB) is proposed for fault diagnosis. The proposed method makes effective use of unlabeled samples while preserving the interpretability and inference transparency of the BRB model. To improve the reliability of pseudo-labels in semi-supervised learning, a dual-source evidence-driven pseudo-labeling mechanism is designed. In this mechanism, local similarity information is combined with the global inference results of the BRB model. An entropy factor and a feature distance factor are introduced to adaptively adjust the confidence of pseudo-labels. In this way, the quality of pseudo-labels is improved, and the influence of noisy samples is reduced. Based on this mechanism, high-confidence pseudo-labeled samples are incorporated into the training set to further optimize the model. Experimental results show that the proposed method achieves good diagnostic performance on both the gearbox dataset and the WD615 diesel engine dataset. Even with limited labeled data, the proposed method still achieves high accuracy, robustness, and good generalization performance.
(This article belongs to the Section Fault Diagnosis & Sensors)
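The abstract above adjusts pseudo-label confidence with an entropy factor and a feature-distance factor. The sketch below is only a hypothetical illustration of that general idea, combining the normalized Shannon entropy of a predicted belief distribution with a nearest-labeled-neighbor distance factor; the actual SS-BRB weighting and its parameters are not specified here and everything in the snippet (function name, factors, toy data) is an assumption.

```python
import numpy as np

def pseudo_label_confidence(belief, x, labeled_X, labeled_y, pseudo_class):
    """Hypothetical confidence score for a pseudo-label (illustrative only).

    belief      : predicted belief distribution over fault classes (sums to 1)
    x           : feature vector of the unlabeled sample
    labeled_X/y : labeled samples and their class indices
    pseudo_class: class index assigned by the model
    """
    belief = np.asarray(belief, dtype=float)
    eps = 1e-12
    # Entropy factor: 1 for a one-hot (certain) prediction, 0 for a uniform one.
    H = -np.sum(belief * np.log(belief + eps))
    entropy_factor = 1.0 - H / np.log(len(belief))

    # Distance factor: closeness of x to labeled samples of the pseudo class.
    same = labeled_X[labeled_y == pseudo_class]
    d = np.min(np.linalg.norm(same - x, axis=1)) if len(same) else np.inf
    distance_factor = np.exp(-d)

    return entropy_factor * distance_factor

# Toy usage with random features and binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3)); y = rng.integers(0, 2, size=10)
print(pseudo_label_confidence([0.9, 0.1], X[0], X, y, pseudo_class=0))
```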

27 pages, 1999 KB  
Article
Uncertainty-Driven Risk Evaluation for Safety-Critical Software Under Conflicting Evidence Judgments: A Dual-Dimensional Evidence Fusion Approach
by Wenguang Xie, Wuhan Yang and Kenian Wang
Symmetry 2026, 18(4), 625; https://doi.org/10.3390/sym18040625 - 8 Apr 2026
Viewed by 246
Abstract
Risk assessment of safety-critical software relies heavily on expert reviews prone to high epistemic uncertainty and conflicting judgments. While evidence theory is widely used for information fusion, classical rules often yield counter-intuitive results in high-conflict scenarios. To address this, we propose an uncertainty-driven risk evaluation model based on a dual-dimensional evidence fusion approach. The framework integrates an improved Belief Entropy (BE) and an Evidence Conflict Coefficient (ECC) to quantify reliability from two perspectives: (1) Internal Dimension, using BE to measure inherent uncertainty within individual judgments; and (2) External Dimension, using ECC to measure divergence among multiple sources. By adaptively modifying Basic Probability Assignments (BPAs) with these dual-dimensional weights, the model effectively harmonizes data prior to fusion. Validated through an avionics software airworthiness case study, the methodology significantly enhances fusion stability and accuracy. Results confirm it effectively suppresses extreme deviations and raises the performance floor, providing a robust decision-support tool for safety-critical engineering.
(This article belongs to the Section Computer)
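The "improved Belief Entropy (BE)" mentioned above builds on Deng entropy, whose standard form is $E_d(m) = -\sum_{A} m(A)\log_2\big(m(A)/(2^{|A|}-1)\big)$. The following is a minimal sketch of plain Deng entropy for a basic probability assignment (BPA); the paper's improved variant and its Evidence Conflict Coefficient are not reproduced, and the example masses are invented.

```python
import math

def deng_entropy(bpa):
    """Deng (belief) entropy of a BPA.

    bpa: dict mapping focal elements (frozensets) to masses that sum to 1.
    """
    ent = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            # Each focal element of cardinality |A| spreads over 2^|A| - 1 subsets.
            ent -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return ent

# Two expert judgments over the frame {low, medium, high} risk.
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "medium"}): 0.4}
m2 = {frozenset({"high"}): 0.9, frozenset({"low", "medium", "high"}): 0.1}
print(deng_entropy(m1), deng_entropy(m2))  # larger value = more uncertain judgment
```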

28 pages, 5258 KB  
Article
Dual-View Entropy-Driven AIS–Sonar Fusion for Surface and Underwater Target Discrimination
by Xiaoshuang Zhang, Jiayi Che, Xiaodan Xiong, Yucheng Zhang, Xinbo He, Mengsha Deng and Dezhi Wang
J. Mar. Sci. Eng. 2026, 14(7), 675; https://doi.org/10.3390/jmse14070675 - 4 Apr 2026
Viewed by 312
Abstract
Distinguishing surface targets from underwater targets in complex marine environments is challenging when relying solely on physical sonar features. To address the high uncertainty inherent in single-modal features and the conflicts arising from heterogeneous data, we propose a Dual-View Entropy-Driven Negation Dempster–Shafer (DVE-NDS) fusion method that integrates AIS kinematic priors with passive sonar signals. First, a heterogeneous recognition framework is constructed. LOFAR and DEMON features are extracted via convolutional neural networks (CNNs), while a Negation Basic Probability Assignment (Negation BPA) strategy is introduced to transform AIS spatiotemporal mismatches into effective "negation support" for non-cooperative underwater targets. Instead of relying on a single conflict coefficient, the proposed method jointly considers evidence self-information and inter-source consistency. Evidence quality is quantified using improved Deng entropy and negation belief entropy, while mutual trust is evaluated via the Jousselme distance. Heterogeneous evidence is weighted and corrected by the generated coupling weights, effectively suppressing low-quality evidence and sharpening decision boundaries. Simulation results confirm that DVE-NDS improves macro-F1 over classical fusion, indicating the framework's potential for handling conflicting evidence, though the current validation remains simulation-based and should be regarded as a methodological proof-of-concept.
(This article belongs to the Special Issue Emerging Computational Methods in Intelligent Marine Vehicles)
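Mutual trust between evidence sources is evaluated above with the Jousselme distance, $d(m_1, m_2) = \sqrt{\tfrac{1}{2}(\mathbf{m}_1-\mathbf{m}_2)^{\top} D\, (\mathbf{m}_1-\mathbf{m}_2)}$ with $D(A,B) = |A \cap B| / |A \cup B|$. Below is a generic sketch over an explicit power set; it is a standard implementation of that distance, not the paper's coupling-weight scheme, and the AIS/sonar masses are made up.

```python
import itertools
import numpy as np

def jousselme_distance(m1, m2, frame):
    """Jousselme distance between two BPAs defined on the same frame."""
    # Enumerate all non-empty subsets of the frame (the focal-element space).
    subsets = [frozenset(c) for r in range(1, len(frame) + 1)
               for c in itertools.combinations(frame, r)]
    v1 = np.array([m1.get(s, 0.0) for s in subsets])
    v2 = np.array([m2.get(s, 0.0) for s in subsets])
    # Jaccard similarity matrix D(A, B) = |A intersect B| / |A union B|.
    D = np.array([[len(a & b) / len(a | b) for b in subsets] for a in subsets])
    diff = v1 - v2
    return np.sqrt(0.5 * diff @ D @ diff)

frame = ("surface", "underwater")
m_ais   = {frozenset({"surface"}): 0.8, frozenset(frame): 0.2}
m_sonar = {frozenset({"underwater"}): 0.7, frozenset(frame): 0.3}
print(jousselme_distance(m_ais, m_sonar, frame))
```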

25 pages, 3042 KB  
Article
Quantifying Epistemic Uncertainty in Multimodal Long-Tailed Classification: A Belief Entropy-Based Evidential Fusion Framework
by Guorui Zhu
Entropy 2026, 28(3), 343; https://doi.org/10.3390/e28030343 - 19 Mar 2026
Viewed by 384
Abstract
Deep multimodal learning models have excelled in tasks involving vision, language, and audio modalities. Nevertheless, their performance on tail classes degrades significantly under the long-tailed distributions common in real-world data, while related fusion schemes often provide only limited treatment of modality-specific uncertainty and rarely incorporate explicit mechanisms for class-level fairness. To address these limitations, we present a framework that integrates evidential reasoning with deep learning: Uncertainty-Quantified Multimodal Learning for Long-Tailed Classification (UMuLT). The framework includes: (i) an uncertainty-gated evidential fusion module that adaptively down-weights unreliable modalities; (ii) an exponential moving average (EMA) fairness regularizer that dynamically amplifies tail-class gradients; and (iii) a cross-modal consistency regularizer optimized in two stages: tail specialization with lightweight adapters on tail-class data to obtain a balanced initialization, followed by end-to-end fine-tuning. The effectiveness and practicality of the method are verified on three long-tailed multimodal classification benchmarks. Experiments show consistent gains over strong baselines in overall metrics, calibration, and tail-subset performance. Statistical significance tests confirm the superiority of the proposed framework.
(This article belongs to the Section Signal and Data Analysis)
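The uncertainty-gated fusion module is described above only at a high level. Evidential deep learning commonly maps per-modality evidence $e_k \ge 0$ to a Dirichlet opinion with belief $b_k = e_k / S$ and uncertainty $u = K / S$, where $S = \sum_k e_k + K$; the sketch below, which simply down-weights each modality by $1-u$, is an assumed simplification of such a gate, not the UMuLT module itself.

```python
import numpy as np

def dirichlet_opinion(evidence):
    """Map non-negative evidence to (per-class belief, scalar uncertainty)."""
    evidence = np.asarray(evidence, dtype=float)
    K = len(evidence)
    S = evidence.sum() + K            # Dirichlet strength (alpha = evidence + 1)
    return evidence / S, K / S

def uncertainty_gated_fusion(modal_evidences):
    """Weight each modality's belief by (1 - uncertainty), then renormalize."""
    fused, total_weight = 0.0, 0.0
    for ev in modal_evidences:
        belief, u = dirichlet_opinion(ev)
        fused = fused + (1.0 - u) * belief
        total_weight += (1.0 - u)
    fused = fused / max(total_weight, 1e-12)
    return fused / fused.sum()        # final renormalization to a distribution

# Image modality is confident; audio modality is nearly uninformative.
image_ev = [9.0, 0.5, 0.5]
audio_ev = [0.2, 0.3, 0.2]
print(uncertainty_gated_fusion([image_ev, audio_ev]))
```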

82 pages, 6808 KB  
Article
Agentic Finance: An Adaptive Inference Framework for Bounded-Rational Investing Agents
by Samuel Montañez Jacquez, John H. Clippinger and Matthew Moroney
Entropy 2026, 28(3), 321; https://doi.org/10.3390/e28030321 - 12 Mar 2026
Cited by 1 | Viewed by 714
Abstract
We propose Adaptive Inference, a portfolio management framework extending Active Inference to non-stationary financial environments. The framework integrates inference, control, and execution under endogenous uncertainty, modeling investment decisions as coupled dynamics of belief updating, preference encoding, and action selection rather than optimization over fixed objectives. In this approach, portfolio behavior is governed by expected free energy (EFE) minimization, and classical valuation models emerge as limiting cases when epistemic components vanish. Using train–test evaluation on the ARKK Innovation ETF (2015–2025), we identify a Passivity Paradox: frozen belief transfer outperforms naive adaptive learning. A Professional Agent achieves a Sharpe ratio of 0.39 while its adaptive counterpart degrades to 0.28, reflecting belief contamination when learning from policy-dependent signals. Crucially, the architecture is not designed to generate alpha but to perform endogenous risk management that mitigates overtrading under regime ambiguity and distributional shift. Adaptive Inference Agents maintain long exposure most of the time while tactically reducing positions during high-entropy periods, implementing uncertainty-aware passive investing. All agents reduce realized volatility relative to ARKK Buy-and-Hold (43.0% annualized). Cross-asset validation on the S&P 500 ETF (SPY) shows that inference-guided risk shaping achieves a positive Entropic Sharpe Ratio (ESR), defined as excess return per unit of informational work, thereby quantifying the economic value of information under thermodynamic constraints on inference.
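Expected free energy for a discrete policy is commonly decomposed as risk plus ambiguity, $G(\pi) = D_{\mathrm{KL}}\big[q(o\mid\pi)\,\|\,p(o)\big] + \mathbb{E}_{q(s\mid\pi)}\big[H[p(o\mid s)]\big]$. The toy sketch below evaluates that standard decomposition for two hypothetical policies; the regimes, observation model, and preferences are invented for illustration and are not the paper's agent or data.

```python
import numpy as np

def expected_free_energy(q_s, likelihood, log_prefs):
    """Risk + ambiguity form of EFE for one policy.

    q_s        : predicted state distribution under the policy, shape (S,)
    likelihood : p(o|s), shape (O, S), columns sum to 1
    log_prefs  : log prior preferences over outcomes, shape (O,)
    """
    q_o = likelihood @ q_s                                   # predicted outcomes
    risk = np.sum(q_o * (np.log(q_o + 1e-12) - log_prefs))   # KL[q(o) || p(o)]
    ambiguity = -np.sum(q_s * np.sum(likelihood * np.log(likelihood + 1e-12), axis=0))
    return risk + ambiguity

# Two market regimes (calm, turbulent) and two outcomes (gain, loss).
likelihood = np.array([[0.8, 0.3],    # p(gain | regime)
                       [0.2, 0.7]])   # p(loss | regime)
log_prefs = np.log(np.array([0.9, 0.1]))   # the agent prefers gains
hold   = np.array([0.5, 0.5])   # staying fully invested leaves regime belief diffuse
reduce = np.array([0.8, 0.2])   # hypothetical belief after de-risking
for name, q_s in [("hold", hold), ("reduce", reduce)]:
    print(name, expected_free_energy(q_s, likelihood, log_prefs))
```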

20 pages, 309 KB  
Article
A Comparison of Algorithms to Achieve the Maximum Entropy in the Theory of Evidence
by Joaquín Abellán, Aina López-Gay, Maria Isabel A. Benítez and Francisco Javier G. Castellano
Entropy 2026, 28(2), 247; https://doi.org/10.3390/e28020247 - 21 Feb 2026
Viewed by 341
Abstract
Within the framework of evidence theory, maximum entropy is regarded as a measure of total uncertainty that satisfies a comprehensive set of mathematical properties and behavioral requirements. However, its practical applicability is limited by the high computational complexity of its calculation, which involves manipulating the power set of the frame of discernment. In the literature, attempts have been made to reduce this complexity by restricting the computation to singleton elements, leading to a formulation based on reachable probability intervals. Although this approach relies on a less specific representation of evidential information, it has been shown to provide an equivalent maximum entropy value under certain conditions. In this paper, we present an experimental comparative study of two algorithms for calculating maximum entropy in evidence theory: the classical algorithm, which operates directly on belief functions, and an alternative algorithm based on reachable probability intervals. Through numerical experiments, we demonstrate that the differences between these approaches are less pronounced than previously suggested in the literature. Depending on the type of information representation to which it is applied, the original algorithm based on belief functions can be more efficient than the one using reachable probability intervals. This result provides a practical criterion for choosing one algorithm over the other depending on the situation.
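In the reachable-probability-interval formulation, the target quantity is the maximum of the Shannon entropy subject to $l_i \le p_i \le u_i$ and $\sum_i p_i = 1$. The sketch below solves that small convex program with a general-purpose optimizer (SciPy's SLSQP) rather than either of the specialized algorithms compared in the paper; it is meant only to make the optimization problem concrete, and the interval values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy_from_intervals(lower, upper):
    """Maximum Shannon entropy (bits) of a distribution within reachable intervals."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return np.sum(p * np.log2(p))        # negative entropy, to be minimized

    cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
    bounds = list(zip(lower, upper))
    x0 = np.clip(np.full(n, 1.0 / n), lower, upper)
    x0 = x0 / x0.sum()
    res = minimize(neg_entropy, x0, bounds=bounds, constraints=cons, method="SLSQP")
    return -res.fun, res.x

# Intervals that could be derived from a belief function on a 3-element frame.
H, p = max_entropy_from_intervals([0.1, 0.2, 0.0], [0.6, 0.7, 0.5])
print(round(H, 4), p)
```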
27 pages, 6788 KB  
Article
From Expert-Based Evaluation to Data-Driven Modeling: Performance-Based Flood Susceptibility Mapping
by Mustafa Tanrıverdi and Tülay Erbesler Ayaşlıgil
Limnol. Rev. 2026, 26(1), 6; https://doi.org/10.3390/limnolrev26010006 - 18 Feb 2026
Viewed by 558
Abstract
Floods are natural disasters that cause significant socioeconomic and environmental losses in both urban and rural areas. Within the framework of spatial planning, precautionary measures against flood hazards can be developed using analytical approaches based on different modeling techniques. In this study, flood-prone areas in the Melen Basin, Türkiye, were identified and mapped using five statistical methods, namely Frequency Ratio (FR), Shannon Entropy (SE), Evidential Belief Function (EBF), and the hybrid models EBF–SE and EBF–FR. The analysis was conducted using a flood inventory and environmental datasets covering the period 2019–2024, including elevation, slope, aspect, land use, plan and profile curvature, drainage density, distance to river, curve number, long-term average precipitation, geological formation, soil depth, topographic wetness index, sediment transport, and stream power index. Model performance was evaluated using the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC). The results indicate that the SE method achieved the highest predictive performance (AUC = 0.979), followed by FR (0.974), EBF–SE (0.972), EBF–FR (0.968), and EBF (0.966). According to the FR and SE models, elevation, lithology, and slope were identified as the most influential factors in flood occurrence. Ranked by success index, the models scored EBF–SE (96.0), SE (94.4), EBF (91.8), FR (81.9), and EBF–FR (79.4). For classifying the flood susceptibility maps, Natural Breaks (Jenks) proved the most successful method according to the success index. The findings demonstrate that data-driven and hybrid models can effectively support flood risk assessment and provide valuable input for land-use planning and flood risk management.
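The Frequency Ratio for a factor class is the share of flood cells in that class divided by the share of all cells in that class, $FR_j = (n^{flood}_j / N^{flood}) / (n_j / N)$; values above 1 mark classes over-represented among floods. The sketch below computes FR on a toy raster and adds an entropy-style factor weight (1 minus the normalized Shannon entropy of the class-wise FR distribution); that weighting is one common SE-type variant and is an assumption here, since implementations differ across studies.

```python
import numpy as np

def frequency_ratio(class_ids, flood_mask):
    """FR per class: (% of flood cells in class) / (% of all cells in class)."""
    class_ids, flood_mask = np.asarray(class_ids), np.asarray(flood_mask, bool)
    fr = {}
    for c in np.unique(class_ids):
        in_class = class_ids == c
        pct_flood = flood_mask[in_class].sum() / max(flood_mask.sum(), 1)
        pct_area = in_class.sum() / len(class_ids)
        fr[c] = pct_flood / pct_area
    return fr

def entropy_weight(fr):
    """Assumed SE-style factor weight: 1 - normalized entropy of the FR distribution."""
    v = np.array(list(fr.values()), float)
    p = v / v.sum()
    H = -np.sum(p * np.log2(p + 1e-12))
    return 1.0 - H / np.log2(len(v))

# Toy raster: slope classes for 10 cells, 3 of which are flooded.
slope_class = [1, 1, 1, 2, 2, 2, 2, 3, 3, 3]
flooded     = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
fr = frequency_ratio(slope_class, flooded)
print(fr, round(entropy_weight(fr), 3))
```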

48 pages, 1830 KB  
Article
An Information–Theoretic Model of Abduction for Detecting Hallucinations in Explanations
by Boris Galitsky
Entropy 2026, 28(2), 173; https://doi.org/10.3390/e28020173 - 2 Feb 2026
Cited by 1 | Viewed by 1004
Abstract
We present an Information–Theoretic Model of Abduction for Detecting Hallucinations in Generative Models, a neuro-symbolic framework that combines entropy-based inference with abductive reasoning to identify unsupported or contradictory content in large language model outputs. Our approach treats hallucination detection as a dual optimization problem: minimizing the information gain between source-conditioned and response-conditioned belief distributions, while simultaneously selecting the minimal abductive hypothesis capable of explaining discourse-salient claims. By incorporating discourse structure through RST-derived EDU weighting, the model distinguishes legitimate abductive elaborations from claims that cannot be justified under any computationally plausible hypothesis. Experimental evaluation across medical, factual QA, and multi-hop reasoning datasets demonstrates that the proposed method outperforms state-of-the-art neural and symbolic baselines in both accuracy and interpretability. Qualitative analysis further shows that the framework successfully exposes plausible-sounding but abductively unsupported model errors, including real hallucinations generated by GPT-5.1. Together, these results indicate that integrating Information–Theoretic divergence and abductive explanation provides a principled and effective foundation for robust hallucination detection in generative systems.
(This article belongs to the Special Issue Information Theory in Artificial Intelligence)
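The detector above minimizes the information gain between source-conditioned and response-conditioned belief distributions, which in its simplest form is a KL divergence $D_{\mathrm{KL}}(p\,\|\,q) = \sum_i p_i \log_2(p_i / q_i)$. The snippet below computes that quantity for two hypothetical claim-level belief distributions; the RST-based EDU weighting and the abductive hypothesis selection are not reproduced.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence (bits) between two discrete belief distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2((p + eps) / (q + eps))))

# Belief over {supported, unsupported, contradicted} for one claim.
belief_given_source   = [0.85, 0.10, 0.05]
belief_given_response = [0.30, 0.50, 0.20]
gain = kl_divergence(belief_given_response, belief_given_source)
print(f"information gain = {gain:.3f} bits")  # a large value flags a suspect claim
```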

30 pages, 3641 KB  
Article
Modified EfficientNet-B0 Architecture Optimized with Quantum-Behaved Algorithm for Skin Cancer Lesion Assessment
by Abdul Rehman Altaf, Abdullah Altaf and Faizan Ur Rehman
Diagnostics 2025, 15(24), 3245; https://doi.org/10.3390/diagnostics15243245 - 18 Dec 2025
Viewed by 864
Abstract
Background/Objectives: Skin cancer is one of the most common diseases in the world, for which early and accurate detection yields survival rates above 90%, whereas late diagnosis carries a mortality risk of nearly 80%. Methods: A modified EfficientNet-B0 is developed based on mobile inverted bottleneck convolution with a squeeze-and-excitation mechanism. A 3 × 3 convolutional layer captures low-level visual features, while core features are extracted using a sequence of Mobile Inverted Bottleneck Convolution blocks with both 3 × 3 and 5 × 5 kernels. These blocks balance fine-grained extraction with broader contextual representation and increase the network's learning capacity while keeping computational cost under control. The architecture's hyperparameters and the feature vectors extracted from standard benchmark datasets of dermoscopic images (HAM10000, ISIC 2019, and MSLD v2.0) are optimized with the quantum-behaved particle swarm optimization algorithm (QBPSO). The merit function is formulated from the training loss, given as standard classification cross-entropy with label smoothing, together with the mean fitness value (mfval), average accuracy (mAcc), mean computational time (mCT), and other standard performance indicators. Results: Comprehensive scenario-based simulations with the proposed framework on publicly available datasets yielded an mAcc of 99.62% and 92.5%, an mfval of 2.912 × 10⁻¹⁰ and 1.7921 × 10⁻⁸, and an mCT of 501.431 s and 752.421 s for the HAM10000 and ISIC 2019 datasets, respectively. The results are compared with state-of-the-art pre-trained models such as EfficientNet-B4, RegNetY-320, ResNeXt-101, EfficientNetV2-M, VGG-16, and DeepLabV3, as well as reported techniques based on Mask R-CNN, Deep Belief Networks, ensemble CNNs, SCDNet, and FixMatch-LS, whose accuracies range from 85% to 94.8%. The reliability of the proposed architecture and the stability of QBPSO are examined through Monte Carlo simulation of 100 independent runs and their statistical soundness. Conclusions: The proposed framework reduces diagnostic errors and assists dermatologists in clinical decisions for improved patient outcomes, despite challenges such as data imbalance and interpretability.
(This article belongs to the Special Issue Medical Image Analysis and Machine Learning)
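Quantum-behaved PSO replaces velocity updates with sampling around a per-particle attractor: with mean-best position $mbest$, attractor $p_i = \varphi\, pbest_i + (1-\varphi)\, gbest$, and update $x_i = p_i \pm \alpha\, |mbest - x_i| \ln(1/u)$ with $\varphi, u \sim U(0,1)$. The sketch below is a generic QPSO loop minimizing a stand-in objective (the sphere function); the paper's label-smoothed cross-entropy fitness and hyperparameter encoding are not modeled, and the settings are illustrative.

```python
import numpy as np

def qpso(objective, dim, n_particles=20, iters=100, alpha=0.75, seed=0):
    """Minimal quantum-behaved PSO minimizing `objective` over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        mbest = pbest.mean(axis=0)                       # mean of personal bests
        phi = rng.uniform(size=(n_particles, dim))
        p = phi * pbest + (1 - phi) * gbest              # local attractors
        u = rng.uniform(1e-12, 1, size=(n_particles, dim))
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1.0, 1.0)
        x = p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)

        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = qpso(lambda v: float(np.sum(v ** 2)), dim=5)
print(best_f)  # should approach 0 on the sphere function
```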

27 pages, 3758 KB  
Article
Belief Entropy-Based MAGDM Algorithm Under Double Hierarchy Quantum-like Bayesian Networks and Its Application to Wastewater Reuse
by Juxiang Wang, Yaping Li, Xin Wang and Yanjun Wang
Symmetry 2025, 17(11), 2013; https://doi.org/10.3390/sym17112013 - 20 Nov 2025
Viewed by 537
Abstract
Traditional multi-attribute group decision-making (MAGDM) methods tend to ignore interference effects among decision-makers (DMs), whereas quantum theory can effectively capture the uncertainty in the decision-making process and quantify preference interference among DMs. The asymmetry of evaluation information in social networks can also have a significant impact on decision-making. In this paper, a quantum MAGDM algorithm (PL-QLBN) based on probabilistic linguistic term sets (PLTSs) and a quantum-like Bayesian network (QLBN) is proposed; it draws on quantum theory and social network concepts and introduces a novel belief entropy-based method for calculating interference effects. Firstly, a complete trust network is constructed based on the probabilistic linguistic trust transfer operator and the minimum path method. A trust aggregation method that accounts for interference effects is proposed for the QLBN to determine the DM weights. Next, the attribute weights are calculated using the entropy weight method. Then, a probabilistic linguistic MAGDM procedure considering interference effects is developed based on the QLBN. Finally, the feasibility and validity of the proposed method are verified through Hefei City's selection of wastewater reuse alternatives.
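The attribute weights above come from the standard entropy weight method: normalize the decision matrix column-wise into proportions $p_{ij}$, compute $e_j = -\frac{1}{\ln m}\sum_i p_{ij}\ln p_{ij}$, and set $w_j = (1-e_j)/\sum_k (1-e_k)$. The following is a minimal sketch with made-up scores; the probabilistic linguistic and quantum-like parts of PL-QLBN are not included.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Entropy weight method: attributes with more dispersion get larger weights."""
    X = np.asarray(decision_matrix, float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                              # column-wise proportions
    e = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(m)
    d = 1.0 - e                                        # degree of divergence
    return d / d.sum()

# Four wastewater-reuse alternatives scored on three criteria (illustrative values).
scores = [[0.70, 0.30, 0.55],
          [0.65, 0.80, 0.50],
          [0.90, 0.40, 0.60],
          [0.60, 0.75, 0.58]]
print(entropy_weights(scores))
```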

18 pages, 5051 KB  
Article
Entropy Reduction Across Odor Fields
by Hugo Magalhães and Lino Marques
Entropy 2025, 27(9), 909; https://doi.org/10.3390/e27090909 - 28 Aug 2025
Cited by 1 | Viewed by 1057
Abstract
Cognitive Odor Source Localization (OSL) strategies are reliable search strategies for turbulent environments, where chemical cues are sparse and intermittent. These methods estimate a probabilistic belief over the source location using Bayesian inference and guide the search by evaluating the expected entropy reduction at candidate new positions. By maximizing expected information gain, agents make informed decisions rather than simply reacting to sensor readings. However, computing entropy reductions is computationally expensive, making real-time implementation challenging for resource-constrained platforms. Interestingly, search trajectories produced by cognitive algorithms often resemble those of small insects, suggesting that informative movement patterns might be replicated using simpler, bio-inspired search strategies. This work investigates that possibility by analysing the spatial distribution of entropy reductions across the entire search area. Rather than focusing on search algorithms and local decisions, the analysis maps information gain over the full environment, identifying consistent high-gain regions that may serve as navigational cues. Results show that these regions often emerge near the source and along plume borders, and that expected entropy reduction is strongly influenced by the shape of the prior belief and by sensor observations. This global perspective enables identification of spatial patterns and high-gain regions that remain hidden when analysis is restricted to local neighborhoods. These insights enable the synthesis of hybrid search strategies that preserve cognitive effectiveness while significantly reducing computational cost.
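Expected entropy reduction at a candidate sensing position is the current belief entropy minus the observation-weighted entropy of the posterior, $\mathbb{E}[\Delta H] = H(b) - \sum_{z\in\{0,1\}} P(z)\, H(b\mid z)$. The sketch below evaluates this on a toy 1-D grid belief with a hypothetical hit/false-alarm detection model; the plume physics used in the paper are not modeled, and all numbers are illustrative.

```python
import numpy as np

def entropy(b):
    b = b / b.sum()
    return -np.sum(b * np.log2(b + 1e-12))

def expected_entropy_reduction(belief, detect_prob_if_source, false_alarm, pos):
    """Expected information gain of taking one binary measurement at cell `pos`."""
    belief = belief / belief.sum()
    # P(detection | source at s): high only if the source is at the sensed cell.
    p_hit = np.full(len(belief), false_alarm)
    p_hit[pos] = detect_prob_if_source
    gain = 0.0
    for p_z_given_s in (p_hit, 1.0 - p_hit):            # z = 1 (hit), z = 0 (miss)
        p_z = np.sum(p_z_given_s * belief)               # marginal observation prob.
        if p_z > 0:
            posterior = p_z_given_s * belief / p_z        # Bayesian update
            gain += p_z * (entropy(belief) - entropy(posterior))
    return gain

belief = np.exp(-0.5 * ((np.arange(20) - 12) / 3.0) ** 2)   # prior peaked near cell 12
print([round(expected_entropy_reduction(belief, 0.9, 0.05, k), 4) for k in (0, 6, 12)])
```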

32 pages, 2072 KB  
Article
Airline Ranking Using Social Feedback and Adapted Fuzzy Belief TOPSIS
by Ewa Roszkowska and Marzena Filipowicz-Chomko
Entropy 2025, 27(8), 879; https://doi.org/10.3390/e27080879 - 19 Aug 2025
Cited by 3 | Viewed by 2198
Abstract
In the era of digital interconnectivity, user-generated reviews on platforms such as TripAdvisor have become a valuable source of social feedback, reflecting collective experiences and perceptions of airline services. However, aggregating such feedback presents several challenges: evaluations are typically expressed using linguistic ordinal scales, are subjective, often incomplete, and influenced by opinion dynamics within social networks. To effectively deal with these complexities and extract meaningful insights, this study proposes an information-driven decision-making framework that integrates Fuzzy Belief Structures with the TOPSIS method. To handle the uncertainty and imprecision of linguistic ratings, user opinions are modeled as fuzzy belief distributions over satisfaction levels. Rankings are then derived using TOPSIS by comparing each airline's aggregated profile to ideal satisfaction benchmarks via a belief-based distance measure. This framework presents a novel solution for measuring synthetic satisfaction in complex social feedback systems, thereby contributing to the understanding of information flow, belief aggregation, and emergent order in digital opinion networks. The methodology is demonstrated using a real-world dataset of TripAdvisor airline reviews, providing a robust and interpretable benchmark for service quality. Moreover, this study applies Shannon entropy to classify and interpret the consistency of customer satisfaction ratings among Star Alliance airlines. The results confirm the stability of the Airline Satisfaction Index (ASI), with extremely high correlations among the five rankings generated using different fuzzy utility function models. The methodology reveals that airlines such as Singapore Airlines, ANA, EVA Air, and Air New Zealand consistently achieve high satisfaction scores across all fuzzy model configurations, highlighting their strong and stable performance regardless of model variation. These airlines also show both low entropy and high average scores, confirming their consistent excellence.
(This article belongs to the Special Issue Dynamics in Biological and Social Networks)
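Rankings here are derived with TOPSIS: each aggregated profile is compared to ideal and anti-ideal benchmarks, and the closeness coefficient is $C_i = d^-_i / (d^+_i + d^-_i)$. The sketch below uses plain Euclidean distances on weighted normalized scores; the paper's belief-based distance over fuzzy belief structures is not reproduced, and the airline names and scores are invented.

```python
import numpy as np

def topsis(matrix, weights):
    """Plain TOPSIS ranking; all criteria are treated as benefit criteria."""
    X = np.asarray(matrix, float)
    w = np.asarray(weights, float)
    R = X / np.sqrt((X ** 2).sum(axis=0))      # vector normalization
    V = R * w                                  # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)        # closeness coefficient per alternative

# Three hypothetical airlines scored on three aggregated satisfaction criteria.
airlines = ["Airline A", "Airline B", "Airline C"]
scores = [[4.5, 4.2, 4.7],
          [3.8, 4.0, 3.6],
          [4.1, 3.9, 4.3]]
closeness = topsis(scores, weights=[0.4, 0.3, 0.3])
for name, c in sorted(zip(airlines, closeness), key=lambda t: -t[1]):
    print(name, round(c, 3))
```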

21 pages, 737 KB  
Article
A Model for the Formation of Beliefs and Social Norms Based on the Satisfaction Problem (SAT)
by Bastien Chopard, Franck Raynaud and Julien Stalhandske
Entropy 2025, 27(4), 358; https://doi.org/10.3390/e27040358 - 28 Mar 2025
Cited by 2 | Viewed by 909
Abstract
We propose a numerical representation of beliefs in social systems based on the so-called SAT problem in computer science. The main idea is that a belief system is a set of true/false values associated with claims or propositions. Each individual assigns these values according to its cognitive system in order to minimize logical contradictions, thus trying to solve a satisfaction problem. Social interactions between agents that disagree on a proposition can be introduced in order to see how, in the long term, social norms and competing belief systems build up in a population. Among other metrics, entropy is used to characterize the diversity of belief systems.
(This article belongs to the Special Issue Entropy-Based Applications in Sociophysics II)
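In this picture a belief system is a truth assignment over propositions, each agent flips values to reduce the number of violated logical constraints, and the population-level diversity of the resulting assignments can be summarized with Shannon entropy. The sketch below is only a toy of that idea (random two-literal clauses, greedy single-flip repair, no social interaction), not the paper's model.

```python
import math
import random

def violated(assignment, clauses):
    """Count clauses (lists of signed literals) not satisfied by the assignment."""
    def sat(clause):
        return any(assignment[abs(l) - 1] == (l > 0) for l in clause)
    return sum(not sat(c) for c in clauses)

def settle(assignment, clauses, steps=50):
    """Greedy repair: flip the single proposition that most reduces contradictions."""
    a = list(assignment)
    for _ in range(steps):
        best = min(range(len(a)),
                   key=lambda i: violated(a[:i] + [not a[i]] + a[i + 1:], clauses))
        candidate = a[:best] + [not a[best]] + a[best + 1:]
        if violated(candidate, clauses) >= violated(a, clauses):
            break
        a = candidate
    return tuple(a)

random.seed(1)
n_props, n_agents = 4, 30
literals = [l for i in range(1, n_props + 1) for l in (i, -i)]
clauses = [random.sample(literals, 2) for _ in range(5)]   # random two-literal claims

beliefs = [settle([random.random() < 0.5 for _ in range(n_props)], clauses)
           for _ in range(n_agents)]
counts = {}
for b in beliefs:
    counts[b] = counts.get(b, 0) + 1
probs = [c / n_agents for c in counts.values()]
diversity = -sum(p * math.log2(p) for p in probs)
print(f"{len(counts)} distinct belief systems, entropy = {diversity:.2f} bits")
```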

28 pages, 6333 KB  
Article
Hybrid Machine Learning-Based Fault-Tolerant Sensor Data Fusion and Anomaly Detection for Fire Risk Mitigation in IIoT Environment
by Jayameena Desikan, Sushil Kumar Singh, A. Jayanthiladevi, Shashi Bhushan, Vinay Rishiwal and Manish Kumar
Sensors 2025, 25(7), 2146; https://doi.org/10.3390/s25072146 - 28 Mar 2025
Cited by 14 | Viewed by 3977
Abstract
In the oil and gas IIoT environment, fire detection systems heavily depend on fire sensor data, which can be prone to inaccuracies due to faulty or unreliable sensors. These sensor issues, such as noise, missing values, outliers, sensor drift, and faulty readings, can lead to delayed or missed fire predictions, posing significant safety and operational risks in the oil and gas industrial IoT environment. This paper presents an approach for handling faulty sensors in edge servers within an IIoT environment to enhance the reliability and accuracy of fire prediction through multi-sensor fusion preprocessing, machine learning (ML)-driven probabilistic model adjustment, and uncertainty handling. First, a real-time anomaly detection and statistical assessment mechanism is employed to preprocess sensor data, filtering out faulty readings and normalizing data from multiple sensor types using dynamic thresholding, which adapts to sensor behavior in real-time. The proposed approach also deploys machine learning algorithms to dynamically adjust probabilistic models based on real-time sensor reliability, thereby improving prediction accuracy even in the presence of unreliable sensor data. A belief mass assignment mechanism is introduced, giving more weight to reliable sensors to ensure they have a stronger influence on fire detection. Simultaneously, a dynamic belief update strategy continuously adjusts sensor trust levels, reducing the impact of faulty readings over time. Additionally, uncertainty measurements using Hellinger and Deng entropy, along with Dempster–Shafer Theory, enable the integration of conflicting sensor inputs and enhance decision-making in fire detection. This approach improves decision-making by managing sensor discrepancies and provides a reliable solution for real-time fire predictions, even in the presence of faulty sensor readings, thereby mitigating the fire risks in IIoT environments.
(This article belongs to the Section Internet of Things)
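Conflicting sensor inputs are integrated above with Dempster–Shafer theory; the classical combination rule is $m(A) = \frac{1}{1-K}\sum_{B\cap C = A} m_1(B)\,m_2(C)$ with conflict $K = \sum_{B\cap C=\emptyset} m_1(B)\,m_2(C)$. The following is a minimal sketch of that rule only; the belief-mass assignment, trust updating, and Hellinger/Deng-entropy weighting described in the abstract are not included, and the sensor masses are made up.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two BPAs over the same frame."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two fire sensors reporting over the frame {fire, no_fire}.
frame = frozenset({"fire", "no_fire"})
m_smoke = {frozenset({"fire"}): 0.7, frame: 0.3}
m_temp  = {frozenset({"fire"}): 0.6, frozenset({"no_fire"}): 0.1, frame: 0.3}
print(dempster_combine(m_smoke, m_temp))
```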

40 pages, 687 KB  
Article
Irreversibility, Dissipation, and Its Measure: A New Perspective
by Purushottam Das Gujrati
Symmetry 2025, 17(2), 232; https://doi.org/10.3390/sym17020232 - 5 Feb 2025
Cited by 2 | Viewed by 1484
Abstract
Dissipation and irreversibility are two central concepts of classical thermodynamics that are often treated as synonymous. Dissipation $D$ is the lost or dissipated work $W_\text{diss} \ge 0$, but it is commonly quantified by the entropy generation $\Delta_i S$ in an isothermal irreversible macroscopic process, often expressed as the Kullback–Leibler distance $D_\text{KL}$ in the modern literature. We argue that $D_\text{KL}$ is nonthermodynamic and is erroneously justified for quantification by mistakenly equating the exchange microwork $\Delta_e W_k$ with the system-intrinsic microwork $\Delta W_k = \Delta_e W_k + \Delta_i W_k$, a very common error permeating stochastic thermodynamics that was first pointed out several years ago; see text. Recently, it was discovered that dissipation $D$ is properly identified by $\Delta_i W \ge 0$ for all spontaneously irreversible processes and all temperatures $T$, positive and negative, in an isolated system. As $T$ plays an important role in the quantification, dissipation allows for $\Delta_i S \ge 0$ for $T > 0$ and $\Delta_i S < 0$ for $T < 0$, a surprising result. The connection of $D$ with $W_\text{diss}$ and its extension to interacting systems have not been explored and are attempted here. It is found that $D$ is not always proportional to $\Delta_i S$. The determination of $D$ requires $d_i p_k$, but we show that the Fokker–Planck and master equations are not general enough to determine it, contrary to common belief. We modify the Fokker–Planck equation to fix this issue. We also find that detailed balance allows all microstates to remain disconnected, without any transitions among them, in an equilibrium macrostate, another surprising result. We argue that Liouville's theorem should not apply to irreversible processes, contrary to claims otherwise. We suggest using nonequilibrium statistical mechanics in extended space, where the $p_k$'s are uniquely determined, to evaluate $D$.
(This article belongs to the Section Physics)
