Search Results (407)

Search Parameters:
Keywords = Kullback–Leibler divergence

14 pages, 3484 KB  
Article
Multiparametric Quantitative Ultrasound as a Potential Imaging Biomarker for Noninvasive Detection of Nonalcoholic Steatohepatitis: A Clinical Feasibility Study
by Trina Chattopadhyay, Hsien-Jung Chan, Duy Chi Le, Chiao-Yin Wang, Dar-In Tai, Zhuhuang Zhou and Po-Hsiang Tsui
Diagnostics 2025, 15(17), 2214; https://doi.org/10.3390/diagnostics15172214 - 1 Sep 2025
Abstract
Objectives: The FibroScan–aspartate transaminase (AST) score (FAST score) is a hybrid biomarker combining ultrasound and blood test data for identifying nonalcoholic steatohepatitis (NASH). This study aimed to assess the feasibility of using quantitative ultrasound (QUS) biomarkers related to hepatic steatosis for NASH detection and to compare their diagnostic performance with the FAST score. Methods: A total of 137 participants, comprising 71 individuals with NASH and 66 with non-NASH (including 49 normal controls), underwent FibroScan and ultrasound exams. QUS imaging features (Nakagami parameter m, homodyned-K parameter μ, entropy H, and attenuation coefficient α) were extracted from backscattered radiofrequency data. A weighted QUS parameter based on m, μ, H, and α was derived via linear discriminant analysis. NASH was diagnosed based on liver biopsy findings using the nonalcoholic fatty liver disease activity score (NAS). Diagnostic performance was evaluated using the area under the receiver operating characteristic curve (AUROC) and compared with the FAST score using the DeLong test. Separation metrics, including the complement of the overlap coefficient, Bhattacharyya distance, Kullback–Leibler divergence, and silhouette score, were used to assess inter-group separability. Results: All QUS parameters were significantly elevated in NASH patients (p < 0.05). AUROC values for individual QUS features ranged from 0.82 to 0.91, with the weighted QUS parameter achieving 0.91. The FAST score had the highest AUROC (0.96), though differences from the weighted QUS and homodyned-K parameters were not statistically significant (p > 0.05). Separation metrics ranked the FAST score highest, closely followed by the weighted QUS parameter. Conclusions: QUS biomarkers can be repurposed for NASH detection, with the weighted QUS parameter offering diagnostic accuracy comparable to the FAST score and serving as a promising, blood-free alternative.
(This article belongs to the Section Medical Imaging and Theranostics)
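The four separation metrics listed in the abstract are straightforward to reproduce for any pair of one-dimensional feature samples. A minimal sketch using synthetic stand-in data (not the study's measurements):

```python
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical stand-ins for one QUS feature in each group (not the study's data).
non_nash = rng.normal(0.55, 0.08, 66)
nash = rng.normal(0.72, 0.10, 71)

edges = np.histogram_bin_edges(np.r_[non_nash, nash], bins=32)
p = np.histogram(non_nash, edges)[0] + 1e-12
q = np.histogram(nash, edges)[0] + 1e-12
p, q = p / p.sum(), q / q.sum()

ovl_complement = 1.0 - np.minimum(p, q).sum()    # complement of overlap coefficient
bhattacharyya = -np.log(np.sum(np.sqrt(p * q)))  # Bhattacharyya distance
kl = np.sum(p * np.log(p / q))                   # KL(non-NASH || NASH)

x = np.r_[non_nash, nash].reshape(-1, 1)
labels = np.r_[np.zeros(non_nash.size), np.ones(nash.size)]
sil = silhouette_score(x, labels)                # silhouette score
print(f"1-OVL={ovl_complement:.3f}  B={bhattacharyya:.3f}  KL={kl:.3f}  sil={sil:.3f}")
```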

34 pages, 5703 KB  
Article
Evaluating Sampling Strategies for Characterizing Energy Demand in Regions of Colombia Without AMI Infrastructure
by Oscar Alberto Bustos, Julián David Osorio, Javier Rosero-García, Cristian Camilo Marín-Cano and Luis Alirio Bolaños
Appl. Sci. 2025, 15(17), 9588; https://doi.org/10.3390/app15179588 - 30 Aug 2025
Abstract
This study presents and evaluates three sampling strategies to characterize electricity demand in regions of Colombia with limited metering infrastructure. These areas lack Advanced Metering Infrastructure (AMI), relying instead on traditional monthly consumption records. The objective of the research is to obtain user samples that are representative of the original population and logistically efficient, in order to support energy planning and decision-making. The analysis draws on five years of historical data from 2020 to 2024. It includes monthly energy consumption, geographic coordinates, customer classification, and population type, covering over 500,000 users across four subregions of operation determined by the regional grid operator: North, South, Center, and East. The proposed methodologies are based on Shannon entropy, consumption-based probabilistic sampling, and Kullback–Leibler divergence minimization. Each method is assessed for its ability to capture demand variability, ensure representativeness, and optimize field deployment. Representativeness is evaluated by comparing the differences in class proportions between the sample and the original population, complemented by the Pearson correlation coefficient between their distributions. Results indicate that entropy-based sampling excels in logistical simplicity and preserves categorical diversity, while KL divergence offers the best statistical fit to population characteristics. The findings demonstrate how combining information theory and statistical optimization enables flexible, scalable sampling solutions for demand characterization in under-instrumented electricity grids.
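To make the KL-minimization idea concrete: KL(q || p) vanishes when the sample class proportions q match the population proportions p, so a largest-remainder proportional allocation is a natural baseline. A sketch with hypothetical class counts for one subregion (not the paper's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical customer-class mix for one subregion (values are made up).
population = {"residential": 410_000, "commercial": 62_000,
              "official": 18_000, "industrial": 10_000}
n_sample = 1_000

counts = np.array(list(population.values()), float)
p = counts / counts.sum()                    # population proportions

# Proportional (largest-remainder) allocation approximates the KL minimizer,
# since KL(q || p) = 0 when sample proportions q match p exactly.
raw = p * n_sample
alloc = np.floor(raw).astype(int)
alloc[np.argsort(raw - alloc)[::-1][: n_sample - alloc.sum()]] += 1

q = alloc / alloc.sum()
kl = np.sum(q * np.log((q + 1e-12) / (p + 1e-12)))
r, _ = pearsonr(p, q)                        # representativeness check
print(dict(zip(population, alloc)), f"KL={kl:.2e}", f"Pearson r={r:.4f}")
```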

31 pages, 3554 KB  
Article
FFFNet: A Food Feature Fusion Model with Self-Supervised Clustering for Food Image Recognition
by Zhejun Kuang, Haobo Gao, Jian Zhao, Liu Wang and Lei Sun
Appl. Sci. 2025, 15(17), 9542; https://doi.org/10.3390/app15179542 - 29 Aug 2025
Abstract
With the growing emphasis on healthy eating and nutrition management in modern society, food image recognition has become increasingly important. However, it faces challenges such as large intra-class differences and high inter-class similarities. To tackle these issues, we present a Food Feature Fusion Network (FFFNet), which leverages a multi-head cross-attention mechanism to integrate the local detail-capturing capability of Convolutional Neural Networks with the global modeling capacity of Vision Transformers. This enables the model to capture key discriminative features in challenging food recognition tasks. FFFNet also introduces self-supervised clustering, generating pseudo-labels from the feature space distribution and employing a clustering objective derived from Kullback–Leibler divergence to optimize that distribution. By maximizing similarity between features and their corresponding cluster centers, and minimizing similarity with non-corresponding centers, it promotes intra-class compactness and inter-class separability, thereby addressing the core challenges. We evaluated FFFNet on the ISIA Food-500, ETHZ Food-101, and UEC Food256 datasets, attaining Top-1/Top-5 accuracies of 65.31%/88.94%, 89.98%/98.37%, and 80.91%/94.92%, respectively, outperforming existing approaches.
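The clustering objective described here follows the familiar DEC pattern: Student's-t soft assignments are sharpened into pseudo-label targets and trained with KL(P || Q). A NumPy sketch under that assumption, with random vectors standing in for FFFNet's fused features:

```python
import numpy as np

def soft_assignments(z, centers, alpha=1.0):
    """Student's-t soft assignment q_ij of feature z_i to center mu_j (DEC-style)."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(1, keepdims=True)

def target_distribution(q):
    """Sharpened pseudo-label targets p_ij = (q^2 / cluster frequency), renormalized."""
    w = q ** 2 / q.sum(0)
    return w / w.sum(1, keepdims=True)

def kl_clustering_loss(p, q, eps=1e-12):
    """Mean per-sample KL(P || Q), the quantity gradient descent would minimize."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))) / len(p))

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 64))               # stand-in fused features
mu = z[rng.choice(256, 5, replace=False)]    # 5 initial cluster centers
q = soft_assignments(z, mu)
p = target_distribution(q)
print("KL(P || Q) =", kl_clustering_loss(p, q))
```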

10 pages, 304 KB  
Proceeding Paper
A Rapid, Fully Automated Denoising Method for Time Series Utilizing Wavelet Theory
by Livio Fenga
Eng. Proc. 2025, 101(1), 18; https://doi.org/10.3390/engproc2025101018 - 25 Aug 2025
Abstract
A wavelet-based noise reduction method for time series is proposed. Traditional denoising techniques often adopt a “trial-and-error” approach, which can prove inefficient and may result in suboptimal filtering outcomes. In contrast, our method systematically selects the most suitable wavelet function from a predefined set, along with its associated tuning parameters, to ensure an optimal denoising process. The denoised series produced by this approach maximizes a suitable objective function based on information-theoretic divergence. This is particularly significant for economic time series, which are frequently characterized by non-linear dynamics and erratic patterns, often influenced by measurement errors and various external disturbances. The method’s performance is evaluated using time series data derived from the Business Confidence Climate Survey, which is freely available online from the Italian National Institute of Statistics. The results of our empirical analysis demonstrate the effectiveness of the proposed method in delivering robust filtering capabilities, adeptly distinguishing informative signals from noise, and successfully eliminating uninformative components from the time series. This capability not only enhances the clarity of the data but also significantly improves the reliability of subsequent analyses, such as forecasting.
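A minimal version of "pick the wavelet that maximizes a divergence-based objective" can be sketched with PyWavelets; the candidate set, universal threshold, and histogram-KL objective below are illustrative assumptions, not the paper's calibrated procedure:

```python
import numpy as np
import pywt

def denoise(x, wavelet, level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def objective(x, xd, bins=64, eps=1e-12):
    """Divergence between raw and denoised histograms (illustrative criterion only)."""
    edges = np.histogram_bin_edges(np.r_[x, xd], bins)
    p = np.histogram(x, edges)[0] + eps
    q = np.histogram(xd, edges)[0] + eps
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
x = np.sin(8 * np.pi * t) + 0.4 * rng.normal(size=t.size)  # noisy synthetic series
best = max(("db4", "db8", "sym5", "coif3"), key=lambda w: objective(x, denoise(x, w)))
print("selected wavelet:", best)
```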

23 pages, 7614 KB  
Article
A Cascaded Data-Driven Approach for Photovoltaic Power Output Forecasting
by Chuan Xiang, Xiang Liu, Wei Liu and Tiankai Yang
Mathematics 2025, 13(17), 2728; https://doi.org/10.3390/math13172728 - 25 Aug 2025
Abstract
Accurate photovoltaic (PV) power output forecasting is critical for ensuring stable operation of modern power systems, yet it is constrained by high-dimensional redundancy in input weather data and the inherent heterogeneity of output scenarios. To address these challenges, this paper proposes a novel cascaded data-driven forecasting approach that enhances forecasting accuracy by systematically improving the feature extraction, scenario clustering, and temporal modeling stages. Firstly, guided by weather data–PV power output correlations, the Deep Autoencoder (DAE) is enhanced by integrating Pearson Correlation Coefficient loss, reconstruction loss, and a Kullback–Leibler divergence sparsity penalty into a multi-objective loss function to extract key weather factors. Secondly, the Fuzzy C-Means (FCM) algorithm is comprehensively refined through Mahalanobis distance-based sample similarity measurement, the max–min dissimilarity principle for initial center selection, and Partition Entropy Index-driven optimal cluster determination to effectively cluster complex PV power output scenarios. Thirdly, a Long Short-Term Memory–Temporal Pattern Attention (LSTM–TPA) model is constructed. It utilizes the gating mechanism and TPA to capture time-dependent relationships between key weather factors and PV power output within each scenario, thereby heightening sensitivity to key weather dynamics. Validation using actual data from distributed PV power plants demonstrates that: (1) The enhanced DAE eliminates redundant data while strengthening feature representation, thereby enabling extraction of key weather factors. (2) The enhanced FCM achieves marked improvements in both the Silhouette Coefficient and the Calinski–Harabasz Index, consequently generating distinct typical output scenarios. (3) The constructed LSTM–TPA model adaptively adjusts the forecasting weights and shows superior capability in capturing fine-grained temporal features. The proposed approach significantly outperforms conventional approaches (CNN–LSTM, ARIMA–LSTM), exhibiting the highest forecasting accuracy (97.986%), the best evaluation metrics (e.g., Mean Absolute Error), and strong generalization capability. This cascaded data-driven model achieves a comprehensive improvement in the accuracy and robustness of PV power output forecasting through step-by-step collaborative optimization.
(This article belongs to the Special Issue Artificial Intelligence and Game Theory)
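The KL sparsity penalty in such a multi-objective loss is the standard sparse-autoencoder term, pushing mean hidden activations toward a small target rate ρ. A sketch of the three-term composite with assumed weights (the paper's weighting is not given in the abstract):

```python
import numpy as np

def kl_sparsity(rho, rho_hat, eps=1e-12):
    """KL(rho || rho_hat) sparsity penalty over mean hidden-unit activations."""
    rho_hat = np.clip(rho_hat, eps, 1 - eps)
    return np.sum(rho * np.log(rho / rho_hat) +
                  (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def dae_loss(x, x_rec, h, y_power, w=(1.0, 0.5, 0.1), rho=0.05):
    """Illustrative composite: reconstruction + correlation + sparsity (weights assumed)."""
    mse = np.mean((x - x_rec) ** 2)
    # Encourage latent features correlated with PV power output.
    r = np.corrcoef(h.mean(axis=1), y_power)[0, 1]
    corr_term = 1.0 - abs(r)
    sparsity = kl_sparsity(rho, h.mean(axis=0))   # mean activation per hidden unit
    return w[0] * mse + w[1] * corr_term + w[2] * sparsity

rng = np.random.default_rng(0)
x = rng.random((128, 20)); x_rec = x + 0.05 * rng.normal(size=x.shape)
h = rng.random((128, 8)) * 0.1; y = rng.random(128)   # stand-in activations/targets
print("loss =", dae_loss(x, x_rec, h, y))
```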

22 pages, 370 KB  
Article
Tight Bounds Between the Jensen–Shannon Divergence and the Minmax Divergence
by Arseniy Akopyan, Herbert Edelsbrunner, Žiga Virk and Hubert Wagner
Entropy 2025, 27(8), 854; https://doi.org/10.3390/e27080854 - 11 Aug 2025
Abstract
Motivated by questions arising at the intersection of information theory and geometry, we compare two dissimilarity measures between finite categorical distributions. One is the well-known Jensen–Shannon divergence, which is easy to compute and whose square root is a proper metric. The other is what we call the minmax divergence, which is harder to compute. Just like the Jensen–Shannon divergence, it arises naturally from the Kullback–Leibler divergence. The main contribution of this paper is a proof showing that the minmax divergence can be tightly approximated by the Jensen–Shannon divergence. The bounds suggest that the square root of the minmax divergence is a metric, and we prove that this is indeed true in the one-dimensional case. The general case remains open. Finally, we consider analogous questions in the context of another Bregman divergence and the corresponding Burbea–Rao (Jensen–Bregman) divergence.
(This article belongs to the Section Information Theory, Probability and Statistics)
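The Jensen–Shannon half of the comparison is easy to compute directly from the KL divergence; the minmax divergence is defined only in the paper and is not reproduced here. A small sketch of JSD and its metric square root:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two finite categorical distributions."""
    p = np.asarray(p, float) + eps; q = np.asarray(q, float) + eps
    p /= p.sum(); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def jsd(p, q):
    """Jensen–Shannon divergence: average KL to the midpoint mixture m = (p+q)/2."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    m = (p / p.sum() + q / q.sum()) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p, q = [0.7, 0.2, 0.1], [0.1, 0.3, 0.6]
d = jsd(p, q)
print(f"JSD = {d:.4f}, sqrt(JSD) = {np.sqrt(d):.4f}  # the square root is a metric")
```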

15 pages, 415 KB  
Article
Enhancing MusicGen with Prompt Tuning
by Hohyeon Shin, Jeonghyeon Im and Yunsick Sung
Appl. Sci. 2025, 15(15), 8504; https://doi.org/10.3390/app15158504 - 31 Jul 2025
Abstract
Generative AI has been gaining attention across various creative domains. In particular, MusicGen stands out as a representative approach capable of generating music based on text or audio inputs. However, it has limitations in producing high-quality outputs for specific genres and fully reflecting user intentions. This paper proposes a prompt tuning technique that effectively adjusts the output quality of MusicGen without modifying its original parameters and optimizes its ability to generate music tailored to specific genres and styles. Experiments were conducted to compare the performance of the traditional MusicGen with the proposed method and to evaluate the quality of generated music using the Contrastive Language-Audio Pretraining (CLAP) and Kullback–Leibler Divergence (KLD) scoring approaches. The results demonstrated that the proposed method significantly improved output quality and musical coherence, particularly for specific genres and styles. Compared with the traditional model, the CLAP score increased by 0.1270 and the KLD score by 0.00403 on average. The effectiveness of prompt tuning in optimizing the performance of MusicGen validates the proposed method and highlights its potential for advancing generative AI-based music generation tools.
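The KLD score in music-generation evaluation typically compares the tag distributions an audio classifier assigns to reference and generated clips. A sketch under that assumption, with made-up posteriors:

```python
import numpy as np

def kld_score(p_ref, p_gen, eps=1e-12):
    """KL between classifier label distributions for reference and generated audio;
    a common proxy for the KLD metric in music-generation evaluation."""
    p = np.asarray(p_ref, float) + eps
    q = np.asarray(p_gen, float) + eps
    p /= p.sum(); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical tag posteriors from an audio classifier (values are made up).
reference = [0.60, 0.25, 0.10, 0.05]
generated = [0.52, 0.30, 0.12, 0.06]
print("KLD =", round(kld_score(reference, generated), 5))
```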

21 pages, 343 KB  
Proceeding Paper
Detecting Financial Bubbles with Tail-Weighted Entropy
by Omid M. Ardakani
Comput. Sci. Math. Forum 2025, 11(1), 3; https://doi.org/10.3390/cmsf2025011003 - 25 Jul 2025
Abstract
This paper develops a novel entropy-based framework to quantify tail risk and detect speculative bubbles in financial markets. By integrating extreme value theory with information theory, I introduce the Tail-Weighted Entropy (TWE) measure, which captures how information scales with extremeness in asset price distributions. I derive explicit bounds for TWE under heavy-tailed models and establish its connection to tail index parameters, revealing a phase transition in entropy decay rates during bubble formation. Empirically, I demonstrate that TWE-based signals detect crises in equities, commodities, and cryptocurrencies days earlier than traditional variance-ratio tests, with Bitcoin’s 2021 collapse identified weeks prior to the peak. The results show that entropy decay—not volatility explosions—serves as the primary precursor to systemic risk, offering policymakers a robust tool for preemptive crisis management.
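The abstract does not reproduce the TWE formula, so the sketch below only illustrates its two ingredients: a tail-index estimate from extreme value theory and the entropy of tail exceedances. Both the Hill estimator and the exceedance entropy are generic stand-ins, not the author's measure:

```python
import numpy as np

def hill_tail_index(x, k=50):
    """Hill estimator of the tail index from the k largest absolute returns."""
    s = np.sort(np.abs(x))[::-1]
    return k / np.sum(np.log(s[:k] / s[k]))

def tail_entropy(x, q=0.95, bins=20, eps=1e-12):
    """Shannon entropy of returns beyond the q-quantile; a generic stand-in,
    not the paper's TWE formula (which is not given in the abstract)."""
    thr = np.quantile(np.abs(x), q)
    tail = np.abs(x)[np.abs(x) > thr]
    p = np.histogram(tail, bins=bins)[0] + eps
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=5_000) * 0.01   # heavy-tailed synthetic returns
print("tail index ~", round(hill_tail_index(returns), 2),
      "| tail entropy ~", round(tail_entropy(returns), 3))
```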
19 pages, 5415 KB  
Article
Intelligent Optimized Diagnosis for Hydropower Units Based on CEEMDAN Combined with RCMFDE and ISMA-CNN-GRU-Attention
by Wenting Zhang, Huajun Meng, Ruoxi Wang and Ping Wang
Water 2025, 17(14), 2125; https://doi.org/10.3390/w17142125 - 17 Jul 2025
Abstract
This study suggests a hybrid approach that combines improved feature selection and intelligent diagnosis to increase the operational safety and intelligent diagnosis capabilities of hydropower units. Complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) is first used to process the vibration data. A novel comprehensive index is constructed by combining the Pearson correlation coefficient, mutual information (MI), and Kullback–Leibler divergence (KLD) to select intrinsic mode functions (IMFs). Next, feature extraction is performed on the selected IMFs using Refined Composite Multiscale Fluctuation Dispersion Entropy (RCMFDE). Then, time- and frequency-domain features are screened by calculating dispersion and combined with IMF features to build a hybrid feature vector. The vector is then fed into a CNN-GRU-Attention model for intelligent diagnosis. The improved slime mold algorithm (ISMA) is employed for the first time to optimize the hyperparameters of the CNN-GRU-Attention model. The experimental results show that the classification accuracy reaches 96.79% for raw signals and 93.33% for noisy signals, significantly outperforming traditional methods. This study incorporates entropy-based feature extraction, combines hyperparameter optimization with the classification model, and addresses the limitations of single feature selection methods for non-stationary and nonlinear signals. The proposed approach provides an excellent solution for intelligent optimized diagnosis of hydropower units.
(This article belongs to the Special Issue Optimization-Simulation Modeling of Sustainable Water Resource)
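A comprehensive IMF-selection index of this kind can be sketched directly: score each IMF by Pearson correlation with the raw signal, mutual information with it, and an (inverted) histogram KL divergence, then keep the top scorers. The equal weights below are an assumption, not the paper's calibration:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def kld_hist(a, b, bins=32, eps=1e-12):
    """Histogram KL divergence between an IMF and the raw signal."""
    edges = np.histogram_bin_edges(np.r_[a, b], bins)
    p = np.histogram(a, edges)[0] + eps
    q = np.histogram(b, edges)[0] + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def composite_index(imf, signal, w=(1/3, 1/3, 1/3)):
    """Combine Pearson r, MI, and inverted KLD; equal weights are assumed."""
    r = abs(np.corrcoef(imf, signal)[0, 1])
    mi = mutual_info_regression(imf.reshape(-1, 1), signal, random_state=0)[0]
    kld = kld_hist(imf, signal)
    return w[0] * r + w[1] * mi + w[2] / (1.0 + kld)   # lower KLD -> higher score

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
signal = np.sin(30 * np.pi * t) + 0.5 * np.sin(8 * np.pi * t) + 0.2 * rng.normal(size=t.size)
imfs = [np.sin(30 * np.pi * t), np.sin(8 * np.pi * t), rng.normal(size=t.size)]
scores = [composite_index(m, signal) for m in imfs]
print("best IMF:", int(np.argmax(scores)), [round(s, 3) for s in scores])
```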

18 pages, 9981 KB  
Article
Toward Adaptive Unsupervised and Blind Image Forgery Localization with ViT-VAE and a Gaussian Mixture Model
by Haichang Yin, KinTak U, Jing Wang and Wuyue Ma
Mathematics 2025, 13(14), 2285; https://doi.org/10.3390/math13142285 - 16 Jul 2025
Abstract
Most image forgery localization methods rely on supervised learning, requiring large labeled datasets for training. Recently, several unsupervised approaches based on the variational autoencoder (VAE) framework have been proposed for forged pixel detection. In these approaches, the latent space is built by a simple Gaussian distribution or a Gaussian Mixture Model. Despite their success, there are still some limitations: (1) A simple Gaussian distribution assumption in the latent space constrains performance due to the diverse distribution of forged images. (2) Gaussian Mixture Models (GMMs) introduce non-convex log-sum-exp functions in the Kullback–Leibler (KL) divergence term, leading to gradient instability and convergence issues during training. (3) Estimating GMM mixing coefficients typically involves either the expectation-maximization (EM) algorithm before VAE training or a multilayer perceptron (MLP), both of which increase computational complexity. To address these limitations, we propose the Deep ViT-VAE-GMM (DVVG) framework. First, we employ Jensen’s inequality to simplify the KL divergence computation, reducing gradient instability and improving training stability. Second, we introduce convolutional neural networks (CNNs) to adaptively estimate the mixing coefficients, enabling an end-to-end architecture while significantly lowering computational costs. Experimental results on benchmark datasets demonstrate that DVVG not only enhances VAE performance but also improves efficiency in modeling complex latent distributions. Our method effectively balances performance and computational feasibility, making it a practical solution for real-world image forgery localization.
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)
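The Jensen's-inequality simplification has a concrete counterpart: because log is concave, KL(q || Σ_k π_k p_k) ≤ Σ_k π_k KL(q || p_k), replacing the non-convex log-sum-exp term with a convex combination of closed-form Gaussian KLs. A one-dimensional numerical check of that bound (GMM parameters are made up):

```python
import numpy as np

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL( N(m0, s0^2) || N(m1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

rng = np.random.default_rng(0)
m0, s0 = 0.3, 0.8                      # posterior q (hypothetical)
pis = np.array([0.5, 0.3, 0.2])        # GMM prior mixing coefficients (made up)
mus = np.array([0.0, 1.5, -1.0])
sds = np.array([1.0, 0.7, 1.2])

# Jensen upper bound: KL(q || mixture) <= sum_k pi_k * KL(q || component_k).
bound = np.sum(pis * kl_gauss(m0, s0, mus, sds))

# Monte Carlo estimate of the exact (log-sum-exp) KL for comparison.
x = rng.normal(m0, s0, 200_000)
log_q = -0.5 * ((x - m0) / s0) ** 2 - np.log(s0 * np.sqrt(2 * np.pi))
comp = pis * np.exp(-0.5 * ((x[:, None] - mus) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
kl_exact = np.mean(log_q - np.log(comp.sum(axis=1)))
print(f"exact KL ~ {kl_exact:.4f} <= Jensen bound {bound:.4f}")
```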

23 pages, 3404 KB  
Article
MST-AI: Skin Color Estimation in Skin Cancer Datasets
by Vahid Khalkhali, Hayan Lee, Joseph Nguyen, Sergio Zamora-Erazo, Camille Ragin, Abhishek Aphale, Alfonso Bellacosa, Ellis P. Monk and Saroj K. Biswas
J. Imaging 2025, 11(7), 235; https://doi.org/10.3390/jimaging11070235 - 13 Jul 2025
Abstract
The absence of skin color information in skin cancer datasets poses a significant challenge for accurate diagnosis using artificial intelligence models, particularly for non-white populations. In this paper, based on the Monk Skin Tone (MST) scale, which is less biased than the Fitzpatrick scale, we propose MST-AI, a novel method for detecting skin color in images of large datasets, such as the International Skin Imaging Collaboration (ISIC) archive. The approach includes automatic frame removal, lesion segmentation and removal using convolutional neural networks, and modeling of normal skin tones with a Variational Bayesian Gaussian Mixture Model (VB-GMM). The distribution of skin color predictions was compared with MST scale probability distribution functions (PDFs) using the Kullback–Leibler Divergence (KLD) metric. Validation against manual annotations and comparison with K-means clustering of image and skin mean RGBs demonstrated the superior performance of MST-AI, with Kendall’s Tau, Spearman’s Rho, and Normalized Discounted Cumulative Gain (NDCG) of 0.68, 0.69, and 1.00, respectively. This research lays the groundwork for developing unbiased AI models for early skin cancer diagnosis by addressing skin color imbalances in large datasets.
(This article belongs to the Section AI in Imaging)
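Matching a predicted skin-color distribution to per-tone reference PDFs with KLD reduces to an argmin over divergences. A toy sketch with hypothetical 10-bin distributions (the actual MST PDFs are not reproduced here):

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    p = np.asarray(p, float) + eps; q = np.asarray(q, float) + eps
    p /= p.sum(); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical per-image skin-tone posterior over 10 MST bins (values made up).
image_pdf = [0.01, 0.02, 0.05, 0.12, 0.30, 0.28, 0.14, 0.05, 0.02, 0.01]

# Stand-in MST reference PDFs: one peaked distribution per tone bin.
mst_pdfs = np.eye(10) * 0.82 + 0.02
best = min(range(10), key=lambda k: kld(image_pdf, mst_pdfs[k]))
print("assigned MST tone:", best + 1)
```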

24 pages, 3524 KB  
Article
Transient Stability Assessment of Power Systems Based on Temporal Feature Selection and LSTM-Transformer Variational Fusion
by Zirui Huang, Zhaobin Du, Jiawei Gao and Guoduan Zhong
Electronics 2025, 14(14), 2780; https://doi.org/10.3390/electronics14142780 - 10 Jul 2025
Abstract
To address the challenges brought by the high penetration of renewable energy in power systems, such as multi-scale dynamic interactions, high feature dimensionality, and limited model generalization, this paper proposes a transient stability assessment (TSA) method that combines temporal feature selection with deep learning-based modeling. First, a two-stage feature selection strategy is designed using the inter-class Mahalanobis distance and Spearman rank correlation. This helps extract highly discriminative, low-redundancy features from wide-area measurement system (WAMS) time-series data. Then, a parallel LSTM-Transformer architecture is constructed to capture both short-term local fluctuations and long-term global dependencies. A variational inference mechanism based on a Gaussian mixture model (GMM) is introduced to enable dynamic representation fusion and uncertainty modeling. A composite loss function combining improved focal loss and Kullback–Leibler (KL) divergence regularization is designed to enhance model robustness and training stability under complex disturbances. The proposed method is validated on a modified IEEE 39-bus system. Results show that it outperforms existing models in accuracy, robustness, and interpretability, providing an effective solution for TSA in power systems with high renewable energy integration.
(This article belongs to the Special Issue Advanced Energy Systems and Technologies for Urban Sustainability)
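A focal-loss-plus-KL composite is simple to sketch. The version below pairs binary focal loss with the standard-normal KL term familiar from variational models; the paper's GMM-based regularizer and the balance weight λ are assumptions here:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-12):
    """Binary focal loss: down-weights easy samples (stable/unstable TSA labels)."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def kl_reg(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, 1) ) averaged over samples and latent dims."""
    return float(np.mean(0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)))

def composite_loss(p, y, mu, logvar, lam=0.1):
    # lam balances classification vs. latent regularization (weight assumed).
    return focal_loss(p, y) + lam * kl_reg(mu, logvar)

rng = np.random.default_rng(0)
p = rng.random(64); y = (rng.random(64) < 0.5).astype(int)
mu = 0.1 * rng.normal(size=(64, 16)); logvar = -0.5 + 0.1 * rng.normal(size=(64, 16))
print("loss =", round(composite_loss(p, y, mu, logvar), 4))
```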

16 pages, 662 KB  
Article
Augmenting Naïve Bayes Classifiers with k-Tree Topology
by Fereshteh R. Dastjerdi and Liming Cai
Mathematics 2025, 13(13), 2185; https://doi.org/10.3390/math13132185 - 4 Jul 2025
Abstract
The Bayesian network is a directed, acyclic graphical model that can offer a structured description of probabilistic dependencies among random variables. As powerful tools for classification tasks, Bayesian classifiers often require computing joint probability distributions, which can be computationally intractable due to potential full dependencies among feature variables. On the other hand, Naïve Bayes, which presumes zero dependencies among features, trades accuracy for efficiency and often underperforms. As a result, non-zero dependency structures, such as trees, are often used as more feasible probabilistic graph approximations; in particular, Tree Augmented Naïve Bayes (TAN) has been demonstrated to outperform Naïve Bayes and has become a popular choice. For applications where a variable is strongly influenced by multiple other features, TAN has been further extended to the k-dependency Bayesian classifier (KDB), where one feature can depend on up to k other features (for a given k ≥ 2). In such cases, however, the selection of the k parent features for each variable is often made through heuristic search methods (such as sorting), which do not guarantee an optimal approximation of network topology. In this paper, the novel notion of k-tree Augmented Naïve Bayes (k-TAN) is introduced to augment Naïve Bayesian classifiers with k-tree topology as an approximation of Bayesian networks. It is proved that, under the Kullback–Leibler divergence measurement, k-tree topology approximation of Bayesian classifiers loses the minimum information with the topology of a maximum spanning k-tree, where the edge weights of the graph are mutual information between random variables conditional upon the class label. In addition, while in general finding a maximum spanning k-tree is NP-hard for fixed k ≥ 2, this work shows that the approximation problem can be solved in time O(n^(k+1)) if the spanning k-tree is also required to retain a given Hamiltonian path in the graph. Therefore, this algorithm can be employed to ensure efficient approximation of Bayesian networks with k-tree augmented Naïve Bayesian classifiers with guaranteed minimum loss of information.
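For k = 1 this reduces to the classic TAN construction: weight each feature pair by the conditional mutual information I(Xi; Xj | Y) and take a maximum spanning tree. The sketch below implements that base case; the paper's k ≥ 2 generalization needs the Hamiltonian-path-constrained dynamic program and is not attempted:

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def cmi(xi, xj, y, bins=4):
    """Empirical conditional mutual information I(Xi; Xj | Y) for discrete data."""
    val = 0.0
    for c in np.unique(y):
        sel = (y == c)
        pc = sel.mean()
        joint = np.histogram2d(xi[sel], xj[sel], bins=bins)[0] / sel.sum()
        pi_ = joint.sum(1, keepdims=True); pj = joint.sum(0, keepdims=True)
        nz = joint > 0
        val += pc * np.sum(joint[nz] * np.log(joint[nz] / (pi_ @ pj)[nz]))
    return val

rng = np.random.default_rng(0)
n, d = 2000, 5
y = (rng.random(n) < 0.5).astype(int)
X = rng.integers(0, 4, size=(n, d)).astype(float)
X[:, 1] = (X[:, 0] + rng.integers(0, 2, n)) % 4    # make X1 depend on X0

W = np.zeros((d, d))
for i, j in combinations(range(d), 2):
    W[i, j] = cmi(X[:, i], X[:, j], y)

# Maximum spanning tree = MST on negated weights (the k = 1 base case).
mst = minimum_spanning_tree(-W).toarray()
edges = [(i, j) for i in range(d) for j in range(d) if mst[i, j] != 0]
print("TAN augmenting edges:", edges)
```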

15 pages, 2722 KB  
Article
Predicting the Evolution of Capacity Degradation Histograms of Rechargeable Batteries Under Dynamic Loads via Latent Gaussian Processes
by Daocan Wang, Xinggang Li and Jiahuan Lu
Energies 2025, 18(13), 3503; https://doi.org/10.3390/en18133503 - 2 Jul 2025
Abstract
Accurate prediction of lithium-ion battery capacity degradation under dynamic loads is crucial yet challenging due to limited data availability and high cell-to-cell variability. This study proposes a Latent Gaussian Process (GP) model to forecast the full distribution of capacity fade in the form of high-dimensional histograms, rather than relying on point estimates. The model integrates Principal Component Analysis with GP regression to learn temporal degradation patterns from partial early-cycle data of a target cell, using a fully degraded reference cell. Experiments on the NASA dataset with randomized dynamic load profiles demonstrate that Latent GP enables full-lifecycle capacity distribution prediction using only early-cycle observations. Compared with standard GP, long short-term memory (LSTM), and Monte Carlo Dropout LSTM baselines, it achieves superior accuracy in terms of Kullback–Leibler divergence and mean squared error. Sensitivity analyses further confirm the model’s robustness to input noise and hyperparameter settings, highlighting its potential for practical deployment in real-world battery health prognostics.
(This article belongs to the Section D: Energy Storage and Application)
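The PCA-plus-GP pipeline can be sketched end to end: project per-cycle capacity histograms onto a few principal components, fit one GP per component on early cycles, extrapolate, and score the reconstructed histograms with KL divergence. The synthetic histograms, kernel, and train split below are all illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

cycles, n_bins = 120, 30

# Synthetic stand-in for per-cycle capacity histograms that drift as the cell fades.
centers = np.linspace(1.0, 0.8, cycles)
grid = np.linspace(0.7, 1.1, n_bins)
H = np.exp(-0.5 * ((grid - centers[:, None]) / 0.03) ** 2)
H /= H.sum(1, keepdims=True)

pca = PCA(n_components=3).fit(H)        # latent degradation coordinates
Z = pca.transform(H)

# Fit one GP per latent dimension on early cycles, then extrapolate.
train = 40
t = np.arange(cycles, dtype=float).reshape(-1, 1)
kernel = RBF(length_scale=30.0) + WhiteKernel(1e-4)
Z_pred = np.column_stack([
    GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    .fit(t[:train], Z[:train, k]).predict(t)
    for k in range(3)
])
H_pred = np.clip(pca.inverse_transform(Z_pred), 1e-12, None)
H_pred /= H_pred.sum(1, keepdims=True)
kl = np.sum(H * np.log(H.clip(1e-12) / H_pred), axis=1)  # per-cycle KL score
print("mean KL on unseen cycles:", float(kl[train:].mean()))
```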

24 pages, 5959 KB  
Article
An Information Geometry-Based Track-Before-Detect Algorithm for Range-Azimuth Measurements in Radar Systems
by Jinguo Liu, Hao Wu, Zheng Yang, Xiaoqiang Hua and Yongqiang Cheng
Entropy 2025, 27(6), 637; https://doi.org/10.3390/e27060637 - 14 Jun 2025
Abstract
The detection of weak moving targets in heterogeneous clutter backgrounds is a significant challenge in radar systems. In this paper, we propose a track-before-detect (TBD) method based on information geometry (IG) theory applied to range-azimuth measurements, which extends the IG detectors to multi-frame detection through inter-frame information integration. The approach capitalizes on the distinctive benefits of the information geometry detection framework in scenarios with strong clutter, while enhancing the integration of information across multiple frames within the TBD approach. Specifically, target and clutter trajectories in multi-frame range-azimuth measurements are modeled on the Hermitian positive definite (HPD) and power spectrum (PS) manifolds. A scoring function based on information geometry, which uses Kullback–Leibler (KL) divergence as a geometric metric, is then devised to assess these motion trajectories. Moreover, this study devises a solution framework employing dynamic programming (DP) with constraints on state transitions, culminating in an integrated merit function. This algorithm identifies target trajectories by maximizing the integrated merit function. Experimental validation using real-recorded sea clutter datasets showcases the effectiveness of the proposed algorithm, yielding a minimum 3 dB enhancement in signal-to-clutter ratio (SCR) compared to traditional approaches.
(This article belongs to the Section Information Theory, Probability and Statistics)
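On the HPD manifold each resolution cell is summarized by a covariance matrix, and for zero-mean Gaussians the KL divergence between two such matrices has a closed form. A sketch with toy Toeplitz covariances standing in for clutter and target cells:

```python
import numpy as np

def kl_gaussian_hpd(S1, S2):
    """KL divergence between zero-mean Gaussians with covariance (HPD) matrices:
    0.5 * ( tr(S2^{-1} S1) - d + ln(det S2 / det S1) )."""
    d = S1.shape[0]
    inv2 = np.linalg.inv(S2)
    _, ld1 = np.linalg.slogdet(S1)
    _, ld2 = np.linalg.slogdet(S2)
    return 0.5 * (np.trace(inv2 @ S1) - d + ld2 - ld1)

def toeplitz_cov(rho, d=8):
    """Simple HPD model of a radar cell's correlation structure (illustrative)."""
    i = np.arange(d)
    return rho ** np.abs(i[:, None] - i[None, :])

clutter = toeplitz_cov(0.90)    # hypothetical clutter cell
target = toeplitz_cov(0.55)     # cell containing target energy
score = kl_gaussian_hpd(target, clutter)
print(f"KL-based trajectory score contribution: {score:.3f}")
```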
