Search Results (96)

Search Parameters:
Keywords = Kullback–Leibler distance

18 pages, 307 KB  
Article
Identity Extension for Function on Three Intervals and Application to Csiszar Divergence, Levinson and Ky Fan Inequalities
by Josip Pečarić, Jinyan Miao and Ðilda Pečarić
AppliedMath 2025, 5(4), 136; https://doi.org/10.3390/appliedmath5040136 - 5 Oct 2025
Viewed by 98
Abstract
Using Taylor-type expansions, we obtain identities for functions on three intervals and for the differences of two pairs of Csiszár ϕ-divergences. Under additional assumptions, these identities yield inequalities for functions on three intervals and for the Csiszár ϕ-divergence as special cases; they also recover the known generalized trapezoid-type inequality. Furthermore, we use the identities to obtain a new extension of the Levinson inequality, from which new refinements and reverses of Ky Fan-type inequalities are established; these can be used to compare or estimate yields on investments. Special cases of the Csiszár ϕ-divergence are given, and we obtain new inequalities concerning different pairs of the Kullback–Leibler distance, Hellinger distance, α-order entropy, and χ²-distance.
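For reference, a standard discrete form of the Csiszár ϕ-divergence, together with generator functions that recover the special cases named in the abstract (conventions vary by author; this is one common normalization):

```latex
D_{\phi}(P \,\|\, Q) \;=\; \sum_{i} q_i\, \phi\!\left(\frac{p_i}{q_i}\right),
\qquad
\begin{aligned}
\phi(t) &= t\log t          &&\Rightarrow\; \text{Kullback--Leibler distance},\\
\phi(t) &= (\sqrt{t}-1)^2   &&\Rightarrow\; \text{squared Hellinger distance},\\
\phi(t) &= (t-1)^2          &&\Rightarrow\; \chi^2\text{-distance}.
\end{aligned}
```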
10 pages, 790 KB  
Proceeding Paper
A Comparison of MCMC Algorithms for an Inverse Squeeze Flow Problem
by Aricia Rinkens, Rodrigo L. S. Silva, Clemens V. Verhoosel, Nick O. Jaensson and Erik Quaeghebeur
Phys. Sci. Forum 2025, 12(1), 4; https://doi.org/10.3390/psf2025012004 - 22 Sep 2025
Viewed by 144
Abstract
Using Bayesian inference to calibrate constitutive model parameters has recently seen a rise in interest. The Markov chain Monte Carlo (MCMC) algorithm is one of the most commonly used methods to sample from the posterior. However, the choice of which MCMC algorithm to apply is typically pragmatic and based on considerations such as software availability and experience. We compare three commonly used MCMC algorithms: Metropolis–Hastings (MH), Affine Invariant Stretch Move (AISM), and the No-U-Turn sampler (NUTS). For the comparison, we use the Kullback–Leibler (KL) divergence as a convergence criterion, which measures the statistical distance between the sampled and the ‘true’ posterior. We apply the Bayesian framework to a Newtonian squeeze flow problem, for which an analytical model exists. Furthermore, we collected experimental data using a tailored setup. The ground truth for the posterior is obtained by evaluating it on a uniform reference grid. We conclude that, for the same number of samples, NUTS yields the lowest KL divergence, followed by the AISM sampler and, last, the MH sampler.
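To illustrate the convergence criterion, a minimal sketch of estimating the KL divergence between an MCMC sample and a posterior evaluated on a reference grid. The Gaussian stand-in posterior and all numbers are hypothetical, not the paper's squeeze flow model:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(p || q) between two histograms (normalized here)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical example: one parameter's MCMC draws vs. a gridded 'true' posterior.
rng = np.random.default_rng(0)
samples = rng.normal(1.0, 0.5, size=10_000)            # stand-in for MCMC draws
grid = np.linspace(-1.0, 3.0, 101)
reference = np.exp(-0.5 * ((grid - 1.0) / 0.5) ** 2)   # unnormalized reference posterior

hist, edges = np.histogram(samples, bins=grid, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(kl_divergence(hist, np.interp(centers, grid, reference)))
```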

14 pages, 3484 KB  
Article
Multiparametric Quantitative Ultrasound as a Potential Imaging Biomarker for Noninvasive Detection of Nonalcoholic Steatohepatitis: A Clinical Feasibility Study
by Trina Chattopadhyay, Hsien-Jung Chan, Duy Chi Le, Chiao-Yin Wang, Dar-In Tai, Zhuhuang Zhou and Po-Hsiang Tsui
Diagnostics 2025, 15(17), 2214; https://doi.org/10.3390/diagnostics15172214 - 1 Sep 2025
Viewed by 612
Abstract
Objectives: The FibroScan–aspartate transaminase (AST) score (FAST score) is a hybrid biomarker combining ultrasound and blood test data for identifying nonalcoholic steatohepatitis (NASH). This study aimed to assess the feasibility of using quantitative ultrasound (QUS) biomarkers related to hepatic steatosis for NASH detection and to compare their diagnostic performance with the FAST score. Methods: A total of 137 participants, comprising 71 individuals with NASH and 66 without NASH (including 49 normal controls), underwent FibroScan and ultrasound examinations. QUS imaging features (Nakagami parameter m, homodyned-K parameter μ, entropy H, and attenuation coefficient α) were extracted from backscattered radiofrequency data. A weighted QUS parameter based on m, μ, H, and α was derived via linear discriminant analysis. NASH was diagnosed based on liver biopsy findings using the nonalcoholic fatty liver disease activity score (NAS). Diagnostic performance was evaluated using the area under the receiver operating characteristic curve (AUROC) and compared with the FAST score using the DeLong test. Separation metrics, including the complement of the overlap coefficient, the Bhattacharyya distance, the Kullback–Leibler divergence, and the silhouette score, were used to assess inter-group separability. Results: All QUS parameters were significantly elevated in NASH patients (p < 0.05). AUROC values for individual QUS features ranged from 0.82 to 0.91, with the weighted QUS parameter achieving 0.91. The FAST score had the highest AUROC (0.96), though its differences from the weighted QUS and homodyned-K parameters were not statistically significant (p > 0.05). Separation metrics ranked the FAST score highest, closely followed by the weighted QUS parameter. Conclusions: QUS biomarkers can be repurposed for NASH detection, with the weighted QUS parameter offering diagnostic accuracy comparable to the FAST score and serving as a promising, blood-free alternative.
(This article belongs to the Section Medical Imaging and Theranostics)
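A minimal sketch of two of the separation metrics named above, in closed form for univariate Gaussians fitted to each group. This is a simplification; the study computes these on the actual QUS parameter distributions, and the group statistics below are hypothetical:

```python
import numpy as np

def gaussian_kl(m1, s1, m2, s2):
    """Closed-form KL divergence D(N(m1, s1^2) || N(m2, s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

def bhattacharyya(m1, s1, m2, s2):
    """Closed-form Bhattacharyya distance between two univariate Gaussians."""
    v1, v2 = s1**2, s2**2
    return 0.25 * (m1 - m2) ** 2 / (v1 + v2) + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2)))

# Hypothetical per-group mean/std of one QUS parameter (NASH vs. non-NASH):
print(gaussian_kl(0.72, 0.08, 0.55, 0.10), bhattacharyya(0.72, 0.08, 0.55, 0.10))
```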

23 pages, 7614 KB  
Article
A Cascaded Data-Driven Approach for Photovoltaic Power Output Forecasting
by Chuan Xiang, Xiang Liu, Wei Liu and Tiankai Yang
Mathematics 2025, 13(17), 2728; https://doi.org/10.3390/math13172728 - 25 Aug 2025
Viewed by 506
Abstract
Accurate photovoltaic (PV) power output forecasting is critical for ensuring stable operation of modern power systems, yet it is constrained by high-dimensional redundancy in input weather data and the inherent heterogeneity of output scenarios. To address these challenges, this paper proposes a novel cascaded data-driven forecasting approach that enhances forecasting accuracy by systematically improving and optimizing feature extraction, scenario clustering, and temporal modeling. First, guided by weather data–PV power output correlations, the Deep Autoencoder (DAE) is enhanced by integrating a Pearson Correlation Coefficient loss, a reconstruction loss, and a Kullback–Leibler divergence sparsity penalty into a multi-objective loss function to extract key weather factors. Second, the Fuzzy C-Means (FCM) algorithm is comprehensively refined through Mahalanobis distance-based sample similarity measurement, a max–min dissimilarity principle for initial center selection, and Partition Entropy Index-driven determination of the optimal cluster count, to effectively cluster complex PV power output scenarios. Third, a Long Short-Term Memory–Temporal Pattern Attention (LSTM–TPA) model is constructed. It utilizes the gating mechanism and TPA to capture time-dependent relationships between key weather factors and PV power output within each scenario, thereby heightening sensitivity to key weather dynamics. Validation using actual data from distributed PV power plants demonstrates that (1) the enhanced DAE eliminates redundant data while strengthening feature representation, thereby enabling extraction of key weather factors; (2) the enhanced FCM achieves marked improvements in both the Silhouette Coefficient and the Calinski–Harabasz Index, consequently generating distinct typical output scenarios; and (3) the constructed LSTM–TPA model adaptively adjusts the forecasting weights and obtains superior capability in capturing fine-grained temporal features. The proposed approach significantly outperforms conventional approaches (CNN–LSTM, ARIMA–LSTM), exhibiting the highest forecasting accuracy (97.986%), the best evaluation metrics (such as Mean Absolute Error), and exceptional generalization capability. This cascaded data-driven model achieves a comprehensive improvement in the accuracy and robustness of PV power output forecasting through step-by-step collaborative optimization.
(This article belongs to the Special Issue Artificial Intelligence and Game Theory)
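As one concrete piece of a multi-objective loss of the kind described, a minimal NumPy sketch of the Kullback–Leibler sparsity penalty commonly used in sparse autoencoders. The target sparsity rho and the weighting names are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05, eps=1e-8):
    """Bernoulli-KL penalty pushing each hidden unit's mean activation toward rho.

    activations: array of shape (batch, hidden_units) with values in (0, 1).
    """
    rho_hat = np.clip(activations.mean(axis=0), eps, 1 - eps)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))

# total_loss = reconstruction_loss + a * pearson_loss + b * kl_sparsity_penalty(h)
# where a, b, pearson_loss, and h are hypothetical placeholders.
```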

29 pages, 569 KB  
Article
Born’s Rule from Contextual Relative-Entropy Minimization
by Arash Zaghi
Entropy 2025, 27(9), 898; https://doi.org/10.3390/e27090898 - 25 Aug 2025
Viewed by 1039
Abstract
We give a variational characterization of the Born rule. For each measurement context, we project a quantum state ρ onto the corresponding abelian algebra by minimizing the Umegaki relative entropy; Petz's Pythagorean identity makes the dephased state the unique local minimizer, so the Born weights p_C(i) = Tr(ρP_i) arise as a consequence, not an assumption. Globally, we measure contextuality by the minimum classical Kullback–Leibler distance from the bundle {p_C(ρ)} to the noncontextual polytope, yielding a convex objective Φ(ρ). Thus, Φ(ρ) = 0 exactly when a sheaf-theoretic global section exists (noncontextuality), and Φ(ρ) > 0 otherwise; the closest noncontextual model is the classical I-projection of the Born bundle. Assuming finite dimension, full-rank states, and rank-1 projective contexts, the construction is unique and non-circular; it extends to degenerate PVMs and POVMs (via Naimark dilation) without change to the statements. Conceptually, the work unifies information-geometric projection, the presheaf view of contextuality, and categorical classical structure into a single optimization principle. Compared with Gleason-type, decision-theoretic, or envariance approaches, our scope is narrower but more explicit about contextuality and the relational, context-dependent status of quantum probabilities.
(This article belongs to the Special Issue Quantum Foundations: 100 Years of Born’s Rule)
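For orientation, the Umegaki relative entropy and the dephasing (pinching) map the abstract refers to, for a context with rank-1 projectors {P_i} (standard definitions, not the paper's full construction):

```latex
S(\rho \,\|\, \sigma) = \mathrm{Tr}\,\rho\,(\log\rho - \log\sigma),
\qquad
\mathcal{E}_C(\rho) = \sum_i P_i \rho P_i ,
```

and minimizing S(ρ‖σ) over states σ in the context's abelian algebra is attained at σ* = 𝓔_C(ρ), whose weights are exactly the Born probabilities p_C(i) = Tr(ρP_i).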

24 pages, 3524 KB  
Article
Transient Stability Assessment of Power Systems Based on Temporal Feature Selection and LSTM-Transformer Variational Fusion
by Zirui Huang, Zhaobin Du, Jiawei Gao and Guoduan Zhong
Electronics 2025, 14(14), 2780; https://doi.org/10.3390/electronics14142780 - 10 Jul 2025
Cited by 1 | Viewed by 569
Abstract
To address the challenges brought by the high penetration of renewable energy in power systems, such as multi-scale dynamic interactions, high feature dimensionality, and limited model generalization, this paper proposes a transient stability assessment (TSA) method that combines temporal feature selection with deep learning-based modeling. First, a two-stage feature selection strategy is designed using the inter-class Mahalanobis distance and Spearman rank correlation. This helps extract highly discriminative, low-redundancy features from wide-area measurement system (WAMS) time-series data. Then, a parallel LSTM-Transformer architecture is constructed to capture both short-term local fluctuations and long-term global dependencies. A variational inference mechanism based on a Gaussian mixture model (GMM) is introduced to enable dynamic representation fusion and uncertainty modeling. A composite loss function combining an improved focal loss with Kullback–Leibler (KL) divergence regularization is designed to enhance model robustness and training stability under complex disturbances. The proposed method is validated on a modified IEEE 39-bus system. Results show that it outperforms existing models in accuracy, robustness, and interpretability, providing an effective solution for TSA in power systems with high renewable energy integration.
(This article belongs to the Special Issue Advanced Energy Systems and Technologies for Urban Sustainability)
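A minimal sketch of the two ingredients of such a composite loss, using the textbook binary focal loss and the standard Gaussian KL regularizer. The paper's "improved" focal loss and GMM-based variational term are not reproduced here; these are generic stand-ins:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-12):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights easy examples."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def kl_regularizer(mu, logvar):
    """KL(N(mu, exp(logvar)) || N(0, 1)), the usual variational regularizer."""
    return float(0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)))

# total = focal_loss(p, y) + beta * kl_regularizer(mu, logvar)  # beta: tunable weight
```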

24 pages, 2044 KB  
Article
Bregman–Hausdorff Divergence: Strengthening the Connections Between Computational Geometry and Machine Learning
by Tuyen Pham, Hana Dal Poz Kouřimská and Hubert Wagner
Mach. Learn. Knowl. Extr. 2025, 7(2), 48; https://doi.org/10.3390/make7020048 - 26 May 2025
Cited by 1 | Viewed by 1351
Abstract
The purpose of this paper is twofold. On the technical side, we propose an extension of the Hausdorff distance from metric spaces to spaces equipped with asymmetric distance measures. Specifically, we focus on extending it to the family of Bregman divergences, which includes the popular Kullback–Leibler divergence (also known as relative entropy). The resulting dissimilarity measure is called a Bregman–Hausdorff divergence and compares two collections of vectors without assuming any pairing or alignment between their elements. We propose new algorithms for computing Bregman–Hausdorff divergences based on a recently developed Kd-tree data structure for nearest neighbor search with respect to Bregman divergences. The algorithms are surprisingly efficient even for large inputs with hundreds of dimensions. As a benchmark, we use the new divergence to compare two collections of probabilistic predictions produced by different machine learning models trained using the relative entropy loss. In addition to introducing this technical concept, we provide a survey that outlines the basics of Bregman geometry and motivates the Kullback–Leibler divergence using concepts from information theory. We also describe computational geometric algorithms that have been extended to this geometry, focusing on algorithms relevant to machine learning.
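A brute-force sketch of the idea for the KL case, assuming the Hausdorff-style aggregation takes a sup over one collection of an inf over the other (the precise definition and direction follow the paper). This O(|A||B|) loop ignores the Bregman Kd-tree acceleration the authors propose:

```python
import numpy as np

def kl(p, Q, eps=1e-12):
    """KL divergence D(p || q) of one probability vector p against each row of Q."""
    p = np.clip(p, eps, None)
    Q = np.clip(Q, eps, None)
    return np.sum(p * np.log(p / Q), axis=-1)

def bregman_hausdorff_kl(A, B):
    """One-sided aggregation: sup over a in A of inf over b in B of D(a || b)."""
    return float(max(kl(a, B).min() for a in A))

# Hypothetical use: compare two models' softmax predictions on the same inputs.
rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(10), size=500)  # predictions of model 1
B = rng.dirichlet(np.ones(10), size=500)  # predictions of model 2
print(bregman_hausdorff_kl(A, B))
```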

21 pages, 4044 KB  
Article
FedHSQA: Robust Aggregation in Hierarchical Federated Learning via Anomaly Scoring-Based Adaptive Quantization for IoV
by Ling Xing, Zhaocheng Luo, Kaikai Deng, Honghai Wu, Huahong Ma and Xiaoying Lu
Electronics 2025, 14(8), 1661; https://doi.org/10.3390/electronics14081661 - 19 Apr 2025
Cited by 1 | Viewed by 916
Abstract
Hierarchical Federated Learning (HFL) for the Internet of Vehicles (IoV) leverages roadside units (RSUs) to construct a low-latency, highly scalable multilayer cooperative training framework. However, with the rapid growth in the number of vehicle nodes, this framework faces two major challenges: (i) communication inefficiency under bandwidth-constrained conditions, where uplink congestion imposes a significant burden on intra-framework communication; and (ii) interference from untrustworthy vehicle nodes, which disrupts model training and affects convergence. Therefore, to achieve secure aggregation while alleviating the communication bottleneck, we design a hierarchical three-layer federated learning framework with Gradient Quantization (GQ) and secure aggregation, called FedHSQA, which further integrates anomaly scoring to enhance robustness against untrustworthy vehicle nodes. Specifically, FedHSQA organizes IoV devices into three layers based on their respective roles: the cloud service layer, the RSU layer, and the vehicle node layer. During each non-initial communication round, the cloud server at the cloud layer computes anomaly scores for vehicle nodes using a Kullback–Leibler (KL) divergence-based multilayer perceptron (MLP) model. These anomaly scores are used to design a secure aggregation algorithm (ASA) that is robust to anomalous behavior. The anomaly scores and the aggregated global model are then transmitted to the RSUs. To further reduce communication overhead and maintain model utility, FedHSQA introduces an adaptive GQ method based on the anomaly scores (ASQ). Unlike conventional vehicle node-side quantization, ASQ is performed at the RSU layer. It calculates the Jensen–Shannon (JS) distance between each vehicle node's anomaly distribution and the target distribution, and adaptively adjusts the quantization level to minimize redundant gradient transmission. We validate the robustness of FedHSQA against anomalous nodes through extensive experiments on three real-world datasets. Compared to classical aggregation algorithms and GQ methods, FedHSQA reduced average network traffic consumption roughly 30-fold while improving the average accuracy of the aggregated model by about 5.3%.
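Since ASQ keys the quantization level to the Jensen–Shannon distance between distributions, here is a minimal sketch of that metric (base-2 logs, so the value lies in [0, 1]; the mapping from distance to quantization level is the paper's and is not shown):

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance: square root of the JS divergence (base-2 logs)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))
```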

13 pages, 440 KB  
Article
A Constrained Talagrand Transportation Inequality with Applications to Rate-Distortion-Perception Theory
by Li Xie, Liangyan Li, Jun Chen, Lei Yu and Zhongshan Zhang
Entropy 2025, 27(4), 441; https://doi.org/10.3390/e27040441 - 19 Apr 2025
Viewed by 691
Abstract
A constrained version of Talagrand’s transportation inequality is established, which reveals an intrinsic connection between the Gaussian distortion-rate-perception functions with limited common randomness under the Kullback–Leibler divergence-based and squared Wasserstein-2 distance-based perception measures. This connection provides an organizational framework for assessing existing bounds on these functions. In particular, we show that the best-known bounds of Xie et al. are nonredundant when examined through this connection.
(This article belongs to the Special Issue Advances in Information and Coding Theory, the Third Edition)
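For context, the classical (unconstrained) Talagrand transportation inequality for the standard Gaussian measure γ, which the paper constrains and connects to the distortion-rate-perception setting:

```latex
W_2^2(\mu, \gamma) \;\le\; 2\, D_{\mathrm{KL}}(\mu \,\|\, \gamma).
```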

22 pages, 921 KB  
Article
Semantic Matching for Chinese Language Approach Using Refined Contextual Features and Sentence–Subject Interaction
by Peng Jiang and Xiaodong Cai
Symmetry 2025, 17(4), 585; https://doi.org/10.3390/sym17040585 - 11 Apr 2025
Viewed by 607
Abstract
Aiming at the problems of noise interference and the lack of sentence–subject interaction information in Chinese matching models, which lead to low accuracy, a new semantic matching method using fine-grained context features and sentence–subject interaction is designed. In contrast to methods that rely on noisy data to improve a model's noise resistance, which introduce data pollution and imprecise noise processing, we design a novel context refinement strategy that uses dot-product attention, a gradient reversal layer, the Softmax function, and the projection theorem to accurately identify and eliminate noise features, obtaining high-quality refined context features and effectively overcoming the above shortcomings. We then innovatively use a one-way interaction strategy based on the projection theorem to map the sentence subject into the refined context feature space, producing effective interaction between features in the model. The refined context and the sentence's main idea are fused in the final prediction stage to compute the matching result. In addition, this study uses the Kullback–Leibler divergence to optimize the distance between the distributions of the refined context and the sentence's main idea, bringing the two distributions closer together. Experimental results show that the method achieves an accuracy of 87.65% and an F1 score of 86.51% on the PAWS dataset, an accuracy of 80.51% on the Ant Financial dataset, and 85.29% on the BQ dataset; on the Medic-SM dataset, it achieves an accuracy of 74.69% and a Macro-F1 of 74%, all superior to other methods.
(This article belongs to the Section Computer)

14 pages, 2749 KB  
Article
Power Spectra’s Perspective on Meteorological Drivers of Snow Depth Multiscale Behavior over the Tibetan Plateau
by Yueqian Cao and Lingmei Jiang
Land 2025, 14(4), 790; https://doi.org/10.3390/land14040790 - 7 Apr 2025
Viewed by 481
Abstract
The meteorology-driven multiscale behavior of snow depth over the Tibetan Plateau was investigated by analyzing the spatio-temporal variability of snow depth over 28 intraseasonal continuous snow cover regions. By employing power spectra and the Kullback–Leibler (K-L) distance, the spectral similarities between snow depth and meteorological factors were examined at scales of 5 km, 10 km, 20 km, and 50 km across seasons from 2008 to 2014. Results reveal distinct seasonal and scale-dependent dynamics: in spring and winter, snow depth exhibits lower spectral variance with scale breaks around 50 km, emphasizing the critical roles of precipitation, atmospheric moisture, and temperature, with lower K-L distances at smaller scales. Summer shows the highest spatial variance, with snow depth primarily influenced by wind and radiation, as indicated by lower K-L distances at 15–45 km. Autumn demonstrates the lowest spatial heterogeneity, with windspeed driving snow redistribution at finer scales. The alignment between spatial variance maps and power spectra implies that snow depth data can be effectively downscaled or upscaled without significant loss of spatial information. These findings are essential for improving snow cover modeling and forecasting, particularly in the context of climate change, as well as for effective water resource management and climate adaptation strategies on this strategically vital plateau.
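A minimal sketch of the spectral-similarity measurement described above: compute each signal's power spectrum, normalize both spectra to probability distributions, and take the KL distance between them. The transect signals here are synthetic stand-ins, not the Plateau data:

```python
import numpy as np

def spectral_kl(x, y, eps=1e-12):
    """KL distance between the normalized power spectra of two 1-D signals."""
    px = np.abs(np.fft.rfft(x)) ** 2 + eps
    py = np.abs(np.fft.rfft(y)) ** 2 + eps
    px /= px.sum()
    py /= py.sum()
    return float(np.sum(px * np.log(px / py)))

# Hypothetical: compare a snow-depth transect with a meteorological transect.
rng = np.random.default_rng(1)
snow = rng.standard_normal(512).cumsum()   # stand-in spatial signal
temp = rng.standard_normal(512).cumsum()   # stand-in meteorological signal
print(spectral_kl(snow, temp))
```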

40 pages, 687 KB  
Article
Irreversibility, Dissipation, and Its Measure: A New Perspective
by Purushottam Das Gujrati
Symmetry 2025, 17(2), 232; https://doi.org/10.3390/sym17020232 - 5 Feb 2025
Cited by 2 | Viewed by 795
Abstract
Dissipation and irreversibility are two central concepts of classical thermodynamics that are often treated as synonymous. Dissipation D is lost or dissipated work W_diss ≥ 0, but it is commonly quantified by the entropy generation Δ_i S in an isothermal irreversible macroscopic process, which is often expressed as the Kullback–Leibler distance D_KL in the modern literature. We argue that D_KL is nonthermodynamic and is erroneously justified for quantification by mistakenly equating the exchange microwork Δ_e W_k with the system-intrinsic microwork ΔW_k = Δ_e W_k + Δ_i W_k, a very common error permeating stochastic thermodynamics, as was first pointed out several years ago; see text. Recently, it was discovered that dissipation D is properly identified by Δ_i W ≥ 0 for all spontaneously irreversible processes and all temperatures T, positive and negative, in an isolated system. As T plays an important role in the quantification, dissipation allows for Δ_i S ≥ 0 for T > 0 and Δ_i S < 0 for T < 0, a surprising result. The connection of D with W_diss and its extension to interacting systems have not been explored and are attempted here. It is found that D is not always proportional to Δ_i S. The determination of D requires d_i p_k, but we show that the Fokker–Planck and master equations are not general enough to determine it, contrary to common belief. We modify the Fokker–Planck equation to fix this issue. We find that detailed balance also allows all microstates to remain disconnected, without any transitions among them, in an equilibrium macrostate, another surprising result. We argue that Liouville's theorem should not apply to irreversible processes, contrary to the claim otherwise. We suggest using nonequilibrium statistical mechanics in an extended space, where the p_k are uniquely determined, to evaluate D.
(This article belongs to the Section Physics)

22 pages, 1347 KB  
Article
Semi-Empirical Approach to Evaluating Model Fit for Sea Clutter Returns: Focusing on Future Measurements in the Adriatic Sea
by Bojan Vondra
Entropy 2024, 26(12), 1069; https://doi.org/10.3390/e26121069 - 9 Dec 2024
Cited by 1 | Viewed by 950
Abstract
A method for evaluating Kullback–Leibler (KL) divergence and Squared Hellinger (SH) distance between empirical data and a model distribution is proposed. This method exclusively utilises the empirical Cumulative Distribution Function (CDF) of the data and the CDF of the model, avoiding data processing such as histogram binning. The proposed method converges almost surely, with the proof based on the use of exponentially distributed waiting times. An example demonstrates convergence of the KL divergence and SH distance to their true values when utilising the Generalised Pareto (GP) distribution as empirical data and the K distribution as the model. Another example illustrates the goodness of fit of these (GP and K-distribution) models to real sea clutter data from the widely used Intelligent PIxel processing X-band (IPIX) measurements. The proposed method can be applied to assess the goodness of fit of various models (not limited to GP or K distribution) to clutter measurement data such as those from the Adriatic Sea. Distinctive features of this small and immature sea, like the presence of over 1300 islands that affect local wind and wave patterns, are likely to result in an amplitude distribution of sea clutter returns that differs from predictions of models designed for oceans or open seas. However, to the author’s knowledge, no data on this specific topic are currently available in the open literature, and such measurements have yet to be conducted.
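For reference, the two statistical distances being estimated, in their continuous forms with densities p and q (the 1/2 normalization of the squared Hellinger distance varies by convention):

```latex
D_{\mathrm{KL}}(P \,\|\, Q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx,
\qquad
H^2(P, Q) = \frac{1}{2}\int \left(\sqrt{p(x)} - \sqrt{q(x)}\right)^{2} dx .
```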

18 pages, 435 KB  
Article
Some Improvements on Good Lattice Point Sets
by Yu-Xuan Lin, Tian-Yu Yan and Kai-Tai Fang
Entropy 2024, 26(11), 910; https://doi.org/10.3390/e26110910 - 27 Oct 2024
Cited by 2 | Viewed by 1235
Abstract
Good lattice point (GLP) sets are a number-theoretic construction widely utilized across various fields. Their space-filling property can be further improved, especially with large numbers of runs and factors. In this paper, the Kullback-Leibler (KL) divergence is used to measure GLP sets. It has been demonstrated that the linear-level permutations used to obtain generalized good lattice point (GGLP) sets from GLP sets do not reduce the maximin-distance criterion; this paper confirms that linear-level permutation may nevertheless lead to greater mixture discrepancy. Even so, GGLP sets can still enhance the space-filling property of GLP sets under various criteria. For small-sized cases, the KL divergence from the uniform distribution is lower for GGLP sets than for the initial GLP sets, and there is nearly no difference for large-sized point sets, indicating the similarity of their distributions. This paper incorporates a threshold-accepting algorithm in the construction of GGLP sets and adopts the Frobenius distance as the space-filling criterion for large-sized cases. The initial GLP sets have been included in many monographs and are widely utilized. The corresponding GGLP sets are partially included in this paper and will be further calculated and posted online in the future. The performance of GGLP sets is evaluated in two applications, computer experiments and representative points, and compared with the initial GLP sets; GGLP sets perform better in many cases.
(This article belongs to the Section Information Theory, Probability and Statistics)
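A minimal sketch of one way to measure a design's KL divergence from the uniform distribution on [0, 1]^d via coarse binning. The paper's exact estimator may differ; the bin count here is arbitrary, and the approach is only feasible for small d:

```python
import numpy as np

def kl_from_uniform(points, bins=8, eps=1e-12):
    """Histogram-based KL divergence of a point set in [0, 1]^d from uniformity."""
    d = points.shape[1]
    hist, _ = np.histogramdd(points, bins=bins, range=[(0.0, 1.0)] * d)
    p = hist.ravel() + eps
    p /= p.sum()
    q = 1.0 / p.size  # uniform mass per bin
    return float(np.sum(p * np.log(p / q)))

# Hypothetical: score a random 144-run, 2-factor design against uniformity.
rng = np.random.default_rng(0)
print(kl_from_uniform(rng.random((144, 2))))
```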

13 pages, 4948 KB  
Article
Feature Vector Effectiveness Evaluation for Pattern Selection in Computational Lithography
by Yaobin Feng, Jiamin Liu, Hao Jiang and Shiyuan Liu
Photonics 2024, 11(10), 990; https://doi.org/10.3390/photonics11100990 - 21 Oct 2024
Viewed by 1495
Abstract
Pattern selection is crucial for optimizing the calibration process of optical proximity correction (OPC) models in computational lithography. However, it remains a challenge to balance representative coverage against computational efficiency. This work presents a comprehensive evaluation of the effectiveness of feature vectors (FVs) in pattern selection for OPC model calibration, leveraging key performance indicators (KPIs) based on Kullback–Leibler divergence and distance ranking. By constructing autoencoder-based FVs and fast Fourier transform (FFT)-based FVs, we compare their efficacy in capturing critical pattern features. Validation experiments indicate that autoencoder-based FVs, particularly when augmented with lithography domain knowledge, outperform their FFT-based counterparts in identifying anomalies and enhancing lithography model performance. These results also underscore the importance of adaptive pattern representation methods in calibrating OPC models of evolving complexity.
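A minimal sketch of a KL-based KPI of the kind described: rank candidate patterns by the divergence of each pattern's normalized feature vector from the set's aggregate distribution. The names and the aggregation scheme are illustrative assumptions, not the authors' exact KPIs:

```python
import numpy as np

def rank_by_kl(candidate_fvs, eps=1e-12):
    """Rank patterns by KL divergence of each feature vector from the set aggregate."""
    F = np.asarray(candidate_fvs, float) + eps
    F /= F.sum(axis=1, keepdims=True)   # per-pattern feature distributions
    ref = F.mean(axis=0)                # aggregate 'coverage' distribution
    scores = np.sum(F * np.log(F / ref), axis=1)
    return np.argsort(scores)           # most typical patterns first, outliers last
```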
