Search Results (249)

Search Parameters:
Keywords = kernel density estimation (KDE)

28 pages, 12813 KB  
Article
Training-Free Few-Shot Image Classification via Kernel Density Estimation with CLIP Embeddings
by Marcos Sergio Pacheco dos Santos Lima Junior, Juan Miguel Ortiz-de-Lazcano-Lobato and Ezequiel López-Rubio
Mathematics 2025, 13(22), 3615; https://doi.org/10.3390/math13223615 - 11 Nov 2025
Abstract
Few-shot image classification aims to recognize novel classes from only a handful of labeled examples, a challenge in domains where data collection is costly or impractical. Existing solutions often rely on meta-learning, fine-tuning, or data augmentation, which introduce computational overhead, risk of overfitting, or inefficiency. This paper introduces ProbaCLIP, a simple training-free approach that leverages Kernel Density Estimation (KDE) within the embedding space of Contrastive Language-Image Pre-training (CLIP). Unlike other CLIP-based methods, the proposed approach operates solely on visual embeddings and does not require text labels. Class-conditional probability densities were estimated from few-shot support examples, and queries were classified by likelihood evaluation, with Principal Component Analysis (PCA) used for dimensionality reduction, compressing between-class dissimilarities on each episode. We further introduced an optional bandwidth optimization strategy and a consensus decision mechanism through cross-validation, and addressed the special case of one-shot classification with distance-based measures. Extensive experiments on multiple datasets demonstrated that our method achieved accuracy competitive with or superior to state-of-the-art few-shot classifiers, reaching up to 98.37% in five-shot tasks and up to 99.80% in a 16-shot framework with ViT-L/14@336px. We validated our methodology by achieving high performance without gradient-based training, text supervision, or auxiliary meta-training datasets, emphasizing the effectiveness of combining pre-trained embeddings with statistical density estimation for data-scarce classification.
(This article belongs to the Section E1: Mathematics and Computer Science)
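The classification step the abstract describes (per-class KDE over support embeddings, PCA for dimensionality reduction, assignment by maximum likelihood) can be sketched as follows. This is a minimal illustration with random stand-in vectors in place of CLIP embeddings, not the authors' ProbaCLIP implementation; the bandwidth and component count are arbitrary choices:

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and top-k principal axes of X (n_samples x d)."""
    mu = X.mean(axis=0)
    # Rows of Vt are principal directions sorted by explained variance.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def gaussian_kde_logpdf(query, support, bandwidth):
    """Average-of-Gaussians log-density of `query` under the support points."""
    d = support.shape[1]
    sq = ((query[None, :] - support) ** 2).sum(axis=1) / bandwidth ** 2
    log_kernels = -0.5 * sq - 0.5 * d * np.log(2 * np.pi * bandwidth ** 2)
    return np.logaddexp.reduce(log_kernels) - np.log(len(support))

def classify(query, supports_by_class, bandwidth=0.5, n_components=2):
    """Assign `query` to the class whose KDE gives the highest likelihood."""
    all_support = np.vstack(list(supports_by_class.values()))
    mu, W = pca_fit(all_support, n_components)
    q = (query - mu) @ W
    scores = {c: gaussian_kde_logpdf(q, (S - mu) @ W, bandwidth)
              for c, S in supports_by_class.items()}
    return max(scores, key=scores.get)

# Toy stand-in for CLIP embeddings: two well-separated 5-shot classes.
rng = np.random.default_rng(0)
supports = {"cat": rng.normal(0.0, 0.1, (5, 8)),
            "dog": rng.normal(1.0, 0.1, (5, 8))}
print(classify(rng.normal(1.0, 0.1, 8), supports))   # a "dog"-like query
```

In the real method the support vectors would be frozen CLIP image embeddings, so no gradient-based training is involved anywhere in the pipeline.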

23 pages, 15702 KB  
Article
Provenance of Wushan Loess in the Yangtze Three Gorges Region: Insights from Detrital Zircon U-Pb Geochronology and Late Pleistocene East Asian Monsoon Variations
by Xulong Hu, Yufen Zhang, Chang’an Li, Guoqing Li, Juxiang Liu, Yawei Li, Jianchao Su and Mingming Jia
Minerals 2025, 15(11), 1180; https://doi.org/10.3390/min15111180 - 9 Nov 2025
Viewed by 154
Abstract
The Wushan Loess, situated in the Yangtze Three Gorges region of China, represents the southernmost aeolian loess deposit in China and provides critical insights into Late Pleistocene paleoenvironmental conditions and East Asian monsoon dynamics. Despite its significance, the genesis and provenance of this unique loess deposit remain controversial. This study employs an integrated multi-proxy approach combining detrital zircon U-Pb geochronology, optically stimulated luminescence (OSL) dating, and detailed grain size analysis to systematically investigate the provenance and depositional mechanisms of the Wushan Loess. Three representative loess–paleosol profiles (Gaotang-GT, Badong-BD, and Zigui-ZG) were analyzed, yielding 17 OSL ages, 729 grain size measurements, and approximately 420 zircon analyses per profile, for a total of 1189 valid U-Pb ages (GT 406, BD 391, ZG 402). OSL chronology constrains the deposition period to 18–103 ka (Marine Isotope Stages 2–5), coinciding with enhanced East Asian winter monsoon activity during the Last Glacial period. Grain size analysis reveals a dominant silt fraction (modal size: 20–25 μm) characteristic of aeolian transport, with coarse silt (20–63 μm) averaging 47.1% and fine silt (<20 μm) averaging 44.2% of the sediments. Detrital zircon U-Pb age spectra exhibit consistent major peaks at 200–220 Ma, 450–500 Ma, 720–780 Ma, and 1800–1850 Ma across all profiles. Kernel Density Estimation (KDE) and Multi-Dimensional Scaling (MDS) analyses indicate a mixed provenance model dominated by proximal contributions from the upper Yangtze River basin (including the Three Gorges area and Sichuan Basin, ~65%–70%) and supplemented by distal dust input from the Loess Plateau and northern Chinese deserts (~30%–35%); non-negative least squares (NNLS) unmixing confirms this quantitative source apportionment. This study establishes for the first time a proximal-dominated provenance model for the Wushan Loess, providing new evidence for understanding southern Chinese loess formation mechanisms and Late Pleistocene East Asian monsoon evolution.
(This article belongs to the Section Mineral Geochemistry and Geochronology)
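The NNLS unmixing step mentioned in the abstract can be illustrated on synthetic age spectra. The end-member peaks, widths, and mixing weights below are invented for the sketch and are not the paper's data:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical age grid (Ma) and two invented end-member spectra standing in
# for KDEs of a proximal (upper Yangtze) and a distal (Loess Plateau) source.
ages = np.linspace(0, 2500, 500)

def spectrum(peaks, widths):
    """Sum-of-Gaussians stand-in for a detrital-zircon KDE, summing to 1."""
    y = sum(np.exp(-0.5 * ((ages - p) / w) ** 2) for p, w in zip(peaks, widths))
    return y / y.sum()

proximal = spectrum([210, 760], [30, 40])
distal = spectrum([470, 1820], [40, 60])

# Synthetic "observed" sample: a known 70/30 mixture of the end members.
observed = 0.7 * proximal + 0.3 * distal

# Solve min ||A w - observed||_2 subject to w >= 0, then normalize the weights.
A = np.column_stack([proximal, distal])
w, _ = nnls(A, observed)
w = w / w.sum()
print(w)   # recovers the mixing proportions [0.7, 0.3]
```

With real data the observed spectrum contains noise and source overlap, so the recovered proportions carry uncertainty rather than being exact as here.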

23 pages, 898 KB  
Article
A Unified Global and Local Outlier Detection Framework with Application to Chinese Financial Budget Auditing
by Xiuguo Wu
Systems 2025, 13(11), 978; https://doi.org/10.3390/systems13110978 - 2 Nov 2025
Viewed by 347
Abstract
The identification of anomalous data objects within massive datasets is a critical technique in financial auditing. Most existing methods, however, focus on detecting global outliers and are less effective in contexts such as Chinese financial budget auditing, where local outliers are often more prevalent and meaningful. To overcome this limitation, a unified outlier detection framework is proposed that integrates both global and local detection mechanisms using k-nearest neighbors (KNN) and kernel density estimation (KDE) methodologies. The global outlier score is redefined as the sum of the distances to the k nearest neighbors, while the local outlier score is computed as the ratio of the average cluster density to the kernel density, replacing the cutoff distance employed in Density Peak Clustering (DPC). An adaptive adjustment coefficient is further incorporated to balance the contributions of global and local scores, and outliers are identified as the top-ranked objects based on the combined scores. Extensive experiments on synthetic datasets and benchmarks from Python Outlier Detection (PyOD) demonstrate that the proposed method achieves superior detection accuracy for both global and local outliers compared to existing techniques. When applied to real-world Chinese financial budget data, the approach yields a substantial improvement in detection precision, with a 38.6% enhancement over conventional methods in practical auditing scenarios.
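The combined score can be sketched as follows. This is a simplified stand-in (plain Gaussian KDE instead of the paper's DPC-derived cluster densities, min-max normalization instead of the adaptive coefficient), with `k`, `bandwidth`, and `alpha` as placeholder parameters:

```python
import numpy as np

def outlier_scores(X, k=3, bandwidth=1.0, alpha=0.5):
    """Toy global+local outlier score: k-NN distance sum plus a KDE ratio.

    global score = sum of distances to the k nearest neighbors,
    local score  = mean KDE density of all points / point's own KDE density,
    combined     = alpha * norm(global) + (1 - alpha) * norm(local).
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Global: sum of the k smallest nonzero distances per row (skip self at 0).
    knn = np.sort(D, axis=1)[:, 1:k + 1].sum(axis=1)
    # Local: Gaussian KDE density of each point (leave-self-in, for simplicity).
    dens = np.exp(-0.5 * (D / bandwidth) ** 2).mean(axis=1)
    local = dens.mean() / dens
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    return alpha * norm(knn) + (1 - alpha) * norm(local)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), [[5.0, 5.0]]])  # inject one outlier
scores = outlier_scores(X)
print(scores.argmax())   # index 30, the injected outlier
```

Ranking by the combined score and taking the top-n objects then yields the outlier set, as in the paper's final step.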

35 pages, 4288 KB  
Article
Validating Express Rail Optimization with AFC and Backcasting: A Bi-Level Operations–Assignment Model to Improve Speed and Accessibility Along the Gyeongin Corridor
by Cheng-Xi Li and Cheol-Jae Yoon
Appl. Sci. 2025, 15(21), 11652; https://doi.org/10.3390/app152111652 - 31 Oct 2025
Viewed by 198
Abstract
This study develops an integrated bi-level operations–assignment model to optimise express service on the Gyeongin Line, a core corridor connecting Seoul and Incheon. The upper level jointly selects express stops and time-of-day headways under coverage constraints—a minimum share of key stations and a maximum inter-stop spacing—while the lower level assigns passengers under user equilibrium using a generalised time function that incorporates in-vehicle time, 0.5× headway wait, walking and transfers, and crowding-sensitive dwell times. Undergrounding and alignment straightening are incorporated into segment run-time functions, enabling the co-design of infrastructure and operations. Using automatic-fare-collection-calibrated origin–destination matrices, seat-occupancy records, and station-area population grids, we evaluate five rail scenarios and one intermodal extension. The results indicate substantial system-wide gains: peak average door-to-door times fall by approximately 44–46% in the AM (07:00–09:00) and 30–38% in the PM (17:30–19:30) for rail-only options, and by up to 55% with the intermodal extension. Kernel density estimation (KDE) and cumulative distribution function (CDF) analyses show a leftward shift and tail compression (median −8.7 min; 90th percentile (P90) −11.2 min; ≤45 min share: 0.0% → 47.2%; ≤60 min: 59.7% → 87.9%). The 45-min isochrone expands by ≈12% (an additional 0.21 million residents), while the 60-min reach newly covers Incheon Jung-gu and Songdo. Backcasting against observed express/local ratios yields deviations near the ±10% band (in the PM, one comparator falls within the band and one slightly above), and the Kolmogorov–Smirnov (KS) statistic and Mann–Whitney (MW) test results confirm significant post-implementation shifts. The most cost-effective near-term package combines mixed stopping with modest alignment and capacity upgrades and time-differentiated headways; the intermodal express–transfer scheme offers a feasible long-term upper bound.
The methodology is fully transparent through the provision of pseudocode, explicit convergence criteria, and all hyperparameter settings. We also report SDG-aligned indicators—traction energy and CO2-equivalent (CO2-eq) per passenger-kilometre, and jobs reachable within 45- and 60-min isochrones—providing indicative yet robust evidence consistent with SDGs 9, 11, and 13.
(This article belongs to the Section Transportation and Future Mobility)
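The KDE/CDF summary statistics reported above (median and P90 shifts, shares under 45 and 60 min) are straightforward to compute from trip-time samples; the sketch below uses invented travel-time distributions, not the study's AFC data:

```python
import numpy as np

def dist_summary(times, thresholds=(45, 60)):
    """Median, 90th percentile, and share of trips at or under each threshold."""
    med, p90 = np.percentile(times, [50, 90])
    shares = {thr: np.mean(times <= thr) for thr in thresholds}
    return med, p90, shares

# Invented door-to-door times (minutes) before and after the intervention;
# the "after" sample is simply a uniform 9-minute leftward shift.
rng = np.random.default_rng(2)
before = rng.normal(58, 10, 5000)
after = before - 9
m0, p0, s0 = dist_summary(before)
m1, p1, s1 = dist_summary(after)
print(f"median shift {m0 - m1:.1f} min, P90 shift {p0 - p1:.1f} min")
print(f"<=45 min share: {s0[45]:.1%} -> {s1[45]:.1%}")
```

Real before/after distributions shift non-uniformly, which is why the study reports both a KDE (shape change) and a CDF (threshold shares) rather than a single offset.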

36 pages, 3632 KB  
Article
Integrated Modeling of Maritime Accident Hotspots and Vessel Traffic Networks in High-Density Waterways: A Case Study of the Strait of Malacca
by Sien Chen, Xuzhe Cai, Jiao Qiao and Jian-Bo Yang
J. Mar. Sci. Eng. 2025, 13(11), 2052; https://doi.org/10.3390/jmse13112052 - 27 Oct 2025
Viewed by 516
Abstract
The Strait of Malacca faces persistent maritime safety challenges due to high vessel density and complex navigational conditions. Current risk assessment methods often treat static accident analysis and dynamic traffic modeling separately, and although some nascent hybrid approaches exist, these hybrids frequently lack the capacity for comprehensive, real-time factor integration. This study proposes an integrated framework coupling accident hotspot identification with vessel traffic network analysis. The framework combines trajectory clustering using improved DBSCAN with directional filters, Kernel Density Estimation (KDE) for accident hotspots, and the Fuzzy Analytic Hierarchy Process (FAHP) for multi-factor risk evaluation, acknowledging its subjective and region-specific nature. All model tuning and parameter finalization (including DBSCAN/Fréchet settings, FAHP weights, and adaptive thresholds) relied solely on the 2023 dataset (47 incidents), with the 24 incidents from 2024 reserved exclusively for independent, zero-information-leakage temporal validation. Results demonstrate superior performance: Area Under the ROC Curve (AUC) improved by 0.14 (0.78 vs. 0.64; +22% relative to KDE-only), and Precision–Recall AUC (PR-AUC) improved by 0.16 (0.65 vs. 0.49); both p < 0.001. The model captures 75.2% of reported incidents within 20% of the study area. Cross-validation confirms stability across all folds. The framework reveals that accidents concentrate at network bottlenecks where traffic centrality exceeds 0.15 and accident density surpasses 0.6. Model-based associations suggest amplification through three pathways: environmental-mediated (34%), traffic convergence (34%), and historical persistence (23%). The integrated approach enables identification of both where and why maritime accidents cluster, providing practical applications for vessel traffic services, risk-aware navigation, and evidence-based safety regulation in congested waterways.
(This article belongs to the Special Issue Recent Advances in Maritime Safety and Ship Collision Avoidance)
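The accident-hotspot step can be illustrated with SciPy's `gaussian_kde` on invented incident coordinates; the cluster location, evaluation grid, and 80th-percentile hotspot threshold are all assumptions of this sketch, not the paper's calibrated values:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical incident coordinates (lon, lat): a dense cluster plus scatter,
# standing in for reported accident locations in a strait.
rng = np.random.default_rng(3)
cluster = rng.normal([103.5, 1.2], 0.05, (40, 2))
scatter = rng.uniform([103.0, 0.8], [104.0, 1.6], (10, 2))
pts = np.vstack([cluster, scatter])

kde = gaussian_kde(pts.T)                     # scipy expects rows = dimensions
# Evaluate density on a grid; flag the top-density quintile of cells as hotspots.
gx, gy = np.meshgrid(np.linspace(103.0, 104.0, 50), np.linspace(0.8, 1.6, 50))
dens = kde(np.vstack([gx.ravel(), gy.ravel()]))
hotspot = dens >= np.percentile(dens, 80)
print(hotspot.sum(), "of", dens.size, "grid cells flagged")
```

The paper's framework would then intersect such hotspot cells with traffic-network centrality to separate "where" from "why" accidents cluster.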

24 pages, 3293 KB  
Article
Short-Term Forecasting of Photovoltaic Clusters Based on Spatiotemporal Graph Neural Networks
by Zhong Wang, Mao Yang, Yitao Li, Bo Wang, Zhao Wang and Zheng Wang
Processes 2025, 13(11), 3422; https://doi.org/10.3390/pr13113422 - 24 Oct 2025
Viewed by 432
Abstract
Driven by the dual-carbon goals, photovoltaic (PV) battery systems at renewable energy stations are increasingly clustered on the distribution side. The rapid expansion of these clusters, together with the pronounced uncertainty and spatio-temporal heterogeneity of PV generation, degrades battery utilization and forces conservative dispatch. To address this, we propose a “spatio-temporal clustering–deep estimation” framework for short-term interval forecasting of PV clusters. First, a graph is built from meteorological–geographical similarity and partitioned into sub-clusters by a self-supervised DAEGC. Second, an attention-based spatio-temporal graph convolutional network (ASTGCN) is trained independently for each sub-cluster to capture local dynamics; the individual forecasts are then aggregated to yield the cluster-wide point prediction. Finally, kernel density estimation (KDE) non-parametrically models the residuals, producing probabilistic power intervals for the entire cluster. At the 90% confidence level, the proposed framework improves PICP by 4.01% and reduces PINAW by 7.20% compared with the ASTGCN-KDE baseline without spatio-temporal clustering, demonstrating enhanced interval forecasting performance.
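The final KDE step (non-parametric residual modeling to obtain intervals) can be sketched as follows, with synthetic Gaussian residuals and Silverman's rule standing in for whatever bandwidth selection the paper uses:

```python
import numpy as np

def kde_interval(residuals, point_forecast, level=0.90, bandwidth=None, n=20000):
    """Prediction interval from a Gaussian KDE of historical forecast residuals.

    Samples from the mixture-of-Gaussians form of the KDE and takes empirical
    quantiles; Silverman's rule of thumb sets the bandwidth by default.
    """
    r = np.asarray(residuals, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * r.std() * len(r) ** (-1 / 5)   # Silverman's rule
    rng = np.random.default_rng(0)
    # Draw from the KDE: pick a residual at random, add Gaussian kernel noise.
    draws = rng.choice(r, size=n) + rng.normal(0, bandwidth, n)
    lo, hi = np.percentile(draws, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return point_forecast + lo, point_forecast + hi

# Illustrative residuals (MW) from a hypothetical PV cluster forecast.
rng = np.random.default_rng(4)
res = rng.normal(0, 2.0, 1000)
lo, hi = kde_interval(res, point_forecast=50.0)
print(round(lo, 1), round(hi, 1))   # an interval around 50, roughly +/- 3.4 wide
```

PICP then measures how often the realized power falls inside such intervals, and PINAW measures how wide they are relative to the power range.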

25 pages, 3034 KB  
Article
Distributional CNN-LSTM, KDE, and Copula Approaches for Multimodal Multivariate Data: Assessing Conditional Treatment Effects
by Jong-Min Kim
Analytics 2025, 4(4), 29; https://doi.org/10.3390/analytics4040029 - 21 Oct 2025
Viewed by 384
Abstract
We introduce a distributional CNN-LSTM framework for probabilistic multivariate modeling and heterogeneous treatment effect (HTE) estimation. The model jointly captures complex dependencies among multiple outcomes and enables precise estimation of individual-level conditional average treatment effects (CATEs). In simulation studies with multivariate Gaussian mixtures, the CNN-LSTM demonstrates robust density estimation and strong CATE recovery, particularly as mixture complexity increases, while classical methods such as Kernel Density Estimation (KDE) and Gaussian Copulas may achieve higher log-likelihood or coverage in simpler scenarios. On real-world datasets, including Iris and Criteo Uplift, the CNN-LSTM achieves the lowest CATE RMSE, confirming its practical utility for individualized prediction, although KDE and Gaussian Copula approaches may perform better on global likelihood or coverage metrics. These results indicate that the CNN-LSTM can be trained efficiently on moderate-sized datasets while maintaining stable predictive performance. Overall, the framework is particularly valuable in applications requiring accurate individual-level effect estimation and handling of multimodal heterogeneity—such as personalized medicine, economic policy evaluation, and environmental risk assessment—with its primary strength being superior CATE recovery under complex outcome distributions, even when likelihood-based metrics favor simpler baselines.

22 pages, 7103 KB  
Article
Home Range Size and Habitat Usage of Hatchling and Juvenile Wood Turtles (Glyptemys insculpta) in Iowa
by Jeffrey W. Tamplin, Joshua G. Otten, Samuel W. Berg, Nadia E. Patel, Jacob B. Tipton and Justine M. R. Radunzel
Diversity 2025, 17(10), 733; https://doi.org/10.3390/d17100733 - 18 Oct 2025
Viewed by 401
Abstract
The Wood Turtle (Glyptemys insculpta) is an endangered species in the state of Iowa and a species of conservation concern across its entire range. The Iowa population is characterized by high levels of adult and egg predation, displays little or no annual recruitment, and harbors an extremely low number of juveniles (7.3%). Home range and habitat usage studies of hatchling and juvenile Wood Turtles are limited to a few studies, and only one study of juveniles exists from the state of Iowa. Over a 10-year period, we conducted a radiotelemetry study in Iowa on seven juvenile Wood Turtles for 32–182 weeks, and a 6-week study on six head-started hatchlings, to determine home range sizes and habitat usage patterns and to provide comparisons with similar studies on adult Wood Turtles. Mean home range sizes of hatchling Wood Turtles were significantly smaller than those of older juvenile turtles for 100%, 95%, and 50% minimum convex polygons (MCPs), for 95% and 50% kernel density estimators (KDEs), and for linear home range (LHR) and stream home range (SHR). Habitat usage patterns of hatchlings and juveniles also differed. During periods of terrestrial activity, older juveniles utilized grass and forb clearings significantly more frequently than did hatchlings, and hatchlings used riverbank habitat more frequently than did juvenile turtles. In addition, juveniles were, on average, located significantly farther from the stream than were hatchlings. Our study provides important data on the home range size and habitat usage patterns of two under-represented age classes of this endangered species. These data will inform conservation agencies regarding relevant habitat protection and age-class management strategies for riparian areas that are necessary for the continued survival and protection of this imperiled species.
(This article belongs to the Section Biodiversity Conservation)

23 pages, 6751 KB  
Article
Health Risk Assessment of Groundwater in Cold Regions Based on Kernel Density Estimation–Trapezoidal Fuzzy Number–Monte Carlo Simulation Model: A Case Study of the Black Soil Region in Central Songnen Plain
by Jiani Li, Yu Wang, Jianmin Bian, Xiaoqing Sun and Xingrui Feng
Water 2025, 17(20), 2984; https://doi.org/10.3390/w17202984 - 16 Oct 2025
Cited by 1 | Viewed by 482
Abstract
The quality of groundwater, a crucial freshwater resource in cold regions, directly affects human health. This study used groundwater quality monitoring data collected in the central Songnen Plain in 2014 and 2022 as a case study. The improved DRASTICL model was used to assess the vulnerability index, while water quality indicators were selected using a random forest algorithm and combined with the entropy-weighted groundwater quality index (E-GQI) approach to realize water quality assessment. Furthermore, self-organizing maps (SOM) were used for pollutant source analysis. Finally, the study identified the synergistic migration mechanism of NH4+ and Cl−, as well as the activation trend of As in reducing environments. The uncertainty inherent to health risk assessment was considered by developing a kernel density estimation–trapezoidal fuzzy number–Monte Carlo simulation (KDE-TFN-MCSS) model that reduced the distribution mis-specification risks and high-risk misjudgment rates associated with conventional assessment methods. The results indicated that: (1) The water chemistry type in the study area was predominantly HCO3−–Ca2+ with moderately to weakly alkaline water, and the primary and nitrogen pollution indicators were elevated, with the average NH4+ concentration significantly increasing from 0.06 mg/L in 2014 to 1.26 mg/L in 2022, exceeding the Class III limit of 1.0 mg/L. (2) The groundwater quality in the central Songnen Plain was poor in 2014, comprising predominantly Classes IV and V; by 2022, it comprised mostly Classes I–IV following a banded distribution, but declined in some central and northern areas. (3) The SOM analysis revealed that the principal hardness component shifted from Ca2+ in 2014 to Ca2+–Mg2+ synergy in 2022. Local high values of As and NH4+ were determined to reflect geogenic origin and diffuse agricultural pollution, whereas the Cl− distribution reflected the influence of de-icing agents and urbanization. (4) For drinking water exposure, a deterministic evaluation using the conventional four-step method indicated that the non-carcinogenic risk (HI) in the central and eastern areas significantly exceeded the threshold (HI > 1) in 2014, with the high-HI area expanding westward to the central and western regions in 2022; local areas in the north also exhibited carcinogenic risk (CR) values exceeding the threshold (CR > 0.0001). A probabilistic evaluation using the proposed simulation model indicated that, except for children's CR in 2022, both HI and CR exceeded acceptable thresholds with 95% probability. Therefore, the proposed assessment method can provide a basis for improved groundwater pollution zoning and control decisions in cold regions.
(This article belongs to the Special Issue Soil and Groundwater Quality and Resources Assessment, 2nd Edition)
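A minimal version of the Monte Carlo half of such a model, with KDE-style resampling of a concentration distribution feeding the conventional four-step hazard quotient, might look like the following. Every parameter value here (ingestion rate, body weight, reference dose, and the synthetic concentrations) is an assumption of the sketch, not a value from the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented NH4+ concentration measurements (mg/L), lognormal for skewness.
conc_obs = rng.lognormal(mean=0.0, sigma=0.5, size=200)
bw = 1.06 * conc_obs.std() * len(conc_obs) ** (-1 / 5)   # Silverman's rule

n = 50_000
# KDE-style sampling: bootstrap an observation, add kernel noise, clip at 0.
C = np.maximum(rng.choice(conc_obs, n) + rng.normal(0, bw, n), 0)
IR = rng.normal(1.5, 0.2, n)        # water ingestion rate, L/day (assumed)
BW = rng.normal(60, 8, n)           # body weight, kg (assumed)
EF, ED = 365, 30                    # exposure frequency (d/yr), duration (yr)
AT = 365 * 30                       # averaging time, days
RfD = 0.97                          # reference dose, mg/(kg*day) (assumed)

# Conventional four-step model: hazard quotient = chronic daily intake / RfD.
HQ = (C * IR * EF * ED) / (BW * AT * RfD)
print(f"mean HQ = {HQ.mean():.3f}, P(HQ > 1) = {np.mean(HQ > 1):.4f}")
```

The paper's KDE-TFN-MCSS model additionally wraps uncertain parameters in trapezoidal fuzzy numbers; this sketch shows only the KDE-plus-Monte-Carlo part of the idea.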

17 pages, 3282 KB  
Article
Comparing Spatial Analysis Methods for Habitat Selection: GPS Telemetry Reveals Methodological Bias in Raccoon Dog (Nyctereutes procyonoides) Ecology
by Sumin Jeon, Soo Kyeong Hwang, Yeon Woo Lee, Jihye Son, Hyeok Jae Lee, Chae Won Yoon, Ju Yeong Lee, Dong Kyun Yoo, Ok-Sik Chung and Jong Koo Lee
Forests 2025, 16(10), 1588; https://doi.org/10.3390/f16101588 - 16 Oct 2025
Viewed by 437
Abstract
Recent issues that have emerged regarding the raccoon dog (Nyctereutes procyonoides) include interaction with humans and disease transmission. Therefore, understanding their habitat characteristics and preferences is crucial in the effort to limit conflicts with humans. A total of thirteen raccoon dogs were captured from three regions in South Korea, each with distinct habitat characteristics, and fitted with GPS trackers to record their movements. Utilizing the GPS tracking data, Kernel Density Estimation (KDE), Minimum Convex Polygon (MCP), and the Jacobs Index were applied to characterize the habitat preferences of the raccoon dogs. The habitat composition ratios for KDE and MCP showed that forests had the largest proportion. However, habitat composition ratios similar to the land-cover proportions of the surrounding area indicated that raccoon dogs can adapt to various habitats. Jacobs Index analysis revealed habitat selection patterns different from those suggested by KDE and MCP, with forests showing neutral to negative selection despite comprising large proportions of home ranges. Our results highlight important methodological considerations when inferring habitat preferences from spatial data, suggesting that multiple analytical approaches provide complementary insights into animal space use.
(This article belongs to the Section Forest Biodiversity)
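The Jacobs Index referred to above is the selectivity measure D = (r - p) / (r + p - 2rp) from Jacobs (1974), which is what lets a habitat dominate a home range yet still show neutral or negative selection once its availability is accounted for. A small sketch (the example proportions are hypothetical):

```python
def jacobs_index(used, available):
    """Jacobs (1974) selectivity index D = (r - p) / (r + p - 2*r*p).

    `used` is the proportion of an animal's locations in a habitat, `available`
    the proportion of that habitat on the landscape; D ranges from -1 (strong
    avoidance) through 0 (use in proportion to availability) to +1 (strong
    preference).
    """
    r, p = used, available
    return (r - p) / (r + p - 2 * r * p)

# A forest can make up most of a home range yet show near-neutral or negative
# selection when it also dominates the landscape (values hypothetical):
print(round(jacobs_index(0.60, 0.55), 3))   # slight preference
print(round(jacobs_index(0.60, 0.75), 3))   # avoidance despite 60% usage
```

This is why the abstract's KDE/MCP composition ratios and the Jacobs Index can legitimately disagree: the former describe use alone, the latter use relative to availability.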

17 pages, 1996 KB  
Article
Short-Term Probabilistic Prediction of Photovoltaic Power Based on Bidirectional Long Short-Term Memory with Temporal Convolutional Network
by Weibo Yuan, Jinjin Ding, Li Zhang, Jingyi Ni and Qian Zhang
Energies 2025, 18(20), 5373; https://doi.org/10.3390/en18205373 - 12 Oct 2025
Viewed by 384
Abstract
To mitigate the impact of photovoltaic (PV) power generation uncertainty on power systems and accurately depict the PV output range, this paper proposes a quantile regression probabilistic prediction model (TCN-QRBiLSTM) integrating a Temporal Convolutional Network (TCN) and Bidirectional Long Short-Term Memory (BiLSTM). First, the historical dataset is divided into three weather scenarios (sunny, cloudy, and rainy) to generate training and test samples under the same weather conditions. Second, a TCN is used to extract local temporal features, and BiLSTM captures the bidirectional temporal dependencies between power and meteorological data. To address the non-differentiability of the quantile loss used in traditional interval prediction, the Huber norm is introduced as a smooth approximation of the original loss, yielding a differentiable improved Quantile Regression (QR) model that generates confidence intervals. Finally, Kernel Density Estimation (KDE) is integrated to output probability density prediction results. Taking a distributed PV power station in East China as the research object, using data from July to September 2022 (15 min resolution, 4128 samples), comparative verification against TCN-QRLSTM and QRBiLSTM models shows that under a 90% confidence level, the Prediction Interval Coverage Probability (PICP) of the proposed model under sunny/cloudy/rainy weather reaches 0.9901, 0.9553, and 0.9674, respectively, which is 0.56–3.85% higher than that of the comparative models; the Prediction Interval Normalized Average Width (PINAW) is 0.1432, 0.1364, and 0.1246, respectively, which is 1.35–6.49% lower than that of the comparative models; the comprehensive interval evaluation index (I) is the smallest; and the Bayesian Information Criterion (BIC) is the lowest under all three weather conditions.
The results demonstrate that the model can effectively quantify and mitigate PV power generation uncertainty, verifying its reliability and superiority in short-term PV power probabilistic prediction, and it has practical significance for ensuring the safe and economical operation of power grids with high PV penetration.
(This article belongs to the Special Issue Advanced Load Forecasting Technologies for Power Systems)
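The Huber-smoothed quantile loss idea can be sketched as below: the kinked pinball loss is replaced near zero by a quadratic so it is differentiable everywhere. This is a generic formulation with an assumed smoothing parameter `delta`, not necessarily the paper's exact construction:

```python
import numpy as np

def huber_pinball(u, tau, delta=0.05):
    """Pinball (quantile) loss with a Huber-style quadratic core.

    w(u) is the usual asymmetric quantile weight; the |u| kink is replaced
    inside [-delta, delta] by u^2 / (2*delta), making the loss differentiable
    everywhere while matching the linear tails up to a constant delta/2.
    """
    w = np.where(u >= 0, tau, 1 - tau)            # asymmetric quantile weight
    h = np.where(np.abs(u) <= delta,
                 u ** 2 / (2 * delta),            # smooth quadratic core
                 np.abs(u) - delta / 2)           # linear Huber tails
    return w * h

# At tau = 0.9, under-forecast errors (u > 0) cost nine times more than
# over-forecast errors of the same magnitude, which is what pushes the
# fitted curve toward the 90th percentile of the power distribution.
print(huber_pinball(np.array([-1.0, 0.0, 1.0]), tau=0.9))
```

Because the loss is smooth at zero, gradient-based training of the TCN-BiLSTM network no longer has to cope with the subgradient at the pinball kink.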

21 pages, 3120 KB  
Article
Modelling Dynamic Parameter Effects in Designing Robust Stability Control Systems for Self-Balancing Electric Segway on Irregular Stochastic Terrains
by Desejo Filipeson Sozinando, Bernard Xavier Tchomeni and Alfayo Anyika Alugongo
Physics 2025, 7(4), 46; https://doi.org/10.3390/physics7040046 - 10 Oct 2025
Viewed by 635
Abstract
In this study, a nonlinear dynamic model is developed to examine the stability and vibration behavior of a self-balancing electric Segway operating over irregular stochastic terrains. The Segway is treated as a three-degrees-of-freedom cart–inverted pendulum system, incorporating elastic and damping effects at the wheel–ground interface. Road irregularities are generated in accordance with international standards using high-order filtered noise, allowing for representation of surface classes from smooth to highly degraded. The governing equations, formulated via Lagrange's method, are transformed into a Lorenz-like state-space form for nonlinear analysis. Numerical simulations employ the fourth-order Runge–Kutta scheme to compute translational and angular responses under varying speeds and terrain conditions. Frequency-domain analysis using the Fast Fourier Transform (FFT) identifies resonant excitation bands linked to road spectral content, while Kernel Density Estimation (KDE) maps the probability distribution of displacement states to distinguish stable from variable regimes. The Lyapunov stability assessment and bifurcation analysis reveal critical velocity thresholds and parameter regions marking transitions from stable operation to chaotic motion. The study quantifies the influence of the gravity–damping ratio, mass–damping coupling, control torque ratio, and vertical excitation on dynamic stability. The results provide a methodology for designing stability control systems that ensure safe and comfortable Segway operation across diverse terrains.
(This article belongs to the Section Applied Physics)

26 pages, 14595 KB  
Article
Practical Application of Passive Air-Coupled Ultrasonic Acoustic Sensors for Wheel Crack Detection
by Aashish Shaju, Nikhil Kumar, Giovanni Mantovani, Steve Southward and Mehdi Ahmadian
Sensors 2025, 25(19), 6126; https://doi.org/10.3390/s25196126 - 3 Oct 2025
Viewed by 555
Abstract
Undetected cracks in railroad wheels pose significant safety and economic risks, while current inspection methods are limited by cost, coverage, or contact requirements. This study explores the use of passive, air-coupled ultrasonic acoustic (UA) sensors for detecting wheel damage on stationary or moving wheels. Two controlled wheelset datasets, one with clearly damaged wheels and another with early, service-induced defects, were tested using hammer impacts. An automated system identified high-energy bursts and extracted features in both time and frequency domains, such as decay rate, spectral centroid, and entropy. The results demonstrate the effectiveness of ultrasonic acoustic emission (UAE) techniques through Kernel Density Estimation (KDE) visualization, hypothesis testing with effect sizes, and Receiver Operating Characteristic (ROC) analysis. The decay rate consistently proved to be the most effective discriminator, achieving near-perfect classification of severely damaged wheels and maintaining meaningful separation for early defects. Spectral features provided additional information but were less decisive. The frequency spectrum characteristics were effective across both axial and radial sensor orientations, with ultrasonic frequencies (20–80 kHz) offering higher spectral fidelity than sonic frequencies (1–20 kHz). This work establishes a validated “ground-truth” signature essential for developing a practical wayside detection system. The findings guide a targeted engineering approach to physically isolate this known signature from ambient noise and develop advanced models for reliable in-motion detection. Full article
(This article belongs to the Special Issue Sensing and Imaging for Defect Detection: 2nd Edition)

17 pages, 1466 KB  
Article
Robust Minimum-Cost Consensus Model with Non-Cooperative Behavior: A Data-Driven Approach
by Jiangyue Fu, Xingrui Guan, Xun Han and Gang Chen
Mathematics 2025, 13(19), 3098; https://doi.org/10.3390/math13193098 - 26 Sep 2025
Viewed by 355
Abstract
Achieving consensus in group decision-making is both essential and challenging, especially when non-cooperative behaviors can significantly hinder the process under uncertainty. These behaviors may distort consensus outcomes, leading to increased costs and reduced efficiency. To address this issue, this study proposes a data-driven robust minimum-cost consensus model (MCCM) that accounts for non-cooperative behaviors by leveraging individual adjustment willingness. The model introduces an adjustment willingness function to identify non-cooperative participants during the consensus-reaching process (CRP). To handle uncertainty in unit consensus costs, Principal Component Analysis (PCA) and Kernel Density Estimation (KDE) are employed to construct data-driven uncertainty sets. A robust optimization framework is then used to minimize the worst-case consensus cost within these sets, improving the model’s adaptability and reducing the risk of suboptimal decisions. To enhance computational tractability, the model is reformulated into a linear equivalent using duality theory. Experimental results from a case study on house demolition compensation negotiations in Guiyang demonstrate the model’s effectiveness in identifying and mitigating non-cooperative behaviors. The proposed approach significantly improves consensus efficiency and consistency, while the data-driven robust strategy offers greater flexibility than traditional robust optimization methods. These findings suggest that the model is well-suited for complex real-world group decision-making scenarios under uncertainty. Full article
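The PCA-plus-KDE construction of a data-driven uncertainty set can be sketched as follows. This is an assumption-laden illustration, not the paper's formulation: synthetic lognormal cost vectors stand in for historical data, PCA is computed via SVD, the set keeps the samples above a KDE density quantile, and the uniform `adjust` vector is a hypothetical stand-in for opinion adjustments.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Synthetic history of unit consensus costs: 200 observations, 5 experts
costs = rng.lognormal(mean=1.0, sigma=0.3, size=(200, 5))

# PCA via SVD on centered data, keeping the first 2 principal components
mu = costs.mean(axis=0)
X = costs - mu
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:2].T                      # scores in the reduced space

# KDE in the reduced space; the uncertainty set is the high-density region
kde = gaussian_kde(Z.T)               # gaussian_kde expects a (d, N) array
dens = kde(Z.T)
threshold = np.quantile(dens, 0.10)   # keep the 90% highest-density samples
in_set = dens >= threshold

# Worst-case total cost over the data-driven set (assumed unit adjustments)
adjust = np.ones(5)
worst = costs[in_set] @ adjust
print(worst.max())
```

Trimming the lowest-density 10% is what makes this set less conservative than a box or ellipsoid drawn around all the data: outliers no longer dictate the worst case the robust model must hedge against.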

15 pages, 2761 KB  
Article
An Adaptive Importance Sampling Method Based on Improved MCMC Simulation for Structural Reliability Analysis
by Yue Zhang, Changjiang Wang and Xiewen Hu
Appl. Sci. 2025, 15(19), 10438; https://doi.org/10.3390/app151910438 - 26 Sep 2025
Viewed by 490
Abstract
Constructing an effective importance sampling density is crucial for structural reliability analysis via importance sampling (IS), particularly when dealing with performance functions that have multiple design points or disjoint failure domains. This study introduces an adaptive importance sampling technique leveraging an improved Markov chain Monte Carlo (IMCMC) approach. The method begins by efficiently gathering distributed samples across all failure regions using IMCMC. Subsequently, based on the obtained samples, it constructs the importance sampling density adaptively through a kernel density estimation (KDE) technique that integrates local bandwidth factors. Case studies confirm that the proposed approach successfully constructs an importance sampling density that closely mirrors the theoretical optimum, thereby boosting both the accuracy and efficiency of failure probability estimations. Full article
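A minimal one-dimensional sketch of the idea, under stated assumptions: it uses plain Metropolis chains from two start points rather than the paper's improved MCMC, a toy performance function g(x) = 3 − |x| with two disjoint failure regions, and SciPy's default global bandwidth rather than local bandwidth factors.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(2)
g = lambda x: 3.0 - np.abs(x)          # failure when g(x) <= 0, i.e. |x| >= 3

# Metropolis chains targeting the standard normal truncated to the failure
# domain; one chain per disjoint region so both are populated.
chain = []
for start in (-3.5, 3.5):
    x = start
    for _ in range(3000):
        cand = x + rng.normal(0, 1.0)
        # Accept with prob min(1, pi(cand)/pi(x)); pi = phi * indicator
        if g(cand) <= 0 and rng.random() < norm.pdf(cand) / norm.pdf(x):
            x = cand
        chain.append(x)
samples = np.array(chain)

# KDE on the failure-domain samples is the adaptive IS density q(x)
kde = gaussian_kde(samples)
xs = kde.resample(20000, seed=3).ravel()
w = norm.pdf(xs) / kde(xs)             # importance weights phi(x)/q(x)
pf = np.mean((g(xs) <= 0) * w)
print(pf)                              # close to 2*Phi(-3) ≈ 0.0027
```

Because the Gaussian kernels give the KDE proposal full support, the weighted-indicator estimate stays unbiased even where the proposal leaks mass into the safe domain; concentrating q near the failure regions is what drives the variance, and hence the sample count, down relative to crude Monte Carlo.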
