Search Results (1,743)

Search Parameters:
Keywords = classical statistics

24 pages, 419 KB  
Article
Non-Uniformly Multidimensional Moran Random Walk with Resets
by Mohamed Abdelkader
Axioms 2025, 14(10), 756; https://doi.org/10.3390/axioms14100756 - 7 Oct 2025
Abstract
In this paper, we investigate the non-uniform m-dimensional Moran walk (Z_n^(1), ..., Z_n^(m)), where each component process (Z_n^(j)), 1 ≤ j ≤ m, either increases by one unit or resets to zero at each step. Using probability generating functions, we analyze key statistical properties of the walk, with particular emphasis on the mean and variance of its final altitude. We further establish closed-form expressions for the limiting distribution of the process, as well as for the mean and variance of each component. These results extend classical findings on one- and two-dimensional Moran models to the general m-dimensional setting, thereby providing new insights into the asymptotic behavior of multi-component random walks with resets.
(This article belongs to the Section Mathematical Analysis)
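As a rough illustration of the reset dynamics described in this abstract, a per-component reset walk can be simulated directly (the reset probabilities and seed below are illustrative, not the paper's non-uniform parameters):

```python
import random

def moran_walk(m, n, reset_probs, seed=0):
    """Simulate n steps of an m-dimensional Moran walk with resets.

    At each step, component j resets to zero with probability
    reset_probs[j] and otherwise increases by one.  The probabilities
    here are illustrative stand-ins, not the paper's parameters.
    """
    rng = random.Random(seed)
    z = [0] * m
    for _ in range(n):
        for j in range(m):
            if rng.random() < reset_probs[j]:
                z[j] = 0
            else:
                z[j] += 1
    return z  # final altitudes (Z_n^(1), ..., Z_n^(m))
```

Averaging the final altitudes over many independent runs gives a Monte Carlo check of the mean and variance formulas the paper derives analytically.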
27 pages, 2189 KB  
Article
Miss-Triggered Content Cache Replacement Under Partial Observability: Transformer-Decoder Q-Learning
by Hakho Kim, Teh-Jen Sun and Eui-Nam Huh
Mathematics 2025, 13(19), 3217; https://doi.org/10.3390/math13193217 - 7 Oct 2025
Abstract
Content delivery networks (CDNs) face steadily rising, uneven demand, straining heuristic cache replacement. Reinforcement learning (RL) is promising, but most work assumes a fully observable Markov Decision Process (MDP), unrealistic under delayed, partial, and noisy signals. We model cache replacement as a Partially Observable MDP (POMDP) and present the Miss-Triggered Cache Transformer (MTCT), a Transformer-decoder Q-learning agent that encodes recent histories with self-attention. MTCT invokes its policy only on cache misses to align compute with informative events and uses a delayed-hit reward to propagate information from hits. A compact, rank-based action set (12 actions by default) captures popularity–recency trade-offs with complexity independent of cache capacity. We evaluate MTCT on a real trace (MovieLens) and two synthetic workloads (Mandelbrot–Zipf, Pareto) against Adaptive Replacement Cache (ARC), Windowed TinyLFU (W-TinyLFU), classical heuristics, and Double Deep Q-Network (DDQN). MTCT achieves the best or statistically comparable cache-hit rates on most cache sizes; e.g., on MovieLens at M=600, it reaches 0.4703 (DDQN 0.4436, ARC 0.4513). Miss-triggered inference also lowers mean wall-clock time per episode; Transformer inference is well suited to modern hardware acceleration. Ablations support CL=50 and show that finer action grids improve stability and final accuracy.
(This article belongs to the Section E1: Mathematics and Computer Science)
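The miss-triggered idea in this abstract, running the replacement policy only when a request misses, can be sketched with a simple stand-in policy (the LRU eviction rule below is a placeholder for illustration, not the paper's Transformer agent):

```python
from collections import OrderedDict

def serve(request, cache, capacity, policy):
    """Serve one request; the replacement policy runs only on a miss,
    mirroring the miss-triggered design described in the abstract."""
    if request in cache:
        cache.move_to_end(request)      # refresh recency on a hit
        return True                     # hit: no policy invocation
    if len(cache) >= capacity:
        victim = policy(cache)          # decision point: pick an eviction
        del cache[victim]
    cache[request] = True
    return False                        # miss

def lru_policy(cache):
    """Illustrative stand-in policy: evict the least recently used key."""
    return next(iter(cache))
```

Replacing `lru_policy` with a learned agent reproduces the control flow: compute is spent only on the informative (miss) events.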
18 pages, 4823 KB  
Article
Spatial Structure and Optimal Sampling Intervals of Soil Moisture at Different Depths in a Typical Karst Demonstration Zone
by Hui Yin, Bo Xiong, Xiaomin Lao, Zhongcheng Jiang, Yi’an Wu and Tongyu Wang
Water 2025, 17(19), 2891; https://doi.org/10.3390/w17192891 - 4 Oct 2025
Abstract
Related studies analyzing the spatial structure of soil moisture from both horizontal and vertical directions, as well as the spacing interval distances of soil moisture sampling points in typical karst demonstration zones, are relatively rare. This study applied classical statistics, geostatistics, and “3S” technology to analyze the spatial structure, influencing factors, and spacing interval distances of soil moisture sampling points in the Guohua Demonstration Zone. The results showed that Moran’s I indices of soil moisture at different soil depths in the Guohua Demonstration Zone presented positive spatial correlation, and the spatial distribution of soil moisture at different soil depths showed a distinct spatial clustering pattern, with few spatially isolated zones. The spatial autocorrelation distance for soil moisture at 5 cm and 10 cm soil depths was 2400 m, while the autocorrelation distances for soil moisture at 20 cm and 30 cm soil depths were 2200 m and 2000 m, respectively. The spatial range value for soil moisture at a soil depth of 20 cm in the Guohua Demonstration Zone was the largest (Range = 6318.0 m), while the spatial range value for soil moisture at a soil depth of 30 cm was the smallest (Range = 646.0 m). The minimum value (threshold: 646.0 m) between the spatial autocorrelation distance and the spatial range of soil moisture at different soil depths in the Guohua Demonstration Zone could serve as an appropriate spacing interval distance of soil moisture sampling points. Soil moisture at different soil depths in the Guohua Demonstration Zone was primarily influenced by rock desertification, vegetation cover, soil layer thickness, and elevation. The synergistic effect of “rocky desertification + vegetation”, “rocky desertification + soil thickness”, and “vegetation + soil thickness” had a greater influence on soil moisture. Through high-density soil moisture sampling points in typical karst areas, the study results strengthened the application research on soil moisture in typical karst areas, providing scientific references for studies on the spatial structure, influencing factors, and appropriate spacing interval distance of soil moisture sampling points in karst areas.
(This article belongs to the Section Soil and Water)
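Moran's I, the spatial autocorrelation index reported in this abstract, can be computed from a vector of measurements and a spatial weight matrix; a minimal sketch (the weight matrix construction is assumed here, not taken from the study):

```python
def morans_i(values, weights):
    """Global Moran's I for n observations and an n x n spatial weight
    matrix with zero diagonal.  Positive I indicates spatial clustering,
    as the abstract reports for soil moisture at all depths.

    I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)
```

For example, four samples along a transect with chain adjacency and values clustered low-low-high-high yield a positive I, matching the clustering pattern described above.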
19 pages, 578 KB  
Article
Growth of Renewable Energy: A Review of Drivers from the Economic Perspective
by Yoram Krozer, Sebastian Bykuc and Frans Coenen
Energies 2025, 18(19), 5250; https://doi.org/10.3390/en18195250 - 3 Oct 2025
Abstract
Global modern renewable energy based on geothermal, wind, solar, and marine resources has grown rapidly over the last decades despite low energy density, intermittent supply, and other qualities inferior to those of fossil fuels. What is the explanation for this growth? The main drivers of growth are assessed using economic theories and verified with statistical data. From the neoclassical viewpoint that focuses on price substitutions, the growth can be explained by the shift from energy-intensive agriculture and industry to labour-intensive services. However, the energy resources complemented rather than substituted for each other. In the evolutionary idea, investments supported by policies enabled cost-reducing technological change. Still, policies alone are insufficient to generate the growth of modern renewable energy as they are inconsistent across countries and in time. From the behavioural perspective that is preoccupied with innovative entrepreneurs, the value addition of electrification can explain the introduction of modern renewable energy in market niches, but not its fast growth. Instead of these mono-causalities, the growth of modern renewable energy is explained by technology diffusion during the pioneering, growth, and maturation phases. Possibilities that postpone the maturation are pinpointed.
(This article belongs to the Section A: Sustainable Energy)
27 pages, 6645 KB  
Article
Performance Comparison of Metaheuristic and Hybrid Algorithms Used for Energy Cost Minimization in a Solar–Wind–Battery Microgrid
by Seyfettin Vadi, Merve Bildirici and Orhan Kaplan
Sustainability 2025, 17(19), 8849; https://doi.org/10.3390/su17198849 - 2 Oct 2025
Abstract
The integration of renewable energy sources has become a strategic necessity for sustainable energy management and supply security. This study evaluates the performance of eight metaheuristic optimization algorithms in scheduling a renewable-based smart grid system that integrates solar, wind, and battery storage for a factory in İzmir, Türkiye. The algorithms considered include classical approaches—Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), Krill Herd Optimization (KOA), and the Ivy Algorithm (IVY)—alongside hybrid methods, namely KOA–WOA, WOA–PSO, and Gradient-Assisted PSO (GD-PSO). The optimization objectives were minimizing operational energy cost, maximizing renewable utilization, and reducing dependence on grid power, evaluated over a 7-day dataset in MATLAB. The results showed that hybrid algorithms, particularly GD-PSO and WOA–PSO, consistently achieved the lowest average costs with strong stability, while classical methods such as ACO and IVY exhibited higher costs and variability. Statistical analyses confirmed the robustness of these findings, highlighting the effectiveness of hybridization in improving smart grid energy optimization.
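As a sketch of one classical method compared in this abstract, a minimal Particle Swarm Optimization loop is shown below (the coefficients w=0.7, c1=c2=1.5, the search bounds, and the test objective are textbook defaults for illustration, not the study's tuned settings or cost model):

```python
import random

def pso(cost, dim, n_particles=20, iters=100, seed=0):
    """Minimal PSO: particles track personal bests and a global best,
    with velocities pulled toward both.  Returns (best position, cost)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the study's setting, `cost` would be the scheduled operational energy cost of the solar-wind-battery system rather than the toy objective used here.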
19 pages, 611 KB  
Article
An Adjusted CUSUM-Based Method for Change-Point Detection in Two-Phase Inverse Gaussian Degradation Processes
by Mei Li, Tian Fu and Qian Li
Mathematics 2025, 13(19), 3167; https://doi.org/10.3390/math13193167 - 2 Oct 2025
Abstract
Degradation data plays a crucial role in the reliability assessment and condition monitoring of engineering systems. The stage-wise changes in degradation rates often signal turning points in system performance or potential fault risks. To address the issue of structural changes during the degradation process, this paper constructs a degradation modeling framework based on a two-stage Inverse Gaussian (IG) process and proposes a change-point detection method based on an adjusted CUSUM (cumulative sum) statistic to identify potential stage changes in the degradation path. This method does not rely on complex prior information and constructs statistics by accumulating deviations, utilizing a binary search approach to achieve accurate change-point localization. In simulation experiments, the proposed method demonstrated superior detection performance compared to the classical likelihood ratio method and modified information criterion, verified through a combination of experiments with different change-point positions and degradation rates. Finally, the method was applied to real degradation data of a hydraulic piston pump, successfully identifying two structural change points during the degradation process. Based on these change points, the degradation stages were delineated, thereby enhancing the model’s ability to characterize the true degradation path of the equipment.
(This article belongs to the Special Issue Reliability Analysis and Statistical Computing)
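The textbook mean-shift CUSUM statistic underlying this abstract's method can be sketched as follows (this is the plain, unadjusted version for illustration; the paper adjusts the statistic for two-phase Inverse Gaussian degradation increments):

```python
def cusum_changepoint(x):
    """Locate a single change point by the classic CUSUM statistic:
    S_k = sum_{i<=k} (x_i - mean(x)); the estimated change point is
    the index k maximizing |S_k|.  Deviations accumulate until the
    regime shifts, so |S_k| peaks near the true change."""
    mean = sum(x) / len(x)
    s, best_k, best_abs = 0.0, 0, 0.0
    for k, v in enumerate(x, start=1):
        s += v - mean
        if abs(s) > best_abs:
            best_abs, best_k = abs(s), k
    return best_k
```

On a sequence whose level jumps halfway through, the cumulative deviation is extremal exactly at the jump, which is the intuition the adjusted statistic builds on.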
26 pages, 5861 KB  
Article
Robust Industrial Surface Defect Detection Using Statistical Feature Extraction and Capsule Network Architectures
by Azeddine Mjahad and Alfredo Rosado-Muñoz
Sensors 2025, 25(19), 6063; https://doi.org/10.3390/s25196063 - 2 Oct 2025
Abstract
Automated quality control is critical in modern manufacturing, especially for metallic cast components, where fast and accurate surface defect detection is required. This study evaluates classical Machine Learning (ML) algorithms using extracted statistical parameters and deep learning (DL) architectures including ResNet50, Capsule Networks, and a 3D Convolutional Neural Network (CNN3D) using 3D image inputs. Using the Dataset Original, ML models with the selected parameters achieved high performance: RF reached 99.4 ± 0.2% precision and 99.4 ± 0.2% sensitivity, GB 96.0 ± 0.2% precision and 96.0 ± 0.2% sensitivity. ResNet50 trained with extracted parameters reached 98.0 ± 1.5% accuracy and 98.2 ± 1.7% F1-score. Capsule-based architectures achieved the best results, with ConvCapsuleLayer reaching 98.7 ± 0.2% accuracy and 100.0 ± 0.0% precision for the normal class, and 98.9 ± 0.2% F1-score for the affected class. CNN3D applied on 3D image inputs reached 88.61 ± 1.01% accuracy and 90.14 ± 0.95% F1-score. Using the Dataset Expanded with ML and PCA-selected features, Random Forest achieved 99.4 ± 0.2% precision and 99.4 ± 0.2% sensitivity, K-Nearest Neighbors 99.2 ± 0.0% precision and 99.2 ± 0.0% sensitivity, and SVM 99.2 ± 0.0% precision and 99.2 ± 0.0% sensitivity, demonstrating consistent high performance. All models were evaluated using repeated train-test splits to calculate averages of standard metrics (accuracy, precision, recall, F1-score), and processing times were measured, showing very low per-image execution times (as low as 3.69 × 10⁻⁴ s/image), supporting potential real-time industrial application. These results indicate that combining statistical descriptors with ML and DL architectures provides a robust and scalable solution for automated, non-destructive surface defect detection, with high accuracy and reliability across both the original and expanded datasets.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)
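A minimal sketch of extracting statistical parameters from an image, in the spirit of this abstract (the study's exact descriptor set is not specified here; these four moments are an illustrative assumption):

```python
import math

def statistical_features(pixels):
    """Compute four statistical descriptors from a flat list of pixel
    intensities: mean, standard deviation, skewness, and (raw) kurtosis.
    These moments are illustrative features for a classical ML classifier."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3) if std else 0.0
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2) if var else 0.0
    return {"mean": mean, "std": std, "skewness": skew, "kurtosis": kurt}
```

Feature vectors like this one are what a Random Forest or SVM would consume in place of raw images, which is what keeps per-image inference time so low.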
49 pages, 517 KB  
Review
A Comprehensive Review of Data-Driven Techniques for Air Pollution Concentration Forecasting
by Jaroslaw Bernacki and Rafał Scherer
Sensors 2025, 25(19), 6044; https://doi.org/10.3390/s25196044 - 1 Oct 2025
Abstract
Air quality is crucial for public health and the environment, which makes it important to both monitor and forecast the level of pollution. Polluted air, containing harmful substances such as particulate matter, nitrogen oxides, or ozone, can lead to serious respiratory and circulatory diseases, especially in people at risk. Air quality forecasting allows for early warning of smog episodes and taking actions to reduce pollutant emissions. In this article, we review air pollutant concentration forecasting methods, analyzing both classical statistical approaches and modern techniques based on artificial intelligence, including deep models, neural networks, and machine learning, as well as advanced sensing technologies. This work aims to present the current state of research and identify the most promising directions of development in air quality modeling, which can contribute to more effective health and environmental protection. According to the reviewed literature, deep learning–based models, particularly hybrid and attention-driven architectures, emerge as the most promising approaches, while persistent challenges such as data quality, interpretability, and integration of heterogeneous sensing systems define the open issues for future research.
(This article belongs to the Special Issue Smart Gas Sensor Applications in Environmental Change Monitoring)
25 pages, 63826 KB  
Article
Mutual Effects of Face-Swap Deepfakes and Digital Watermarking—A Region-Aware Study
by Tomasz Walczyna and Zbigniew Piotrowski
Sensors 2025, 25(19), 6015; https://doi.org/10.3390/s25196015 - 30 Sep 2025
Abstract
Face swapping is commonly assumed to act locally on the face region, which motivates placing watermarks away from the face to preserve the integrity of the face. We demonstrate that this assumption is violated in practice. Using a region-aware protocol with tunable-strength visible and invisible watermarks and six face-swap families, we quantify both identity transfer and watermark retention on the VGGFace2 dataset. First, edits are non-local—generators alter background statistics and degrade watermarks even far from the face, as measured by background-only PSNR and Pearson correlation relative to a locality-preserving baseline. Second, dependencies between watermark strength, identity transfer, and retention are non-monotonic and architecture-dependent. Methods that better confine edits to the face—typically those employing segmentation-weighted objectives—preserve background signal more reliably than globally trained GAN pipelines. At comparable perceptual distortion, invisible marks tuned to the background retain higher correlation with the background than visible overlays. These findings indicate that classical robustness tests are insufficient alone—watermark evaluation should report region-wise metrics and be strength- and architecture-aware.
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies—Second Edition)
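The background-only PSNR mentioned in this abstract can be sketched as a masked PSNR (the mask convention, flat pixel lists, and the [0, 255] value range are assumptions of this sketch):

```python
import math

def background_psnr(original, edited, mask):
    """PSNR restricted to background pixels (mask[i] == 0 marks background),
    echoing the region-wise metrics the abstract argues for.  Inputs are
    flat, equal-length lists; at least one background pixel is assumed."""
    se, count = 0.0, 0
    for o, e, m in zip(original, edited, mask):
        if m == 0:                      # only background pixels contribute
            se += (o - e) ** 2
            count += 1
    mse = se / count
    if mse == 0:
        return float("inf")             # background untouched by the edit
    return 10 * math.log10(255 ** 2 / mse)
```

A locality-preserving generator would leave this metric near infinity; the abstract's finding is that real face-swap pipelines do not.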
37 pages, 4368 KB  
Article
High-Performance Simulation of Generalized Tempered Stable Random Variates: Exact and Numerical Methods for Heavy-Tailed Data
by Aubain Nzokem and Daniel Maposa
Math. Comput. Appl. 2025, 30(5), 106; https://doi.org/10.3390/mca30050106 - 28 Sep 2025
Abstract
The Generalized Tempered Stable (GTS) distribution extends classical stable laws through exponential tempering, preserving the power-law behavior while ensuring finite moments. This makes it especially suitable for modeling heavy-tailed financial data. However, the lack of closed-form densities poses significant challenges for simulation. This study provides a comprehensive and systematic comparison of GTS simulation methods, including rejection-based algorithms, series representations, and an enhanced Fast Fractional Fourier Transform (FRFT)-based inversion method. Through extensive numerical experiments on major financial assets (Bitcoin, Ethereum, the S&P 500, and the SPY ETF), this study demonstrates that the FRFT method outperforms others in terms of accuracy and ability to capture tail behavior, as validated by goodness-of-fit tests. Our results provide practitioners with robust and efficient simulation tools for applications in risk management, derivative pricing, and statistical modeling.
(This article belongs to the Special Issue Statistical Inference in Linear Models, 2nd Edition)
68 pages, 8643 KB  
Article
From Sensors to Insights: Interpretable Audio-Based Machine Learning for Real-Time Vehicle Fault and Emergency Sound Classification
by Mahmoud Badawy, Amr Rashed, Amna Bamaqa, Hanaa A. Sayed, Rasha Elagamy, Malik Almaliki, Tamer Ahmed Farrag and Mostafa A. Elhosseini
Machines 2025, 13(10), 888; https://doi.org/10.3390/machines13100888 - 28 Sep 2025
Abstract
Unrecognized mechanical faults and emergency sounds in vehicles can compromise safety, particularly for individuals with hearing impairments and in sound-insulated or autonomous driving environments. As intelligent transportation systems (ITSs) evolve, there is a growing need for inclusive, non-intrusive, and real-time diagnostic solutions that enhance situational awareness and accessibility. This study introduces an interpretable, sound-based machine learning framework to detect vehicle faults and emergency sound events using acoustic signals as a scalable diagnostic source. Three purpose-built datasets were developed: one for vehicular fault detection, another for emergency and environmental sounds, and a third integrating both to reflect real-world ITS acoustic scenarios. Audio data were preprocessed through normalization, resampling, and segmentation and transformed into numerical vectors using Mel-Frequency Cepstral Coefficients (MFCCs), Mel spectrograms, and Chroma features. To ensure performance and interpretability, feature selection was conducted using SHAP (explainability), Boruta (relevance), and ANOVA (statistical significance). A two-phase experimental workflow was implemented: Phase 1 evaluated 15 classical models, identifying ensemble classifiers and multi-layer perceptrons (MLPs) as top performers; Phase 2 applied advanced feature selection to refine model accuracy and transparency. Ensemble models such as Extra Trees, LightGBM, and XGBoost achieved over 91% accuracy and AUC scores exceeding 0.99. SHAP provided model transparency without performance loss, while ANOVA achieved high accuracy with fewer features. The proposed framework enhances accessibility by translating auditory alarms into visual/haptic alerts for hearing-impaired drivers and can be integrated into smart city ITS platforms via roadside monitoring systems.
(This article belongs to the Section Vehicle Engineering)
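The normalization and segmentation steps named in this abstract can be sketched as follows (the frame length and hop size are illustrative assumptions; MFCC or spectrogram extraction would then run on each frame):

```python
def preprocess(signal, frame_len, hop):
    """Peak-normalize an audio signal to [-1, 1] and split it into
    overlapping frames, a common front end before computing per-frame
    features such as MFCCs."""
    peak = max(abs(s) for s in signal) or 1.0   # guard the all-zero case
    norm = [s / peak for s in signal]
    return [norm[i:i + frame_len]
            for i in range(0, len(norm) - frame_len + 1, hop)]
```

Each returned frame becomes one feature vector after the MFCC/Mel/Chroma transforms the abstract describes.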
15 pages, 313 KB  
Article
On Wijsman f_ρ-Statistical Convergence of Order α of Modulus Functions
by Gülcan Atıcı Turan and Mikail Et
Axioms 2025, 14(10), 730; https://doi.org/10.3390/axioms14100730 - 26 Sep 2025
Abstract
In the present paper, we introduce and investigate the concepts of Wijsman f_ρ-statistical convergence of order α and Wijsman strong f_ρ-convergence of order α. These notions are defined as natural generalizations of classical statistical convergence and Wijsman convergence, incorporating the tools of modulus functions and natural density through the function f. We provide a detailed analysis of their structural properties, including inclusion relations, basic characterizations, and illustrative examples. Furthermore, we establish the inclusion relations between Wijsman f_ρ-statistical convergence and Wijsman strong f_ρ-convergence of order α, showing conditions under which one implies the other. These notions are different in general, while coinciding under certain restrictions on the function f, the parameter α, and the sequence ρ. The results obtained not only extend some well-known findings in the literature but also open up new directions for further study in the theory of statistical convergence and its applications to analysis and approximation theory.
(This article belongs to the Special Issue Recent Advances in Functional Analysis and Operator Theory)
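For context, the classical Wijsman statistical convergence that these notions generalize can be stated as follows (standard definition; the paper's f_ρ, order-α variants replace this natural density with a modulus-function-weighted density):

```latex
% A_k -> A (Wijsman statistically) in a metric space (X, d) iff, for every
% x in X and every eps > 0, the natural density of the deviating indices
% vanishes:
\lim_{n \to \infty} \frac{1}{n}
  \left|\left\{\, k \le n : \lvert d(x, A_k) - d(x, A) \rvert \ge \varepsilon \,\right\}\right| = 0 .
```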
48 pages, 31470 KB  
Article
Integrating Climate and Economic Predictors in Hybrid Prophet–(Q)LSTM Models for Sustainable National Energy Demand Forecasting: Evidence from The Netherlands
by Ruben Curiël, Ali Mohammed Mansoor Alsahag and Seyed Sahand Mohammadi Ziabari
Sustainability 2025, 17(19), 8687; https://doi.org/10.3390/su17198687 - 26 Sep 2025
Abstract
Forecasting national energy demand is challenging under climate variability and macroeconomic uncertainty. We assess whether hybrid Prophet–(Q)LSTM models that integrate climate and economic predictors improve long-horizon forecasts for The Netherlands. This study covers 2010–2024 and uses data from ENTSO-E (hourly load), KNMI and Copernicus/ERA5 (weather and climate indices), Statistics Netherlands (CBS), and the World Bank (macroeconomic and commodity series). We evaluate Prophet–LSTM and Prophet–QLSTM, each with and without stacking via XGBoost, under rolling-origin cross-validation; feature choice is guided by Bayesian optimisation. Stacking provides the largest and most consistent accuracy gains across horizons. The quantum-inspired variant performs on par with the classical ensemble while using a smaller recurrent core, indicating value as a complementary learner. Substantively, short-run variation is dominated by weather and calendar effects, whereas selected commodity and activity indicators stabilise longer-range baselines; combining both domains improves robustness to regime shifts. In sustainability terms, improved long-horizon accuracy supports renewable integration, resource adequacy, and lower curtailment by strengthening seasonal planning and demand-response scheduling. The pipeline demonstrates the feasibility of integrating quantum-inspired components into national planning workflows, using The Netherlands as a case study, while acknowledging simulator constraints and compute costs.
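Rolling-origin cross-validation, the evaluation scheme used in this abstract, can be sketched as an expanding-window split generator (the window sizes below are illustrative, not the study's horizons):

```python
def rolling_origin_splits(n, initial, horizon, step):
    """Generate (train_indices, test_indices) pairs for rolling-origin
    cross-validation: the training window always starts at 0 and grows,
    and each test window sits immediately after it, so no future data
    leaks into training."""
    splits = []
    end = initial
    while end + horizon <= n:
        splits.append((list(range(0, end)),
                       list(range(end, end + horizon))))
        end += step
    return splits
```

Each split would train the Prophet–(Q)LSTM stack on the expanding window and score it on the held-out horizon, averaging errors across origins.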
40 pages, 19754 KB  
Article
Trans-cVAE-GAN: Transformer-Based cVAE-GAN for High-Fidelity EEG Signal Generation
by Yiduo Yao, Xiao Wang, Xudong Hao, Hongyu Sun, Ruixin Dong and Yansheng Li
Bioengineering 2025, 12(10), 1028; https://doi.org/10.3390/bioengineering12101028 - 26 Sep 2025
Abstract
Electroencephalography signal generation remains a challenging task due to its non-stationarity, multi-scale oscillations, and strong spatiotemporal coupling. Conventional generative models, including VAEs and GAN variants such as DCGAN, WGAN, and WGAN-GP, often yield blurred waveforms, unstable spectral distributions, or lack semantic controllability, limiting their effectiveness in emotion-related applications. To address these challenges, this research proposes a Transformer-based conditional variational autoencoder–generative adversarial network (Trans-cVAE-GAN) that combines Transformer-driven temporal modeling, label-conditioned latent inference, and adversarial learning. A multi-dimensional structural loss further constrains generation by preserving temporal correlation, frequency-domain consistency, and statistical distribution. Experiments on three SEED-family datasets—SEED, SEED-FRA, and SEED-GER—demonstrate high similarity to real EEG, with representative mean ± SD correlations of Pearson ≈ 0.84 ± 0.08/0.74 ± 0.12/0.84 ± 0.07 and Spearman ≈ 0.82 ± 0.07/0.72 ± 0.12/0.83 ± 0.08, together with low spectral divergence (KL ≈ 0.39 ± 0.15/0.41 ± 0.20/0.37 ± 0.18). Comparative analyses show consistent gains over classical GAN baselines, while ablations verify the indispensable roles of the Transformer encoder, label conditioning, and cVAE module. In downstream emotion recognition, augmentation with generated EEG raises accuracy from 86.9% to 91.8% on SEED (with analogous gains on SEED-FRA and SEED-GER), underscoring enhanced generalization and robustness. These results confirm that the proposed approach simultaneously ensures fidelity, stability, and controllability across cohorts, offering a scalable solution for affective computing and brain–computer interface applications.
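The spectral divergence (KL) reported in this abstract can be sketched for two power spectra (the smoothing constant `eps` and the assumption of non-negative spectra with positive sums are choices of this sketch, not the paper's exact estimator):

```python
import math

def spectral_kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between two power spectra after
    normalizing each to sum to one; a small eps keeps the log finite
    when a bin of q is empty.  Lower values mean closer spectra."""
    ps, qs = sum(p), sum(q)
    return sum((pi / ps) * math.log((pi / ps + eps) / (qi / qs + eps))
               for pi, qi in zip(p, q))
```

Applied bin-by-bin to real versus generated EEG spectra, values near zero indicate the frequency-domain consistency the structural loss is designed to preserve.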
22 pages, 3364 KB  
Article
Empirical Rules for Oscillation and Harmonic Approximation of Fractional Kelvin–Voigt Oscillators
by Paweł Łabędzki
Appl. Sci. 2025, 15(19), 10385; https://doi.org/10.3390/app151910385 - 24 Sep 2025
Abstract
Fractional Kelvin–Voigt (FKV) oscillators describe vibrations in viscoelastic structures with memory effects, leading to dynamics that are often more complex than those of classical harmonic oscillators. Since the harmonic oscillator is a simple, widely known, and broadly applied model, it is natural to ask under which conditions the dynamics of an FKV oscillator can be reliably approximated by a classical harmonic oscillator. In this work, we develop practical tools for such analysis by deriving approximate formulas that relate the parameters of an FKV oscillator to those of a best-fitting harmonic oscillator. The fitting is performed by minimizing a so-called divergence coefficient, a discrepancy measure that quantifies the difference between the responses of the FKV oscillator and its harmonic counterpart, using a genetic algorithm. The resulting data are then used to identify functional relationships between FKV parameters and the corresponding frequency and damping ratio of the approximating harmonic oscillator. The quality of these approximations is evaluated across a broad range of FKV parameters, leading to the identification of parameter regions where the approximation is reliable. In addition, we establish an empirical criterion that separates oscillatory from non-oscillatory FKV systems and employ statistical tools to validate both this classification and the accuracy of the proposed formulas over a wide parameter space. The methodology supports simplified modeling of viscoelastic dynamics and may contribute to applications in structural vibration analysis and material characterization.
(This article belongs to the Section Mechanical Engineering)
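A discrepancy measure in the spirit of the divergence coefficient named in this abstract can be sketched as a normalized L2 distance between sampled responses (the study's exact definition may differ; this form is an illustrative stand-in):

```python
def divergence_coefficient(x_ref, x_fit):
    """Normalized L2 discrepancy between a reference response x_ref
    (e.g. the FKV oscillator) and a candidate fit x_fit (e.g. a harmonic
    oscillator), both sampled at the same time points.  Zero means the
    responses coincide; values near one mean the fit explains little."""
    num = sum((a - b) ** 2 for a, b in zip(x_ref, x_fit))
    den = sum(a * a for a in x_ref)
    return (num / den) ** 0.5
```

A genetic algorithm, as used in the paper, would search the harmonic oscillator's frequency and damping ratio to minimize this quantity.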