Search Results (2,690)

Search Parameters:
Keywords = difference of Gaussian

21 pages, 5289 KiB  
Article
Research on the Transformer Failure Diagnosis Method Based on Fluorescence Spectroscopy Analysis and SBOA Optimized BPNN
by Xueqing Chen, Dacheng Li and Anjing Wang
Sensors 2025, 25(7), 2296; https://doi.org/10.3390/s25072296 - 4 Apr 2025
Abstract
The representative dissolved gas analysis (DGA) method for transformer fault detection has notable shortcomings in early fault diagnosis, which restricts the application and development of fault detection technology in the transformer field. To diagnose early failures in time, fluorescence analysis technology has recently been applied to transformer failure diagnosis, compensating for the shortcomings of DGA. However, most existing studies that combine fluorescence analysis of insulating oil with intelligent algorithms provide only a qualitative diagnosis of fault types; quantitative fault diagnosis of the same oil sample has not been reported. In this study, a typical fault simulation experiment on interval discharge in insulating oil was carried out with new Karamay (Xinjiang) oil, and fluorescence spectroscopy data of the insulating oil under different discharge durations were collected. To eliminate the influence of noise on the spectral analysis and boost diagnostic accuracy, a variety of spectral preprocessing algorithms, such as Savitzky–Golay (SG), moving median, moving mean, Gaussian, locally weighted linear regression smoothing (Lowess), locally weighted quadratic regression smoothing (Loess), and their robust variants (RLowess and RLoess), are used to smooth and denoise the collected spectral data. The dimensionality reduction techniques of principal component analysis (PCA), kernel principal component analysis (KPCA), and multidimensional scaling (MDS) are then applied for further processing. Based on the preprocessed and dimensionally reduced data, transformer failure diagnosis models based on backpropagation neural networks (BPNNs) optimized by the particle swarm optimization (PSO) and secretary bird optimization (SBOA) algorithms are established to quantitatively analyze the state of the insulating oil and predict the duration of transformer failure.
Comprehensive evaluation and comparison of the models with mathematical evaluation metrics showed that the Loess-MDS-SBOA-BP model performs best, with its coefficient of determination (R²) increasing to 99.711% and a root mean square error (RMSE) of only 0.27144; the other evaluation indicators were also optimal. The experimental results show that the proposed failure diagnosis model can accurately diagnose the failure time, with predictions closest to the true values, laying a foundation for further development of transformer failure diagnosis.
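The preprocessing chain described above (smoothing followed by dimensionality reduction) can be sketched with synthetic data. Everything below is illustrative rather than the authors' code: the peak position, noise level, window length, and polynomial order are made-up parameters; the Savitzky–Golay weights are derived from a local least-squares fit, and PCA is computed directly via SVD.

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Least-squares Savitzky-Golay smoothing weights for an odd window."""
    half = window // 2
    x = np.arange(-half, half + 1.0)
    A = np.vander(x, polyorder + 1, increasing=True)  # columns: 1, x, x^2, ...
    return np.linalg.pinv(A)[0]  # row that evaluates the fitted polynomial at 0

rng = np.random.default_rng(0)
wav = np.linspace(350, 600, 200)                         # wavelengths, nm (hypothetical)
clean = np.exp(-((wav - 450.0) ** 2) / (2 * 30.0 ** 2))  # one synthetic emission peak
spectra = clean + 0.05 * rng.standard_normal((50, 200))  # 50 noisy spectra

# Savitzky-Golay smoothing: window 11, quadratic polynomial
c = savgol_coeffs(11, 2)
smoothed = np.apply_along_axis(lambda s: np.convolve(s, c, mode="same"), 1, spectra)

# PCA by SVD on mean-centered data; keep the first three component scores
X = smoothed - smoothed.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T
```

The `scores` matrix is what a downstream regressor (a BPNN in the paper) would consume in place of the raw 200-channel spectra.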
(This article belongs to the Special Issue Spectral Detection Technology, Sensors and Instruments, 2nd Edition)

25 pages, 2874 KiB  
Article
The Combined Decision Problem: “Pull” vs. “Push” and the Degree of Centralization of Warehousing in the Field of Physical Distribution with a Special Focus on the Polish Market
by Dariusz Milewski
Appl. Sci. 2025, 15(7), 3970; https://doi.org/10.3390/app15073970 - 3 Apr 2025
Abstract
This article concerns the efficiency of distribution systems under different strategies ("Pull" or "Push") and different sizes of the distribution network, in terms of when products produced by the manufacturing plant are sent to distribution warehouses. The article hypothesized that the choice of how to replenish stocks in these warehouses ("Pull" or "Push") and the choice of the degree of centralization of the distribution network (number of warehouses) are two decision problems that should be considered together. This hypothesis was confirmed. A simulation model was developed to run simulations for different scenarios (different demand distributions, Gaussian or Gamma; different demand fluctuations; and the timeliness of replenishing inventories in warehouses). With more expensive goods and greater sales fluctuations, there was a certain tendency towards centralizing storage and using the Pull strategy. The choice of strategy had a significant impact on the costs of logistics processes and on the profitability of enterprises. The cost savings ranged from 17% to 54%. The average share of distribution costs in the sales value was 6%; in some cases, it was over 10% (the level of profitability of industrial enterprises in Poland). Choosing the right strategy could, in some cases, change profits by 20%. In most cases, the most cost-effective strategy was a flexible Pull system with centralized storage, which is consistent with real-life business cases.

29 pages, 1725 KiB  
Article
Finger Vein Recognition Based on Unsupervised Spiking Convolutional Neural Network with Adaptive Firing Threshold
by Li Yang, Qiong Yao and Xiang Xu
Sensors 2025, 25(7), 2279; https://doi.org/10.3390/s25072279 - 3 Apr 2025
Abstract
Currently, finger vein recognition (FVR) stands as a pioneering biometric technology, with convolutional neural networks (CNNs) and Transformers, among other advanced deep neural networks (DNNs), consistently pushing the boundaries of recognition accuracy. Nevertheless, these DNNs are inherently characterized by static, continuous-valued neuron activations, necessitating intricate network architectures and extensive parameter training to enhance performance. To address these challenges, we introduce an adaptive firing threshold-based spiking neural network (ATSNN) for FVR. ATSNN leverages discrete spike encodings to transform static finger vein images into spike trains with spatio-temporal dynamic features. Initially, Gabor and difference of Gaussian (DoG) filters are employed to convert image pixel intensities into spike latency encodings. Subsequently, these spike encodings are fed into the ATSNN, where spiking features are extracted using biologically plausible local learning rules. Our proposed ATSNN dynamically adjusts the firing thresholds of neurons based on average potential tensors, thereby enabling adaptive modulation of the neuronal input–output response and enhancing network robustness. Ultimately, the spiking features with the earliest emission times are retained and used for classifier training via a support vector machine (SVM). Extensive experiments across three benchmark finger vein datasets reveal that our ATSNN model not only achieves remarkable recognition accuracy but also excels in reduced parameter count and model complexity, surpassing several existing FVR methods. Furthermore, the sparse and event-driven nature of our ATSNN renders it more biologically plausible than traditional DNNs.
(This article belongs to the Section Biosensors)
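The DoG-plus-latency front end can be illustrated in a few lines. This is a rough sketch, not the authors' pipeline: the image, the two Gaussian sigmas, and the `T_MAX` time window are made-up parameters; the rule is simply that a stronger band-pass response fires earlier.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding (minimal, numpy-only)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    p = np.pad(img, r, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, rows)

rng = np.random.default_rng(1)
img = rng.random((32, 32))
img[12:20, 12:20] += 2.0                 # a bright blob for the filter to respond to

# difference of Gaussian: band-pass response (sigmas are illustrative)
dog = gaussian_blur(img, 1.0) - gaussian_blur(img, 2.0)

# latency coding over T_MAX steps: stronger response -> earlier spike
resp = np.clip(dog, 0, None)
resp /= resp.max()
T_MAX = 20
latency = np.round(T_MAX * (1.0 - resp)).astype(int)
```

The pixel with the strongest DoG response is assigned latency 0, i.e. it spikes first, which is the ordering the spiking layers then learn from.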
18 pages, 11121 KiB  
Article
Separation of Body and Surface Wave Background Noise and Passive Seismic Interferometry Based on Synchrosqueezed Continuous Wavelet Transform
by Xiaolong Li, Fengjiao Zhang, Zhuo Xu and Xiangbo Gong
Appl. Sci. 2025, 15(7), 3917; https://doi.org/10.3390/app15073917 - 2 Apr 2025
Abstract
Passive seismic interferometry is a technique that reconstructs virtual seismic records using ambient noise, such as random noise or microseisms. The ambient noise in passive seismic data contains rich information, with surface waves being useful for the inversion of shallow subsurface structures, while body waves are employed for deep-layer inversion. However, due to the low signal-to-noise ratio in actual passive seismic data, different types of seismic waves mix together, making them difficult to distinguish. This issue not only affects the dispersion measurements of surface waves but also interferes with the imaging accuracy of reflected waves. Therefore, it is crucial to extract the target waves from passive source data. In practical passive seismic data, body wave noise and surface wave noise often overlap in frequency bands, making it challenging to separate them effectively using conventional methods. The synchrosqueezed continuous wavelet transform, as a high-resolution time–frequency analysis method, can effectively capture the variations in frequency of passive seismic data. This study performs time–frequency analysis of passive seismic data using the synchrosqueezed continuous wavelet transform. It combines wavelet thresholding and Gaussian filtering to separate body wave noise from surface wave noise. Furthermore, wavelet cross-correlation is applied to separately obtain high-quality virtual seismic records for both surface waves and body waves.
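The underlying time–frequency map can be sketched with a plain continuous wavelet transform; the synchrosqueezing step (which reassigns energy to sharpen the frequency ridges) is omitted here. This is a minimal complex-Morlet CWT on a toy low-frequency tone, with an assumed center frequency `w0` and scale grid:

```python
import numpy as np

def morlet_cwt(sig, scales, w0=6.0):
    """Minimal complex-Morlet CWT (admissibility constants omitted)."""
    n = len(sig)
    t = np.arange(n) - n // 2
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi = np.exp(1j * w0 * t / s - (t / s) ** 2 / 2) / np.sqrt(s)
        W[i] = np.convolve(sig, np.conj(psi)[::-1], mode="same")
    return W

# a "surface-wave-like" low-frequency tone: period 20 samples
sig = np.sin(2 * np.pi * np.arange(512) / 20.0)
scales = np.arange(2.0, 40.0)
W = morlet_cwt(sig, scales)
ridge = np.abs(W)[:, 100:-100].mean(axis=1)   # average magnitude away from edges
peak_scale = scales[ridge.argmax()]
```

For a tone of angular frequency ω, the ridge peaks near scale w0/ω (about 19 here); thresholding `|W|` per scale band and inverting is the wavelet-thresholding idea the abstract refers to.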

22 pages, 5387 KiB  
Article
Landslide Segmentation in High-Resolution Remote Sensing Images: The Van–UPerAttnSeg Framework with Multi-Scale Feature Enhancement
by Chang Li, Quan Zou, Guoqing Li and Wenyang Yu
Remote Sens. 2025, 17(7), 1265; https://doi.org/10.3390/rs17071265 - 2 Apr 2025
Abstract
Among geological disasters, landslides are a common and extremely destructive disaster. Their rapid identification is crucial for disaster analysis and response. However, traditional methods of landslide recognition mainly rely on visual interpretation and manual recognition of remote sensing images, which are time-consuming and susceptible to subjective factors, thereby limiting the accuracy and efficiency of recognition. To overcome these limitations for high-resolution remote sensing images, the proposed method first uses an online equalized sampling and enhancement strategy to ensure data balance and diversity. It then adopts an encoder–decoder structure, where the encoder is a visual attention network (Van) that focuses on extracting discriminative features of different scales from landslide images. The decoder consists of a pyramid pooling module (PPM) and a feature pyramid network (FPN), combined with a convolutional block attention module (CBAM). Through this structure, the model can effectively integrate features of different scales, achieving precise localization and recognition of landslide areas. In addition, this study introduces a sliding window algorithm based on Gaussian fusion as a post-processing step, which refines the prediction of landslide edges in high-resolution remote sensing images and preserves the context reasoning ability of the model. On the validation set, the method achieved a Dice score of 84.75%, demonstrating high accuracy and efficiency. This result demonstrates the importance and effectiveness of the method in improving the accuracy and efficiency of landslide recognition, providing strong technical support for the analysis of and response to geological disasters.
(This article belongs to the Topic Remote Sensing and Geological Disasters)
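Gaussian-fusion sliding-window post-processing amounts to blending overlapping patch predictions with a center-weighted window. A minimal sketch follows; the patch size, stride, and σ fraction are illustrative, and `predict` stands in for the segmentation network (here the identity, as a sanity check):

```python
import numpy as np

def gaussian_window(size, sigma_frac=0.25):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-ax ** 2 / (2 * (sigma_frac * size) ** 2))
    return np.outer(g, g)                      # 2-D weight, highest at the center

def fuse(predict, image, patch=64, stride=32):
    """Blend overlapping patch predictions with Gaussian weights."""
    H, W = image.shape
    acc = np.zeros((H, W))
    wsum = np.zeros((H, W))
    win = gaussian_window(patch)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            acc[y:y + patch, x:x + patch] += win * predict(image[y:y + patch, x:x + patch])
            wsum[y:y + patch, x:x + patch] += win
    return acc / np.maximum(wsum, 1e-12)

img = np.random.default_rng(2).random((128, 128))
fused = fuse(lambda p: p, img)                 # identity "network" sanity check
```

Because the Gaussian window down-weights patch borders, edge artifacts from each window are suppressed where neighboring predictions overlap.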

19 pages, 566 KiB  
Article
Bayesian FDOA Positioning with Correlated Measurement Noise
by Wenjun Zhang, Xi Li, Yi Liu, Le Yang and Fucheng Guo
Remote Sens. 2025, 17(7), 1266; https://doi.org/10.3390/rs17071266 - 2 Apr 2025
Abstract
In this paper, the problem of source localization using only frequency difference of arrival (FDOA) measurements is considered. A new FDOA-only localization technique is developed to determine the position of a narrow-band source. In this scenario, time difference of arrival (TDOA) measurements are not normally useful because they may have large errors due to the received signal having a small bandwidth. Conventional localization algorithms such as the two-stage weighted least squares (TSWLS) method, which jointly exploits TDOA and FDOA measurements for positioning, are thus no longer applicable, since they suffer from the thresholding effect and yield meaningless localization results. FDOA-only localization is non-trivial, mainly due to the high nonlinearity inherent in the FDOA equations. Even with two FDOA measurements available, FDOA-only localization still requires finding the roots of a high-order polynomial. For practical scenarios with more sensors, a divide-and-conquer (DAC) approach may be applied, but the positioning solution is suboptimal because it ignores the correlation between FDOA measurements. To address these challenges, we propose a Bayesian approach for FDOA-only source positioning. The developed method, referred to as the Gaussian division method (GDM), first converts one FDOA measurement into a Gaussian mixture model (GMM) that specifies the prior distribution of the source position. Next, the GDM assumes uncorrelated FDOA measurements and fuses the remaining FDOAs sequentially with nonlinear filtering techniques to obtain an initial positioning result. The GDM then refines the solution by compensating for the information loss incurred by treating the FDOAs as uncorrelated when they are in fact correlated. Extensive simulations demonstrate that the proposed algorithm outperforms existing methods and can attain the Cramér–Rao lower bound (CRLB) accuracy under moderate noise levels.

17 pages, 21270 KiB  
Article
Enhancing the Anti-Interference Capability of Orbital Angular Momentum Beams Generated by an Ultra-Large-Scale Metasurface
by Boli Su, Ke Guan and An Qian
Appl. Sci. 2025, 15(7), 3900; https://doi.org/10.3390/app15073900 - 2 Apr 2025
Abstract
Orbital angular momentum (OAM) beams have been extensively researched due to their ability to enhance the channel capacity of microwave systems. Near-field calculations were completed for metasurfaces of different sizes, both with and without additive Gaussian noise. In the presence of noise, the vortex electric field generated by a small vortex-wave metasurface may be disturbed and overwhelmed, whereas a large vortex-wave metasurface exhibits superior anti-noise capability. This anti-interference characteristic was verified by conducting full-wave simulations on metasurfaces with l = −3 and l = −5. Based on the OAM spectral analysis, the mode purity of the generated vortex waves was calculated in detail. The simulation results indicate that a large-scale metasurface exhibits stronger anti-interference capability, which may inform the design and study of vortex wave metasurfaces in the future.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
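OAM mode purity is typically estimated by projecting the field sampled on a ring around the beam axis onto the helical harmonics exp(ilφ). A minimal sketch (the noise level, sampling density, and candidate mode range are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.linspace(0, 2 * np.pi, 256, endpoint=False)
l_true = -3

# field on a ring around the beam axis: ideal vortex plus complex Gaussian noise
field = np.exp(1j * l_true * phi)
field = field + 0.3 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

# OAM spectrum: project onto the helical harmonics exp(i*l*phi)
modes = np.arange(-8, 9)
coeffs = np.array([(field * np.exp(-1j * l * phi)).mean() for l in modes])
purity = np.abs(coeffs) ** 2 / np.sum(np.abs(coeffs) ** 2)
dominant = modes[purity.argmax()]
```

The averaging over many azimuthal samples is what gives larger apertures their noise robustness: uncorrelated noise contributes little to any single projection coefficient.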

17 pages, 2736 KiB  
Article
Using Machine Learning for Lunar Mineralogy-I: Hyperspectral Imaging of Volcanic Samples
by Fatemeh Fazel Hesar, Mojtaba Raouf, Peyman Soltani, Bernard Foing, Michiel J. A. de Dood and Fons J. Verbeek
Universe 2025, 11(4), 117; https://doi.org/10.3390/universe11040117 - 2 Apr 2025
Abstract
This study examines the mineral composition of volcanic samples similar to lunar materials, focusing on olivine and pyroxene. Using hyperspectral imaging (HSI) from 400 to 1000 nm, we created data cubes to analyze the reflectance characteristics of samples from Vulcano, a volcanically active island in the Aeolian archipelago, north of Sicily, Italy, categorizing them into nine regions of interest (ROIs) and analyzing spectral data for each. We applied various unsupervised clustering algorithms, including K-Means, hierarchical clustering, Gaussian mixture models (GMMs), and spectral clustering, to classify the spectral profiles. Principal component analysis (PCA) revealed distinct spectral signatures associated with specific minerals, facilitating precise identification. The clustering performance varied by region, with K-Means achieving the highest silhouette score of 0.47, whereas GMMs performed poorly with a score of only 0.25. Non-negative matrix factorization (NMF) aided in identifying similarities among clusters across different methods and reference spectra for olivine and pyroxene. Hierarchical clustering emerged as the most reliable technique, achieving a 94% similarity with the olivine spectrum in one sample, whereas GMMs exhibited notable variability. Overall, the analysis indicated that both the hierarchical and K-Means methods yielded lower errors in total measurements, with K-Means demonstrating superior performance in estimated dispersion and clustering. Additionally, GMMs showed a higher root mean square error (RMSE) compared to the other models. The RMSE analysis confirmed K-Means as the most consistent algorithm across all samples, suggesting a predominance of olivine in the Vulcano region relative to pyroxene. This predominance is likely linked to historical formation conditions similar to volcanic processes on the Moon, where olivine-rich compositions are common in ancient lava flows and impact-melt rocks. 
These findings provide a deeper context for mineral distribution and formation processes in volcanic landscapes.
(This article belongs to the Special Issue Planetary Radar Astronomy)
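Clustering spectral profiles can be sketched with a tiny Lloyd's-algorithm K-Means (the paper presumably used library implementations; this is a from-scratch stand-in, and the two "mineral-like" reflectance curves below are toy shapes, not real olivine/pyroxene spectra):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(3)
wav = np.linspace(400, 1000, 120)                  # nm, matching the HSI range
curve_a = 0.6 - 0.2 * np.exp(-((wav - 900) ** 2) / (2 * 60 ** 2))  # toy "olivine-like"
curve_b = 0.5 - 0.2 * np.exp(-((wav - 950) ** 2) / (2 * 40 ** 2))  # toy "pyroxene-like"
A = curve_a + 0.01 * rng.standard_normal((30, 120))
B = curve_b + 0.01 * rng.standard_normal((30, 120))
X = np.vstack([A, B])
labels, centers = kmeans(X, 2)
```

With well-separated reflectance curves the two groups fall cleanly into the two clusters; silhouette scores or NMF factors, as in the abstract, would then compare how crisp that separation is.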

23 pages, 5491 KiB  
Article
Data Uncertainty (DU)-Former: An Episodic Memory Electroencephalography Classification Model for Pre- and Post-Training Assessment
by Xianglong Wan, Zheyuan Liu, Yiduo Yao, Wan Zuha Wan Hasan, Tiange Liu, Dingna Duan, Xueguang Xie and Dong Wen
Bioengineering 2025, 12(4), 359; https://doi.org/10.3390/bioengineering12040359 - 30 Mar 2025
Abstract
Episodic memory training plays a crucial role in cognitive enhancement, particularly in addressing age-related memory decline and cognitive disorders. Accurately assessing the effectiveness of such training requires reliable methods to capture changes in memory function. Electroencephalography (EEG) offers an objective way of evaluating neural activity before and after training. However, EEG classification in episodic memory assessment remains challenging due to the variability in brain responses, individual differences, and the complex temporal–spatial dynamics of neural signals. Traditional EEG classification methods, such as Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs), face limitations when applied to episodic memory training assessment, struggling to extract meaningful features and handle the inherent uncertainty in EEG signals. To address these issues, this paper introduces DU-former, which improves feature extraction and enhances the model's robustness against noise. Specifically, the data uncertainty (DU) module explicitly handles uncertainty by modeling input features as Gaussian distributions within a reparameterization module. One branch predicts the mean through convolution and normalization, while the other estimates the variance via average pooling and normalization. These values are then used for Gaussian reparameterization, enabling the model to learn more robust feature representations and to remain stable on complex or noisy data. To validate the method, an episodic memory training experiment was designed with 17 participants who underwent 28 days of training. Behavioral data showed a significant reduction in task completion time, and object recognition accuracy improved, as indicated by the higher proportion of correctly identified target items in the episodic memory testing game.
Furthermore, EEG data collected before and after the training were used to evaluate DU-former's performance, demonstrating significant improvements in classification accuracy. This paper contributes by introducing uncertainty learning and proposing a more efficient and robust method for EEG signal classification, demonstrating superior performance in episodic memory assessment.
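The two-branch mean/variance scheme with Gaussian reparameterization can be sketched in numpy. The branch computations below are simple stand-ins (the paper uses convolution/pooling layers), but the reparameterization step z = μ + σ·ε is the standard trick that keeps sampling differentiable:

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16))            # a batch of intermediate features

def mean_branch(x):
    # stand-in for "convolution + normalization": a centered feature map
    return x - x.mean(axis=1, keepdims=True)

def log_var_branch(x):
    # stand-in for "average pooling + normalization": one log-variance per sample
    v = x.var(axis=1, keepdims=True) + 1e-6
    return np.log(v) * np.ones_like(x)

mu = mean_branch(feat)
log_var = log_var_branch(feat)

# Gaussian reparameterization: z = mu + sigma * eps, eps ~ N(0, I),
# so gradients can flow through mu and log_var during training
eps = rng.standard_normal(feat.shape)
z = mu + np.exp(0.5 * log_var) * eps
```

Predicting log-variance rather than variance keeps σ = exp(log_var / 2) positive without any clipping, which is why the formulation is numerically stable on noisy inputs.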

25 pages, 16103 KiB  
Article
Compressive Response and Damage Distribution of Fiber-Reinforced Concrete with Various Saturation Degrees
by Lu Feng and Xudong Chen
Materials 2025, 18(7), 1555; https://doi.org/10.3390/ma18071555 - 29 Mar 2025
Abstract
Tunnels frequently experience issues such as lining spalling and water leakage, making the stability of tunnel support critical for engineering safety. Given that tunnels are subjected to various ground stress disturbances and groundwater influences, it is essential to investigate the mechanical properties and damage mechanisms of tunnel support materials under different loading paths and saturation levels. Fiber-reinforced concrete (FRC) is widely used for tunnel support; in this study, uniaxial compression tests were conducted on FRC with different fiber contents (0%, 0.5%, 1.0%) under varying loading paths (monotonic, pre-peak cyclic loading, full cyclic loading). The stress–strain behavior, volumetric strain, and elastic modulus were analyzed. The results indicate that increasing fiber content enhances strength and stiffness, while higher water content leads to a significant water-weakening effect, reducing both parameters. To classify crack types, the logistic regression (LR) algorithm is employed based on the AF-RA features, identifying tensile damage (which accounts for 60–80%) as more dominant than shear damage. Using this classification, AE event distributions reveal the spatial characteristics of internal damage in FRC. Gaussian process regression (GPR) is further applied to predict the AE parameters, enabling the assessment of the tensile and shear damage responses in FRC. The location and magnitude of the predicted wave crest indicate extreme damage levels, which become more pronounced under higher saturation conditions. A damage constitutive model is proposed to characterize the post-peak softening behavior of FRC. The numerical verification demonstrates good agreement with the experimental results, confirming the model's capability to describe the softening behavior of FRC under various fiber and water contents.
(This article belongs to the Special Issue Advanced Characterization of Fiber-Reinforced Composite Materials)
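Gaussian process regression of a smooth curve reduces to a few linear-algebra steps. A minimal RBF-kernel sketch follows; the sine target, length-scale, and noise level are illustrative stand-ins for the paper's AE-parameter curves, and hyperparameter optimization is omitted:

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

# toy 1-D regression standing in for AE-parameter prediction
x_train = np.linspace(0.0, 5.0, 20)
y_train = np.sin(x_train)
noise = 1e-6                                 # jitter / observation noise variance

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

x_test = np.linspace(0.0, 5.0, 50)
K_star = rbf(x_test, x_train)
mean = K_star @ alpha                        # GP posterior mean
# posterior variance, with unit prior variance k(x, x) = 1
var = 1.0 - np.sum(K_star * np.linalg.solve(K, K_star.T).T, axis=1)
```

The posterior variance shrinks near training points and grows between them, which is what makes GPR useful for flagging where a predicted damage curve is trustworthy.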

16 pages, 1374 KiB  
Article
Quantifying Deviations from Gaussianity with Application to Flight Delay Distributions
by Felipe Olivares and Massimiliano Zanin
Entropy 2025, 27(4), 354; https://doi.org/10.3390/e27040354 - 28 Mar 2025
Abstract
We propose a novel approach for quantifying deviations from Gaussianity by leveraging the Jensen–Shannon distance. Using stable distributions as a flexible framework, we analyze the effects of skewness and heavy tails in synthetic sequences. We employ phase-randomized surrogates as Gaussian references to systematically evaluate the statistical distance between this reference and stable distributions. Our methodology is validated on real flight delay datasets from major airports in Europe and the United States, revealing significant deviations from Gaussianity, particularly at high-traffic airports. These results highlight systematic differences in air traffic management strategies between the two geographic regions.
(This article belongs to the Special Issue Ordinal Patterns-Based Tools and Their Applications)
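The surrogate-plus-distance recipe can be sketched directly: randomize the Fourier phases of a sequence (which preserves its power spectrum while Gaussianizing its distribution) and measure the Jensen–Shannon distance between the histograms of the sequence and its surrogate. The exponential "delay-like" sample, bin count, and sequence length below are made-up illustration choices, not the paper's setup:

```python
import numpy as np

def phase_surrogate(x, rng):
    """Fourier phase randomization: keeps the power spectrum, Gaussianizes the signal."""
    F = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(F))
    phases[0] = 0.0                            # keep the mean (DC term) real
    phases[-1] = 0.0                           # keep the Nyquist term real
    return np.fft.irfft(np.abs(F) * np.exp(1j * phases), len(x))

def js_distance(p, q):
    """Jensen-Shannon distance (base 2, so bounded by 1) between two histograms."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def deviation_from_gaussianity(x, rng, bins=60):
    s = phase_surrogate(x, rng)
    lo, hi = min(x.min(), s.min()), max(x.max(), s.max())
    px, _ = np.histogram(x, bins=bins, range=(lo, hi))
    ps, _ = np.histogram(s, bins=bins, range=(lo, hi))
    return js_distance(px.astype(float), ps.astype(float))

rng = np.random.default_rng(0)
n = 4096
skewed = rng.exponential(size=n)               # strongly non-Gaussian "delay-like" data
gauss = rng.standard_normal(n)
d_skewed = deviation_from_gaussianity(skewed, rng)
d_gauss = deviation_from_gaussianity(gauss, rng)
```

A Gaussian sequence sits close to its own surrogate (the residual distance is finite-sample noise), while a skewed sequence does not, which is the quantity the abstract uses to rank airports.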

16 pages, 1104 KiB  
Article
Multi-Channel Underwater Acoustic Signal Analysis Using Improved Multivariate Multiscale Sample Entropy
by Jing Zhou, Yaan Li and Mingzhou Wang
J. Mar. Sci. Eng. 2025, 13(4), 675; https://doi.org/10.3390/jmse13040675 - 27 Mar 2025
Abstract
Underwater acoustic signals typically exhibit non-Gaussian, non-stationary, and nonlinear characteristics. When processing real-world underwater acoustic signals, traditional multivariate entropy algorithms often struggle to simultaneously ensure stability and extract cross-channel information. To address these issues, the improved multivariate multiscale sample entropy (IMMSE) algorithm is proposed, which extracts the complexity of multi-channel data, enabling a more comprehensive and stable representation of the dynamic characteristics of complex nonlinear systems. This paper explores the optimal parameter selection range for the IMMSE algorithm and compares its sensitivity to noise and computational efficiency with those of traditional multivariate entropy algorithms. The results demonstrate that IMMSE outperforms its counterparts in both stability and computational efficiency. Analysis of various types of ship-radiated noise further confirms IMMSE's superior stability in handling complex underwater acoustic signals. Moreover, IMMSE's feature extraction ability enables more accurate discrimination between different signal types. Finally, the paper presents data processing results in mechanical fault diagnosis, underscoring the broad applicability of IMMSE.
(This article belongs to the Special Issue Navigation and Detection Fusion for Autonomous Underwater Vehicles)
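The building blocks of any multiscale sample entropy variant are (i) univariate sample entropy and (ii) coarse-graining across scales. The sketch below shows only these basics, not the multivariate IMMSE algorithm itself; the embedding dimension m = 2 and tolerance r = 0.2·std are the conventional defaults:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Univariate sample entropy (Chebyshev distance, tolerance r = r_frac * std)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    n = len(x)
    def matches(length):
        t = np.array([x[i:i + length] for i in range(n - m)])  # aligned template count
        c = 0
        for i in range(len(t) - 1):
            c += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return c
    b, a = matches(m), matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def coarse_grain(x, scale):
    """Non-overlapping averaging used for the multiscale extension."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)                    # irregular, white-noise-like signal
tone = np.sin(2 * np.pi * np.arange(1000) / 25.0)    # regular, predictable signal
se_noise = sample_entropy(noise)
se_tone = sample_entropy(tone)
```

Regular signals score low (length-(m+1) matches are almost as frequent as length-m matches) and noise scores high; multivariate variants such as IMMSE extend the template matching across channels.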

25 pages, 14143 KiB  
Article
U-Turn Diffusion
by Hamidreza Behjoo and Michael Chertkov
Entropy 2025, 27(4), 343; https://doi.org/10.3390/e27040343 - 26 Mar 2025
Abstract
We investigate diffusion models generating synthetic samples from the probability distribution represented by the ground truth (GT) samples. We focus on how GT sample information is encoded in the score function (SF), computed (not simulated) from the Wiener–Ito linear forward process in the artificial time t ∈ [0, T], and then used as a nonlinear drift in the simulated Wiener–Ito reverse process with t ∈ [T, 0]. We propose U-Turn diffusion, an augmentation of a pre-trained diffusion model, which shortens the forward and reverse processes to t ∈ [0, Tu] and t ∈ [Tu, 0]. The U-Turn reverse process is initialized at Tu with a sample from the probability distribution of the forward process (initialized at t = 0 with a GT sample), ensuring a detailed-balance relation between the shortened forward and reverse processes. Our experiments on the class-conditioned SF of the ImageNet dataset and the multi-class, single SF of the CIFAR-10 dataset reveal a critical Memorization Time Tm, beyond which generated samples diverge from the GT sample used to initialize the U-Turn scheme, and a Speciation Time Ts, where for Tu > Ts > Tm, samples begin representing different classes. We further examine the role of SF nonlinearity through a Gaussian Test, comparing empirical and Gaussian-approximated U-Turn auto-correlation functions and showing that the SF becomes effectively affine for t > Ts and approximately affine for t ∈ [Tm, Ts].
(This article belongs to the Special Issue The Statistical Physics of Generative Diffusion Models)
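The forward/reverse construction described in the abstract can be illustrated with a one-dimensional toy sketch. Everything below is an illustrative assumption, not the authors' implementation: a pure Wiener forward process stands in for the linear forward process, the exact Gaussian-mixture score over GT points replaces a learned SF, and Euler–Maruyama integrates the reverse process from the U-Turn time Tu back to 0.

```python
import numpy as np

def score(x, t, gt):
    # Exact score grad_x log p_t(x) for the forward density
    # p_t(x) = (1/N) * sum_i N(x; x_i, t), a Gaussian mixture over GT points.
    logw = -((x - gt) ** 2) / (2.0 * t)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return float(np.sum(w * (gt - x)) / t)

def u_turn(x0, gt, Tu, n_steps=200, seed=0):
    # Shortened forward/reverse ("U-Turn") run for a 1-D toy model.
    rng = np.random.default_rng(seed)
    # Forward: pure Wiener process dx = dW, run from a GT sample to time Tu.
    x = x0 + np.sqrt(Tu) * rng.standard_normal()
    # Reverse: Euler-Maruyama on the reverse-time SDE
    # dx = score(x, t) dt + dW-bar, integrating t from Tu back to 0.
    dt = Tu / n_steps
    t = Tu
    for _ in range(n_steps):
        t_eff = max(t, 1e-4)  # regularize the score blow-up as t -> 0
        x = x + score(x, t_eff, gt) * dt + np.sqrt(dt) * rng.standard_normal()
        t -= dt
    return x
```

For a small Tu the reverse run returns to (a neighborhood of) the initializing GT sample, the memorization regime the abstract associates with Tu < Tm.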
14 pages, 3155 KiB  
Article
Variation in Electron Radiation Properties Under the Action of Chirped Pulses in Nonlinear Thomson Scattering
by Jiachen Li, Junyuan Xu, Zi Wang, Qianmin Zheng, Juncheng Yan and Youwei Tian
Appl. Sci. 2025, 15(7), 3619; https://doi.org/10.3390/app15073619 - 26 Mar 2025
Viewed by 63
Abstract
This paper primarily focuses on the changes in electron motion trajectories, radiation spatial distribution, radiation spectra, and time spectra under the combined influence of pulse width and chirp parameters. It discusses the motion characteristics of electrons in chirped, circularly polarized Gaussian laser pulses with different chirp parameters and pulse widths. This study examines the asymmetry in radiation distribution, the increase in peak power, the temporal shift in main-peak generation, and the coupling effects of spatial distribution under the combined action of pulse width and chirp parameters. It also explores the similarity between chirp effects and pulse broadening. Overall, by studying in depth the characteristics of electrons under varying chirp parameters and pulse widths, this paper provides an important reference for further understanding and applying chirped pulses in optics and physics.
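The chirp/pulse-broadening similarity mentioned above can be made concrete with a linearly chirped Gaussian pulse. The names, dimensionless units, and parameter values below (`chirped_pulse`, chirp parameter `b`, width `tau`) are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def chirped_pulse(t, a0=1.0, omega0=1.0, b=0.02, tau=10.0):
    # Linearly chirped Gaussian pulse in dimensionless units:
    # E(t) = a0 * exp(-(t/tau)^2) * cos(omega0*t + b*t^2); b = 0 is unchirped.
    return a0 * np.exp(-((t / tau) ** 2)) * np.cos(omega0 * t + b * t ** 2)

def instantaneous_frequency(t, omega0=1.0, b=0.02):
    # Derivative of the phase omega0*t + b*t^2: the carrier frequency is
    # swept linearly across the pulse, omega(t) = omega0 + 2*b*t.
    return omega0 + 2.0 * b * t

def rms_bandwidth(b, tau, omega0=1.0, n=4001):
    # Numerical RMS spectral width of the pulse; a nonzero chirp broadens
    # the spectrum, mirroring the effect of pulse broadening.
    t = np.linspace(-5 * tau, 5 * tau, n)
    spec = np.abs(np.fft.rfft(chirped_pulse(t, omega0=omega0, b=b, tau=tau))) ** 2
    w = 2 * np.pi * np.fft.rfftfreq(n, d=t[1] - t[0])
    mean = np.sum(w * spec) / np.sum(spec)
    return float(np.sqrt(np.sum((w - mean) ** 2 * spec) / np.sum(spec)))
```

Comparing `rms_bandwidth` at `b = 0` and `b > 0` shows the spectral broadening introduced by the chirp, the coupling with pulse width then follows from varying `tau`.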
34 pages, 1976 KiB  
Article
A Comparative Study of COVID-19 Dynamics in Major Turkish Cities Using Fractional Advection–Diffusion–Reaction Equations
by Larissa Margareta Batrancea, Dilara Altan Koç, Ömer Akgüller, Mehmet Ali Balcı and Anca Nichita
Fractal Fract. 2025, 9(4), 201; https://doi.org/10.3390/fractalfract9040201 - 25 Mar 2025
Viewed by 88
Abstract
Robust epidemiological models are essential for managing COVID-19, especially in diverse urban settings. In this study, we present a fractional advection–diffusion–reaction model to analyze COVID-19 spread in three major Turkish cities: Ankara, Istanbul, and Izmir. The model employs a Caputo-type time-fractional derivative, with its order dynamically determined by the Hurst exponent, capturing the memory effects of disease transmission. A nonlinear reaction term models self-reinforcing viral spread, while a Gaussian forcing term simulates public health interventions with adjustable spatial and temporal parameters. We solve the resulting fractional PDE using an implicit finite difference scheme that ensures numerical stability. Calibration with weekly case data from February 2021 to March 2022 reveals that Ankara has a Hurst exponent of 0.4222, Istanbul 0.1932, and Izmir 0.6085, indicating varied persistence characteristics. Distribution fitting shows that a Weibull model best represents the data for Ankara and Istanbul, whereas a two-component normal mixture suits Izmir. Sensitivity analysis confirms that key parameters, including the fractional order and forcing duration, critically influence outcomes. These findings provide valuable insights for public health policy and urban planning, offering a tailored forecasting tool for epidemic management.
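A Caputo-type time-fractional derivative of order α ∈ (0, 1), like the one driving this model, is commonly discretized with the L1 scheme inside an implicit finite-difference solver. As a minimal sketch (the function name `caputo_l1` and grid conventions are assumptions for illustration, not taken from the paper):

```python
import math

def caputo_l1(f_vals, dt, alpha):
    # L1 finite-difference approximation of the Caputo fractional derivative
    # of order alpha in (0, 1) at the last grid point t_n = n*dt, given
    # samples f_vals = [f(0), f(dt), ..., f(n*dt)] on a uniform grid.
    n = len(f_vals) - 1
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    acc = 0.0
    for j in range(n):
        # Standard L1 weights: b_j = (j+1)^(1-alpha) - j^(1-alpha),
        # multiplying the backward difference f(t_{n-j}) - f(t_{n-j-1}).
        b = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)
        acc += b * (f_vals[n - j] - f_vals[n - j - 1])
    return c * acc
```

The scheme is exact for linear f, since the Caputo derivative of f(t) = t is t^(1-α)/Γ(2-α), which makes a convenient sanity check; the city-specific orders (e.g. α tied to Ankara's Hurst exponent 0.4222) would simply be passed in as `alpha`.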