Search Results (426)

Search Parameters:
Keywords = Bayesian deep learning

45 pages, 3217 KB  
Systematic Review
A Systematic Literature Review of Machine Learning Techniques for Observational Constraints in Cosmology
by Luis Rojas, Sebastián Espinoza, Esteban González, Carlos Maldonado and Fei Luo
Galaxies 2025, 13(5), 114; https://doi.org/10.3390/galaxies13050114 - 9 Oct 2025
Abstract
This paper presents a systematic literature review focusing on the application of machine learning techniques for deriving observational constraints in cosmology. The goal is to evaluate and synthesize existing research to identify effective methodologies, highlight gaps, and propose future research directions. Our review identifies several key findings: (1) Various machine learning techniques, including Bayesian neural networks, Gaussian processes, and deep learning models, have been applied to cosmological data analysis, improving parameter estimation and handling large datasets. However, models achieving significant computational speedups often exhibit worse confidence regions compared to traditional methods, emphasizing the need for future research to enhance both efficiency and measurement precision. (2) Traditional cosmological methods, such as those using Type Ia Supernovae, baryon acoustic oscillations, and cosmic microwave background data, remain fundamental, but most studies focus narrowly on specific datasets. We recommend broader dataset usage to fully validate alternative cosmological models. (3) The reviewed studies mainly address the H0 tension, leaving other cosmological challenges—such as the cosmological constant problem, warm dark matter, phantom dark energy, and others—unexplored. (4) Hybrid methodologies combining machine learning with Markov chain Monte Carlo offer promising results, particularly when machine learning techniques are used to solve differential equations, such as Einstein Boltzmann solvers, prior to Markov chain Monte Carlo models, accelerating computations while maintaining precision. (5) There is a significant need for standardized evaluation criteria and methodologies, as variability in training processes and experimental setups complicates result comparability and reproducibility. (6) Our findings confirm that deep learning models outperform traditional machine learning methods for complex, high-dimensional datasets, underscoring the importance of clear guidelines to determine when the added complexity of learning models is warranted. Full article
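
Finding (4) above describes hybrid pipelines in which a machine learning surrogate stands in for an expensive solver inside a Markov chain Monte Carlo loop. A minimal illustrative sketch of that pattern, not taken from any of the reviewed papers: a Gaussian process emulator is fitted to a toy "expensive" forward model and then queried inside an emcee sampler. All functions, parameter ranges, and numbers are hypothetical.

```python
# Illustrative sketch: Gaussian-process emulator inside an MCMC loop
# (toy stand-in for the ML + Markov chain Monte Carlo hybrids discussed above).
import numpy as np
import emcee
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def expensive_model(theta):
    """Toy stand-in for an expensive solver (e.g., an Einstein-Boltzmann code)."""
    h0, om = theta
    return h0 * np.sqrt(om + 0.1) + 0.5 * np.sin(5.0 * om)

# Train the emulator on a small design of parameter points.
design = rng.uniform([60.0, 0.1], [80.0, 0.5], size=(200, 2))
targets = np.array([expensive_model(t) for t in design])
emulator = GaussianProcessRegressor(ConstantKernel() * RBF([5.0, 0.1]), normalize_y=True)
emulator.fit(design, targets)

# Synthetic "observation" and a Gaussian log-likelihood that calls the emulator.
obs, sigma = expensive_model([70.0, 0.3]), 0.5

def log_prob(theta):
    if not (60.0 < theta[0] < 80.0 and 0.1 < theta[1] < 0.5):
        return -np.inf                      # flat prior with hard bounds
    pred = emulator.predict(theta.reshape(1, -1))[0]
    return -0.5 * ((pred - obs) / sigma) ** 2

nwalkers, ndim = 16, 2
p0 = rng.uniform([65.0, 0.2], [75.0, 0.4], size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)
print("posterior mean:", chain.mean(axis=0))
```
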
35 pages, 7130 KB  
Article
A Hybrid Framework Integrating End-to-End Deep Learning with Bayesian Inference for Maritime Navigation Risk Prediction
by Fanyu Zhou and Shengzheng Wang
J. Mar. Sci. Eng. 2025, 13(10), 1925; https://doi.org/10.3390/jmse13101925 - 9 Oct 2025
Abstract
Currently, maritime navigation safety risks—particularly those related to ship navigation—are primarily assessed through traditional rule-based methods and expert experience. However, such approaches often suffer from limited accuracy and lack real-time responsiveness. As maritime environments and operational conditions become increasingly complex, traditional techniques struggle to cope with the diversity and uncertainty of navigation scenarios. Therefore, there is an urgent need for a more intelligent and precise risk prediction method. This study proposes a ship risk prediction framework that integrates a deep learning model based on Long Short-Term Memory (LSTM) networks with Bayesian risk evaluation. The model first leverages deep neural networks to process time-series trajectory data, enabling accurate prediction of a vessel’s future positions and navigational status. Then, Bayesian inference is applied to quantitatively assess potential risks of collision and grounding by incorporating vessel motion data, environmental conditions, surrounding obstacles, and water depth information. The proposed framework combines the advantages of deep learning and Bayesian reasoning to improve the accuracy and timeliness of risk prediction. By providing real-time warnings and decision-making support, this model offers a novel solution for maritime safety management. Accurate risk forecasts enable ship crews to take precautionary measures in advance, effectively reducing the occurrence of maritime accidents. Full article
(This article belongs to the Section Ocean Engineering)
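
As a rough illustration of the Bayesian risk-evaluation step described above (not the authors' implementation), the sketch below propagates a predicted position with Gaussian uncertainty and estimates a grounding probability by Monte Carlo sampling against charted water depth; the bathymetry function, draft, and alert threshold are hypothetical.

```python
# Sketch: turn a probabilistic position forecast into a grounding risk estimate.
import numpy as np

rng = np.random.default_rng(42)

def charted_depth(x, y):
    """Hypothetical bathymetry lookup (metres); a real system would query a chart."""
    return 15.0 - 0.8 * x + 0.3 * y

def grounding_probability(mu, cov, draft_m, ukc_m=1.0, n_samples=10_000):
    """P(depth at the predicted position < draft + under-keel clearance)."""
    samples = rng.multivariate_normal(mu, cov, size=n_samples)
    depths = charted_depth(samples[:, 0], samples[:, 1])
    return float(np.mean(depths < draft_m + ukc_m))

# Example: position forecast from a trajectory model (mean and covariance in km),
# for a vessel with a 9 m draft.
mu = np.array([5.2, 1.4])
cov = np.array([[0.30, 0.05],
                [0.05, 0.20]])
p_ground = grounding_probability(mu, cov, draft_m=9.0)
print(f"estimated grounding probability: {p_ground:.3f}")
if p_ground > 0.05:           # hypothetical alerting threshold
    print("issue grounding warning")
```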

31 pages, 2358 KB  
Article
Semi-Supervised Bayesian GANs with Log-Signatures for Uncertainty-Aware Credit Card Fraud Detection
by David Hirnschall
Mathematics 2025, 13(19), 3229; https://doi.org/10.3390/math13193229 - 9 Oct 2025
Abstract
We present a novel deep generative semi-supervised framework for credit card fraud detection, formulated as a time series classification task. As financial transaction data streams grow in scale and complexity, traditional methods often require large labeled datasets and struggle with time series of irregular sampling frequencies and varying sequence lengths. To address these challenges, we extend conditional Generative Adversarial Networks (GANs) for targeted data augmentation, integrate Bayesian inference to obtain predictive distributions and quantify uncertainty, and leverage log-signatures for robust feature encoding of transaction histories. We propose a composite Wasserstein distance-based loss to align generated and real unlabeled samples while simultaneously maximizing classification accuracy on labeled data. Our approach is evaluated on the BankSim dataset, a widely used simulator for credit card transaction data, under varying proportions of labeled samples, demonstrating consistent improvements over benchmarks in both global statistical and domain-specific metrics. These findings highlight the effectiveness of GAN-driven semi-supervised learning with log-signatures for irregularly sampled time series and emphasize the importance of uncertainty-aware predictions. Full article
(This article belongs to the Special Issue Artificial Intelligence Techniques in the Financial Services Industry)
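
The abstract above uses Bayesian inference to obtain predictive distributions and quantify uncertainty for fraud scores. A generic stand-in for that idea (not the paper's GAN/log-signature pipeline) is Monte Carlo dropout: keep dropout active at inference time and read the spread of repeated forward passes as predictive uncertainty. A minimal PyTorch sketch with hypothetical dimensions:

```python
# Sketch: Monte Carlo dropout as an approximate Bayesian predictive distribution.
import torch
import torch.nn as nn

class FraudScorer(nn.Module):
    def __init__(self, n_features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=100):
    """Return mean fraud probability and its std over stochastic forward passes."""
    model.eval()
    for m in model.modules():          # re-enable dropout layers only
        if isinstance(m, nn.Dropout):
            m.train()
    preds = torch.stack([model(x) for _ in range(n_samples)])   # (S, N, 1)
    return preds.mean(dim=0), preds.std(dim=0)

model = FraudScorer()                  # untrained here; illustration only
x = torch.randn(5, 32)                 # five hypothetical encoded transactions
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(-1), std.squeeze(-1))
```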

21 pages, 5895 KB  
Article
Intelligent 3D Potato Cutting Simulation System Based on Multi-View Images and Point Cloud Fusion
by Ruize Xu, Chen Chen, Fanyi Liu and Shouyong Xie
Agriculture 2025, 15(19), 2088; https://doi.org/10.3390/agriculture15192088 - 7 Oct 2025
Abstract
The quality of seed pieces is crucial for potato planting. Each seed piece should contain viable potato eyes and maintain a uniform size for mechanized planting. However, existing intelligent methods are limited by a single view, making it difficult to satisfy both requirements simultaneously. To address this problem, we present an intelligent 3D potato cutting simulation system. A sparse 3D point cloud of the potato is reconstructed from multi-perspective images, which are acquired with a single-camera rotating platform. Subsequently, the 2D positions of potato eyes in each image are detected using deep learning, from which their 3D positions are mapped via back-projection and a clustering algorithm. Finally, the cutting paths are optimized by a Bayesian optimizer, which incorporates both the potato’s volume and the locations of its eyes, and generates cutting schemes suitable for different potato size categories. Experimental results showed that the system achieved a mean absolute percentage error of 2.16% (95% CI: 1.60–2.73%) for potato volume estimation, a potato eye detection precision of 98%, and a recall of 94%. The optimized cutting plans showed a volume coefficient of variation below 0.10 and avoided damage to the detected potato eyes, producing seed pieces that each contained potato eyes. This work demonstrates that the system can effectively utilize the detected potato eye information to obtain seed pieces containing potato eyes and having uniform size. The proposed system provides a feasible pathway for high-precision automated seed potato cutting. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
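
The cutting-path step above is a Bayesian optimization over candidate cuts. A toy sketch of that pattern with scikit-optimize's gp_minimize, using made-up stand-ins for the volume-balance and eye-clearance terms rather than the authors' objective:

```python
# Sketch: Bayesian optimization of two cut positions along a potato's long axis,
# balancing piece volumes while keeping cuts away from detected eyes.
import numpy as np
from skopt import gp_minimize

length = 10.0                                   # hypothetical potato length (cm)
eye_positions = np.array([2.0, 5.5, 8.2])       # hypothetical eye coordinates (cm)

def piece_volumes(cuts):
    """Approximate piece volumes as proportional to segment lengths (toy model)."""
    edges = np.sort(np.concatenate(([0.0], cuts, [length])))
    return np.diff(edges)

def objective(cuts):
    vols = piece_volumes(np.asarray(cuts))
    cv = vols.std() / vols.mean()               # volume coefficient of variation
    # Penalize any cut that passes too close to an eye (hypothetical 0.5 cm margin).
    min_gap = min(abs(c - e) for c in cuts for e in eye_positions)
    penalty = max(0.0, 0.5 - min_gap) * 10.0
    return cv + penalty

res = gp_minimize(objective,
                  dimensions=[(1.0, 9.0), (1.0, 9.0)],   # two cut positions (cm)
                  n_calls=40, random_state=0)
print("best cuts:", sorted(res.x), "objective:", round(res.fun, 4))
```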

25 pages, 26694 KB  
Article
Research on Wind Field Correction Method Integrating Position Information and Proxy Divergence
by Jianhong Gan, Mengjia Zhang, Cen Gao, Peiyang Wei, Zhibin Li and Chunjiang Wu
Biomimetics 2025, 10(10), 651; https://doi.org/10.3390/biomimetics10100651 - 1 Oct 2025
Viewed by 229
Abstract
The accuracy of numerical model outputs strongly depends on the quality of the initial wind field, yet ground observation data are typically sparse and provide incomplete spatial coverage. More importantly, many current mainstream correction models rely on reanalysis grid datasets like ERA5 as the true value, which relies on interpolation calculation, which directly affects the accuracy of the correction results. To address these issues, we propose a new deep learning model, PPWNet. The model directly uses sparse and discretely distributed observation data as the true value, which integrates observation point positions and a physical consistency term to achieve a high-precision corrected wind field. The model design is inspired by biological intelligence. First, observation point positions are encoded as input and observation values are included in the loss function. Second, a parallel dual-branch DenseInception network is employed to extract multi-scale grid features, simulating the hierarchical processing of the biological visual system. Meanwhile, PPWNet references the PointNet architecture and introduces an attention mechanism to efficiently extract features from sparse and irregular observation positions. This mechanism reflects the selective focus of cognitive functions. Furthermore, this paper incorporates physical knowledge into the model optimization process by adding a learned physical consistency term to the loss function, ensuring that the corrected results not only approximate the observations but also adhere to physical laws. Finally, hyperparameters are automatically tuned using the Bayesian TPE algorithm. Experiments demonstrate that PPWNet outperforms both traditional and existing deep learning methods. It reduces the MAE by 38.65% and the RMSE by 28.93%. The corrected wind field shows better agreement with observations in both wind speed and direction, confirming the effectiveness of incorporating position information and a physics-informed approach into deep learning-based wind field correction. Full article
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)
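
The hyperparameter step named above ("Bayesian TPE algorithm") is a tree-structured Parzen estimator search. A minimal hyperopt sketch with a placeholder objective standing in for a PPWNet training run; the search ranges, the physics-consistency weight, and the synthetic loss surface are all hypothetical.

```python
# Sketch: Tree-structured Parzen Estimator (TPE) hyperparameter search with hyperopt.
import numpy as np
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

space = {
    "lr":     hp.loguniform("lr", np.log(1e-4), np.log(1e-2)),
    "batch":  hp.choice("batch", [16, 32, 64]),
    "phys_w": hp.uniform("phys_w", 0.0, 1.0),   # weight of a physics-consistency term
}

def objective(params):
    # Placeholder for training the correction model and returning validation error;
    # a synthetic bowl-shaped surface stands in for the real validation RMSE.
    rmse = (np.log10(params["lr"]) + 3.0) ** 2 + (params["phys_w"] - 0.4) ** 2 \
           + 0.01 * params["batch"] / 64.0
    return {"loss": rmse, "status": STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50, trials=trials)
print("best hyperparameters:", best)
```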

26 pages, 5143 KB  
Article
SymOpt-CNSVR: A Novel Prediction Model Based on Symmetric Optimization for Delivery Duration Forecasting
by Kun Qi, Wangyu Wu and Yao Ni
Symmetry 2025, 17(10), 1608; https://doi.org/10.3390/sym17101608 - 28 Sep 2025
Viewed by 326
Abstract
Accurate prediction of food delivery time is crucial for enhancing operational efficiency and customer satisfaction in real-world logistics and intelligent dispatch systems. To address this challenge, this study proposes a novel symmetric optimization prediction framework, termed SymOpt-CNSVR. The framework is designed to leverage the strengths of both deep learning and statistical learning models in a complementary architecture. It employs a Convolutional Neural Network (CNN) to extract and assess the importance of multi-feature data. An Enhanced Superb Fairy-Wren Optimization Algorithm (ESFOA) is utilized to optimize the diverse hyperparameters of the CNN, forming an optimal adaptive feature extraction structure. The significant features identified by the CNN are then fed into a Support Vector Regression (SVR) model, whose hyperparameters are optimized using Bayesian optimization, for final prediction. This combination reduces the overall parameter search time and incorporates probabilistic reasoning. Extensive experimental evaluations demonstrate the superior performance of the proposed SymOpt-CNSVR model. It achieves outstanding results with an R2 of 0.9269, MAE of 3.0582, RMSE of 4.1947, and MSLE of 0.1114, outperforming a range of benchmark and state-of-the-art models. Specifically, the MAE was reduced from 4.713 (KNN) and 5.2676 (BiLSTM) to 3.0582, and the RMSE decreased from 6.9073 (KNN) and 6.9194 (BiLSTM) to 4.1947. The results confirm the framework’s powerful capability and robustness in handling high-dimensional delivery time prediction tasks. Full article
(This article belongs to the Section Computer)
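
The SVR stage described above is tuned with Bayesian optimization. A compact stand-alone sketch of that piece using scikit-optimize's BayesSearchCV on synthetic data; the CNN feature-selection stage and the ESFOA optimizer are not reproduced here, and the search ranges are hypothetical.

```python
# Sketch: Bayesian hyperparameter search over an SVR, as a stand-in for the
# SVR stage of the framework described above (synthetic data, hypothetical ranges).
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from skopt import BayesSearchCV
from skopt.space import Real

X, y = make_regression(n_samples=500, n_features=12, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

search = BayesSearchCV(
    estimator=SVR(kernel="rbf"),
    search_spaces={
        "C":       Real(1e-1, 1e3, prior="log-uniform"),
        "gamma":   Real(1e-4, 1e0, prior="log-uniform"),
        "epsilon": Real(1e-3, 1e0, prior="log-uniform"),
    },
    n_iter=32, cv=3, random_state=0,
)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("held-out R2:", round(search.score(X_te, y_te), 4))
```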

39 pages, 10741 KB  
Article
Modeling the Dynamics of the Jebel Zaghouan Karst Aquifer Using Artificial Neural Networks: Toward Improved Management of Vulnerable Water Resources
by Emna Gargouri-Ellouze, Tegawende Arnaud Ouedraogo, Fairouz Slama, Jean-Denis Taupin, Nicolas Patris and Rachida Bouhlila
Hydrology 2025, 12(10), 250; https://doi.org/10.3390/hydrology12100250 - 26 Sep 2025
Viewed by 370
Abstract
Karst aquifers are critical yet vulnerable water resources in semi-arid Mediterranean regions, where structural complexity, nonlinearity, and delayed hydrological responses pose significant modeling challenges under increasing climatic and anthropogenic pressures. This study examines the Jebel Zaghouan aquifer in northeastern Tunisia, aiming to simulate its natural discharge dynamics prior to intensive exploitation (1915–1944). Given the fragmented nature of historical datasets, meteorological inputs (rainfall, temperature, and pressure) were reconstructed using a data recovery process combining linear interpolation and statistical distribution fitting. The hyperparameters of the artificial neural network (ANN) model were optimized through a Bayesian search. Three deep learning architectures—Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM)—were trained to model spring discharge. Model performance was evaluated using Kling–Gupta Efficiency (KGE′), Nash–Sutcliffe Efficiency (NSE), and R2 metrics. Hydrodynamic characterization revealed moderate variability and delayed discharge response, while isotopic analyses (δ18O, δ2H, 3H, 14C) confirmed a dual recharge regime from both modern and older waters. LSTM outperformed other models at the weekly scale (KGE′ = 0.62; NSE = 0.48; R2 = 0.68), effectively capturing memory effects. This study demonstrates the value of combining historical data rescue, ANN modeling, and hydrogeological insight to support sustainable groundwater management in data-scarce karst systems. Full article
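
For reference, the two hydrological skill scores quoted above (NSE and the modified Kling–Gupta efficiency, KGE′) can be computed as follows; this is a generic sketch of the standard formulas on toy data, not code from the study.

```python
# Sketch: Nash-Sutcliffe efficiency and modified Kling-Gupta efficiency (KGE').
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge_prime(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]                               # correlation term
    beta = sim.mean() / obs.mean()                                # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())   # variability (CV) ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

# Toy example with a short synthetic discharge series.
obs = np.array([1.0, 1.2, 1.5, 2.0, 1.8, 1.4, 1.1, 0.9])
sim = np.array([1.1, 1.3, 1.4, 1.9, 1.7, 1.5, 1.0, 1.0])
print("NSE :", round(nse(obs, sim), 3))
print("KGE':", round(kge_prime(obs, sim), 3))
```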

34 pages, 11521 KB  
Article
Explainable AI-Driven 1D-CNN with Efficient Wireless Communication System Integration for Multimodal Diabetes Prediction
by Radwa Ahmed Osman
AI 2025, 6(10), 243; https://doi.org/10.3390/ai6100243 - 25 Sep 2025
Viewed by 551
Abstract
The early detection of diabetes risk and effective management of patient data are critical for avoiding serious consequences and improving treatment success. This research describes a two-part architecture that combines an energy-efficient wireless communication technology with an interpretable deep learning model for diabetes categorization. In Phase 1, a unique wireless communication model is created to assure the accurate transfer of real-time patient data from wearable devices to medical centers. Using Lagrange optimization, the model identifies the best transmission distance and power needs, lowering energy usage while preserving communication dependability. This contribution is especially essential since effective data transport is a necessary condition for continuous monitoring in large-scale healthcare systems. In Phase 2, the transmitted multimodal clinical, genetic, and lifestyle data are evaluated using a one-dimensional Convolutional Neural Network (1D-CNN) with Bayesian hyperparameter tuning. The model beat traditional deep learning architectures like LSTM and GRU. To improve interpretability and clinical acceptance, SHAP and LIME were used to find global and patient-specific predictors. This approach tackles technological and medicinal difficulties by integrating energy-efficient wireless communication with interpretable predictive modeling. The system ensures dependable data transfer, strong predictive performance, and transparent decision support, boosting trust in AI-assisted healthcare and enabling individualized diabetes control. Full article
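
The interpretability step above relies on SHAP (and LIME) attributions. A minimal, model-agnostic SHAP sketch on a generic tabular classifier, standing in for the paper's 1D-CNN; the data, features, and model are synthetic stand-ins.

```python
# Sketch: model-agnostic SHAP attributions for a tabular risk classifier
# (a RandomForest stands in for the 1D-CNN described above; data are synthetic).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

f = lambda data: model.predict_proba(data)[:, 1]        # probability of the positive class
background = shap.sample(X, 50, random_state=0)         # background set for the explainer
explainer = shap.KernelExplainer(f, background)
shap_values = explainer.shap_values(X[:10], nsamples=200)   # local attributions, shape (10, 8)

# Global importance as mean |SHAP| per feature; per-row values give patient-level reasons.
global_importance = np.abs(shap_values).mean(axis=0)
print("mean |SHAP| per feature:", np.round(global_importance, 4))
```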

22 pages, 7360 KB  
Article
Evaporation Duct Height Short-Term Prediction Based on Bayesian Hyperparameter Optimization
by Ye-Wen Wu, Yu Zhang, Zhi-Qiang Fan, Han-Yi Chen, Sheng-Lin Zhang and Yu-Qiang Zhang
Atmosphere 2025, 16(10), 1126; https://doi.org/10.3390/atmos16101126 - 25 Sep 2025
Viewed by 256
Abstract
Accurately predicting evaporation duct height (EDH) is a crucial technology for enabling over-the-horizon communication and radar detection at sea. To address the issues of overfitting in neural network training and the low efficiency of manual hyperparameter tuning in conventional evaporation duct height (EDH) prediction, this study proposes the application of Bayesian optimization (BO)-based deep learning techniques to EDH forecasting. Specifically, we developed a novel BO–LSTM hybrid model to enhance the predictive accuracy of EDH. First, based on the CFSv2 reanalysis data from 2011 to 2020, we employed the NPS model to calculate the hourly evaporation duct height (EDH) over the Yongshu Reef region in the South China Sea. Then, the Mann–Kendall (M–K) method and the Augmented Dickey–Fuller (ADF) test were employed to analyze the overall trend and stationarity of the EDH time series in the Yongshu Reef area. The results indicate a significant declining trend in EDH in recent years, and the time series is stationary. This suggests that the data can enhance the convergence speed and prediction stability of neural network models. Finally, the BO–LSTM model was utilized for 24 h short-term forecasting of the EDH time series. The results demonstrate that BO–LSTM can effectively predict EDH values for the next 24 h, with the prediction accuracy gradually decreasing as the forecast horizon extends. Specifically, the 1 h forecast achieves a root mean square error (RMSE) of 0.592 m, a mean absolute error (MAE) of 0.407 m, and a model goodness-of-fit (R2) of 0.961. In contrast, the 24 h forecast shows an RMSE of 2.393 m, MAE of 1.808 m, and R2 of only 0.362. A comparative analysis between BO–LSTM and LSTM reveals that BO–LSTM exhibits marginally superior accuracy over LSTM for 1–15 h forecasts, with its performance advantage becoming increasingly pronounced for longer forecast horizons. This confirms that the Bayesian optimization-based hyperparameter tuning method significantly enhances model prediction accuracy. Full article
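
The trend and stationarity checks named above (Mann–Kendall and the ADF test) are easy to reproduce generically: the sketch below implements the basic (no-ties) Mann–Kendall Z statistic and calls statsmodels' ADF test on a synthetic series standing in for the EDH time series.

```python
# Sketch: Mann-Kendall trend Z statistic (no tie correction) and an ADF stationarity test.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def mann_kendall_z(x):
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0        # variance without tie correction
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

rng = np.random.default_rng(1)
series = 12.0 - 0.005 * np.arange(500) + rng.normal(0, 1.0, 500)   # synthetic "EDH" series

z = mann_kendall_z(series)
adf_stat, p_value = adfuller(series)[:2]
print(f"Mann-Kendall Z = {z:.2f}  (|Z| > 1.96 -> significant trend at the 5% level)")
print(f"ADF statistic = {adf_stat:.2f}, p-value = {p_value:.4f}  (p < 0.05 -> stationary)")
```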

23 pages, 924 KB  
Article
Energy and Water Management in Smart Buildings Using Spiking Neural Networks: A Low-Power, Event-Driven Approach for Adaptive Control and Anomaly Detection
by Malek Alrashidi, Sami Mnasri, Maha Alqabli, Mansoor Alghamdi, Michael Short, Sean Williams, Nashwan Dawood, Ibrahim S. Alkhazi and Majed Abdullah Alrowaily
Energies 2025, 18(19), 5089; https://doi.org/10.3390/en18195089 - 24 Sep 2025
Viewed by 336
Abstract
The growing demand for energy efficiency and sustainability in smart buildings necessitates advanced AI-driven methods for adaptive control and predictive maintenance. This study explores the application of Spiking Neural Networks (SNNs) to event-driven processing, real-time anomaly detection, and edge computing-based optimization in building automation. In contrast to conventional deep learning models, SNNs provide low-power, high-efficiency computation by mimicking biological neural processes, making them particularly suitable for real-time, edge-deployed decision-making. The proposed SNN based on Reward-Modulated Spike-Timing-Dependent Plasticity (STDP) and Bayesian Optimization (BO) integrates occupancy and ambient condition monitoring to dynamically manage assets such as appliances while simultaneously identifying anomalies for predictive maintenance. Experimental evaluations show that our BO-STDP-SNN framework achieves notable reductions in both energy consumption by 27.8% and power requirements by 70%, while delivering superior accuracy in anomaly detection compared with CNN, RNN, and LSTM based baselines. These results demonstrate the potential of SNNs to enhance the efficiency and resilience of smart building systems, reduce operational costs, and support long-term sustainability through low-latency, event-driven intelligence. Full article
(This article belongs to the Special Issue Digital Engineering for Future Smart Cities)
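
The learning rule named above (reward-modulated STDP) pairs a local spike-timing eligibility trace with a global reward signal. A schematic numpy sketch of that update for a single synapse, with hypothetical constants and spike times; it is not the authors' SNN implementation.

```python
# Sketch: reward-modulated STDP for one synapse. Pre/post spike pairings build an
# eligibility trace via the STDP window; a delayed reward converts it into a weight change.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # STDP amplitudes (hypothetical)
TAU = 20.0                        # STDP time constant (ms)
ETA = 0.5                         # learning rate for the reward-modulated update

def stdp_window(dt):
    """dt = t_post - t_pre: potentiation if pre precedes post, depression otherwise."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)

def eligibility_trace(pre_spikes, post_spikes):
    """Sum the STDP window over all pre/post spike pairings (all-to-all pairing)."""
    return sum(stdp_window(t_post - t_pre) for t_pre in pre_spikes for t_post in post_spikes)

pre_spikes = [10.0, 45.0, 80.0]           # ms, hypothetical
post_spikes = [14.0, 60.0, 83.0]          # ms, hypothetical
reward = +1.0                             # e.g., energy saved without a comfort violation

w = 0.30
trace = eligibility_trace(pre_spikes, post_spikes)
w += ETA * reward * trace                 # reward gates whether the pairing is reinforced
print(f"eligibility trace = {trace:.4f}, updated weight = {w:.4f}")
```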

27 pages, 6430 KB  
Article
Bayesian–Geometric Fusion: A Probabilistic Framework for Robust Line Feature Matching
by Chenyang Zhang, Yufan Ge and Shuo Gu
Electronics 2025, 14(19), 3783; https://doi.org/10.3390/electronics14193783 - 24 Sep 2025
Viewed by 139
Abstract
Line feature matching is a fundamental and extensively studied subject in the fields of photogrammetry and computer vision. Traditional methods, which rely on handcrafted descriptors and distance-based filtering outliers, frequently encounter challenges related to robustness and a high incidence of outliers. While some approaches leverage point features to assist line feature matching by establishing the invariant geometric constraints between points and lines, this typically results in a considerable computational load. In order to overcome these limitations, we introduce a novel Bayesian posterior probability framework for line matching that incorporates three geometric constraints: the distance between line feature endpoints, midpoint distance, and angular consistency. Our approach initially characterizes inter-image geometric relationships using Fourier representation. Subsequently, we formulate the posterior probability distributions for the distance constraint and the uniform distribution based on the constraint of angular consistency. By calculating the joint probability distribution under three geometric constraints, robust line feature matches are iteratively optimized through the Expectation–Maximization (EM) algorithm. Comprehensive experiments confirm the effectiveness of our approach: (i) it outperforms state-of-the-art (including deep learning-based) algorithms in match count and accuracy across common scenarios; (ii) it exhibits superior robustness to rotation, illumination variation, and motion blur compared to descriptor-based methods; and (iii) it notably reduces computational overhead in comparison to algorithms that involve point-assisted line matching. Full article
(This article belongs to the Section Circuit and Signal Processing)
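
The matching stage described above is, at its core, an EM loop over a mixture of an inlier model (Gaussian on the geometric residuals) and a uniform outlier model. A generic numpy sketch of that loop, with synthetic one-dimensional residuals standing in for the endpoint, midpoint, and angle constraints:

```python
# Sketch: EM over candidate line matches with a Gaussian inlier model and a
# uniform outlier model (synthetic 1-D residuals stand in for the geometric constraints).
import numpy as np

rng = np.random.default_rng(0)
residuals = np.concatenate([np.abs(rng.normal(0.0, 1.0, 80)),   # inlier matches
                            rng.uniform(0.0, 50.0, 40)])        # outlier matches
outlier_range = 50.0                                            # support of the uniform model

sigma, pi_in = 5.0, 0.5                                         # initial guesses
for _ in range(50):
    # E-step: posterior probability that each candidate match is an inlier.
    p_in = pi_in * np.exp(-0.5 * (residuals / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p_out = (1.0 - pi_in) / outlier_range
    gamma = p_in / (p_in + p_out)
    # M-step: re-estimate the noise scale and the inlier fraction.
    sigma = np.sqrt(np.sum(gamma * residuals ** 2) / np.sum(gamma))
    pi_in = gamma.mean()

matches = gamma > 0.5
print(f"sigma = {sigma:.2f}, inlier fraction = {pi_in:.2f}, kept matches = {matches.sum()}")
```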

28 pages, 6622 KB  
Article
Bayesian Spatio-Temporal Trajectory Prediction and Conflict Alerting in Terminal Area
by Yangyang Li, Yong Tian, Xiaoxuan Xie, Bo Zhi and Lili Wan
Aerospace 2025, 12(9), 855; https://doi.org/10.3390/aerospace12090855 - 22 Sep 2025
Viewed by 403
Abstract
Precise trajectory prediction in the airspace of a high-density terminal area (TMA) is crucial for Trajectory Based Operations (TBO), but frequent aircraft interactions and maneuvering behaviors can introduce significant uncertainties. Most existing approaches use deterministic deep learning models that lack uncertainty quantification and explicit spatial awareness. To address this gap, we propose the BST-Transformer, a Bayesian spatio-temporal deep learning framework that produces probabilistic multi-step trajectory forecasts and supports probabilistic conflict alerting. The framework first extracts temporal and spatial interaction features via spatio-temporal attention encoders and then uses a Bayesian decoder with variational inference to yield trajectory distributions. Potential conflicts are evaluated by Monte Carlo sampling of the predictive distributions to produce conflict probabilities and alarm decisions. Experiments based on real SSR data from the Guangzhou TMA show that this model performs exceptionally well in improving prediction accuracy by reducing MADE 60.3% relative to a deterministic ST-Transformer with analogous reductions in horizontal and vertical errors (MADHE and MADVE), quantifying uncertainty and significantly enhancing the system’s ability to identify safety risks, and providing strong support for intelligent air traffic management with uncertainty perception capabilities. Full article
(This article belongs to the Section Air Traffic and Transportation)
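
The alerting step described above converts predictive distributions into conflict probabilities by Monte Carlo sampling. A simplified numpy sketch for two aircraft with per-step Gaussian position forecasts, using the standard 5 NM / 1000 ft separation minima; the forecast means, uncertainties, and alert threshold are hypothetical.

```python
# Sketch: Monte Carlo conflict probability from probabilistic multi-step forecasts
# for two aircraft (per-step Gaussian means/stds; 5 NM horizontal, 1000 ft vertical minima).
import numpy as np

rng = np.random.default_rng(7)
steps, n_samples = 12, 5000

def sample_trajectories(mean_xyz, std_xyz):
    """mean/std arrays of shape (steps, 3): x, y in NM and altitude in ft."""
    return rng.normal(mean_xyz, std_xyz, size=(n_samples, steps, 3))

# Hypothetical converging forecasts (straight-line means, growing uncertainty).
t = np.arange(steps)[:, None]
mean_a = np.hstack([t * 1.0, t * 0.0, np.full((steps, 1), 10000.0)])
mean_b = np.hstack([12.0 - t * 1.0, t * 0.1, np.full((steps, 1), 10400.0)])
std = np.hstack([0.2 + 0.05 * t, 0.2 + 0.05 * t, np.full((steps, 1), 150.0)])

traj_a = sample_trajectories(mean_a, std)
traj_b = sample_trajectories(mean_b, std)

horiz = np.linalg.norm(traj_a[..., :2] - traj_b[..., :2], axis=-1)    # NM
vert = np.abs(traj_a[..., 2] - traj_b[..., 2])                        # ft
conflict_any_step = np.any((horiz < 5.0) & (vert < 1000.0), axis=1)
p_conflict = conflict_any_step.mean()

print(f"estimated conflict probability: {p_conflict:.3f}")
print("raise alert" if p_conflict > 0.05 else "no alert")             # hypothetical threshold
```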

32 pages, 28470 KB  
Article
A Bearing Fault Detection Method Based on EMDWS-CNT-BO
by Dayou Cui, Zhaoyan Xie, Zhixue Wang, Xiaowei Li and Sihao Wu
Machines 2025, 13(9), 865; https://doi.org/10.3390/machines13090865 - 17 Sep 2025
Viewed by 370
Abstract
Accurate diagnosis of bearing faults is crucial for ensuring the safe and reliable operation of rotating machinery. To enhance the recognition accuracy of rolling bearings under nonlinear and non-stationary vibration conditions, this study proposes an integrated approach combining a multi-stage signal preprocessing strategy, termed EMDWS (Empirical Mode Decomposition with Wavelet denoising and SMOTE), with a hybrid deep learning architecture that integrates a Convolutional Neural Network (CNN) and a Transformer model, hereinafter referred to as CNT (CNN-Transformer). The method first applies empirical mode decomposition (EMD) in conjunction with wavelet denoising to enhance the representation of non-stationary fault features. Subsequently, the synthetic minority oversampling technique (SMOTE) is employed to address the issue of class imbalance in the dataset. A hybrid CNN-Transformer model is constructed by integrating convolutional neural networks and Transformer modules, enabling the extraction of both local and global signal characteristics. Furthermore, Bayesian optimization is applied to fine-tune the model’s hyperparameters, thereby enhancing both the efficiency and robustness of the model. Experimental results demonstrate that the proposed method achieves a high identification accuracy of 99.83%, indicating its effectiveness in distinguishing various bearing fault types. Full article
(This article belongs to the Section Machines Testing and Maintenance)
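
Two of the preprocessing steps above, wavelet denoising and SMOTE oversampling, are shown generically below; the EMD stage and the CNN-Transformer classifier are omitted, and the wavelet choice, thresholds, and data are illustrative only.

```python
# Sketch: soft-threshold wavelet denoising of a vibration segment plus SMOTE
# oversampling of an imbalanced feature set (EMD and the CNN-Transformer omitted).
import numpy as np
import pywt
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(3)

# Synthetic vibration segment: a fault-like impulse train buried in noise.
t = np.linspace(0, 1, 4096)
signal = 0.4 * np.sin(2 * np.pi * 157 * t) * (np.sin(2 * np.pi * 13 * t) > 0.95)
noisy = signal + rng.normal(0, 0.2, t.size)

# Wavelet denoising with a universal soft threshold estimated from the finest scale.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
uthresh = sigma * np.sqrt(2 * np.log(noisy.size))
coeffs[1:] = [pywt.threshold(c, uthresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

# SMOTE on an imbalanced (hypothetical) feature matrix: 500 healthy vs. 40 faulty samples.
X = np.vstack([rng.normal(0, 1, (500, 8)), rng.normal(1.5, 1, (40, 8))])
y = np.array([0] * 500 + [1] * 40)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("class counts before:", np.bincount(y), "after:", np.bincount(y_res))
```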

19 pages, 3935 KB  
Article
Integrating Bayesian Networks and Numerical Simulation for Risk Assessment of Deep Foundation Pit Clusters
by Chun Huang, Zixin Zheng, Yanlin Li and Wenjie Li
Buildings 2025, 15(18), 3355; https://doi.org/10.3390/buildings15183355 - 16 Sep 2025
Viewed by 240
Abstract
With rapid urbanization, deep foundation pit clusters (DFPCs) have become increasingly common, introducing complex and significant construction risks. To improve risk evaluation under such complexity and uncertainty, this study proposes a hierarchical assessment framework. First, fault tree analysis is used to systematically identify and decompose DFPC-related risks. Second, a Bayesian network (BN) is constructed based on the fault tree to model interactions among risks, and structural learning techniques are applied to optimize the BN structure. An analytic hierarchy process (AHP) is then used to assign prior probabilities, enabling the identification of critical risk factors. To validate the framework, numerical simulations are used to analyze the impact of support failures on pit stability. The results show that mid-span support failures have the greatest influence. Two DFPC layouts are simulated to assess the effects of failure location and pit spacing. When the spacing is 0.10H (H = excavation depth), failures in a subpit’s mid-support cause the most severe impact on adjacent pits. These results confirm the framework’s effectiveness in evaluating DFPC risk. Full article
(This article belongs to the Section Building Structures)
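
As a schematic of the fault-tree-derived Bayesian network described above, the sketch below hand-codes a three-node chain (support failure → excessive deformation → pit instability) with made-up prior and conditional probabilities standing in for AHP-elicited values, and computes marginal and conditional risk by direct enumeration.

```python
# Sketch: a three-node Bayesian network evaluated by direct enumeration.
# Structure: SupportFailure -> Deformation -> Instability (probabilities are hypothetical,
# standing in for AHP-elicited priors, not values from the study).
from itertools import product

p_support = {1: 0.05, 0: 0.95}                       # P(SupportFailure)
p_deform = {1: {1: 0.70, 0: 0.30},                   # P(Deformation | SupportFailure)
            0: {1: 0.10, 0: 0.90}}
p_instab = {1: {1: 0.60, 0: 0.40},                   # P(Instability | Deformation)
            0: {1: 0.02, 0: 0.98}}

def joint(s, d, i):
    return p_support[s] * p_deform[s][d] * p_instab[d][i]

def prob_instability(evidence=None):
    """Marginal P(Instability=1), optionally conditioned on evidence, by enumeration."""
    evidence = evidence or {}
    num = den = 0.0
    for s, d, i in product([0, 1], repeat=3):
        if any(val != {"s": s, "d": d, "i": i}[k] for k, val in evidence.items()):
            continue
        p = joint(s, d, i)
        den += p
        if i == 1:
            num += p
    return num / den

print("P(instability)                   =", round(prob_instability(), 4))
print("P(instability | support failure) =", round(prob_instability({"s": 1}), 4))
```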

29 pages, 4506 KB  
Article
Adaptive Deep Belief Networks and LightGBM-Based Hybrid Fault Diagnostics for SCADA-Managed PV Systems: A Real-World Case Study
by Karl Kull, Muhammad Amir Khan, Bilal Asad, Muhammad Usman Naseer, Ants Kallaste and Toomas Vaimann
Electronics 2025, 14(18), 3649; https://doi.org/10.3390/electronics14183649 - 15 Sep 2025
Viewed by 632
Abstract
Photovoltaic (PV) systems are increasingly integral to global energy solutions, but their long-term reliability is challenged by various operational faults. In this article, we propose an advanced hybrid diagnostic framework combining a Deep Belief Network (DBN) for feature pattern extraction and a Light Gradient Boosting Machine (LightGBM) for classification to detect and diagnose PV panel faults. The proposed model is trained and validated on the QASP PV Fault Detection Dataset, a real-time SCADA-based dataset collected from 255 W panels at the Quaid-e-Azam Solar 100 MW Power Plant (QASP), Pakistan’s largest solar facility. The dataset encompasses seven classes: Healthy, Open Circuit, Photovoltaic Ground (PVG), Partial Shading, Busbar, Soiling, and Hotspot Faults. The DBN captures complex non-linear relationships in SCADA parameters such as DC voltage, DC current, irradiance, inverter power, module temperature, and performance ratio, while LightGBM ensures high accuracy in classifying fault types. The proposed model is trained and evaluated on a real-world SCADA-based dataset comprising 139,295 samples, with a 70:30 split for training and testing, ensuring robust generalization across diverse PV fault conditions. Experimental results demonstrate the robustness and generalization capabilities of the proposed hybrid (DBN–LightGBM) model, outperforming conventional machine learning methods and showing an accuracy of 98.21% classification accuracy, 98.0% macro-F1 score, and significantly reduced training time compared to Transformer and CNN-LSTM baselines. This study contributes to a reliable and scalable AI-driven solution for real-time PV fault monitoring, offering practical implications for large-scale solar plant maintenance and operational efficiency. Full article
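
As a rough, scaled-down analogue of the DBN → LightGBM pipeline above (not the authors' model), the sketch below stacks a single Bernoulli RBM as an unsupervised feature extractor in front of a LightGBM classifier on synthetic, SCADA-like features with seven classes.

```python
# Sketch: RBM feature extraction feeding a LightGBM classifier, as a scaled-down
# stand-in for the DBN + LightGBM hybrid described above (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score, f1_score
from lightgbm import LGBMClassifier

# Synthetic stand-in for SCADA features (voltage, current, irradiance, ...), 7 fault classes.
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           n_classes=7, n_clusters_per_class=1, random_state=0)
X = MinMaxScaler().fit_transform(X)            # RBM expects inputs in [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Unsupervised feature extraction (one RBM layer; a DBN would stack several).
rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
H_tr = rbm.fit_transform(X_tr)
H_te = rbm.transform(X_te)

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=0)
clf.fit(H_tr, y_tr)
pred = clf.predict(H_te)
print("accuracy:", round(accuracy_score(y_te, pred), 4))
print("macro-F1:", round(f1_score(y_te, pred, average="macro"), 4))
```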
