Search Results (879)

Search Parameters:
Keywords = Bayesian Neural Network

38 pages, 5479 KB  
Review
Analog Design and Machine Learning: A Review
by Konstantinos G. Liakos and Fotis Plessas
Electronics 2025, 14(17), 3541; https://doi.org/10.3390/electronics14173541 - 5 Sep 2025
Viewed by 314
Abstract
Analog and mixed-signal integrated circuit (AMS-IC) designs remain a critical yet challenging aspect within electronic design automation (EDA), primarily due to the inherent complexity, nonlinear behavior, and increasing variability associated with advanced semiconductor technologies. Traditional manual and intuition-driven methodologies for AMS-ICs design, which rely heavily on iterative simulation loops and extensive designer experience, face significant limitations concerning efficiency, scalability, and reproducibility. Recently, machine learning (ML) techniques have emerged as powerful tools to address these challenges, offering significant enhancements in modeling, abstraction, optimization, and automation capabilities for AMS-ICs. This review systematically examines recent advancements in ML-driven methodologies applied to analog circuit design, specifically focusing on modeling techniques such as Bayesian inference and neural network (NN)-based surrogate models, optimization and sizing strategies, specification-driven predictive design, and artificial intelligence (AI)-assisted design automation for layout generation. Through an extensive survey of the existing literature, we analyze the effectiveness, strengths, and limitations of various ML approaches, identifying key trends and gaps within the current research landscape. Finally, the paper outlines potential future research directions aimed at advancing ML integration in AMS-ICs design, emphasizing the need for improved explainability, data availability, methodological rigor, and end-to-end automation. Full article
(This article belongs to the Special Issue Recent Advances in AI Hardware Design)

30 pages, 7728 KB  
Article
Optuna-Optimized Ensemble and Neural Network Models for Static Characteristics Prediction of Active Bearings with Geometric Adjustments
by Girish Hariharan, Ravindra Mallya, Nitesh Kumar, Deepak Doreswamy, Gowrishankar Mandya Chennegowda and Subraya Krishna Bhat
Modelling 2025, 6(3), 98; https://doi.org/10.3390/modelling6030098 - 5 Sep 2025
Viewed by 322
Abstract
Active vibration control designs for journal bearings have improved rotordynamic stability and led to advancements in adjustable bearing types that enable precise control of bearing geometry. In this study, optimized machine learning (ML) algorithms were modeled and implemented to accurately predict the static performance envelope of a four-pad active journal bearing capable of controlling the radial and tilt positions of its pads in real time. ML models developed for the adjustable bearing system help predict its behavior as a function of three key input parameters: the eccentricity ratio and the radial and tilt positions of the pads. Four supervised regression models, namely Random Forest Regression (RFR), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and a feedforward Artificial Neural Network (ANN), were chosen for their demonstrated ability to capture complex nonlinear patterns and their robustness against overfitting in such tribological applications. Hyperparameter tuning for each model was performed using the Optuna framework, which applies Bayesian optimization to efficiently determine the best parameter settings. The Optuna-optimized ensemble and neural network models were used to identify the optimal combinations of input variables that maximize the static performance envelope of the active bearing system with geometric adjustments. Full article
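The Optuna tuning loop summarized above follows a standard pattern. Below is a minimal, hedged sketch of that pattern, using synthetic data, a scikit-learn Random Forest, and an invented search space rather than the authors' bearing dataset or settings; Optuna's default TPE sampler performs the Bayesian-style search.

```python
# Minimal Optuna tuning sketch (synthetic data; illustrative search space only).
import numpy as np
import optuna
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-ins for the three bearing inputs: eccentricity ratio, radial and tilt pad positions.
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=500)

def objective(trial):
    # Optuna's default TPE sampler proposes hyperparameters; we score them by cross-validated MSE.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "max_depth": trial.suggest_int("max_depth", 2, 12),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    model = RandomForestRegressor(random_state=0, **params)
    score = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    return -score  # minimize mean squared error

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```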

20 pages, 2252 KB  
Article
Enhanced Physics-Informed Neural Networks for Deep Tunnel Seepage Field Prediction: A Bayesian Optimization Approach
by Yiheng Pan, Yongqi Zhang, Qiyuan Lu, Peng Xia, Jiarui Qi and Qiqi Luo
Water 2025, 17(17), 2621; https://doi.org/10.3390/w17172621 - 4 Sep 2025
Viewed by 575
Abstract
Predicting tunnel seepage field is a critical challenge in the construction of underground engineering projects. While traditional analytical solutions and numerical methods struggle with complex geometric boundaries, standard Physics-Informed Neural Networks (PINNs) encounter additional challenges in tunnel seepage problems, including training instability, boundary handling difficulties, and low sampling efficiency. This paper develops an enhanced PINN framework designed specifically for predicting tunnel seepage field by integrating Exponential Moving Average (EMA) weight stabilization, Residual Adaptive Refinement with Distribution (RAR-D) sampling, and Bayesian optimization for collaborative training. The framework introduces a trial function method based on an approximate distance function (ADF) to address the precise handling of circular tunnel boundaries. The results demonstrate that the enhanced PINN framework achieves an exceptional prediction accuracy with an overall average relative error of 5 × 10⁻⁵. More importantly, the method demonstrates excellent practical applicability in data-scarce scenarios, maintaining acceptable prediction accuracy even when monitoring points are severely limited. Bayesian optimization reveals the critical insight that loss weight configuration is more important than network architecture in physics-constrained problems. This study is a systematic application of PINNs to prediction of tunnel seepage field and holds significant value for tunnel construction monitoring and risk assessment. Full article
(This article belongs to the Section Hydrogeology)
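For context on the physics-informed loss underlying frameworks like this, the sketch below shows a bare-bones PINN for a 2-D steady seepage (Laplace) problem, ∇²h = 0, on a unit square with a soft boundary penalty. It is an assumption-laden PyTorch illustration and omits the paper's EMA stabilization, RAR-D sampling, ADF trial functions, and Bayesian optimization.

```python
# Minimal vanilla-PINN sketch for 2-D steady seepage, Laplace(h) = 0 (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def pde_residual(xy):
    xy = xy.requires_grad_(True)
    h = net(xy)
    grad = torch.autograd.grad(h, xy, torch.ones_like(h), create_graph=True)[0]
    h_x, h_y = grad[:, :1], grad[:, 1:]
    h_xx = torch.autograd.grad(h_x, xy, torch.ones_like(h_x), create_graph=True)[0][:, :1]
    h_yy = torch.autograd.grad(h_y, xy, torch.ones_like(h_y), create_graph=True)[0][:, 1:]
    return h_xx + h_yy  # Laplacian of the hydraulic head

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    interior = torch.rand(256, 2)                       # collocation points in the unit square
    boundary = torch.rand(64, 2); boundary[:, 1] = 1.0  # assumed fixed head h = 1 on the top edge
    loss_pde = pde_residual(interior).pow(2).mean()
    loss_bc = (net(boundary) - 1.0).pow(2).mean()
    loss = loss_pde + 10.0 * loss_bc                    # loss weights matter, as the paper stresses
    opt.zero_grad(); loss.backward(); opt.step()
```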

34 pages, 2542 KB  
Article
Uncertainty-Based Design Optimization Framework Based on Improved Chicken Swarm Algorithm and Bayesian Optimization Neural Network
by Qiang Ji, Ran Li and Shi Jing
Appl. Sci. 2025, 15(17), 9671; https://doi.org/10.3390/app15179671 - 2 Sep 2025
Viewed by 248
Abstract
As the complexity and functional integration of mechanism systems continue to increase in modern practical engineering, the challenges of changing environmental conditions and extreme working conditions are becoming increasingly severe. Traditional uncertainty-based design optimization (UBDO) has exposed problems of low efficiency and slow convergence when dealing with nonlinear, high-dimensional, and strongly coupled problems. In response to these issues, this paper proposes an UBDO framework that integrates an efficient intelligent optimization algorithm with an excellent surrogate model. By fusing butterfly search with Levy flight optimization, an improved chicken swarm algorithm is introduced, aiming to address the imbalance between global exploitation and local exploration capabilities in the original algorithm. Additionally, Bayesian optimization is employed to fit the limit-state evaluation function using a BP neural network, with the objective of reducing the high computational costs associated with uncertainty analysis through repeated limit-state evaluations in uncertainty-based optimization. Finally, a decoupled optimization framework is adopted to integrate uncertainty analysis with design optimization, enhancing global optimization capabilities under uncertainty and addressing challenges associated with results that lack sufficient accuracy or reliability to meet design requirements. Based on the results from engineering case studies, the proposed UBDO framework demonstrates notable effectiveness and superiority. Full article
(This article belongs to the Special Issue Data-Enhanced Engineering Structural Integrity Assessment and Design)

24 pages, 1389 KB  
Article
Analysis and Forecasting of Cryptocurrency Markets Using Bayesian and LSTM-Based Deep Learning Models
by Bidesh Biswas Biki, Makoto Sakamoto, Amane Takei, Md. Jubirul Alam, Md. Riajuliislam and Showaibuzzaman Showaibuzzaman
Informatics 2025, 12(3), 87; https://doi.org/10.3390/informatics12030087 - 30 Aug 2025
Viewed by 711
Abstract
The rapid rise in cryptocurrency prices has intensified the need for robust forecasting models that can capture their irregular and volatile patterns. This study aims to forecast Bitcoin prices over a 15-day horizon by evaluating and comparing two distinct predictive modeling approaches: the Bayesian State-Space model and Long Short-Term Memory (LSTM) neural networks. Historical price data from January 2024 to April 2025 is used for model training and testing. The Bayesian model provided probabilistic insights, achieving a Mean Squared Error (MSE) of 0.0000 and a Mean Absolute Error (MAE) of 0.0026 on the training data, and an MSE of 0.0013 and an MAE of 0.0307 on the testing data. The LSTM model, in contrast, captured temporal dependencies and performed strongly, achieving an MSE of 0.0004, an MAE of 0.0160, an RMSE of 0.0212, and an R² of 0.9924 on the training data, and an MSE of 0.0007 with an R² of 0.3505 on the testing data. These results indicate that while the LSTM model excels in training performance, the Bayesian model provides better interpretability with lower error margins in testing, highlighting the trade-off between model accuracy and probabilistic forecasting in the cryptocurrency markets. Full article
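As a rough companion to the LSTM side of this comparison, the following PyTorch sketch trains a univariate sequence-to-one LSTM on a synthetic, min-max-scaled price series; the 15-step window, architecture, and data are placeholder assumptions, not the authors' configuration.

```python
# Minimal univariate LSTM forecaster (synthetic series; illustrative window and architecture).
import torch
import torch.nn as nn

torch.manual_seed(0)
prices = torch.cumsum(torch.randn(500), dim=0)                      # stand-in price series
prices = (prices - prices.min()) / (prices.max() - prices.min())    # min-max scale to [0, 1]

window = 15
X = torch.stack([prices[i:i + window] for i in range(len(prices) - window)]).unsqueeze(-1)
y = prices[window:].unsqueeze(-1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # last time step predicts the next value

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward(); opt.step()
```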

16 pages, 1499 KB  
Article
Predicting Flatfish Growth in Aquaculture Using Bayesian Deep Kernel Machines
by Junhee Kim, Seung-Won Seo, Ho-Jin Jung, Hyun-Seok Jang, Han-Kyu Lim and Seongil Jo
Appl. Sci. 2025, 15(17), 9487; https://doi.org/10.3390/app15179487 - 29 Aug 2025
Viewed by 230
Abstract
Olive flounder (Paralichthys olivaceus) is a key aquaculture species in South Korea, but its production has been challenged by rising mortality under environmental stress from key environmental factors such as water temperature, dissolved oxygen, and feeding conditions. To support adaptive management, this study proposes a Bayesian Deep Kernel Machine Regression (BDKMR) model that integrates Gaussian process regression with neural network-based feature learning. Using longitudinal data from commercial farms, we model fish growth as a function of water temperature, dissolved oxygen, and feed quantity. Model performance is assessed via Leave-One-Out Cross-Validation and compared against kernel ridge regression and Bayesian kernel machine regression. Results show that BDKMR achieves substantially lower prediction errors, indicating superior accuracy and robustness. These findings suggest that BDKMR offers a flexible and effective framework for predictive modeling in aquaculture systems. Full article
(This article belongs to the Section Agricultural Science and Technology)

19 pages, 2725 KB  
Article
Enhancing Photovoltaic Energy Output Predictions Using ANN and DNN: A Hyperparameter Optimization Approach
by Atıl Emre Cosgun
Energies 2025, 18(17), 4564; https://doi.org/10.3390/en18174564 - 28 Aug 2025
Viewed by 365
Abstract
This study investigates the use of artificial neural networks (ANNs) and deep neural networks (DNNs) for estimating photovoltaic (PV) energy output, with a particular focus on hyperparameter tuning. Supervised regression for PV direct current power prediction was conducted using only sensor-based inputs (PanelTemp, Irradiance, AmbientTemp, Humidity), together with physically motivated derived features (ΔT, IrradianceEff, IrradianceSq, Irradiance × ΔT). Samples acquired under very low irradiance (<50 W m⁻²) were excluded. Predictors were standardized with training-set statistics (z-score), and the target variable was modeled in log space to stabilize variance. A shallow artificial neural network (ANN; single hidden layer, widths {4–32}) was compared with deeper multilayer perceptrons (DNN; stacks {16 8}, {32 16}, {64 32}, {128 64}, {128 64 32}). Hyperparameters were selected with a grid search using validation mean squared error in log space with early stopping; Bayesian optimization was additionally applied to the ANN. Final models were retrained and evaluated on a held-out test set after inverse transformation to watts. Test performance was reported as MSE, RMSE, MAE, R², and MAPE for the ANN and DNN. The ANN exhibited superiority in absolute/squared error and explained variance, whereas the DNN achieved lower relative error and a marginal MAE advantage. Ablation studies showed that moderate depth can be beneficial (e.g., two-layer variants), and a simple bootstrap ensemble improved robustness. In summary, the ANN demonstrated superior performance in terms of absolute-error accuracy, whereas the DNN exhibited better consistency in relative-error accuracy. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
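The preprocessing recipe described above (z-scored predictors, log-space target, grid-searched network width with early stopping) can be outlined with scikit-learn. The sketch below uses invented feature ranges, a toy PV power relation, and a placeholder hyperparameter grid, so it should be read as an illustration of the recipe rather than a reproduction of the study.

```python
# Sketch of the preprocessing + grid-search recipe: standardized inputs, log-space target.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-ins for PanelTemp, Irradiance (already above the low-irradiance cutoff), AmbientTemp, Humidity.
X = rng.uniform([10, 100, 5, 10], [70, 1000, 40, 90], size=(800, 4))
y = 0.18 * X[:, 1] * (1 - 0.004 * (X[:, 0] - 25)) + rng.normal(0, 2, 800)  # toy PV power in watts
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Model the target in log space and invert back to watts for evaluation.
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(max_iter=2000, early_stopping=True, random_state=0))
model = TransformedTargetRegressor(regressor=mlp, func=np.log1p, inverse_func=np.expm1)

grid = {"regressor__mlpregressor__hidden_layer_sizes": [(8,), (16,), (32,), (32, 16), (64, 32)]}
search = GridSearchCV(model, grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```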

26 pages, 642 KB  
Article
Bayesian Input Compression for Edge Intelligence in Industry 4.0
by Handuo Zhang, Jun Guo, Xiaoxiao Wang and Bin Zhang
Electronics 2025, 14(17), 3416; https://doi.org/10.3390/electronics14173416 - 27 Aug 2025
Viewed by 316
Abstract
In Industry 4.0 environments, edge intelligence plays a critical role in enabling real-time analytics and autonomous decision-making by integrating artificial intelligence (AI) with edge computing. However, deploying deep neural networks (DNNs) on resource-constrained edge devices remains challenging due to limited computational capacity and strict latency requirements. While conventional methods primarily focus on structural model compression, we propose an adaptive input-centric approach that reduces computational overhead by pruning redundant features prior to inference. A Bayesian network is employed to quantify the influence of each input feature on the model output, enabling efficient input reduction without modifying the model architecture. A bidirectional chain structure facilitates robust feature ranking, and an automated algorithm optimizes input selection to meet predefined constraints on model accuracy and size. Experimental results demonstrate that the proposed method significantly reduces memory usage and computation cost while maintaining competitive performance, making it highly suitable for real-time edge intelligence in industrial settings. Full article
(This article belongs to the Special Issue Intelligent Cloud–Edge Computing Continuum for Industry 4.0)

28 pages, 1687 KB  
Article
MaGNet-BN: Markov-Guided Bayesian Neural Networks for Calibrated Long-Horizon Sequence Forecasting and Community Tracking
by Daozheng Qu and Yanfei Ma
Mathematics 2025, 13(17), 2740; https://doi.org/10.3390/math13172740 - 26 Aug 2025
Viewed by 494
Abstract
Forecasting over dynamic graph environments necessitates modeling both long-term temporal dependencies and evolving structural patterns. We propose MaGNet-BN, a modular framework that simultaneously performs probabilistic forecasting and dynamic community detection on temporal graphs. MaGNet-BN integrates Bayesian node embeddings for uncertainty modeling, prototype-guided Louvain clustering for community discovery, Markov-based transition modeling to preserve temporal continuity, and reinforcement-based refinement to improve structural boundary accuracy. Evaluated on real-world datasets in pedestrian mobility, energy consumption, and retail demand, our model achieves on average 11.48% lower MSE, 6.62% lower NLL, and 10.82% higher Modularity (Q) compared with the best-performing baselines, with peak improvements reaching 12.0% in MSE, 7.9% in NLL, and 16.0% in Q on individual datasets. It also improves uncertainty calibration (PICP) and temporal community coherence (tARI). Ablation studies highlight the complementary strengths of each component. Overall, MaGNet-BN delivers a structure-aware and uncertainty-calibrated forecasting system that models both temporal evolution and dynamic community formation, with a modular design enabling interpretable predictions and scalable applications across smart cities, energy systems, and personalized services. Full article

24 pages, 11782 KB  
Article
Research on Joint Game-Theoretic Modeling of Network Attack and Defense Under Incomplete Information
by Yifan Wang, Xiaojian Liu and Xuejun Yu
Entropy 2025, 27(9), 892; https://doi.org/10.3390/e27090892 - 23 Aug 2025
Viewed by 555
Abstract
In the face of increasingly severe cybersecurity threats, incomplete information and environmental dynamics have become central challenges in network attack–defense scenarios. In real-world network environments, defenders often find it difficult to fully perceive attack behaviors and network states, leading to a high degree of uncertainty in the system. Traditional approaches are inadequate in dealing with the diversification of attack strategies and the dynamic evolution of network structures, making it difficult to achieve highly adaptive defense strategies and efficient multi-agent coordination. To address these challenges, this paper proposes a multi-agent network defense approach based on joint game modeling, termed JG-Defense (Joint Game-based Defense), which aims to enhance the efficiency and robustness of defense decision-making in environments characterized by incomplete information. The method integrates Bayesian game theory, graph neural networks, and a proximal policy optimization framework, and it introduces two core mechanisms. First, a Dynamic Communication Graph Neural Network (DCGNN) is used to model the dynamic network structure, improving the perception of topological changes and attack evolution trends. A multi-agent communication mechanism is incorporated within the DCGNN to enable the sharing of local observations and strategy coordination, thereby enhancing global consistency. Second, a joint game loss function is constructed to embed the game equilibrium objective into the reinforcement learning process, optimizing both the rationality and long-term benefit of agent strategies. Experimental results demonstrate that JG-Defense outperforms the Cybermonic model by 15.83% in overall defense performance. Furthermore, under the traditional PPO loss function, the DCGNN model improves defense performance by 11.81% compared to the Cybermonic model. These results verify that the proposed integrated approach achieves superior global strategy coordination in dynamic attack–defense scenarios with incomplete information. Full article
(This article belongs to the Section Multidisciplinary Applications)

15 pages, 1839 KB  
Article
Fault Recovery Strategy with Net Load Forecasting Using Bayesian Optimized LSTM for Distribution Networks
by Zekai Ding and Yundi Chu
Entropy 2025, 27(9), 888; https://doi.org/10.3390/e27090888 - 22 Aug 2025
Viewed by 493
Abstract
To address the impact of distributed energy resource volatility on distribution network fault restoration, this paper proposes a strategy that incorporates net load forecasting. A Bayesian-optimized long short-term memory neural network is used to accurately predict the net load within fault-affected areas, achieving an R² of 0.9569 and an RMSE of 12.15 kW. Based on the forecasting results, a fast restoration optimization model is established, with objectives to maximize critical load recovery, minimize switching operations, and reduce network losses. The model is solved using a genetic algorithm enhanced with quantum particle swarm optimization (GA-QPSO), a hybrid metaheuristic known for its superior global exploration and local refinement capabilities. GA-QPSO has been successfully applied in various power system optimization problems, including service restoration, network reconfiguration, and distributed generation planning, owing to its effectiveness in navigating large, complex solution spaces. Simulation results on the IEEE 33-bus system show that the proposed method reduces network losses by 33.2%, extends the power supply duration from 60 to 120 min, and improves load recovery from 72.7% to 75.8%, demonstrating enhanced accuracy and efficiency of the restoration process. Full article
(This article belongs to the Section Multidisciplinary Applications)

13 pages, 3304 KB  
Article
ANN-Based Prediction of OSL Decay Curves in Quartz from Turkish Mediterranean Beach Sand
by Mehmet Yüksel, Fırat Deniz and Emre Ünsal
Crystals 2025, 15(8), 733; https://doi.org/10.3390/cryst15080733 - 19 Aug 2025
Viewed by 868
Abstract
Quartz is a widely used mineral in dosimetric and geochronological applications due to its stable luminescence properties under ionizing radiation. This study presents an artificial neural network (ANN)-based approach to predict the optically stimulated luminescence (OSL) decay curves of quartz extracted from Mediterranean beach sand samples in Turkey. Experimental OSL signals were obtained from quartz samples irradiated with beta doses ranging from 0.1 Gy to 1034.9 Gy. The dataset was used to train ANN models with three different learning algorithms: Levenberg–Marquardt (LM), Bayesian Regularization (BR), and Scaled Conjugate Gradient (SCG). Forty-seven decay curves were used for training and three for testing. The ANN models were evaluated based on regression accuracy, training–validation–test performance, and their predictive capability for low, medium, and high doses (1 Gy, 72.4 Gy, 465.7 Gy). The results showed that BR achieved the highest overall regression (R = 0.99994), followed by LM (R = 0.99964) and SCG (R = 0.99820), confirming its superior generalization and fit across all dose ranges. LM performed optimally at low-to-moderate doses, and SCG delivered balanced yet slightly noisier predictions. The proposed ANN-based method offers a robust and effective alternative to conventional kinetic modeling approaches for analyzing OSL decay behavior and holds considerable potential for advancing luminescence-based retrospective dosimetry and OSL dating applications. Full article
(This article belongs to the Section Inorganic Crystalline Materials)

52 pages, 15058 KB  
Article
Optimizing Autonomous Vehicle Navigation Through Reinforcement Learning in Dynamic Urban Environments
by Mohammed Abdullah Alsuwaiket
World Electr. Veh. J. 2025, 16(8), 472; https://doi.org/10.3390/wevj16080472 - 18 Aug 2025
Viewed by 706
Abstract
Autonomous vehicle (AV) navigation in dynamic urban environments faces challenges such as unpredictable traffic conditions, varying road user behaviors, and complex road networks. This study proposes a novel reinforcement learning-based framework that enhances AV decision making through spatial-temporal context awareness. The framework integrates Proximal Policy Optimization (PPO) and Graph Neural Networks (GNNs) to effectively model urban features like intersections, traffic density, and pedestrian zones. A key innovation is the urban context-aware reward mechanism (UCARM), which dynamically adapts the reward structure based on traffic rules, congestion levels, and safety considerations. Additionally, the framework incorporates a Dynamic Risk Assessment Module (DRAM), which uses Bayesian inference combined with Markov Decision Processes (MDPs) to proactively evaluate collision risks and guide safer navigation. The framework’s performance was validated across three datasets—Argoverse, nuScenes, and CARLA. Results demonstrate significant improvements: An average travel time of 420 ± 20 s, a collision rate of 3.1%, and energy consumption of 11,833 ± 550 J in Argoverse; 410 ± 20 s, 2.5%, and 11,933 ± 450 J in nuScenes; and 450 ± 25 s, 3.6%, and 13,000 ± 600 J in CARLA. The proposed method achieved an average navigation success rate of 92.5%, consistently outperforming baseline models in safety, efficiency, and adaptability. These findings indicate the framework’s robustness and practical applicability for scalable AV deployment in real-world urban traffic conditions. Full article
(This article belongs to the Special Issue Modeling for Intelligent Vehicles)

18 pages, 2291 KB  
Article
Forecasting Tibetan Plateau Lake Level Responses to Climate Change: An Explainable Deep Learning Approach Using Altimetry and Climate Models
by Atefeh Gholami and Wen Zhang
Water 2025, 17(16), 2434; https://doi.org/10.3390/w17162434 - 17 Aug 2025
Viewed by 665
Abstract
The Tibetan Plateau’s lakes, serving as critical water towers for over two billion people, exhibit divergent responses to climate change that remain poorly quantified. This study develops a deep learning framework integrating Synthetic Aperture Radar (SAR) altimetry from Sentinel-3A with bias-corrected CMIP6 (Coupled Model Intercomparison Project Phase 6) climate projections under Shared Socioeconomic Pathways (SSP) scenarios (SSP2-4.5 and SSP5-8.5, adjusted via quantile mapping) to predict lake-level changes across eight Tibetan Plateau (TP) lakes. Using a Feed-Forward Neural Network (FFNN) optimized via Bayesian optimization with the Optuna framework, we achieve robust water level projections (mean validation R² = 0.861) and attribute drivers through Shapley Additive exPlanations (SHAP) analysis. Results reveal a stark north–south divergence: glacier-fed northern lakes like Migriggyangzham will rise by 13.18 ± 0.56 m under SSP5-8.5 due to meltwater inputs (temperature SHAP value = 0.41), consistent with the early (melt-dominated) phase of the IPCC’s ‘peak water’ framework. In comparison, evaporation-dominated southern lakes such as Langacuo face irreversible desiccation (−4.96 ± 0.68 m by 2100) as evaporative demand surpasses precipitation gains. Transitional western lakes exhibit “peak water” inflection points (e.g., Lumajang Dong’s 2060 maximum), signaling cryospheric buffer loss. These projections, validated through rigorous quantile mapping and rolling-window cross-validation, provide the first process-aware assessment of TP lake vulnerabilities, informing adaptation strategies under the Sustainable Development Goals (SDGs) for water security (SDG 6) and climate action (SDG 13). The methodological framework establishes a transferable paradigm for monitoring high-altitude freshwater systems globally. Full article
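The SHAP attribution step mentioned here is model-agnostic and can be sketched with the shap package. The example below fits a small feed-forward regressor to synthetic data with invented driver names and computes mean absolute Shapley values per feature; it is illustrative only and does not use the study's altimetry or climate inputs.

```python
# Sketch of SHAP-based attribution for a small regressor (synthetic drivers; illustrative only).
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
feature_names = ["temperature", "precipitation", "evaporation"]   # hypothetical drivers
X = rng.normal(size=(400, 3))
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] - 0.3 * X[:, 2] + 0.05 * rng.normal(size=400)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)

# KernelExplainer is model-agnostic: it estimates Shapley values by perturbing inputs
# against a small background sample.
background = X[rng.choice(len(X), 50, replace=False)]
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:20], nsamples=200)

mean_abs = np.abs(shap_values).mean(axis=0)
for name, val in zip(feature_names, mean_abs):
    print(f"{name}: mean |SHAP| = {val:.3f}")
```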

22 pages, 894 KB  
Article
Adaptive Knowledge Assessment via Symmetric Hierarchical Bayesian Neural Networks with Graph Symmetry-Aware Concept Dependencies
by Wenyang Cao, Nhu Tam Mai and Wenhe Liu
Symmetry 2025, 17(8), 1332; https://doi.org/10.3390/sym17081332 - 15 Aug 2025
Cited by 5 | Viewed by 484
Abstract
Traditional educational assessment systems suffer from inefficient question selection strategies that fail to optimally probe student knowledge while requiring extensive testing time. We present a novel hierarchical probabilistic neural framework that integrates Bayesian inference with symmetric deep neural architectures to enable adaptive, efficient knowledge assessment. Our method models student knowledge as latent representations within a graph-structured concept dependency network, where probabilistic mastery states, updated through variational inference, are encoded by symmetric graph properties and symmetric concept representations that preserve structural equivalences across similar knowledge configurations. The system employs a symmetric dual-network architecture: a concept embedding network that learns scale-invariant hierarchical knowledge representations from assessment data and a question selection network that optimizes symmetric information gain through deep reinforcement learning with symmetric reward structures. We introduce a novel uncertainty-aware objective function that leverages symmetric uncertainty measures to balance exploration of uncertain knowledge regions with exploitation of informative question patterns. The hierarchical structure captures both fine-grained concept mastery and broader domain understanding through multi-scale graph convolutions that preserve local graph symmetries and global structural invariances. Our symmetric information-theoretic method ensures balanced assessment strategies that maintain diagnostic equivalence across isomorphic concept subgraphs. Experimental validation on large-scale educational datasets demonstrates that our method achieves 76.3% diagnostic accuracy while reducing the question count by 35.1% compared to traditional assessments. The learned concept embeddings reveal interpretable knowledge structures with symmetric dependency patterns that align with pedagogical theory. Our work generalizes across domains and student populations through symmetric transfer learning mechanisms, providing a principled framework for intelligent tutoring systems and adaptive testing platforms. The integration of probabilistic reasoning with symmetric neural pattern recognition offers a robust solution to the fundamental trade-off between assessment efficiency and diagnostic precision in educational technology. Full article
(This article belongs to the Special Issue Advances in Graph Theory Ⅱ)
