Search Results (13,684)

Search Parameters:
Keywords = Data-Driven Models

27 pages, 5409 KB  
Article
Frequency-Domain Physics-Informed Neural Networks for Modeling and Parameter Inversion of Wave-Induced Seabed Response
by Weiyun Chen, Hairong Tao, Lei Wang and Shaofen Fan
J. Mar. Sci. Eng. 2026, 14(8), 690; https://doi.org/10.3390/jmse14080690 (registering DOI) - 8 Apr 2026
Abstract
Modeling the dynamic response of saturated marine soils is crucial yet computationally challenging for traditional methods. Meanwhile, purely data-driven models suffer from sparse data and lack of physical interpretability. To overcome these limitations, this study proposes an intelligent engineering framework based on a frequency-domain physics-informed neural network (FD-PINN) for the forward simulation and inverse parameter identification of saturated seabed soils. Constrained directly by physical laws during the learning process, FD-PINN remains highly reliable even when training data is sparse. By formulating the governing equations in the frequency domain, it directly predicts complex-valued displacement and pore-pressure phasors. Multiscale Fourier feature mappings mitigate spectral bias and capture boundary layers and high-frequency effects. For inverse problems, a phase-sensitive lock-in extraction strategy transforms time-domain measurements into robust frequency-domain targets, enabling the accurate and noise-tolerant identification of poroelastic parameters with clear physical meaning (nondimensional storage parameter S and permeability parameter Γ). Numerical experiments show that FD-PINN substantially outperforms conventional time-domain PINN, achieving relative L2 errors of 10⁻²–10⁻³ for single- and multi-frequency excitations typical of wave-induced loadings. In particular, Γ is consistently recovered with sub-percent relative error, while S can be reliably identified with multi-frequency data. The framework offers a data-efficient, noise-robust approach for high-fidelity modeling and robust parameter inversion, which is particularly valuable in offshore environments where high-quality data is scarce. Full article
(This article belongs to the Special Issue Advances in Marine Geomechanics and Geotechnics)
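The phase-sensitive lock-in extraction described in the abstract can be illustrated with a minimal sketch (the paper's implementation is not published here, and the signal parameters below are invented): demodulating a sampled time series at a known excitation frequency recovers the complex phasor that the frequency-domain network would take as a target.

```python
import cmath
import math

def lockin_phasor(samples, times, omega):
    # Demodulate at the known angular frequency; over an integer number
    # of periods the off-frequency term averages out, leaving A*e^{i*phi}.
    n = len(samples)
    acc = sum(x * cmath.exp(-1j * omega * t) for x, t in zip(samples, times))
    return 2.0 * acc / n

# Synthetic "measurement": amplitude 2.0, phase 0.5 rad, 5 Hz, sampled at 1 kHz
fs, f, amp, phi = 1000, 5.0, 2.0, 0.5
omega = 2.0 * math.pi * f
times = [k / fs for k in range(fs)]               # exactly 5 full periods
samples = [amp * math.cos(omega * t + phi) for t in times]

phasor = lockin_phasor(samples, times, omega)     # ~ 2.0 * e^{i*0.5}
```

Averaging over an integer number of periods is what gives the lock-in its noise tolerance: zero-mean noise and off-frequency components largely cancel in the demodulated sum.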

32 pages, 7135 KB  
Article
Evolutionary Multi-Objective Prompt Learning for Synthetic Text Data Generation with Black-Box Large Language Models
by Diego Pastrián, Nicolás Hidalgo, Víctor Reyes and Erika Rosas
Appl. Sci. 2026, 16(8), 3623; https://doi.org/10.3390/app16083623 (registering DOI) - 8 Apr 2026
Abstract
High-quality training data are essential for the performance and generalization of artificial intelligence systems, particularly in dynamic environments such as adaptive stream processing for disaster response. However, constructing large and representative datasets remains costly and time-consuming, especially in domains where real data are scarce or difficult to obtain. Large Language Models (LLMs) provide powerful capabilities for synthetic text generation, yet the quality of generated data strongly depends on the design of input prompts. Prompt engineering is therefore critical, but it remains largely manual and difficult to scale, particularly in black-box settings where model internals are inaccessible. This work introduces EVOLMD-MO, a multi-objective evolutionary framework for automated prompt learning aimed at generating high-quality synthetic text datasets using black-box LLMs. The proposed approach formulates prompt optimization as a multi-objective search problem in which candidate prompts evolve through genetic operators guided by two complementary objectives: semantic fidelity to reference data and generative diversity of the produced samples. To support scalable optimization, the framework integrates a modular multi-agent architecture that decouples prompt evolution, LLM interaction, and evaluation mechanisms. The evolutionary process is implemented using the NSGA-II algorithm, enabling the discovery of diverse Pareto-optimal prompts that balance semantic preservation and diversity. Experimental evaluation using large-scale disaster-related social media data demonstrates that the proposed approach consistently improves prompt quality across generations while maintaining a stable trade-off between fidelity and diversity. Compared with a single-objective baseline, EVOLMD-MO explores a significantly broader semantic search space and produces more diverse yet semantically coherent synthetic datasets. 
These results indicate that multi-objective evolutionary prompt learning constitutes a promising strategy for black-box LLM-driven data generation, with potential applicability to adaptive data analytics and real-time decision-support systems in highly dynamic environments, pending broader validation across domains and models. Full article
(This article belongs to the Special Issue Resource Management for AI-Centric Computing Systems)
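The non-dominated sorting at the heart of NSGA-II can be sketched as a generic Pareto-front filter over candidate prompts scored on the two objectives named above; the (fidelity, diversity) scores below are hypothetical, and this is not the authors' code.

```python
def pareto_front(candidates):
    """Return the non-dominated candidates when BOTH objectives are
    maximized, e.g. (semantic_fidelity, diversity) scores for prompts."""
    front = []
    for i, (f1, d1) in enumerate(candidates):
        dominated = any(
            (f2 >= f1 and d2 >= d1) and (f2 > f1 or d2 > d1)
            for j, (f2, d2) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((f1, d1))
    return front

# Hypothetical (fidelity, diversity) scores for five candidate prompts
scores = [(0.9, 0.2), (0.7, 0.6), (0.5, 0.8), (0.6, 0.5), (0.3, 0.9)]
front = pareto_front(scores)   # (0.6, 0.5) is dominated by (0.7, 0.6)
```

Keeping the whole front, rather than a single best prompt, is what lets the framework trade semantic preservation against diversity after the search finishes.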

15 pages, 3434 KB  
Article
Cyclic Fatigue of Rotary Versus Reciprocating Endodontic Files: An In Vitro Study of Engine-Driven Endodontic Files
by Sverre Brun, Andrine Rebni Kristoffersen, Malene Nerbøberg Solsvik, Marit Øilo and Inge Fristad
Dent. J. 2026, 14(4), 216; https://doi.org/10.3390/dj14040216 - 8 Apr 2026
Abstract
Background/Objectives: Instrument fracture remains a significant complication in endodontics. This study compared the resistance to cyclic fatigue failure between rotary and reciprocating nickel–titanium file systems, as well as differences related to file size and taper. Methods: Nineteen rotary and reciprocating file types (n = 10 per group) were evaluated in three independent test series, harmonized according to file size and system. Cyclic fatigue testing was conducted using a static model with a stainless-steel artificial canal, with an internal diameter of 0.9 mm, a 75° curvature angle, and a fixed radius for each series. Files were operated using preset programs on the X-Smart Plus, Rooter X3000, and Sendoline Endo torque-controlled motors. Time to fracture was recorded digitally, and the total number of full rotations to failure was calculated. The fractured fragments were examined with scanning electron microscopy and fractographic analysis. The data were analyzed using linear models in Stata version 19, with significance set at p ≤ 0.05. Results: Reciprocating file systems demonstrated greater time-to-fracture fatigue resistance than rotary systems. However, these differences were diminished or, in some cases, eliminated when normalized to the number of complete rotations. Fractographic analysis indicated that fractures predominantly resulted from tensile stress rather than shear forces. Conclusions: Reciprocating kinematics generally enhanced fatigue resistance compared with continuous rotation. The results suggest that fatigue resistance in machine-driven nickel–titanium instruments cannot be predicted by motion type or file design alone but reflects a complex interaction between alloy composition, heat treatment, and cross-sectional geometry. Full article
(This article belongs to the Special Issue Endodontics: From Technique to Regeneration)
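The normalization from time to fracture to complete rotations mentioned in the results is simple arithmetic; the sketch below uses hypothetical motor settings (300 rpm; a 150°/30° reciprocation profile), since the actual presets of the motors tested are not given in this listing.

```python
def rotations_to_failure_rotary(rpm, seconds):
    """Continuous rotation: full rotations completed before fracture."""
    return rpm * seconds / 60.0

def rotations_to_failure_reciprocating(cycles_per_min, fwd_deg, rev_deg, seconds):
    """Reciprocation: the net forward angle per cycle, converted to full
    360-degree rotations, makes the two kinematics directly comparable."""
    net_deg_per_cycle = fwd_deg - rev_deg
    return cycles_per_min * seconds / 60.0 * net_deg_per_cycle / 360.0

# Hypothetical settings: rotary file at 300 rpm failing at 80 s versus a
# reciprocating file at 350 cycles/min (150 deg forward, 30 deg back) at 120 s
rotary = rotations_to_failure_rotary(300, 80)
reciprocating = rotations_to_failure_reciprocating(350, 150, 30, 120)
```

This is exactly why the abstract notes that time-to-fracture differences shrink once normalized: a reciprocating file accumulates far fewer net rotations per unit time than a continuously rotating one.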

25 pages, 3968 KB  
Article
Explainable Data-Driven Approach for Smart Crop Yield Prediction in Sub-Saharan Africa: Performance and Interpretability Analysis
by Damilola D. Olatinwo, Herman C. Myburgh, Allan De Freitas and Adnan Abu-Mahfouz
Agriculture 2026, 16(8), 826; https://doi.org/10.3390/agriculture16080826 - 8 Apr 2026
Abstract
The increasing demand for innovative strategies in sustainable food production—driven by rapid global population growth, particularly in sub-Saharan Africa (SSA)—necessitates urgent attention to agricultural resilience. Recent technological advancements have enhanced crop productivity, post-harvest preservation, and environmentally sustainable farming practices. However, three critical bottlenecks remain: (i) the lack of accurate, maize-specific yield prediction methods tailored to SSA; (ii) limited multimodal modeling approaches capable of capturing complex, nonlinear interactions among heterogeneous data sources; and (iii) a lack of explainability mechanisms, which render high-performing models “black boxes” and hinder stakeholder trust. To address these gaps, this study presents an explainable machine learning framework for smart maize yield prediction. We integrate multimodal SSA-specific soil, crop, and weather data to capture the multi-dimensional drivers of maize productivity. Six diverse algorithms—including extreme gradient boosting (XGBoost), light gradient boosting machine (LGBM), categorical boosting (CatBoost), support vector machine (SVM), random forest (RF), and an artificial neural network (ANN) combined with k-nearest neighbors (kNN)—were benchmarked to evaluate predictive performance. To ensure robustness against spatial heterogeneity, we employed a Leave-One-Plot-Out (LOPO) cross-validation strategy. Empirical results on unseen test data identify CatBoost as the best-performing model, achieving a coefficient of determination of R² ≈ 0.76, demonstrating its ability to capture complex, nonlinear relationships in agricultural data. To enhance transparency and stakeholder trust, we integrated Local Interpretable Model-agnostic Explanations (LIME), providing plot-level insights into the physiological and environmental drivers of maize yield. 
Together, these contributions establish a scalable and interpretable modeling framework capable of supporting data-driven agricultural decision-making in SSA. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
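The Leave-One-Plot-Out protocol can be sketched in a few lines: each plot is held out in turn, so no plot contributes rows to both training and testing. The plot labels, yields, and trivial mean predictor below are placeholders for the paper's models.

```python
from statistics import mean

def lopo_splits(plot_ids):
    """Leave-One-Plot-Out: each unique plot becomes the test fold once."""
    for held_out in sorted(set(plot_ids)):
        train = [i for i, p in enumerate(plot_ids) if p != held_out]
        test = [i for i, p in enumerate(plot_ids) if p == held_out]
        yield held_out, train, test

# Invented example: five field records from three plots
plots = ["A", "A", "B", "B", "C"]
yields = [3.1, 2.9, 4.0, 4.2, 3.5]

fold_errors = []
for held_out, train, test in lopo_splits(plots):
    prediction = mean(yields[i] for i in train)   # stand-in for a fitted model
    fold_errors.append(mean(abs(yields[i] - prediction) for i in test))
```

Grouped splitting of this kind is stricter than a random split: it measures how a model generalizes to a plot it has never seen, which is the spatial heterogeneity the abstract is guarding against.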

18 pages, 1578 KB  
Article
NAR–SPEI–NARX Hybrid Forecasting Model for Soil Moisture Index (SMI)
by Miloš Todorov, Darjan Karabašević, Predrag M. Tekić, Dragana Dudić and Dejan Viduka
Algorithms 2026, 19(4), 287; https://doi.org/10.3390/a19040287 (registering DOI) - 8 Apr 2026
Abstract
This paper introduces a new hybrid forecasting architecture that combines Nonlinear Autoregressive (NAR) models, the proxy Standardized Precipitation-Evapotranspiration Index (SPEI), and a Nonlinear Autoregressive with Exogenous Inputs (NARX) framework for Soil Moisture Index (SMI) prediction. The proposed methodology addresses the key difficulty of incorporating future climatic information into soil moisture forecasting by using a cascaded approach. Stage 1 uses univariate NAR models to create multi-step-ahead predictions of precipitation and temperature. Stage 2 converts these forecasts into proxy SPEI values using a physically based water balance computation, and Stage 3 employs a NARX model that uses observed historical SMI and forecast-derived proxy SPEI as exogenous inputs. The framework is assessed using high-frequency observations from 2014 to 2020, with training data through 2019 and validation covering the whole 2020 horizon. Combining forecast-driven climatic indicators with autoregressive soil moisture dynamics yielded high prediction accuracy (R2 = 0.9888, RMSE = 0.0827, MAE = 0.0567). In summary, the NAR–SPEI–NARX model is a three-stage approach in which NAR models forecast precipitation and temperature, these forecasts are converted into a proxy SPEI, and the proxy SPEI serves as an exogenous input to the NARX model. Full article
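Stage 2's water-balance computation can be sketched as a standardized series of the climatic water balance D = P − PET. Note that the published SPEI fits a log-logistic distribution to D, so the plain z-score below is a simplification, and the monthly values are invented.

```python
from statistics import mean, stdev

def proxy_spei(precip, pet):
    """Climatic water balance D = P - PET, standardized to a z-score as a
    simple SPEI proxy (the published index fits a log-logistic distribution;
    a z-score keeps this sketch stdlib-only)."""
    d = [p - e for p, e in zip(precip, pet)]
    mu, sigma = mean(d), stdev(d)
    return [(x - mu) / sigma for x in d]

# Invented monthly precipitation and potential evapotranspiration (mm)
precip = [50, 80, 20, 60]
pet = [40, 50, 45, 35]
z = proxy_spei(precip, pet)   # positive = wetter than average, negative = drier
```

Because the proxy is standardized, the NARX stage receives a dimensionless drought signal rather than raw precipitation and temperature forecasts.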

24 pages, 656 KB  
Article
Digital Technology and Energy Efficiency Enhancement: A Theoretical Framework and Empirical Evidence
by Lianghu Wang, Bin Li and Jun Shao
Energies 2026, 19(8), 1819; https://doi.org/10.3390/en19081819 - 8 Apr 2026
Abstract
Improving energy efficiency is critical for tackling environmental issues and achieving sustainable development. Understanding how digital technology affects energy efficiency and its underlying mechanisms can deepen our comprehension of the economic consequences of digital innovation. This study adopts a dictionary-based method to identify digital technology patents from a large-scale patent dataset and employs a comprehensive evaluation approach incorporating both subjective and objective weights to measure digital technology advancement. Building on this framework, the research uses city-level data from China and applies panel data models alongside mediation effect models as core analytical tools to investigate the impact mechanisms and effects of digital technology on energy efficiency. Key findings reveal that digital technology has developed rapidly, exhibiting distinct phase-specific characteristics, especially after 2010, though notable regional disparities remain. Robustness tests confirm that digital technology significantly enhances energy efficiency. Nonlinear regression results indicate that the marginal effect of digital technology changes dynamically across different stages of energy efficiency development. Heterogeneity tests demonstrate that the impact of digital technology on energy efficiency exhibits typical heterogeneous characteristics. Mechanism analysis shows that digital technology enhances energy efficiency primarily through two pathways: green technology innovation and industrial structure upgrading. Further analysis suggests that regional convergence in energy efficiency is objectively present, and digital technology actively accelerates this convergence process. These findings offer practical insights to guide policymakers in designing and implementing digital technology-driven strategies aimed at enhancing energy efficiency. Full article
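The mediation analysis mentioned above decomposes the total effect of digital technology on efficiency into a direct part and an indirect part routed through a mediator (e.g. green innovation); for linear OLS the identity total = direct + a·b holds exactly. A stdlib sketch with invented city-level numbers:

```python
def slope(x, y):
    """OLS slope of y on x (simple regression with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def ols2(y, x1, x2):
    """Coefficients (b1, b2) of y ~ x1 + x2 via centered normal equations."""
    def centered(v):
        m = sum(v) / len(v)
        return [a - m for a in v]
    X1, X2, Y = centered(x1), centered(x2), centered(y)
    s11 = sum(a * a for a in X1)
    s22 = sum(a * a for a in X2)
    s12 = sum(a * b for a, b in zip(X1, X2))
    s1y = sum(a * b for a, b in zip(X1, Y))
    s2y = sum(a * b for a, b in zip(X2, Y))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Invented series: digital technology index X, green innovation mediator M,
# energy efficiency Y (the decomposition is the point, not the numbers)
X = [1.0, 2.0, 3.0, 4.0, 5.0]
M = [2.1, 3.9, 6.2, 7.8, 10.1]
Y = [3.0, 4.5, 6.8, 8.1, 10.4]

a = slope(X, M)            # effect of X on the mediator
direct, b = ols2(Y, X, M)  # direct effect of X and the mediator's coefficient
total = slope(X, Y)        # total effect of X on Y
indirect = a * b           # mediated (indirect) effect
```

The exact decomposition total = direct + indirect is what lets mechanism analyses attribute a share of the overall effect to each pathway.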

14 pages, 981 KB  
Article
Modeling and Computational Analysis of Failure Mechanism of Photocatalytic Anti-Corrosion Materials Driven by Multi-Source Environmental Data
by Yanwei Tong, Hui Xu and Shuyuan Jia
Coatings 2026, 16(4), 449; https://doi.org/10.3390/coatings16040449 - 8 Apr 2026
Abstract
Photocatalytic anti-corrosion materials are emerging intelligent protective materials that have been widely used in marine and offshore engineering in recent years. However, their failure mechanisms under multi-factor coupling are complex, and it is difficult for traditional methods to achieve accurate life prediction and mechanism analysis. This article takes submarine pipelines as the research object and designs an innovative multi-source environmental data-driven method combined with deep learning (DL), aiming to establish an intelligent prediction model for the failure of the material. The study first systematically collects multi-source heterogeneous data on the materials during service and, on this basis, constructs a hybrid DL model. Firstly, a multi-scale multimodal image feature fusion network (MMFCT) based on the combination of a convolutional neural network (CNN) and a Transformer is adopted to automatically extract corrosion features from microscopic images and capture the dynamic correlation between environmental temporal data and performance degradation; then, the Sparrow Search Algorithm (SSA) is used to optimize a BP neural network (BPNN) model for predicting the ultimate bearing capacity of submarine corroded pipelines. Simulation experiments show that the proposed method achieves accurate prediction of material remaining life and key performance degradation paths. The corrosion recognition precision reaches 94.7%, and the bearing capacity prediction error remains below 3.1%. Full article
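A heavily simplified, toy version of the Sparrow Search Algorithm loop (producers exploit, scroungers follow the current best) is sketched below on a sphere objective. The published update rules, and the BPNN whose weights the paper actually tunes, are not reproduced; the fixed 0.9 contraction stands in for the adaptive producer step.

```python
import random

def ssa_minimize(f, dim, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Schematic sparrow-search-style minimizer: the top fifth of the
    population (producers) contract their positions, the rest (scroungers)
    move part of the way toward the best producer."""
    rng = random.Random(seed)
    flock = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(flock, key=f)
    for _ in range(iters):
        flock.sort(key=f)
        n_producers = max(1, pop // 5)
        for i in range(pop):
            if i < n_producers:
                # toy exploitation step (real SSA uses adaptive, random steps)
                flock[i] = [0.9 * x for x in flock[i]]
            else:
                flock[i] = [x + rng.uniform(0.0, 1.0) * (b - x)
                            for x, b in zip(flock[i], flock[0])]
        candidate = min(flock, key=f)
        if f(candidate) < f(best):
            best = list(candidate)
    return best

def sphere(v):
    return sum(x * x for x in v)

best = ssa_minimize(sphere, dim=3)   # converges toward the origin
```

In the paper's setting the objective would be the BPNN's validation error as a function of its weights, not a sphere function.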

21 pages, 3681 KB  
Article
Experiment-Driven Gaussian Process Surrogate Modeling and Bayesian Optimization for Multi-Objective Injection Molding
by Hanafy M. Omar and Saad M. S. Mukras
Polymers 2026, 18(8), 902; https://doi.org/10.3390/polym18080902 - 8 Apr 2026
Abstract
Injection molding process optimization has predominantly relied on simulation-generated data, which cannot capture machine-specific variability and stochastic process noise inherent in real manufacturing environments. This paper presents an experiment-driven machine learning framework for multi-objective optimization of injection molding process parameters targeting volumetric shrinkage, warpage, cycle time, and part weight. Physical experiments were conducted on an industrial injection molding machine using high-density polyethylene with a face-centered central composite design. Systematic benchmarking of four machine learning algorithms under identical cross-validation protocols identified Gaussian process regression as the best-performing surrogate model for the majority of quality metrics, while warpage prediction remained challenging across all algorithms due to its complex thermo-mechanical origins. Permutation-based feature importance analysis established a clear parameter hierarchy, identifying holding time as the dominant factor governing multiple quality responses. Constrained Bayesian optimization with progressive constraint tightening was employed to identify optimal parameter sets and fundamental process capability boundaries. The resulting parameter configurations were validated against a held-out test set. This work demonstrates that rigorous, data-driven optimization using exclusively experimental data provides a viable and practically achievable alternative to simulation-based approaches, contributing to experiment-centric smart manufacturing in polymer processing. Full article
(This article belongs to the Section Polymer Processing and Engineering)
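Bayesian optimization of the kind described selects the next trial by maximizing an acquisition function over the surrogate's posterior; expected improvement is the standard choice and is sketched here. The abstract does not state which acquisition the authors use, so treat this as illustrative.

```python
import math

def expected_improvement(mu, sigma, best_so_far):
    """Expected improvement for a minimization problem, given a
    Gaussian-process posterior mean mu and standard deviation sigma at a
    candidate parameter set (e.g. a holding-time / pressure setting)."""
    if sigma <= 0:
        return max(best_so_far - mu, 0.0)
    z = (best_so_far - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (best_so_far - mu) * cdf + sigma * pdf

# Hypothetical candidate: predicted shrinkage 11.8 +/- 0.4 vs best seen 12.0
ei = expected_improvement(mu=11.8, sigma=0.4, best_so_far=12.0)
```

The two terms balance exploitation (mean below the incumbent) against exploration (high posterior uncertainty), which is what makes the approach data-efficient when every evaluation is a physical molding run.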

25 pages, 3820 KB  
Article
Ensemble Machine Learning Predicts Platinum Resistance in Ovarian Cancer Using Laboratory Data
by Xueting Peng, Yangyang Zhang, Chaoyu Zhu, Weijie Chen, Xiaohua Wu, Fan Zhong, Qinhao Guo and Lei Liu
Cancers 2026, 18(8), 1190; https://doi.org/10.3390/cancers18081190 - 8 Apr 2026
Abstract
Objectives: Platinum resistance remains a critical bottleneck in ovarian cancer management, yet reliable pre-treatment predictive tools are lacking. Existing markers like the platinum-free interval are retrospective, while genomic profiling is often cost-prohibitive. This study aimed to develop an accessible, machine learning-based dynamic weighted fusion (DWF) model using routine laboratory data to provide bidirectional risk stratification, particularly to reliably rule out platinum resistance before treatment initiation. Methods: In this retrospective study (2019–2023), seventy baseline clinical features were collected to differentiate platinum-resistant from platinum-sensitive ovarian cancer patients. We developed a DWF framework that dynamically integrates the top-performing classifiers from a library of 168 algorithms (combining 14 feature selection and 12 machine learning methods). Class imbalance was addressed via oversampling, and model efficacy was evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity. Results: The DWF model achieved a robust AUC of 0.760 (95% CI: 0.683–0.837), outperforming all individual base classifiers. Subgroup analysis demonstrated highly consistent overall discrimination across initial treatment strategies (AUC of 0.755 for primary debulking surgery and 0.761 for neoadjuvant chemotherapy). Feature interpretation highlighted that resistance is driven by synergistic dysregulation of systemic inflammation and hypercoagulability, rather than single biomarkers. Conclusions: The proposed DWF model effectively leverages low-cost, standardized clinical data to serve as a robust bidirectional stratification tool. Its exceptional ability to rule out resistance provides clinicians with the evidence-based confidence to proceed with standard therapies, while its high-risk alerts identify candidates for early therapeutic adjustments and enhanced surveillance in ovarian cancer care. Full article
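One simple way to realize a weighted fusion of base classifiers is to weight each model's predicted probability by its validation AUC. The sketch below uses invented scores and is only a stand-in for the authors' dynamic weighting scheme.

```python
def weighted_fusion(probs, aucs):
    """Fuse per-model predicted probabilities with weights proportional to
    each model's validation AUC (a simple stand-in for dynamic weighting)."""
    total = sum(aucs)
    weights = [a / total for a in aucs]
    return [sum(w * p[i] for w, p in zip(weights, probs))
            for i in range(len(probs[0]))]

# Two hypothetical base classifiers scoring two patients, AUCs 0.75 and 0.25
fused = weighted_fusion([[0.9, 0.1], [0.6, 0.4]], [0.75, 0.25])
```

Performance-proportional weights let a stronger classifier dominate the fused risk score without discarding the weaker one entirely.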

19 pages, 2572 KB  
Article
Evaluating and Optimizing Air Quality Forecasting for Critical Particulate Matter Episodes in the Santiago Metropolitan Region, Chile
by Luis Alonso Díaz-Robles, Marcelo Oyaneder, Julio López, Ariel Meza, Serguei Alejandro-Martin, Rasa Zalakeviciute, Diana Yánez, Andrea Espinoza-Pérez, Lorena Espinoza-Pérez, Ernesto Pino-Cortés and Fidel Vallejo
Sustainability 2026, 18(8), 3652; https://doi.org/10.3390/su18083652 - 8 Apr 2026
Abstract
Severe wintertime particulate pollution (PM10 and PM2.5) affects the Santiago Metropolitan Region in Chile and is intensified by basin topography and frequent thermal inversions. Local authorities rely on the Critical Episodes Management (CEM) forecasting system, yet its predictive performance is variable. This study assesses CEM to identify operational vulnerabilities and propose data-driven improvements for urban air-quality governance. Approximately 1.2 million hourly meteorological and air-quality records (2017–2022) were analyzed using Generalized Additive Models (GAMs) to characterize key nonlinear relationships, and we evaluated the operational skill of the Cassmassi-1 PM10 model and the WRF-Chem-based PM2.5 forecasting component used by the system. Cassmassi-1 missed more than 50% of critical episodes and showed a false-alarm rate above 60%, consistent with limitations associated with static or incomplete emission representations. By contrast, the WRF-Chem-based component achieved episode prediction accuracy above 70%. GAM results indicate that wind speeds below 2 m s⁻¹, high diurnal temperature range, and relative humidity below 65% are strongly associated with extreme events. Considering these results, we recommend transitioning to nonlinear forecasting approaches that explicitly incorporate these meteorological thresholds and vertical stability indicators to improve alert reliability, strengthen urban resilience, and reduce population exposure. Full article
(This article belongs to the Special Issue Sustainable Air Quality Management and Monitoring)
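The reported thresholds can be read as a simple alert rule, and the miss and false-alarm figures come from standard contingency counts. In the sketch below, the 10 °C diurnal-range cutoff is an invented placeholder (the abstract quantifies only the wind and humidity thresholds).

```python
def episode_risk(wind_ms, diurnal_range_c, rh_pct):
    """Threshold rule distilled from the GAM findings: calm winds, a large
    diurnal temperature range, and dry air jointly signal a critical PM
    episode. The 10 C range cutoff is illustrative; the wind and humidity
    thresholds are the ones reported."""
    return wind_ms < 2.0 and diurnal_range_c > 10.0 and rh_pct < 65.0

def miss_and_false_alarm_rates(predicted, observed):
    """Skill metrics of the kind used to judge the operational forecasts."""
    misses = sum(1 for p, o in zip(predicted, observed) if o and not p)
    false_alarms = sum(1 for p, o in zip(predicted, observed) if p and not o)
    return misses / sum(observed), false_alarms / sum(predicted)

# Invented four-day record: alerts issued vs episodes that actually occurred
predicted = [True, True, False, True]
observed = [True, False, False, True]
miss_rate, false_alarm_rate = miss_and_false_alarm_rates(predicted, observed)
```

Separating miss rate from false-alarm rate matters operationally: misses expose the population, while false alarms erode compliance with restrictions.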

22 pages, 2332 KB  
Article
A Multi-Model Machine Learning Framework for Predicting and Ranking High-Risk Urban Intersections in Riyadh
by Saleh Altwaijri, Saleh Alotaibi, Faisal Alosaimi, Adel Almutairi and Abdulaziz Alauany
Sustainability 2026, 18(8), 3651; https://doi.org/10.3390/su18083651 - 8 Apr 2026
Abstract
Road traffic accidents at intersections pose a persistent challenge in Riyadh, Saudi Arabia, contributing significantly to public health burdens and economic losses. Traditional statistical approaches often fail to capture the complex, non-linear interactions among geometric design, traffic parameters, and accident severity. This study develops a multi-methodological machine learning framework to predict intersection accident severity using the Equivalent Property Damage Only (EPDO) metric. Historical data (2017–2023) from Riyadh Municipality for 150 high-risk intersections were analyzed, incorporating predictors such as service road distance (SRD), U-turn distance (UTD), median width (MW), peak hour volume (PHV), heavy vehicle percentage (HV%), and injury/frequency counts. Six algorithms, i.e., Decision Tree, Random Forest, Gradient Boosting, Support Vector Machine, Linear Regression, and Artificial Neural Network, were compared using a 70/30 train–test split and k-fold cross-validation. The Gradient Boosting model achieved superior performance (R2 = 0.89 with MSE = 63.43 and RMSE = 7.96) and was selected for final deployment. SHAP feature importance analysis revealed minor injuries (MIs), serious injuries (SRIs), and fatalities (FAs) as the dominant predictors, with geometric factors (UTD, MW) and traffic composition (HV%) providing actionable infrastructure insights. The model ranked intersections and identified the “Jeddah Road with Taif Road” intersection (predicted EPDO = 137.22) as the highest-risk location. Evidence-based recommendations include enforcing minimum 300 m U-turn buffers, staggering service road exits ≥ 150 m, and restricting heavy vehicles during peak hours. The scalable framework developed in this study supports the data-driven prioritization of safety interventions, aligns with sustainable urban mobility goals, and offers transferability to other metropolitan contexts worldwide. Full article
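EPDO is a weighted crash count that expresses casualty crashes in property-damage-only equivalents; the weights below are illustrative only, since agencies calibrate them from local crash-cost data and the paper's weights are not given in this listing.

```python
def epdo_score(fatalities, serious, minor, pdo_crashes, w=(100.0, 10.0, 5.0)):
    """Equivalent Property Damage Only score for an intersection: casualty
    crashes weighted relative to a property-damage-only crash.
    The weights (fatal, serious, minor) are illustrative placeholders."""
    w_fatal, w_serious, w_minor = w
    return (w_fatal * fatalities + w_serious * serious
            + w_minor * minor + pdo_crashes)

# Hypothetical intersection: 1 fatal, 2 serious, 3 minor, 4 PDO crashes
score = epdo_score(1, 2, 3, 4)
```

Collapsing severity into a single scalar is what lets the framework rank intersections and target the highest-EPDO sites first.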

25 pages, 4741 KB  
Article
An Edge-Enabled Predictive Maintenance Approach Based on Anomaly-Driven Health Indicators for Industrial Production Systems
by Bouzidi Lamdjad and Adem Chaiter
Algorithms 2026, 19(4), 286; https://doi.org/10.3390/a19040286 - 8 Apr 2026
Abstract
This study develops a data-driven framework for predictive maintenance and prognostic health management in industrial systems using edge-enabled predictive algorithms. The objective is to support early identification of abnormal operating conditions and improve maintenance decision making under real production environments. The proposed approach combines edge-level monitoring, anomaly detection, and predictive modeling to analyze operational signals and estimate system health conditions from high-frequency industrial data. Empirical validation was conducted using operational datasets collected from two industrial production facilities between 2024 and 2025. The model evaluates patterns associated with operational instability and degradation-related anomalies and translates them into interpretable health indicators that can support proactive intervention. The empirical results show strong predictive performance, with R2 reaching 0.989, a mean absolute percentage error of 3.67%, and a root mean square error of 0.79. In addition, the mitigation of early anomaly signals was associated with an observed improvement of approximately 3.99% in system stability. Unlike many existing studies that treat anomaly detection, predictive modeling, and prognostic analysis as separate tasks, the proposed framework connects these stages within a unified analytical structure designed for deployment in industrial environments. The findings indicate that edge-generated anomaly signals can provide meaningful early information about potential system deterioration and can assist in planning timely maintenance actions even when explicit failure labels are limited. The study contributes to the development of scalable predictive maintenance solutions that integrate artificial intelligence with edge-based industrial monitoring systems. Full article
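An anomaly-driven health indicator of the kind described can be sketched as rolling z-score flags folded into a bounded score. The window, threshold, and decay constants below are invented, as the paper's construction is not given in this listing.

```python
from statistics import mean, stdev

def health_indicator(signal, window=5, z_thresh=3.0):
    """Rolling z-score anomaly flags folded into a health score in [0, 1]:
    1.0 = nominal, falling toward 0 as anomalies accumulate, with slow
    recovery during stable operation. Constants are illustrative."""
    score = 1.0
    for i in range(window, len(signal)):
        hist = signal[i - window:i]
        mu, sd = mean(hist), stdev(hist)
        z = (signal[i] - mu) / sd if sd > 0 else 0.0
        if abs(z) > z_thresh:
            score = max(0.0, score - 0.2)   # anomaly: degrade the indicator
        else:
            score = min(1.0, score + 0.01)  # nominal: slow recovery
    return score

baseline = [0.0, 1.0] * 10      # healthy, regularly varying sensor signal
spiked = list(baseline)
spiked[12] = 50.0               # a single degradation-like excursion

healthy = health_indicator(baseline)
degraded = health_indicator(spiked)
```

Mapping raw anomaly flags onto a smoothed, bounded indicator is what makes the signal usable for maintenance planning even without explicit failure labels.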
25 pages, 738 KB  
Article
Investigating Decision-Support Chatbot Acceptance Among Professionals: An Application of the UTAUT Model in a Marketing and Sales Context
by Sven Kottmann and Jürgen Seitz
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 113; https://doi.org/10.3390/jtaer21040113 - 7 Apr 2026
Abstract
This study investigates the acceptance of an AI-powered decision-support chatbot among professionals in a marketing and sales context, addressing a gap in technology acceptance research by examining data-intensive decision environments that remain underexplored. Building on the Unified Theory of Acceptance and Use of Technology (UTAUT), the study proposes an extended model incorporating Behavioral Intention, Performance Expectancy, Effort Expectancy, Social Influence, Output Quality, Time Saving, Source Trustworthiness, Cognitive Load, and Chatbot Self-Efficacy. An experimental study was conducted with 106 professionals using a chatbot-enhanced business analytics platform to complete marketing KPI analysis tasks. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results demonstrate that Behavioral Intention to use decision-support chatbots is significantly influenced by Performance Expectancy, Effort Expectancy, and Social Influence. Performance Expectancy is strongly driven by Output Quality, Time Saving, and Source Trustworthiness, while Effort Expectancy is significantly shaped by reduced Cognitive Load and higher Chatbot Self-Efficacy. The findings suggest that chatbot acceptance in professional decision-making depends not only on usability and performance beliefs but also on cognitive relief, trust in information sources, and efficiency gains, highlighting important implications for both theory and the design of AI-based decision-support systems.
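PLS-SEM estimates standardized path coefficients between latent constructs via iterative outer weighting. As a rough illustration of what a structural path estimate represents, ordinary least squares on standardized composite scores yields a comparable quantity. This is a toy stand-in, not the PLS-SEM algorithm itself; the construct names are taken from the abstract, and the data below are synthetic:

```python
import numpy as np

def standardize(v):
    return (v - v.mean()) / v.std()

def path_coefficients(X, y):
    """Standardized regression weights, a rough analogue of structural
    path coefficients (true PLS-SEM iterates outer weights over
    indicator blocks, which is omitted in this sketch)."""
    Xs = np.column_stack([standardize(X[:, j]) for j in range(X.shape[1])])
    beta, *_ = np.linalg.lstsq(Xs, standardize(y), rcond=None)
    return beta

# Synthetic example: Behavioral Intention driven by Performance
# Expectancy and Effort Expectancy (coefficients chosen arbitrarily)
rng = np.random.default_rng(0)
pe = rng.normal(size=1000)                          # Performance Expectancy
ee = rng.normal(size=1000)                          # Effort Expectancy
bi = 0.5 * pe + 0.3 * ee + 0.1 * rng.normal(size=1000)
paths = path_coefficients(np.column_stack([pe, ee]), bi)
```

Because both predictors and outcome are standardized, the recovered weights are scaled by the outcome's standard deviation, which is why dedicated tools (e.g. SmartPLS) are used for full models with measurement blocks and significance bootstrapping.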
(This article belongs to the Special Issue Emerging Technologies and Marketing Innovation)
30 pages, 7627 KB  
Article
An Experimental and Numerical Simulation Study on a Three-Hydraulic-Cylinder Synchronous Steering Offset Actuator Driven by a Drilling Fluid Rotary Valve Distributor
by Junfeng Kang, Gonghui Liu, Tian Chen, Chunqing Zha, Wei Wang and Lincong Wang
Appl. Sci. 2026, 16(7), 3612; https://doi.org/10.3390/app16073612 - 7 Apr 2026
Abstract
The rotary steerable system (RSS) is the core equipment for precise wellbore trajectory control in deep oil and gas drilling, and its performance is directly determined by the coordination and adaptability of the tool’s offset actuator and control platform. To overcome the limitations of complex control architectures and low positioning accuracy of conventional offset actuators for rotary steering drilling tools, a novel three-hydraulic-cylinder synchronous steering offset actuator driven by a drilling fluid rotary valve distributor, along with its dedicated control strategy, is proposed. Laboratory experiments and numerical simulations are performed to analyze the piston displacement characteristics of the three hydraulic cylinders under different drilling fluid flow rates and rotary valve rotational speeds. The results demonstrate that the proposed actuator exhibits controllable piston displacement behavior. The simulated and experimental data show consistent variation tendencies with a relative error of less than 8%, thus validating the reliability of the proposed numerical model. Increasing the flow rate from 1 L/s to 1.5 L/s increases the cycle-averaged peak-to-peak piston displacement by 14.5 mm, while raising the rotational speed from 60 rpm to 120 rpm reduces it by 25.3 mm, corresponding to a dogleg severity variation of approximately 1.9–3.1°/30 m. Piston displacement deviations are mainly attributed to valve port machining tolerance, drilling fluid compressibility, pipeline pressure loss, and internal leakage, and these discrepancies are exacerbated as the rotary valve speed or flow rate increases. Finally, optimization strategies for improving synchronization performance are proposed, thereby providing theoretical and technical support for the engineering implementation and parameter optimization of the proposed actuator.
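The sub-8% agreement claim amounts to comparing a cycle characteristic, such as the peak-to-peak piston displacement, between simulation and experiment. A small sketch of that comparison; the metric definition is an assumption, since the paper's exact error formula is not given here, and the displacement traces below are hypothetical:

```python
def peak_to_peak(series):
    # Peak-to-peak amplitude of a displacement trace over one cycle (mm)
    return max(series) - min(series)

def relative_error_pct(simulated, measured):
    # Relative deviation of simulated vs. measured peak-to-peak amplitude
    ref = peak_to_peak(measured)
    return 100.0 * abs(peak_to_peak(simulated) - ref) / ref

# Hypothetical displacement traces (mm) over one rotary-valve cycle
sim = [0.0, 9.8, 19.5, 9.8, 0.0]
meas = [0.0, 10.4, 20.8, 10.4, 0.0]
err = relative_error_pct(sim, meas)  # 6.25% here, under the 8% bound
```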
(This article belongs to the Special Issue Development of Intelligent Software in Geotechnical Engineering)
25 pages, 3942 KB  
Article
Deep Reinforcement Learning-Based Scheduling for an Electric–Hydrogen Integrated Station Using a Data-Driven Electrolyzer Model
by Dongdong Li, Liang Liu and Haiyu Liao
Appl. Sci. 2026, 16(7), 3605; https://doi.org/10.3390/app16073605 - 7 Apr 2026
Abstract
To address the inaccurate scheduling of electric–hydrogen integrated stations (EHISs) caused by the limited accuracy of conventional mechanistic models for proton exchange membrane (PEM) electrolyzers, this study proposes a deep reinforcement learning (DRL)-based scheduling strategy incorporating a data-driven electrolyzer model. First, a deep XGBoost model is developed to characterize the hydrogen production behavior of the PEM electrolyzer, thereby replacing the traditional mechanistic model and reducing prediction errors. Second, the EHIS scheduling problem is formulated as a constrained Markov decision process (CMDP) that explicitly considers user demand and carbon emission constraints. Third, an improved deep Q-network (DQN) algorithm integrating Lagrangian relaxation and the template policy-based reinforcement learning (TPRL) method is designed to solve the scheduling problem, which enhances convergence speed and generalization performance under similar operating scenarios. The simulation results demonstrate that the proposed method can effectively alleviate the decision-making risks introduced by model inaccuracies and significantly improve the operational profitability of the station while satisfying user demand and carbon emission constraints.
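Lagrangian relaxation of a CMDP typically folds the constraint violation into the reward and updates the multiplier by dual ascent. A minimal sketch of that mechanism; this is an assumed generic form, as the paper's exact penalty and multiplier update are not specified here:

```python
def lagrangian_reward(profit, emission, emission_cap, lam):
    """Penalized reward r' = r - lambda * max(0, g(s, a)), where
    g is the carbon-emission constraint violation for this step."""
    violation = max(0.0, emission - emission_cap)
    return profit - lam * violation

def dual_ascent_step(lam, avg_violation, lr=0.01):
    """Multiplier update lambda <- max(0, lambda + lr * avg_violation),
    projected onto the nonnegative orthant."""
    return max(0.0, lam + lr * avg_violation)
```

The DQN then maximizes the penalized reward while the multiplier grows whenever average emissions exceed the cap, pushing the learned policy back toward feasibility.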
(This article belongs to the Section Electrical, Electronics and Communications Engineering)