Search Results (1,315)

Search Parameters:
Keywords = pipeline area

18 pages, 35497 KB  
Article
Hierarchical YOLO-SAM: A Scalable Pipeline for Automated Segmentation and Morphometric Tracking of Coral Recruits in Time-Series Microscopy
by Richard S. Zhao, Cuixian Chen, Meg Van Horn and Nicole D. Fogarty
Sensors 2026, 26(8), 2291; https://doi.org/10.3390/s26082291 - 8 Apr 2026
Abstract
Coral reef ecosystems are declining rapidly due to climate change, disease, and anthropogenic stressors, driving the expansion of land-based coral propagation for reef restoration. A major bottleneck in these efforts is the manual measurement of coral recruit tissue area from microscopy images, which requires 2–7 min per image and limits scalability. We present a hierarchical deep learning pipeline that automates this measurement by integrating YOLO-based detection with Segment Anything Model (SAM) segmentation. YOLO localizes recruits and classifies them by developmental stage; stage-specific fine-tuned SAM models then segment live tissue using bounding box and background point prompts to suppress segmentation leakage and improve boundary precision. Surface area is computed directly from the segmented masks using pixel size extracted from image metadata. The pipeline reduces processing time to approximately 3–5 s per image—a 24–140× speedup over manual tracing. Evaluated on 3668 microscopy images from two national coral research facilities, the system achieves a mean IoU exceeding 95% and an auto-acceptance rate (AAR) of 71.51%, where predicted-to-ground-truth area ratios fall within a ±5% tolerance of expert annotation, substantially reducing manual workload while maintaining measurement reliability across species, developmental stages, and imaging conditions. This workflow addresses a critical bottleneck in restoration research and demonstrates the broader applicability of AI-based image analysis in marine ecology. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies—Second Edition)
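
The two-stage measurement described in this abstract (YOLO localization followed by SAM mask segmentation, with area derived from mask pixel counts and the metadata pixel size) can be sketched roughly as below. This is a hedged illustration assuming the ultralytics and segment-anything Python APIs; the weight files, prompt strategy, and metadata handling are placeholders, not the authors' released pipeline.

```python
# Rough sketch of a detect-then-segment area measurement; weight paths are placeholders.
import numpy as np
from ultralytics import YOLO                                     # stage 1: recruit detection
from segment_anything import sam_model_registry, SamPredictor    # stage 2: tissue segmentation

def tissue_areas_mm2(image: np.ndarray, pixel_size_mm: float,
                     detector: YOLO, predictor: SamPredictor) -> list[float]:
    """Return one live-tissue surface-area estimate (mm^2) per detected recruit."""
    predictor.set_image(image)                                   # image is an RGB uint8 array
    areas = []
    for box in detector(image)[0].boxes.xyxy.cpu().numpy():
        masks, _, _ = predictor.predict(box=box, multimask_output=False)
        areas.append(float(masks[0].sum()) * pixel_size_mm ** 2) # pixel count x physical pixel area
    return areas

# Hypothetical setup: stage-specific SAM checkpoints and a recruit detector would replace these.
detector = YOLO("recruit_detector.pt")
predictor = SamPredictor(sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth"))
```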

17 pages, 33215 KB  
Data Descriptor
ANAID: Autonomous Naturalistic Obstacle-Avoidance Interaction Dataset
by Manuel Garcia-Fernandez, Maria Juarez Molera, Adrian Canadas Gallardo, Nourdine Aliane and Javier Fernandez Andres
Data 2026, 11(4), 77; https://doi.org/10.3390/data11040077 - 8 Apr 2026
Abstract
This paper presents ANAID (Autonomous Naturalistic obstacle-Avoidance Interaction Dataset), a new multimodal dataset designed to support research on autonomous driving, particularly with regard to obstacle avoidance and naturalistic driver–vehicle interaction. Data were collected using a Hyundai Tucson Hybrid equipped with a Comma-3X autonomous-driving development kit, combining high-resolution front-facing video with detailed CAN-bus telemetry. The dataset comprises four data collection campaigns, each corresponding to a single continuous driving session, yielding a total of 208 videos and 240,014 synchronized frames. In addition to the video data, the dataset provides vehicle state measurements (speed, acceleration, steering, pedal positions, turn signals, etc.) and an additional annotation layer identifying evasive maneuvers derived from steering-related signals. Data were recorded across four driving campaigns on an urban circuit at Universidad Europea de Madrid, capturing diverse real-world scenarios such as roundabouts, intersections, pedestrian areas, and segments requiring obstacle avoidance. A multi-stage processing pipeline aligns telemetry and visual data, extracts frames at 20 FPS, and detects evasive maneuvers using threshold-based time-series analysis. ANAID provides a fully aligned and non-destructive representation of naturalistic driving behavior, enabling research on control prediction, driver modeling, anomaly detection, and human–autonomy interaction in realistic traffic conditions. Full article
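
As a rough illustration of the threshold-based maneuver detection described above, the snippet below flags samples where both the steering angle and its rate of change exceed fixed thresholds. The column name, the 20 Hz sampling rate, and the threshold values are assumptions made for the sketch, not the dataset's actual annotation rule.

```python
import pandas as pd

def flag_evasive(telemetry: pd.DataFrame, rate_hz: float = 20.0,
                 angle_thresh_deg: float = 15.0, rate_thresh_dps: float = 60.0) -> pd.Series:
    """Mark telemetry samples where steering angle and steering rate both exceed thresholds."""
    angle = telemetry["steering_angle_deg"]
    steering_rate = angle.diff().abs() * rate_hz      # finite-difference steering rate, deg/s
    return (angle.abs() > angle_thresh_deg) & (steering_rate > rate_thresh_dps)
```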

28 pages, 7099 KB  
Article
AI-Driven Tethered Drone Surveillance for Maritime Security in Ports and Coastal Areas
by Alberto Belmonte-Hernández, Briac Grauby, Anaida Fernández García, Solange Tardi, Torbjørn Houge, Hidalgo García Bango and Álvaro Gutiérrez
Drones 2026, 10(4), 268; https://doi.org/10.3390/drones10040268 - 8 Apr 2026
Abstract
Effective port and coastal surveillance require persistent monitoring, flexible deployment, and reliable target detection in dynamic maritime environments. This paper presents a system- and deployment-oriented autonomous tethered drone architecture, integrated with AI-based perception, for persistent maritime surveillance in ports and coastal areas. Mounted on a moving maritime platform and powered through a tether, the drone provides a persistent elevated viewpoint without the endurance limitations of conventional battery-powered Unmanned Aerial Vehicles (UAVs). The system combines maritime platform integration, tethered flight operation, fail-safe and safety mechanisms, and a distributed Artificial Intelligence (AI) pipeline for real-time object detection and tracking. The perception module is based on YOLOv8m for vessel detection and BoT-SORT for multi-object tracking, enabling continuous monitoring of maritime targets in realistic operational scenarios. Field trials conducted from moving vessels in maritime environments demonstrate autonomous take-off and landing, stable surveillance operation under realistic wind and wave conditions, and effective vessel detection and tracking on real image sequences. The results show the potential of AI-enabled tethered drone surveillance as a persistent and operationally relevant tool for maritime monitoring and security. Full article
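
A minimal detection-and-tracking loop in the spirit of the YOLOv8m + BoT-SORT perception module might look as follows, assuming the ultralytics tracking API; the weight file and video source are placeholders rather than the authors' trained models or field data.

```python
from ultralytics import YOLO

model = YOLO("yolov8m.pt")   # a vessel-detection checkpoint would replace the generic weights
for result in model.track(source="port_feed.mp4", tracker="botsort.yaml", stream=True):
    for box in result.boxes:
        if box.id is not None:                        # BoT-SORT assigns persistent track IDs
            print(int(box.id), box.xyxy[0].tolist())  # track ID and bounding box per vessel
```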

29 pages, 2990 KB  
Article
Federated and Interpretable AI Framework for Secure and Transparent Loan Default Prediction in Financial Institutions
by Awad M. Awadelkarim
Math. Comput. Appl. 2026, 31(2), 56; https://doi.org/10.3390/mca31020056 - 5 Apr 2026
Viewed by 226
Abstract
Predicting loan defaults is a significant challenge for financial institutions; however, current machine learning techniques often encounter issues in areas such as data privacy, cross-institutional cooperation, and model transparency. Practical deployment of advanced predictive models is restricted by centralized training paradigms, which raise regulatory and confidentiality concerns, and by black-box decision making, which diminishes confidence in automated credit risk tools. This study mitigates these problems by adopting a federated-inspired decentralized ensemble learning model combined with explainable artificial intelligence (XAI) for predicting loan defaults. Several machine learning classifiers, including K-Nearest Neighbors, support vector machine, random forest, and XGBoost, are trained on partitioned institutional data without any data sharing. A prediction-level aggregation strategy simulates collaborative decision making while keeping each institution's data local. SHAP and LIME promote model interpretability by providing both global and local explanations of prediction outcomes. The proposed framework was tested on a large public loan dataset containing more than 116,000 records with various financial and borrower-related features. The experimental findings show that XGBoost delivers high and reliable predictive accuracy in both centralized and decentralized scenarios, achieving 99.7% accuracy under federated-inspired evaluation. The explanation analysis identifies interest rate spread and upfront charges as the most significant predictors of loan default risk. The main contributions of this research are as follows: (i) a privacy-preserving decentralized ensemble learning framework applicable in multi-institutional financial contexts, (ii) a detailed comparison of centralized and decentralized predictive performance, and (iii) an XAI pipeline that increases transparency and regulatory confidence in automated credit risk evaluation. These results show that decentralized learning combined with explainable AI can deliver high-performing, transparent, and privacy-preserving loan default prediction systems for real-world banking practice. Full article
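
A minimal sketch of the prediction-level aggregation idea, assuming scikit-learn-style estimators: each partition trains its own model on local data and only predictions are combined. The partitioning and the majority-vote rule here are illustrative choices, not the paper's exact protocol.

```python
import numpy as np
from xgboost import XGBClassifier

def federated_predict(partitions, X_test):
    """Train one model per institution's local partition, then majority-vote the predictions."""
    local_models = []
    for X_local, y_local in partitions:               # raw records never leave their partition
        model = XGBClassifier(n_estimators=200, eval_metric="logloss")
        local_models.append(model.fit(X_local, y_local))
    votes = np.stack([m.predict(X_test) for m in local_models])
    return (votes.mean(axis=0) >= 0.5).astype(int)    # aggregated default / no-default decision
```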

24 pages, 4159 KB  
Article
A UAV–Satellite Hybrid Pipeline for Wildfire Detection and Dynamic Perimeter Prediction
by Hossein Keshmiri and Khan A. Wahid
Drones 2026, 10(4), 263; https://doi.org/10.3390/drones10040263 - 4 Apr 2026
Viewed by 296
Abstract
Effective wildfire management demands seamless integration of real-time detection and long-term spread forecasting. This paper proposes a novel power-efficient UAV–satellite hybrid pipeline that synergizes the agility of UAVs with the scale of satellite intelligence. The system begins with a dashboard-guided, multi-UAV detection module that scores fire likelihood from historical satellite data and enables scalable, energy-efficient deployment with low-latency onboard processing. This aerial component ensures persistent surveillance and reliable ignition detection, supported by a Dual LoRa (Long Range) communication scheme for robust and low-power connectivity. It achieves an F1-score of 97.4% while minimizing power consumption to extend operational flight times. Following detection, the pipeline transitions to a dynamic perimeter-prediction phase utilizing a custom Canadian boreal dataset. We employ a Squeeze-and-Excitation Residual U-Net (SE-ResUNet) to model spatiotemporal fire propagation based on static terrain and dynamic environmental features. The model was validated using a dynamic simulation framework that evaluates temporal consistency and convergence behavior against final cumulative burned-area masks, effectively addressing the absence of daily ground truth. Under these conditions, the model achieves a recall of 84% and an AUC of 0.97, demonstrating a strong capability to delineate active fire fronts. By coupling dashboard-driven UAV sensing with satellite-based predictive modeling, this work establishes a modular, foundational framework to support data-scarce forecasting in modern wildfire management. Full article
(This article belongs to the Special Issue UAVs and UGVs Robotics for Emergency Response in a Changing Climate)
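
For readers unfamiliar with the squeeze-and-excitation idea behind SE-ResUNet, the block below is a standard PyTorch implementation of channel-wise recalibration; the channel count and reduction ratio are generic defaults, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight feature-map channels by globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool per channel
        return x * weights[:, :, None, None]           # excite: channel-wise rescaling
```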

8 pages, 540 KB  
Proceeding Paper
A Federated Learning Approach for Privacy-Preserving Automated Signature Verification
by Haris Veraros, Fotios Zantalis, Stylianos Katsoulis, Elias N. Zois and Grigorios Koulouras
Eng. Proc. 2026, 124(1), 100; https://doi.org/10.3390/engproc2026124100 - 1 Apr 2026
Viewed by 316
Abstract
The growing interconnectivity of digital systems has led to the massive collection and centralization of sensitive data, raising serious concerns about confidentiality and compliance with privacy regulations. Biometric authentication systems, such as offline signature verification, are particularly vulnerable. Federated learning (FL) provides a promising framework by enabling model training without exposing raw client data. However, keeping data strictly localized inherently creates severe data scarcity, which is a significant barrier to building robust deep learning (DL) models. This work investigates the feasibility of a privacy-preserving writer-dependent (WD) offline signature verification (OSV) system within an FL framework. To make local training viable under these constraints, we integrate complementary techniques into the federated pipeline: data augmentation is utilized to increase local sample diversity, while transfer learning provides robust pre-trained feature representations, drastically reducing the volume of data required for effective local fine-tuning. The proposed WD-OSV system was trained and evaluated on the popular CEDAR signature dataset, for which an average area under the curve of 0.8893, along with an average binary accuracy (ACC) of 80.12%, are reported as preliminary results. Full article
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
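
The local-client recipe the abstract outlines (augmentation plus a pre-trained backbone fine-tuned on few signatures) could be set up roughly as below, assuming torchvision; the backbone choice, transform parameters, and two-class head are illustrative assumptions rather than the authors' configuration.

```python
import torch.nn as nn
from torchvision import models, transforms

# data augmentation to enlarge the small local signature set (parameters are illustrative)
augment = transforms.Compose([
    transforms.RandomAffine(degrees=5, translate=(0.05, 0.05), scale=(0.95, 1.05)),
    transforms.ToTensor(),
])

# transfer learning: freeze pre-trained features, train only a small genuine-vs-forged head
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
```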

17 pages, 2196 KB  
Article
Machine Learning-Based Static Ransomware Detection Using PE Header Features and SHAP Interpretation
by Gabryella Barnes and Ahmad Ghafarian
J. Cybersecur. Priv. 2026, 6(2), 58; https://doi.org/10.3390/jcp6020058 - 1 Apr 2026
Viewed by 334
Abstract
Cybercriminals use advanced techniques to launch attacks against organizations, disrupting normal business activities. Traditional signature-based malware detection methods are not effective at detecting ransomware. Therefore, the use of machine learning and deep learning for malware detection is becoming a major area of research. There are two types of malware detection strategies, namely, static and dynamic. This work investigates the task-dependent effectiveness of static PE header-based detection by systematically evaluating three binary classification problems of increasing difficulty: ransomware vs. benign, malware vs. benign, and ransomware vs. other malware families. An end-to-end machine learning pipeline is implemented, including dataset-specific preprocessing, class imbalance handling, model training, and evaluation using imbalance-aware metrics. Random Forest, Support Vector Machine, and XGBoost models are assessed across all tasks, with SHAP used to analyze feature contribution and explain performance degradation. The experimental results demonstrate that tree-based ensemble models, particularly XGBoost, achieve strong detection performance when class boundaries are structurally distinct, but they struggle when ransomware must be distinguished from structurally similar malware. The results indicate that static analysis based on PE header features can be a viable approach for pre-execution triage, but it exhibits clear limitations for fine-grained ransomware discrimination. Full article
(This article belongs to the Section Security Engineering & Applications)
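
A compressed sketch of one such task (a tree-based model with imbalance handling, explained with SHAP) is shown below on synthetic data standing in for PE-header feature vectors; it illustrates the workflow shape, not the paper's datasets or hyperparameters.

```python
import shap
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# synthetic, imbalanced stand-in for PE-header feature vectors
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)

model = XGBClassifier(
    n_estimators=300,
    scale_pos_weight=(y_train == 0).sum() / (y_train == 1).sum(),  # class-imbalance handling
)
model.fit(X_train, y_train)

explainer = shap.TreeExplainer(model)              # per-feature contributions to each prediction
shap_values = explainer.shap_values(X_test)
```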

24 pages, 16213 KB  
Article
Monitoring Remote Archaeological Sites Through Open-Access Satellite Datasets Against Natural Hazards—Case Study: Delos
by Ana Sofia Duțu, Vlad Florin Osztrovszky, Kyriakos Michaelides and Athos Agapiou
Heritage 2026, 9(4), 143; https://doi.org/10.3390/heritage9040143 - 31 Mar 2026
Viewed by 253
Abstract
This research presents a comprehensive multi-domain environmental assessment of Delos Island, a UNESCO World Heritage Site, through integration of long-term atmospheric and satellite remote sensing datasets. A significant methodological contribution of this research is the development of a cross-mission harmonization approach that enables the reconstruction of a continuous, multi-decadal atmospheric record. By implementing a hierarchical calibration pipeline to harmonise Ozone Monitoring Instrument (OMI) and Tropospheric Monitoring Instrument (TROPOMI) observations, the study effectively eliminated a 6.61-fold systematic instrument offset, producing a 21-year time series (2004–2025) of tropospheric NO2 concentrations. Simultaneously, a 24-year analysis (2000–2024) of coastline dynamics was conducted using the Landsat archive to quantify land area changes across the island and within a 1.03 km2 Archaeological Area of Interest (AOI). Results indicate that atmospheric NO2 concentrations stabilised following a 2015 peak, while coastal erosion represents a measurable risk to structural integrity. Net land loss of 18,400 m2 was documented within the AOI, driven by localised geomorphological factors and exposure to Meltemi winds. The results indicate that these environmental processes are physically independent yet collectively require a multilayered conservation strategy to protect vulnerable archaeological heritage from atmospheric pollution and coastal retreat. Furthermore, the research highlights the value of long-term satellite datasets spanning more than two decades for supporting heritage monitoring and management, especially in remote or hard-to-reach locations. Through the analysis of the spatial and temporal characteristics of these sensors, the research enables the identification of hazard proxies that can inform risk-aware decision-making. Full article
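
A cross-mission harmonization step of the kind described (removing a systematic multiplicative offset between OMI and TROPOMI) can be illustrated as below; a single scale factor estimated from an overlap period is a simplification of the paper's hierarchical calibration, and the series names are assumptions.

```python
import pandas as pd

def harmonize(omi_no2: pd.Series, tropomi_no2: pd.Series) -> pd.Series:
    """Rescale the older record so its overlap-period mean matches the newer instrument."""
    overlap = omi_no2.index.intersection(tropomi_no2.index)
    scale = tropomi_no2.loc[overlap].mean() / omi_no2.loc[overlap].mean()  # multiplicative offset
    newer_only = tropomi_no2.loc[~tropomi_no2.index.isin(overlap)]
    return pd.concat([omi_no2 * scale, newer_only]).sort_index()
```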

24 pages, 6716 KB  
Article
In-Situ Infrared Camera Monitoring for Defect and Anomaly Detection in Laser Powder Bed Fusion: Calibration, Data Mapping, and Feature Extraction
by Shawn Hinnebusch, David Anderson, Berkay Bostan and Albert C. To
Appl. Sci. 2026, 16(7), 3378; https://doi.org/10.3390/app16073378 - 31 Mar 2026
Viewed by 203
Abstract
Laser powder bed fusion (LPBF) is susceptible to defects arising from melt pool instabilities, spatter, heat accumulation, and powder spreading anomalies. In situ infrared (IR) monitoring can detect these issues; however, it typically generates large volumes of data that are costly to store and analyze. This work proposes a projection-based framework that directly maps in situ thermal measurements onto a three-dimensional (3D) voxelized part geometry, substantially reducing storage requirements while preserving spatial fidelity. In addition, several IR derived features are incorporated into a practical workflow for defect detection and process model calibration, including laser scan order, local pre-deposition temperature, maximum pre-scan temperature, and spatter generation and landing locations. For completeness, commonly used metrics such as interpass temperature, heat intensity, cooling rate, and relative melt pool area are extracted within the same unified processing pipeline. All features are computed using a consistent, reproducible Python-based implementation to streamline integration into routine monitoring and analysis tasks. Multiple parts are fabricated, monitored, and characterized to evaluate the proposed framework, demonstrating that the extracted features reliably identify process anomalies and correlate with observed defects. Full article
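
The projection idea (mapping per-pixel thermal readings onto a voxelized part geometry) can be sketched as a simple accumulation of the maximum observed value per voxel; the coordinate convention, voxel size, and max-value policy below are assumptions, not the authors' implementation.

```python
import numpy as np

def project_to_voxels(points_mm: np.ndarray, temps: np.ndarray,
                      grid_shape: tuple[int, int, int], voxel_mm: float) -> np.ndarray:
    """points_mm: (N, 3) part-frame coordinates; temps: (N,) IR readings mapped to those points."""
    volume = np.full(grid_shape, np.nan)
    idx = np.floor(points_mm / voxel_mm).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    for (i, j, k), t in zip(idx[inside], temps[inside]):
        volume[i, j, k] = t if np.isnan(volume[i, j, k]) else max(volume[i, j, k], t)
    return volume   # compact 3D summary instead of raw frame-by-frame IR video
```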

40 pages, 5095 KB  
Article
When Lie Groups Meet Hyperspectral Images: Equivariant Manifold Network for Few-Shot HSI Classification
by Haolong Ban, Junchao Feng, Zejin Liu, Yue Jiang, Zhenxing Wang, Jialiang Liu, Yaowen Hu and Yuanshan Lin
Sensors 2026, 26(7), 2117; https://doi.org/10.3390/s26072117 - 29 Mar 2026
Viewed by 293
Abstract
Hyperspectral imagery (HSI) offers rich spectral signatures and fine-grained spatial structures for remote sensing, but practical HSI classification is often constrained by scarce labels and complex geometric disturbances, including translation, rotation, scaling, and shear. Existing deep models are typically developed under Euclidean assumptions and rely on data-hungry training pipelines, which makes them brittle in the few-shot regime. To address this challenge, we propose EMNet, a Lie-group-based Equivariant Manifold Network for few-shot HSI classification that explicitly encodes geometric invariance and improves discriminative accuracy. EMNet couples an SE(2)-based Equivariance-Guided Module (EGM) to enforce equivariance to translations and rotations with an affine Lie-group-based Characteristic Filtering Convolution (CFC) that models scaling and shearing on the feature manifold while adaptively suppressing redundant responses. Extensive experiments on WHU-Hi-HongHu, Houston2013, and Indian Pines demonstrate state-of-the-art performance with competitive complexity, achieving OAs of 95.77% (50 samples/class), 97.37% (50 samples/class), and 96.09% (5% labeled samples), respectively, and yielding up to +3.34% OA, +6.01% AA, and +4.14% Kappa over the strong DGPF-RENet baseline. Under a stricter 25-samples-per-class protocol with 10 repeated random hold-out splits, EMNet consistently improves the mean accuracy while exhibiting lower variance, indicating better stability to sampling uncertainty. On the city-scale Xiongan New Area dataset with extreme long-tail imbalance (1580 × 3750 pixels, 256 bands, and 5.925 M labeled pixels), EMNet further boosts OA from 85.89% to 93.77% under the 1% labeled-sample protocol, highlighting robust generalization for large-area mapping. Beyond point estimates, we report mean ± SD/SE across repeated splits and provide rigorous statistical validation by computing Yule’s Q statistic for class-wise behavior similarity, performing the Friedman test with Nemenyi post hoc comparisons for multi-method ranking significance, and presenting 95% confidence intervals together with Cohen’s d effect sizes to quantify practical improvement. Full article
(This article belongs to the Special Issue Hyperspectral Sensing: Imaging and Applications)
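
As a didactic stand-in for the rotation and translation equivariance the abstract refers to (not the EMNet modules themselves), the toy PyTorch block below pools a shared backbone's features over a small orbit of rotated copies of the input patch, which yields invariance to those rotations.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

class RotationOrbitPool(nn.Module):
    """Apply a shared backbone to rotated copies of a patch and max-pool the resulting features."""
    def __init__(self, backbone: nn.Module, angles=(0.0, 90.0, 180.0, 270.0)):
        super().__init__()
        self.backbone = backbone
        self.angles = angles

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.backbone(TF.rotate(x, angle)) for angle in self.angles]
        return torch.stack(feats).amax(dim=0)          # invariant to the sampled rotations
```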

40 pages, 4626 KB  
Review
A Systematic Lifecycle-Referenced Capability Mapping of MLOps Platforms for Energy Forecasting
by Xun Zhao, Zheng Grace Ma and Bo Nørregaard Jørgensen
Information 2026, 17(4), 328; https://doi.org/10.3390/info17040328 - 28 Mar 2026
Viewed by 346
Abstract
Accurate energy forecasting is essential for maintaining power system reliability, integrating renewable generation, and ensuring market stability. Although machine learning has improved forecasting accuracy, its operational deployment depends on Machine Learning Operations (MLOps) platforms that automate and scale the entire lifecycle of energy data pipelines. However, the capabilities of existing MLOps platforms for energy forecasting have not been systematically compared. This study adopts a PRISMA-informed review process to identify relevant end-to-end MLOps platforms for energy forecasting and then maps their documented capabilities using an established energy forecasting pipeline lifecycle as the reference structure. A total of 256 records were screened across vendor documentation, open-source repositories, and academic literature, of which 13 MLOps platforms were selected for comparative capability analysis. Platform capabilities are organised and presented across an end-to-end lifecycle covering project setup and governance, data ingestion and management, model development and experimentation, deployment and serving, and monitoring and feedback. Commercial platforms such as Amazon SageMaker and Google Vertex AI generally provide stronger end-to-end integration and production readiness, while open-source platforms such as Kubeflow and ClearML offer modular flexibility that typically requires additional integration effort to achieve end-to-end operation. The mapping identifies four priority areas where platform support remains limited, namely (i) governance workflow automation, (ii) automated data quality validation, (iii) feature management, and (iv) deployment and monitoring support under nonstationary conditions. These findings indicate that platform selection for energy forecasting should be treated as a lifecycle capability decision, balancing end-to-end integration, operational assurance, and long-term flexibility. Full article

26 pages, 2135 KB  
Article
Mapping Research Trends in Road Safety: A Topic Modeling Perspective
by Iulius Alexandru Tudor and Florin Gîrbacia
Vehicles 2026, 8(4), 69; https://doi.org/10.3390/vehicles8040069 - 27 Mar 2026
Viewed by 414
Abstract
Over the past decade, road safety research has experienced rapid development due to the rapid expansion of large crash databases, the adoption of artificial intelligence techniques, and the demand for proactive and predictive safety solutions. This study conducts a data-driven review of recent research trends in transport safety. It focuses on main domains including crash severity analysis, human factors, vulnerable road users (VRUs), spatial modeling, and artificial intelligence applications. A systematic search of the Scopus database identified 15,599 relevant scientific papers published between 2016 and 2025. After constructing this corpus, titles, abstracts, and keywords were preprocessed using a natural language pipeline. The analysis employed BERTopic, a transformer-based topic modeling framework. The analysis identified 29 distinct research topics, further synthesized into five major thematic areas: (1) crash severity and injury analysis, (2) driver behavior and human factors, (3) vulnerable road users, (4) artificial intelligence, machine learning, and computer vision in intelligent transportation systems, and (5) spatial analysis and hotspot detection. A notable increase in publications related to artificial intelligence and machine learning has been evident since 2020. The results show a transition from descriptive, post-crash studies to integrated, multimodal, predictive analysis. Overall, the findings reveal a paradigm shift in the field. This study also identifies ethical and economic issues associated with the use of artificial intelligence in intelligent transportation systems, including data management, infrastructure requirements, system security, and model transparency. The results signify a transition from intuition-based models to explainable, spatially explicit, and data-intensive models, ultimately facilitating proactive risk assessment and informed decision-making. Full article
(This article belongs to the Special Issue Intelligent Mobility and Sustainable Automotive Technologies)
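
A minimal BERTopic run of the kind described, shown on a stand-in corpus (the study's corpus is the 15,599 Scopus records); the parameters here are generic defaults rather than the authors' settings.

```python
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

# placeholder corpus; preprocessed titles, abstracts, and keywords would be used instead
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes")).data[:2000]

topic_model = BERTopic(language="english", min_topic_size=20)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head(10))       # discovered topics with sizes and keywords
```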

17 pages, 4972 KB  
Article
Effect of Automated Multi-Pass MAG Welding Parameters on the Fracture Toughness and Hydrogen Embrittlement Susceptibility of API 5L X70 Pipeline Steel
by Danko Ćorić, Kristijan Jurgec, Ivica Garašić and Maja Remenar
Processes 2026, 14(7), 1069; https://doi.org/10.3390/pr14071069 - 27 Mar 2026
Viewed by 263
Abstract
Welded joints in API 5L X70 pipeline steel represent critical locations for pipelines intended for hydrogen service because welding can create microstructural inhomogeneity, stress concentrations, and uneven mechanical properties that can promote hydrogen-assisted degradation. In hydrogen-containing environments, these effects may manifest as reduced ductility, loss of fracture resistance, and increased cracking susceptibility, particularly in the weld metal and heat-affected zone. Therefore, welding procedures for X70 intended for hydrogen applications must be evaluated using systematic mechanical testing and microstructural characterization under defined hydrogen exposure conditions. The study investigates the detrimental effects of hydrogen on the mechanical integrity of pipeline materials, focusing on welded joints of the API 5L X70 steel, a candidate material for use in hydrogen-containing environments. The weldability and structural performance of the X70 pipeline steel joints in hydrogen environments, produced using automated multi-pass metal active gas (MAG) welding, were experimentally studied. Welding was performed on a DN800 pipe with precise control over welding parameters. Comprehensive analyses were conducted on the welded joints, including microstructure examinations, hardness measurements, slow strain rate testing (SSRT) in high-pressure gaseous H2 with a N2 baseline, and fracture toughness testing. In high-pressure hydrogen, SSRT showed a moderate reduction in ductility relative to nitrogen, with reduction of area decreasing from 81.2% (N2) to 69.1 and 71.5% (H2), while time-to-failure remained comparable (475 min in N2 vs. 497 and 496 min in H2). Ultimate tensile strength was not reduced (579 MPa in N2 vs. 609 and 597 MPa in H2). Secondary surface cracks were observed only on specimens tested in hydrogen. Fracture mechanics testing after hydrogen exposure yielded KIH values of 58–59 MPa√m in the weld metal and 57–61 MPa√m in the HAZ, exceeding the 55 MPa√m acceptance threshold applied in this study. The results highlight the necessity of optimized welding techniques and targeted material analyses to ensure the safety and durability of pipelines in hydrogen-rich environments, thereby contributing to the development of reliable infrastructure for sustainable energy systems. Full article
(This article belongs to the Section Materials Processes)

47 pages, 1879 KB  
Review
Advancing Offshore Wind Capacity Through Turbine Size Scaling
by Paweł Martynowicz, Piotr Ślimak and Desta Kalbessa Kumsa
Energies 2026, 19(7), 1625; https://doi.org/10.3390/en19071625 - 25 Mar 2026
Viewed by 572
Abstract
Turbine upscaling in the offshore wind industry has been unprecedented: compared with the 5–6 MW rated turbines of 10 years ago, 20–26 MW class machines have now been demonstrated in commercial applications (the MingYang MySE 18.X-20 MW installed in 2025 and a 26 MW prototype by Dongfang Electric tested in 2025). This scaling has been made possible by increasing rotor diameters (>250 m) and hub heights (>150–180 m) to achieve capacity factors of up to 55–65%, annual energy generation of more than 80 GWh/turbine, and significant decreases in levelised cost of energy (LCOE) to globally averaged values of 63–65 USD 2023/MWh in 2023 (with minor variability in 2024 due to market changes and new regional areas). The paper analyses turbine upscaling across three levels of hierarchy: turbine scale (rated capacity and physical dimensions), project scale (multi-gigawatt farms), and market scale (a global pipeline exceeding 1500 GW), combining techno-economic evaluation, structural load evaluation, and infrastructure needs assessment. Upscaling dramatically reduces the number of turbines (e.g., from 500 to 67 turbines in a 1 GW farm as turbine size increases to 15 MW), cuts balance-of-plant (BoP) CAPEX (foundations and turbine-to-turbine cables) by some 20 to 30 percent per unit of capacity, and benefits from serial-production learning rates of 15 to 18% per doubling of capacity. However, ultra-large designs bring nonlinear increases in mass and load (i.e., blade-root and tower-bending moments), logistical constraints (blades > 120 m and nacelles of up to 800–1000 tonnes demanding special vessels and ports), supply-chain issues (rare-earth materials, and vessel shortages that raise day rates by 30–50%), and technology limitations (aeroelastic effects compounded by numerical differences between the reference 5 MW, 10 MW, and 15 MW models). Tower and blade deflections and platform surge/pitch responses grow significantly as power levels continue to rise, without a correspondingly mature infrastructure. Regional differences (mature European ports vs. U.S. Jones Act restrictions vs. the scale-up of vessels and manufacturing in China) make context-dependent optimisation necessary. The analysis concludes that, in mature markets with adapted logistics, continued upscaling is an effective business strategy and can deliver a further 5 to 12 percent reduction in LCOE, but beyond that point gains become marginal or even negative as risks and costs increase. Future competitiveness depends on multi-scale, multi-market approaches: modular turbine families, programmatic standardisation, vibration control innovations, and industry coordination on supply-chain alignment and standards. The review's major strength is that it goes beyond simple size–cost relationships and shows how nonlinear structural processes, aero-hydro-servo-elastic interactions, and logistical bottlenecks increasingly determine the efficiency of ultra-large turbines. Based on comparisons of quantitative loads, techno-economic scaling trends, and regional market differentiation, the study demonstrates that turbine upscaling yields LCOE benefits when supported by corresponding improvements in installation facilities, supply-chain preparedness, and structural vibration control. Full article

26 pages, 3386 KB  
Article
A Two-Level Optimal Water Allocation Model for Canal-Drip Irrigation Systems Based on Decomposition–Coordination Theory
by Jingzheng Li, Chunfang Yue and Shengjiang Zhang
Sustainability 2026, 18(7), 3217; https://doi.org/10.3390/su18073217 - 25 Mar 2026
Viewed by 337
Abstract
Agriculture in Xinjiang, a region in arid northwest China, is almost entirely dependent on irrigation, leading to significant supply–demand contradictions. This study addresses the spatial and temporal mismatches between water supply and demand, and the resulting conflicts in crop water supply. Using the primary irrigation cycle of Wutai branch canal as a case study, we developed a two-level optimal water allocation model based on large-scale system optimization. For the lateral canal water distribution, a model minimizing the sum of squares of the water shortage rate was solved using the Sequential Quadratic Programming (SQP) algorithm. For the drip irrigation systems, water distribution time was incorporated as a second objective, and the resulting bi-objective model was solved using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Compared to actual distribution processes, our results show that (1) 74% of the distribution canals and pipelines achieved over 90% of their design flow rate, fully utilizing flow capacity and reducing the overall distribution time of the branch canal by 4.68 h. (2) The overall water shortage rate was reduced by 1.59% compared to the actual rate, with a more balanced water allocation among users. These results demonstrate that the model can effectively coordinate water distribution in a multi-level canal system, enhance the fairness of water use, and provide a valuable reference for single-event water distribution in water-scarce areas. Full article
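
The lower-level bi-objective formulation (minimize squared water-shortage rates and distribution time) can be sketched with NSGA-II via pymoo as below; the demand figures, time coefficients, and bounds are toy values, not the Wutai branch canal data.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

demand = np.array([4.0, 3.0, 5.0, 2.0])            # toy water demand per drip-irrigation user
time_per_unit = np.array([1.2, 0.8, 1.5, 1.0])     # toy distribution time per unit delivered

class DripAllocation(Problem):
    def __init__(self):
        super().__init__(n_var=4, n_obj=2, xl=np.zeros(4), xu=demand)

    def _evaluate(self, x, out, *args, **kwargs):
        shortage = (demand - x) / demand             # per-user water shortage rate
        out["F"] = np.column_stack([
            (shortage ** 2).sum(axis=1),             # objective 1: sum of squared shortage rates
            (x * time_per_unit).sum(axis=1),         # objective 2: total distribution time
        ])

res = minimize(DripAllocation(), NSGA2(pop_size=60), ("n_gen", 100), verbose=False)
print(res.F[:3])                                     # sample Pareto-optimal trade-offs
```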