Search Results (4,407)

Search Parameters:
Keywords = hybrid machining

16 pages, 2029 KB  
Article
Intelligent Hybrid Modeling for Heart Disease Prediction
by Mona Almutairi and Samia Dardouri
Information 2025, 16(10), 869; https://doi.org/10.3390/info16100869 - 7 Oct 2025
Abstract
Background: Heart disease continues to be one of the foremost causes of mortality worldwide, emphasizing the urgent need for reliable and early diagnostic tools. Accurate prediction methods can support timely interventions and improve patient outcomes. Methods: This study presents the development and comparative evaluation of multiple machine learning models for heart disease prediction using a structured clinical dataset. Algorithms such as Logistic Regression, Random Forest, Support Vector Machine (SVM), XGBoost, and Deep Neural Networks were implemented. Additionally, a hybrid ensemble model combining XGBoost and SVM was proposed. Models were evaluated using key performance metrics including accuracy, precision, recall, and F1-score. Results: Among all models, the proposed hybrid model demonstrated the best performance, achieving an accuracy of 89.3%, a precision of 0.90, recall of 0.91, and an F1-score of 0.905, and outperforming all individual classifiers. These results highlight the benefits of combining complementary algorithms for improved generalization and diagnostic reliability. Conclusions: The findings underscore the effectiveness of ensemble and deep learning techniques in addressing key challenges such as data imbalance, feature selection, and model interpretability. The proposed hybrid model shows significant potential as a clinical decision-support tool, contributing to enhanced diagnostic accuracy and supporting medical professionals in real-world settings. Full article
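No code accompanies the listing; a minimal sketch of the described XGBoost + SVM hybrid, assuming a soft-voting combination and a synthetic stand-in for the structured clinical dataset, could look like this:

```python
# Hedged sketch of a hybrid XGBoost + SVM ensemble for tabular clinical data.
# The soft-voting combination, feature count, and split are illustrative
# assumptions, not the authors' published configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Stand-in for a structured clinical dataset (binary target: disease / no disease).
X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The SVM needs scaled inputs and probability=True so its votes can be
# averaged with XGBoost's predicted probabilities (soft voting).
svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
xgb = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")

hybrid = VotingClassifier([("xgb", xgb), ("svm", svm)], voting="soft")
hybrid.fit(X_tr, y_tr)
print("hybrid accuracy:", hybrid.score(X_te, y_te))
```

Soft voting averages calibrated probabilities, which is one plausible way complementary classifiers can outperform either alone, as the abstract reports.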

23 pages, 2429 KB  
Article
Hybrid Spatio-Temporal CNN–LSTM/BiLSTM Models for Blocking Prediction in Elastic Optical Networks
by Farzaneh Nourmohammadi, Jaume Comellas and Uzay Kaymak
Network 2025, 5(4), 44; https://doi.org/10.3390/network5040044 - 7 Oct 2025
Abstract
Elastic optical networks (EONs) must allocate resources dynamically to accommodate heterogeneous, high-bandwidth demands. However, the continuous setup and teardown of connections with different bit rates can fragment the spectrum and lead to blocking. Blocking predictors enable proactive defragmentation and resource reallocation within network controllers. In this paper, we propose two novel deep learning models (based on CNN–BiLSTM and CNN–LSTM) to predict blocking in EONs by combining spatial feature extraction from spectrum snapshots using 2D convolutional layers with temporal sequence modeling. This hybrid spatio-temporal design learns how local fragmentation patterns evolve over time, allowing it to detect impending blocking scenarios more accurately than conventional methods. We evaluate our models on the simulated NSFNET topology and compare them against multiple baselines, namely 1D CNN, 2D CNN, k-nearest neighbors (KNN), and support vector machines (SVMs). The results show that the proposed models consistently achieve higher performance. The CNN–BiLSTM model achieved the highest accuracy in blocking prediction, while the CNN–LSTM model shows slightly lower accuracy but much lower complexity and a faster learning time. Full article
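A minimal Keras sketch of the CNN–BiLSTM idea — the same 2D CNN applied to every spectrum snapshot, a bidirectional LSTM over the resulting sequence — where the input shape and layer sizes are assumptions rather than the authors' exact architecture:

```python
# Hedged sketch: per-snapshot 2D convolutions extract fragmentation features,
# a BiLSTM models how they evolve over time. Snapshot dimensions are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

T, H, W = 10, 22, 320  # snapshots per sequence, links, spectrum slots (assumed)

model = models.Sequential([
    layers.Input(shape=(T, H, W, 1)),
    # Same CNN applied to each snapshot in the sequence.
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # Temporal modeling over the per-snapshot feature vectors.
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),  # P(blocking within horizon)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Swapping `Bidirectional(LSTM(64))` for a plain `LSTM(64)` gives the lighter CNN–LSTM variant the abstract describes as slightly less accurate but cheaper to train.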

54 pages, 7106 KB  
Review
Modeling, Control and Monitoring of Automotive Electric Drives
by Pierpaolo Dini, Sergio Saponara, Sajib Chakraborty and Omar Hegazy
Electronics 2025, 14(19), 3950; https://doi.org/10.3390/electronics14193950 - 7 Oct 2025
Abstract
The electrification of automotive powertrains has accelerated research efforts in the modeling, control, and monitoring of electric drive systems, where reliability, safety, and efficiency are key enablers for mass adoption. Despite a large corpus of literature addressing individual aspects of electric drives, current surveys remain fragmented, typically focusing on either multiphysics modeling of machines and converters, or advanced control algorithms, or diagnostic and prognostic frameworks. This review provides a comprehensive perspective that systematically integrates these domains, establishing direct connections between high-fidelity models, control design, and monitoring architectures. Starting from the fundamental components of the automotive power drive system, the paper reviews state-of-the-art strategies for synchronous motor modeling, inverter and DC/DC converter design, and advanced control schemes, before presenting monitoring techniques that span model-based residual generation, AI-driven fault classification, and hybrid approaches. Particular emphasis is given to the interplay between functional safety (ISO 26262), computational feasibility on embedded platforms, and the need for explainable and certifiable monitoring frameworks. By aligning modeling, control, and monitoring perspectives within a unified narrative, this review identifies the methodological gaps that hinder cross-domain integration and outlines pathways toward digital-twin-enabled prognostics and health management of automotive electric drives. Full article
(This article belongs to the Special Issue Control and Optimization of Power Converters and Drives, 2nd Edition)

33 pages, 2074 KB  
Article
A FIG-IWOA-BiGRU Model for Bus Passenger Flow Fluctuation Trend and Spatial Prediction
by Jie Zhang, Qingling He, Xiaojuan Lu, Shungen Xiao and Ning Wang
Mathematics 2025, 13(19), 3204; https://doi.org/10.3390/math13193204 - 6 Oct 2025
Abstract
To capture bus passenger flow fluctuations and address the problems of slow convergence and high error in machine learning parameter optimization, this paper develops an improved Whale Optimization Algorithm (IWOA) integrated with a Bidirectional Gated Recurrent Unit (BiGRU). First, a Logistic–Tent chaotic mapping is introduced to generate a diverse and high-quality initial population. Second, a hybrid mechanism combining elite opposition-based learning and Cauchy mutation enhances population diversity and reduces premature convergence. Third, a cosine-based adaptive convergence factor and inertia weight strategy improve the balance between global exploration and local exploitation. Based on correlation analysis between bus passenger flow and weather data in Harbin, and drawing on the fluctuation characteristics of the flow, the data were divided into windows with a 7-day weekly cycle and processed by fuzzy information granulation (FIG) to obtain three groups of granulated window data, namely LOW, R, and UP, representing the fluctuation trend and spatial characteristics of bus passenger flow. The IWOA was employed to optimize parameters such as the hidden layer weights and bias vectors of the BiGRU, thereby constructing a bus passenger flow fluctuation trend and spatial prediction model based on FIG-IWOA-BiGRU. Simulation experiments with 21 benchmark functions and real bus data verified its effectiveness. The results show that the IWOA significantly improves optimization accuracy and convergence speed. For bus passenger flow forecasting, the MAE, RMSE, and MAPE averaged over the LOW, R, and UP series are 2915, 3075, and 8.1%, respectively, an improvement over existing classical models. The findings provide reliable decision support for bus scheduling and passenger travel planning. Full article
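As a sketch of the first improvement, one common formulation of a Logistic–Tent chaotic map for seeding the whale population; the paper's exact map form and parameters may differ:

```python
# Hedged sketch of Logistic-Tent chaotic initialization for a WOA population.
# This map form (r and the mod-1 wrap) is one published variant, assumed here.
import numpy as np

def logistic_tent(x, r=0.7):
    """One step of a combined Logistic-Tent chaotic map on (0, 1)."""
    if x < 0.5:
        return (r * x * (1.0 - x) + (4.0 - r) * x / 2.0) % 1.0
    return (r * x * (1.0 - x) + (4.0 - r) * (1.0 - x) / 2.0) % 1.0

def chaotic_population(n_whales, dim, lo, hi, seed=0.37):
    """Initial WOA population spread over [lo, hi] by iterating the map."""
    pop = np.empty((n_whales, dim))
    x = seed
    for i in range(n_whales):
        for j in range(dim):
            x = logistic_tent(x)
            pop[i, j] = lo + x * (hi - lo)
    return pop

print(chaotic_population(5, 3, -1.0, 1.0))
```

Chaotic sequences cover the search space more evenly than uniform random draws, which is the stated motivation for using them as the initial population.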
29 pages, 632 KB  
Article
ML-PSDFA: A Machine Learning Framework for Synthetic Log Pattern Synthesis in Digital Forensics
by Wafa Alorainy
Electronics 2025, 14(19), 3947; https://doi.org/10.3390/electronics14193947 - 6 Oct 2025
Abstract
This study introduces the Machine Learning (ML)-Driven Pattern Synthesis for Digital Forensics in Synthetic Log Analysis (ML-PSDFA) framework to address critical gaps in digital forensics, including the reliance on real-world data, limited pattern diversity, and forensic integration challenges. A key innovation is the introduction of a novel temporal forensics loss (L_TFL) in the Synthetic Attack Pattern Generator (SAPG), which enhances the preservation of temporal sequences in synthetic logs that are crucial for forensic analysis. The framework employs the SAPG with hybrid seed data (UNSW-NB15 and CICIDS2017) to create 500,000 synthetic log entries using Google Colab, achieving a realism score of 0.96, a temporal consistency score of 0.90, and an entropy of 4.0. The methodology employs a three-layer architecture that integrates data generation, pattern analysis, and forensic training, utilizing TimeGAN, XGBoost classification with hyperparameter tuning via Optuna, and reinforcement learning (RL) to optimize the extraction of evidence. Due to enhanced synthetic data quality and advanced modeling, the results exhibit an average classification precision of 98.5% (best fold 98.7%), outperforming previously reported approaches. Feature importance analysis highlights timestamps (0.40) and event types (0.30), while the RL workflow reduces false positives by 17% over 1000 episodes, aligning with RL benchmarks. The temporal forensics loss improves the realism score from 0.92 to 0.96 and introduces a temporal consistency score of 0.90, demonstrating enhanced forensic relevance. This work presents a scalable and accessible training platform for legally constrained environments, as well as a novel RL-based evidence extraction method. Limitations include a lack of real-system validation and resource constraints. Future work will explore dynamic reward tuning and simulated benchmarks to enhance precision and generalizability. Full article
(This article belongs to the Special Issue AI and Cybersecurity: Emerging Trends and Key Challenges)
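The XGBoost-plus-Optuna component can be sketched as below; the search space, synthetic data, and precision scoring are illustrative assumptions, not the ML-PSDFA configuration:

```python
# Hedged sketch of XGBoost hyperparameter tuning with Optuna; the dataset is
# a synthetic stand-in for featurized log entries.
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 500),
        "max_depth": trial.suggest_int("max_depth", 3, 8),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
    }
    clf = XGBClassifier(eval_metric="logloss", **params)
    # Fold-wise precision, mirroring the paper's fold-wise reporting.
    return cross_val_score(clf, X, y, cv=3, scoring="precision").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```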

24 pages, 3163 KB  
Article
Machine Learning Investigation of Ternary-Hybrid Radiative Nanofluid over Stretching and Porous Sheet
by Hamid Qureshi, Muhammad Zubair and Sebastian Andreas Altmeyer
Nanomaterials 2025, 15(19), 1525; https://doi.org/10.3390/nano15191525 - 5 Oct 2025
Abstract
Ternary hybrid nanofluids have been shown to possess a wide range of applications, from biomedical engineering and cancer detection, through photovoltaic panels and cells, nuclear power plant engineering, and the automobile industry, to smart cells and, eventually, heat-exchange systems. Inspired by recent developments in nanotechnology, and in particular the high potential of such nanofluids in practical problems, this paper deals with the flow of a three-phase nanofluid of MWCNT-Au/Ag nanoparticles dispersed in blood in the presence of a bidirectional stretching sheet. The model derived in this study yields a set of coupled nonlinear PDEs, which are first transformed into dimensionless ODEs. A dataset is generated from these ODEs in the MATHEMATICA environment and then solved with an AI-based technique, the Levenberg–Marquardt feedforward algorithm. Flow characteristics under varying physical parameters are studied, the boundary-layer phenomena are investigated, and the horizontal and vertical velocity profiles as well as the temperature distribution are analyzed in detail. The findings reveal that an increase in the stretching ratio of the surface coincides with an increase in the vertical velocity, as the surface thins in this direction, minimizing resistance to the fluid flow. Full article
(This article belongs to the Section Theory and Simulation of Nanostructures)
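The Levenberg–Marquardt fitting step can be sketched with SciPy's least-squares solver standing in for MATLAB's trainlm; the target profile f(η) = e^(−η) and the network size are assumptions, not the paper's ODE-generated dataset:

```python
# Hedged sketch: a tiny feedforward network (1-5-1, tanh) fitted by
# Levenberg-Marquardt least squares to an assumed boundary-layer-like profile.
import numpy as np
from scipy.optimize import least_squares

eta = np.linspace(0.0, 5.0, 100)
target = np.exp(-eta)  # assumed similarity-profile shape, not the paper's data

H = 5  # hidden neurons; parameter vector packs w1(H), b1(H), w2(H), b2(1)

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def net(p, x):
    w1, b1, w2, b2 = unpack(p)
    hidden = np.tanh(np.outer(x, w1) + b1)  # shape (N, H)
    return hidden @ w2 + b2

def residuals(p):
    return net(p, eta) - target

p0 = 0.1 * np.random.default_rng(0).standard_normal(3 * H + 1)
sol = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
print("max abs error:", np.abs(residuals(sol.x)).max())
```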

35 pages, 5316 KB  
Review
Machine Learning for Quality Control in the Food Industry: A Review
by Konstantinos G. Liakos, Vassilis Athanasiadis, Eleni Bozinou and Stavros I. Lalas
Foods 2025, 14(19), 3424; https://doi.org/10.3390/foods14193424 - 4 Oct 2025
Abstract
The increasing complexity of modern food production demands advanced solutions for quality control (QC), safety monitoring, and process optimization. This review systematically explores recent advancements in machine learning (ML) for QC across six domains: Food Quality Applications; Defect Detection and Visual Inspection Systems; Ingredient Optimization and Nutritional Assessment; Packaging—Sensors and Predictive QC; Supply Chain—Traceability and Transparency and Food Industry Efficiency; and Industry 4.0 Models. Following a PRISMA-based methodology, a structured search of the Scopus database using thematic Boolean keywords identified 124 peer-reviewed publications (2005–2025), from which 25 studies were selected based on predefined inclusion and exclusion criteria, methodological rigor, and innovation. Neural networks dominated the reviewed approaches, with ensemble learning as a secondary method, and supervised learning prevailing across tasks. Emerging trends include hyperspectral imaging, sensor fusion, explainable AI, and blockchain-enabled traceability. Limitations in current research include domain coverage biases, data scarcity, and underexplored unsupervised and hybrid methods. Real-world implementation challenges involve integration with legacy systems, regulatory compliance, scalability, and cost–benefit trade-offs. The novelty of this review lies in combining a transparent PRISMA approach, a six-domain thematic framework, and Industry 4.0/5.0 integration, providing cross-domain insights and a roadmap for robust, transparent, and adaptive QC systems in the food industry. Full article
(This article belongs to the Special Issue Artificial Intelligence for the Food Industry)

17 pages, 390 KB  
Review
Deep Learning Image Processing Models in Dermatopathology
by Apoorva Mehta, Mateen Motavaf, Danyal Raza, Neil Jairath, Akshay Pulavarty, Ziyang Xu, Michael A. Occidental, Alejandro A. Gru and Alexandra Flamm
Diagnostics 2025, 15(19), 2517; https://doi.org/10.3390/diagnostics15192517 - 4 Oct 2025
Abstract
Dermatopathology has rapidly advanced due to the implementation of deep learning models and artificial intelligence (AI). From convolutional neural networks (CNNs) to transformer-based foundation models, these systems are now capable of accurate whole-slide analysis and multimodal integration. This review synthesizes the most recent advances in deep-learning architectures, tracing their evolution from first-generation CNNs through hybrid CNN-transformer systems to large-scale foundation models such as Paige's PanDerm AI and Virchow. Herein, we examine performance benchmarks from real-world deployments of major dermatopathology deep learning models (DermAI, PathAssist Derm), as well as emerging next-generation models still under research and development. We assess barriers to clinical workflow adoption such as dataset bias, AI interpretability, and government regulation. Further, we discuss potential future research directions and emphasize the need for diverse, prospectively curated datasets, explainability frameworks for trust in AI, and rigorous compliance with Good Machine Learning Practice (GMLP) to achieve safe and scalable deep learning dermatopathology models that can fully integrate into clinical workflows. Full article
(This article belongs to the Special Issue Artificial Intelligence in Skin Disorders 2025)
15 pages, 2358 KB  
Article
Optimized Lung Nodule Classification Using CLAHE-Enhanced CT Imaging and Swin Transformer-Based Deep Feature Extraction
by Dorsaf Hrizi, Khaoula Tbarki and Sadok Elasmi
J. Imaging 2025, 11(10), 346; https://doi.org/10.3390/jimaging11100346 - 4 Oct 2025
Abstract
Lung cancer remains one of the most lethal cancers globally. Its early detection is vital to improving survival rates. In this work, we propose a hybrid computer-aided diagnosis (CAD) pipeline for lung cancer classification using Computed Tomography (CT) scan images. The proposed CAD pipeline integrates ten image preprocessing techniques and ten pretrained deep learning models for feature extraction including convolutional neural networks and transformer-based architectures, and four classical machine learning classifiers. Unlike traditional end-to-end deep learning systems, our approach decouples feature extraction from classification, enhancing interpretability and reducing the risk of overfitting. A total of 400 model configurations were evaluated to identify the optimal combination. The proposed approach was evaluated on the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset, which comprises 1018 thoracic CT scans annotated by four thoracic radiologists. For the classification task, the dataset included a total of 6568 images labeled as malignant and 4849 images labeled as benign. Experimental results show that the best performing pipeline, combining Contrast Limited Adaptive Histogram Equalization, Swin Transformer feature extraction, and eXtreme Gradient Boosting, achieved an accuracy of 95.8%. Full article
(This article belongs to the Special Issue Advancements in Imaging Techniques for Detection of Cancer)
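A sketch of the best-performing pipeline — CLAHE enhancement, frozen Swin Transformer features via timm, and XGBoost — where the model variant, image handling, and data loading are assumptions:

```python
# Hedged sketch of the decoupled pipeline: CLAHE preprocessing, pretrained
# Swin features, classical classifier. Model name and sizes are assumed.
import cv2
import numpy as np
import timm
import torch
from xgboost import XGBClassifier

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
swin = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True,
                         num_classes=0)  # num_classes=0 -> pooled feature vector
swin.eval()

def features(gray_slice):
    """CLAHE-enhance one 8-bit CT slice, then extract frozen Swin features."""
    img = clahe.apply(gray_slice)
    img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
    x = torch.from_numpy(np.stack([img] * 3))[None]  # 1 x 3 x 224 x 224
    with torch.no_grad():
        return swin(x).numpy().ravel()

# Assumed usage on a labeled slice set (0 = benign, 1 = malignant):
# feats = np.stack([features(s) for s in slices])
# XGBClassifier(eval_metric="logloss").fit(feats, labels)
```

Decoupling feature extraction from the classifier, as the abstract notes, lets the 400 combinations be searched without retraining a deep network each time.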
46 pages, 3080 KB  
Review
Machine Learning for Structural Health Monitoring of Aerospace Structures: A Review
by Gennaro Scarselli and Francesco Nicassio
Sensors 2025, 25(19), 6136; https://doi.org/10.3390/s25196136 - 4 Oct 2025
Abstract
Structural health monitoring (SHM) plays a critical role in ensuring the safety and performance of aerospace structures throughout their lifecycle. As aircraft and spacecraft systems grow in complexity, the integration of machine learning (ML) into SHM frameworks is revolutionizing how damage is detected, localized, and predicted. This review presents a comprehensive examination of recent advances in ML-based SHM methods tailored to aerospace applications. It covers supervised, unsupervised, deep, and hybrid learning techniques, highlighting their capabilities in processing high-dimensional sensor data, managing uncertainty, and enabling real-time diagnostics. Particular focus is given to the challenges of data scarcity, operational variability, and interpretability in safety-critical environments. The review also explores emerging directions such as digital twins, transfer learning, and federated learning. By mapping current strengths and limitations, this paper provides a roadmap for future research and outlines the key enablers needed to bring ML-based SHM from laboratory development to widespread aerospace deployment. Full article
(This article belongs to the Special Issue Feature Review Papers in Fault Diagnosis & Sensors)

21 pages, 2769 KB  
Article
Computational Intelligence-Based Modeling of UAV-Integrated PV Systems
by Mohammad Hosein Saeedinia, Shamsodin Taheri and Ana-Maria Cretu
Solar 2025, 5(4), 45; https://doi.org/10.3390/solar5040045 - 3 Oct 2025
Abstract
The optimal utilization of UAV-integrated photovoltaic (PV) systems demands accurate modeling that accounts for dynamic flight conditions. This paper introduces a novel computational intelligence-based framework that models the behavior of a moving PV system mounted on a UAV. A unique mathematical approach is developed to translate UAV flight dynamics, specifically roll, pitch, and yaw, into the tilt and azimuth angles of the PV module. To adaptively estimate the diode ideality factor under varying conditions, the Grey Wolf Optimization (GWO) algorithm is employed, outperforming traditional methods like Particle Swarm Optimization (PSO). Using a one-year environmental dataset, multiple machine learning (ML) models are trained to predict maximum power point (MPP) parameters for a commercial PV panel. The best-performing model, Rational Quadratic Gaussian Process Regression (RQGPR), demonstrates high accuracy and low computational cost. Furthermore, the proposed ML-based model is experimentally integrated into an incremental conductance (IC) MPPT technique, forming a hybrid MPPT controller. Hardware and experimental validations confirm the model’s effectiveness in real-time MPP prediction and tracking, highlighting its potential for enhancing UAV endurance and energy efficiency. Full article
(This article belongs to the Special Issue Efficient and Reliable Solar Photovoltaic Systems: 2nd Edition)
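A minimal Grey Wolf Optimization sketch for a one-dimensional parameter such as the diode ideality factor; the placeholder cost stands in for the paper's I–V model fitting error:

```python
# Hedged GWO sketch: alpha/beta/delta leaders guide the pack toward the
# minimum of a cost function. The quadratic cost is a placeholder, not the
# paper's PV single-diode model.
import numpy as np

def gwo(cost, lo, hi, n_wolves=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, n_wolves)
    for t in range(iters):
        order = np.argsort([cost(w) for w in wolves])
        alpha, beta, delta = wolves[order[:3]]  # three best wolves
        a = 2.0 * (1.0 - t / iters)             # linearly decreasing coefficient
        for i in range(n_wolves):
            x = 0.0
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random() - 1.0)
                C = 2.0 * rng.random()
                x += leader - A * abs(C * leader - wolves[i])
            wolves[i] = np.clip(x / 3.0, lo, hi)
    return min(wolves, key=cost)

# Placeholder: recover an assumed "true" ideality factor of 1.3 on [1, 2].
print(gwo(lambda n: (n - 1.3) ** 2, lo=1.0, hi=2.0))
```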
42 pages, 3952 KB  
Article
An Explainable Markov Chain–Machine Learning Sequential-Aware Anomaly Detection Framework for Industrial IoT Systems Based on OPC UA
by Youness Ghazi, Mohamed Tabaa, Mohamed Ennaji and Ghita Zaz
Sensors 2025, 25(19), 6122; https://doi.org/10.3390/s25196122 - 3 Oct 2025
Abstract
Stealth attacks targeting industrial control systems (ICS) exploit subtle sequences of malicious actions, making them difficult to detect with conventional methods. The OPC Unified Architecture (OPC UA) protocol—now widely adopted in SCADA/ICS environments—enhances OT–IT integration but simultaneously increases the exposure of critical infrastructures to sophisticated cyberattacks. Traditional detection approaches, which rely on instantaneous traffic features and static models, neglect the sequential dimension that is essential for uncovering such gradual intrusions. To address this limitation, we propose a hybrid sequential anomaly detection pipeline that combines Markov chain modeling to capture temporal dependencies with machine learning algorithms for anomaly detection. The pipeline is further augmented by explainability through SHapley Additive exPlanations (SHAP) and causal inference using the PC algorithm. Experimental evaluation on an OPC UA dataset simulating Man-In-The-Middle (MITM) and denial-of-service (DoS) attacks demonstrates that incorporating a second-order sequential memory significantly improves detection: F1-score increases by +2.27%, precision by +2.33%, and recall by +3.02%. SHAP analysis identifies the most influential features and transitions, while the causal graph highlights deviations from the system’s normal structure under attack, thereby providing interpretable insights into the root causes of anomalies. Full article
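The second-order sequential memory can be sketched as transition probabilities estimated from normal OPC UA traffic, with windows scored by how improbable their transitions are; the event encoding and scoring rule are assumptions:

```python
# Hedged sketch of second-order Markov features for sequence-aware anomaly
# detection. Event names are illustrative; the paper feeds such features to
# ML classifiers rather than thresholding the score directly.
import math
from collections import Counter, defaultdict

def fit_second_order(sequences):
    """Estimate P(next | prev2, prev1) from normal event sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            counts[(a, b)][c] += 1
    return {k: {c: n / sum(v.values()) for c, n in v.items()}
            for k, v in counts.items()}

def anomaly_score(model, seq, eps=1e-6):
    """Mean negative log-probability of observed transitions (higher = stranger)."""
    probs = [model.get((a, b), {}).get(c, eps)
             for a, b, c in zip(seq, seq[1:], seq[2:])]
    return -sum(math.log(p) for p in probs) / max(len(probs), 1)

normal = [["read", "write", "read", "write", "read"]] * 20
model = fit_second_order(normal)
print(anomaly_score(model, ["read", "read", "read", "write", "write"]))
```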
18 pages, 3387 KB  
Article
Machine Learning-Assisted Reconstruction of In-Cylinder Pressure in Internal Combustion Engines Under Unmeasured Operating Conditions
by Qiao Huang, Tianfang Xie and Jinlong Liu
Energies 2025, 18(19), 5235; https://doi.org/10.3390/en18195235 - 2 Oct 2025
Abstract
In-cylinder pressure provides critical insights for analyzing and optimizing combustion in internal combustion engines, yet its acquisition across the full operating space requires extensive testing, while physics-based models are computationally demanding. Machine learning (ML) offers an alternative, but its application to direct reconstruction of full pressure traces remains limited. This study evaluates three strategies for reconstructing cylinder pressure under unmeasured operating conditions, establishing a machine learning-assisted framework that generates the complete pressure–crank angle (P–CA) trace. The framework treats crank angle and operating conditions as inputs and predicts either pressure directly or apparent heat release rate (HRR) as an intermediate variable, which is then integrated to reconstruct pressure. In all approaches, discrete pointwise predictions are combined to form the full P–CA curve. Direct pressure prediction achieves high accuracy for overall traces but underestimates HRR-related combustion features. Training on HRR improves combustion representation but introduces baseline shifts in reconstructed pressure. A hybrid approach, combining non-combustion pressure prediction with combustion-phase HRR-based reconstruction, delivers the most robust and physically consistent results. These findings demonstrate that ML can efficiently reconstruct in-cylinder pressure at unmeasured conditions, reducing experimental requirements while supporting combustion diagnostics, calibration, and digital twin applications. Full article
(This article belongs to the Section I2: Energy and Combustion Science)
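The HRR-based reconstruction rests on the single-zone first-law relation dQ/dθ = γ/(γ−1)·p·dV/dθ + 1/(γ−1)·V·dp/dθ, rearranged to dp/dθ = ((γ−1)·dQ/dθ − γ·p·dV/dθ)/V and integrated from a known starting pressure. In the sketch below the volume law, γ, and HRR shape are placeholders, not the paper's engine data:

```python
# Hedged sketch: integrate cylinder pressure from a predicted HRR trace via
# the single-zone first law. All numbers are toy placeholders.
import numpy as np

def reconstruct_pressure(theta, hrr, V, p0, gamma=1.35):
    """Forward-Euler integration of dp/dtheta from HRR and a volume law."""
    p = np.empty_like(theta)
    p[0] = p0
    dV = np.gradient(V, theta)
    for i in range(1, len(theta)):
        dp = ((gamma - 1.0) * hrr[i - 1]
              - gamma * p[i - 1] * dV[i - 1]) / V[i - 1]
        p[i] = p[i - 1] + dp * (theta[i] - theta[i - 1])
    return p

theta = np.linspace(-0.3, 1.2, 300)                # crank angle [rad], assumed window
V = 5e-4 + 4.5e-4 * (1.0 - np.cos(theta))          # toy volume law [m^3]
hrr = 60.0 * np.exp(-((theta - 0.3) / 0.15) ** 2)  # toy Gaussian HRR [J/rad]
p = reconstruct_pressure(theta, hrr, V, p0=2e6)    # ~20 bar at start
print("peak pressure [bar]:", p.max() / 1e5)
```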

23 pages, 698 KB  
Review
Machine Learning in Land Use Prediction: A Comprehensive Review of Performance, Challenges, and Planning Applications
by Cui Li, Cuiping Wang, Tianlei Sun, Tongxi Lin, Jiangrong Liu, Wenbo Yu, Haowei Wang and Lei Nie
Buildings 2025, 15(19), 3551; https://doi.org/10.3390/buildings15193551 - 2 Oct 2025
Abstract
The accelerated global urbanization process has positioned land use/land cover change modeling as a critical component of contemporary geographic science and urban planning research. Traditional approaches face substantial challenges when addressing urban system complexity, multiscale spatial interactions, and high-dimensional data associations, creating urgent demand for sophisticated analytical frameworks. This review comprehensively evaluates machine learning applications in land use prediction through systematic analysis of 74 publications spanning 2020–2024, establishing a taxonomic framework distinguishing traditional machine learning, deep learning, and hybrid methodologies. The review contributes a comprehensive methodological assessment identifying algorithmic evolution patterns and performance benchmarks across diverse geographic contexts. Traditional methods demonstrate sustained reliability, while deep learning architectures excel in complex pattern recognition. Most significantly, hybrid methodologies have emerged as the dominant paradigm through algorithmic complementarity, consistently outperforming single-algorithm implementations. However, contemporary applications face critical constraints including computational complexity, scalability limitations, and interpretability issues impeding practical adoption. This review advances the field by synthesizing fragmented knowledge into a coherent framework and identifying research trajectories toward integrated intelligent systems with explainable artificial intelligence. Full article
(This article belongs to the Special Issue Advances in Urban Planning and Design for Urban Safety and Operations)

16 pages, 1005 KB  
Article
A Two-Step Machine Learning Approach Integrating GNSS-Derived PWV for Improved Precipitation Forecasting
by Laura Profetto, Andrea Antonini, Luca Fibbi, Alberto Ortolani and Giovanna Maria Dimitri
Entropy 2025, 27(10), 1034; https://doi.org/10.3390/e27101034 - 2 Oct 2025
Abstract
Global Navigation Satellite System (GNSS) meteorology has emerged as a valuable tool for atmospheric monitoring, providing high-resolution, near-real-time data that can significantly improve precipitation nowcasting. This study aims to enhance short-term precipitation forecasting by integrating GNSS-derived Precipitable Water Vapor (PWV)—a key indicator of atmospheric moisture—with traditional meteorological observations. A novel two-step machine learning framework is proposed that combines a Random Forest (RF) model and a Long Short-Term Memory (LSTM) neural network. The RF model first estimates current precipitation based on PWV, surface weather parameters, and auxiliary atmospheric variables. Then, the LSTM network leverages temporal dependencies within the data to predict precipitation for the subsequent hour. This hybrid method capitalizes on the RF’s ability to model complex nonlinear relationships and the LSTM’s strength in handling time series data. The results demonstrate that the proposed approach improves forecasting accuracy, particularly during extreme weather events such as intense rainfall and thunderstorms, outperforming conventional models. By integrating GNSS meteorology with advanced machine learning techniques, this study offers a promising tool for meteorological services, early warning systems, and disaster risk management. The findings highlight the potential of GNSS-based nowcasting for real-time decision-making in weather-sensitive applications. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)
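A sketch of the two-step coupling — a Random Forest nowcast appended as an input channel for an LSTM one-hour forecaster — with assumed shapes and synthetic stand-in data:

```python
# Hedged sketch of the RF -> LSTM two-step framework. Feature set (PWV plus
# surface observations), window length, and targets are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N, T, F = 500, 12, 5  # samples, hours of history, features (PWV, T, RH, P, wind)
X_hist = rng.normal(size=(N, T, F)).astype("float32")
y_next = rng.random(N).astype("float32")  # rain in the next hour (stand-in)

# Step 1: RF estimates current precipitation from the latest observation.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_hist[:, -1, :], y_next)
nowcast = rf.predict(X_hist[:, -1, :]).astype("float32")

# Step 2: append the RF nowcast as an extra channel; the LSTM exploits the
# temporal dependencies to forecast the subsequent hour.
nowcast_chan = np.repeat(nowcast[:, None, None], T, axis=1)
X_aug = np.concatenate([X_hist, nowcast_chan], axis=2)

lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F + 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X_aug, y_next, epochs=2, verbose=0)
```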
