Search Results (3,663)

Search Parameters:
Keywords = extreme learning machine

39 pages, 1708 KB  
Article
Climate Adaptability and Energy Performance in the Greater Bay Area of China: Analysis of Carbon Neutrality Through Green Building Practices
by Xinshu Feng, Fenfang Xiang and Caisheng Liao
Buildings 2025, 15(17), 3066; https://doi.org/10.3390/buildings15173066 - 27 Aug 2025
Abstract
China has committed to carbon neutrality by 2060, necessitating a comprehensive transformation of its building sector, particularly in rapidly urbanizing areas such as the Greater Bay Area (GBA), where subtropical climates, urban heat island effects, and extreme weather events present distinct challenges for achieving carbon reduction objectives through green building practices. This study aims to establish a method for analyzing green building performance in the GBA’s subtropical environment, addressing the twin goals of carbon reduction and climate resilience. The research combined building energy simulations with EnergyPlus and DesignBuilder, LightGBM machine learning models, case studies of 32 buildings in Shenzhen, Hong Kong, and Guangzhou, and a policy evaluation using a PEI. Energy usage in green buildings was 45.3% less than in conventional structures, with Energy Use Intensity ranging from 65.1 to 72.4 kWh/m²/year versus 118.5 to 124.2 kWh/m²/year for traditional buildings. Life-cycle carbon footprint was reduced by 38.4%, and green buildings proved more resilient to typhoons, maintaining 72.4 h of power for residents during storms compared with only 8.3 h for conventional buildings. HVAC system efficiency was the leading factor, accounting for 24.3% of the difference in energy performance. A detailed approach is developed for optimizing subtropical green buildings, with design features and policy recommendations to promote carbon neutrality in rapidly growing metropolitan areas worldwide. Full article
19 pages, 1087 KB  
Article
Exploring Sarcopenic Obesity in the Cancer Setting: Insights from the National Health and Nutrition Examination Survey on Prognosis and Predictors Using Machine Learning
by Yinuo Jiang, Wenjie Jiang, Qun Wang, Ting Wei and Lawrence Wing Chi Chan
Bioengineering 2025, 12(9), 921; https://doi.org/10.3390/bioengineering12090921 - 27 Aug 2025
Abstract
Objective: Sarcopenic obesity (SO) is a combination of depleted skeletal muscle mass and obesity, with a high prevalence, undetected onset, challenging diagnosis, and poor prognosis. However, studies on SO in cancer settings are limited. We aimed to explore the association between SO and mortality and to investigate potential predictors involved in the development of SO, with a further objective of constructing a model to detect its occurrence in cancer patients. Methods: The data of 1432 cancer patients from the National Health and Nutrition Examination Survey (NHANES) from the years 1999 to 2006 and 2011 to 2016 were included. For survival analysis, univariable and multivariable Cox proportional hazard models were used to examine the associations of SO with overall survival, adjusting for potential confounders. For machine learning, six algorithms, including logistic regression, stepwise logistic regression, least absolute shrinkage and selection operator (LASSO), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost), were utilized to build models to predict the presence of SO. The predictive performance of each model was evaluated. Results: Cancer patients with SO were significantly associated with a higher risk of all-cause mortality (adjusted HR 1.368, 95% CI 1.107–1.690) compared with individuals without SO. Among the six machine learning algorithms, the optimal LASSO model achieved the highest area under the curve (AUC) of 0.891 on the training set and 0.873 on the test set, outperforming the other five. Conclusions: SO is a significant risk factor for the prognosis of cancer patients. Our constructed LASSO model to predict the presence of SO is an effective tool for clinical practice. This study is the first to utilize machine learning to explore the predictors of SO among cancer populations, providing valuable insights for future research. Full article
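The abstract above compares models by area under the ROC curve (AUC 0.891/0.873 for the LASSO model). As a reminder of what that metric computes, here is a minimal rank-based AUC in plain NumPy; the function name and the toy labels/scores are illustrative, not taken from the paper:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC as a rank statistic: the probability that a randomly chosen
    positive case outscores a randomly chosen negative case (ties count 1/2)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive beats negative
    ties = (pos[:, None] == neg[None, :]).sum()     # exact score ties
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy check: two positives, two negatives
auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 3 of 4 pairs ordered correctly
```

This pairwise formulation is exactly equivalent to integrating the ROC curve, which makes it a convenient sanity check against library implementations.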

28 pages, 68775 KB  
Article
Machine Learning Approaches for Predicting Lithological and Petrophysical Parameters in Hydrocarbon Exploration: A Case Study from the Carpathian Foredeep
by Arkadiusz Drozd, Tomasz Topór, Anita Lis-Śledziona and Krzysztof Sowiżdżał
Energies 2025, 18(17), 4521; https://doi.org/10.3390/en18174521 - 26 Aug 2025
Abstract
This study presents a novel approach to the parametrization of 3D PETRO FACIES and SEISMO FACIES using supervised and unsupervised learning, supported by a coherent structural and stratigraphic framework, to enhance understanding of the presence of hydrocarbons in the Dzików–Uszkowce region. The prediction relies on selected seismic attributes and well logging data, which are essential in hydrocarbon exploration. Three-dimensional seismic data, a crucial source of information, reflect the propagation velocity of elastic waves influenced by lithological formations and reservoir fluids. However, seismic response similarities complicate accurate seismic image interpretation. Three-dimensional seismic data were also used to build a structural–stratigraphic model that partitions the study area into coeval strata, enabling spatial analysis of the machine learning results. In the 3D seismic model, PETRO FACIES classification achieved an overall accuracy of 80% (SD = 0.01), effectively distinguishing sandstone- and mudstone-dominated facies (RT1–RT4) with F1 scores between 0.65 and 0.85. RESERVOIR FACIES prediction, covering seven hydrocarbon system classes, reached an accuracy of 70% (SD = 0.01). However, class-level performance varied substantially. Non-productive zones such as HNF (No Flow) were identified with high precision (0.82) and recall (0.84, F1 = 0.83), while mixed-saturation facies (HWGS, BSWGS) showed moderate performance (F1 = 0.74–0.81). In contrast, gas-saturated classes (BSGS and HGS) suffered from extremely low F1 scores (0.08 and 0.12, respectively), with recalls as low as 5–7%, highlighting the model’s difficulty in discriminating these units from water-saturated or mixed facies due to overlapping seismic responses and limited training data for gas-rich intervals. To enhance reservoir characterization, SEISMO FACIES analysis identified 12 distinct seismic facies using key attributes. An additional facies (facies 13) was defined to characterize gas-saturated sandstones with high reservoir quality and accumulation potential. Refinements were performed using borehole data on hydrocarbon-bearing zones and clay volume (VCL), applying a 0.3 VCL cutoff and filtering specific facies to isolate zones with confirmed gas presence. The same approach was applied to PETRO FACIES and a new RT facies was extracted. This integrated approach improved mapping of lithological variability and hydrocarbon saturation in complex geological settings. The results were validated against two blind wells that were excluded from the machine learning process. Knowledge of the presence of gas in well N-1 and its absence in well D-24 guided verification of the models within the structural–stratigraphic framework. Full article
(This article belongs to the Section H1: Petroleum Engineering)

41 pages, 9064 KB  
Article
PLSCO: An Optimization-Driven Approach for Enhancing Predictive Maintenance Accuracy in Intelligent Manufacturing
by Aymen Ramadan Mohamed Alahwel Besha, Opeoluwa Seun Ojekemi, Tolga Oz and Oluwatayomi Adegboye
Processes 2025, 13(9), 2707; https://doi.org/10.3390/pr13092707 - 25 Aug 2025
Abstract
Predictive maintenance (PdM) is a cornerstone of smart manufacturing, enabling the early detection of equipment degradation and reducing unplanned downtimes. This study proposes an advanced machine learning framework that integrates the Extreme Learning Machine (ELM) with a novel hybrid metaheuristic optimization algorithm, the Polar Lights Salp Cooperative Optimizer (PLSCO), to enhance predictive modeling in manufacturing processes. PLSCO combines the strengths of the Polar Light Optimizer (PLO), Competitive Swarm Optimization (CSO), and Salp Swarm Algorithm (SSA), utilizing a cooperative strategy that adaptively balances exploration and exploitation. In this mechanism, particles engage in a competitive division process, where winners intensify search via PLO and losers diversify using SSA, effectively avoiding local optima and premature convergence. The performance of PLSCO was validated on CEC2015 and CEC2020 benchmark functions, demonstrating superior convergence behavior and global search capabilities. When applied to a real-world predictive maintenance dataset, the ELM-PLSCO model achieved a high prediction accuracy of 95.4%, outperforming baseline and other optimization-assisted models. Feature importance analysis revealed that torque and tool wear are dominant indicators of machine failure, offering interpretable insights for condition monitoring. The proposed approach presents a robust, interpretable, and computationally efficient solution for predictive maintenance in intelligent manufacturing environments. Full article
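The core learner here, the Extreme Learning Machine, admits a very compact implementation: the hidden layer is drawn at random and only the output weights are solved in closed form, which is what makes it cheap enough to wrap inside a metaheuristic such as PLSCO. A minimal regression sketch in NumPy, not the authors' implementation — the function names, hidden-layer size, and sine-curve toy task are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=30):
    """Train a basic ELM: random, fixed hidden layer; least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy regression task: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = elm_fit(X, y)
mse = np.mean((elm_predict(model, X) - y) ** 2)
```

In an optimization-assisted variant like the one described, a metaheuristic would tune the random weights, hidden size, or activation rather than leave them fixed.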

20 pages, 11941 KB  
Article
Correlation Analysis of Geological Disaster Density and Soil and Water Conservation Prevention and Control Capacity: A Case Study of Guangdong Province
by Yaping Lu, Jingcheng Fu and Li Tang
Water 2025, 17(17), 2527; https://doi.org/10.3390/w17172527 - 25 Aug 2025
Abstract
This study investigates the spatial coupling between geohazard susceptibility and soil conservation capacity in Guangdong Province, China, using integrated spatial analysis and machine learning approaches. Through kernel density estimation, hotspot analysis, principal component analysis (PCA), and t-SNE clustering applied to 11,252 geohazard records and nine soil conservation factors, we identify three critical mechanisms: (1) Topographic steepness (LS factor) constitutes the primary control on geohazard distribution (r = 0.162, p < 0.001), with high-risk clusters concentrated in northeastern mountainous regions (Meizhou-Huizhou-Heyuan); (2) Vegetation coverage (C_mean) mediates rainfall impacts, exhibiting significant risk reduction (r = −0.099, p < 0.001) despite counterintuitive negative correlations with mean rainfall erosivity; (3) Soil conservation effectiveness depends on topographic context, reducing geohazard density in moderate slopes (Cluster 0: 527.04) but proving insufficient in extreme terrain (Cluster 2: LS = 20.587). The emerging role of rainfall variability (R_slope, r = 0.183) highlights climate change impacts. Full article

30 pages, 37444 KB  
Article
A Novel Framework for Winter Crop Mapping Using Sample Generation Automatically and Bayesian-Optimized Machine Learning
by Fukang Feng, Maofang Gao, Ruilu Gao, Yunxiang Jin and Yadong Yang
Agronomy 2025, 15(9), 2034; https://doi.org/10.3390/agronomy15092034 - 25 Aug 2025
Abstract
Timely and accurate winter crop distribution maps are crucial for agricultural monitoring, food security, and sustainable land use planning. However, conventional methods relying on field surveys are labor-intensive, costly, and difficult to scale across large regions. To address these limitations, this study presents an automated winter crop mapping framework that integrates phenology-based sample generation and machine learning classification using time-series Sentinel-2 imagery. The Winter Crop Index (WCI) is developed to capture seasonal vegetation dynamics, and the Otsu algorithm is employed to automatically extract reliable training samples. These samples are then used to train three widely used machine learning classifiers—Random Forest (RF), a Support Vector Machine (SVM), and Extreme Gradient Boosting (XGBoost)—with hyperparameters optimized via Bayesian optimization. The framework was validated in three diverse agricultural regions in China: the Erhai Basin in Yunnan Province, Shenzhou City in Hebei Province, and Jiangling County in Hunan Province. The experimental results demonstrate that the combination of the WCI and Otsu enables a reliable initial classification, facilitating the generation of high-quality training samples. XGBoost achieved the best performance in the Erhai Basin and Shenzhou City, with overall accuracies of 0.9238 and 0.9825 and F1-scores of 0.9233 and 0.9823, respectively. In contrast, the SVM performed best in Jiangling County, yielding an overall accuracy of 0.9574 and an F1-score of 0.9525. The proposed approach enables high-precision winter crop mapping without reliance on manually collected samples, demonstrating strong generalizability and providing a promising solution for large-scale, automated agricultural monitoring. Full article
(This article belongs to the Special Issue Crop Production in the Era of Climate Change)
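The sample-generation step above hinges on Otsu's algorithm, which picks the threshold maximizing between-class variance of a 1-D distribution — here applied to Winter Crop Index values to separate crop from non-crop pixels. A self-contained NumPy sketch; the bimodal toy data and function name are illustrative, not the study's WCI values:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the cut that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()                         # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                             # weight of class below the cut
    w1 = 1.0 - w0                                 # weight of class above the cut
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)      # mean below (guard empty class)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)  # mean above
    between = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
    return centers[np.argmax(between)]

# toy "index" histogram: background pixels near 0.2, winter-crop pixels near 0.7
vals = np.concatenate([np.random.default_rng(1).normal(0.2, 0.05, 1000),
                       np.random.default_rng(2).normal(0.7, 0.05, 1000)])
t = otsu_threshold(vals)
```

Pixels with an index above the learned threshold would then serve as automatic positive training samples for the downstream classifiers.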

19 pages, 2459 KB  
Article
Temporal-Alignment Cluster Identification and Relevance-Driven Feature Refinement for Ultra-Short-Term Wind Power Forecasting
by Yan Yan and Yan Zhou
Energies 2025, 18(17), 4477; https://doi.org/10.3390/en18174477 - 22 Aug 2025
Abstract
Ultra-short-term wind power forecasting is challenged by high volatility and complex temporal patterns, with traditional single-model approaches often failing to provide stable and accurate predictions under diverse operational scenarios. To address this issue, a framework based on the TCN-ELM hybrid model with temporal alignment clustering and feature refinement is proposed for ultra-short-term wind power forecasting. First, dynamic time warping (DTW)–K-means is applied to cluster historical power curves in the temporal alignment space, identifying consistent operational patterns and providing prior information for subsequent predictions. Then, a correlation-driven feature refinement method is introduced to weight and select the most representative meteorological and power sequence features within each cluster, optimizing the feature set for improved prediction accuracy. Next, a TCN-ELM hybrid model is constructed, combining the advantages of temporal convolutional networks (TCNs) in capturing sequential features and an extreme learning machine (ELM) in efficient nonlinear modelling. This hybrid approach enhances forecasting performance through the two models’ synergistic capabilities. Whereas traditional ultra-short-term forecasting often relies solely on historical power at a 15 min resolution, this study reduces the time scale of meteorological forecasts and power samples to within one hour, aiming to improve the model’s reliability in handling sudden meteorological changes within the ultra-short-term horizon. To validate the proposed framework, comparisons are made with several benchmark models, including traditional TCN, ELM, and long short-term memory (LSTM) networks. Experimental results demonstrate that the proposed framework achieves higher prediction accuracy and better robustness across various operational modes, particularly under high-variability scenarios, outperforming conventional models like TCN and ELM. The method provides a reliable technical solution for ultra-short-term wind power forecasting, grid scheduling, and power system stability. Full article
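The clustering stage relies on dynamic time warping, which aligns two power curves before measuring their distance, so that phase-shifted but similar-shaped profiles fall in the same cluster. A textbook DTW distance in NumPy; the sine/noise toy sequences are illustrative, and the K-means step the paper pairs it with is not shown:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])              # local mismatch
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

s1 = np.sin(np.linspace(0, 2 * np.pi, 50))
s2 = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.5)   # phase-shifted copy of s1
s3 = np.random.default_rng(0).normal(size=50)       # unrelated noise
# DTW absorbs the phase shift, so s1 sits far closer to s2 than to s3
```

Because the alignment tolerates time shifts, a Euclidean-based K-means on raw curves would split these two sines into different clusters while a DTW-based one keeps them together.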

12 pages, 1033 KB  
Article
A Time-Series Approach for Machine Learning-Based Patient-Specific Quality Assurance of Radiosurgery Plans
by Simone Buzzi, Pietro Mancosu, Andrea Bresolin, Pasqualina Gallo, Francesco La Fauci, Francesca Lobefalo, Lucia Paganini, Marco Pelizzoli, Giacomo Reggiori, Ciro Franzese, Stefano Tomatis, Marta Scorsetti, Cristina Lenardi and Nicola Lambri
Bioengineering 2025, 12(8), 897; https://doi.org/10.3390/bioengineering12080897 - 21 Aug 2025
Abstract
Stereotactic radiosurgery (SRS) for multiple brain metastases can be delivered with a single isocenter and non-coplanar arcs, achieving highly conformal dose distributions at the cost of extreme modulation of treatment machine parameters. As a result, SRS plans are at a higher risk of patient-specific quality assurance (PSQA) failure compared to standard treatments. This study aimed to develop a machine-learning (ML) model to predict the PSQA outcome (gamma passing rate, GPR) of SRS plans. Five hundred and ninety-two consecutive patients treated between 2020 and 2024 were selected. GPR analyses were performed using a 3%/1 mm criterion and a 95% action limit for each arc. Fifteen plan complexity metrics were used as input features to predict the GPR of an arc. A stratified and a time-series approach were employed to split the data into training (1555 arcs), validation (389 arcs), and test (486 arcs) sets. The ML model achieved a mean absolute error of 2.6% on the test set, with a 0.83% median residual value (measured/predicted). Lower values of the measured GPR tended to be overestimated. Sensitivity and specificity were 93% and 56%, respectively. ML models for virtual QA of SRS can be integrated into clinical practice, facilitating more efficient PSQA approaches. Full article
(This article belongs to the Special Issue Radiation Imaging and Therapy for Biomedical Engineering)
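The reported sensitivity (93%) and specificity (56%) follow from dichotomizing both measured and predicted GPR at the action limit. A small NumPy sketch of that computation — the function name, the toy GPR values, and the convention that a QA failure (GPR below the limit) is the positive class are assumptions on our part, not stated in the listing:

```python
import numpy as np

def qa_outcome_metrics(measured_gpr, predicted_gpr, action_limit=95.0):
    """Sensitivity/specificity of a GPR predictor against a pass/fail limit,
    treating 'arc fails QA' (GPR below the limit) as the positive class."""
    fail_true = measured_gpr < action_limit
    fail_pred = predicted_gpr < action_limit
    tp = np.sum(fail_true & fail_pred)       # failures correctly flagged
    fn = np.sum(fail_true & ~fail_pred)      # failures missed
    tn = np.sum(~fail_true & ~fail_pred)     # passes correctly cleared
    fp = np.sum(~fail_true & fail_pred)      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# toy arcs: measured vs. predicted gamma passing rates (%)
measured = np.array([90.0, 92.0, 96.0, 98.0])
predicted = np.array([91.0, 96.0, 97.0, 93.0])
sens, spec = qa_outcome_metrics(measured, predicted)
```

The low specificity quoted in the abstract is consistent with the noted tendency to overestimate low measured GPR values: a predictor biased high clears failing arcs, while one biased low raises false alarms.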

36 pages, 14083 KB  
Article
Workload Prediction for Proactive Resource Allocation in Large-Scale Cloud-Edge Applications
by Thang Le Duc, Chanh Nguyen and Per-Olov Östberg
Electronics 2025, 14(16), 3333; https://doi.org/10.3390/electronics14163333 - 21 Aug 2025
Abstract
Accurate workload prediction is essential for proactive resource allocation in large-scale Content Delivery Networks (CDNs), where traffic patterns are highly dynamic and geographically distributed. This paper introduces a CDN-tailored prediction and autoscaling framework that integrates statistical and deep learning models within an adaptive feedback loop. The framework is evaluated using 18 months of real traffic traces from a production multi-tier CDN, capturing realistic workload seasonality, cache–tier interactions, and propagation delays. Unlike generic cloud-edge predictors, our design incorporates CDN-specific features and model-switching mechanisms to balance prediction accuracy with computational cost. Seasonal ARIMA (S-ARIMA), Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Online Sequential Extreme Learning Machine (OS-ELM) are combined to support both short-horizon scaling and longer-term capacity planning. The predictions drive a queue-based resource-estimation model, enabling proactive cache–server scaling with low rejection rates. Experimental results demonstrate that the framework maintains high accuracy while reducing computational overhead through adaptive model selection. The proposed approach offers a practical, production-tested solution for predictive autoscaling in CDNs and can be extended to other latency-sensitive edge-cloud services with hierarchical architectures. Full article
(This article belongs to the Special Issue Next-Generation Cloud–Edge Computing: Systems and Applications)

22 pages, 9182 KB  
Article
Sensor Synergy in Bathymetric Mapping: Integrating Optical, LiDAR, and Echosounder Data Using Machine Learning
by Emre Gülher and Ugur Alganci
Remote Sens. 2025, 17(16), 2912; https://doi.org/10.3390/rs17162912 - 21 Aug 2025
Abstract
Bathymetry, the measurement of water depth and underwater terrain, is vital for scientific, commercial, and environmental applications. Traditional methods like shipborne echosounders are costly and inefficient in shallow waters due to limited spatial coverage and accessibility. Emerging technologies such as satellite imagery, drones, and spaceborne LiDAR offer cost-effective and efficient alternatives. This research explores integrating multi-sensor datasets to enhance bathymetric mapping in coastal and inland waters by leveraging each sensor’s strengths. The goal is to improve spatial coverage, resolution, and accuracy over traditional methods using data fusion and machine learning. Gülbahçe Bay in İzmir, Turkey, serves as the study area. Bathymetric modeling uses Sentinel-2, Göktürk-1, and aerial imagery with varying resolutions and sensor characteristics. Model calibration evaluates independent and integrated use of single-beam echosounder (SBE) and satellite-based LiDAR (ICESat-2) during training. After preprocessing, Random Forest and Extreme Gradient Boosting algorithms are applied for bathymetric inference. Results are assessed using accuracy metrics and IHO CATZOC standards, achieving A1 level for 0–10 m, A2/B for 0–15 m, and C level for 0–20 m depth intervals. Full article
(This article belongs to the Section Environmental Remote Sensing)

27 pages, 1502 KB  
Review
Monitoring of Air Pollution from the Iron and Steel Industry: A Global Bibliometric Review
by Ekaterina Zolotova, Natalya Ivanova and Sezgin Ayan
Atmosphere 2025, 16(8), 992; https://doi.org/10.3390/atmos16080992 - 21 Aug 2025
Abstract
The iron and steel industry is one of the main industrial contributors to air pollution. The aim of our study is to analyze modern studies on air pollution by the iron and steel industry, as a result of which the geography and research directions and the degree of development of current issues will be assessed, and the most cited articles and journals will be identified. A review of contemporary research (2018–2024) was conducted on the basis of articles with a digital object identifier (DOI) using machine learning methodologies (VOSviewer software version 1.6.20). The number of articles selected was 80. The heat map of study density clearly showed that the geographic distribution of studies was extremely uneven. A total of 65% of the studies were conducted in China, 9% in Nigeria, 6% in Russia, 3% in Poland, and 3% in Turkey. The remaining 14% of articles represent a series of single studies conducted in 11 countries. The revealed geographical imbalance between countries with developed production and the number of studies conducted in them shows a significant shortcoming in monitoring research. Most of the studies (20%) were devoted to the assessment of multicomponent emissions. A special place among them was occupied by the inventory of emissions using various methods. The next main directions in terms of the number of articles were aimed at studying the toxic metal emissions (19%), at the analysis of organic emissions (19%), at the modeling and forecasting of emissions (18%), and at particulate matter studies (15%). The main features of the articles for each direction are briefly noted. Citation analysis made it possible to compile a rating of articles of greatest scientific interest and the most authoritative journals. Citation network analysis revealed important insights into the structure of scientific communication in the monitoring of atmospheric pollution from the iron and steel industry. The results of our review will contribute to the consolidation of scientists, the identification of gaps in scientific knowledge, and the improvement of environmental policy and technological solutions. Full article
(This article belongs to the Section Air Pollution Control)

21 pages, 3286 KB  
Article
ELM-GA-Based Active Comfort Control of a Piggyback Transfer Robot
by Liyan Feng, Xinping Wang, Teng Liu, Kaicheng Qi, Long Zhang, Jianjun Zhang and Shijie Guo
Machines 2025, 13(8), 748; https://doi.org/10.3390/machines13080748 - 21 Aug 2025
Abstract
The improvement of comfort in the human–robot interaction for care recipients is a significant challenge in the development of nursing robots. The existing methods for enhancing comfort largely depend on subjective comfort questionnaires, which are prone to unavoidable errors. Additionally, traditional passive movement control approaches lack the ability to adapt and effectively improve care recipient comfort. To address these problems, this paper proposes an active, personalized intelligent control method based on neural networks. A muscle activation prediction model is established for the piggyback transfer robot, enabling dynamic adjustments during the care process to improve human comfort. Initially, a kinematic analysis of the piggyback transfer robot is conducted to determine the optimal back-carrying trajectory. Experiments were carried out to measure human–robot contact forces, chest holder rotation angles, and muscle activation levels. Subsequently, an Online Sequential Extreme Learning Machine (OS-ELM) algorithm is used to train a predictive model. The model takes the contact forces and chest holder rotation angle as inputs, while outputting the latissimus dorsi muscle activation levels. The Genetic Algorithm (GA) is then employed to dynamically adjust the chest holder’s rotation angle to minimize the difference between actual muscle activation and the comfort threshold. Comparative experiments demonstrate that the proposed ELM-GA-based active control method effectively enhances comfort during the piggyback transfer process, as evidenced by both subjective feedback and objective measurements of muscle activation. Full article
(This article belongs to the Special Issue Vibration Isolation and Control in Mechanical Systems)
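The GA stage above searches for the chest-holder angle that keeps predicted muscle activation near the comfort threshold. A tiny real-coded genetic algorithm over one bounded variable illustrates the idea; the objective below is a stand-in surrogate (a shifted sine), not the trained OS-ELM model, and all names and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def minimize_ga(f, lo, hi, pop=30, gens=60, mut=0.1):
    """Minimal real-coded GA for a 1-D bounded objective: truncation
    selection keeps the best half unchanged (elitism); the other half
    are mutated clones of the survivors."""
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        survivors = x[np.argsort(f(x))[: pop // 2]]            # best half
        children = rng.choice(survivors, pop - len(survivors))  # clone...
        children = children + rng.normal(0, mut * (hi - lo), len(children))  # ...mutate
        x = np.clip(np.concatenate([survivors, children]), lo, hi)
    return x[np.argmin(f(x))]

# surrogate objective: squared deviation of "activation" sin(angle)
# from a comfort threshold of 0.5; the optimum is angle = pi/6
best_angle = minimize_ga(lambda a: (np.sin(a) - 0.5) ** 2, 0.0, np.pi / 2)
```

In the paper's setup, the objective evaluated at each candidate angle would be the OS-ELM prediction of latissimus dorsi activation rather than this analytic surrogate.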

22 pages, 3330 KB  
Article
Predicting the Bearing Capacity of Shallow Foundations on Granular Soil Using Ensemble Machine Learning Models
by Husein Ali Zeini, Mohammed E. Seno, Esraa Q. Shehab, Emad A. Abood, Hamza Imran, Luís Filipe Almeida Bernardo and Tiago Pinto Ribeiro
Geotechnics 2025, 5(3), 57; https://doi.org/10.3390/geotechnics5030057 - 20 Aug 2025
Abstract
Shallow foundations are widely used in both terrestrial and marine environments, supporting critical structures such as buildings, offshore wind turbines, subsea platforms, and infrastructure in coastal zones, including piers, seawalls, and coastal defense systems. Accurately determining the soil bearing capacity for shallow foundations presents a significant challenge, as it requires considerable materials and testing equipment as well as a substantial amount of time. Consequently, this research applied machine learning algorithms, trained on data from previous experimental tests, to predict the soil bearing capacity of shallow foundations. Four ensemble models were employed: AdaBoost, Extreme Gradient Boosting (XGBoost), Gradient Boosting Regression Trees (GBRTs), and Light Gradient Boosting Machine (LightGBM). To enhance each model’s efficacy and identify the optimal hyperparameters, grid search was conducted in conjunction with k-fold cross-validation. The models were evaluated using the R² value, MAE, and RMSE. The R² values ranged between 0.817 and 0.849, with the GBRT model predicting more accurately than the other models on the training, testing, and combined datasets. Variable importance analysis showed that foundation width was the most important parameter affecting the shallow foundation bearing capacity. The findings from the refined machine learning approach were compared with well-known empirical and modern machine learning equations. Finally, the study provides a web application that helps geotechnical engineers worldwide determine the ultimate bearing capacity of shallow foundations. Full article
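Grid search with k-fold cross-validation, as used here for hyperparameter tuning, reduces to two small pieces: a fold generator and an enumeration of the parameter grid. A minimal NumPy sketch — the grid values shown are illustrative, not the study's actual search space:

```python
import numpy as np
from itertools import product

def kfold_indices(n, k, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)                    # k near-equal shuffled folds
    for i in range(k):
        train = np.concatenate(folds[:i] + folds[i + 1:])  # all folds but one
        yield train, folds[i]                              # held-out fold

# grid search skeleton: score every hyperparameter combination by CV,
# then refit the best combination on the full training set
grid = {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
folds = list(kfold_indices(20, 5))
```

Each of the four models in the study would loop `for params in combos: for train, test in folds: ...`, averaging the fold scores to pick its winning configuration.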

36 pages, 6877 KB  
Article
Machine Learning for Reservoir Quality Prediction in Chlorite-Bearing Sandstone Reservoirs
by Thomas E. Nichols, Richard H. Worden, James E. Houghton, Joshua Griffiths, Christian Brostrøm and Allard W. Martinius
Geosciences 2025, 15(8), 325; https://doi.org/10.3390/geosciences15080325 - 19 Aug 2025
Viewed by 218
Abstract
We have developed a generalisable machine learning framework for reservoir quality prediction in deeply buried clastic systems. Applied to the Lower Jurassic deltaic sandstones of the Tilje Formation (Halten Terrace, North Sea), the approach integrates sedimentological facies modelling with mineralogical and petrophysical prediction in a single workflow. Using supervised Extreme Gradient Boosting (XGBoost) models, we classify reservoir facies, predict permeability, and estimate the abundance of porosity-preserving grain-coating chlorite directly from standard wireline log parameters (gamma ray, neutron porosity, caliper, photoelectric effect, bulk density, compressional and shear sonic, and deep resistivity). Model development and evaluation employed stratified K-fold cross-validation to preserve facies proportions and mineralogical variability across folds, supporting robust performance assessment and testing generalisability across a geologically heterogeneous dataset. Core description, point-count petrography, and core plug analyses were used for ground truthing. The models distinguish chlorite-associated facies with up to 80% accuracy and estimate permeability with a mean absolute error of 0.782 log(mD), improving substantially on conventional regression-based approaches. The models also enable prediction, for the first time using wireline logs, of grain-coating chlorite abundance, with a mean absolute error of 1.79% (range 0–16%). The framework takes advantage of diagnostic petrophysical responses associated with chlorite and high porosity, yielding geologically consistent and interpretable results. It addresses persistent challenges in characterising thinly bedded, heterogeneous intervals beyond the resolution of traditional methods and is transferable to other clastic reservoirs, including those considered for carbon storage and geothermal applications. The workflow supports cost-effective, high-confidence subsurface characterisation and contributes a flexible methodology for future work at the interface of geoscience and machine learning. Full article
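The stratified K-fold evaluation described above (folds that preserve facies proportions so accuracy is assessed across a heterogeneous dataset) can be sketched as follows. This is a synthetic-data sketch using scikit-learn's GradientBoostingClassifier as a stand-in for the paper's XGBoost models; the wireline inputs and the toy rule labelling chlorite-associated facies are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
n = 400
# Hypothetical wireline inputs: gamma ray (API), neutron porosity (frac),
# bulk density (g/cm^3), deep resistivity (ohm.m, log-scaled below)
gr = rng.uniform(20.0, 150.0, n)
nphi = rng.uniform(0.05, 0.35, n)
rhob = rng.uniform(2.0, 2.7, n)
rt = rng.lognormal(1.0, 0.8, n)
X = np.column_stack([gr, nphi, rhob, np.log10(rt)])

# Toy facies label: chlorite-associated where gamma ray is low and porosity is preserved
y = ((gr < 80.0) & (nphi > 0.15)).astype(int)

# StratifiedKFold keeps the facies (class) proportions the same in every fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(GradientBoostingClassifier(random_state=1), X, y, cv=cv, scoring="accuracy")
print(f"fold accuracies: {scores.round(3)}")
print(f"mean accuracy: {scores.mean():.3f}")
```

For the permeability and chlorite-abundance targets, the same folds would be reused with a regressor and a mean-absolute-error scorer, keeping classification and regression assessments comparable.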

26 pages, 3620 KB  
Article
Estimation Method of Leaf Nitrogen Content of Dominant Plants in Inner Mongolia Grassland Based on Machine Learning
by Lishan Jin, Xiumei Wang, Jianjun Dong, Ruochen Wang, Hefei Wen, Yuyan Sun, Wenbo Wu, Zhihang Zhang and Can Kang
Nitrogen 2025, 6(3), 70; https://doi.org/10.3390/nitrogen6030070 - 19 Aug 2025
Viewed by 262
Abstract
Accurate nitrogen (N) content estimation in grassland vegetation is essential for ecosystem health and for optimizing pasture quality, as N supports plant photosynthesis and water uptake. Traditional laboratory methods are slow and unsuitable for large-scale monitoring, while remote sensing models often face accuracy challenges due to the complexity of hyperspectral data. This study improves N content estimation in the typical steppe of Inner Mongolia by integrating hyperspectral remote sensing with advanced machine learning. Hyperspectral reflectance from Leymus chinensis and Cleistogenes squarrosa was measured using an ASD FieldSpec-4 spectrometer, and leaf N content was determined with an elemental analyzer. To address the high-dimensional data, four spectral transformations were applied: band combination, first-order derivative transformation (FDT), continuous wavelet transformation (CWT), and continuum removal transformation (CRT), with the Least Absolute Shrinkage and Selection Operator (LASSO) used for feature selection. Four machine learning models, Extreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), Artificial Neural Network (ANN), and K-Nearest Neighbors (KNN), were evaluated via five-fold cross-validation. Wavelet transformation provided the most informative parameters. The SVM model achieved the highest accuracy for L. chinensis (R2 = 0.92), and the ANN model performed best for C. squarrosa (R2 = 0.72). This study demonstrates that integrating wavelet transformation with machine learning offers a reliable, scalable approach for grassland N monitoring and management. Full article
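The pipeline described above (spectral transformation, then LASSO band selection, then a cross-validated regressor) can be sketched as follows. This sketch uses synthetic spectra and shows only the first-order derivative transformation (FDT) with an SVM regressor; the wavelet and continuum-removal transforms are omitted, and the wavelength range, band positions, and N-sensitivity of the two synthetic absorption dips are assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n_samples, n_bands = 120, 200
wavelengths = np.linspace(400.0, 2400.0, n_bands)  # nm, assumed range
# Synthetic reflectance spectra with two hypothetical N-sensitive absorption dips
spectra = rng.normal(0.4, 0.05, (n_samples, n_bands))
n_content = rng.uniform(1.0, 4.0, n_samples)  # leaf N, % dry mass
spectra[:, 50] -= 0.05 * n_content
spectra[:, 140] -= 0.04 * n_content

# First-order derivative transformation (FDT): the derivative responds
# on the shoulders of each absorption dip
fdt = np.gradient(spectra, wavelengths, axis=1)

# LASSO feature selection: keep only bands with non-zero coefficients
lasso = make_pipeline(StandardScaler(), Lasso(alpha=0.02, max_iter=10000))
lasso.fit(fdt, n_content)
selected = np.flatnonzero(lasso.named_steps["lasso"].coef_)
print(f"{selected.size} bands selected of {n_bands}")

# SVM regression on the selected bands, five-fold cross-validation
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
r2 = cross_val_score(svm, fdt[:, selected], n_content,
                     cv=KFold(n_splits=5, shuffle=True, random_state=2), scoring="r2")
print(f"per-fold R2: {r2.round(2)}")
```

Swapping the transformation step (CWT, CRT, or band combinations) or the final estimator (XGBoost, ANN, KNN) leaves the selection and cross-validation scaffold unchanged.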
