Search Results (170)

Search Parameters:
Keywords = variable-weighted linear combination

16 pages, 2371 KB  
Article
Knee Joint Loading During Supported Standing in Children and Adolescents with Severe Cerebral Palsy: Effects of Verticalization and Joint Position
by René Althaus and Eva M. Steindl
Children 2026, 13(4), 497; https://doi.org/10.3390/children13040497 - 1 Apr 2026
Viewed by 222
Abstract
Background: Supported standing is widely used in children and adolescents with severe cerebral palsy (CP) as part of rehabilitation programs aimed at maintaining musculoskeletal health and enabling participation. Despite its frequent clinical use, quantitative biomechanical evidence describing knee joint loading under different positioning conditions remains limited, particularly in individuals classified as GMFCS IV–V. The primary objective of this study was to quantify knee joint loading during supported standing across predefined combinations of verticalization angle and hip/knee flexion. The secondary objective was to investigate interaction effects between these variables and to assess whether increasing hip/knee flexion is associated with a linear reduction in knee joint loading. Methods: Twenty-six children and adolescents with CP (GMFCS IV–V; age 6–17 years) participated in the study. Measurements were performed using a standardized back-supported standing device. Knee joint loading was measured using integrated pressure sensors across six verticalization angles (0°, 30°, 45°, 60°, 75°, 90°) combined with four hip/knee flexion angles (0°, 15°, 30°, 45°). Forces were normalized to body weight (%BW). Statistical analysis was performed using repeated-measures analysis of variance. Results: Knee joint loading increased consistently with greater verticalization across all tested hip/knee flexion conditions (p < 0.001). A non-linear pattern was observed across flexion angles. Interaction effects between verticalization and hip/knee flexion were observed. Knee joint loading did not decrease linearly with increasing flexion; instead, the lowest loading was observed at approximately 15° hip/knee flexion, whereas both full extension and 45° flexion resulted in higher loads. Conclusions: Verticalization angle represents a key factor influencing knee joint loading during supported standing in children and adolescents with severe CP. Knee joint loading increases with greater verticalization, while hip/knee position shows a non-linear influence. The absence of a linear reduction in loading with increasing flexion highlights the presence of interaction effects between positioning variables and supports individualized positioning strategies in supported standing programs. Full article
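The forces here are normalized to body weight (%BW). A minimal sketch of that normalization, with hypothetical numbers rather than study data:

```python
def percent_body_weight(force_n: float, mass_kg: float, g: float = 9.81) -> float:
    """Express a measured joint force (newtons) as a percentage of body weight."""
    return 100.0 * force_n / (mass_kg * g)

# Hypothetical example: a 300 N knee load on a 40 kg participant (about 76 %BW).
load_pct = percent_body_weight(300.0, 40.0)
```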

25 pages, 11171 KB  
Article
Multilevel Flood Susceptibility Mapping by Fuzzy Sets, Analytical Hierarchy Process, Weighted Linear Combination and Random Forest
by Pece V. Gorsevski and Ivica Milevski
ISPRS Int. J. Geo-Inf. 2026, 15(4), 148; https://doi.org/10.3390/ijgi15040148 - 1 Apr 2026
Viewed by 757
Abstract
Given the increasing frequency and intensity of floods, which are mostly caused by continuous climate change and growing human pressures on the environment, accurately identifying areas that are susceptible to flooding is a crucial priority for risk reduction and long-term land use planning. Thus, this research examines multilevel flood susceptibility mapping across North Macedonia, using 328 past flood occurrences, 14 conditioning variables derived from a digital elevation model, simplified lithology, and calculated direct runoff. The methodology integrates fuzzy set theory (Fuzzy), analytic hierarchy process (AHP), weighted linear combination (WLC), and random forest (RF) approaches. The two-stage process employs distinct sets of conditioning factors in sequential flood susceptibility mapping: first, generating Fuzzy/AHP/WLC predictions and pseudo-absence data, and second, producing five RF predictions by varying pseudo-absences and binary cutoffs. Validation results indicate that the very high susceptibility class (0.8–1.0) of the Fuzzy/AHP/WLC model predicted 46.6% of flood pixels within 31.6% of the total area. In comparison, the very high susceptibility class of the RF models predicted 88.5%, 78.3%, 60.6%, 48.5%, and 28.3% of flood pixels within 54.7%, 42.2%, 30.5%, 27.0%, and 25.1% of the total area, respectively. The RF models achieved area under the curve (AUC) values exceeding 0.850, with a maximum of 0.966. Additionally, areas of high and low uncertainty were highlighted using a standard deviation map created from the RF models, highlighting agreement/disagreement and potential locations for methodological improvement and focused sampling. The findings also highlight the potential of the multilevel technique for mapping flood susceptibility and call for more research into its potential for future studies and practical uses. Full article
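The weighted linear combination (WLC) step named above reduces to a weight-averaged sum of standardized factor scores. A minimal sketch, with hypothetical weights rather than the study's AHP-derived values:

```python
def weighted_linear_combination(factors, weights):
    """Combine standardized factor scores (each on a 0-1 scale) into a single
    susceptibility index using weights that sum to 1."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * f for w, f in zip(weights, factors))

# Hypothetical cell with three conditioning factors (e.g. slope, lithology, runoff).
score = weighted_linear_combination([0.9, 0.4, 0.7], [0.5, 0.2, 0.3])
```

In practice each factor layer is first standardized to a common 0-1 scale (e.g. via fuzzy membership functions) before the combination is applied cell by cell.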

21 pages, 3428 KB  
Article
Subseasonal-to-Seasonal Prediction of Arctic Sea Ice Concentration and Thickness Using a Multivariate Linear Markov Model
by Jijia Yang, Xuewei Li, Peng Lu, Qingkai Wang and Zhijun Li
J. Mar. Sci. Eng. 2026, 14(7), 637; https://doi.org/10.3390/jmse14070637 - 30 Mar 2026
Viewed by 229
Abstract
Rapid changes in Arctic summer sea ice exert substantial influences on the polar climate system, maritime navigation, and resource exploitation, while subseasonal-to-seasonal (S2S) prediction of sea ice state remains highly uncertain. Using daily observations and reanalysis data of sea ice concentration (SIC) and thickness (SIT) from 1979 to 2023, together with concurrent atmospheric and oceanic fields, this study develops a multivariate linear Markov model to perform S2S predictions of Arctic summer sea ice. Sensitivity experiments with different variable combinations, weighting strategies, and modal truncation schemes are conducted, and predictive skill is systematically evaluated against persistence and climatological baselines. Results indicate that the model exhibits stable forecast skill without pronounced error accumulation at extended lead times. SIC predictability is primarily governed by its intrinsic spatiotemporal persistence and is significantly modulated by oceanic thermodynamic forcing, particularly sea surface temperature and surface net energy flux, highlighting a pronounced oceanic memory effect. In contrast, local atmospheric dynamic variables provide limited incremental skill. For SIT, predictability is dominated by its own historical state, with SIC contributing marginal short-term improvement and air–sea coupling exerting weak influence. Overall, the proposed framework effectively extracts dominant predictable signals with clear physical interpretability, providing a computationally efficient statistical approach for S2S prediction of Arctic summer sea ice. Full article
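A linear Markov model advances the state with a fixed linear operator, x(t+1) = A x(t). A minimal sketch with a hypothetical two-variable state; the actual model operates on mode-truncated multivariate fields, not two scalars:

```python
def markov_step(A, state):
    """One step of a linear Markov model: x(t+1) = A @ x(t)."""
    return [sum(a * s for a, s in zip(row, state)) for row in A]

def forecast(A, state, steps):
    """Iterate the transition operator out to a given lead time."""
    for _ in range(steps):
        state = markov_step(A, state)
    return state

# Hypothetical transition matrix for a (SIC anomaly, SIT anomaly) state.
A = [[0.90, 0.10],
     [0.05, 0.95]]
```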

24 pages, 15270 KB  
Article
The Analysis of Urban Nighttime Light Spatial Heterogeneity and Driving Factors Based on SDGSAT-1 Data
by Jinke Liu, Yiran Zhang, Yifei Zhu, Xuesheng Zhao and Wei Guo
Sensors 2026, 26(7), 2094; https://doi.org/10.3390/s26072094 - 27 Mar 2026
Viewed by 328
Abstract
Artificial light at night (ALAN) data is widely used in urban function analysis and socio-economic activity monitoring, but its application at the micro-scale of cities still faces challenges. This study utilizes high spatial resolution SDGSAT-1 nighttime light data to explore the spatial heterogeneity of ALAN at the street scale in two representative Chinese cities—Beijing and Guangzhou. By integrating multi-source data (such as building vector data, road networks, and point of interest data), a multi-dimensional indicator system covering urban morphology, functional structure, and transportation accessibility is constructed. Based on this, the study employs a Geographically Weighted Random Forest (GWRF) model combined with the Shapley Additive Explanations (SHAP) method to deeply analyze the non-linear relationships between ALAN intensity and multiple driving factors, as well as their spatial variability. Results demonstrate the superiority of the GWRF model over global models in capturing spatial non-stationarity, with R2 values of 0.67 for Beijing and 0.74 for Guangzhou, compared to 0.62 and 0.71 for the random forest models, respectively. Road density is the dominant factor influencing nighttime light intensity in both Beijing and Guangzhou. However, the relationship between ALAN and its driving factors varies across these cities. In Beijing, a balanced multi-factor model is observed, whereas in Guangzhou, ALAN intensity is primarily driven by road density, with secondary influences from other factors like sky view factor. This study validates SDGSAT-1 for micro-scale analysis, offering a scientific basis for differentiated urban lighting planning. Full article
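Geographically weighted models such as GWRF down-weight training observations by their distance from the focal location. A common kernel choice (assumed here for illustration; the paper's kernel may differ) is Gaussian:

```python
import math

def gaussian_weight(d: float, bandwidth: float) -> float:
    """Geographic kernel weight: observations farther from the focal
    location contribute less to the local model fit."""
    return math.exp(-0.5 * (d / bandwidth) ** 2)
```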
(This article belongs to the Special Issue Sensor-Based Systems for Environmental Monitoring and Assessment)

21 pages, 1254 KB  
Article
Solar and Anthropogenic Climate Drivers: An Updated Regression Model and Refined Forecast
by Frank Stefani
Atmosphere 2026, 17(3), 252; https://doi.org/10.3390/atmos17030252 - 28 Feb 2026
Viewed by 1041
Abstract
Recently, an attempt was made to quantify the respective solar and anthropogenic influences on the terrestrial climate, and to cautiously predict the global mean temperature over the next 130 years. In a double regression analysis, both the binary logarithm of carbon dioxide concentration and the geomagnetic aa index were used as predictors of the sea surface temperature (SST) since the mid-19th century. The regression results turned out to be sensitive to end effects, leading to a disconcertingly broad range of the climate sensitivity between 0.6 K and 1.6 K per doubling of CO2 when varying the final year of the data used. The aim of this paper is to significantly narrow down this range. To this end, the correlations between the two predictors and the dependent variable (SST) are analysed in detail. It is demonstrated that the SST can be predicted until around 2000 almost perfectly using only the aa index, whereas for later periods the role of CO2 increases significantly. Therefore, the weight of the aa index is fixed to its very robust outcome (around 0.04 K/nT) from the single and double regressions up to 1990. The SST data, reduced by the aa contribution thus specified, are then used in a single regression with CO2 as the only remaining predictor. This results in a significant reduction in the range of CO2 sensitivity, narrowing it to 1.1–1.4 K. Given the exceptionally high temperatures in recent years, these values are considered a kind of upper limit that could still be subject to downward corrections when future data are incorporated. Based on this estimate, a prediction of the temperature up to the year 2100 is ventured, assuming various constant emission scenarios combined with a linear sink model for atmospheric CO2 content. The most risky factor in this prediction is the future of the aa index. For its forecast, the results of a recently developed synchronization model of the solar dynamo are tentatively employed. Full article
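The two-step procedure described, subtracting a fixed aa contribution (around 0.04 K/nT) and then regressing the residual on the binary logarithm of CO2, can be sketched as follows. All numbers below are synthetic and illustrative, not the paper's data:

```python
import math

def ols_slope_intercept(xs, ys):
    """Least-squares fit of y = a*x + b for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def co2_sensitivity(sst, aa, co2_ppm, aa_weight=0.04):
    """Subtract a fixed aa contribution (K/nT) from SST, then regress the
    residual on log2(CO2); the slope is the sensitivity per CO2 doubling."""
    residual = [t - aa_weight * a for t, a in zip(sst, aa)]
    x = [math.log2(c) for c in co2_ppm]
    slope, _ = ols_slope_intercept(x, residual)
    return slope
```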
(This article belongs to the Special Issue The Challenge of Weather and Climate Prediction (2nd Edition))

20 pages, 2727 KB  
Article
Phenotypic Diversity and Breeding Potential of Passiflora Germplasm Conserved Under Tropical Semi-Arid Conditions for Fruit Yield and Quality
by Mariana Laurência Nunes de Lima, Onildo Nunes de Jesus, Fábio Gelape Faleiro, Juliana Martins Ribeiro and Natoniel Franklin de Melo
Agriculture 2026, 16(5), 521; https://doi.org/10.3390/agriculture16050521 - 26 Feb 2026
Viewed by 355
Abstract
Passiflora germplasm represents an important genetic resource for improving fruit yield and quality in breeding programs targeting semi-arid environments. This study aimed to assess the phenotypic diversity, genetic parameters, and breeding potential of Passiflora accessions conserved in the Passion Fruit Active Germplasm Bank of Embrapa Semiárido. A total of 55 accessions, predominantly Passiflora cincinnata Mast., were evaluated using morphoagronomic descriptors related to plant, flower, and fruit traits. Quantitative data were analyzed using mixed linear models (REML/BLUP) to estimate genetic parameters, and multivariate analyses were applied to characterize phenotypic divergence. Substantial phenotypic variability was observed, particularly for fruit-related traits. Fruit weight ranged from 43.25 to 142.88 g, pulp weight ranged from 7.86 to 51.37 g, and pulp yield ranged from 17.06% to 40.27% among accessions. Broad-sense heritability estimates for key fruit traits were moderate to high, reaching 0.50 for fruit weight, 0.49 for pulp weight, and 0.36 for pulp yield, indicating favorable prospects for selection. Principal Component Analysis explained 66.0% of the total variation in the first two components, with fruit size, pulp-related traits, and seed number contributing most strongly to accession differentiation. Multivariate analyses consistently identified accessions 1 and 16 as superior for fruit weight and pulp yield, whereas accession 55 combined high fruit weight with elevated soluble solid content (up to 14.24 °Brix) but lower pulp yield. Overall, the observed variability highlights the relevance of Passiflora germplasm conserved under semi-arid conditions as a valuable resource for breeding programs focused on fruit yield, quality, and adaptation to water-limited environments. Full article
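Broad-sense heritability, as estimated here via REML/BLUP, is the genetic share of phenotypic variance, H² = V_G / (V_G + V_E) in its simplest form (ignoring components such as genotype-by-environment interaction). A one-line sketch:

```python
def broad_sense_heritability(var_genetic: float, var_environment: float) -> float:
    """H^2 = Vg / (Vg + Ve): the share of phenotypic variance that is genetic
    (simplified; ignores genotype-by-environment interaction variance)."""
    return var_genetic / (var_genetic + var_environment)
```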
(This article belongs to the Special Issue Fruit Quality Formation and Regulation in Fruit Trees)

22 pages, 4492 KB  
Article
Raman Spectroscopic Classification of Polyethylene Glycol Samples of Varying Molecular Weights Using Machine Learning
by Thomas J. Tewes, Ciara N. Duismann, Udita Singh, Peter F. W. Simon and Dirk P. Bockmühl
Molecules 2026, 31(5), 778; https://doi.org/10.3390/molecules31050778 - 26 Feb 2026
Viewed by 479
Abstract
Polyethylene glycol (PEG) is a widely used water-soluble polymer (WSP) whose properties such as crystallinity depend on molecular weight. This study explores whether Raman spectroscopy, combined with supervised machine learning, can differentiate PEG samples of defined molecular weights within the investigated molecular weight range. Eight PEG materials with molecular weights ranging from 1000 to 35,000 g/mol were analyzed by confocal Raman microscopy under standardized conditions. A Support Vector Machine (SVM) classifier achieved 93.4% accuracy in five-fold cross-validation and 72.6% on an independent test set, confirming that molecular-weight-dependent vibrational signatures are present in the Raman spectra. Principal component analysis followed by linear discriminant analysis (PCA–LDA) models supported these findings, revealing that discriminative information arises mainly from line-shape and shoulder regions rather than from peak centers, consistent with gradual increases in conformational order. Although sample morphology and drying behavior introduce variability, the results demonstrate that Raman spectroscopy provides a reproducible, non-destructive means of distinguishing between PEG samples of different molecular weights. The established workflow provides a foundation for future quantitative evaluations of spectral trends, cross-polymer generalization, and adaptation to variable measurement conditions to enhance applicability in analytical and industrial contexts. Full article
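The 93.4% figure comes from five-fold cross-validation: the spectra are split into five folds, each serving once as the held-out set. A minimal index-splitting sketch (real workflows typically shuffle and stratify; this one does neither):

```python
def k_fold_indices(n_samples: int, k: int):
    """Split sample indices into k contiguous folds for cross-validation;
    each fold serves once as the held-out test set."""
    folds = []
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds
```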
(This article belongs to the Special Issue Recent Advances in Structural Characterization by Raman Spectroscopy)

22 pages, 16604 KB  
Article
Predicting Net Primary Productivity Using Geographically Weighted Machine Learning: A Comparative Study in the Eastern Sahel
by Kopano Letsela, Farai Mlambo and Elhadi Adam
Sustainability 2026, 18(5), 2217; https://doi.org/10.3390/su18052217 - 25 Feb 2026
Viewed by 443
Abstract
Net Primary Productivity (NPP) is a vital ecological indicator used to monitor land productivity and the health of ecosystems, particularly in climate-sensitive areas like the Eastern Sahel. However, the spatial heterogeneity in the relationships between NPP and environmental factors complicates accurate predictions. This research aimed to evaluate the effectiveness of geographically weighted statistical and machine learning models in predicting NPP, while considering spatial non-stationarity and non-linear interactions. The study used 939 spatial observations of the NPP in conjunction with four environmental predictors: rainfall, temperature, soil moisture, and elevation, spanning Niger, Chad, and Sudan. Initially, a global Ordinary Least Squares (OLS) model was used as a reference point. Subsequently, three geographically weighted models, Geographically Weighted Regression (GWR), Geographically Weighted Random Forest (GWRF), and Geographically Weighted Neural Network (GWNN), were executed to account for spatial variability and non-linear effects. The performance of the models was assessed using R2, Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and spatial residual diagnostics. All geographically weighted models outperformed the global OLS baseline in terms of both predictive accuracy and spatial sensitivity. GWNN achieved the highest performance (R2 = 0.9360; RMSE = 0.0333), followed closely by GWRF (R2 = 0.9308) and GWR (R2 = 0.9207), compared to OLS (R2 = 0.8354). The residual spatial autocorrelation was completely resolved in GWNN and GWRF. Rainfall was consistently the most significant predictor, while the effects of other variables, such as elevation and temperature, varied between different spatial contexts. The findings of this research emphasise the value of combining spatial weighting with machine learning methodologies to model ecological productivity in heterogeneous landscapes. The GWNN model, in particular, stands out as a powerful tool for improving NPP predictions in regions sensitive to climate change. Full article
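The comparison metrics reported above can be computed as follows, a plain sketch of RMSE, MAE and R²:

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (RMSE, MAE, R^2) for a set of predictions."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n           # mean squared error
    mae = sum(abs(e) for e in errs) / n          # mean absolute error
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot                # 1 - SS_res / SS_tot
    return math.sqrt(mse), mae, r2
```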

43 pages, 11743 KB  
Article
Rebar Price Prediction in Guangzhou, China: A Comparison of Statistical, Machine Learning and Hybrid Models
by Jiangnan Zhao, Xiaomin Dai, Peng Gao, Shengqiang Ma and Lei Wang
Buildings 2026, 16(5), 905; https://doi.org/10.3390/buildings16050905 - 25 Feb 2026
Viewed by 295
Abstract
Price volatility in steel reinforcement bars (rebar) plays a pivotal role in managing construction project costs, with precise forecasting being essential for maintaining corporate profitability and ensuring market stability. This research conducts a comprehensive evaluation of five prominent forecasting models—Autoregressive Integrated Moving Average (ARIMA), eXtreme Gradient Boosting (XGBoost), Prophet, Long Short-Term Memory (LSTM), and Transformer—specifically applied to steel rebar price prediction. The study emphasizes the influence of feature selection, defined as the number of historical price data points utilized for prediction, on the accuracy of these models. Furthermore, it develops a hybrid forecasting framework grounded in a residual complementarity mechanism aimed at improving long-term predictive performance. The results reveal that the ARIMA model delivers consistent and reliable short-term forecasts, particularly within a two-month horizon, whereas the Prophet model effectively captures long-term price trends but suffers from notable short-term bias. A two-stage hybrid model (referred to as Combination Model II), which integrates ARIMA and Prophet through residual inversion, demonstrates superior forecasting accuracy over a six-month period. This hybrid approach surpasses the standalone ARIMA model by more than 70% across key evaluation metrics—including Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Symmetric Mean Absolute Percentage Error (SMAPE), and Mean Absolute Scaled Error (MASE)—and exceeds the performance of the standalone Prophet model by over 90%. This integration effectively combines the high short-term precision of ARIMA with the long-term trend stability of Prophet. Within the domain of machine learning and deep learning models, XGBoost achieves optimal predictive accuracy when utilizing between one and four features. The predictive performance of LSTM does not exhibit a straightforward linear relationship with the number of features; however, certain feature combinations enable it to outperform other models. Transformer models maintain stable accuracy when employing feature sets ranging from one to five and twelve to seventeen, but display considerable variability in performance when the feature count lies between five and twelve. This investigation delineates the optimal parameter ranges and contextual applicability for each model. The proposed hybrid forecasting methodology, alongside a model transfer strategy encompassing data preprocessing adjustments, parameter optimization, and weight adaptation, offers practical applicability to other commodity markets such as cement and concrete. Consequently, this research provides a scientifically grounded framework to support procurement decision-making processes within construction enterprises. Full article
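At prediction time, a two-stage hybrid of this kind, where one model supplies the long-term trend and the other forecasts its residual, reduces to summing the two components. This sketch shows only that combining step; the paper's residual-inversion mechanism involves more than this:

```python
def hybrid_forecast(trend_forecast, residual_forecast):
    """Two-stage hybrid combination: a long-horizon model supplies the
    trend and a short-horizon model forecasts its residual; the final
    prediction is their sum."""
    return [t + r for t, r in zip(trend_forecast, residual_forecast)]

# Hypothetical two-step horizon: trend from model B, residual correction from model A.
combined = hybrid_forecast([100.0, 102.0], [1.5, -0.5])
```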
(This article belongs to the Section Construction Management, and Computers & Digitization)

19 pages, 922 KB  
Article
Risk Stratification for In-Hospital Mortality in Alzheimer’s Disease Using Interpretable Regression and Explainable AI
by Tursun Alkam, Ebrahim Tarshizi and Andrew H. Van Benschoten
Geriatrics 2026, 11(2), 23; https://doi.org/10.3390/geriatrics11020023 - 24 Feb 2026
Viewed by 452
Abstract
Background: Older adults with Alzheimer’s disease (AD) face a heightened risk of adverse hospital outcomes, including mortality. However, early identification of high-risk patients remains a challenge. While regression models provide interpretable associations, they may miss non-linear interactions that machine learning can uncover. Objective: To identify key predictors of in-hospital mortality among AD patients using both survey-weighted logistic regression and explainable machine learning. Methods: We analyzed hospitalizations among AD patients aged ≥60 in the 2017 Nationwide Inpatient Sample (NIS). The outcome was in-hospital death. Predictors included demographics, hospital variables, and 15 comorbidities. Logistic regression used survey weighting to generate nationally representative inference; XGBoost incorporated NIS discharge weights as sample weights during 5-fold hospital-grouped cross-validation and used the same weights in performance evaluation. Missing-value imputation and feature scaling were performed within the cross-validation pipelines to prevent data leakage. Model performance was assessed using AUROC, AUPRC, Brier score, and log loss. Feature importance was assessed using adjusted odds ratios and SHapley Additive exPlanations (SHAP). A sensitivity analysis excluded palliative care and DNR status and was re-evaluated under the same grouped cross-validation. Results: In the full model, logistic regression achieved AUROC 0.879 and AUPRC 0.310, while XGBoost achieved AUROC 0.887 and AUPRC 0.324. Palliative care (aOR 6.19), acute respiratory failure (aOR 5.15), DNR status (aOR 2.20), and sepsis (aOR 2.26) were the strongest logistic predictors. SHAP analysis corroborated these findings and additionally emphasized dysphagia, malnutrition, and pressure ulcers. In sensitivity analysis excluding palliative care and DNR status, logistic regression performance declined (AUROC 0.806; AUPRC 0.206), while XGBoost performed similarly (AUROC 0.811; AUPRC 0.206). SHAP corroborated the dominant signals from end-of-life documentation and acute organ failure in the full model; in the restricted model (excluding DNR and palliative care), SHAP highlighted physiologic and frailty-related features (e.g., dysphagia, malnutrition, aspiration risk) that may be more actionable when end-of-life documentation is absent. Conclusions: Combining regression with explainable machine learning enables robust mortality risk stratification in hospitalized AD patients. Restricted models excluding end-of-life indicators provide actionable risk signals when such documentation is absent, while the full model may better support resource allocation and goals-of-care workflows. Full article
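The adjusted odds ratios quoted above (e.g., aOR 6.19 for palliative care) are exponentiated logistic-regression coefficients:

```python
import math

def adjusted_odds_ratio(beta: float) -> float:
    """Convert a logistic-regression coefficient to an adjusted odds ratio:
    aOR = exp(beta); beta = 0 means no change in odds (aOR = 1)."""
    return math.exp(beta)
```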
(This article belongs to the Section Geriatric Neurology)

23 pages, 1662 KB  
Article
A Hybrid Deep Learning Model for Wheat Price Prediction: LSTM–Autoencoder Ensemble Approach with SHAP-Based Interpretability
by Yelda Fırat and Hüseyin Ali Sarıkaya
Sustainability 2026, 18(4), 1960; https://doi.org/10.3390/su18041960 - 13 Feb 2026
Viewed by 466
Abstract
Accurate prediction of wheat prices is crucial for market participants and policymakers because volatility in agricultural markets affects food security and economic planning. This study proposes a hybrid deep-learning-based framework for daily wheat price prediction in Türkiye. The approach first applies an autoencoder to detect and remove anomalous price–quality records from a dataset of 38,019 market transactions collected between June 2022 and May 2023. A weighted ensemble combining Linear Regression, Random Forest, Support Vector Regression and an attention-based Long Short-Term Memory network is then trained on quality parameters and market attributes, with data split into training, validation and test sets. On the independent test set the ensemble achieved a coefficient of determination R2 = 0.9942 and a mean absolute error of 0.1646 TL, outperforming the constituent models. SHAP analysis identifies the price–quality ratio as the most influential feature, while the ablation analysis shows that some of the high accuracy derives from price-derived variables’ strong correlation with the target. Cross-validation confirms robustness and generalization. Overall, the framework provides an effective and interpretable tool for wheat price forecasting, though the short data collection period and single-product focus limit generalizability. Full article
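The ensemble step, combining Linear Regression, Random Forest, SVR and attention-LSTM outputs, amounts to a normalized weighted average of base-model predictions. A sketch with hypothetical weights; the study's weighting scheme is not specified here:

```python
def weighted_ensemble(predictions, weights):
    """Combine per-model prediction lists into one forecast using
    normalized weights (weights need not sum to 1 beforehand)."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * p[i] for w, p in zip(norm, predictions))
            for i in range(len(predictions[0]))]
```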
(This article belongs to the Special Issue Land Management and Sustainable Agricultural Production)

20 pages, 2514 KB  
Article
A Non-Compensatory Framework Integrating LCA and QFD for Robust Manufacturing Sustainability Decisions Under Uncertainty: An OCC Paper Machine Case Study
by Lidija Rihar and Marjan Jenko
Processes 2026, 14(4), 649; https://doi.org/10.3390/pr14040649 - 13 Feb 2026
Viewed by 385
Abstract
Manufacturing decarbonization and sustainability improvement require decision-support methods that can prioritise actions across multiple, often conflicting dimensions, including product quality, process stability, resource efficiency, and environmental performance. In industrial practice, such decisions are further complicated by stochastic variability and the presence of dominant drivers, which limit the usefulness of conventional linear, weighted-sum scoring approaches. This paper proposes a non-compensatory decision framework with explicit stochastic uncertainty propagation that integrates quality function deployment (QFD) with life cycle assessment (LCA) to support robust, value-driven prioritisation of manufacturing improvement actions under uncertainty. The approach combines QFD-style influence factor modelling with LCA-based environmental indicators and employs a nonlinear, non-compensatory aggregation scheme to reduce sensitivity to arbitrary weighting and to better capture dominant and tail-risk effects. Uncertainty is propagated using Monte Carlo simulation, and the stability of prioritisation outcomes is analysed using sensitivity measures. The framework is demonstrated on an industrial old corrugated container (OCC) paper machine line using operational data from plant information systems, including quality, process control, laboratory, and maintenance databases. Results show that the proposed integration yields more stable and interpretable prioritisation of improvement actions than conventional compensatory scoring methods, particularly under variable operating conditions. The proposed approach enables practical, data-driven sustainability decision-making in complex manufacturing processes under variable operating conditions and alternative process configurations. Full article
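The contrast between compensatory weighted-sum scoring and non-compensatory aggregation, plus Monte Carlo propagation of input uncertainty, can be sketched as follows. Min-aggregation stands in for the paper's nonlinear scheme, which is not fully specified here:

```python
import random

def weighted_sum(scores, weights):
    """Compensatory aggregation: strengths on one criterion can offset
    weaknesses on another."""
    return sum(w * s for w, s in zip(weights, scores))

def non_compensatory(scores):
    """Non-compensatory aggregation (illustrative min rule): the worst
    criterion bounds the overall score, so a dominant weak dimension
    cannot be averaged away."""
    return min(scores)

def monte_carlo_mean(score_fn, sample_fn, n=1000, seed=0):
    """Propagate stochastic input variability: draw n score vectors and
    return the mean aggregated score."""
    rng = random.Random(seed)
    return sum(score_fn(sample_fn(rng)) for _ in range(n)) / n
```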

28 pages, 5334 KB  
Article
A Shape–Memory–Programmable Tuning Fork Metamaterial with Adjustable Vibration Isolation Bands
by Rui Yang, Wenyou Zha, Ruixiang Zhang, Yongtao Yao and Yanju Liu
Vibration 2026, 9(1), 12; https://doi.org/10.3390/vibration9010012 - 11 Feb 2026
Viewed by 412
Abstract
Honeycomb structures are widely utilized in engineering due to their light weight, high strength, high stiffness, excellent energy absorption, and outstanding vibration isolation performance. In this study, we propose a novel tuning fork–honeycomb metastructure that demonstrates excellent tunable vibration isolation capabilities. The geometric configuration of the structure before and after shape-memory-induced deformation is described, and a theoretical model for the natural frequency of the initial configuration is established. The vibration isolation performance of the structure is validated through simulations and experiments, and three strategies for tuning its vibrational behavior are proposed. First, by exploiting variable stiffness, shape memory materials are used to achieve a linear shift in the bandgap position: at 75 °C, the starting frequency of the bandgap decreases to 95% of its value at room temperature. Second, based on shape memory programming, the deformed structure exhibits a 20% reduction in the center frequency of the first bandgap and a 47% reduction in that of the second bandgap compared to the undeformed configuration. Third, by altering the geometry of the tuning fork structure, in-plane deformation is shown to provide superior low-frequency vibration isolation performance compared to out-of-plane deformation. Finally, a design method for programmable mechanical pixel metamaterials is introduced, which achieves tunable full-band vibration isolation through shape-memory-induced deformation and temperature-induced stiffness variation while enhancing structural diversity, modularity, and reconfigurability. Moreover, the shape memory tuning fork structure can be combined with any type of cellular structure that offers excellent vibration isolation, providing a new paradigm for designing structures with adjustable wide-frequency vibration isolation performance. Full article
(This article belongs to the Special Issue Vibration in 2025)
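The reported shift of the bandgap starting frequency to 95% of its room-temperature value is consistent with a simple stiffness-softening picture: for a single-degree-of-freedom idealization, f_n = (1/2π)·√(k/m), so a hot-to-cold stiffness ratio of 0.95² = 0.9025 gives exactly a 5% frequency drop. A minimal sketch under that assumption (the stiffness and mass values are illustrative, not taken from the paper):

```python
import math


def natural_frequency(k, m):
    # Single-DOF spring-mass idealization: f_n = (1/2*pi) * sqrt(k/m).
    return math.sqrt(k / m) / (2 * math.pi)


# Shape memory polymers soften near their transition temperature; if
# heating to 75 C reduces the effective stiffness by the factor 0.9025,
# the natural frequency (and hence the bandgap start) scales by
# sqrt(0.9025) = 0.95, matching the 95% value reported in the abstract.
f_cold = natural_frequency(k=1000.0, m=1.0)
f_hot = natural_frequency(k=0.9025 * 1000.0, m=1.0)
ratio = f_hot / f_cold  # 0.95
```

The same scaling argument explains why temperature-induced stiffness variation shifts the bandgap position roughly linearly over moderate stiffness changes.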

20 pages, 1962 KB  
Article
Machine Learning-Based Prediction and Feature Attribution Analysis of Contrast-Associated Acute Kidney Injury in Patients with Acute Myocardial Infarction
by Neriman Sıla Koç, Can Ozan Ulusoy, Berrak Itır Aylı, Yusuf Bozkurt Şahin, Veysel Ozan Tanık, Arzu Akgül and Ekrem Kara
Medicina 2026, 62(1), 228; https://doi.org/10.3390/medicina62010228 - 22 Jan 2026
Viewed by 692
Abstract
Background and Objectives: Contrast-associated acute kidney injury (CA-AKI) is a frequent and clinically significant complication in patients with acute myocardial infarction (AMI) undergoing coronary angiography. Early and accurate risk stratification remains challenging with conventional models that rely on linear assumptions and limited variable integration. This study aimed to evaluate and compare the predictive performance of multiple machine learning (ML) algorithms with traditional logistic regression and the Mehran risk score for CA-AKI prediction and to explore key determinants of risk using explainable artificial intelligence methods. Materials and Methods: This retrospective, single-center study included 1741 patients with AMI who underwent coronary angiography. CA-AKI was defined according to KDIGO criteria. Multiple ML models, including gradient boosting machine (GBM), random forest (RF), XGBoost, support vector machine, elastic net, and standard logistic regression, were developed using routinely available clinical and laboratory variables. A weighted ensemble model combining the best-performing algorithms was constructed. Model discrimination was assessed using area under the receiver operating characteristic curve (AUC), along with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Model interpretability was evaluated using feature importance and SHapley Additive exPlanations (SHAP). Results: CA-AKI occurred in 356 patients (20.4%). In multivariable logistic regression, lower left ventricular ejection fraction, higher contrast volume, lower sodium, lower hemoglobin, and higher neutrophil-to-lymphocyte ratio (NLR) were independently associated with CA-AKI. Among ML approaches, the weighted ensemble model demonstrated the highest discriminative performance (AUC 0.721), outperforming logistic regression and the Mehran risk score (AUC 0.608). Importantly, the ensemble model achieved a consistently high NPV (0.942), enabling reliable identification of low-risk patients. Explainability analyses revealed that inflammatory markers, particularly NLR, along with sodium, uric acid, baseline renal indices, and contrast burden, were the most influential predictors across models. Conclusions: In patients with AMI undergoing coronary angiography, interpretable ML models, especially ensemble and gradient boosting-based approaches, provide superior risk stratification for CA-AKI compared with conventional methods. The high negative predictive value highlights their clinical utility in safely identifying low-risk patients and supporting individualized, risk-adapted preventive strategies. Full article
(This article belongs to the Section Urology & Nephrology)
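The two ingredients the abstract highlights, a weighted ensemble of per-model probabilities and the negative predictive value used to rule out low-risk patients, can be sketched in a few lines of Python. The helper names, threshold, and toy data below are assumptions for illustration, not the study's implementation:

```python
def weighted_ensemble(prob_lists, weights):
    # Combine each model's predicted probabilities with fixed weights,
    # normalised so the weights sum to 1. prob_lists is one probability
    # list per model, aligned by patient.
    total = sum(weights)
    return [
        sum(w * p for w, p in zip(weights, patient_probs)) / total
        for patient_probs in zip(*prob_lists)
    ]


def npv(y_true, y_prob, threshold=0.5):
    # Negative predictive value: TN / (TN + FN). A high NPV means that
    # a below-threshold prediction safely rules out the outcome, which
    # is the clinical use case emphasised in the abstract.
    tn = sum(1 for y, p in zip(y_true, y_prob) if p < threshold and y == 0)
    fn = sum(1 for y, p in zip(y_true, y_prob) if p < threshold and y == 1)
    return tn / (tn + fn) if (tn + fn) else float("nan")
```

For example, averaging two models' outputs [0.2, 0.8] and [0.4, 0.6] with equal weights yields [0.3, 0.7]; NPV is then computed on the ensembled probabilities against the observed outcomes.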

24 pages, 2236 KB  
Article
Radar HRRP Sequence Target Recognition Based on a Lightweight Spatiotemporal Fusion Network
by Xiang Li, Yitao Su, Xiaobin Zhao, Junjun Yin and Jian Yang
Sensors 2026, 26(1), 334; https://doi.org/10.3390/s26010334 - 4 Jan 2026
Viewed by 737
Abstract
High-resolution range profile (HRRP) sequence recognition in radar automatic target recognition faces several practical challenges, including severe category imbalance, degradation of robustness under complex and variable operating conditions, and strict requirements for lightweight models suitable for real-time deployment on resource-limited platforms. To address these problems, this paper proposes a lightweight spatiotemporal fusion-based (LSTF) HRRP sequence target recognition method. First, a lightweight Transformer encoder based on group linear transformations (TGLT) is designed to effectively model temporal dynamics while significantly reducing parameter size and computation, making it suitable for edge-device applications. Second, a transform-domain spatial feature extraction network is introduced, combining the fractional Fourier transform with an enhanced squeeze-and-excitation fully convolutional network (FSCN). This design fully exploits multi-domain spatial information and enhances class separability by leveraging discriminative scattering-energy distributions at specific fractional orders. Finally, an adaptive focal loss with label smoothing (AFL-LS) is constructed to dynamically adjust class weights for improved performance on long-tail classes, while label smoothing alleviates overfitting and enhances generalization. Experiments on the MSTAR and CVDomes datasets demonstrate that the proposed method consistently outperforms existing baseline approaches across three representative scenarios. Full article
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
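The loss design described in the abstract, a focal term to down-weight easy examples combined with label smoothing to curb overfitting, can be sketched in plain Python. This version uses a fixed alpha rather than the paper's adaptive per-class weighting, and the hyperparameter values are illustrative:

```python
import math


def focal_loss_ls(p, y, num_classes, alpha=1.0, gamma=2.0, smoothing=0.1):
    # p: predicted probability for each class; y: true class index.
    # Label smoothing spreads (1 - smoothing) mass onto the true class
    # and smoothing / num_classes onto every class, so the model is
    # never pushed toward fully saturated outputs.
    target = [smoothing / num_classes] * num_classes
    target[y] += 1.0 - smoothing
    # The focal factor (1 - p_c)^gamma shrinks the contribution of
    # confidently correct predictions, leaving more gradient for the
    # hard, long-tail examples the abstract is concerned with.
    loss = 0.0
    for t, pc in zip(target, p):
        pc = max(pc, 1e-12)  # guard against log(0)
        loss += -alpha * t * (1.0 - pc) ** gamma * math.log(pc)
    return loss
```

A well-classified sample (true-class probability 0.9) incurs a much smaller loss than a hard one (probability 0.4), which is exactly the reweighting effect that helps the minority classes in an imbalanced HRRP dataset.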
