Search Results (3,010)

Search Parameters:
Keywords = supervised algorithm

22 pages, 1170 KB  
Article
Adverse Drug Reaction Detection on Social Media Based on Large Language Models
by Hao Li and Hongfei Lin
Information 2026, 17(4), 352; https://doi.org/10.3390/info17040352 - 7 Apr 2026
Abstract
Adverse drug reaction (ADR) detection is essential for ensuring drug safety and effective pharmacovigilance. The rapid growth of users’ medication reviews posted on social media has introduced a valuable new data source for ADR detection. However, the large scale and high noise inherent in social media text pose substantial challenges to existing detection methods. Although large language models (LLMs) exhibit strong robustness to noisy and interfering information, they are often limited by issues such as stochastic outputs and hallucinations. To address these challenges, this paper proposes two generative detection frameworks based on Chain of Thought (CoT), namely LLaMA-DetectionADR for Supervised Fine-Tuning (SFT) and DetectionADRGPT for low-resource in-context learning. LLaMA-DetectionADR automatically generates CoT reasoning sequences to construct an instruction tuning dataset, which is then used to fine-tune the LLaMA3-8B model via Quantized Low-Rank Adaptation (QLoRA). In contrast, DetectionADRGPT leverages clustering algorithms to select representative unlabeled samples and enhances in-context learning by incorporating CoT reasoning paths together with their corresponding labels. Experimental results on the Twitter and CADEC social media datasets show that LLaMA-DetectionADR achieves excellent performance, with F1 scores of 92.67% and 86.13%, respectively. Meanwhile, DetectionADRGPT obtains competitive F1 scores of 87.29% and 82.80% with only a few labeled examples, approaching the performance of fully supervised advanced models. The overall results demonstrate the effectiveness and practical value of the proposed CoT-based generative frameworks for ADR detection from social media.
(This article belongs to the Topic Generative AI and Interdisciplinary Applications)
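
As a rough illustration of the cluster-based example selection the abstract attributes to DetectionADRGPT, the sketch below picks one representative post per k-means cluster; the TF-IDF features, cluster count, and function name are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the paper's code): pick one representative
# medication review per k-means cluster to use as an in-context example.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def select_representative_posts(posts, k=8):
    """Cluster posts and return the member closest to each centroid."""
    X = TfidfVectorizer(max_features=2000).fit_transform(posts)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    reps = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
        reps.append(posts[idx[np.argmin(dists)]])
    return reps
```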

18 pages, 1727 KB  
Article
Machine Learning-Based QSAR Models for Discovery of Inhibitors Targeting Leishmania infantum Amastigotes
by Naivi Flores-Balmaseda, Julio A. Rojas-Vargas, Susana Rojas-Socarrás, Facundo Pérez-Giménez, Francisco Torrens and Juan A. Castillo-Garit
Pharmaceuticals 2026, 19(4), 588; https://doi.org/10.3390/ph19040588 - 7 Apr 2026
Abstract
Background/Objectives: Leishmaniasis is a group of diseases caused by obligate intracellular parasites of the Leishmania genus and is classified by the World Health Organization as a category I neglected tropical disease. Leishmania infantum predominantly affects children under five years of age and shows an increasing incidence of cutaneous and visceral forms. The development of new therapeutic alternatives remains challenging, making in silico approaches valuable for accelerating antileishmanial drug discovery. This study aimed to identify new compounds with potential activity against Leishmania infantum amastigotes using artificial intelligence-based classification models. Methods: A curated database of compounds with reported biological activity was constructed. Molecular representation employed zero- to two-dimensional descriptors calculated with Dragon software (v 7.0.10). Unsupervised k-means cluster analysis was applied to define training and external prediction sets. Supervised models were developed on the WEKA platform using IBk, J48, multilayer perceptron, and sequential minimal optimization algorithms. Model performance was assessed through internal cross-validation and external validation procedures. Results: All models achieved classification accuracies above eighty percent for both training and prediction sets, indicating consistent predictive performance and good generalization ability. The validated models were applied to virtual screening of the DrugBank database and a collection of synthetic compounds. This screening campaign enabled the identification of one hundred twenty compounds with potential activity against the amastigote form of Leishmania infantum. Conclusions: Artificial intelligence-based QSAR models proved to be useful tools for prioritizing antileishmanial candidates. The integration of molecular descriptors, machine learning, and virtual screening offers an efficient strategy for drug discovery.
(This article belongs to the Special Issue Advances in Antiparasitic Drug Research)
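
The abstract's unsupervised train/prediction split can be sketched as follows; the descriptor matrix, cluster count, and per-cluster split fraction are assumptions (the original work used Dragon descriptors and WEKA rather than this Python stand-in).

```python
# Illustrative sketch: k-means over standardized molecular descriptors,
# then a fixed fraction of each cluster is held out as the external set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def kmeans_split(X, n_clusters=10, frac_external=0.2, seed=0):
    """Return (train_idx, external_idx) sampled proportionally per cluster."""
    rng = np.random.default_rng(seed)
    Z = StandardScaler().fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Z)
    train, external = [], []
    for c in range(n_clusters):
        idx = rng.permutation(np.where(labels == c)[0])
        cut = max(1, round(frac_external * len(idx)))
        external.extend(idx[:cut])
        train.extend(idx[cut:])
    return np.array(train), np.array(external)
```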

22 pages, 22745 KB  
Article
Spectral Phenological Typologies for Improving Cross-Dataset in Mediterranean Winter Cereals
by Patricia Arizo-García, Sergio Castiñeira-Ibáñez, Beatriz Ricarte, Alberto San Bautista and Constanza Rubio
Appl. Sci. 2026, 16(7), 3598; https://doi.org/10.3390/app16073598 - 7 Apr 2026
Abstract
Accurate monitoring of crop phenology is essential for precision agriculture and yield forecasting. However, satellite-derived time series often suffer from inherent noise, such as residual atmospheric effects and mixed pixels, as well as a frequent lack of ground-truth data in agriculture. In response, this study proposes an algorithm to define type spectral signatures for the principal phenological stages of crops, using them as the foundation for training supervised machine learning classification models. The algorithm was developed using Fuzzy C-Means (FCM) clustering to identify the spectral signature reference groups in winter wheat across the Burgos region (Spain) during the 2020 and 2021 growing seasons. To enhance cluster independence and biological coherence, a multi-step filtering process was implemented, including spectral purity (membership degree, SAM, and SAMder) and temporal coherence filters. The filtered and labeled dataset (80% of the original Burgos dataset) was used to train supervised classification models (KNN and XGBoost). The models’ reliability was verified through three wheat tests (the remaining 20%), labeled using other clustering techniques, and an independent barley dataset from diverse geographic locations (Valladolid and Soria). The filtering process significantly improved cluster stability by removing outliers and transition spectral signatures. The supervised models demonstrated exceptional performance; the KNN model slightly outperformed XGB, achieving a mean Accuracy of 0.977, a Kappa of 0.967, and an F1-score of 0.977 in the external wheat test. Furthermore, when applied to barley, the model showed that barley’s phenological spectral signatures are equivalent in shape to those of wheat, with an Accuracy of 0.965 and an F1-score of 0.974. In addition, it was verified that the type spectral signatures remain the same regardless of location. This study presents a robust classification tool capable of labeling four key phenological stages (tillering, stem elongation, ripening, and senescence) without ground truth. By effectively removing inherent satellite noise, the proposed methodology produces organized, cleaned datasets. This structured foundation is critical for future research integrating spectral signatures with harvester data to develop high-precision yield prediction models.
(This article belongs to the Special Issue Digital Technologies in Smart Agriculture)
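
One of the purity filters named in the abstract, the Spectral Angle Mapper (SAM), reduces to a small angle computation; a minimal sketch follows, with the 0.10 rad threshold and array layout assumed for illustration.

```python
# Minimal SAM purity filter: keep signatures within a small spectral
# angle of the cluster reference (threshold in radians is assumed).
import numpy as np

def spectral_angle(s, ref):
    """Spectral Angle Mapper angle (radians) between two spectra."""
    cos = np.dot(s, ref) / (np.linalg.norm(s) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_purity_filter(signatures, reference, max_angle=0.10):
    """signatures: (n, bands) array; returns only the spectrally pure rows."""
    angles = np.array([spectral_angle(s, reference) for s in signatures])
    return signatures[angles <= max_angle]
```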

24 pages, 2003 KB  
Article
SEN-Batch Pseudo-Labeling with NeuroStack for Robust Semi-Supervised Liver Classification
by Pranabes Gangopadhyay, Perumal Ganeshkumar, Tirtharaj Sen, Bidesh Chakraborty, Arindam Biswas and Prabu Pachiyannan
Appl. Sci. 2026, 16(7), 3446; https://doi.org/10.3390/app16073446 - 2 Apr 2026
Abstract
The liver is vital for metabolism, detoxification, and homeostasis. Untreated liver disease leads to severe consequences, stressing the need for early diagnosis. However, patient classification using statistical learning is limited by the scarcity of large, labeled datasets due to high acquisition and expertise costs. To surmount this impediment, a novel Self-Evolving Neighborhood (SEN)-batched pseudo-labeling (PL) technique is proposed within the context of a semi-supervised learning framework. At its core, the NeuroStack model has been developed for labeling the datasets. The study examines the performance of the proposed PL algorithm across datasets such as ILPD, BUPA Liver Disorder, and LFT, and further compares it to the state-of-the-art (SOTA) FixMatch. This study achieved the best accuracy of 98%, which is ≈11% higher than the FixMatch algorithm, and a confidence score of 97%, which is ≈12% higher than the FixMatch algorithm. The average accuracy, confidence score, F1-score and AUC across all the datasets are 94.6%, 94%, 0.96 and 0.98, respectively. The confidence interval was ±1.2, which is significantly lower than that of other algorithms. The experiments also achieved the best patient classification accuracy of 98% using the novel NeuroStack model, which is adaptable for labeling any non-image dataset.
(This article belongs to the Special Issue Advances and Applications of Machine Learning for Bioinformatics)
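
The general confidence-batched pseudo-labeling loop underlying approaches like this one can be sketched as below; the gradient-boosting stand-in, round count, and 0.95 confidence threshold are assumptions, and the paper's SEN batching and NeuroStack model are not reproduced.

```python
# Hedged sketch of confidence-batched pseudo-labeling; the classifier,
# threshold, and round count are assumptions, not the paper's SEN/NeuroStack.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def pseudo_label(X_lab, y_lab, X_unlab, rounds=5, threshold=0.95):
    model = GradientBoostingClassifier(random_state=0)
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        pred = model.classes_[proba.argmax(axis=1)]
        keep = conf >= threshold              # move only confident samples
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, pred[keep]])
        X_unlab = X_unlab[~keep]
    return model
```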

52 pages, 18820 KB  
Article
Multimodal Industrial Scene Characterisation for Pouring Process Monitoring Using a Mixture of Experts
by Javier Nieves, Javier Selva, Guillermo Elejoste-Rementeria, Jorge Angulo-Pines, Jon Leiñena, Xuban Barberena and Fátima A. Saiz
Appl. Sci. 2026, 16(7), 3430; https://doi.org/10.3390/app16073430 - 1 Apr 2026
Abstract
Industrial pouring processes operate under highly dynamic conditions where small deviations can lead to defects, scrap, and production losses. Although modern foundries are equipped with multiple sensors and visual inspection systems, most monitoring approaches remain fragmented, unimodal, and difficult to interpret. Furthermore, annotated anomalous samples in industrial settings are scarce, hindering the development of traditional methods. As a result, many critical pouring anomalies are detected too late or lack sufficient contextual information for effective decision making. In this work, we propose a multimodal framework for industrial scene characterisation that combines visual information and process signals through an explainable Mixture-of-Experts (MoE)-style expert-fusion strategy. First, we deploy an ensemble of specialised modules that collaborate to identify regions of interest, assess pouring quality, and contextualise events within the production process, thereby generating an interpretable description of pouring events. Second, we introduce a novel anomaly detection method for multimodal video data, combining a self-supervised transformer with an outlier-aware clustering algorithm. Our approach effectively identifies rare anomalies without requiring extensive manual labelling. The resulting information is structured into a digital twin-ready representation, supporting synchronisation between the physical system and its virtual counterpart. This solution provides a scalable, deployable pathway to transform heterogeneous industrial data into actionable knowledge, supporting advanced monitoring, anomaly detection, and quality control in real foundry environments.
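
The outlier-aware clustering step described in the abstract might look roughly like this on precomputed clip embeddings; DBSCAN and its parameters are stand-ins, since the paper's exact algorithm is not specified here.

```python
# Assumed pipeline: clip embeddings from a self-supervised encoder (not
# shown) are standardized and clustered; DBSCAN's noise label (-1) flags
# candidate pouring anomalies without any manual labels.
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def flag_anomalies(embeddings, eps=0.8, min_samples=5):
    """Return a boolean mask of clips left outside every cluster."""
    Z = StandardScaler().fit_transform(embeddings)
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(Z) == -1
```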

23 pages, 13635 KB  
Article
Deep Reinforcement Learning for Autonomous Underwater Navigation: A Comparative Study with DWA and Digital Twin Validation
by Zamirddine Mari, Mohamad Motasem Nawaf and Pierre Drap
Sensors 2026, 26(7), 2179; https://doi.org/10.3390/s26072179 - 1 Apr 2026
Abstract
Autonomous navigation in underwater environments is challenged by the absence of GPS, degraded visibility, and submerged obstacles. This article investigates these issues using the BlueROV2, an open platform for scientific experimentation. We propose a deep reinforcement learning approach based on the Proximal Policy Optimization (PPO) algorithm, using an observation space that combines target-oriented navigation information, a virtual occupancy grid, and raycasting along the boundaries of the operational area. This information is encoded into a high-dimensional observation space of 84 dimensions, providing the agent with comprehensive local and global situational awareness. The learned policy is compared against a reference deterministic kinematic planner, the Dynamic Window Approach (DWA), a robust baseline for obstacle avoidance. The evaluation is conducted in a realistic simulation environment and complemented by validation on a physical BlueROV2 supervised by a 3D digital twin of the test site, reducing risks associated with real-world experimentation. The results show that the PPO policy consistently outperforms DWA in highly cluttered environments, notably thanks to better local adaptation and reduced collisions. Finally, experiments demonstrate the transferability of the learned behavior from simulation to the real world, confirming the relevance of deep RL for autonomous navigation in underwater robotics.
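
A minimal sketch of the training setup the abstract describes, using stable-baselines3's PPO; the Pendulum-v1 environment is a placeholder for a custom BlueROV2 navigation environment exposing the 84-dimensional observation vector.

```python
# Placeholder environment stands in for a custom BlueROV2 navigation env;
# hyperparameters are illustrative, not the paper's settings.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")   # stand-in for an 84-dim underwater nav env
model = PPO("MlpPolicy", env, learning_rate=3e-4, n_steps=2048, verbose=0)
model.learn(total_timesteps=100_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)  # deploy the learned policy
```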

19 pages, 903 KB  
Review
Monitoring Inputs, Control Architectures, and Failure Modes in Closed-Loop Vasopressor Systems: A Comprehensive Review
by Vitor Felippe, Hiorrana Sousa Dias, Carlos Darcy Alves Bersot, Gustavo Guimaraes Torres, Bruno Wegner, Gabriel Lemos González, Gustavo Wegner and Marcos Adriano Lessa
Sensors 2026, 26(7), 2180; https://doi.org/10.3390/s26072180 - 1 Apr 2026
Abstract
Closed-loop vasopressor systems integrate real-time blood pressure monitoring with automated decision logic to support hemodynamic stability in perioperative and critical care environments. These technologies sit at the intersection of biomedical sensing, signal processing, and clinician-supervised automation: the quality, latency, and failure behavior of the monitoring input can directly shape controller performance, safety margins, and clinical usability. In this comprehensive review, we synthesize the major closed-loop vasopressor architectures reported in the literature, examine how sensor modality and signal integrity influence algorithm behavior, and summarize recurrent reliability vulnerabilities spanning sensors, control logic, and device integration. We organize the field through an end-to-end information pipeline—monitoring input, signal conditioning and quality assessment, decision and control strategy, actuation via infusion technology, and supervisory safety layers—highlighting common performance metrics used to benchmark control quality. We then discuss clinical validation patterns across settings, emphasizing practical considerations for deployment and the evidence gaps that remain most relevant to high-risk populations. Finally, we propose reporting and validation priorities for future studies, with a focus on sensor robustness, transparency of algorithm design, integration safeguards, and standardized documentation of failures and overrides.
(This article belongs to the Section Biomedical Sensors)
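
Not drawn from the review itself, the toy controller below illustrates the decision-and-control stage of the pipeline the authors describe, mapping a mean arterial pressure (MAP) reading to an infusion rate; the PID form, gains, setpoint, and output clamp are all illustrative assumptions.

```python
# Toy PID decision stage (illustrative only; gains, setpoint, and the
# 0-10 mL/h clamp are assumptions, not values from the reviewed systems).
class PidVasopressorController:
    def __init__(self, kp=0.05, ki=0.01, kd=0.0, target_map=70.0):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target_map
        self.integral, self.prev_error = 0.0, None

    def step(self, map_mmhg, dt):
        """Map a MAP reading (mmHg) to an infusion rate (mL/h)."""
        error = self.target - map_mmhg
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        rate = self.kp * error + self.ki * self.integral + self.kd * deriv
        return min(max(rate, 0.0), 10.0)   # supervisory safety clamp
```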

31 pages, 4336 KB  
Article
Machine Learning Approach for Predicting Older Adults’ Responsiveness to Cognitive Training Interventions: Data from the ACTIVE Study
by Petra Vargek, Sašo Karakatič and Karin Bakračevič
J. Intell. 2026, 14(4), 56; https://doi.org/10.3390/jintelligence14040056 - 1 Apr 2026
Abstract
In recent years, there has been increasing interest in personalizing cognitive training to enhance the likelihood of positive training effects at the individual level. Machine learning methods have proven suitable for this purpose due to their ability to generate predictions at the individual level. The aim of the study was to develop supervised machine learning models to predict near and far transfer of three cognitive training interventions (memory training, reasoning training and speed-of-processing training) based on baseline characteristics of elderly individuals, including sociodemographic data, measures of cognitive and everyday functioning, and depressive symptoms. In addition, near-transfer models were further utilized to predict individual responsiveness to all three types of cognitive training. Publicly available data from the ACTIVE study were used, which examined the effects of memory training, reasoning training and speed-of-processing training in healthy adults. Multiple supervised machine learning classification algorithms were applied to establish optimal predictive models for each type of cognitive training and transfer measure. Selected models for predicting near transfer were then used to estimate individual responsiveness to all three interventions. The results show that the selected models for all three types of cognitive training and both near- and far-transfer outcomes demonstrated better discriminative ability than chance based on all included features (AUC range 0.56–0.74), although models predicting far transfer demonstrated limited performance. Predicted responsiveness to cognitive training varied according to participant characteristics. Differences between model-predicted responders indicate that initially advantaged participants would have a greater likelihood of benefiting from a broader range of interventions compared to initially disadvantaged ones, which would support magnification effects. The developed models need external validation, but have practical potential for selecting effective interventions tailored to individual characteristics, which could improve the future implementation of cognitive training programs.
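
The model-comparison pattern the abstract reports (per-model AUC on baseline features) can be sketched as below; the two candidate classifiers are placeholders, and a binary responder label is assumed.

```python
# Sketch of per-model cross-validated AUC on baseline features; the two
# candidate classifiers are placeholders and y is an assumed binary
# responder/non-responder label.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare_models(X, y, cv=5):
    candidates = {
        "logistic": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(random_state=0),
    }
    return {name: cross_val_score(m, X, y, cv=cv, scoring="roc_auc").mean()
            for name, m in candidates.items()}
```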

36 pages, 5538 KB  
Review
AI-Driven Monocular Metrology and Fuzzy Random Portfolio Management of Financial Assets
by Tongjie Xu, Lu Sun, Charles C. Nguyen and Pei-Chun Lin
Electronics 2026, 15(7), 1458; https://doi.org/10.3390/electronics15071458 - 31 Mar 2026
Abstract
This study provides a comprehensive review of monocular metrology and fuzzy random-set portfolio management of financial assets. The findings and conclusions are elaborated as follows. Soft computing and AI have already enhanced and will further empower a variety of applications of monocular metrology and fuzzy random-set portfolio management of financial assets through progressive quantification and capturing of domain situations. The single most significant limitation of monocular metrology lies in its intrinsic incapability of direct measurement of 3D geometry through 2D imagery. The future of monocular metrology lies in deep learning and end-to-end solutions, multi-sensor data fusion, algorithmic optimization and real-time performance, self-supervised learning and generalization, and standardization and practical deployment. Neither statistical validation nor performance optimization alone is sufficient to support decision-makers in making portfolio decisions that are reliable and trustworthy. A promising portfolio management decision-making framework should integrate the statistical rigor of fuzzy statistics with fuzzy random portfolio optimization techniques to quantitatively account for fuzziness and uncertainty while better balancing computational efficiency, statistical reliability, interpretability, and practical credibility.

25 pages, 5301 KB  
Article
High-Precision Spatial Interpolation of Meteorological Variables in Complex Terrain Using Machine Learning Methods
by Shuangping Li, Bin Zhang, Bo Shi, Qingsong Ai, Yuxi Zeng, Xuanyao Yan, Hao Chen and Huawei Wang
Sensors 2026, 26(7), 2167; https://doi.org/10.3390/s26072167 - 31 Mar 2026
Abstract
This study has explored the effectiveness of machine learning methods for high-precision spatial interpolation of meteorological variables, aiming to provide accurate atmospheric delay corrections for high-precision edge and corner net observations in complex-terrain environments such as the Xiluodu Hydropower Station, thereby enhancing the accuracy of deformation monitoring. Considering the significant limitations of traditional interpolation methods such as Inverse Distance Weighting (IDW) and Ordinary Kriging (OK) in capturing spatial variability under complex topographic conditions, we systematically introduced machine learning algorithms including Random Forest (RF) and eXtreme Gradient Boosting (XGBoost, XGB) to compare their performance with traditional methods for high-density interpolation of sparsely distributed temperature, relative humidity, and surface pressure, respectively. Concurrently, we proposed an enhanced XGB model incorporating center-point features (XGB-C), which frames spatial interpolation as a supervised learning problem that learns a physical mapping from synoptic backgrounds to local microclimates instead of relying on geometric distances alone. The interpolation performance indices (RMSE, MAE, and R2) were evaluated with daily meteorological observations from 47 stations (38 for training, 9 for testing) during 2023–2024. Results demonstrate that machine learning methods significantly outperform traditional approaches, with XGB-C achieving the highest accuracy (R2 ≈ 1.00 for pressure, 0.97 for humidity, 0.83 for temperature). Moreover, interpolation performance also depends on season and station location, with greater challenges in summer and in the “Urban and Built-Up” and “Croplands” areas. These findings highlight the substantial advantages of machine learning, particularly the proposed XGB-C, for meteorological interpolation in mountainous hydropower station environments where accurate atmospheric correction is crucial for deformation monitoring. This also lays a solid foundation for developing operational ML-based interpolation models trained with high-quality labels derived from unmanned aerial vehicle (UAV) remote sensing data.
(This article belongs to the Section Environmental Sensing)
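
Framing interpolation as supervised regression, as the abstract describes for XGB-C, might look roughly like this; the split of inputs into center-point features and same-day neighbor observations, and all hyperparameters, are assumptions.

```python
# Assumed layout: each row pairs a target point's coordinates/elevation
# ("center-point features") with same-day observations at reference stations.
import numpy as np
from xgboost import XGBRegressor

def fit_interpolator(center_feats, neighbor_obs, target_vals):
    """center_feats: (n, 3) lon/lat/elevation; neighbor_obs: (n, k) values."""
    X = np.hstack([center_feats, neighbor_obs])
    model = XGBRegressor(n_estimators=400, learning_rate=0.05,
                         max_depth=6, random_state=0)
    model.fit(X, target_vals)
    return model
```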

23 pages, 31586 KB  
Article
Machine Learning Workflow for Fracture Modeling in the Tensleep Reservoir
by Israa Ahmed, Gharib Hamada and Abdel Sattar Dahab
Energies 2026, 19(7), 1683; https://doi.org/10.3390/en19071683 - 30 Mar 2026
Abstract
Fractured reservoir characterization is a complex and challenging task due to the depositional nature of such reservoirs and high uncertainty in the spatial distribution of fractures, particularly when well data are limited and interpolation algorithms are employed. This paper introduces an alternative workflow designed to enhance fracture modeling between well locations by incorporating seismic attributes, using publicly released data from the Teapot Dome Field. The paper’s objective is to create a fracture model for the Tensleep reservoir in the Teapot Dome Anticline by employing seismic attributes sensitive to fault and fracture features, while also demonstrating the limitations of interpolation-based models such as Gaussian simulation. The approach uses artificial neural networks to predict fracture intensity by analyzing seismic data and well logs, training supervised probabilistic artificial neural networks to identify the seismic attributes that most closely correlate with the fracture intensity property derived from well log data. The validated network successfully transformed the 3D seismic data into 3D fracture intensity data, achieving a high correlation coefficient between the selected seismic attributes and the training wells. These findings help address the lack of information on fractures, improve reservoir management, and optimize well placement.
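
As a stand-in for the probabilistic networks used in the paper, the sketch below fits a small MLP from seismic attributes to fracture intensity at training wells; the architecture and inputs are hypothetical.

```python
# Hypothetical stand-in for the paper's probabilistic networks: a small
# MLP regressor from seismic attributes to well-derived fracture intensity.
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_fracture_model(seismic_attributes, fracture_intensity):
    """seismic_attributes: (n_samples, n_attrs) sampled along well paths."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    )
    return model.fit(seismic_attributes, fracture_intensity)
```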

15 pages, 702 KB  
Systematic Review
Exercise as Medicine: Quantifying the Effects of Physical Activity on Fibromyalgia Pain—A Systematic Review and Meta-Analysis
by Vasileios T. Stavrou and Panagiotis Zis
Brain Sci. 2026, 16(4), 365; https://doi.org/10.3390/brainsci16040365 - 28 Mar 2026
Abstract
Background: The pain experienced by people with fibromyalgia (FM) is thought to be the result of altered nociceptive processing, impaired descending inhibition and reduced tolerance to physical load. However, the relationship between the amount of exercise and pain reduction remains unclear. Methods: This study synthesized randomized controlled trials of exercise interventions for FM to quantify the combined analgesic effects of different types of exercise. A secondary aim was to standardize exposure using metabolic equivalent of task (MET)-based metrics and examine the association between cumulative intervention dose (MET·h) and analgesic response (Hedges’ g) across intervention arms. Following the PRISMA guidelines, a search was conducted in PubMed for randomized controlled trials published up to 31 December 2025. After screening and a full-text assessment, 15 trials were included. The protocols were converted into MET-defined intensity and weekly MET·min exposure, and the cumulative dose was calculated as the total MET·h accrued over the intervention period. Random-effects models were used to estimate the pooled effects within modality subgroups. Results: Across modalities, exercise was associated with reductions in pain, with effects typically falling within the small-to-moderate range. Larger improvements were observed in structured or supervised programs. The dose-response scatter plot showed wide variability across the dose range, with overlapping confidence intervals. An exploratory fourth-degree polynomial fit explained limited variance (R2 = 0.1615) and did not indicate a monotonic dose-response pattern. This suggests that cumulative workload alone is a weak proxy for therapeutic response. Conclusions: Based on these findings, a pain-responsive algorithm combining weekly Visual Analogue Scale (VAS), ΔVAS and Talk Test thresholds was implemented as a preliminary online calculator to support the prescription of exercise tailored to symptoms.
(This article belongs to the Special Issue Emerging Trends and Perspectives in the Neuroscience of Pain)
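
The dose standardization the abstract describes is simple arithmetic; the worked example below converts an assumed protocol into weekly MET·min and a cumulative MET·h dose.

```python
# Worked example of the MET-based standardization (protocol values assumed).
def cumulative_dose_met_h(met, minutes_per_session, sessions_per_week, weeks):
    """Total MET·h accrued over the intervention period."""
    weekly_met_min = met * minutes_per_session * sessions_per_week
    return weekly_met_min * weeks / 60.0

# e.g., ~5 MET aerobic exercise, 3 x 40 min/week for 12 weeks:
# 5 * 40 * 3 = 600 MET·min/week -> 600 * 12 / 60 = 120 MET·h
print(cumulative_dose_met_h(5, 40, 3, 12))   # 120.0
```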

23 pages, 5229 KB  
Article
Experimental Investigation of Surface Integrity Analysis Using Machine Learning for Nano-Powder Mixed Electrical Discharge Machining
by Amreeta R. Kaigude, Nitin K. Khedkar and Vijaykumar S. Jatti
J. Manuf. Mater. Process. 2026, 10(4), 115; https://doi.org/10.3390/jmmp10040115 - 28 Mar 2026
Abstract
This research investigates the optimization of surface integrity in powder-mixed electrical discharge machining (PMEDM) through the innovative use of Jatropha biodielectric fluid enhanced with titanium dioxide (TiO2) nanoparticles. A comprehensive experimental framework was developed using design of experiments (DOE) software with Response Surface Methodology (RSM) to systematically analyze the machining of AISI D2 tool steel using copper electrodes. The study examined five critical process parameters, namely gap current (Ip), pulse-on duration (Ton), pulse-off time (Toff), gap voltage (V), and powder concentration, and evaluated their combined effects on surface roughness (SR), surface crack density (SCD), and residual stress characteristics. Advanced characterization techniques including scanning electron microscopy (SEM) were employed to analyze surface topography and subsurface microstructural changes. The optimization process identified optimal machining conditions of current = 9 A, Ton = 100 µs, Toff = 10 µs, and gap voltage = 65 V, achieving a minimum surface roughness of 3.22 µm. Remarkably, these optimized parameters resulted in crack-free surfaces with zero surface crack density and minimal residual stress values across the 2θ range of 90° to 180°. To enhance predictive capabilities, supervised machine learning algorithms were implemented to model surface roughness behavior. Comparative analysis of classification algorithms demonstrated that Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Gaussian Naïve Bayes achieved superior performance with F1-scores of 0.88 and prediction accuracies of 90%. The integration of sustainable Jatropha biodielectric fluid with TiO2 nanoparticles represents a significant advancement in environmentally conscious precision machining, while the machine learning approach establishes a robust framework for intelligent process optimization and quality prediction in advanced manufacturing applications.
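
The comparative classification step could be sketched as follows with the same three classifier families the abstract names; the feature matrix of process parameters and the roughness-class labels are assumed inputs, and default hyperparameters stand in for the tuned models.

```python
# Assumed inputs: X holds process-parameter features, y holds discretized
# roughness classes; default hyperparameters stand in for the tuned models.
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_f1(X, y, cv=5):
    models = {"SVM": SVC(), "kNN": KNeighborsClassifier(),
              "GaussianNB": GaussianNB()}
    return {name: cross_val_score(m, X, y, cv=cv, scoring="f1_weighted").mean()
            for name, m in models.items()}
```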

18 pages, 10448 KB  
Article
Forest Density Detection Using a Set of Remotely Sensed Vegetation Indices, Texture Parameters, and Spatial Clustering Metrics
by Stavros Kolios and Mariana Mandilara
Geomatics 2026, 6(2), 33; https://doi.org/10.3390/geomatics6020033 - 27 Mar 2026
Abstract
Monitoring forest density is essential for understanding ecosystem health, wildfire risk, and post-disturbance recovery. This study proposes a robust methodology to extract forest density classes exclusively using Sentinel-2 multispectral imagery combined with vegetation indices (VIs), textural parameters, and spatial clustering metrics. The approach was applied to the northern part of Euboea Island, Greece, as a pilot area severely affected by a wildfire in August 2021. Four cloud-free Sentinel-2 images (2017–2024) were selected to capture pre- and post-fire conditions. A set of nine VIs—representing vegetation vigor, chlorophyll content, soil exposure, and canopy moisture—were calculated and statistically assessed for independence. To enhance classification accuracy, texture measures (homogeneity, correlation, and entropy) and spatial autocorrelation metrics (Moran’s I, Getis-Ord Gi) were derived for selected VIs. Supervised classification was performed using the Maximum Likelihood algorithm, yielding overall accuracies up to 89.4% and kappa coefficients above 0.85 when combining VIs with texture and spatial metrics. Results revealed a dramatic 49.3% reduction in forest cover immediately after the wildfire, with partial recovery (to 77.9% of pre-fire levels) three years later, mainly as a low-density forest. Approximately 12.1% of forest cover failed to regenerate, indicating potential long-term ecosystem degradation. The proposed approach provides a computationally efficient, high-accuracy alternative to data-fusion methods involving Light Detection and Ranging (LiDAR) or Synthetic Aperture Radar (SAR) datasets, making it suitable for operational forest monitoring and fire-risk management.
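
Global Moran's I, one of the spatial autocorrelation metrics derived for the vegetation indices, is compact enough to show directly; a minimal sketch follows, assuming a precomputed spatial weight matrix with a zero diagonal.

```python
# Global Moran's I for a vegetation-index raster flattened to a vector;
# the spatial weight matrix (zero diagonal) is assumed to be precomputed.
import numpy as np

def morans_i(values, weights):
    """values: (n,) observations; weights: (n, n) with zero diagonal."""
    z = np.asarray(values, dtype=float) - np.mean(values)
    w = np.asarray(weights, dtype=float)
    return len(z) / w.sum() * (w * np.outer(z, z)).sum() / (z ** 2).sum()
```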

23 pages, 7096 KB  
Article
Research and Application of Functional Model Construction Method for Production Equipment Operation Management and Control Oriented to Diversified and Personalized Scenarios
by Jun Li, Keqin Dou, Jinsong Liu, Qing Li and Yong Zhou
Machines 2026, 14(4), 368; https://doi.org/10.3390/machines14040368 - 27 Mar 2026
Abstract
Production equipment operation management and control (PEOMC) is a complex systems-engineering task involving multiple stakeholders, multi-objective collaboration, and multiple spatiotemporal scales; its components, logical structure, and functional mechanisms can be generalized through functional modelling to support dynamic analysis and intelligent decision-making in the industrial internet environment. To address the diversity of scenarios and objectives of PEOMC, a hierarchical construction method for the functional model of PEOMC based on IDEF0 is proposed. By analysing relevant international standards, such as ISO 55010, ISO/IEC 62264, and OSA-CBM, the generic functional modules for the first and second layers of the functional model are identified and defined. On the basis of semi-supervised machine learning, topic clustering is used to extract the components, functional mechanisms, and logical relationships of production equipment operation management and control from approximately 200 standard texts and to construct a reference resource pool for the third-layer functional module. On this basis, an interface matching and recursive traversal algorithm for functional modules is designed, and a composition and orchestration strategy of functional modules for specific scenarios is provided to support the flexible construction of diversified and personalized PEOMC scenarios. The proposed construction and application method was validated through an engineering case study in an aero-engine transmission unit manufacturing workshop: the average process capability index of the enterprise’s production equipment steadily increased from 1.28 to approximately 1.60, the mean time to repair (MTTR) of production equipment failures significantly decreased from 8 h to 3 h, and the average overall equipment effectiveness (OEE) increased from 56.43% to a stable 68.57%, demonstrating its effectiveness and practicality.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
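
The topic-clustering step over standard texts might be sketched as below with NMF on TF-IDF vectors; the topic count, preprocessing, and English stop-word list are assumptions rather than the paper's setup.

```python
# Assumed setup: NMF over TF-IDF vectors of standard texts surfaces
# candidate functional-module topics as lists of top terms.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_topics(standard_texts, n_topics=10, top_n=8):
    vec = TfidfVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(standard_texts)
    nmf = NMF(n_components=n_topics, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    return [[terms[i] for i in comp.argsort()[::-1][:top_n]]
            for comp in nmf.components_]
```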
