Search Results (6,630)

Search Parameters:
Keywords = integrated score

23 pages, 1420 KB  
Article
All-Weather Forest Fire Automatic Monitoring and Early Warning Application Based on Multi-Source Remote Sensing Data: Case Study of Yunnan
by Boyang Gao, Weiwei Jia, Qiang Wang and Guang Yang
Fire 2025, 8(9), 344; https://doi.org/10.3390/fire8090344 (registering DOI) - 27 Aug 2025
Abstract
Forest fires pose severe ecological, climatic, and socio-economic threats, destroying habitats and emitting greenhouse gases. Early and timely warning is particularly challenging because fires often originate from small-scale, low-temperature ignition sources. Traditional monitoring approaches primarily rely on single-source satellite imagery and empirical threshold algorithms, and most forest fire monitoring tasks remain human-driven. Existing frameworks have yet to effectively integrate multiple data sources and detection algorithms, lacking the capability to provide continuous, automated, and generalizable fire monitoring across diverse fire scenarios. To address these challenges, this study first improves multiple monitoring algorithms for forest fire detection, including a statistically enhanced automatic thresholding method; data augmentation to expand the U-Net deep learning dataset; and the application of a freeze–unfreeze transfer learning strategy to the U-Net transfer model. Multiple algorithms are systematically evaluated across varying fire scales, showing that the improved automatic threshold method achieves the best performance on GF-4 imagery with an F-score of 0.915 (95% CI: 0.8725–0.9524), while the U-Net deep learning algorithm yields the highest F-score of 0.921 (95% CI: 0.8537–0.9739) on Landsat 8 imagery. All methods demonstrate robust performance and generalizability across diverse scenarios. Second, data-driven scheduling technology is developed to automatically initiate preprocessing and fire detection tasks, significantly reducing fire discovery time. Finally, an integrated framework of multi-source remote sensing data, advanced detection algorithms, and a user-friendly visualization interface is proposed. This framework enables all-weather, fully automated forest fire monitoring and early warning, facilitating dynamic tracking of fire evolution and precise fire line localization through the cross-application of heterogeneous data sources. The framework’s effectiveness and practicality are validated through wildfire cases in two regions of Yunnan Province, offering scalable technical support for improving early detection of and rapid response to forest fires. Full article
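The abstract reports per-sensor F-scores with 95% confidence intervals. A minimal sketch of one common way to obtain such an interval, a percentile bootstrap over the evaluation pixels, with simulated labels standing in for the GF-4 / Landsat 8 fire masks (not the authors' code):

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical per-pixel labels for a set of fire scenes (1 = fire, 0 = background).
y_true = rng.integers(0, 2, size=5000)
y_pred = np.where(rng.random(5000) < 0.9, y_true, 1 - y_true)  # simulated detector output

point = f1_score(y_true, y_pred)

# Percentile bootstrap over pixels to approximate a 95% CI for the F-score.
boots = []
idx = np.arange(len(y_true))
for _ in range(1000):
    s = rng.choice(idx, size=len(idx), replace=True)
    boots.append(f1_score(y_true[s], y_pred[s]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"F-score {point:.3f} (95% CI: {lo:.4f}-{hi:.4f})")
```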
26 pages, 3815 KB  
Article
Deep Learning Method Based on Multivariate Variational Mode Decomposition for Classification of Epileptic Signals
by Shang Zhang, Guangda Liu, Shiqing Sun and Jing Cai
Brain Sci. 2025, 15(9), 933; https://doi.org/10.3390/brainsci15090933 (registering DOI) - 27 Aug 2025
Abstract
Background/Objectives: Epilepsy is a neurological disorder that severely impacts patients’ quality of life. In clinical practice, specific pharmacological and surgical interventions are tailored to distinct seizure types. The identification of the epileptogenic zone enables the implementation of surgical procedures and neuromodulation therapies. Consequently, accurate classification of seizure types and precise determination of focal epileptic signals are critical to provide clinicians with essential diagnostic insights for optimizing therapeutic strategies. Traditional machine learning approaches are constrained in their efficacy due to limited capability in autonomously extracting features. Methods: This study proposes a novel deep learning framework integrating temporal and spatial information extraction to address this limitation. Multivariate variational mode decomposition (MVMD) is employed to maintain inter-channel mode alignment during the decomposition of multi-channel epileptic signals, ensuring the synchronization of time–frequency characteristics across channels and effectively mitigating mode mixing and mode mismatch issues. Results: The Bern–Barcelona database is employed to classify focal epileptic signals, with the proposed framework achieving an accuracy of 98.85%, a sensitivity of 98.75%, and a specificity of 98.95%. For multi-class seizure type classification, the TUSZ database is utilized. Subject-dependent experiments yield an accuracy of 96.17% with a weighted F1-score of 0.962. Meanwhile, subject-independent experiments attain an accuracy of 87.97% and a weighted F1-score of 0.884. Conclusions: The proposed framework effectively integrates temporal and spatial domain information derived from multi-channel epileptic signals, thereby significantly enhancing the algorithm’s classification performance. The performance on unseen patients demonstrates robust generalization capability, indicating the potential clinical applicability in assisting neurologists with epileptic signal classification. Full article
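The TUSZ results are reported as accuracy plus a weighted F1-score. A minimal sketch of how that metric is computed with scikit-learn, on made-up multi-class seizure-type labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical predictions for a multi-class seizure-type task (labels 0..3).
y_true = np.array([0, 1, 2, 3, 1, 2, 0, 3, 2, 1])
y_pred = np.array([0, 1, 2, 3, 1, 0, 0, 3, 2, 2])

acc = accuracy_score(y_true, y_pred)
# Weighted F1 averages per-class F1-scores weighted by class support,
# which is the figure quoted for the subject-dependent and -independent experiments.
wf1 = f1_score(y_true, y_pred, average="weighted")
print(f"accuracy={acc:.4f}, weighted F1={wf1:.4f}")
```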
40 pages, 30640 KB  
Article
From Data to Diagnosis: A Novel Deep Learning Model for Early and Accurate Diabetes Prediction
by Muhammad Mohsin Zafar, Zahoor Ali Khan, Nadeem Javaid, Muhammad Aslam and Nabil Alrajeh
Healthcare 2025, 13(17), 2138; https://doi.org/10.3390/healthcare13172138 - 27 Aug 2025
Abstract
Background: Diabetes remains a major global health challenge, contributing significantly to premature mortality due to its potential progression to organ failure if not diagnosed early. Traditional diagnostic approaches are subject to human error, highlighting the need for modern computational techniques in clinical decision support systems. Although these systems have successfully integrated deep learning (DL) models, they still encounter several challenges, such as a lack of intricate pattern learning, imbalanced datasets, and poor interpretability of predictions. Methods: To address these issues, the temporal inception perceptron network (TIPNet), a novel DL model, is designed to accurately predict diabetes by capturing complex feature relationships and temporal dynamics. An adaptive synthetic oversampling strategy is utilized to reduce severe class imbalance in an extensive diabetes health indicators dataset consisting of 253,680 instances and 22 features, providing a diverse and representative sample for model evaluation. The model’s performance and generalizability are assessed using a 10-fold cross-validation technique. To enhance interpretability, explainable artificial intelligence techniques are integrated, including local interpretable model-agnostic explanations and Shapley additive explanations, providing insights into the model’s decision-making process. Results: Experimental results demonstrate that TIPNet achieves improvement scores of 3.53% in accuracy, 3.49% in F1-score, 1.14% in recall, and 5.95% in the area under the receiver operating characteristic curve. Conclusion: These findings indicate that TIPNet is a promising tool for early diabetes prediction, offering accurate and interpretable results. The integration of advanced DL modeling with oversampling strategies and explainable AI techniques positions TIPNet as a valuable resource for clinical decision support, paving the way for its future application in healthcare settings. Full article
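The abstract describes adaptive synthetic oversampling evaluated under 10-fold cross-validation. A hedged sketch using imbalanced-learn's ADASYN inside a pipeline so that oversampling touches only the training folds; the dataset and the Random Forest stand-in for TIPNet are placeholders:

```python
from imblearn.over_sampling import ADASYN
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder imbalanced dataset standing in for the 253,680-row health-indicators data.
X, y = make_classification(n_samples=2000, n_features=22, weights=[0.85, 0.15], random_state=0)

# Oversampling is applied inside each training fold only, avoiding leakage into validation folds.
pipe = Pipeline([
    ("adasyn", ADASYN(random_state=0)),
    ("clf", RandomForestClassifier(random_state=0)),  # stand-in for the TIPNet model
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="f1")
print(scores.mean(), scores.std())
```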
22 pages, 1926 KB  
Review
Biological Sequence Representation Methods and Recent Advances: A Review
by Hongwei Zhang, Yan Shi, Yapeng Wang, Xu Yang, Kefeng Li, Sio-Kei Im and Yu Han
Biology 2025, 14(9), 1137; https://doi.org/10.3390/biology14091137 - 27 Aug 2025
Abstract
Biological-sequence representation methods are pivotal for advancing machine learning in computational biology, transforming nucleotide and protein sequences into formats that enhance predictive modeling and downstream task performance. This review categorizes these methods into three developmental stages: computational-based, word embedding-based, and large language model (LLM)-based, detailing their principles, applications, and limitations. Computational-based methods, such as k-mer counting and position-specific scoring matrices (PSSM), extract statistical and evolutionary patterns to support tasks like motif discovery and protein–protein interaction prediction. Word embedding-based approaches, including Word2Vec and GloVe, capture contextual relationships, enabling robust sequence classification and regulatory element identification. Advanced LLM-based methods, leveraging Transformer architectures like ESM3 and RNAErnie, model long-range dependencies for RNA structure prediction and cross-modal analysis, achieving superior accuracy. However, challenges persist, including computational complexity, sensitivity to data quality, and limited interpretability of high-dimensional embeddings. Future directions prioritize integrating multimodal data (e.g., sequences, structures, and functional annotations), employing sparse attention mechanisms to enhance efficiency, and leveraging explainable AI to bridge embeddings with biological insights. These advancements promise transformative applications in drug discovery, disease prediction, and genomics, empowering computational biology with robust, interpretable tools. Full article
(This article belongs to the Special Issue Machine Learning Applications in Biology—2nd Edition)
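For the computational-based stage, k-mer counting is the canonical example. A minimal sketch of a fixed-length 3-mer count vector for a nucleotide sequence (sequence and k are illustrative):

```python
from collections import Counter
from itertools import product

def kmer_vector(seq, k=3, alphabet="ACGT"):
    """Return a fixed-length k-mer count vector for a nucleotide sequence."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get(km, 0) for km in kmers]

vec = kmer_vector("ATGCGATACGCTTGA", k=3)
print(len(vec), sum(vec))  # 64 possible 3-mers; counts sum to len(seq) - k + 1
```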
20 pages, 2543 KB  
Article
Development of Fermentation Strategies for Quality Mild Coffee Production (Coffea arabica L.) Based on Oxygen Availability and Processing Time
by Aida Esther Peñuela-Martínez, Carol Vanessa Osorio-Giraldo, Camila Buitrago-Zuluaga and Rubén Darío Medina-Rivera
Foods 2025, 14(17), 3001; https://doi.org/10.3390/foods14173001 (registering DOI) - 27 Aug 2025
Abstract
Fermentation is a crucial stage in the production of washed mild coffees, as it enables the generation of compounds that influence overall quality. The conditions to optimize this process are still unknown. This study evaluated the effects of fermenting coffee fruits and depulped coffee under two conditions: an open tank (semi-anaerobic-SA) and a closed tank (self-induced anaerobic fermentation, SIAF) over 192 h. Samples were taken every 24 h using a sacrificial bioreactor. A randomized complete block design with a factorial arrangement (2 × 2 + 1), plus a standard control, was employed, incorporating two factors: coffee type and fermentation condition. High-throughput sequencing of 16S and ITS amplicons identified an average of 260 ± 71 and 101 ± 24 OTUs, respectively. Weissella was the dominant lactic acid bacteria, followed by Leuconostoc and Lactiplantibacillus. Acetic acid bacteria, mainly Acetobacter, were more abundant under semi-anaerobic conditions. The yeast genera most affected by the fermentation condition were Pichia, Issatchenkia, and Wickerhamomyces. Repeated measures analysis revealed significant differences in pH, glucose consumption, lactic acid production, dry matter content, embryo viability, and the percentage of healthy beans. Principal component analysis was used to develop an index that integrates physical, physiological, and sensory quality variables, thereby clarifying the impact of each treatment. Samples from shorter fermentation times and SIAF conditions scored closest to 1.0, reflecting the most favorable outcomes. In contrast, under the SA condition, samples from longer fermentation times scored 0.497 for depulped coffee and 0.369 for coffee fruits. These findings support technically and economically beneficial fermentation strategies. Full article
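The abstract derives a 0-to-1 quality index from principal component analysis over physical, physiological, and sensory variables. A hedged sketch of that idea with scikit-learn; the variables, values, and scaling are assumptions, not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Hypothetical per-sample quality variables (e.g., healthy beans %, embryo viability, cup score);
# the actual variables and loadings behind the paper's index are not reproduced here.
X = np.array([
    [92.1, 85.0, 8.1],
    [88.4, 80.2, 7.9],
    [75.0, 60.5, 7.2],
    [70.3, 55.1, 7.0],
])

scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(X))
index = MinMaxScaler().fit_transform(scores)  # rescale the first component to 0-1
# The sign of a principal component is arbitrary; flip if needed so 1.0 = most favorable.
print(index.ravel())
```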
37 pages, 3805 KB  
Article
Comparative Evaluation of CNN and Transformer Architectures for Flowering Phase Classification of Tilia cordata Mill. with Automated Image Quality Filtering
by Bogdan Arct, Bartosz Świderski, Monika A. Różańska, Bogdan H. Chojnicki, Tomasz Wojciechowski, Gniewko Niedbała, Michał Kruk, Krzysztof Bobran and Jarosław Kurek
Sensors 2025, 25(17), 5326; https://doi.org/10.3390/s25175326 (registering DOI) - 27 Aug 2025
Abstract
Understanding and monitoring the phenological phases of trees is essential for ecological research and climate change studies. In this work, we present a comprehensive evaluation of state-of-the-art convolutional neural networks (CNNs) and transformer architectures for the automated classification of the flowering phase of Tilia cordata Mill. (small-leaved lime) based on a large set of real-world images acquired under natural field conditions. The study introduces a novel, automated image quality filtering approach using an XGBoost classifier trained on diverse exposure and sharpness features to ensure robust input data for subsequent deep learning models. Seven modern neural network architectures, including VGG16, ResNet50, EfficientNetB3, MobileNetV3 Large, ConvNeXt Tiny, Vision Transformer (ViT-B/16), and Swin Transformer Tiny, were fine-tuned and evaluated under a rigorous cross-validation protocol. All models achieved excellent performance, with cross-validated F1-scores exceeding 0.97 and balanced accuracy up to 0.993. The best results were obtained for ResNet50 and ConvNeXt Tiny (F1-score: 0.9879 ± 0.0077 and 0.9860 ± 0.0073, balanced accuracy: 0.9922 ± 0.0054 and 0.9927 ± 0.0042, respectively), indicating outstanding sensitivity and specificity for both flowering and non-flowering classes. Classical CNNs (VGG16, ResNet50, and ConvNeXt Tiny) demonstrated slightly superior robustness compared to transformer-based models, though all architectures maintained high generalization and minimal variance across folds. The integrated quality assessment and classification pipeline enables scalable, high-throughput monitoring of flowering phases in natural environments. The proposed methodology is adaptable to other plant species and locations, supporting future ecological monitoring and climate studies. Our key contributions are as follows: (i) introducing an automated exposure-quality filtering stage for field imagery; (ii) publishing a curated, season-long dataset of Tilia cordata images; and (iii) providing the first systematic cross-validated benchmark that contrasts classical CNNs with transformer architectures for phenological phase recognition. Full article
(This article belongs to the Special Issue Application of UAV and Sensing in Precision Agriculture)
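A hedged sketch of the quality-filtering idea: simple exposure and sharpness descriptors fed to an XGBoost classifier. The feature choices (under/over-exposure fractions, Laplacian-variance sharpness) and the synthetic images are assumptions, not the paper's feature set:

```python
import cv2
import numpy as np
from xgboost import XGBClassifier

def quality_features(gray):
    """Exposure and sharpness descriptors for one grayscale image (illustrative choices)."""
    g = gray.astype(np.float64)
    return [
        g.mean(),                            # brightness / exposure level
        g.std(),                             # global contrast
        (g < 10).mean(), (g > 245).mean(),   # under- / over-exposed pixel fractions
        cv2.Laplacian(g, cv2.CV_64F).var(),  # Laplacian variance as a sharpness proxy
    ]

# Synthetic stand-ins for field images: sharp textured frames vs. blurred ones.
rng = np.random.default_rng(0)
sharp = [rng.integers(0, 256, (128, 128)).astype(np.uint8) for _ in range(20)]
blurry = [cv2.GaussianBlur(s, (15, 15), 0) for s in sharp]

X = np.array([quality_features(g) for g in sharp + blurry])
y = np.array([1] * len(sharp) + [0] * len(blurry))  # 1 = usable, 0 = reject

clf = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
print(clf.predict(X[:3]))
```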
20 pages, 2631 KB  
Article
Machine Learning Models for SQL Injection Detection
by Cosmina-Mihaela Rosca, Adrian Stancu and Catalin Popescu
Electronics 2025, 14(17), 3420; https://doi.org/10.3390/electronics14173420 (registering DOI) - 27 Aug 2025
Abstract
Cyberattacks include Structured Query Language Injection (SQLi), which represents threats at the level of web applications that interact with the database. These attacks are carried out by executing SQL commands, which compromise the integrity and confidentiality of the data. In this paper, a machine learning (ML)-based model is proposed for identifying SQLi attacks. The authors propose a two-stage personalized software processing pipeline as a novel element. Although individual techniques are known, their structured combination and application in this context represent a novel approach to transforming raw SQL queries into input features for an ML model. In this research, a dataset consisting of 90,000 SQL queries was constructed, comprising 17,695 legitimate and 72,304 malicious queries. The dataset consists of synthetic data generated using the GPT-4o model and data from a publicly available dataset. These were processed within a pipeline proposed by the authors, consisting of two stages: syntactic normalization and the extraction of the eight semantic features for model training. Also, within the research, several ML models were analyzed using the Azure Machine Learning Studio platform. These models were paired with different sampling algorithms for selecting the training set and the validation set. Out of the 15 training-sampling algorithm combinations, the Voting Ensemble model achieved the best performance. It achieved an accuracy of 96.86%, a weighted AUC of 98.25%, a weighted F1-score of 96.77%, a weighted precision of 96.92%, and a Matthews correlation coefficient of 89.89%. These values demonstrate the model’s ability to classify queries as legitimate or malicious. The attack identification rate was only 15 malicious queries missed out of a total of 7200, and the number of false alarms was 211 cases. The results confirm the possibility of integrating this algorithm into an additional security layer within an existing web application architecture. In practice, the authors suggest adding an extra layer of security using synthetic data. Full article
(This article belongs to the Special Issue Machine Learning and Cybersecurity—Trends and Future Challenges)
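A hedged sketch of a two-stage treatment of raw SQL queries, syntactic normalization followed by hand-crafted indicators; the specific features shown are illustrative and not the paper's eight semantic features:

```python
import re

def normalize(query):
    """Stage 1 (illustrative): lowercase, collapse whitespace, mask string and numeric literals."""
    q = query.lower()
    q = re.sub(r"'[^']*'", "'str'", q)   # replace quoted string literals
    q = re.sub(r"\b\d+\b", "0", q)       # replace numeric literals
    return re.sub(r"\s+", " ", q).strip()

def features(query):
    """Stage 2 (illustrative): a few indicators of typical SQLi patterns."""
    q = normalize(query)
    return {
        "length": len(q),
        "quotes": q.count("'"),
        "comment_marker": int("--" in q or "/*" in q),
        "union_select": int("union" in q and "select" in q),
        "tautology": int(bool(re.search(r"\bor\b\s+\S+\s*=\s*\S+", q))),
        "stacked": q.count(";"),
    }

print(features("SELECT * FROM users WHERE id = 1 OR 1=1 -- "))
```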
19 pages, 6232 KB  
Article
Comparison of Open Versus Minimally Invasive Repair of Colovesical Fistula: A Case Report and Propensity-Matched National Database Analysis
by Alexis Volkert, Anmol Nigam, David Stover, Pravin Meshram, Rubeena Naaz, Chidiebere Onongaya, Sean Huu-Tien Nguyen, Jordan Sauve, Wolfgang Gaertner and James V. Harmon Jr.
J. Clin. Med. 2025, 14(17), 6065; https://doi.org/10.3390/jcm14176065 (registering DOI) - 27 Aug 2025
Abstract
Background: Colovesical fistulas are abnormal communications between the colon and urinary bladder, most commonly caused by diverticular disease. Although colovesical fistulas are rare, they should be suspected in patients presenting with recurrent urinary tract infections, pneumaturia, or fecaluria. We integrated two case reports with a retrospective national cohort analysis to assess the surgical treatment of colovesical fistulas. Methods: We report two cases of colovesical fistulas, both secondary to sigmoid diverticulitis, treated surgically via minimally invasive approaches. A retrospective analysis using the National Inpatient Sample database from 2016 to 2022 was conducted to compare outcomes of open surgery with those of minimally invasive surgery. Propensity score matching and multivariable regression analyses were used to evaluate clinical outcomes. Results: The first patient underwent hand-assisted laparoscopic sigmoidectomy with fistula takedown and has remained asymptomatic at 8 months, while the second patient underwent robotic-assisted sigmoidectomy with staged ileostomy reversal and has remained asymptomatic at 1 month. National data analysis showed no significant difference in mortality (<1% versus <1%, p = 0.931), wound complications (1.4% versus 1.0%; p = 0.554), or postoperative sepsis or shock (7.1% versus 5.6%; p = 0.114) between open and minimally invasive surgical approaches. However, the minimally invasive surgery group had significantly shorter length of stay than the open surgery group (6.9 versus 7.3 days, p < 0.001). Conclusions: Minimally invasive repair of colovesical fistulas was associated with shorter hospital stays than open surgery, with no significant differences in major complications. Early identification and timely surgical management are critical for achieving favorable outcomes. Full article
(This article belongs to the Section Gastroenterology & Hepatopancreatobiliary Medicine)
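A minimal sketch of propensity score matching of the kind used for the National Inpatient Sample comparison: a logistic model for the propensity score and 1:1 nearest-neighbour matching with a caliper. Covariates, data, and the caliper value are placeholders, not the authors' specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Placeholder covariates (e.g., age, comorbidity score) and treatment flag (1 = minimally invasive).
X = rng.normal(size=(500, 2))
t = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# Estimate propensity scores and match each treated case to its nearest untreated neighbour.
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
dist, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

caliper = 0.05  # discard poor matches beyond a caliper on the propensity score
pairs = [(i, control[j[0]]) for i, d, j in zip(treated, dist, idx) if d[0] <= caliper]
print(len(pairs), "matched pairs")
```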
15 pages, 1018 KB  
Article
Development and Validation of a NEWS2-Enhanced Multivariable Prediction Model for Clinical Deterioration and In-Hospital Mortality in Hospitalized Adults
by Sofia Lo Conte, Guido Fruscoloni, Alessandra Cartocci, Martin Vitiello, Maria Francesca De Marco, Gabriele Cevenini and Paolo Barbini
Medicina 2025, 61(9), 1543; https://doi.org/10.3390/medicina61091543 - 27 Aug 2025
Abstract
Background and Objectives: Early identification of patients at risk of clinical deterioration is essential for optimizing therapeutic management and improving outcomes in general medicine wards. The National Early Warning Score 2 (NEWS2) is a validated tool for predicting patient worsening but integrating it with additional clinical and demographic data can enhance its predictive accuracy and support timely clinical decisions. Material and methods: In this retrospective cohort study, 2108 patients admitted to the general medicine department of the University Hospital of Siena were analyzed. Logistic regression models incorporating NEWS2 alongside key clinical variables—including age, presence of central venous catheter (CVC), and functional status measured by the Barthel Index—were developed to predict high clinical risk (HCR) and mortality. Model performance was assessed using the area under the ROC curve (AUC). Results: High clinical risk status developed in 29% of patients. Older age, presence of CVC, lower Barthel Index, and higher NEWS2 scores were significantly associated with both HCR and mortality. The integrated predictive model demonstrated good accuracy, with an AUC of 0.798 for HCR and 0.716 for mortality prediction. Conclusions: This study suggests that NEWS2, when combined with additional patient-specific variables from the electronic health record, can become a more sophisticated tool for early risk stratification. Such a tool has the potential to support timely clinical intervention and optimized therapeutic management, potentially contributing to improved patient outcomes. While the model may indirectly support nurse workload balancing by identifying patients requiring intensified care, its ultimate impact on patient outcomes requires confirmation through prospective studies. Full article
(This article belongs to the Section Epidemiology & Public Health)
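A hedged sketch of the modelling approach: logistic regression on NEWS2 plus age, CVC, and Barthel Index, evaluated by AUC. All data below are synthetic stand-ins, and the coefficients used to generate them are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for the predictors used in the paper's models.
age = rng.normal(75, 10, n)
news2 = rng.integers(0, 15, n)
cvc = rng.integers(0, 2, n)        # central venous catheter present
barthel = rng.integers(0, 101, n)  # functional status
logit = -6 + 0.03 * age + 0.35 * news2 + 0.8 * cvc - 0.02 * barthel
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # high clinical risk outcome

X = np.column_stack([age, news2, cvc, barthel])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```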
18 pages, 6155 KB  
Article
Integrating RGB Image Processing and Random Forest Algorithm to Estimate Stripe Rust Disease Severity in Wheat
by Andrzej Wójtowicz, Jan Piekarczyk, Marek Wójtowicz, Sławomir Królewicz, Ilona Świerczyńska, Katarzyna Pieczul, Jarosław Jasiewicz and Jakub Ceglarek
Remote Sens. 2025, 17(17), 2981; https://doi.org/10.3390/rs17172981 (registering DOI) - 27 Aug 2025
Abstract
Accurate and timely assessment of crop disease severity is crucial for effective management strategies and ensuring sustainable agricultural production. Traditional visual disease scoring methods are subjective and labor-intensive, highlighting the need for automated, objective alternatives. This study evaluates the effectiveness of a model for field-based identification and quantification of stripe rust severity in wheat using red, green, blue RGB imaging. Based on crop reflectance hyperspectra (CRHS) acquired using a FieldSpec ASD spectroradiometer, two complementary approaches were developed. In the first approach, we estimate single leaf disease severity (LDS) under laboratory conditions, while in the second approach, we assess crop disease severity (CDS) from field-based RGB images. The high accuracy of both methods enabled the development of a predictive model for estimating LDS from CDS, offering a scalable solution for precision disease monitoring in wheat cultivation. The experiment was conducted on four winter wheat plots subjected to varying fungicide treatments to induce different levels of stripe rust severity for model calibration, with treatment regimes ranging from no application to three applications during the growing season. RGB images were acquired in both laboratory conditions (individual leaves) and field conditions (nadir and oblique perspectives), complemented by hyperspectral measurements in the 350–2500 nm range. To achieve automated and objective assessment of disease severity, we developed custom image-processing scripts and applied Random Forest classification and regression models. The models demonstrated high predictive performance, with the combined use of nadir and oblique RGB imagery achieving the highest classification accuracy (97.87%), sensitivity (100%), and specificity (95.83%). Oblique images were more sensitive to early-stage infection, while nadir images offered greater specificity. Spectral feature selection revealed that wavelengths in the visible (e.g., 508–563 nm and 621–703 nm) and red-edge/SWIR regions (around 1556–1767 nm) were particularly informative for disease detection. In classification models, shorter wavelengths from the visible range proved to be more useful, while in regression models, longer wavelengths were more effective. The integration of RGB-based image analysis with the Random Forest algorithm provides a robust, scalable, and cost-effective solution for monitoring stripe rust severity under field conditions. This approach holds significant potential for enhancing precision agriculture strategies by enabling early intervention and optimized fungicide application. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
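A hedged sketch of Random Forest classification and regression on image-derived colour features, the general pattern the abstract describes; the features and synthetic severity data are assumptions, not the study's predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical per-image colour features (mean R, G, B and a simple yellowness ratio).
severity = rng.uniform(0, 100, n)  # % leaf area with stripe rust pustules
R = 80 + 0.6 * severity + rng.normal(0, 5, n)
G = 120 + 0.4 * severity + rng.normal(0, 5, n)
B = 60 - 0.1 * severity + rng.normal(0, 5, n)
X = np.column_stack([R, G, B, (R + G) / (2 * B + 1e-6)])

# Classification (infected vs. healthy) and regression (continuous severity) variants.
y_cls = (severity > 20).astype(int)
Xtr, Xte, ytr, yte, sev_tr, sev_te = train_test_split(X, y_cls, severity, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
reg = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, sev_tr)
print("classification accuracy:", clf.score(Xte, yte))
print("regression R^2:", reg.score(Xte, sev_te))
```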
41 pages, 10718 KB  
Article
Enhancing a Building Change Detection Model in Remote Sensing Imagery for Encroachments and Construction on Government Lands in Egypt as a Case Study
by Essam Mohamed AbdElhamied, Sherin Moustafa Youssef, Marwa Ali ElShenawy and Gouda Ismail Salama
Appl. Sci. 2025, 15(17), 9407; https://doi.org/10.3390/app15179407 (registering DOI) - 27 Aug 2025
Abstract
Change detection (CD) in optical remote-sensing images is a critical task for applications such as urban planning, disaster monitoring, and environmental assessment. While UNet-based architecture has demonstrated strong performance in CD tasks, it often struggles with capturing deep hierarchical features due to the limitations of plain convolutional layers. Conversely, ResNet architectures excel at learning deep features through residual connections but may lack precise localization capabilities. To address these challenges, we propose ResUNet++, a novel hybrid architecture that combines the strengths of ResNet and UNet for accurate and robust change detection. ResUNet++ integrates residual blocks into the UNet framework to enhance feature representation and mitigate gradient vanishing problems. Additionally, we introduce a Multi-Scale Feature Fusion (MSFF) module to aggregate features at different scales, improving the detection of both large and small changes. Experimental results on multiple datasets (EGY-CD, S2Looking, and LEVIR-CD) demonstrate that ResUNet++ outperforms state-of-the-art methods, achieving higher precision, recall, and F1-scores while maintaining computational efficiency. Full article
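A minimal sketch of the core building block the abstract names: a residual convolutional block dropped into a UNet-style encoder, applied to a concatenated bi-temporal image pair. Channel sizes and layout are illustrative (PyTorch), not the ResUNet++ definition:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual conv block of the kind inserted into the UNet framework to deepen features
    while mitigating vanishing gradients; layout here is illustrative."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channel count.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Bi-temporal change detection typically concatenates the two image dates along channels.
t1, t2 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
feat = ResidualBlock(6, 64)(torch.cat([t1, t2], dim=1))
print(feat.shape)  # torch.Size([1, 64, 256, 256])
```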
20 pages, 1484 KB  
Article
Novel Computed Tomography Perfusion and Laboratory Indices as Predictors of Long-Term Outcome and Survival in Acute Ischemic Stroke
by Eray Halil, Kostadin Kostadinov, Nikoleta Traykova, Neli Atanasova, Kiril Atliev, Elizabet Dzhambazova and Penka Atanassova
Neurol. Int. 2025, 17(9), 136; https://doi.org/10.3390/neurolint17090136 - 27 Aug 2025
Abstract
Background/Objectives: Acute ischemic stroke is a leading cause of mortality and long-term disability globally, with limited reliable early predictors of functional outcomes and survival. This study aimed to assess the prognostic value of two novel predictors: the hypoperfusion intensity ratio calculated from mean transit time and time-to-drain maps (HIR-MTT–TTD), derived from computed tomography perfusion (CTP) imaging parameters, and the Inflammation–Coagulation Index (ICI), which integrates systemic inflammatory (C-reactive protein and white blood cell count) and hemostatic (D-dimer) markers. Methods: This prospective, single-center observational study included 60 patients with acute ischemic stroke treated with intravenous thrombolysis and underwent pre-treatment CTP imaging. HIR-MTT–TTD evaluated collateral status and perfusion deficit severity, while ICI integrated C-reactive protein (CRP), white blood cell (WBC) count, and D-dimer levels. Functional outcomes were assessed using the National Institutes of Health Stroke Scale (NIHSS), Barthel Index, and modified Rankin Scale (mRS) at 24 h, 3 months, and 1 year. Results: Of 60 patients, 53.3% achieved functional independence (mRS 0–2) at 1 year. Unadjusted Cox models showed HIR-MTT–TTD (HR = 6.25, 95% CI: 1.48–26.30, p = 0.013) and ICI (HR = 1.08, 95% CI: 1.00–1.17, p = 0.052) were associated with higher 12-month mortality, worse mRS, and lower Barthel scores. After adjustment for age, BMI, smoking status, and sex, these associations became non-significant (HIR-MTT–TTD: HR = 2.83, 95% CI: 0.37–21.37, p = 0.314; ICI: HR = 1.07, 95% CI: 0.96–1.19, p = 0.211). Receiver operating characteristic (ROC) analysis indicated moderate predictive value, with ICI (AUC = 0.756, 95% CI: 0.600–0.867) outperforming HIR-MTT–TTD (AUC = 0.67, 95% CI: 0.48–0.83) for mortality prediction. Conclusions: The study introduces promising prognostic tools for functional outcomes. Elevated HIR-MTT–TTD and ICI values were independently associated with greater initial stroke severity, poorer functional recovery, and increased 1-year mortality. These findings underscore the prognostic significance of hypoperfusion intensity and systemic thrombo-inflammation in acute ischemic stroke. Combining the use of the presented indices may enhance early risk stratification and guide individualized treatment strategies. Full article
(This article belongs to the Section Movement Disorders and Neurodegenerative Diseases)
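A minimal sketch of unadjusted and covariate-adjusted Cox proportional-hazards models, the modelling strategy reported for 12-month mortality, using the lifelines package on synthetic stand-in data (values are not from the study):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for the two indices and a covariate.
df = pd.DataFrame({
    "HIR_MTT_TTD": rng.uniform(0, 1, n),
    "ICI": rng.uniform(0, 30, n),
    "age": rng.normal(70, 10, n),
    "time_months": rng.uniform(1, 12, n),  # follow-up time
    "died": rng.integers(0, 2, n),         # event indicator at 12 months
})

# Unadjusted model for one index, then a model adjusted for the other covariates.
cph = CoxPHFitter()
cph.fit(df[["HIR_MTT_TTD", "time_months", "died"]], duration_col="time_months", event_col="died")
cph.print_summary()
cph_adj = CoxPHFitter().fit(df, duration_col="time_months", event_col="died")
cph_adj.print_summary()
```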
20 pages, 3789 KB  
Article
Anomaly-Detection Framework for Thrust Bearings in OWC WECs Using a Feature-Based Autoencoder
by Se-Yun Hwang, Jae-chul Lee, Soon-sub Lee and Cheonhong Min
J. Mar. Sci. Eng. 2025, 13(9), 1638; https://doi.org/10.3390/jmse13091638 - 27 Aug 2025
Abstract
An unsupervised anomaly-detection framework is proposed and field validated for thrust-bearing monitoring in the impulse turbine of a shoreline oscillating water-column (OWC) wave energy converter (WEC) off Jeju Island, Korea. Operational monitoring is constrained by nonstationary sea states, scarce fault labels, and low-rate supervisory logging at 20 Hz. To address these conditions, a 24 h period of normal operation was median-filtered to suppress outliers, and six physically motivated time-domain features were computed from triaxial vibration at 10 s intervals: absolute mean; standard deviation (STD); root mean square (RMS); skewness; shape factor (SF); and crest factor (CF, peak divided by RMS). A feature-based autoencoder was trained to reconstruct the feature vectors, and reconstruction error was evaluated with an adaptive threshold derived from the moving mean and moving standard deviation to accommodate baseline drift. Performance was assessed on a 2 h test segment that includes a 40 min simulated fault window created by doubling the triaxial vibration amplitudes prior to preprocessing and feature extraction. The detector achieved accuracy of 0.99, precision of 1.00, recall of 0.98, and F1 score of 0.99, with no false positives and five false negatives. These results indicate dependable detection at low sampling rates with modest computational cost. The chosen feature set provides physical interpretability under the 20 Hz constraint, and denoising stabilizes indicators against marine transients, supporting applicability in operational settings. Limitations associated with simulated faults are acknowledged. Future work will incorporate long-term field observations with verified fault progressions, cross-site validation, and integration with digital-twin-enabled maintenance. Full article
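A hedged sketch of the feature and thresholding steps: the six time-domain features named in the abstract for one 10 s window at 20 Hz, and an adaptive threshold built from a moving mean and moving standard deviation of the reconstruction error. The window length and multiplier k are assumptions, not the paper's settings:

```python
import numpy as np
from scipy.stats import skew

def window_features(x):
    """The six time-domain features named in the abstract, for one 10 s window (200 samples at 20 Hz)."""
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([
        np.mean(np.abs(x)),        # absolute mean
        np.std(x),                 # standard deviation (STD)
        rms,                       # root mean square (RMS)
        skew(x),                   # skewness
        rms / np.mean(np.abs(x)),  # shape factor (SF)
        np.max(np.abs(x)) / rms,   # crest factor (CF, peak / RMS)
    ])

def adaptive_threshold(err, win=60, k=3.0):
    """Flag samples whose autoencoder reconstruction error exceeds a drifting baseline."""
    m = np.convolve(err, np.ones(win) / win, mode="same")                  # moving mean
    s = np.sqrt(np.convolve((err - m) ** 2, np.ones(win) / win, mode="same"))  # moving std
    return err > m + k * s

rng = np.random.default_rng(0)
vib = rng.normal(size=200)  # one 10 s triaxial-component window at 20 Hz
print(window_features(vib))
err = np.abs(rng.normal(0.1, 0.02, 720)); err[400:460] *= 3  # simulated fault window
print(adaptive_threshold(err).sum(), "anomalous samples flagged")
```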
22 pages, 1705 KB  
Article
An Intelligent Hybrid AI Course Recommendation Framework Integrating BERT Embeddings and Random Forest Classification
by Armaneesa Naaman Hasoon, Salwa Khalid Abdulateef, R. S. Abdulameer and Moceheb Lazam Shuwandy
Computers 2025, 14(9), 353; https://doi.org/10.3390/computers14090353 (registering DOI) - 27 Aug 2025
Abstract
With the proliferation of online learning platforms, selecting appropriate artificial intelligence (AI) courses has become increasingly complex for learners. This study proposes a novel hybrid AI course recommendation framework that integrates Term Frequency–Inverse Document Frequency (TF-IDF) and Bidirectional Encoder Representations from Transformers (BERT) for robust textual feature extraction, enhanced by a Random Forest classifier to improve recommendation precision. A curated dataset of 2238 AI-related courses from Udemy was constructed through multi-session web scraping, followed by comprehensive data preprocessing. The system computes semantic and lexical similarity using cosine similarity and fuzzy matching to handle user input variations. Experimental results demonstrate a high recommendation accuracy = 91.25%, precision = 96.63%, and F1-score = 90.77%. Compared with baseline models, the proposed framework significantly improves performance in cold-start scenarios and does not rely on historical user interactions. A Flask-based web application was developed for real-time deployment, offering instant, user-friendly recommendations. This work contributes a scalable and metadata-driven AI recommender architecture with practical deployment and promising generalization capabilities. Full article
(This article belongs to the Section AI-Driven Innovations)
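A hedged sketch of the lexical half of the pipeline, TF-IDF plus cosine similarity with fuzzy matching to absorb typos in the query; the BERT embedding and Random Forest stages are omitted, and the catalogue is a toy stand-in for the scraped Udemy metadata:

```python
from difflib import get_close_matches
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy course catalogue standing in for the 2238 scraped AI courses.
titles = [
    "Deep Learning with PyTorch",
    "Natural Language Processing with Transformers",
    "Machine Learning for Beginners",
    "Computer Vision and CNNs",
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(titles)

def recommend(query, top_k=2):
    # Fuzzy matching corrects typos in the user's query before lexical scoring.
    corrected = " ".join(
        (get_close_matches(w, vec.get_feature_names_out(), n=1, cutoff=0.8) or [w])[0]
        for w in query.lower().split()
    )
    sims = cosine_similarity(vec.transform([corrected]), matrix).ravel()
    return [titles[i] for i in sims.argsort()[::-1][:top_k]]

print(recommend("deep lerning pythorch"))
```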
30 pages, 21387 KB  
Article
An Intelligent Docent System with a Small Large Language Model (sLLM) Based on Retrieval-Augmented Generation (RAG)
by Taemoon Jung and Inwhee Joe
Appl. Sci. 2025, 15(17), 9398; https://doi.org/10.3390/app15179398 (registering DOI) - 27 Aug 2025
Abstract
This study designed and empirically evaluated a method to enhance information accessibility for museum and art gallery visitors using a small Large Language Model (sLLM) based on the Retrieval-Augmented Generation (RAG) framework. Over 199,000 exhibition descriptions were collected and refined, and a question-answering dataset consisting of 102,000 pairs reflecting user personas was constructed to develop DocentGemma, a domain-optimized language model. This model was fine-tuned through Low-Rank Adaptation (LoRA) based on Google’s Gemma2-9B and integrated with FAISS and OpenSearch-based document retrieval systems within the LangChain framework. Performance evaluation was conducted using a dedicated Q&A benchmark for the docent domain, comparing the model against five commercial and open-source LLMs (including GPT-3.5 Turbo, LLaMA3.3-70B, and Gemma2-9B). DocentGemma achieved an accuracy of 85.55% and a perplexity of 3.78, demonstrating competitive performance in language generation and response accuracy within the domain-specific context. To enhance retrieval relevance, a Spatio-Contextual Retriever (SC-Retriever) was introduced, which combines semantic similarity and spatial proximity based on the user’s query and location. An ablation study confirmed that integrating both modalities improved retrieval quality, with the SC-Retriever achieving a recall@1 of 53.45% and a Mean Reciprocal Rank (MRR) of 68.12, representing a 17.5–20% gain in search accuracy compared to baseline models such as GTE and SpatialNN. System performance was further validated through field deployment at three major exhibition venues in Seoul (the Seoul History Museum, the Hwan-ki Museum, and the Hanseong Baekje Museum). A user test involving 110 participants indicated high response credibility and an average satisfaction score of 4.24. To ensure accessibility, the system supports various output formats, including multilingual speech and subtitles. This work illustrates a practical application of integrating LLM-based conversational capabilities into traditional docent services and suggests potential for further development toward location-aware interactive systems and AI-driven cultural content services. Full article
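A hedged sketch of the SC-Retriever idea, blending cosine similarity with a spatial-proximity term based on the visitor's location; the weighting, distance kernel, and placeholder embeddings are assumptions, not the paper's implementation:

```python
import numpy as np

def sc_scores(query_emb, doc_embs, user_xy, doc_xy, alpha=0.7):
    """Combine semantic similarity with spatial proximity, in the spirit of the SC-Retriever."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    semantic = d @ q                                 # cosine similarity per exhibit description
    dist = np.linalg.norm(doc_xy - user_xy, axis=1)  # distance from the visitor (metres)
    proximity = 1.0 / (1.0 + dist)                   # closer exhibits score higher
    return alpha * semantic + (1 - alpha) * proximity

rng = np.random.default_rng(0)
doc_embs = rng.normal(size=(5, 8))        # placeholder exhibit-description embeddings
query_emb = doc_embs[2] + rng.normal(0, 0.1, 8)
doc_xy = rng.uniform(0, 50, size=(5, 2))  # exhibit positions in the gallery
scores = sc_scores(query_emb, doc_embs, user_xy=doc_xy[2], doc_xy=doc_xy)
print(scores.argsort()[::-1])             # ranked exhibit indices (best first)
```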