Search Results (4,542)

Search Parameters:
Keywords = use case based learning

17 pages, 4131 KiB  
Article
Enhancing Malignant Lymph Node Detection in Ultrasound Imaging: A Comparison Between the Artificial Intelligence Accuracy, Dice Similarity Coefficient and Intersection over Union
by Iulian-Alexandru Taciuc, Mihai Dumitru, Andreea Marinescu, Crenguta Serboiu, Gabriela Musat, Mirela Gherghe, Adrian Costache and Daniela Vrinceanu
J. Mind Med. Sci. 2025, 12(1), 29; https://doi.org/10.3390/jmms12010029 - 4 May 2025
Viewed by 65
Abstract
Background: The accurate identification of malignant lymph nodes in cervical ultrasound images is crucial for early diagnosis and treatment planning. Traditional evaluation metrics, such as accuracy and the Dice Similarity Coefficient (DSC), often fail to provide a realistic assessment of segmentation performance, as they do not account for partial overlaps between predictions and ground truth. This study addresses this gap by introducing the Intersection over Union (IoU) as an additional metric to offer a more comprehensive evaluation of model performance. Specifically, we aimed to develop a convolutional neural network (CNN) capable of detecting suspicious malignant lymph nodes and assess its effectiveness using both conventional and IoU-based performance metrics. Methods: A dataset consisting of 992 malignant lymph node images was extracted from 166 cervical ultrasound scans and labeled using the ImgLab annotation tool. A CNN was developed using Python, Keras, and TensorFlow and employed within the Jupyter Notebook environment. The network architecture consists of four neural layers trained to distinguish malignant lymph nodes. Results: The CNN achieved a training accuracy of 97% and a validation accuracy of 99%. The DSC score was 0.984, indicating a strong segmentation performance, although it was limited to detecting malignant lymph nodes in positive cases. An IoU evaluation applied to the test images revealed an average overlap of 74% between the ground-truth labels and model predictions, offering a more nuanced measure of the segmentation accuracy. Conclusions: The CNN demonstrated high accuracy and DSC scores, confirming its effectiveness in identifying malignant lymph nodes. However, the IoU values, while lower than conventional accuracy metrics, provided a more realistic evaluation of the model’s performance, highlighting areas for potential improvement in segmentation accuracy. This study underscores the importance of using IoU alongside traditional metrics to obtain a more reliable assessment of deep learning-based medical image analysis models. Full article
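The metric comparison at the heart of this abstract is easy to make concrete. The sketch below (an illustration, not the authors' code) computes both the Dice Similarity Coefficient and IoU for a pair of binary masks; because IoU divides by the union rather than by the sum of areas, a partial overlap always scores lower on IoU than on Dice, which is why the reported 74% IoU is a more conservative figure than the 0.984 DSC.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union

# A partially overlapping prediction scores higher on Dice than on IoU,
# which is why IoU gives the more conservative view of segmentation quality.
truth = np.zeros((10, 10)); truth[2:8, 2:8] = 1
pred = np.zeros((10, 10)); pred[3:9, 3:9] = 1
print(f"DSC = {dice_coefficient(pred, truth):.3f}, IoU = {iou(pred, truth):.3f}")
```

The two metrics are linked by IoU = DSC / (2 − DSC), so any DSC below 1 maps to a strictly smaller IoU.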

35 pages, 4949 KiB  
Article
Bidirectional Teaching Reform in Theoretical Mechanics: Integrating Engineering Thinking and Personalized Assignments
by Yue Jia and Chun Li
Educ. Sci. 2025, 15(5), 574; https://doi.org/10.3390/educsci15050574 - 4 May 2025
Viewed by 79
Abstract
Traditional theoretical mechanics courses often emphasize the rote learning of principles over practical applications. This focus can diminish student engagement and leave graduates ill prepared for applying concepts to real engineering problems. To address these challenges, this study introduces a bidirectional teaching reform that integrates a front-end focus on cultivating engineering thinking with a back-end focus on personalized assignment design. In the front-end reform, active learning methods, including case-based and project-based learning (PBL) within a structured BOPPPS lesson framework, are used to connect theoretical content with real-world engineering scenarios, thereby strengthening problem-solving skills and engagement among students. The back-end reform introduces personalized and collaborative assignments tailored to the interests and abilities of students, such as individualized problem sets, programming-based exercises, and team projects that encourage innovation and a deeper exploration of mechanics concepts. By addressing both in-class instruction and post-class work, these two reforms complement each other, providing a cohesive learning experience from initial concept acquisition to practical application. Implemented together in a second-year undergraduate mechanics course, this integrated approach was observed to increase student motivation, improve students’ ability to apply theory in practice, and enhance overall teaching effectiveness while fostering stronger collaborative skills. This bidirectional reform provides an effective model for modernizing theoretical mechanics education and prepares students to meet contemporary engineering needs by bridging the longstanding gap between theoretical knowledge and practical application. Full article

25 pages, 2444 KiB  
Article
Adam Algorithm with Step Adaptation
by Vladimir Krutikov, Elena Tovbis and Lev Kazakovtsev
Algorithms 2025, 18(5), 268; https://doi.org/10.3390/a18050268 - 4 May 2025
Viewed by 55
Abstract
Adam (Adaptive Moment Estimation) is a well-known algorithm for the first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. As computational experiments show, as the problem becomes more ill-conditioned and noise is present on the gradient, Adam is prone to looping, which is associated with the difficulty of adjusting the step size. In this paper, a step-adaptation algorithm for the Adam method is proposed. The adaptation scheme is based on reproducing the state reached by the descent direction and the new gradient during one-dimensional descent. In the case of exact one-dimensional descent, the angle between these directions is a right angle. In the case of inexact descent, if the angle between the descent direction and the new gradient is obtuse, the step is too large and should be reduced; if the angle is acute, the step is too small and should be increased. For the experimental analysis of the new algorithm, test functions of varying condition number with noise added to the gradient, as well as learning problems with mini-batch gradient estimates, were used. As the computational experiments show, in stochastic optimization problems the proposed Adam modification with step adaptation is significantly more efficient than both the standard Adam algorithm and the other step-adaptation methods studied in this work. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
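A schematic reading of the adaptation principle described in the abstract is sketched below; it is not the authors' algorithm, and the increase/decrease factors and the sign convention for the angle test are assumptions. The global step size is adjusted from the projection of the new gradient onto the step just taken: near-orthogonality mimics exact one-dimensional descent, overshooting the one-dimensional minimum shrinks the step, and undershooting lets it grow.

```python
import numpy as np

def adam_with_step_adaptation(grad, x0, alpha=0.01, beta1=0.9, beta2=0.999,
                              eps=1e-8, inc=1.1, dec=0.7, n_iter=2000):
    """Schematic Adam variant whose global step size is adapted from the
    angle between the last step direction and the new gradient."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, n_iter + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        d = -m_hat / (np.sqrt(v_hat) + eps)   # descent direction
        x = x + alpha * d
        g_new = grad(x)
        proj = np.dot(d, g_new)               # directional derivative along d
        if proj > 0:      # walked past the 1-D minimum -> step too large
            alpha *= dec
        elif proj < 0:    # still descending along d -> step may grow
            alpha *= inc
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x^T diag(1, 100) x
grad_f = lambda x: np.array([1.0, 100.0]) * x
print(adam_with_step_adaptation(grad_f, x0=[5.0, 5.0]))
```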

11 pages, 1555 KiB  
Article
Deep Learning-Based Classification of Canine Cataracts from Ocular B-Mode Ultrasound Images
by Sanghyeon Park, Seokmin Go, Seonhyo Kim and Jaeho Shim
Animals 2025, 15(9), 1327; https://doi.org/10.3390/ani15091327 - 4 May 2025
Viewed by 70
Abstract
Cataracts are a prevalent cause of vision loss in dogs, and timely diagnosis is essential for effective treatment. This study aimed to develop and evaluate deep learning models to automatically classify canine cataracts from ocular ultrasound images. A dataset of 3155 ultrasound images (comprising 1329 No cataract, 614 Cortical, 1033 Mature, and 179 Hypermature cases) was used to train and validate four widely used deep learning models (AlexNet, EfficientNetB3, ResNet50, and DenseNet161). Data augmentation and normalization techniques were applied to address category imbalance. DenseNet161 demonstrated the best performance, achieving a test accuracy of 92.03% and an F1-score of 0.8744. The confusion matrix revealed that the model attained the highest accuracy for the No cataract category (99.0%), followed by Cortical (90.3%) and Mature (86.5%) cataracts, while Hypermature cataracts were classified with lower accuracy (78.6%). Receiver Operating Characteristic (ROC) curve analysis confirmed strong discriminative ability, with an area under the curve (AUC) of 0.99. Visual interpretation using Gradient-weighted Class Activation Mapping indicated that the model effectively focused on clinically relevant regions. This deep learning-based classification framework shows significant potential for assisting veterinarians in diagnosing cataracts, thereby improving clinical decision-making in veterinary ophthalmology. Full article
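As a rough illustration of the training setup described above (not the authors' code; the augmentation parameters and learning rate are assumptions), the best-performing backbone, DenseNet161, can be fine-tuned for the four ultrasound categories like this:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Four ultrasound categories from the abstract.
CLASSES = ["no_cataract", "cortical", "mature", "hypermature"]

# Augmentation + normalization, with illustrative parameter choices.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained DenseNet161 with its classifier head replaced for 4 classes.
model = models.densenet161(weights=models.DenseNet161_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```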

16 pages, 2512 KiB  
Article
Simulation-Based Design and Machine Learning Optimization of a Novel Liquid Cooling System for Radio Frequency Coils in Magnetic Hyperthermia
by Serhat Ilgaz Yöner and Alpay Özcan
Bioengineering 2025, 12(5), 490; https://doi.org/10.3390/bioengineering12050490 - 4 May 2025
Viewed by 176
Abstract
Magnetic hyperthermia is a promising cancer treatment technique that relies on Néel and Brownian relaxation mechanisms to heat superparamagnetic nanoparticles injected into tumor sites. Under low-frequency magnetic fields, nanoparticles generate localized heat, inducing controlled thermal damage to cancer cells. However, radio frequency coils used to generate alternating magnetic fields may suffer from excessive heating, leading to efficiency losses and unintended thermal effects on surrounding healthy tissues. This study proposes novel liquid cooling systems, leveraging the skin effect phenomenon, to improve thermal management and reduce coil size. Finite element method-based simulation studies evaluated coil electrical current and temperature distributions under varying applied frequencies, water flow rates, and cooling microchannel dimensions. A dataset of 300 simulation cases was generated to train a Gaussian Process Regression-based machine learning model. The performance index was also developed and modeled using Gaussian Process Regression, enabling rapid performance prediction without requiring extensive numerical studies. Sensitivity analysis and the ReliefF algorithm were applied for a thorough analysis. Simulation results indicate that the proposed novel liquid cooling system demonstrates higher performance compared to conventional systems that utilize direct liquid cooling, offering a computationally efficient method for pre-manufacturing design optimization of radio frequency coil cooling systems in magnetic hyperthermia applications. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
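The surrogate-modeling step can be sketched with scikit-learn's Gaussian Process Regression; the design variables, their ranges, and the synthetic performance-index values below are placeholders for the 300 finite-element cases described in the abstract.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split

# Hypothetical design matrix: [frequency_kHz, flow_rate_Lmin, channel_width_mm]
# and a scalar performance index per simulated case (values are made up; the
# real dataset comes from the 300 FEM simulations described above).
rng = np.random.default_rng(0)
X = rng.uniform([100, 0.5, 0.5], [400, 3.0, 3.0], size=(300, 3))
y = 1.0 / (1.0 + 0.01 * X[:, 0]) + 0.2 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 0.02, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X_tr, y_tr)

# Mean prediction and uncertainty for a candidate design, without re-running FEM.
mean, std = gpr.predict(X_te[:1], return_std=True)
print(f"predicted performance index: {mean[0]:.3f} +/- {std[0]:.3f}")
```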

20 pages, 7332 KiB  
Article
Modeling and Predicting Human Actions in Soccer Using Tensor-SOM
by Moeko Tominaga, Yasunori Takemura and Kazuo Ishii
Appl. Sci. 2025, 15(9), 5088; https://doi.org/10.3390/app15095088 - 3 May 2025
Viewed by 117
Abstract
As robots become increasingly integrated into society, a future in which humans and robots collaborate is expected. In such a cooperative society, robots must possess the ability to predict human behavior. This study investigates a human–robot cooperation system using RoboCup soccer as a testbed, where a robot observes human actions, infers their intentions, and determines its own actions accordingly. Such problems have typically been addressed within the framework of multi-agent systems, where the entity performing an action is referred to as an ‘agent’, and multiple agents cooperate to complete a task. However, a system capable of performing cooperative actions in an environment where both humans and robots coexist has yet to be fully developed. This study proposes an action decision system based on self-organizing maps (SOM), a widely used unsupervised learning model, and evaluates its effectiveness in promoting cooperative play within human teams. Specifically, we analyze futsal game data, where the agents are professional futsal players, as a test case for the multi-agent system. To this end, we employ Tensor-SOM, an extension of SOM that can handle multi-relational datasets. The system learns from this data to determine the optimal movement speeds in x and y directions for each agent’s position. The results demonstrate that the proposed system successfully determines optimal movement speeds, suggesting its potential for integrating robots into human team coordination. Full article
(This article belongs to the Special Issue Recent Advances in Human-Robot Interactions)
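For readers unfamiliar with the underlying model, a minimal plain SOM (the study itself uses the Tensor-SOM extension for multi-relational data, which this sketch does not implement) looks roughly like this, with toy player-state vectors standing in for the futsal data:

```python
import numpy as np

def train_som(data, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: each unit has a weight vector; the best
    matching unit and its grid neighbours are pulled toward each sample."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinates of every unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(n_iter):
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))     # best matching unit
        dist = np.linalg.norm(coords - coords[bmu], axis=1)
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))              # neighbourhood weights
        weights += lr * h[:, None] * (x - weights)
    return weights.reshape(grid[0], grid[1], -1)

# Toy player-state vectors: [x, y, vx, vy] per observation (synthetic here).
data = np.random.default_rng(1).normal(size=(500, 4))
som = train_som(data)
print(som.shape)  # (8, 8, 4)
```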

14 pages, 3816 KiB  
Article
Deep Learning-Based Synthetic CT for Personalized Treatment Modality Selection Between Proton and Photon Therapy in Thoracic Cancer
by Libing Zhu, Nathan Y. Yu, Riley C. Tegtmeier, Jonathan B. Ashman, Aman Anand, Jingwei Duan, Quan Chen and Yi Rong
Cancers 2025, 17(9), 1553; https://doi.org/10.3390/cancers17091553 - 3 May 2025
Viewed by 149
Abstract
Objectives: Identifying patients’ advantageous radiotherapy modalities prior to CT simulation is challenging. This study aimed to develop a workflow using deep learning (DL)-predicted synthetic CT (sCT) for treatment modality comparison based solely on a diagnostic CT (dCT). Methods: A DL network, U-Net, was trained on 46 thoracic cases from a public database to generate sCT images predicting planning CT (pCT) scans from the latest dCT, and tested on 15 institutional patients. The sCT accuracy was evaluated against the corresponding pCT and a CT deformed by a commercial algorithm (MdCT) using Mean Absolute Error (MAE) and the Universal Quality Index (UQI). To determine the advantageous treatment modality, clinical dose-volume histogram (DVH) metrics and Normal Tissue Complication Probability (NTCP) differences between proton and photon treatment plans were analyzed on the sCTs via the concordance correlation coefficient (CCC). Results: The AI-generated sCTs closely resembled those produced by the commercial deformation algorithm in the tested cases. The differences in MAE and UQI values between the sCT-vs-pCT and MdCT-vs-pCT comparisons were 19.38 HU and 0.06, respectively. The mean absolute NTCP deviation between sCT and pCT was 1.54%, 0.21%, and 2.36% for esophagus perforation, lung pneumonitis, and heart pericarditis, respectively. The CCC between sCT and pCT was 0.90 for the DVH metrics and 0.97 for NTCP, indicating moderate agreement for the DVH metrics and substantial agreement for NTCP. Conclusions: Radiation oncologists can potentially utilize this personalized sCT-based approach as a clinical support tool to rapidly compare treatment modality benefits during patient consultation and to facilitate in-depth discussion of potential toxicities at a patient-specific level. Full article
(This article belongs to the Special Issue New Approaches in Radiotherapy for Cancer)
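The agreement statistic used here, Lin's concordance correlation coefficient, is simple to compute; the sketch below uses made-up paired values rather than the study's DVH or NTCP data.

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between paired metrics
    (e.g., an NTCP value computed on sCT vs. the same value on pCT)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative paired values (not the study's data).
ntcp_sct = [1.2, 0.8, 2.5, 3.1, 0.4]
ntcp_pct = [1.0, 0.9, 2.7, 3.0, 0.5]
print(f"CCC = {concordance_correlation(ntcp_sct, ntcp_pct):.3f}")
```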

15 pages, 1890 KiB  
Article
Evaluation of Neural Networks for Improved Computational Cost in Carbon Nanotubes Geometric Optimization
by Luis Josimar Vences-Reynoso, Daniel Villanueva-Vasquez, Roberto Alejo-Eleuterio, Federico Del Razo-López, Sonia Mireya Martínez-Gallegos and Everardo Efrén Granda-Gutiérrez
Modelling 2025, 6(2), 36; https://doi.org/10.3390/modelling6020036 - 2 May 2025
Viewed by 215
Abstract
Geometric optimization of carbon nanotubes (CNTs) is a fundamental step in computational simulations, enabling precise studies of their properties for various applications. However, this process becomes computationally expensive as the molecular structure grows in complexity and size. To address this challenge, this study utilized three deep-learning-based neural network architectures: Multi-Layer Perceptron (MLP), Bidirectional Long Short-Term Memory (BiLSTM), and 1D Convolutional Neural Networks (1D-CNNs). Simulations were performed using the CASTEP module in Material Studio to generate datasets for training the neural networks. While the final geometric optimization calculations were completed within Material Studio, the neural networks effectively generated preoptimized CNT structures that served as starting points, significantly reducing computational time. The results showed that the 1D-CNN architecture performed best for CNTs with 28, 52, 76, and 156 atoms, while the MLP outperformed others for CNTs with 84, 124, 148, and 196 atoms. Across all cases, computational time was reduced by 39.68% to 90.62%. Although the BiLSTM also achieved reductions, its performance was less effective than the other two architectures. This work highlights the potential of integrating deep learning techniques into materials science; it also offers a transformative approach to reducing computational costs in optimizing CNTs and presents a way for accelerated research in molecular systems. Full article
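A minimal sketch of the pre-optimization idea, under the assumption that the networks map flattened initial atomic coordinates to relaxed ones; the synthetic data below stands in for the CASTEP-generated training pairs, and the MLP architecture is an assumption rather than the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical dataset: flattened (x, y, z) coordinates of an unrelaxed CNT as
# input, DFT-optimized coordinates as target (synthetic stand-ins here).
n_samples, n_atoms = 400, 28
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, n_atoms * 3))      # initial geometries
y = X + 0.05 * rng.normal(size=X.shape)            # "relaxed" geometries

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

# The prediction is used only as a starting point; the final relaxation still
# runs in the DFT code, just from a much better initial guess.
pre_optimized = mlp.predict(X_te[:1]).reshape(n_atoms, 3)
print(pre_optimized.shape)
```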

29 pages, 1415 KiB  
Article
Automated Lightweight Model for Asthma Detection Using Respiratory and Cough Sound Signals
by Shuting Xu, Ravinesh C. Deo, Oliver Faust, Prabal D. Barua, Jeffrey Soar and Rajendra Acharya
Diagnostics 2025, 15(9), 1155; https://doi.org/10.3390/diagnostics15091155 - 1 May 2025
Viewed by 82
Abstract
Background and objective: Chronic respiratory diseases, such as asthma and COPD, pose significant challenges to human health and global healthcare systems. This pioneering study utilises AI analysis and modelling of cough and respiratory sound signals to classify and differentiate between asthma, COPD, and healthy subjects. The aim is to develop an AI-based diagnostic system capable of accurately distinguishing these conditions, thereby enhancing early detection and clinical management. Our study, therefore, presents the first AI system that leverages dual acoustic signals to enhance the diagnostic accuracy (ACC) of asthma using automated, lightweight deep learning models. Methods: To build an automated, lightweight model for asthma detection, tested separately with respiratory and cough sounds to assess their suitability for detecting asthma and COPD, the proposed AI models integrate the following ML algorithms: RF, SVM, DT, NN, and KNN, with the overall aim of demonstrating the efficacy of the proposed method for future clinical use. Model training and validation were performed using 5-fold cross-validation, wherein the dataset was randomly divided into five folds and the models were trained and tested iteratively to ensure robust performance. We evaluated the model outcomes with several performance metrics: ACC, precision, recall, F1 score, and the area under the ROC curve (AUC). Additionally, a majority voting ensemble technique was employed to aggregate the predictions of the various classifiers for improved diagnostic reliability. We applied the Gabor time–frequency transformation for feature extraction and neighborhood component analysis (NCA) for feature selection to optimise predictive accuracy. Independent comparative experiments were conducted, where cough-sound subsets were used to evaluate asthma detection capabilities, and respiratory-sound subsets were used to evaluate COPD detection capabilities, allowing for targeted model assessment. Results: The proposed ensemble approach, using majority voting across the classifiers, achieved ACC values of 94.05% and 83.31% for differentiating between asthma and normal cases utilising separate respiratory sounds and cough sounds, respectively. The results highlight a substantial benefit in integrating multiple classifier models and sound modalities while demonstrating an unprecedented level of ACC and robustness for future diagnostic predictions of the disease. Conclusions: The present study sets a new benchmark in AI-based detection of respiratory diseases by integrating cough and respiratory sound signals for future diagnostics. The successful implementation of a dual-sound analysis approach promises advancements in the early detection and management of asthma and COPD. We conclude that the proposed model holds strong potential to transform asthma diagnostic practices and support clinicians in respiratory healthcare. Full article
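The ensemble construction and 5-fold validation described in the Methods can be illustrated with scikit-learn; the synthetic feature matrix below stands in for the Gabor/NCA features, and the classifier hyperparameters are assumptions rather than the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Stand-in feature matrix; in the study these would be Gabor time-frequency
# features from cough or respiratory sound segments after NCA selection.
X, y = make_classification(n_samples=600, n_features=40, n_informative=15,
                           random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("nn", MLPClassifier(max_iter=1000, random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="hard",  # majority voting, as described in the abstract
)

# 5-fold cross-validation, mirroring the validation protocol above.
scores = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f}")
```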

33 pages, 3800 KiB  
Article
Adaptive Zero Trust Policy Management Framework in 5G Networks
by Abdulrahman K. Alnaim
Mathematics 2025, 13(9), 1501; https://doi.org/10.3390/math13091501 - 1 May 2025
Viewed by 95
Abstract
The rapid evolution and deployment of 5G networks have introduced complex security challenges due to their reliance on dynamic network slicing, ultra-low latency communication, decentralized architectures, and highly diverse use cases. Traditional perimeter-based security models are no longer sufficient in these highly fluid and distributed environments. In response to these limitations, this study introduces SecureChain-ZT, a novel Adaptive Zero Trust Policy Framework (AZTPF) that addresses emerging threats by integrating intelligent access control, real-time monitoring, and decentralized authentication mechanisms. SecureChain-ZT advances conventional Zero Trust Architecture (ZTA) by leveraging machine learning, reinforcement learning, and blockchain technologies to achieve autonomous policy enforcement and threat mitigation. Unlike static ZT models that depend on predefined rule sets, AZTPF continuously evaluates user and device behavior in real time, detects anomalies through AI-powered traffic analysis, and dynamically updates access policies based on contextual risk assessments. Comprehensive simulations and experiments demonstrate the robustness of the framework. SecureChain-ZT achieves an authentication accuracy of 97.8% and reduces unauthorized access attempts from 17.5% to just 2.2%. Its advanced detection capabilities achieve a threat detection accuracy of 99.3% and block 95.6% of attempted cyber intrusions. The implementation of blockchain-based identity verification reduces spoofing incidents by 97%, while microsegmentation limits lateral movement attacks by 75%. The proposed SecureChain-ZT model achieved an authentication accuracy of 98.6%, reduced false acceptance and rejection rates to 1.2% and 0.2% respectively, and improved policy update time to 180 ms. Compared to traditional models, the overall latency was reduced by 62.6%, and threat detection accuracy increased to 99.3%. These results highlight the model’s effectiveness in both cybersecurity enhancement and real-time service responsiveness. This research contributes to the advancement of Zero Trust security models by presenting a scalable, resilient, and adaptive policy enforcement framework that aligns with the demands of next-generation 5G infrastructures. The proposed SecureChain-ZT model not only enhances cybersecurity but also ensures service reliability and responsiveness in complex and mission-critical environments. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)
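The adaptive policy idea can be caricatured in a few lines; the signals, weights, and thresholds below are purely illustrative assumptions and not SecureChain-ZT's actual scoring or enforcement logic.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool   # e.g., via a blockchain-backed identity check
    anomaly_score: float      # 0..1 from a traffic-analysis model
    device_trust: float       # 0..1 device posture score
    slice_sensitivity: float  # 0..1, higher = more critical network slice

def contextual_risk(req: AccessRequest) -> float:
    """Toy contextual risk score combining the signals the framework describes;
    the weights are illustrative, not the paper's policy."""
    if not req.identity_verified:
        return 1.0
    return (0.5 * req.anomaly_score
            + 0.3 * (1 - req.device_trust)
            + 0.2 * req.slice_sensitivity)

def decide(req: AccessRequest, threshold: float = 0.4) -> str:
    risk = contextual_risk(req)
    if risk >= 0.8:
        return "deny"
    if risk >= threshold:
        return "step-up-authentication"   # re-verify before granting access
    return "allow"

print(decide(AccessRequest(True, 0.1, 0.9, 0.3)))   # allow
print(decide(AccessRequest(True, 0.7, 0.4, 0.9)))   # step-up-authentication
```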
16 pages, 3729 KiB  
Article
Stuck Pipe Detection in Oil and Gas Drilling Operations Using Deep Learning Autoencoder for Anomaly Diagnosis
by Hasan N. Al-Mamoori, Jialin Tian and Haifeng Ma
Appl. Sci. 2025, 15(9), 5042; https://doi.org/10.3390/app15095042 - 1 May 2025
Viewed by 294
Abstract
Stuck pipe events remain a critical challenge in oil and gas drilling operations, leading to increased non-productive time and substantial financial losses. Traditional detection methods rely on manual monitoring and expert judgment, which are prone to delays and human error. This study proposes a deep learning autoencoder-based anomaly diagnosis approach to enhance the detection of stuck pipe incidents. Using high-resolution time series drilling data from the Volve field, a deep learning autoencoder model was trained exclusively on normal drilling conditions to learn operational patterns and detect deviations indicative of stuck pipe events. The proposed model leverages reconstruction error as an anomaly detection metric, effectively distinguishing between normal and stuck cases. The results demonstrate that the model achieves a detection accuracy of 99.06%, with an area under the receiver operating characteristic curve (AUC) of 0.958. Additionally, the model attained a precision of 97.12%, a recall of 91.34%, and an F1-score of 94.15%, significantly reducing false positives and false negatives. The findings highlight the potential of deep learning-based approaches in improving real-time anomaly detection, offering a scalable and cost-effective solution for mitigating drilling disruptions. This research contributes to advancing intelligent monitoring systems in the oil and gas industry, reducing operational risks, and enhancing drilling efficiency. Full article
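A minimal version of the reconstruction-error approach, with a synthetic stand-in for the Volve drilling channels and an assumed percentile threshold (the authors' architecture and threshold choice may differ):

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in for scaled drilling channels (hook load, torque,
# standpipe pressure, ...); the study uses Volve field time series instead.
n_features = 12
normal = np.random.default_rng(0).normal(size=(5000, n_features)).astype("float32")

# Small dense autoencoder trained on normal operation only.
autoencoder = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(4, activation="relu"),      # bottleneck
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(n_features, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=20, batch_size=64, verbose=0)

# Anomaly score = reconstruction error; the threshold here is a simple
# percentile of the training error, one common heuristic.
train_err = np.mean((autoencoder.predict(normal, verbose=0) - normal) ** 2, axis=1)
threshold = np.percentile(train_err, 99)

def is_stuck_pipe_candidate(window: np.ndarray) -> bool:
    err = np.mean((autoencoder.predict(window[None, :], verbose=0) - window) ** 2)
    return err > threshold
```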

16 pages, 1591 KiB  
Article
Cereal and Rapeseed Yield Forecast in Poland at Regional Level Using Machine Learning and Classical Statistical Models
by Edyta Okupska, Dariusz Gozdowski, Rafał Pudełko and Elżbieta Wójcik-Gront
Agriculture 2025, 15(9), 984; https://doi.org/10.3390/agriculture15090984 - 1 May 2025
Viewed by 135
Abstract
This study performed in-season yield prediction, about 2–3 months before the harvest, for cereals and rapeseed at the province level in Poland for 2009–2024. Various models were employed, including machine learning algorithms and multiple linear regression. The satellite-derived normalized difference vegetation index (NDVI) and climatic water balance (CWB), calculated using meteorological data, were treated as predictors of crop yield. The accuracy of the models was compared to identify the optimal approach. The strongest correlation coefficients with crop yield were observed for the NDVI at the beginning of March, ranging from 0.454 for rapeseed to 0.503 for rye. Depending on the crop, the highest R2 values were observed for different prediction models, ranging from 0.654 for rapeseed based on the random forest model to 0.777 for basic cereals based on linear regression. The random forest model was best for rapeseed yield, while for cereal, the best prediction was observed for multiple linear regression or neural network models. For the studied crops, all models had mean absolute errors and root mean squared errors not exceeding 6 dt/ha, which is relatively small because it is under 20% of the mean yield. For the best models, in most cases, relative errors were not higher than 10% of the mean yield. The results proved that linear regression and machine learning models are characterized by similar predictions, likely due to the relatively small sample size (256 observations). Full article
(This article belongs to the Section Digital Agriculture)
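The modeling comparison can be sketched as follows; the 256-row table below is synthetic and merely mirrors the structure of the real NDVI/CWB predictors and dt/ha yields.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

# Hypothetical province-year table; the real predictors are early-season NDVI
# and climatic water balance (CWB), and the target is yield in dt/ha.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ndvi_march": rng.uniform(0.2, 0.8, 256),
    "cwb_spring": rng.normal(0, 50, 256),
})
df["yield_dt_ha"] = (20 + 30 * df["ndvi_march"] + 0.05 * df["cwb_spring"]
                     + rng.normal(0, 3, 256))

X, y = df[["ndvi_march", "cwb_spring"]], df["yield_dt_ha"]
for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    pred = cross_val_predict(model, X, y, cv=5)
    print(f"{name}: MAE = {mean_absolute_error(y, pred):.2f} dt/ha")
```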

14 pages, 1656 KiB  
Article
A Hybrid Learning Framework for Enhancing Bridge Damage Prediction
by Amal Abdulbaqi Maryoosh, Saeid Pashazadeh and Pedram Salehpour
Appl. Syst. Innov. 2025, 8(3), 61; https://doi.org/10.3390/asi8030061 - 30 Apr 2025
Viewed by 114
Abstract
Bridges are crucial structures for transportation networks, and their structural integrity is paramount. Deterioration and damage to bridges can lead to significant economic losses, traffic disruptions, and, in severe cases, loss of life. Traditional methods of bridge damage detection, often relying on visual inspections, can be challenging or impossible to apply in hard-to-access areas such as roofs, corners, and high elevations. Therefore, there is a pressing need for automated and accurate techniques for bridge damage detection. This study aims to propose a novel method for bridge crack detection that leverages a hybrid supervised and unsupervised learning strategy. The proposed approach combines the pixel-level local binary pattern (LBP) descriptor with mid-level bag-of-visual-words (BoVW) features for feature extraction, followed by the Apriori algorithm for dimensionality reduction and optimal feature selection. A MobileNet model is then trained on the selected features. The proposed model demonstrates exceptional performance, achieving accuracy rates ranging from 98.27% to 100%, with error rates between 1.73% and 0% across multiple bridge damage datasets. This study contributes a reliable hybrid learning framework for minimizing error rates in bridge damage detection, showcasing the potential of combining LBP–BoVW features with MobileNet for image-based classification tasks. Full article
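A small sketch of the pixel-level LBP descriptor that opens this pipeline (illustrative only; the BoVW, Apriori, and MobileNet stages are not shown, and the LBP parameters are assumptions):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image: np.ndarray, n_points: int = 8, radius: int = 1):
    """Pixel-level LBP codes summarized as a normalized histogram; in the
    pipeline above such descriptors feed the BoVW / Apriori stages before
    MobileNet classification."""
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2                       # uniform patterns + "other"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

# Illustrative use on a random grayscale patch standing in for a bridge image.
patch = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(np.uint8)
print(lbp_histogram(patch))
```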

28 pages, 15125 KiB  
Article
Detection of Agricultural Terraces Platforms Using Machine Learning from Orthophotos and LiDAR-Based Digital Terrain Model: A Case Study in Roya Valley of Southeast France
by Michael Vincent Tubog, Karine Emsellem and Stephane Bouissou
Land 2025, 14(5), 962; https://doi.org/10.3390/land14050962 - 29 Apr 2025
Viewed by 307
Abstract
Terraces have long transformed steep slopes into gradual steps, reducing erosion and enabling agriculture on marginal land. In France’s Roya Valley, these dry stone structures, neglected for decades, demonstrated remarkable resilience during storm Alex in October 2020. This prompted civil society and researchers to identify terraces that could support food security and agri-tourism initiatives. This study aimed to develop a semi-automatic method for detecting and mapping terraced areas using LiDAR and orthophoto data from French repositories, processed with GIS and analyzed through a Support Vector Machine (SVM) classification algorithm. The model identified 18 terraces larger than 1 hectare in Saorge and 35 in La Brigue. Field visits confirmed evidence of abandonment in several areas. Accuracy tests showed a user accuracy (UA) of 97% in Saorge and 72% in La Brigue. This disparity reflects site-specific differences, including terrain steepness, vegetation density, and data resolution. These results highlight the value of machine learning for terrace mapping while emphasizing the need to account for local geomorphological and data-quality factors to improve model performance. Enhanced terrace detection supports sustainable land management, agricultural revitalization, and risk mitigation in mountainous regions, offering practical tools for future landscape restoration and food resilience planning. Full article
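A schematic of the classification and accuracy assessment, with synthetic terrain features standing in for the LiDAR- and orthophoto-derived ones; user accuracy is computed from the confusion matrix as the share of pixels mapped as terrace that really are terrace (the 97% and 72% figures above).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical per-pixel (or per-segment) features derived from the LiDAR DTM
# and orthophotos: slope, curvature, roughness, and a brightness band.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 2000) > 0).astype(int)  # 1 = terrace

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

# User accuracy for the terrace class = TP / (TP + FP).
cm = confusion_matrix(y_te, clf.predict(X_te))
tp, fp = cm[1, 1], cm[0, 1]
print(f"user accuracy (terrace): {tp / (tp + fp):.2f}")
```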

25 pages, 7617 KiB  
Article
Optimization of Hydronic Heating System in a Commercial Building: Application of Predictive Control with Limited Data
by Rana Loubani, Didier Defer, Ola Alhaj-Hasan and Julien Chamoin
Energies 2025, 18(9), 2260; https://doi.org/10.3390/en18092260 - 29 Apr 2025
Viewed by 124
Abstract
Optimizing building equipment control is crucial for enhancing energy efficiency. This article presents a predictive control applied to a commercial building heated by a hydronic system, comparing its performance to a traditional heating curve-based strategy. The approach is developed and validated using TRNSYS18 modeling, which allows for comparison of the control methods under the same weather boundary conditions. The proposed strategy balances energy consumption and indoor thermal comfort. It aims to optimize the control of the secondary heating circuit’s water setpoint temperature, so it is not the boiler supply water temperature that is optimized, but rather the temperature of the water that feeds the radiators. Limited data poses challenges for capturing system dynamics, addressed through a black-box approach combining two machine learning models: an artificial neural network predicts indoor temperature, while a support vector machine estimates gas consumption. Incorporating weather forecasts, occupancy scenarios, and comfort requirements, a genetic algorithm identifies optimal hourly setpoints. This work demonstrates the possibility of creating sufficiently accurate models for this type of application using limited data. It offers a simplified and efficient optimization approach to heat control in such buildings. The case study results show energy savings up to 30% compared to a traditional control method. Full article
(This article belongs to the Special Issue Optimizing Energy Efficiency and Thermal Comfort in Building)
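A toy version of the optimization loop: a genetic algorithm searches over 24 hourly water-setpoint values, scoring each candidate with stand-in surrogate models for indoor temperature and gas use (the real ANN and SVM surrogates, comfort limits, and penalty weights would replace the placeholders below).

```python
import numpy as np

rng = np.random.default_rng(0)
HOURS, POP, GENS = 24, 40, 60
COMFORT_MIN, LAMBDA = 20.0, 0.5            # deg C and comfort penalty weight

# Stand-ins for the two surrogate models: an ANN-like indoor-temperature
# predictor and an SVM-like gas-consumption estimator (both faked here).
def predict_indoor_temp(setpoints):        # deg C per hour
    return 15.0 + 0.15 * setpoints
def predict_gas_use(setpoints):            # arbitrary energy units
    return np.sum(np.maximum(setpoints - 20.0, 0.0))

def fitness(setpoints):
    discomfort = np.sum(np.maximum(COMFORT_MIN - predict_indoor_temp(setpoints), 0.0))
    return predict_gas_use(setpoints) + LAMBDA * discomfort   # lower is better

# Simple genetic algorithm: tournament selection, uniform crossover, mutation.
pop = rng.uniform(25.0, 60.0, size=(POP, HOURS))    # candidate setpoint schedules
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[np.argmin(scores)]]              # elitism
    while len(new_pop) < POP:
        a = pop[rng.choice(POP, 2, replace=False)]
        b = pop[rng.choice(POP, 2, replace=False)]
        p1, p2 = min(a, key=fitness), min(b, key=fitness)
        mask = rng.random(HOURS) < 0.5
        child = np.where(mask, p1, p2) + rng.normal(0, 1.0, HOURS)
        new_pop.append(np.clip(child, 25.0, 60.0))
    pop = np.array(new_pop)

best = pop[np.argmin([fitness(ind) for ind in pop])]
print(np.round(best, 1))   # optimized hourly water setpoint schedule (toy)
```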
