Search Results (549)

Search Parameters:
Keywords = manual handling

18 pages, 4559 KB  
Article
Automating Leaf Area Measurement in Citrus: The Development and Validation of a Python-Based Tool
by Emilio Suarez, Manuel Blaser and Mary Sutton
Appl. Sci. 2025, 15(17), 9750; https://doi.org/10.3390/app15179750 - 5 Sep 2025
Abstract
Leaf area is a critical trait in plant physiology and agronomy, yet conventional measurement approaches such as those using ImageJ remain labor-intensive, user-dependent, and difficult to scale for high-throughput phenotyping. To address these limitations, we developed a fully automated, open-source Python tool for quantifying citrus leaf area from scanned images using multi-mask HSV segmentation, contour-hierarchy filtering, and batch calibration. The tool was validated against ImageJ across 11 citrus cultivars (n = 412 leaves), representing a broad range of leaf sizes and morphologies. Agreement between methods was near perfect, with correlation coefficients exceeding 0.997, mean bias within ±0.14 cm2, and error rates below 2.5%. Bland–Altman analysis confirmed narrow limits of agreement (±0.3 cm2) while scatter plots showed robust performance across both small and large leaves. Importantly, the Python tool successfully handled challenging imaging conditions, including low-contrast leaves and edge-aligned specimens, where ImageJ required manual intervention. Processing efficiency was markedly improved, with the full dataset analyzed in 7 s compared with over 3 h using ImageJ, representing a >1600-fold speed increase. By eliminating manual thresholding and reducing user variability, this tool provides a reliable, efficient, and accessible framework for high-throughput leaf area quantification, advancing reproducibility and scalability in digital phenotyping. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Precision Agriculture)
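The batch calibration step described above boils down to converting foreground pixel counts into physical area using the scan resolution. A minimal sketch of that conversion, assuming NumPy and a boolean leaf mask (this is an illustration, not the published tool):

```python
import numpy as np

def leaf_area_cm2(mask: np.ndarray, dpi: float) -> float:
    """Convert a boolean leaf mask from a flatbed scan into area in cm^2.

    Each pixel of a scan at `dpi` dots per inch covers (2.54 / dpi)^2 cm^2,
    so leaf area is the foreground pixel count times that factor.
    """
    pixel_side_cm = 2.54 / dpi          # 1 inch = 2.54 cm
    return float(mask.sum()) * pixel_side_cm ** 2

# Toy example: a 300-dpi scan with a 300x300-pixel square "leaf"
mask = np.zeros((600, 600), dtype=bool)
mask[100:400, 100:400] = True           # 90,000 foreground pixels = 1 sq inch
area = leaf_area_cm2(mask, dpi=300.0)
```

In the published tool this step follows HSV segmentation and contour filtering; the sketch only shows the calibration arithmetic.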

33 pages, 21287 KB  
Article
Interactive, Shallow Machine Learning-Based Semantic Segmentation of 2D and 3D Geophysical Data from Archaeological Sites
by Lieven Verdonck, Michel Dabas and Marc Bui
Remote Sens. 2025, 17(17), 3092; https://doi.org/10.3390/rs17173092 - 4 Sep 2025
Abstract
In recent decades, technological developments in archaeological geophysics have led to growing data volumes, so that an important bottleneck is now at the stage of data interpretation. The manual delineation and classification of anomalies are time-consuming, and different methods for (semi-)automatic image segmentation have been proposed, based on explicitly formulated rulesets or deep convolutional neural networks (DCNNs). So far, these have not been used widely in archaeological geophysics because of the complexity of the segmentation task (due to the low contrast between archaeological structures and background and the low predictability of the targets). Techniques based on shallow machine learning (e.g., random forests, RFs) have been explored very little in archaeological geophysics, although they are less case-specific than most rule-based methods, do not require large training sets as is the case for DCNNs, and can easily handle 3D data. In this paper, we show their potential for geophysical data analysis. For the classification on the pixel level, we use ilastik, an open-source segmentation tool developed in medical imaging. Algorithms for object classification, manual reclassification, post-processing, vectorisation, and georeferencing were brought together in a Jupyter Notebook, available on GitHub (version 7.3.2). To assess the accuracy of the RF classification applied to geophysical datasets, we compare it with manual interpretation. A quantitative evaluation using the mean intersection over union metric results in scores of ~60%, which only slightly increase after the manual correction of the RF classification results. Remarkably, a similar score results from the comparison between independent manual interpretations. This observation illustrates that quantitative metrics are not a panacea for evaluating machine-generated geophysical data interpretation in archaeology, which is characterised by a significant degree of uncertainty. It also raises the question of how the semantic segmentation of geophysical data (whether carried out manually or with the aid of machine learning) can best be evaluated. Full article
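The evaluation above uses the mean intersection-over-union metric; as a rough illustration (assuming NumPy and integer label maps, not the authors' code), the score can be computed like this:

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> float:
    """Mean intersection-over-union across classes present in either map."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:                 # class absent from both maps: skip
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Two 2-class label maps that agree on 3 of 4 pixels
pred  = np.array([[0, 1], [1, 1]])
truth = np.array([[0, 1], [0, 1]])
score = mean_iou(pred, truth, n_classes=2)   # (0.5 + 2/3) / 2
```

As the abstract notes, even two careful manual interpretations score only ~60% against each other on this metric, so the number should be read with that baseline in mind.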

30 pages, 7088 KB  
Article
Cascade Hydropower Plant Operational Dispatch Control Using Deep Reinforcement Learning on a Digital Twin Environment
by Erik Rot Weiss, Robert Gselman, Rudi Polner and Riko Šafarič
Energies 2025, 18(17), 4660; https://doi.org/10.3390/en18174660 - 2 Sep 2025
Viewed by 159
Abstract
In this work, we propose the use of a reinforcement learning (RL) agent for the control of a cascade hydropower plant system. Generally, this job is handled by power plant dispatchers who manually adjust power plant electricity production to meet the changing demand set by energy traders. This work addresses a more fundamental problem of cascade hydropower plant operation: flow control for power production in a highly nonlinear setting, studied on a data-based digital twin. Using deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), soft actor-critic (SAC), and proximal policy optimization (PPO) algorithms, we can generalize the characteristics of the system and approach a human dispatcher’s level of control of the entire system of eight hydropower plants on the river Drava in Slovenia. An RL agent that makes decisions similar to a human dispatcher is of interest not only for control but also for long-term decision-making analysis in an ever-changing energy portfolio. The specific novelty of this work is in training an RL agent on an accurate testing environment of eight real-world cascade hydropower plants on the river Drava in Slovenia and comparing the agent’s performance to human dispatchers. The results show that the RL agent’s absolute mean error of 7.64 MW is comparable to the general human dispatcher’s absolute mean error of 5.8 MW at a peak installed power of 591.95 MW. Full article

22 pages, 261573 KB  
Article
A Continuous Low-Rank Tensor Approach for Removing Clouds from Optical Remote Sensing Images
by Dong-Lin Sun, Teng-Yu Ji, Siying Li and Zirui Song
Remote Sens. 2025, 17(17), 3001; https://doi.org/10.3390/rs17173001 - 28 Aug 2025
Viewed by 498
Abstract
Optical remote sensing images are often partially obscured by clouds due to the inability of visible light to penetrate cloud cover, which significantly limits their subsequent applications. Most existing cloud removal methods formulate the problem using low-rank and sparse priors within a discrete representation framework. However, these approaches typically rely on manually designed regularization terms, which fail to accurately capture the complex geostructural patterns in remote sensing imagery. In response to this issue, we develop a continuous blind cloud removal model. Specifically, the cloud-free component is represented using a continuous tensor function that integrates implicit neural representations with low-rank tensor decomposition. This representation enables the model to capture both global correlations and local smoothness. Furthermore, a band-wise sparsity constraint is employed to represent the cloud component. To preserve the information in regions not covered by clouds during reconstruction, a box constraint is incorporated. In this constraint, cloud detection is performed using an adaptive thresholding strategy, and a morphological erosion function is employed to ensure accurate detection of cloud boundaries. To efficiently handle the developed model, we formulate an alternating minimization algorithm that decouples the optimization into three interpretable subproblems: cloud-free reconstruction, cloud component estimation, and cloud detection. Our extensive evaluations on both synthetic and real-world data reveal that the proposed method performs competitively against state-of-the-art cloud removal methods. Full article
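The band-wise sparsity subproblem in an alternating scheme like the one above is commonly handled with a soft-thresholding (proximal) step on the cloud residual. A generic sketch of that operator with NumPy, not the authors' exact formulation:

```python
import numpy as np

def soft_threshold(x: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of lam * ||x||_1: shrink each entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# One sparsity step: given the current cloud-free estimate, the residual
# is shrunk so that only strong (cloud-like) deviations survive.
obs   = np.array([3.0, -0.2, 0.5, -2.0])  # observation (one band, flattened)
clean = np.array([1.0, -0.1, 0.4, -0.5])  # current cloud-free estimate
cloud = soft_threshold(obs - clean, lam=0.3)
```

Small residuals (sensor noise, texture) are zeroed out, while large positive deviations are kept as the estimated cloud component.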

27 pages, 5369 KB  
Article
High-Performance Automated Detection of Sheep Binocular Eye Temperatures and Their Correlation with Rectal Temperature
by Yadan Zhang, Ying Han, Xiaocong Li, Xueting Zeng, Waleid Mohamed EL-Sayed Shakweer, Gang Liu and Jun Wang
Animals 2025, 15(17), 2475; https://doi.org/10.3390/ani15172475 - 22 Aug 2025
Viewed by 383
Abstract
Although rectal temperature is reliable, its measurement requires manual handling and causes stress to animals. Infrared thermography (IRT) provides a non-contact alternative but often ignores bilateral eye temperature differences. This study presents an E-S-YOLO11n model for the automated detection of the binocular regions of sheep, which achieves remarkable performance with a precision of 98.2%, recall of 98.5%, mAP@0.5 of 99.40%, F1 score of 98.35%, FPS of 322.58 frame/s, parameters of 7.27 M, model size of 3.97 MB, and GFLOPs of 1.38. Right and left eye temperatures exhibit a strong correlation (r = 0.8076, p < 0.0001). However, the eye temperatures show only very weak correlation with rectal temperature (right eye: r = 0.0852; left eye: r = −0.0359), and neither reaches statistical significance. Rectal temperature is 7.37% and 7.69% higher than the right and left eye temperatures, respectively. Additionally, the right eye temperature is slightly higher than the left (p < 0.01). The study demonstrates the feasibility of combining IRT and deep learning for non-invasive eye temperature monitoring, although environmental factors may limit eye temperature as a proxy for rectal temperature. These results support the development of efficient thermal monitoring tools for precision animal husbandry. Full article

17 pages, 2751 KB  
Article
Joint Extraction of Cyber Threat Intelligence Entity Relationships Based on a Parallel Ensemble Prediction Model
by Huan Wang, Shenao Zhang, Zhe Wang, Jing Sun and Qingzheng Liu
Sensors 2025, 25(16), 5193; https://doi.org/10.3390/s25165193 - 21 Aug 2025
Viewed by 577
Abstract
The construction of knowledge graphs in cyber threat intelligence (CTI) critically relies on automated entity–relation extraction. However, sequence tagging-based methods for joint entity–relation extraction are affected by the order-dependency problem. As a result, overlapping relations are handled ineffectively. To address this limitation, a parallel, ensemble-prediction–based model is proposed for joint entity–relation extraction in CTI. The joint extraction task is reformulated as an ensemble prediction problem. A joint network that combines Bidirectional Encoder Representations from Transformers (BERT) with a Bidirectional Gated Recurrent Unit (BiGRU) is constructed to capture deep contextual features in sentences. An ensemble prediction module and a triad representation of entity–relation facts are designed for joint extraction. A non-autoregressive decoder is employed to generate relation triad sets in parallel, thereby avoiding unnecessary sequential constraints during decoding. In the threat intelligence domain, labeled data are scarce and manual annotation is costly. To mitigate these constraints, the SecCti dataset is constructed by leveraging ChatGPT’s small-sample learning capability for labeling and augmentation. This approach reduces annotation costs effectively. Experimental results show a 4.6% absolute F1 improvement over the baseline on joint entity–relation extraction for threat intelligence concerning Advanced Persistent Threats (APTs) and cybercrime activities. Full article
(This article belongs to the Section Sensor Networks)

19 pages, 1706 KB  
Article
Hybrid Resource Quota Scaling for Kubernetes-Based Edge Computing Systems
by Minh-Ngoc Tran and Younghan Kim
Electronics 2025, 14(16), 3308; https://doi.org/10.3390/electronics14163308 - 20 Aug 2025
Viewed by 408
Abstract
In the Kubernetes edge computing environment, Resource Quota plays a vital role in efficient limited resource management because it defines the maximum resources that each service or tenant can use. Therefore, when edge nodes serve multiple services simultaneously, resource quota prevents any single service from monopolizing resources. However, the manual resource quota configuration mechanism in current Kubernetes-based management platforms is not dynamic enough to handle fluctuating resource demands of services over time. Slow quota extension during surge traffic prevents scaling up necessary pods and degrades service performance, while over-allocating quotas during light traffic might occupy valuable resources that other services may need. This study proposes a Dynamic Resource Quota Auto-scaling Framework, combining proactive scaling based on workload predictions with reactive mechanisms to handle both inaccurate predictions and unforeseeable events. This framework not only optimizes resource allocation but also maintains stable performance, reduces deployment failures, and prevents over-allocation during scaling in high-demand periods. Full article
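The proactive-plus-reactive combination described above can be sketched as a single quota decision per tenant. All names and factors here (`headroom`, `burst`) are illustrative assumptions, not the paper's framework:

```python
def next_quota(predicted: float, used: float, limit_cap: float,
               headroom: float = 0.2, burst: float = 0.5) -> float:
    """Pick the next resource quota (e.g., CPU cores) for one tenant.

    Proactive term: predicted demand plus a safety headroom.
    Reactive term: scale from observed usage (burst factor), which takes
    over when the prediction underestimates a surge.
    The result is capped at node capacity so no tenant monopolizes the edge.
    """
    proactive = predicted * (1.0 + headroom)
    reactive = used * (1.0 + burst)
    return min(max(proactive, reactive), limit_cap)

# Prediction underestimates a surge: the reactive term takes over.
quota = next_quota(predicted=2.0, used=3.0, limit_cap=8.0)  # -> 4.5
```

In Kubernetes terms, the result would be written back into a `ResourceQuota` object's hard limits; the sketch only shows the decision logic.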

24 pages, 4431 KB  
Article
Fault Classification in Power Transformers Using Dissolved Gas Analysis and Optimized Machine Learning Algorithms
by Vuyani M. N. Dladla and Bonginkosi A. Thango
Machines 2025, 13(8), 742; https://doi.org/10.3390/machines13080742 - 20 Aug 2025
Viewed by 413
Abstract
Power transformers are critical assets in electrical power systems, yet their fault diagnosis often relies on conventional dissolved gas analysis (DGA) methods such as the Duval Pentagon and Triangle, Key Gas, and Rogers Ratio methods. Even though these methods are commonly used, they present limitations in classification accuracy, concurrent fault identification, and manual sample handling. In this study, a framework of optimized machine learning algorithms that integrates Chi-squared statistical feature selection with Random Search hyperparameter optimization algorithms was developed to enhance transformer fault classification accuracy using DGA data, thereby addressing the limitations of conventional methods and improving diagnostic precision. Utilizing the R2024b MATLAB Classification Learner App, five optimized machine learning algorithms were trained and tested using 282 transformer oil samples with varying DGA gas concentrations obtained from industrial transformers, the IEC TC10 database, and the literature. The optimized and assessed models are Linear Discriminant, Naïve Bayes, Decision Trees, Support Vector Machine, Neural Networks, k-Nearest Neighbor, and the Ensemble Algorithm. From the proposed models, the best performing algorithm, Optimized k-Nearest Neighbor, achieved an overall performance accuracy of 92.478%, followed by the Optimized Neural Network at 89.823%. To assess their performance against the conventional methods, the same dataset used for the optimized machine learning algorithms was used to evaluate the performance of the Duval Triangle and Duval Pentagon methods using VAISALA DGA software version 1.1.0; the proposed models outperformed the conventional methods, which could only achieve a classification accuracy of 35.757% and 30.818%, respectively. This study concludes that the application of the proposed optimized machine learning algorithms can enhance the classification accuracy of DGA-based faults in power transformers, supporting more reliable diagnostics and proactive maintenance strategies. Full article
(This article belongs to the Section Electrical Machines and Drives)

19 pages, 5949 KB  
Article
Integrating a Soft Pneumatic Gripper in a Robotic System for High-Speed Stable Handling of Raw Oysters
by Yang Zhang and Zhongkui Wang
Foods 2025, 14(16), 2875; https://doi.org/10.3390/foods14162875 - 19 Aug 2025
Viewed by 393
Abstract
Pick-and-place handling of aquatic products (e.g., raw oysters) in packing processing remains manual, despite advances in soft robotic grippers and robotic systems that offer a path to automation in food production lines. In this study, we focused on automating raw-oyster handling with a robotic system equipped with a soft robotic gripper. However, raw oysters are fragile and prone to damage during robotic handling, while high-speed handling generates inertial effects. Minimizing the grasping force is thus essential to protect raw oysters, while grasping stability must be preserved. To address this, the study introduces and validates a robotic system equipped with a soft pneumatic gripper for raw-oyster handling in food production lines. Finite element analysis (FEA) was employed to examine the effect of gripper actuation pressure on finger deflection and grasping force, revealing a trade-off: increasing actuation pressure improves stability but raises grasping force, whereas reducing actuation pressure causes excessive swing and tossing. An optimal actuation pressure of the soft gripper was identified that balances grasping stability and oyster integrity, minimizing swing while preventing excessive grasping force. Handling performance of the robotic system was experimentally evaluated with raw oysters under different actuation pressures and oyster orientations. Under the optimal actuation pressure confirmed in FEA, the robotic system achieved a handling success rate of 100% (15/15) without obvious misalignment or damage to the raw oysters, confirming its suitability for high-speed, stable handling. This study offers a reference for robotic systems handling fragile aquatic products and indicates that an optimal actuation pressure can protect such products during robotic handling, thereby facilitating the automation of aquatic product processing. Full article
(This article belongs to the Section Food Systems)

22 pages, 5941 KB  
Article
Explainable AI Methods for Identification of Glue Volume Deficiencies in Printed Circuit Boards
by Theodoros Tziolas, Konstantinos Papageorgiou, Theodosios Theodosiou, Dimosthenis Ioannidis, Nikolaos Dimitriou, Gregory Tinker and Elpiniki Papageorgiou
Appl. Sci. 2025, 15(16), 9061; https://doi.org/10.3390/app15169061 - 17 Aug 2025
Viewed by 1112
Abstract
In printed circuit board (PCB) assembly, the volume of dispensed glue is closely related to the PCB’s durability, production costs, and the overall product reliability. Currently, quality inspection is performed manually by operators, inheriting the limitations of human-performed procedures. To address this, we propose an automatic optical inspection framework that utilizes convolutional neural networks (CNNs) and post-hoc explainable methods. Our methodology handles glue quality inspection as a three-fold procedure. Initially, a detection system based on CenterNet MobileNetV2 is developed to localize PCBs, thereby offering a flexible, lightweight tool for targeting and cropping regions of interest. Subsequently, a CNN is proposed to classify PCB images into three classes based on the placed glue volume, achieving 92.2% accuracy. This classification step ensures that varying glue volumes are accurately assessed, addressing potential quality issues that appear early in the production process. Finally, the Deep SHAP and Grad-CAM methods are applied to the CNN classifier to produce explanations of the decision making and further increase the interpretability of the proposed approach, targeting human-centered artificial intelligence. These post-hoc explainable methods provide visual explanations of the model’s decision-making process, offering insights into which features and regions contribute to each classification decision. The proposed method is validated with real industrial data, demonstrating its practical applicability and robustness. The evaluation procedure indicates that the proposed framework offers increased accuracy, low latency, and high-quality visual explanations, thereby strengthening quality assurance in PCB manufacturing. Full article
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))

43 pages, 5258 KB  
Article
Twin Self-Supervised Learning Framework for Glaucoma Diagnosis Using Fundus Images
by Suguna Gnanaprakasam and Rolant Gini John Barnabas
Appl. Syst. Innov. 2025, 8(4), 111; https://doi.org/10.3390/asi8040111 - 11 Aug 2025
Viewed by 438
Abstract
Glaucoma is a serious eye condition that damages the optic nerve and affects the transmission of visual information to the brain. It is the second leading cause of blindness worldwide. With deep learning, CAD systems have shown promising results in diagnosing glaucoma but mostly rely on small labeled datasets. Annotated fundus image datasets improve deep learning predictions by aiding pattern identification but require extensive curation. In contrast, unlabeled fundus images are more accessible. The proposed method employs a semi-supervised learning approach to utilize both labeled and unlabeled data effectively. It follows traditional supervised training with the generation of pseudo-labels for unlabeled data, and incorporates self-supervised techniques that eliminate the need for manual annotation. It uses a twin self-supervised learning approach to improve glaucoma diagnosis by integrating pseudo-labels from one model into another self-supervised model for effective detection. The self-supervised patch-based exemplar CNN generates pseudo-labels in the first stage. These pseudo-labeled data, combined with labeled data, train a convolutional auto-encoder classification model in the second stage to identify glaucoma features. A support vector machine classifier handles the final classification of glaucoma in the model, achieving 98% accuracy and 0.98 AUC on the internal, same-source combined fundus image datasets. The model also maintains reasonably good generalization to external (fully unseen) data, achieving an AUC of 0.91 on the CRFO dataset and 0.87 on the Papilla dataset. These results demonstrate the method’s effectiveness, robustness, and adaptability in addressing limited labeled fundus data, supporting improved health and lifestyle. Full article
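The first-stage pseudo-labeling step can be sketched as confidence-gated selection: only unlabeled images the first model classifies confidently are passed on as training data for the second model. The threshold and toy data below are illustrative, not the paper's values:

```python
def pseudo_label(probs, threshold=0.9):
    """Keep only predictions the first-stage model is confident about.

    `probs` is a list of per-class probability vectors (one per unlabeled
    image); returns (index, argmax class) pairs passing the threshold.
    """
    labeled = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            labeled.append((i, p.index(conf)))
    return labeled

# Three unlabeled fundus images; only the first and last are confident enough.
probs = [[0.95, 0.05], [0.6, 0.4], [0.08, 0.92]]
pairs = pseudo_label(probs)   # [(0, 0), (2, 1)]
```

The accepted pairs would then be mixed with the labeled set to train the second-stage auto-encoder classifier.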

20 pages, 1735 KB  
Article
Multilingual Named Entity Recognition in Arabic and Urdu Tweets Using Pretrained Transfer Learning Models
by Fida Ullah, Muhammad Ahmad, Grigori Sidorov, Ildar Batyrshin, Edgardo Manuel Felipe Riverón and Alexander Gelbukh
Computers 2025, 14(8), 323; https://doi.org/10.3390/computers14080323 - 11 Aug 2025
Viewed by 458
Abstract
The increasing use of Arabic and Urdu on social media platforms, particularly Twitter, has created a growing need for robust Named Entity Recognition (NER) systems capable of handling noisy, informal, and code-mixed content. However, both languages remain significantly underrepresented in NER research, especially in social media contexts. To address this gap, this study makes four key contributions: (1) We introduced a manual entity consolidation step to enhance the consistency and accuracy of named entity annotations. In the original datasets, entities such as person names and organization names were often split into multiple tokens (e.g., first name and last name labeled separately). We manually refined the annotations to merge these segments into unified entities, ensuring improved coherence for both training and evaluation. (2) We selected two publicly available datasets from GitHub—one in Arabic and one in Urdu—and applied two novel strategies to tackle low-resource challenges: a joint multilingual approach and a translation-based approach. The joint approach involved merging both datasets to create a unified multilingual corpus, while the translation-based approach utilized automatic translation to generate cross-lingual datasets, enhancing linguistic diversity and model generalizability. (3) We presented a comprehensive and reproducible pseudocode-driven framework that integrates translation, manual refinement, dataset merging, preprocessing, and multilingual model fine-tuning. (4) We designed, implemented, and evaluated a customized XLM-RoBERTa model integrated with a novel attention mechanism, specifically optimized for the morphological and syntactic complexities of Arabic and Urdu. Based on the experiments, our proposed model (XLM-RoBERTa) achieves 0.98 accuracy across Arabic, Urdu, and multilingual datasets. While it shows a 7–8% improvement over traditional baselines (RF), it also achieves a 2.08% improvement over a deep learning baseline (BiLSTM = 0.96), highlighting the effectiveness of our cross-lingual, resource-efficient approach for NER in low-resource, code-mixed social media text. Full article
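Contribution (1), merging split name tokens into unified entities, can be sketched with BIO tags: consecutive B-X/I-X tokens collapse into one entity span. The tag scheme and example names below are hypothetical:

```python
def consolidate(tokens, tags):
    """Merge consecutive B-X/I-X tokens into single (text, type) entities."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:                       # flush the previous entity
                entities.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            current.append(tok)               # continue the current entity
        else:
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

# A first and last name originally annotated as two separate PER entities
# become one span once the surname is re-tagged as I-PER.
tokens = ["Ahmed", "Khan", "visited", "Lahore"]
tags = ["B-PER", "I-PER", "O", "B-LOC"]
ents = consolidate(tokens, tags)   # [("Ahmed Khan", "PER"), ("Lahore", "LOC")]
```

The same pass works for organization names split across tokens; evaluation then scores whole spans rather than fragments.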

21 pages, 2559 KB  
Article
A Shape-Aware Lightweight Framework for Real-Time Object Detection in Nuclear Medicine Imaging Equipment
by Weiping Jiang, Guozheng Xu and Aiguo Song
Appl. Sci. 2025, 15(16), 8839; https://doi.org/10.3390/app15168839 - 11 Aug 2025
Viewed by 399
Abstract
Manual calibration of nuclear medicine scanners currently relies on handling phantoms containing radioactive sources, exposing personnel to high radiation doses and elevating cancer risk. We designed an automated detection framework for robotic inspection on the YOLOv8n foundation. It pairs a lightweight backbone with a shape-aware geometric attention module and an anchor-free head. Facing a small training set, we produced extra images with a GAN and then fine-tuned a pretrained network on these augmented data. Evaluations on a custom dataset consisting of PET/CT gantry and table images showed that the SAM-YOLOv8n model achieved a precision of 93.6% and a recall of 92.8%. These results demonstrate fast, accurate, real-time detection, offering a safer and more efficient alternative to manual calibration of nuclear medicine equipment. Full article
(This article belongs to the Section Applied Physics General)

25 pages, 3724 KB  
Article
Research on Trajectory Tracking Control Method for Wheeled Robots Based on Seabed Soft Slopes on GPSO-MPC
by Dewei Li, Zizhong Zheng, Zhongjun Ding, Jichao Yang and Lei Yang
Sensors 2025, 25(16), 4882; https://doi.org/10.3390/s25164882 - 8 Aug 2025
Viewed by 393
Abstract
With advances in underwater exploration and intelligent ocean technologies, wheeled underwater mobile robots are increasingly used for seabed surveying, engineering, and environmental monitoring. However, complex terrains centered on seabed soft slopes—characterized by wheel slippage due to soil deformability and force imbalance arising from slope variations—pose challenges to the accuracy and robustness of trajectory tracking control systems. Model predictive control (MPC), known for predictive optimization and constraint handling, is commonly used in such tasks. Yet, its performance relies on manually tuned parameters and lacks adaptability to dynamic changes. This study introduces a hybrid grey wolf-particle swarm optimization (GPSO) algorithm, combining the exploratory ability of a grey wolf optimizer with the rapid convergence of particle swarm optimization. The GPSO algorithm adaptively tunes MPC parameters online to improve control. A kinematic model of a four-wheeled differential-drive robot is developed, and an MPC controller using error-state linearization is implemented. GPSO integrates hierarchical leadership and chaotic disturbance strategies to enhance global search and local convergence. Simulation experiments on circular and double-lane-change trajectories show that GPSO-MPC outperforms standard MPC and PSO-MPC in tracking accuracy, heading stability, and control smoothness. The results confirm the improved adaptability and robustness of the proposed method, supporting its effectiveness in dynamic underwater environments. Full article
(This article belongs to the Section Sensors and Robotics)
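The abstract above describes tuning MPC weights online with a hybrid grey wolf-particle swarm optimizer (GPSO). The paper's exact update rules are not given here, so the following is a minimal illustrative sketch of one common way to hybridize the two methods: each particle keeps a PSO-style velocity, but the social attraction pulls toward the mean of the three best wolves (alpha, beta, delta) as in GWO rather than a single global best. The objective function, coefficients, and bounds are placeholders, not the paper's.

```python
import numpy as np

def gpso(objective, dim, bounds, n_particles=20, iters=50, seed=0):
    """Hybrid grey wolf-particle swarm optimizer (illustrative sketch).

    PSO velocity update with a GWO-style hierarchical leader term:
    the social pull targets the centroid of the three best solutions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([objective(p) for p in pos])

    for t in range(iters):
        order = np.argsort(pbest_f)
        alpha, beta, delta = pbest[order[:3]]     # GWO leadership hierarchy
        w = 0.9 - 0.5 * t / iters                 # decaying inertia weight
        for i in range(n_particles):
            r1, r2 = rng.random(2)
            leader = (alpha + beta + delta) / 3.0
            vel[i] = (w * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])   # cognitive (PSO)
                      + 1.5 * r2 * (leader - pos[i]))    # social (GWO leaders)
            pos[i] = np.clip(pos[i] + vel[i], lo, hi)
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i].copy()
    best = np.argmin(pbest_f)
    return pbest[best], pbest_f[best]

# Toy usage: tune two hypothetical MPC weights against a surrogate
# tracking-cost landscape (a quadratic standing in for closed-loop error).
best_x, best_f = gpso(lambda x: (x[0] - 3) ** 2 + (x[1] - 1) ** 2,
                      dim=2, bounds=(0.0, 10.0))
```

In the paper's setting, `objective` would evaluate the simulated tracking error of the MPC controller under a candidate parameter set; here it is a stand-in.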
14 pages, 881 KB  
Article
Fine-Tuning BiomedBERT with LoRA and Pseudo-Labeling for Accurate Drug–Drug Interactions Classification
by Ioan-Flaviu Gheorghita, Vlad-Ioan Bocanet and Laszlo Barna Iantovics
Appl. Sci. 2025, 15(15), 8653; https://doi.org/10.3390/app15158653 - 5 Aug 2025
Abstract
In clinical decision support systems (CDSSs), where accurate classification of drug–drug interactions (DDIs) can directly affect treatment safety and outcomes, identifying drug interactions is a major challenge. This work introduces a scalable approach to DDI classification using a fine-tuned biomedical language model. The method uses BiomedBERT, a domain-specific version of bidirectional encoder representations from transformers (BERT) pre-trained on biomedical literature, and applies low-rank adaptation (LoRA) to reduce the resources needed during fine-tuning on the DrugBank dataset. The objective was to classify DDIs into two clinically distinct categories: synergistic and antagonistic interactions. A pseudo-labeling strategy was devised to compensate for the scarcity of labeled data. A curated ground-truth dataset was constructed from polarity-labeled interaction entries in DrugComb and verified DrugBank antagonism pairs, and the fine-tuned model then infers interaction types for the remaining unlabeled data. A checkpointing system saves predictions and confidence scores in small chunks, so the process can resume after interruption and is resilient to system crashes. The framework logs every prediction with its confidence score, allowing results to be revisited and refined later, either manually by experts or through automated updates with improved tools, without discarding low-confidence cases as traditional threshold-based methods often do. The approach was built with efficiency in mind and can handle large volumes of biomedical text without heavy computational demands. Rather than focusing on model novelty, this research demonstrates how existing biomedical transformers can be adapted to polarity-aware DDI classification with minimal computational overhead, emphasizing deployment feasibility and clinical relevance. Full article
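The checkpointed pseudo-labeling loop described in the abstract can be sketched as below. This is an illustrative reconstruction, not the authors' code: `predict` is a stand-in for the fine-tuned BiomedBERT classifier (it returns a label and a confidence for one interaction sentence), and the JSON Lines checkpoint file name and chunk size are arbitrary. Every prediction is logged with its confidence rather than being dropped below a cutoff, and progress is flushed in small chunks so an interrupted run can resume where it left off.

```python
import json
import os

def pseudo_label(texts, predict, ckpt_path="ddi_ckpt.jsonl", chunk=2):
    """Checkpointed pseudo-labeling loop (illustrative sketch)."""
    records = []
    if os.path.exists(ckpt_path):              # resume from prior checkpoint
        with open(ckpt_path) as f:
            records = [json.loads(line) for line in f]
    done = len(records)

    buf = []
    for i, text in enumerate(texts[done:], start=done):
        label, conf = predict(text)
        # Log every prediction with its confidence; nothing is discarded.
        buf.append({"idx": i, "text": text, "label": label, "confidence": conf})
        if len(buf) >= chunk:                  # flush a small chunk to disk
            with open(ckpt_path, "a") as f:
                for r in buf:
                    f.write(json.dumps(r) + "\n")
            records.extend(buf)
            buf = []
    if buf:                                    # flush any trailing partial chunk
        with open(ckpt_path, "a") as f:
            for r in buf:
                f.write(json.dumps(r) + "\n")
        records.extend(buf)
    return records

# Toy usage with a dummy keyword classifier standing in for BiomedBERT+LoRA.
toy = ["drug A potentiates drug B", "drug C antagonizes drug D"]
out = pseudo_label(
    toy,
    lambda t: ("antagonistic" if "antagon" in t else "synergistic", 0.9),
    ckpt_path="demo_ckpt.jsonl",
)
```

Because low-confidence records are kept in the log rather than filtered out, a later pass (human review or a better model) can revisit exactly the cases a threshold-based pipeline would have thrown away.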
