Search Results (3,040)

Search Parameters:
Keywords = human–machine systems

19 pages, 2598 KB  
Article
Enhancing Shuttle–Pedestrian Communication: An Exploratory Evaluation of External HMI Systems Including Participants Experienced in Interacting with Automated Shuttles
by My Weidel, Sara Nygårdhs, Mattias Forsblad and Simon Schütte
Future Transp. 2025, 5(4), 153; https://doi.org/10.3390/futuretransp5040153 (registering DOI) - 1 Nov 2025
Abstract
This study evaluates four external Human–Machine Interface (eHMI) concepts developed for automated shuttles, focusing on improving communication with other road users, mainly pedestrians and cyclists. Without a human driver to signal intentions, eHMI systems can play a crucial role in conveying the shuttle’s movements and future path, fostering safety and trust. The four eHMI concepts (purple light projections, emotional eyes, auditory alerts, and informative text) were tested in a virtual reality (VR) environment. Participant evaluations were collected using an approach inspired by Kansei engineering and Likert scales. Results show that the auditory alerts and the informative-text eHMI are the most appreciated, with participants finding them relatively clear and easy to understand. In contrast, purple light projections were hard to see in daylight, and emotional eyes were often misinterpreted. Principal Component Analysis (PCA) identified three key factors for eHMI success: predictability, endangerment, and practicality. The findings underscore the need for intuitive, simple, and predictable designs, particularly in the absence of a driver. This study highlights how eHMI systems can support the integration of automated shuttles into public transport. It offers insights into design features that improve road safety and user experience, recommending further research on long-term effectiveness in real-world traffic conditions. Full article
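The factor extraction mentioned in this abstract (PCA over participant ratings) follows a standard workflow. Below is a minimal sketch assuming a hypothetical 40 × 12 matrix of Likert-scale item ratings; the sample size, item count, and synthetic data are illustrative assumptions, not values from the study.

```python
# Sketch: PCA on Likert-scale eHMI ratings (synthetic, illustrative data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical ratings: 40 participants x 12 adjective items, 1-7 Likert scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(40, 12)).astype(float)

# Standardize items so each contributes equally, then extract three components,
# mirroring the three reported factors (predictability, endangerment, practicality).
scaled = StandardScaler().fit_transform(ratings)
pca = PCA(n_components=3)
scores = pca.fit_transform(scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("participant factor scores shape:", scores.shape)   # (40, 3)
# Item loadings on each component; items loading strongly together suggest a shared factor.
print("loadings shape:", pca.components_.shape)           # (3, 12)
```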

41 pages, 8385 KB  
Article
A Facial-Expression-Aware Edge AI System for Driver Safety Monitoring
by Maram A. Almodhwahi and Bin Wang
Sensors 2025, 25(21), 6670; https://doi.org/10.3390/s25216670 (registering DOI) - 1 Nov 2025
Abstract
Road safety has emerged as a global issue, driven by the rapid rise in vehicle ownership and traffic congestion. Human error, like distraction, drowsiness, and panic, is the leading cause of road accidents. Conventional driver monitoring systems (DMSs) frequently fail to detect these emotional and cognitive states, limiting their potential to prevent accidents. To overcome these challenges, this work proposes a robust deep learning-based DMS framework capable of real-time detection and response to emotion-driven driver behaviors that pose safety risks. The proposed system employs convolutional neural networks (CNNs), specifically the Inception module and a Caffe-based ResNet-10 with a Single Shot Detector (SSD), to achieve efficient, accurate facial detection and classification. The DMS is trained on a comprehensive and diverse dataset from various public and private sources, ensuring robustness across a wide range of emotions and real-world driving scenarios. This approach enables the model to achieve an overall accuracy of 98.6%, an F1 score of 0.979, a precision of 0.980, and a recall of 0.979 across the four emotional states. Compared with existing techniques, the proposed model strikes an effective balance between computational efficiency and complexity, enabling the precise recognition of driving-relevant emotions, making it a practical and high-performing solution for real-world in-car driver monitoring systems. Full article
(This article belongs to the Special Issue Applications of Sensors Based on Embedded Systems)
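OpenCV's DNN module can load a Caffe-based ResNet-10 SSD face detector of the kind referenced in this abstract. The sketch below is a generic usage pattern, not the authors' pipeline; the model file names are the ones commonly distributed with OpenCV's face-detector samples and should be treated as assumptions, and the downstream emotion classifier is only a placeholder.

```python
# Sketch: face detection with a Caffe ResNet-10 SSD via OpenCV DNN (generic pattern).
import cv2
import numpy as np

# Assumed model files (commonly shipped with OpenCV's face-detector samples).
PROTO = "deploy.prototxt"
WEIGHTS = "res10_300x300_ssd_iter_140000.caffemodel"

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)

frame = cv2.imread("driver_frame.jpg")          # hypothetical in-cab frame
h, w = frame.shape[:2]

# The SSD expects 300x300 BGR input with mean subtraction.
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()                      # shape: (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence < 0.5:
        continue
    # Box coordinates are normalized to [0, 1]; scale back to the frame size.
    x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
    face = frame[y1:y2, x1:x2]
    # A separate emotion classifier (e.g., an Inception-style CNN as described above)
    # would be applied to `face` here.
```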

22 pages, 12886 KB  
Article
Digital Twin Prospects in IoT-Based Human Movement Monitoring Model
by Gulfeshan Parween, Adnan Al-Anbuky, Grant Mawston and Andrew Lowe
Sensors 2025, 25(21), 6674; https://doi.org/10.3390/s25216674 (registering DOI) - 1 Nov 2025
Abstract
Prehabilitation programs for abdominal pre-operative patients are increasingly recognized for improving surgical outcomes, reducing post-operative complications, and enhancing recovery. Internet of Things (IoT)-enabled human movement monitoring systems offer promising support in mixed-mode settings that combine clinical supervision with home-based independence. These systems enhance accessibility, reduce pressure on healthcare infrastructure, and address geographical isolation. However, current implementations often lack personalized movement analysis, adaptive intervention mechanisms, and real-time clinical integration, frequently requiring manual oversight and limiting functional outcomes. This review-based paper proposes a conceptual framework informed by the existing literature, integrating Digital Twin (DT) technology, and machine learning/Artificial Intelligence (ML/AI) to enhance IoT-based mixed-mode prehabilitation programs. The framework employs inertial sensors embedded in wearable devices and smartphones to continuously collect movement data during prehabilitation exercises for pre-operative patients. These data are processed at the edge or in the cloud. Advanced ML/AI algorithms classify activity types and intensities with high precision, overcoming limitations of traditional Fast Fourier Transform (FFT)-based recognition methods, such as frequency overlap and amplitude distortion. The Digital Twin continuously monitors IoT behavior and provides timely interventions to fine-tune personalized patient monitoring. It simulates patient-specific movement profiles and supports dynamic, automated adjustments based on real-time analysis. This facilitates adaptive interventions and fosters bidirectional communication between patients and clinicians, enabling dynamic and remote supervision. By combining IoT, Digital Twin, and ML/AI technologies, the proposed framework offers a novel, scalable approach to personalized pre-operative care, addressing current limitations and enhancing outcomes. Full article
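The FFT-based recognition limitations mentioned in this abstract can be made concrete with a small example: extracting frequency-domain features from a wearable accelerometer signal. The sketch below uses synthetic data and a hypothetical 50 Hz sampling rate; it illustrates the kind of spectral features an ML classifier would consume, not the proposed framework.

```python
# Sketch: frequency-domain features from a (synthetic) wrist accelerometer signal.
import numpy as np

fs = 50.0                                  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)               # 10-second window
# Synthetic signal: ~1.8 Hz walking cadence plus noise, standing in for real sensor data.
accel = 0.6 * np.sin(2 * np.pi * 1.8 * t) + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)

dominant_freq = freqs[spectrum.argmax()]   # cadence estimate
spectral_energy = float(np.sum(spectrum ** 2))

# These two features (plus time-domain statistics) would feed an activity classifier;
# overlapping cadences between activities are one reason pure FFT features fall short.
print(f"dominant frequency: {dominant_freq:.2f} Hz, energy: {spectral_energy:.1f}")
```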

25 pages, 6312 KB  
Review
Early Insights into AI and Machine Learning Applications in Hydrogel Microneedles: A Short Review
by Jannah Urifa and Kwok Wei Shah
Micro 2025, 5(4), 48; https://doi.org/10.3390/micro5040048 (registering DOI) - 31 Oct 2025
Abstract
Hydrogel microneedles (HMNs) act as non-invasive devices that can effortlessly merge with the human body for drug delivery and diagnostic purposes. Nonetheless, their improvement is limited by intricate and repetitive issues related to material composition, structural geometry, manufacturing accuracy, and performance enhancement. At present, there are only a limited number of studies accessible since artificial intelligence and machine learning (AI/ML) for HMN are just starting to emerge and are in the initial phase. Data is distributed across separate research efforts, spanning different fields. This review aims to tackle the disjointed and narrowly concentrated aspects of current research on AI/ML applications in HMN technologies by offering a cohesive, comprehensive synthesis of interdisciplinary insights, categorized into five thematic areas: (1) material and microneedle design, (2) diagnostics and therapy, (3) drug delivery, (4) drug development, and (5) health and agricultural sensing. For each domain, we detail typical AI methods, integration approaches, proven advantages, and ongoing difficulties. We suggest a systematic five-stage developmental pathway covering material discovery, structural design, manufacturing, biomedical performance, and advanced AI integration, intended to expedite the transition of HMNs from research ideas to clinically and commercially practical systems. The findings of this review indicate that AI/ML can significantly enhance HMN development by addressing design and fabrication constraints via predictive modeling, adaptive control, and process optimization. By synchronizing these abilities with clinical and commercial translation requirements, AI/ML can act as key facilitators in converting HMNs from research ideas into scalable, practical biomedical solutions. Full article

26 pages, 4427 KB  
Review
Digital Technology Integration in Risk Management of Human–Robot Collaboration Within Intelligent Construction—A Systematic Review and Future Research Directions
by Xingyuan Ding, Yinshuang Xu, Min Zheng, Weide Kang and Xiaer Xiahou
Systems 2025, 13(11), 974; https://doi.org/10.3390/systems13110974 (registering DOI) - 31 Oct 2025
Abstract
With the digital transformation of the construction industry toward intelligent construction, advanced digital technologies—including Artificial Intelligence (AI), Digital Twins (DTs), and Internet of Things (IoT)—increasingly support Human–Robot Collaboration (HRC), offering productivity gains while introducing new safety risks. This study presents a systematic review of digital technology applications and risk management practices in HRC scenarios within intelligent construction environments. Following the PRISMA protocol, this study retrieved 7640 publications from the Web of Science database. After screening, 70 high-quality studies were selected for in-depth analysis. This review identifies four core digital technologies central to current HRC research: multi-modal acquisition technology, artificial intelligence learning technology (AI learning technology), Digital Twins (DTs), and Augmented Reality (AR). Based on the findings, this study constructed a systematic framework for digital technology in HRC, consisting of data acquisition and perception, data transmission and storage, intelligent analysis and decision support, human–machine interaction and collaboration, and intelligent equipment and automation. The study highlights core challenges across risk management stages, including difficulties in multi-modal fusion (risk identification), lack of quantitative systems (risk assessment), real-time performance issues (risk response), and weak feedback loops in risk monitoring and continuous improvement. Moreover, future research directions are proposed, including trust in HRC, privacy and ethics, and closed-loop optimization. This research provides theoretical insights and practical recommendations for advancing digital safety systems, enabling efficient risk management, and supporting the safe digital transformation of the construction industry. Full article

18 pages, 2465 KB  
Article
Comparison of Mask-R-CNN and Thresholding-Based Segmentation for High-Throughput Phenotyping of Walnut Kernel Color
by Steven H. Lee, Sean McDowell, Charles Leslie, Kristina McCreery, Mason Earles and Patrick J. Brown
Plants 2025, 14(21), 3335; https://doi.org/10.3390/plants14213335 (registering DOI) - 31 Oct 2025
Abstract
High-throughput phenotyping has become essential for plant breeding programs, replacing traditional methods that rely on subjective scales influenced by human judgment. Machine learning (ML) computer vision systems have successfully used convolutional neural networks (CNNs) for image segmentation, providing greater flexibility than thresholding methods that may require carefully staged images. This study compares two quantitative image analysis methods, rule-based thresholding using the magick package in R and an instance-segmentation pipeline based on the widely used Mask-R-CNN architecture, and then compares the output of each to two different sets of human evaluations. Walnuts were collected over three years from over 3000 individual trees maintained by the UC Davis walnut breeding program. The resulting 90,961 kernels were placed into 100-cell trays and imaged using a 20-megapixel Basler camera with a Sony IMX183 sensor. Quantitative data from both image analysis methods were highly correlated for both lightness (L*; r2 = 0.997) and size (r2 = 0.984). The thresholding method required many manual adjustments to account for minor discrepancies in staging, while the CNN method was robust after a rapid initial training on only 13 images. The two human scoring methods were not highly correlated with the image analysis methods or with each other. Pixel classification provides data similar to human color assessments but offers greater consistency across different years. The thresholding approach offers flexibility and has been applied to other color-based phenotyping tasks, while the CNN approach can be adapted to images that are not perfectly staged and be retrained to quantify more subtle kernel characteristics such as spotting and shrivel. Full article
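The thresholding pipeline in this abstract was implemented with the magick package in R; the sketch below shows the same idea in Python with OpenCV, purely as an illustration of rule-based segmentation followed by L* measurement. The threshold value, tray image name, and minimum blob area are assumptions.

```python
# Sketch: rule-based segmentation of kernels on a dark tray, then mean L* per kernel.
# Illustrative Python/OpenCV version of a thresholding pipeline (the study used R's magick).
import cv2

img = cv2.imread("kernel_tray.jpg")                     # hypothetical tray image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Global threshold separating light kernels from the darker tray background (assumed value).
_, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY)

# Connected components stand in for individual kernels; tiny blobs are treated as noise.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(float)

for i in range(1, n):                                   # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < 500:
        continue
    kernel_pixels = lab[labels == i]
    # OpenCV stores L in [0, 255] for 8-bit images; rescale to the CIELAB range of [0, 100].
    mean_L = kernel_pixels[:, 0].mean() * 100.0 / 255.0
    area_px = int(stats[i, cv2.CC_STAT_AREA])
    print(f"kernel {i}: L* = {mean_L:.1f}, area = {area_px} px")
```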

24 pages, 773 KB  
Article
Vocabulary at the Living–Machine Interface: A Narrative Review of Shared Lexicon for Hybrid AI
by Andrew Prahl and Yan Li
Biomimetics 2025, 10(11), 723; https://doi.org/10.3390/biomimetics10110723 - 29 Oct 2025
Abstract
The rapid rise of bio-hybrid robots and hybrid human–AI systems has triggered an explosion of terminology that inhibits clarity and progress. To investigate how terms are defined, we conduct a narrative scoping review and concept analysis. We extract 60 verbatim definitions spanning engineering, human–computer interaction, human factors, biomimetics, philosophy, and policy. Entries are coded on three axes: agency locus (human, shared, machine), integration depth (loose, moderate, high), and normative valence (negative, neutral, positive), and then clustered. Four categories emerged from the analysis: (i) machine-led, low-integration architectures such as neuro-symbolic or “Hybrid-AI” models; (ii) shared, moderately integrated systems like mixed-initiative cobots; (iii) human-led, medium-coupling decision aids; and (iv) human-centric, low-integration frameworks that focus on user agency. Most definitions adopt a generally positive valence, suggesting a gap with risk-heavy popular narratives. We show that, for researchers investigating where living meets machine, terminological precision is more than semantics: it can shape design, accountability, and public trust. This narrative review contributes a comparative taxonomy and a shared lexicon for reporting hybrid systems. Researchers are encouraged to clarify which sense of Hybrid-AI is intended (algorithmic fusion vs. human–AI ensemble), to specify agency locus and integration depth, and to adopt measures consistent with these conceptualizations. Such practices can reduce construct confusion, enhance cross-study comparability, and align design, safety, and regulatory expectations across domains. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
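The three-axis coding and clustering described in this abstract can be illustrated with a small sketch: each definition becomes a point in a three-dimensional coded space and is grouped by a clustering algorithm. The numeric codes, the tiny sample, and the choice of k-means with four clusters are illustrative assumptions, not the review's actual procedure.

```python
# Sketch: clustering definitions coded on (agency locus, integration depth, valence).
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical ordinal codes: agency locus (0 human, 1 shared, 2 machine),
# integration depth (0 loose, 1 moderate, 2 high), valence (-1, 0, +1).
coded = np.array([
    [2, 0, 0],   # e.g., a neuro-symbolic "Hybrid-AI" definition
    [1, 1, 1],   # e.g., a mixed-initiative cobot definition
    [0, 1, 0],   # e.g., a decision-aid definition
    [0, 0, 1],   # e.g., a user-agency framework
    [2, 0, 1],
    [1, 1, 0],
])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(coded)
print("cluster assignments:", km.labels_)
print("cluster centers:\n", km.cluster_centers_)
```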

25 pages, 2154 KB  
Article
A Multimodal Polygraph Framework with Optimized Machine Learning for Robust Deception Detection
by Omar Shalash, Ahmed Métwalli, Mohammed Sallam and Esraa Khatab
Inventions 2025, 10(6), 96; https://doi.org/10.3390/inventions10060096 - 29 Oct 2025
Abstract
Deception detection is considered a concern for all individuals in their everyday lives, as it greatly affects human interactions. While multiple automatic lie detection systems exist, their accuracy still needs to be improved. Additionally, the lack of adequate and realistic datasets hinders the development of reliable systems. This paper presents a new multimodal dataset with physiological data (heart rate, galvanic skin response, and body temperature), in addition to demographic data (age, weight, and height). The presented dataset was collected from 49 unique subjects. Moreover, this paper presents a polygraph-based lie detection system utilizing multimodal sensor fusion. Different machine learning algorithms are used and evaluated. Random Forest has achieved an accuracy of 97%, outperforming Logistic Regression (58%), Support Vector Machine (58% with perfect recall of 1.00), and k-Nearest Neighbor (83%). The model shows excellent precision and recall (0.97 each), making it effective for applications such as criminal investigations. With a computation time of 0.06 s, Random Forest has proven to be efficient for real-time use. Additionally, a robust k-fold cross-validation procedure was conducted, combined with Grid Search and Particle Swarm Optimization (PSO) for hyperparameter tuning, which substantially reduced the gap between training and validation accuracies from several percentage points to under 1%, underscoring the model’s enhanced generalization and reliability in real-world scenarios. Full article
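The model comparison and tuning described in this abstract follow a standard scikit-learn pattern. The sketch below assumes a hypothetical feature table with the listed physiological and demographic columns and a binary deception label; the hyperparameter grid is illustrative, and PSO (which the study also used) is omitted for brevity.

```python
# Sketch: Random Forest with k-fold cross-validation and grid search on multimodal features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Hypothetical feature matrix: [heart_rate, gsr, body_temp, age, weight, height] per trial.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = rng.integers(0, 2, size=300)          # 1 = deceptive, 0 = truthful (synthetic labels)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

# Report the cross-validated accuracy of the best configuration on the same folds.
best_rf = search.best_estimator_
scores = cross_val_score(best_rf, X, y, cv=5)
print("best params:", search.best_params_)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```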

24 pages, 1962 KB  
Systematic Review
Autonomous Hazardous Gas Detection Systems: A Systematic Review
by Boon-Keat Chew, Azwan Mahmud and Harjit Singh
Sensors 2025, 25(21), 6618; https://doi.org/10.3390/s25216618 - 28 Oct 2025
Abstract
Gas Detection Systems (GDSs) are critical safety technologies deployed in semiconductor wafer fabrication facilities to monitor the presence of hazardous gases. A GDS receives input from gas detectors equipped with consumable gas sensors, such as electrochemical (EC) and metal oxide semiconductor (MOS) types, which are used to detect toxic, flammable, or reactive gases. However, over time, sensor degradation, accuracy drift, and cross-sensitivity to interference gases compromise their intended performance. To maintain sensor accuracy and reliability, routine manual calibration is required—an approach that is resource-intensive, time-consuming, and prone to human error, especially in facilities with extensive networks of gas detectors. This systematic review (registered with PROSPERO on 11 October 2025; registration number 1166004) explored ways to minimize or eliminate the dependency on manual calibration. Findings indicate that using properly calibrated gas sensor data can support advanced data analytics and machine learning algorithms to correct accuracy drift and improve gas selectivity. Techniques such as Principal Component Analysis (PCA), Support Vector Machines (SVMs), multivariate regression, and calibration transfer have been effectively applied to differentiate target gases from interferences and compensate for sensor aging and environmental variability. The paper also explores the emerging potential for integrating calibration-free or self-correcting gas sensor systems into existing GDS infrastructures. Despite significant progress, key research challenges persist. These include understanding the dynamics of sensor response drift due to prolonged gas exposure, synchronizing multi-sensor data collection to minimize time-related drift, and aligning ambient sensor signals with gas analytical references. Future research should prioritize the development of application-specific datasets, adaptive environmental compensation models, and hybrid validation frameworks. These advancements will contribute to the realization of intelligent, autonomous, and data-driven gas detection solutions that are robust, scalable, and well-suited to the operational complexities of modern industrial environments. Full article
(This article belongs to the Section Physical Sensors)
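One of the drift-correction ideas above, calibration transfer by regressing a drifted sensor's responses onto a freshly calibrated reference, can be sketched in a few lines. The data, the linear model choice, and the single-sensor setup are assumptions for illustration; real deployments typically use multivariate models over whole sensor arrays.

```python
# Sketch: simple calibration transfer - map a drifted sensor's readings back onto
# a calibrated reference using paired exposures to known gas concentrations.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical paired readings (ppm) at several reference concentrations.
reference = np.array([0.0, 5.0, 10.0, 20.0, 40.0])       # calibrated reference sensor
drifted = np.array([0.8, 6.4, 12.1, 23.5, 45.2])          # aged field sensor with gain/offset drift

model = LinearRegression().fit(drifted.reshape(-1, 1), reference)

# Correct a new field reading before it reaches the gas detection system's alarm logic.
raw = np.array([[15.0]])
corrected = model.predict(raw)
print(f"raw {raw[0, 0]:.1f} ppm -> corrected {corrected[0]:.1f} ppm")
```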

30 pages, 3557 KB  
Article
Application of Graph Neural Networks to Model Stem Cell Donor–Recipient Compatibility in the Detection and Classification of Leukemia
by Saeeda Meftah Salem Eltanashi and Ayça Kurnaz Türkben
Appl. Sci. 2025, 15(21), 11500; https://doi.org/10.3390/app152111500 - 28 Oct 2025
Abstract
Stem cell transplants are a common treatment for leukemia, and close donor–recipient matching improves their success. Machine learning models like support vector machine (SVM), convolutional neural networks (CNNs), and recurrent neural networks (RNNs) can have difficulty handling the complexity of genomic and immune data, which then lowers the accuracy of clinical predictions. This study looks at using graph neural networks (GNNs) in a different way. This method combines data such as single-nucleotide polymorphisms (SNPs), human leukocyte antigen (HLA) typing, and clinical details to create a graph that shows the relationship between donor and recipient pairs. The framework uses graph attention networks (GATs) to focus on key compatibility traits and Dynamic GNNs (DGNNs) to monitor changes in the immune system and the disease’s progression. With data from the 1000 Genomes Project, the model correctly identified matches with 97.68% to 99.74% accuracy and classified them with 98.76% to 99.4% accuracy, outperforming standard machine learning models. The model uses SNP similarity and HLA mismatches to assess compatibility, which enhances its match prediction and compatibility explanation capabilities. The results suggest that GNNs offer a helpful and understandable way to model donor–recipient matching, potentially assisting in early leukemia detection and personalized stem cell transplant plans. Full article
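A minimal graph-attention setup of the kind described in this abstract can be sketched with PyTorch Geometric: node features (e.g., encoded HLA and SNP summaries) pass through GAT layers, and a donor-recipient pair is scored from the two node embeddings. The feature sizes, random data, and pair-scoring head are illustrative assumptions, not the paper's architecture.

```python
# Sketch: GAT node encoder plus a pair-scoring head for donor-recipient compatibility.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class PairGAT(nn.Module):
    def __init__(self, in_dim=16, hid=32, heads=4):
        super().__init__()
        self.g1 = GATConv(in_dim, hid, heads=heads)        # attention over graph neighbours
        self.g2 = GATConv(hid * heads, hid, heads=1)
        self.score = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, x, edge_index, pairs):
        h = torch.relu(self.g1(x, edge_index))
        h = torch.relu(self.g2(h, edge_index))
        # Concatenate donor and recipient embeddings for each candidate pair.
        z = torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], dim=1)
        return torch.sigmoid(self.score(z)).squeeze(-1)     # compatibility probability

# Toy graph: 10 individuals with random features, a few similarity edges, 3 candidate pairs.
x = torch.randn(10, 16)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=torch.long)
pairs = torch.tensor([[0, 5], [1, 6], [2, 9]])
print(PairGAT()(x, edge_index, pairs))
```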

35 pages, 8683 KB  
Article
Teaching Machine Learning to Undergraduate Electrical Engineering Students
by Gerald Fudge, Anika Rimu, William Zorn, July Ringle and Cody Barnett
Computers 2025, 14(11), 465; https://doi.org/10.3390/computers14110465 - 28 Oct 2025
Abstract
Proficiency in machine learning (ML) and the associated computational math foundations have become critical skills for engineers. Required areas of proficiency include the ability to use available ML tools and the ability to develop new tools to solve engineering problems. Engineers also need to be proficient in using generative artificial intelligence (AI) tools in a variety of contexts, including as an aid to learning, research, writing, and code generation. Using these tools properly requires a solid understanding of the associated computational math foundation. Without this foundation, engineers will struggle with developing new tools and can easily misuse available ML/AI tools, leading to poorly designed systems that are suboptimal or even harmful to society. Teaching (and learning) these skills can be difficult due to the breadth of skills required. One contribution of this paper is that it approaches teaching this topic within an industrial engineering human factors framework. Another contribution is the detailed case study narrative describing specific pedagogical challenges, including implementation of teaching strategies (successful and unsuccessful), recent observed trends in generative AI, and student perspectives on learning this topic. Although the primary methodology is anecdotal, we also include empirical data in support of anecdotal results. Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)

28 pages, 4508 KB  
Article
Mixed Reality-Based Multi-Scenario Visualization and Control in Automated Terminals: A Middleware and Digital Twin Driven Approach
by Yubo Wang, Enyu Zhang, Ang Yang, Keshuang Du and Jing Gao
Buildings 2025, 15(21), 3879; https://doi.org/10.3390/buildings15213879 - 27 Oct 2025
Abstract
This study presents a Digital Twin–Mixed Reality (DT–MR) framework for the immersive and interactive supervision of automated container terminals (ACTs), addressing the fragmented data and limited situational awareness of conventional 2D monitoring systems. The framework employs a middleware-centric architecture that integrates heterogeneous subsystems—covering terminal operation, equipment control, and information management—through standardized industrial communication protocols. It ensures synchronized timestamps and delivers semantically aligned, low-latency data streams to a multi-scale Digital Twin developed in Unity. The twin applies level-of-detail modeling, spatial anchoring, and coordinate alignment (from Industry Foundation Classes (IFCs) to east–north–up (ENU) coordinates and Unity space) for accurate registration with physical assets, while a Microsoft HoloLens 2 device provides an intuitive Mixed Reality interface that combines gaze, gesture, and voice commands with built-in safety interlocks for secure human–machine interaction. Quantitative performance benchmarks—latency ≤100 ms, status refresh ≤1 s, and throughput ≥10,000 events/s—were met through targeted engineering and validated using representative scenarios of quay crane alignment and automated guided vehicle (AGV) rerouting, demonstrating improved anomaly detection, reduced decision latency, and enhanced operational resilience. The proposed DT–MR pipeline establishes a reproducible and extensible foundation for real-time, human-in-the-loop supervision across ports, airports, and other large-scale smart infrastructures. Full article
(This article belongs to the Special Issue Digital Technologies, AI and BIM in Construction)

34 pages, 3325 KB  
Systematic Review
A Systematic Review of Methods and Algorithms for the Intelligent Processing of Agricultural Data Applied to Sunflower Crops
by Valentina Arustamyan, Pavel Lyakhov, Ulyana Lyakhova, Ruslan Abdulkadirov, Vyacheslav Rybin and Denis Butusov
Mach. Learn. Knowl. Extr. 2025, 7(4), 130; https://doi.org/10.3390/make7040130 - 27 Oct 2025
Abstract
Food shortages are becoming increasingly urgent due to the growing global population. Enhancing oil crop yields, particularly sunflowers, is key to ensuring food security and the sustainable provision of vegetable fats essential for human nutrition and animal feed. However, sunflower yields are often reduced by diseases, pests, and other factors. Remote sensing technologies, such as unmanned aerial vehicle (UAV) scans and satellite monitoring, combined with machine learning algorithms, provide powerful tools for monitoring crop health, diagnosing diseases, mapping fields, and forecasting yields. These technologies enhance agricultural efficiency and reduce environmental impact, supporting sustainable development in agriculture. This systematic review aims to assess the accuracy of various machine learning technologies, including classification and segmentation algorithms, convolutional neural networks, random forests, and support vector machines. These methods are applied to monitor sunflower crop conditions, diagnose diseases, and forecast yields. It provides a comprehensive analysis of current methods and their potential for precision farming applications. The review also discusses future research directions, including the development of automated systems for crop monitoring and disease diagnostics. Full article
(This article belongs to the Section Thematic Reviews)

24 pages, 751 KB  
Review
Integrating Advanced Metabolomics and Machine Learning for Anti-Doping in Human Athletes
by Mohannad N. AbuHaweeleh, Ahmad Hamdan, Jawaher Al-Essa, Shaikha Aljaal, Nasser Al Saad, Costas Georgakopoulos, Francesco Botre and Mohamed A. Elrayess
Metabolites 2025, 15(11), 696; https://doi.org/10.3390/metabo15110696 - 27 Oct 2025
Abstract
The ongoing challenge of doping in sports has triggered the adoption of advanced scientific strategies for the detection and prevention of doping abuse. This review examines the potential of integrating metabolomics aided by artificial intelligence (AI) and machine learning (ML) for profiling small-molecule metabolites across biological systems to advance anti-doping efforts. While traditional targeted detection methods serve a primarily forensic role—providing legally defensible evidence by directly identifying prohibited substances—metabolomics offers complementary insights by revealing both exogenous compounds and endogenous physiological alterations that may persist beyond direct drug detection windows, rather than serving as an alternative to routine forensic testing. High-throughput platforms such as UHPLC-HRMS and NMR, coupled with targeted and untargeted metabolomic workflows, can provide comprehensive datasets that help discriminate between doped and clean athlete profiles. However, the complexity and dimensionality of these datasets necessitate sophisticated computational tools. ML algorithms, including supervised models like XGBoost and multi-layer perceptrons, and unsupervised methods such as clustering and dimensionality reduction, enable robust pattern recognition, classification, and anomaly detection. These approaches enhance both the sensitivity and specificity of diagnostic screening and optimize resource allocation. Case studies illustrate the value of integrating metabolomics and ML—for example, detecting recombinant human erythropoietin (r-HuEPO) use via indirect blood markers and uncovering testosterone and corticosteroid abuse with extended detection windows. Future progress will rely on interdisciplinary collaboration, open-access data infrastructure, and continuous methodological innovation to fully realize the complementary role of these technologies in supporting fair play and athlete well-being. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Metabolomics)
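As an illustration of the supervised side of the workflow above, a gradient-boosted classifier can be trained on metabolite intensity features to separate hypothetical "doped" and "clean" profiles. The synthetic data, feature count, and hyperparameters below are assumptions; a real anti-doping model would also need rigorous validation and longitudinal athlete-passport context.

```python
# Sketch: XGBoost classifier on (synthetic) metabolite intensity profiles.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical data: 200 samples x 500 metabolite features; label 1 = doped, 0 = clean.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      subsample=0.8, eval_metric="logloss")
model.fit(X_tr, y_tr)

print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```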

31 pages, 1423 KB  
Article
Agentic AI in Smart Manufacturing: Enabling Human-Centric Predictive Maintenance Ecosystems
by Andrés Fernández-Miguel, Susana Ortíz-Marcos, Mariano Jiménez-Calzado, Alfonso P. Fernández del Hoyo, Fernando E. García-Muiña and Davide Settembre-Blundo
Appl. Sci. 2025, 15(21), 11414; https://doi.org/10.3390/app152111414 - 24 Oct 2025
Abstract
Smart manufacturing demands adaptive, scalable, and human-centric solutions for predictive maintenance. This paper introduces the concept of Agentic AI, a paradigm that extends beyond traditional multi-agent systems and collaborative AI by emphasizing agency: the ability of AI entities to act autonomously, coordinate proactively, and remain accountable under human oversight. Through federated learning, edge computing, and distributed intelligence, the proposed framework enables intentional, goal-oriented monitoring agents to form self-organizing predictive maintenance ecosystems. Validated in a ceramic manufacturing facility, the system achieved 94% predictive accuracy, a 67% reduction in false positives, and a 43% decrease in unplanned downtime. Economic analysis confirmed financial viability with a 1.6-year payback period and a €447,300 NPV over five years. The framework also embeds explainable AI and trust calibration mechanisms, ensuring transparency and safe human–machine collaboration. These results demonstrate that Agentic AI provides both conceptual and practical pathways for transitioning from reactive monitoring to resilient, autonomous, and human-centered industrial intelligence. Full article
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
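The federated learning component described in this abstract can be reduced to its core aggregation step: each edge agent trains locally and a coordinator averages the resulting parameters, weighted by local data volume. The sketch below is a plain-NumPy FedAvg illustration with made-up weights, not the paper's system.

```python
# Sketch: federated averaging (FedAvg) of locally trained model parameters.
import numpy as np

def fedavg(local_weights, local_sizes):
    """Average parameter arrays from edge agents, weighted by local dataset size."""
    total = float(sum(local_sizes))
    avg = [np.zeros_like(w) for w in local_weights[0]]
    for weights, n in zip(local_weights, local_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# Three hypothetical monitoring agents, each holding a two-layer model's parameters.
rng = np.random.default_rng(3)
agents = [[rng.normal(size=(8, 4)), rng.normal(size=(4,))] for _ in range(3)]
sizes = [1200, 800, 2000]                       # local sample counts per machine/agent

global_model = fedavg(agents, sizes)
print([w.shape for w in global_model])          # [(8, 4), (4,)]
```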
