Search Results (7,540)

Search Parameters:
Keywords = vision system

26 pages, 1022 KB  
Article
Strategic Competence in Sustainability Education: Conceptual Patterns Identified Through AI-Assisted Qualitative Analysis
by Cathérine Conradty and Franz Xaver Bogner
Sustainability 2026, 18(7), 3643; https://doi.org/10.3390/su18073643 - 7 Apr 2026
Abstract
This study investigates how participants conceptualise sustainability and sustainability citizenship, as well as how these conceptualisations relate to perceived agency. Drawing on two open-ended prompts, it analyses participants’ visions of a sustainable future and the roles they would like to play within it. The dataset was based on 1714 coded response segments from 164 participants. Methodologically, the study combines qualitative content analysis, independent human-AI double coding, manual validation, inter-rater reliability assessment, and residual-based co-occurrence analysis within a qualitatively grounded mixed-methods design. The results show that sustainability is predominantly framed in civic, symbolic, and ecological terms, whereas strategic competence and professionally articulated agency remain less visible. Sustainability meanings and role conceptions also vary systematically across disciplinary contexts. In addition, the analyses reveal patterned gaps between participants’ future visions and their self-attributed roles in sustainability transformations. The study contributes empirical insights into sustainability meaning-making and perceived agency and shows how LLM-assisted coding can be embedded in a transparent mixed-methods workflow. For sustainability education, the findings underline the importance of strengthening strategic and systemic dimensions of competence and linking civic engagement more closely to professional pathways of action. Full article
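Inter-rater reliability between human and AI coders of the kind this abstract describes is commonly quantified with Cohen's kappa. A minimal sketch of that computation — the code labels and data below are hypothetical, not from the study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of segments coded identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codes from a human coder and an LLM coder
human = ["civic", "ecological", "civic", "symbolic", "civic", "ecological"]
ai    = ["civic", "ecological", "symbolic", "symbolic", "civic", "civic"]
print(round(cohens_kappa(human, ai), 3))  # 0.478
```

Values near 1 indicate near-perfect agreement; the study's actual reliability assessment may use a different coefficient.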

20 pages, 518 KB  
Article
Sustainable Digital Transformation in Music Education: An Analysis of Teacher Competencies in the Light of TPACK and International Frameworks
by Şehriban Koca, Atakan Kutlu, Hazan Kurtaslan, Ümran Ezgi Güleken and Ahmet Can Çakal
Sustainability 2026, 18(7), 3640; https://doi.org/10.3390/su18073640 - 7 Apr 2026
Abstract
The education systems, financial circumstances, and societal structures of our century expect educators to possess one key ability above all: guiding students who are highly digitally competent while keeping themselves up to date. The Sustainable Development Goals (SDG 4) emphasized by the United Nations highlight the necessity of continuously updating teacher competencies for quality and inclusive education. Establishing music teachers' digital competencies on a sustainable basis depends on combining technical skills with a pedagogical vision. Therefore, thoroughly examining music teachers' digital competencies in light of international standards and the TPACK model is critical to ensuring the sustainability of digital transformation at both the institutional and individual levels. This study, which treats digital literacy as an important part of sustainable education in music education, examines the digital skills of music teachers in Turkey within the scope of international digital literacy frameworks and the TPACK approach. Digital skills are related to teachers' professional practices, teaching-learning processes, assessment approaches, and the support of students' digital literacy. The research concluded that music teachers' digital competency levels are at the "explorer" level, meaning they are aware of digital technologies and conduct research to develop themselves in this area. Full article
(This article belongs to the Special Issue Sustainable Digital Education: Innovations in Teaching and Learning)

19 pages, 500 KB  
Article
The Politics of Buddhist Artifacts: Tribute and Bestowal Between Heian and Northern Song
by Hao Kang and Kanliang Wang
Religions 2026, 17(4), 460; https://doi.org/10.3390/rel17040460 - 7 Apr 2026
Abstract
During the Northern Song period, the gifting of Buddhist artifacts frequently appeared in Sino–Japanese exchanges. Although Japan had established a self-centered order with its emperor at its core and tended toward isolation, the Heian imperial court, led by the Fujiwara regents, actively dispatched monks to Song China and requested Buddhist artifacts. Although these monks were not official envoys, they reflected a trend toward diversified diplomacy in Japan. Recognizing the close ties between these monks and the Japanese rulers, the Song court used the bestowal of Buddhist artifacts to encourage them to convey messages to the Japanese court, urging Japan to send formal tribute missions and thereby incorporating this into its broader diplomatic strategy. Under the “Chanyuan Treaty System”, Buddhism served as a shared cultural foundation for transregional interaction in East Asia. By collecting and bestowing Buddhist artifacts, the Song Dynasty proclaimed its orthodox status within the Buddhist world and enhanced its diplomatic influence. However, the Heian court, upon receiving these artifacts, repurposed them to construct their own divine authority and vision of a “Land of Buddha’s Kingdom”. Thus, the very same set of Buddhist artifacts carried vastly different symbolic meanings and functions in the Northern Song–Heian diplomatic interactions. Full article
9 pages, 640 KB  
Communication
Noninvasive Measurement of Infant Respiration During Sleep: A Validation Study
by Melissa N. Horger, Maristella Lucchini, Shambhavi Thakur, Rebecca M. C. Spencer and Natalie Barnett
Sensors 2026, 26(7), 2275; https://doi.org/10.3390/s26072275 - 7 Apr 2026
Abstract
Infant respiration is a physiological marker of health and wellbeing that can provide insight into sleep and wake patterns. Technological innovation presents opportunities to enhance measurements of physiological signals, which improves ecological validity and participant experiences. This is particularly true in the context of studying infant sleep, as it can be disrupted by changes in the environment and the physical sensation of unfamiliar or uncomfortable sensors. The goal of this study was to examine if a commercially available video baby monitor (Nanit system) can accurately estimate respiration during a nap relative to a commonly used cardiorespiratory sensor (Isansys Lifetouch sensor). Thirty-three infants (M = 9.7 months; range = 1–22 months) took a nap while wearing the Lifetouch sensor and Nanit Breathing Band. Infants slept in view of the Nanit camera. A computer vision algorithm applied to the video detected movement of the patterns on the fabric band worn around the infant’s torso to determine respiratory rates. The results showed strong consistency between the devices. More than 95% of the minute-by-minute respiration data fell within the limits of agreement, with little bias. Agreement was not influenced by age or nap duration, suggesting the Nanit Breathing Band provides a valid measure of respiration across infancy. Full article
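The limits-of-agreement analysis reported above is a Bland–Altman comparison. A minimal sketch of that computation — the device names come from the abstract, but the breaths-per-minute readings below are invented:

```python
from statistics import mean, stdev

def bland_altman(ref, test):
    """Bias and 95% limits of agreement between paired measurements."""
    diffs = [t - r for r, t in zip(ref, test)]
    bias = mean(diffs)                      # systematic offset between devices
    spread = 1.96 * stdev(diffs)            # half-width of the 95% limits
    lower, upper = bias - spread, bias + spread
    # Fraction of paired differences falling within the limits of agreement
    within = sum(lower <= d <= upper for d in diffs) / len(diffs)
    return bias, (lower, upper), within

# Hypothetical minute-by-minute respiratory rates (breaths per minute)
lifetouch = [32.0, 30.5, 28.0, 31.2, 29.8, 33.1, 30.0, 27.5]
nanit     = [31.6, 30.9, 27.8, 31.0, 30.2, 32.8, 30.4, 27.9]
bias, limits, within = bland_altman(lifetouch, nanit)
```

A small bias and a high within-limits fraction, as the study reports (>95% of minutes), indicate the two devices agree closely.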
(This article belongs to the Collection Biomedical Imaging and Sensing)

22 pages, 4214 KB  
Article
Sustainable Automation of Monitoring and Production Accounting in Greenhouse Complexes Using Integrated AI, Robotics, and Data Systems
by Alexander Uzhinskiy, Lev Teryaev, Artem Dorokhin and Mikhail Ivashev
Sustainability 2026, 18(7), 3620; https://doi.org/10.3390/su18073620 - 7 Apr 2026
Abstract
Production greenhouse complexes increasingly require automation and digitalization to address rising labor costs, improve productivity, and support sustainable resource use. However, most existing solutions target isolated tasks and lack a unified framework for continuous monitoring and production-oriented accounting at facility scale. This paper proposes a system-level architecture that integrates robotic monitoring platforms, AI-based perception, and cloud-based data management into a coherent operational framework. The robotic monitoring platforms operate on rails and concrete surfaces and are capable of elevating cameras and sensors up to 5 m to support plant-health assessment, environmental monitoring, and production accounting. Aggregated data are incorporated into a digital twin that supports spatial traceability, historical analysis, and decision support. The proposed approach enables continuous inspection, improves early detection of crop stress, reduces repetitive manual scouting, and supports targeted interventions. The framework provides a scalable foundation for sustainable, data-driven greenhouse management and practical deployment of robotic monitoring systems in industrial production environments. Full article

27 pages, 26065 KB  
Article
AEFOP: Adversarial Energy Field Optimization for Adversarial Example Purification
by Heqi Peng, Shengpeng Xiao and Yuanfang Guo
Appl. Sci. 2026, 16(7), 3588; https://doi.org/10.3390/app16073588 - 7 Apr 2026
Abstract
As AI-driven educational systems increasingly rely on deep neural networks, their vulnerability to adversarial perturbations raises concerns about assessment integrity, fairness, and reliability. Adversarial example purification is attractive for such deployments because it removes input perturbations without modifying the already deployed models. However, most existing purification methods are inherently goal-free: denoising-based approaches apply blind heuristic operators, while reconstruction-based methods rely on stochastic sampling guided by natural image priors. These methods typically suppress perturbations at the cost of weakening semantic details or inducing structural distortions. To address this limitation, we propose a novel goal-directed purification framework, termed adversarial energy field optimization for adversarial example purification (AEFOP). AEFOP formulates purification as a constrained optimization problem by defining a learnable adversarial energy which quantifies how far an input deviates from the benign region. This allows adversarial examples to be explicitly pushed from high-energy regions toward low-energy benign regions along an interpretable descent trajectory. Specifically, we build an adversarial energy network and optimize the energy field via a two-stage strategy: adversarial energy field shaping, which enforces distance-like energy behavior and correct gradient directions, and task-driven energy field calibration, which unrolls the descent process to calibrate the field with classification-consistency and semantic-preservation objectives. Extensive experiments across multiple attack scenarios demonstrate that AEFOP achieves superior purification accuracy and high visual quality while requiring only a few gradient steps during inference, offering a practical and efficient robustness layer for vision-based AI services in education. Full article
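The descent along an energy field that AEFOP describes can be illustrated with a toy quadratic energy. This is only a schematic of gradient-based purification under an assumed energy landscape, not the paper's learned adversarial energy network:

```python
def purify(x, energy_grad, steps=10, lr=0.1):
    """Gradient descent on an energy function: push an input from a
    high-energy region toward the low-energy (benign) region."""
    for _ in range(steps):
        g = energy_grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Toy energy E(x) = ||x - b||^2 with gradient 2(x - b); b is a stand-in
# for the benign region's centre, not anything learned in the paper.
b = [0.0, 1.0]
grad = lambda x: [2 * (xi - bi) for xi, bi in zip(x, b)]
purified = purify([4.0, -3.0], grad)
```

Each step moves the input along the negative energy gradient, so after a few iterations the point lies close to the benign centre; AEFOP's contribution is shaping and calibrating the learned field that supplies this gradient.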

24 pages, 347 KB  
Article
Anagogical Function of Images in Cusanus’s Thought: The Case of Veraicon
by Agnieszka Maria Kijewska
Religions 2026, 17(4), 457; https://doi.org/10.3390/rel17040457 - 7 Apr 2026
Abstract
The paper presents Nicholas of Cusa's position in the debate on mystical theology that took place around the middle of the 15th century in monastic circles. His contribution to that debate took the form of the treatise entitled On the Vision of God, complemented by a painted representation of the "All-seeing Face". Both the treatise and the painting were designed as aids in an experiment devised by Cusanus for his Benedictine friends at Tegernsee Abbey, to help them in their progress towards mystical contemplation. The intention was to show them a way to lift their thought from the perception of the image, through meditation and prayer, to the contemplation of God. Thus, both the icon and the treatise were intended to fulfil an anagogical function for their users, inspiring them to start on a journey of return to God and teaching them how to effect that return. Besides giving an account of the experiment devised by Cusanus, the paper delineates the most important elements of his fascinating system, such as the way of mystical ascent, his use of paradox, his conception of God as Infinity, and his conception of God's seeing as the foundation of the existence of all things. Full article
(This article belongs to the Special Issue Words and Images Serving Christianity)
13 pages, 533 KB  
Review
Towards a Vision of Sustainable Health: Definitions, Related Concepts and Key Dimensions
by Samira Amil, Julie-Alexandra Moulin and Éric Gagnon
Sustainability 2026, 18(7), 3586; https://doi.org/10.3390/su18073586 - 6 Apr 2026
Abstract
Contemporary societies are facing converging crises, including environmental degradation, worsening social inequalities, aging populations, and increasingly costly healthcare systems, prompting sustainable health to be proposed as an integrative conceptual perspective for rethinking health, its determinants, and collective action. This narrative review aims to trace the historical evolution of the concept, clarify the vision it offers for public health, and identify its implications for research, policy, and intervention. A literature search (May 2025) was conducted in PubMed, Google Scholar, and Google, with no restrictions on language, time period, or document type. Of 40 relevant documents, 21 were selected for in-depth analysis by two independent reviewers, with duplicate data extraction. The results show that sustainable health broadens the World Health Organisation (WHO) definition of health by incorporating sustainability, intergenerational justice, ecological limits, and social equity. Close to, but distinct from Planetary Health, One Health, and EcoHealth, sustainable health is based on ecological, social and ethical, economic, behavioral, intergenerational, and systemic/intersectoral dimensions. Sustainable health thus emerges as a systemic and transdisciplinary conceptual approach for transforming health systems, living environments, and public policy, requiring further conceptual clarification, robust interdisciplinary research programs, and intersectoral initiatives involving communities. Full article

29 pages, 5271 KB  
Article
An Improved PST-Based Visual Pose Estimation Algorithm for UAV Navigation
by Shengxin Yu, Jinfa Xu and Tianhan Yang
Appl. Sci. 2026, 16(7), 3551; https://doi.org/10.3390/app16073551 - 5 Apr 2026
Abstract
Vision-based pose estimation has been widely applied in unmanned aerial vehicle (UAV) navigation. However, existing visual pose estimation algorithms are highly sensitive to camera imaging distortion, which degrades estimation accuracy, and often suffer from noticeable jitter between frames in dynamic scenarios. To address these issues, this paper proposes an improved visual pose estimation algorithm built upon the Perspective Similar Triangle (PST) geometric model. Using a planar fiducial marker as the observation target, the single-frame pose estimation problem is reformulated as a hierarchical geometric inference framework comprising image point distortion correction, depth recovery based on a planar similar-triangle constraint, and rigid transformation estimation between the camera and world coordinate systems. This formulation improves pose estimation accuracy under distorted imaging conditions. To accommodate distortion variations in practical scenarios, a radial distortion coefficient update method is further designed to adaptively adjust the radial distortion parameters from single-frame observations, ensuring that the distortion model remains consistent with the actual imaging distortion and providing reliable model inputs for distortion correction in pose estimation. In addition, to enhance pose stability in dynamic scenarios, a multi-frame optical center consistency constraint (MOCCC) method is introduced to stabilize the pose estimates. By constraining pose estimation across adjacent frames, using the mean optical center over multiple frames as the optimization objective, the proposed method effectively suppresses pose jitter caused by single-frame observation noise. Finally, a three-degree-of-freedom (3-DOF) attitude motion platform is established, and both static and dynamic experimental scenarios are designed to validate the accuracy and stability of the proposed algorithm. Experimental results demonstrate that the proposed algorithm achieves accurate and stable pose estimation under imaging distortion and small perturbations, exhibiting good robustness and suitability for practical UAV visual navigation applications. Full article
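The multi-frame optical-center consistency idea, penalizing a single-frame estimate for drifting from the mean optical center of recent frames, can be sketched as follows; the class name and window size are illustrative, not from the paper:

```python
from collections import deque

class OpticalCenterSmoother:
    """Sliding-window mean of recent camera optical centers; the distance of
    a new single-frame estimate from that mean serves as a jitter penalty."""

    def __init__(self, window=5):
        self.centers = deque(maxlen=window)

    def penalty(self, center):
        if not self.centers:
            self.centers.append(center)
            return 0.0
        n = len(self.centers)
        # Mean optical center over the retained frames
        mean = [sum(c[i] for c in self.centers) / n for i in range(3)]
        # Euclidean distance of the new estimate from that mean
        d = sum((center[i] - mean[i]) ** 2 for i in range(3)) ** 0.5
        self.centers.append(center)
        return d
```

In an optimizer, this penalty would be weighted against the single-frame reprojection objective, so that pose solutions whose implied camera center jumps between frames are suppressed.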
20 pages, 4228 KB  
Article
Design and Application of an Automated Microinjection System Combining Deep Learning Vision Positioning and Neural Network Sliding Mode Motion Control
by Zhihao Deng, Yifan Xu and Shengzheng Kang
Actuators 2026, 15(4), 208; https://doi.org/10.3390/act15040208 - 5 Apr 2026
Abstract
Microinjection is one of the most established and effective techniques for introducing foreign substances into cells. However, issues such as cumbersome procedures, low success rates, and poor repeatability in manual cell microinjection have seriously restricted its practical applications in biomedical research and engineering. To address these problems, this paper designs an automated microinjection system that combines deep learning visual positioning with adaptive neural network sliding-mode motion control. The system uses a machine vision solution based on the YOLOv8 deep learning target detection algorithm to provide the positional information required for automated microinjection. Stable and fast puncture is then completed by controlling the end effector, composed of a piezoelectric actuator and a displacement amplification mechanism. Since the piezoelectric actuator is strongly nonlinear, the motion control of the end effector adopts a control strategy combining sliding-mode variable structure control and adaptive neural networks to meet the requirement of precise displacement output for microinjection. A host computer control system is also developed to integrate the hardware, visual positioning algorithms, and motion control algorithms for the corresponding automated microinjection tasks. Finally, the effectiveness of the designed automated microinjection system is successfully verified on zebrafish embryos. Full article

29 pages, 7604 KB  
Article
Shading and Geometric Constraint Neural Radiance Field for DSM Reconstruction from Multi-View Satellite Images
by Zhihua Hu, Zhiwen Chen, Yushun Li, Yuxuan Liu, Kao Zhang, Chenguang Zhao and Yongxian Zhang
Remote Sens. 2026, 18(7), 1091; https://doi.org/10.3390/rs18071091 - 5 Apr 2026
Abstract
With the continued development of spatial information technologies, Digital Surface Models (DSMs) have become fundamental data products for urban planning, virtual reality, geographic information systems, and digital-earth applications. Neural Radiance Fields (NeRFs) have achieved remarkable success in multi-view 3D reconstruction in computer vision. Still, their application to DSM generation from satellite imagery remains challenging because of differences in imaging geometry, complex surface structure, and varying illumination conditions. To address these issues, this paper proposes a Shading and Geometric Constraint (SGC) method tailored to satellite photogrammetry and designed to integrate with existing NeRF-based frameworks such as Sat-NeRF and EO-NeRF. First, a physical imaging model based on Lambertian reflectance and spherical harmonics is introduced to represent the complex illumination variations in satellite images. Synthetic images generated by this model provide auxiliary supervision that improves robustness to illumination inconsistency. Second, inspired by classical shading-based refinement methods, we introduce a bilateral edge-preserving geometric constraint. Unlike standard smoothness terms, this constraint uses photometric discrepancies to weight geometric smoothing, thereby preserving sharp building boundaries while smoothing flat surfaces. We integrate the method into two state-of-the-art baselines, Sat-NeRF and EO-NeRF. EO-NeRF+SGC achieves up to a 57.93% reduction in elevation MAE relative to EO-NeRF, which is the largest relative MAE reduction reported in this study. The method also recovers finer structural details and sharper edges than recently published NeRF-based DSM reconstruction methods. Full article

20 pages, 4923 KB  
Article
Vision-Based Robotic System for Selective Weed Detection and Control in Precision Agriculture
by Rubén O. Hernández-Terrazas, Juan M. Xicoténcatl-Pérez, Julio C. Ramos-Fernández, Marco A. Márquez-Vera, José G. Benítez-Morales, Eucario G. Pérez-Pérez, Jorge A. Ruiz-Vanoye, Ocotlán Diaz-Parra, Francisco R. Trejo-Macotela and Alejandro Fuentes-Penna
Agriculture 2026, 16(7), 810; https://doi.org/10.3390/agriculture16070810 - 5 Apr 2026
Abstract
Precision agriculture is a key technology for addressing challenges such as increasing food demand, labour shortages, and the environmental impact of intensive agrochemical use. In this context, selective weed management remains a critical issue due to its direct effect on crop productivity and sustainability. This article presents a simulation-based framework for the design and evaluation of an agricultural robotic module for the detection, classification, and selective intervention of weeds. The proposed system integrates convolutional neural networks and the kinematic model of a 2DOF robot manipulator with 5 links for weed classification and treatment. The system is evaluated in a virtual environment, where camera calibration, perception accuracy, and the performance of the kinematic model are analysed. Quantitative results include detection accuracy, localization error, and intervention success rate under simulated field conditions. The results demonstrate selective weed management and the feasibility of simulation for developing weed control systems, while also identifying the main challenges for real-world deployment. Full article
(This article belongs to the Section Agricultural Technology)

23 pages, 4792 KB  
Article
Distracted Driving Behavior Recognition Based on Improved YOLOv8n-Pose and Multi-Feature Fusion
by Zhuzhou Li, Dudu Guo, Zhenxun Wei, Guoliang Chen, Miao Sun and Yuhao Sun
Appl. Sci. 2026, 16(7), 3532; https://doi.org/10.3390/app16073532 - 3 Apr 2026
Abstract
Distracted driving is one of the primary causes of road traffic accidents. Behavior recognition technology based on machine vision has emerged as a research hotspot due to its non-contact and high-efficiency nature. To address the challenges of complex lighting conditions in the driver's cabin, low detection accuracy for small-scale keypoints, and the difficulty in effectively characterizing behavioral features, this paper proposes a distracted driving behavior recognition method based on an improved YOLOv8n-Pose model and multi-feature fusion. First, the original YOLOv8n-Pose model is optimized. A P2 detection layer is added to enhance the feature extraction capabilities for small-scale human keypoints, and the SE attention module is incorporated to improve the model's robustness under complex lighting conditions. In addition, the loss function is replaced with focal loss to tackle the class imbalance problem, thus forming the YOLOv8n-PSF-Pose keypoint detection network. Subsequently, based on the coordinates of 12 human keypoints extracted by this network, a multi-dimensional feature vector is constructed, which takes joint angles as the core and integrates the relative distances between keypoints and the number of valid keypoints. Finally, a BP neural network is adopted to classify the constructed feature vectors, enabling the accurate recognition of six typical distracted driving behaviors (normal driving, drinking or eating, making phone calls, using mobile phones, operating vehicle infotainment systems, and turning around to fetch items). The experimental results show that the improved YOLOv8n-PSF-Pose model achieves an mAP50 of 93.8% in keypoint detection, 6.7 percentage points higher than the original model. The BP classification model based on multi-feature fusion achieves an F1-score of 97.7% in the behavior recognition task, significantly better than traditional classifiers such as SVM and random forest, and image processing on an NVIDIA RTX 3090TI reaches 45 FPS. The proposed method therefore achieves a good balance between accuracy and speed, providing an effective solution for the real-time, accurate recognition of distracted driving behaviors. Full article
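The joint-angle features at the core of the feature vector described above are typically computed from triplets of keypoints. A minimal sketch — the function name and the example coordinates are hypothetical:

```python
from math import acos, degrees, hypot

def joint_angle(a, b, c):
    """Angle at keypoint b (in degrees) between segments b->a and b->c,
    e.g. an elbow angle from shoulder, elbow and wrist coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = hypot(*v1) * hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos
    return degrees(acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical 2D keypoints: shoulder, elbow, wrist
elbow_angle = joint_angle((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))  # 90 degrees
```

Concatenating such angles with keypoint distances and a valid-keypoint count yields the kind of fixed-length vector a BP (feedforward) classifier can consume.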

18 pages, 6132 KB  
Article
Robust Automated Monitoring of Dairy Cow Rumination via Improved YOLOv11 and BoT-SORT in Complex Environments
by Yingjie Zhao, Longjiang Wang, Silei Tang, Qing Zhai, Ruirui Yu and Zongwei Jia
Animals 2026, 16(7), 1109; https://doi.org/10.3390/ani16071109 - 3 Apr 2026
Abstract
Accurate, non-contact monitoring of rumination behavior is essential for assessing dairy cow health and welfare, as well as for optimizing feeding strategies and herd management in modern precision livestock farming. However, practical deployment in commercial barns faces challenges such as occlusions, variable lighting, and dynamic cow movements. To address this, we developed a robust, automated vision-based framework for continuous rumination monitoring. The core of our system integrates an enhanced object detection algorithm with a robust tracking module, specifically improved to capture subtle behavioral features and maintain identity under complex conditions. Evaluated on a comprehensive dataset collected from commercial settings under various lighting and occlusion scenarios, our framework achieved high detection accuracy (mAP of 96.26%) and reliable tracking performance (multi-object tracking accuracy of 99.2%). This demonstrates its suitability for real-time, on-farm deployment. The study provides a practical, end-to-end solution for fine-grained behavioral analysis in complex environments, offering a tool that can enhance welfare assessment and support decision-making in dairy farm management. The methodological approach is also adaptable to other precision livestock monitoring tasks. Full article
(This article belongs to the Section Animal System and Management)

27 pages, 4459 KB  
Article
TMacaque-FaceNet: Automatic Facial Recognition Based on Vision Transformer for Wild Tibetan Macaques
by Qiyang Gao, Lele Zhang, He Luo, Zhao Lv and Dongpo Xia
Animals 2026, 16(7), 1107; https://doi.org/10.3390/ani16071107 - 3 Apr 2026
Abstract
Within the framework of behavioral ecology and conservation, individual recognition plays a critical role in the research on wild social animals at the individual level. Traditional identification methods often rely on long-term field experience or invasive physical tagging. Recent advances in deep learning enable non-invasive individual recognition under natural conditions; however, the effectiveness of facial detection and identification depends on species-specific facial characteristics, environmental conditions, and dataset scale. In this study, we used 3385 images from 18 identified wild Tibetan macaques (Macaca thibetana) to develop an individual recognition system, TMacaque-FaceNet, integrating You Only Look Once (YOLO) for face detection and a Vision Transformer (ViT) for individual classification. The results showed that the Tibetan macaque face detector achieved a mAP@0.5 of 0.971, with a precision of 0.974 and a recall of 0.931. The individual recognizer for the wild Tibetan macaque social group achieved a top-1 accuracy of 96.33% on the test set. On an event-wise (temporal holdout) validation set comprising 90 images (5 images per individual), the recognizer achieved a top-1 accuracy of 95.56%. Gradient-weighted attention rollout analyses further revealed that the model focused on biologically meaningful facial regions, supporting the interpretability of the recognition process. Our results provide a new automated facial recognition method to non-invasively monitor Tibetan macaque individuals in natural environments. It provides a practical tool to facilitate automated behavioral observation, social network analysis, and long-term population monitoring of wild non-human primates. Full article
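Detection metrics such as the reported mAP@0.5 rest on the intersection-over-union (IoU) between predicted and ground-truth boxes; a prediction counts as correct at the 0.5 threshold when IoU exceeds 0.5. A minimal sketch, assuming boxes given as corner coordinates (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Overlap is zero when the boxes do not intersect
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1/7, below the 0.5 threshold
```

mAP@0.5 then averages precision over recall levels and classes after matching predictions to ground truth at this threshold.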
(This article belongs to the Section Wildlife)
