Robotics, Volume 13, Issue 2 (February 2024) – 14 articles

Cover Story: The Rehab-Exos is a novel upper limb exoskeleton designed for rehabilitation purposes. It is equipped with high-reduction-ratio actuators and compact elastic joints to obtain torque sensors based on strain gauges. Firstly, this work addresses the torque sensor performance and the design aspects that could cause unwanted non-axial moment load crosstalk. Then, a new full-state feedback torque controller is designed by modeling the multi-DOF, non-linear system dynamics and by providing compensation for non-linear effects such as friction and gravity. Lastly, it reports the comparison of the proposed controller with two other benchmark state-feedback controllers in both a transparency test and a haptic rendering evaluation.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
24 pages, 4657 KiB  
Review
Automation’s Impact on Agriculture: Opportunities, Challenges, and Economic Effects
by Khadijeh Bazargani and Taher Deemyad
Robotics 2024, 13(2), 33; https://doi.org/10.3390/robotics13020033 - 19 Feb 2024
Cited by 1 | Viewed by 3278
Abstract
Automation and robotics are the key players in modern agriculture. They offer potential solutions for challenges related to the growing global population, demographic shifts, and economic status. This review paper evaluates the challenges and opportunities of using new technologies and the often-missed link between automation technology and agricultural economics. Through a systematic analysis of the literature, this study explores the potential of automation and robotics in farming practices, as well as their socio-economic effects, and provides strategic recommendations for those involved. For this purpose, various types of robots in different fields of agriculture and the technical feasibility and challenges of using automation have been discussed. Other important factors, including demographic shifts, labor market effects, and economic considerations, have been analyzed. Furthermore, this study investigates the social effects of automation, particularly in terms of employment and workforce adaptation. It finds that, while automation boosts productivity and sustainability, it also causes labor displacement and demands considerable technological investment. This thorough investigation fills a crucial gap by assessing economic sustainability, labor market evolution, and the future of precision agriculture. It also charts a course for further research and policy-making at the intersection of agricultural technology and socio-economic fields. Full article
(This article belongs to the Section Agricultural and Field Robotics)

29 pages, 13682 KiB  
Article
Design and Control of the Rehab-Exos, a Joint Torque-Controlled Upper Limb Exoskeleton
by Domenico Chiaradia, Gianluca Rinaldi, Massimiliano Solazzi, Rocco Vertechy and Antonio Frisoli
Robotics 2024, 13(2), 32; https://doi.org/10.3390/robotics13020032 - 17 Feb 2024
Viewed by 1433
Abstract
This work presents the design of the Rehab-Exos, a novel upper limb exoskeleton designed for rehabilitation purposes. It is equipped with high-reduction-ratio actuators and compact elastic joints to obtain torque sensors based on strain gauges. In this study, we address the torque sensor performance and the design aspects that could cause unwanted non-axial moment load crosstalk. Moreover, a new full-state feedback torque controller is designed by modeling the multi-DOF, non-linear system dynamics and providing compensation for non-linear effects such as friction and gravity. To assess the proposed upper limb exoskeleton in terms of both control system performance and mechanical structure validation, the full-state feedback controller was compared with two other benchmark state-feedback controllers in both a transparency test—ten subjects, two reference speeds—and a haptic rendering evaluation. Both of the experiments were representative of the intended purpose of the device, i.e., physical interaction with patients affected by limited motion skills. In all experimental conditions, our proposed joint torque controller achieved higher performance, providing transparency to the joints and asserting the feasibility of the exoskeleton for assistive applications. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)
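As an illustration of the kind of control law the abstract describes, the minimal Python sketch below implements a generic full-state feedback joint torque loop with gravity and friction feed-forward terms. It is not the authors' controller: the gains, signal names, and compensation functions are assumptions chosen for readability.

```python
def joint_torque_control(tau_des, tau_meas, tau_dot_meas, q, q_dot,
                         k_p=25.0, k_d=0.8,
                         gravity_comp=None, friction_comp=None):
    """One step of a full-state feedback joint torque loop (illustrative only).

    tau_des       desired joint torque [Nm]
    tau_meas      torque measured by the strain-gauge elastic joint [Nm]
    tau_dot_meas  time derivative of the measured torque [Nm/s]
    q, q_dot      joint position [rad] and velocity [rad/s]
    """
    e = tau_des - tau_meas       # torque error
    e_dot = -tau_dot_meas        # its derivative (constant torque reference assumed)
    u = tau_des + k_p * e + k_d * e_dot        # feed-forward + state feedback
    if gravity_comp is not None:
        u += gravity_comp(q)                   # cancel the modeled gravity torque
    if friction_comp is not None:
        u += friction_comp(q_dot)              # cancel the modeled friction torque
    return u
```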

16 pages, 15996 KiB  
Article
Comparison of Machine Learning Approaches for Robust and Timely Detection of PPE in Construction Sites
by Roxana Azizi, Maria Koskinopoulou and Yvan Petillot
Robotics 2024, 13(2), 31; https://doi.org/10.3390/robotics13020031 - 16 Feb 2024
Viewed by 1216
Abstract
Globally, workplace safety is a critical concern, and statistics highlight the widespread impact of occupational hazards. According to the International Labour Organization (ILO), an estimated 2.78 million work-related fatalities occur worldwide each year, with an additional 374 million non-fatal workplace injuries and illnesses. These incidents result in significant economic and social costs, emphasizing the urgent need for effective safety measures across industries. The construction sector in particular faces substantial challenges, contributing a notable share to these statistics due to the nature of its operations. As technology, including machine vision algorithms and robotics, continues to advance, there is a growing opportunity to enhance global workplace safety standards and mitigate the human toll of occupational hazards on a broader scale. This paper explores the development and evaluation of two distinct algorithms designed for the accurate detection of safety equipment on construction sites. The first algorithm leverages the Faster R-CNN architecture, employing ResNet-50 as its backbone for robust object detection. Subsequently, the results obtained from Faster R-CNN are compared with those of the second algorithm, Few-Shot Object Detection (FsDet). The selection of FsDet is motivated by its efficiency in addressing the time-intensive process of compiling datasets for network training in object recognition. The research methodology involves training and fine-tuning both algorithms to assess their performance in safety equipment detection. Comparative analysis aims to evaluate the effectiveness of novel training methods employed in the development of these machine vision algorithms. Full article
(This article belongs to the Special Issue Collection in Honor of Women's Contribution in Robotics)
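For readers wanting a concrete starting point, the sketch below shows the standard torchvision recipe for building a Faster R-CNN detector with a ResNet-50 FPN backbone and replacing its classification head with a custom set of classes. The PPE class list is hypothetical, and this is not the authors' training pipeline.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical PPE classes; index 0 is reserved for the background class.
PPE_CLASSES = ["__background__", "helmet", "hi-vis vest", "gloves", "safety boots"]

def build_ppe_detector(num_classes=len(PPE_CLASSES)):
    # Start from a COCO-pretrained Faster R-CNN with a ResNet-50 FPN backbone.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box predictor so the head classifies PPE categories instead of COCO.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
```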

24 pages, 2758 KiB  
Review
What Affects Human Decision Making in Human–Robot Collaboration?: A Scoping Review
by Yuan Liu, Glenda Caldwell, Markus Rittenbruch, Müge Belek Fialho Teixeira, Alan Burden and Matthias Guertler
Robotics 2024, 13(2), 30; https://doi.org/10.3390/robotics13020030 - 9 Feb 2024
Viewed by 1729
Abstract
The advent of Industry 4.0 has heralded advancements in Human–Robot Collaboration (HRC), necessitating a deeper understanding of the factors influencing human decision making within this domain. This scoping review examines the breadth of research conducted on HRC, with a particular focus on identifying factors that affect human decision making during collaborative tasks and finding potential solutions to improve human decision making. We conducted a comprehensive search across databases including Scopus, IEEE Xplore and ACM Digital Library, employing a snowballing technique to ensure the inclusion of all pertinent studies, and adopting the PRISMA Extension for Scoping Reviews (PRISMA-ScR) for the reviewing process. Several important aspects were identified: (i) studies’ design and setting; (ii) types of human–robot interaction, types of cobots and types of tasks; (iii) factors related to human decision making; and (iv) types of user interfaces for human–robot interaction. Results indicate that cognitive workload and user interface are key in influencing decision making in HRC. Future research should consider social dynamics and psychological safety, use mixed methods for deeper insights and consider diverse cobots and tasks to expand decision-making studies. Emerging XR technologies offer the potential to enhance interaction and thus improve decision making, underscoring the need for intuitive communication and human-centred design. Full article
(This article belongs to the Special Issue Human Factors in Human–Robot Interaction)

18 pages, 12793 KiB  
Article
Design and Analysis of VARONE a Novel Passive Upper-Limb Exercising Device
by Luis Daniel Filomeno Amador, Eduardo Castillo Castañeda, Med Amine Laribi and Giuseppe Carbone
Robotics 2024, 13(2), 29; https://doi.org/10.3390/robotics13020029 - 8 Feb 2024
Viewed by 1463
Abstract
Robots have been widely investigated for active and passive rehabilitation therapy of patients with upper limb disabilities. Nevertheless, the rehabilitation assessment process is often ignored or just qualitatively performed by the physiotherapist implementing chart-based ordinal scales or observation-based measures, which tend to rely on professional experience and lack quantitative analysis. In order to objectively quantify the upper limb rehabilitation progress, this paper presents a noVel pAssive wRist motiOn assessmeNt dEvice (VARONE) having three degrees of freedom (DoFs) based on the gimbal mechanical design. VARONE implements a mechanism of three revolute passive joints with controllable passive resistance. An inertial measurement unit (IMU) sensor is used to quantify the wrist orientation and position, and an encoder module is implemented to obtain the arm positions. The proposed VARONE device can also be used in combination with the previously designed two-DoFs device NURSE (cassiNo-qUeretaro uppeR limb aSsistive dEvice) to perform multiple concurrent assessments and rehabilitation tasks. Analyses and experimental tests have been carried out to demonstrate the engineering feasibility of the intended applications of VARONE. The maximum value registered for the IMU sensor is 36.8 degrees, the minimum value registered is −32.3 degrees, and the registered torque range is approximately −80 to 80 Nmm. The implemented models include kinematics, statics (F.E.M.), and dynamics. Thirty healthy participants took part in an experimental validation. The experimental tests were developed with different goal-defined exercising paths that the participant had to follow. Full article

16 pages, 8410 KiB  
Article
A Path to Industry 5.0 Digital Twins for Human–Robot Collaboration by Bridging NEP+ and ROS
by Enrique Coronado, Toshio Ueshiba and Ixchel G. Ramirez-Alpizar
Robotics 2024, 13(2), 28; https://doi.org/10.3390/robotics13020028 - 1 Feb 2024
Viewed by 1721
Abstract
The integration of heterogeneous hardware and software components to construct human-centered systems for Industry 5.0, particularly human digital twins, presents considerable complexity. Our research addresses this challenge by pioneering a novel approach that harmonizes the techno-centered focus of the Robot Operating System (ROS) with the cross-platform advantages inherent in NEP+ (a human-centered development framework intended to assist users and developers with diverse backgrounds and resources in constructing interactive human–machine systems). We introduce the nep2ros ROS package, aiming to bridge these frameworks and foster a more interconnected and adaptable approach. This initiative can be used to facilitate diverse development scenarios beyond conventional robotics, underpinning a transformative shift in Industry 5.0 applications. Our assessment of NEP+ capabilities includes an evaluation of communication performance utilizing serialization formats like JavaScript Object Notation (JSON) and MessagePack. Additionally, we present a comparative analysis between the nep2ros package and existing solutions, illustrating its efficacy in linking the simulation environment (Unity) and ROS. Moreover, our research demonstrates NEP+’s applicability through an immersive human-in-the-loop collaborative assembly. These findings offer promising prospects for innovative integration possibilities across a broad spectrum of applications, transcending specific platforms or disciplines. Full article
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
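To make the serialization comparison mentioned in the abstract concrete, the small script below measures the encoded payload size of the same pose-like message in JSON and MessagePack, using the `json` standard library and the `msgpack` package. The message fields are invented for illustration; this is not the NEP+ or nep2ros API.

```python
import json
import msgpack  # pip install msgpack

# A pose-like message of the kind a digital-twin bridge might stream (illustrative).
msg = {"frame": "wrist_3_link",
       "t": 17.042,
       "position": [0.412, -0.087, 0.553],
       "orientation": [0.0, 0.707, 0.0, 0.707]}

as_json = json.dumps(msg).encode("utf-8")
as_msgpack = msgpack.packb(msg)

print(f"JSON payload:        {len(as_json)} bytes")
print(f"MessagePack payload: {len(as_msgpack)} bytes")  # typically noticeably smaller
assert msgpack.unpackb(as_msgpack) == msg                # round-trip check
```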

20 pages, 5106 KiB  
Article
A Novel Approach to Simulating Realistic Exoskeleton Behavior in Response to Human Motion
by Zhejun Yao, Seyed Milad Mir Latifi, Carla Molz, David Scherb, Christopher Löffelmann, Johannes Sänger, Jörg Miehling, Sandro Wartzack, Andreas Lindenmann, Sven Matthiesen and Robert Weidner
Robotics 2024, 13(2), 27; https://doi.org/10.3390/robotics13020027 - 1 Feb 2024
Cited by 1 | Viewed by 1490
Abstract
Simulation models are a valuable tool for exoskeleton development, especially for system optimization and evaluation. They allow an assessment of the performance and effectiveness of exoskeletons even at an early stage of their development without physical realization. Due to the close physical interaction between the exoskeleton and the user, accurate modeling of the human–exoskeleton interaction in defined scenarios is essential for exoskeleton simulations. This paper presents a novel approach to simulate exoskeleton motion in response to human motion and the interaction forces at the physical interfaces between the human and the exoskeleton. Our approach uses a multibody model of a shoulder exoskeleton in MATLAB R2021b and imports human motion via virtual markers from a digital human model to simulate human–exoskeleton interaction. To validate the human-motion-based approach, simulated exoskeleton motion and interaction forces are compared with experimental data from a previous lab study. The results demonstrate the feasibility of our approach to simulate human–exoskeleton interaction based on human motion. In addition, the approach is used to optimize the support profile of an exoskeleton, indicating its potential to assist exoskeleton development prior to physical prototyping. Full article
(This article belongs to the Section Neurorobotics)

7128 KiB  
Article
A Two Stage Nonlinear I/O Decoupling and Partially Wireless Controller for Differential Drive Mobile Robots
by Nikolaos D. Kouvakas, Fotis N. Koumboulis and John Sigalas
Robotics 2024, 13(2), 26; https://doi.org/10.3390/robotics13020026 - 31 Jan 2024
Viewed by 1351
Abstract
Differential drive mobile robots, widely used in several industrial and domestic applications, face increasingly demanding requirements for precision and maneuverability. In the present paper, the problem of independently controlling the velocity and orientation angle of a differential drive mobile robot is investigated by developing an appropriate two-stage nonlinear controller embedded on board, using the measurements of the speed and acceleration of the two wheels, as well as remote measurements of the orientation angle and its rate. The model of the system is presented in a nonlinear state space form that includes unknown additive terms arising from external disturbances and actuator faults. Based on the nonlinear model of the system, the respective I/O relation is derived, and a two-stage nonlinear measurable output feedback controller, decomposed into an internal and an external controller, is designed. The internal controller aims to produce a decoupled inner closed-loop system of linear form, regulating the linear velocity and angular velocity of the mobile robot independently. The internal controller is of the nonlinear PD type and uses real-time measurements of the angular velocities of the active wheels of the vehicle, as well as the respective accelerations. The external controller aims at the regulation of the orientation angle of the vehicle. It is of a linear, delayed PD feedback form, using feedback from the remote measurements of the orientation angle and angular velocity of the vehicle, which are transmitted to the controller through a wireless network. Analytic formulae are derived for the parameters of the external controller to ensure the stability of the closed-loop system, even in the presence of the wireless transmission delays, as well as asymptotic command following for the orientation angle. To compensate for measurement noise, external disturbances, and actuator faults, a metaheuristic algorithm is proposed to evaluate the remaining free controller parameters. The performance of the proposed control scheme is evaluated through a series of computational experiments, demonstrating satisfactory behavior. Full article
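The abstract's inner/outer loop structure builds on the standard differential drive kinematic model. The sketch below shows only that textbook mapping from wheel speeds to body velocities, plus a simple outer PD law that turns an orientation error into a yaw-rate command; the wheel radius, track width, and gains are assumed values, and this is not the paper's controller.

```python
import math

def body_velocities(omega_r, omega_l, r=0.05, d=0.30):
    """Map wheel angular velocities [rad/s] to body linear/angular velocity."""
    v = r * (omega_r + omega_l) / 2.0       # forward speed [m/s]
    omega = r * (omega_r - omega_l) / d     # yaw rate [rad/s]
    return v, omega

def orientation_pd(theta_ref, theta_meas, theta_rate_meas, kp=2.0, kd=0.4):
    """Outer-loop PD on the (remotely measured) orientation -> yaw-rate command."""
    err = math.atan2(math.sin(theta_ref - theta_meas),
                     math.cos(theta_ref - theta_meas))   # wrapped angle error
    return kp * err - kd * theta_rate_meas
```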

21 pages, 4275 KiB  
Article
Force-Sensor-Free Implementation of a Hybrid Position–Force Control for Overconstrained Cable-Driven Parallel Robots
by Luca Guagliumi, Alessandro Berti, Eros Monti, Marc Fabritius, Christoph Martin and Marco Carricato
Robotics 2024, 13(2), 25; https://doi.org/10.3390/robotics13020025 - 31 Jan 2024
Cited by 1 | Viewed by 1296
Abstract
This paper proposes a hybrid position–force control strategy for overconstrained cable-driven parallel robots (CDPRs). Overconstrained CDPRs have more cables (m) than degrees of freedom (n), and the idea of the proposed controller is to control n cables in length and the other m − n ones in force. Two controller implementations are developed, one using the motor torque and one using the motor following-error in the feedback loop for cable force control. A friction model of the robot kinematic chain is introduced to improve the accuracy of the cable force estimation. Compared to similar approaches available in the literature, the novelty of the proposed control strategy is that it does not rely on force sensors, which reduces the hardware complexity and cost. The developed control scheme is compared to classical methods that exploit force sensors and to a pure inverse kinematic controller. The experimental results show that the new controller provides good tracking of the desired cable forces, maintaining them within the given bounds. The positioning accuracy and repeatability are similar to those obtained with the other controllers. The new approach also allows an online switch between position and force control of cables. Full article
(This article belongs to the Section Industrial Robots and Automation)
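As a rough illustration of the sensorless idea described above, the sketch below estimates cable tension from motor torque after subtracting a simple Coulomb-plus-viscous friction term, and splits the cable set into length-controlled and force-controlled subsets. The drum radius, gear ratio, and friction coefficients are placeholder values, not the authors' identified model.

```python
def estimate_cable_force(motor_torque, cable_speed, drum_radius=0.04,
                         gear_ratio=5.0, coulomb=0.15, viscous=0.02):
    """Sensorless cable-force estimate: motor torque minus a simple friction model."""
    sign = 1.0 if cable_speed >= 0 else -1.0
    friction = coulomb * sign + viscous * cable_speed             # motor-side friction [Nm]
    return gear_ratio * (motor_torque - friction) / drum_radius   # cable tension [N]

def hybrid_setpoints(all_cables, n_dof, length_targets, force_target):
    """Control the first n cables in length and the remaining m - n cables in force."""
    length_ctrl = dict(zip(all_cables[:n_dof], length_targets))
    force_ctrl = {c: force_target for c in all_cables[n_dof:]}
    return length_ctrl, force_ctrl
```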

17 pages, 18355 KiB  
Article
Development of a Pneumatically Actuated Quadruped Robot Using Soft–Rigid Hybrid Rotary Joints
by Zhujin Jiang, Yan Wang and Ketao Zhang
Robotics 2024, 13(2), 24; https://doi.org/10.3390/robotics13020024 - 29 Jan 2024
Viewed by 1614
Abstract
Inspired by musculoskeletal systems in nature, this paper presents a pneumatically actuated quadruped robot which utilizes two soft–rigid hybrid rotary joints in each of the four two-degree-of-freedom (DoF) planar legs. We first introduce the mechanical design of the rotary joint and the integrated quadruped robot with minimized onboard electronic components. Based on the unique design of the rotary joint, a joint-level PID-based controller was adopted to control the angular displacement of the hip and knee joints of the quadruped robot. Typical gait patterns for legged locomotion, including the walking and trotting gaits, were investigated and designed. Proof-of-concept prototypes of the rotary joint and the quadruped robot were built and tested. The experimental results demonstrated that the rotary joint generated a maximum torque of 5.83 Nm and the quadruped robot was capable of locomotion, achieving a trotting gait of 187.5 mm/s with a frequency of 1.25 Hz and a walking gait of 12.8 mm/s with a gait cycle of 7.84 s. This study reveals that, compared to soft-legged robots, the quadruped robot has a simplified analytical model for motion control, size scalability, and high movement speeds, thereby exhibiting significant potential for applications in extreme environments. Full article
(This article belongs to the Special Issue Legged Robots into the Real World, Volume II)
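To illustrate how a trotting gait reference can feed the joint-level PID controllers mentioned above, the sketch below phases the diagonal leg pairs half a cycle apart and produces sinusoidal hip/knee angle references at the 1.25 Hz trot frequency reported in the abstract. The amplitudes and offsets are assumed values, not the prototype's parameters.

```python
import math

TROT_PHASE = {"LF": 0.0, "RH": 0.0, "RF": 0.5, "LH": 0.5}  # diagonal pairs move in phase

def joint_reference(t, leg, freq=1.25, hip_amp=15.0, knee_amp=20.0,
                    hip_offset=0.0, knee_offset=30.0):
    """Sinusoidal hip/knee angle references [deg] for a trot gait (illustrative values)."""
    phase = 2.0 * math.pi * (freq * t + TROT_PHASE[leg])
    hip = hip_offset + hip_amp * math.sin(phase)
    knee = knee_offset + knee_amp * max(0.0, math.sin(phase))  # extra flexion only in swing
    return hip, knee

# Example: references for all four legs 0.1 s into the gait cycle.
print({leg: joint_reference(0.1, leg) for leg in TROT_PHASE})
```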

21 pages, 57501 KiB  
Article
A 3D World Interpreter System for Safe Autonomous Crane Operation
by Frank Bart ter Haar, Frank Ruis and Bastian Thomas van Manen
Robotics 2024, 13(2), 23; https://doi.org/10.3390/robotics13020023 - 26 Jan 2024
Viewed by 1505
Abstract
In an effort to improve short-sea shipping in Europe, we present a 3D world interpreter (3DWI) system as part of a robotic container-handling system. The 3DWI is an advanced sensor suite combined with AI-based software and the communication infrastructure to connect to both the crane control and the shore control center. On input of LiDAR data and stereo captures, the 3DWI builds a world model of the operating environment and detects containers. The 3DWI and crane control are the core of an autonomously operating crane that monitors the environment and may trigger an emergency stop while alerting the remote operator of the danger. During container handling, the 3DWI scans for human activity and continuously updates a 3D-Twin model for the operator, enabling situational awareness. The presented methodology includes the sensor suite design, creation of the world model and the 3D-Twin, innovations in AI-detection software, and interaction with the crane and operator. Supporting experiments quantify the performance of the 3DWI, its AI detectors, and safety measures; the detectors reach the top of VisDrone’s leaderboard and the pilot tests show the safe autonomous operation of the crane. Full article
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)

18 pages, 8968 KiB  
Article
BIMBot for Autonomous Laser Scanning in Built Environments
by Nanying Liang, Yu Pin Ang, Kaiyun Yeo, Xiao Wu, Yuan Xie and Yiyu Cai
Robotics 2024, 13(2), 22; https://doi.org/10.3390/robotics13020022 - 26 Jan 2024
Viewed by 1471
Abstract
Accurate and complete 3D point clouds are essential in creating as-built building information modeling (BIM) models, although there are challenges in automating the process for 3D point cloud creation in complex environments. In this paper, an autonomous scanning system named BIMBot is introduced, which integrates advanced light detection and ranging (LiDAR) technology with robotics to create 3D point clouds. Using our specially developed algorithmic pipeline for point cloud processing, iterative registration refinement, and next best view (NBV) calculation, this system facilitates an efficient, accurate, and fully autonomous scanning process. The BIMBot’s performance was validated using a case study in a campus laboratory, featuring complex structural and mechanical, electrical, and plumbing (MEP) elements. The experimental results showed that the autonomous scanning system produced 3D point cloud mappings in fewer scans than the manual method while maintaining comparable detail and accuracy, demonstrating its potential for wider application in complex built environments. Full article
(This article belongs to the Special Issue The State-of-the-Art of Robotics in Asia)
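The next best view step mentioned in the abstract is, at its core, a greedy selection of the candidate scan pose expected to reveal the most unseen geometry. The toy sketch below expresses that idea over precomputed visibility sets of voxel ids; the data and pose names are invented, and the BIMBot pipeline's actual NBV criterion may differ.

```python
def next_best_view(candidates, visible_from, seen):
    """Greedy NBV: choose the scan pose that reveals the most not-yet-seen voxels."""
    return max(candidates, key=lambda pose: len(visible_from[pose] - seen))

# Toy example: three candidate poses with precomputed visibility sets (voxel ids).
visible_from = {"A": {1, 2, 3}, "B": {3, 4, 5, 6}, "C": {6, 7}}
seen = {1, 2, 3}
print(next_best_view(["A", "B", "C"], visible_from, seen))  # -> "B"
```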

1 page, 153 KiB  
Retraction
RETRACTED: Abdallah, I.B.; Bouteraa, Y. A Newly-Designed Wearable Robotic Hand Exoskeleton Controlled by EMG Signals and ROS Embedded Systems. Robotics 2023, 12, 95
by Ismail Ben Abdallah and Yassine Bouteraa
Robotics 2024, 13(2), 21; https://doi.org/10.3390/robotics13020021 - 26 Jan 2024
Viewed by 1242
Abstract
The Robotics Editorial Office retracts the article, “A Newly Designed Wearable Robotic Hand Exoskeleton Controlled by EMG Signals and ROS Embedded Systems” [...] Full article
20 pages, 46648 KiB  
Article
Generating a Dataset for Semantic Segmentation of Vine Trunks in Vineyards Using Semi-Supervised Learning and Object Detection
by Petar Slaviček, Ivan Hrabar and Zdenko Kovačić
Robotics 2024, 13(2), 20; https://doi.org/10.3390/robotics13020020 - 23 Jan 2024
Cited by 1 | Viewed by 1667
Abstract
This article describes an experimentally tested approach using semi-supervised learning for generating new datasets for semantic segmentation of vine trunks with very little human-annotated data, resulting in significant savings in time and resources. The creation of such datasets is a crucial step towards the development of autonomous robots for vineyard maintenance. In order for a mobile robot platform to perform a vineyard maintenance task, such as suckering, a semantically segmented view of the vine trunks is required. The robot must recognize the shape and position of the vine trunks and adapt its movements and actions accordingly. Starting with vine trunk recognition and ending with semi-supervised training for semantic segmentation, we have shown that the need for human annotation, which is usually a time-consuming and expensive process, can be significantly reduced if a dataset for object (vine trunk) detection is available. In this study, we generated about 35,000 images with semantic segmentation of vine trunks using only 300 images annotated by a human. This method eliminates about 99% of the time that would be required to manually annotate the entire dataset. Based on the evaluated dataset, we compared different semantic segmentation model architectures to determine the most suitable one for applications with mobile robots. A balance between accuracy, speed, and memory requirements was determined. The model with the best balance achieved a validation accuracy of 81% and a processing time of only 5 ms. The results of this work, obtained during experiments in a vineyard on karst, show the potential of intelligent annotation of data, reducing the time required for labeling and thus paving the way for further innovations in machine learning. Full article
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)
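A core ingredient of the semi-supervised workflow described above is turning a teacher model's predictions on unlabeled images into pseudo-labels while discarding uncertain pixels. The sketch below shows one common way to do this for a binary trunk/background mask; the confidence threshold and ignore index are assumed conventions, not the authors' exact settings.

```python
import numpy as np

IGNORE_INDEX = 255  # pixels excluded from the training loss (assumed convention)

def pseudo_label_from_probs(trunk_prob, conf_thr=0.9):
    """Turn a teacher's per-pixel trunk probability map into a pseudo-label.

    Confident pixels become trunk (1) or background (0); uncertain pixels are
    marked IGNORE_INDEX so they do not contribute to the segmentation loss.
    """
    label = (trunk_prob >= 0.5).astype(np.uint8)
    confident = np.maximum(trunk_prob, 1.0 - trunk_prob) >= conf_thr
    label[~confident] = IGNORE_INDEX
    return label

# Example: a 4x4 probability map from a hypothetical teacher network.
probs = np.array([[0.97, 0.92, 0.40, 0.03],
                  [0.95, 0.88, 0.55, 0.02],
                  [0.91, 0.60, 0.10, 0.01],
                  [0.93, 0.45, 0.05, 0.00]])
print(pseudo_label_from_probs(probs))
```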
