Robotics, Volume 13, Issue 7 (July 2024) – 11 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
28 pages, 10217 KiB  
Article
ANN Enhanced Hybrid Force/Position Controller of Robot Manipulators for Fiber Placement
by José Francisco Villa-Tiburcio, José Antonio Estrada-Torres, Rodrigo Hernández-Alvarado, Josue Rafael Montes-Martínez, Darío Bringas-Posadas and Edgar Adrián Franco-Urquiza
Robotics 2024, 13(7), 105; https://doi.org/10.3390/robotics13070105 - 13 Jul 2024
Viewed by 325
Abstract
In practice, most industrial robot manipulators use PID (Proportional + Integral + Derivative) controllers, thanks to their simplicity and adequate performance under certain conditions. Normally, this type of controller performs well in tasks where the robot moves freely, without contact with its environment. However, complications arise in applications such as the AFP (Automated Fiber Placement) process, where a high degree of precision and repeatability is required in the control of parameters such as position and compression force for the production of composite parts. The control of these parameters is a major challenge for the quality and productivity of the final product, mainly due to the complex geometry of the part and the type of tooling with which the AFP system is equipped. In recent decades, several control system approaches have been proposed in the literature, such as methodologies based on classical, adaptive, or sliding-mode control theory. Nevertheless, such strategies struggle to adapt to changing dynamics, since their design considers only a limited set of disturbances. This article presents a novel intelligent control algorithm based on back-propagation neural networks (BP-NNs) combined with classical PID/PI control schemes for force/position control in manipulator robots. The PID/PI controllers are responsible for the main control action, while the BP-NNs estimate and compensate online for the dynamic variations of the AFP process. It is proven that the proposed control achieves both stability in the Lyapunov sense for the desired interaction force between the end-effector and the environment, and position trajectory tracking for the robot tip in Cartesian space.
The performance and efficiency of the proposed control are evaluated by numerical simulations in the MATLAB/Simulink environment; the results show that the errors for the desired force and the tracking of complex trajectories are reduced below 5% in root mean square error (RMSE). Full article
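The hybrid scheme described above pairs a conventional feedback controller with an online-trained network that absorbs unmodelled dynamics. The following is a minimal sketch of that general idea, not the authors' implementation: the plant, gains, and network size are all illustrative, and a one-dimensional mass with unknown friction stands in for the manipulator.

```python
import numpy as np

class PID:
    """Discrete PID controller with stored integral and previous-error state."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

class NNCompensator:
    """One-hidden-layer network trained online (back-propagation style) to supply
    a corrective control term, standing in for the paper's BP-NN compensator."""
    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.w2 = rng.normal(0.0, 0.1, n_hidden)
        self.lr = lr

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)
        self.h = np.tanh(self.w1 @ self.x)
        return float(self.w2 @ self.h)

    def update(self, err):
        # Gradient-style step: nudge the output in the direction that
        # reduces the current tracking error.
        self.w2 += self.lr * err * self.h
        self.w1 += self.lr * err * np.outer(self.w2 * (1.0 - self.h**2), self.x)

# Hypothetical 1-D plant: unit mass with unknown viscous friction that the
# network learns to compensate while the PID does the main control action.
dt, mass, friction = 0.01, 1.0, 2.0
pid = PID(kp=40.0, ki=5.0, kd=8.0, dt=dt)
nn = NNCompensator(n_in=2, n_hidden=8)
pos, vel, target = 0.0, 0.0, 1.0
for _ in range(2000):                      # 20 s of simulated time
    err = target - pos
    u = pid.step(err) + nn.forward([err, vel])
    nn.update(err)
    acc = (u - friction * vel) / mass      # plant dynamics (Euler step)
    vel += acc * dt
    pos += vel * dt
```

In the paper's setting, the compensated quantities are the interaction force and Cartesian position of a manipulator rather than a scalar position, but the division of labor is the same: the PID/PI loop carries the main action while the network trims the residual.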
22 pages, 4999 KiB  
Article
A Framework for Enhanced Human–Robot Collaboration during Disassembly Using Digital Twin and Virtual Reality
by Timon Hoebert, Stephan Seibel, Manuel Amersdorfer, Markus Vincze, Wilfried Lepuschitz and Munir Merdan
Robotics 2024, 13(7), 104; https://doi.org/10.3390/robotics13070104 - 12 Jul 2024
Viewed by 315
Abstract
This paper presents a framework that integrates digital twin and virtual reality (VR) technologies to improve the efficiency and safety of human–robot collaborative systems in the disassembly domain. Handling end-of-life electronic products is increasingly complex, and the related disassembly tasks are characterized by variabilities such as rust, deformation, and diverse part geometries, so traditional industrial robots face significant challenges in this domain. These challenges require adaptable and flexible automation solutions that can work safely alongside human workers. We developed an architecture to address these challenges and support system configuration, training, and operational monitoring. Our framework incorporates a digital twin to provide a real-time virtual representation of the physical disassembly process, allowing for immediate feedback and dynamic adjustment of operations. In addition, VR is used to simulate and optimize the workspace layout, improve human–robot interaction, and facilitate safe and effective training scenarios without the need for physical prototypes. A unique case study is presented, where the collaborative system is specifically applied to the disassembly of antenna amplifiers, illustrating the potential of our comprehensive approach to facilitate engineering processes and enhance collaborative safety. Full article
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
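The core of a digital twin, as used above, is a virtual state that is kept in lockstep with the physical system and can flag divergence for immediate operator feedback. A minimal sketch of that mirroring loop follows; the class, API, and tolerance are illustrative inventions, not the paper's framework.

```python
class DigitalTwin:
    """Minimal digital-twin mirror: keeps a virtual copy of the physical robot's
    joint state in sync and flags divergence beyond a tolerance, so the operator
    gets immediate feedback when model and reality disagree."""
    def __init__(self, n_joints, divergence_tol=0.05):
        self.virtual = [0.0] * n_joints   # virtual joint positions (rad)
        self.tol = divergence_tol

    def sync(self, measured):
        """Ingest a telemetry sample from the physical robot. Returns True if the
        virtual model was still within tolerance before this update."""
        drift = max(abs(v - m) for v, m in zip(self.virtual, measured))
        self.virtual = list(measured)     # re-align the twin to reality
        return drift <= self.tol

twin = DigitalTwin(n_joints=3)
ok_first = twin.sync([0.01, 0.02, 0.00])   # small startup offset: in tolerance
ok_second = twin.sync([0.30, 0.02, 0.00])  # joint 1 jumped: divergence flagged
```

A full framework like the one in the paper would render this virtual state in VR and close the loop back to the robot; the sketch only shows the synchronization and divergence check at the center of that idea.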
26 pages, 10151 KiB  
Article
Development, Experimental, and Numerical Characterisation of Novel Flexible Strain Sensors for Soft Robotics Applications
by Sylvester Ndidiamaka Nnadi, Ivor Ajadalu, Amir Rahmani, Aliyu Aliyu, Khaled Elgeneidy, Allahyar Montazeri and Behnaz Sohani
Robotics 2024, 13(7), 103; https://doi.org/10.3390/robotics13070103 - 11 Jul 2024
Viewed by 279
Abstract
Medical and agricultural robots that interact with living tissue or pick fruit require tactile and flexible sensors to minimise or eliminate damage. Until recently, research has focused on the development of robots made of rigid materials, such as metal or plastic. Due to their complex configuration, poor spatial adaptability, and low flexibility, rigid robots are not fully applicable in some special environments, such as limb rehabilitation, gripping of fragile objects, human–machine interaction, and locomotion, all of which must be performed accurately and safely to be useful. However, the design and manufacture of soft robot parts that interact with living tissue or fragile objects are not as straightforward. Given that hyper-elasticity and conductivity are involved, conventional (subtractive) manufacturing can result in wasted materials (which are expensive), incompatible parts due to different physical properties, and high costs. In this work, additive manufacturing (3D printing) is used to produce a conductive, composite flexible sensor. Its electrical response was tested under various physical conditions. Finite element analysis (FEA) was used to characterise its deformation and stress behaviour for optimisation to achieve functionality and durability. Also, a nonlinear regression model was developed for the sensor’s performance. Full article
(This article belongs to the Special Issue Soft Robotics: Fusing Function with Structure)
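A nonlinear regression model for a strain sensor, as mentioned above, typically maps applied strain to the sensor's electrical response. The sketch below fits a quadratic model to synthetic calibration data; the data, model form, and coefficients are invented for illustration and are not the paper's actual sensor characterisation.

```python
import numpy as np

# Hypothetical calibration data: applied strain (%) vs. relative resistance of
# a printed conductive flexible sensor (illustrative values, not from the paper).
strain = np.linspace(0.0, 30.0, 16)
resistance = 1.0 + 0.08 * strain + 0.004 * strain**2
resistance += np.random.default_rng(1).normal(0.0, 0.02, strain.size)  # noise

# Fit a quadratic (nonlinear-in-strain) regression model as a stand-in for
# whatever functional form best matches the measured response.
coeffs = np.polyfit(strain, resistance, deg=2)
predicted = np.polyval(coeffs, strain)

# Goodness of fit (R^2): how well the model captures the sensor response.
ss_res = np.sum((resistance - predicted) ** 2)
ss_tot = np.sum((resistance - resistance.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

In practice the model order and form would be chosen from the measured electrical response under the various physical conditions the authors describe, with R² (or a held-out error) deciding between candidates.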
20 pages, 17993 KiB  
Article
Semantic 3D Reconstruction for Volumetric Modeling of Defects in Construction Sites
by Dimitrios Katsatos, Paschalis Charalampous, Patrick Schmidt, Ioannis Kostavelis, Dimitrios Giakoumis, Lazaros Nalpantidis and Dimitrios Tzovaras
Robotics 2024, 13(7), 102; https://doi.org/10.3390/robotics13070102 - 11 Jul 2024
Viewed by 282
Abstract
The appearance of construction defects in buildings can arise from a variety of factors, ranging from issues during the design and construction phases to problems that develop over time with the lifecycle of a building. These defects require repairs, often in the context of a significant shortage of skilled labor. In addition, such work is often physically demanding and carried out in hazardous environments. Consequently, adopting autonomous robotic systems in the construction industry becomes essential, as they can relieve labor shortages, promote safety, and enhance the quality and efficiency of repair and maintenance tasks. To this end, the present study introduces an end-to-end framework towards the automation of shotcreting tasks in cases where construction or repair actions are required. The proposed system can scan a construction scene using a stereo-vision camera mounted on a robotic platform, identify regions of defects, and reconstruct a 3D model of these areas. Furthermore, it automatically calculates the 3D volumes that must be constructed to treat a detected defect. To achieve this, the developed software framework employs semantic segmentation modules based on YOLOv8m-seg and SiamMask, and 3D reconstruction modules based on InfiniTAM and RTAB-Map. In addition, the segmented 3D regions are processed by the volumetric modeling component, which determines the amount of concrete needed to fill the defects and generates the exact 3D model required to repair the investigated defect. Finally, the precision and effectiveness of the proposed pipeline are evaluated in actual construction site scenarios, featuring reinforcement bars as defective areas. Full article
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)
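The volumetric step above reduces, at its simplest, to integrating the space a defect occupies in the reconstructed model. One common discretisation is a voxel grid; the sketch below shows that calculation under invented dimensions and is not the paper's volumetric modeling component.

```python
import numpy as np

def defect_volume(occupancy, voxel_size):
    """Volume (m^3) of a defect region given a boolean 3-D occupancy grid.

    occupancy[i, j, k] is True where the reconstructed surface deviates from
    the nominal wall (e.g. after semantic segmentation has isolated the
    region). Each voxel is a cube with side `voxel_size` metres.
    """
    return int(np.count_nonzero(occupancy)) * voxel_size**3

# Hypothetical 20 cm x 20 cm x 10 cm cavity discretised at 1 cm resolution.
grid = np.ones((20, 20, 10), dtype=bool)
volume = defect_volume(grid, voxel_size=0.01)   # ~0.004 m^3 of concrete
```

Real defect regions are irregular, so the grid would be filled by testing each voxel against the reconstructed mesh rather than set wholesale; the volume formula is unchanged.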
29 pages, 15983 KiB  
Article
Analysis of Attack Intensity on Autonomous Mobile Robots
by Elena Basan, Alexander Basan, Alexey Mushenko, Alexey Nekrasov, Colin Fidge and Alexander Lesnikov
Robotics 2024, 13(7), 101; https://doi.org/10.3390/robotics13070101 - 10 Jul 2024
Viewed by 275
Abstract
Autonomous mobile robots (AMRs) offer a remarkable combination of mobility, adaptability, and an innate capacity for obstacle avoidance. They are exceptionally well-suited for a wide range of applications but usually operate in uncontrolled, non-deterministic environments, so the analysis and classification of security events are very important for their safe operation. In this regard, we considered the influence of different types of attacks on AMR navigation systems in order to subdivide them into classes, and unified the effect of attacks on the system through their level of consequences and impact. We then built a model of an attack on a system, taking into account five methods of attack implementation, and identified unified response thresholds valid for any type of parameter, which allows universal correlation rules to be created and simplifies this process, as the trigger threshold is related to the degree of impact that the attack has on the affected subsystem. We also developed a methodology for classifying incidents and identifying key components of the system based on ontological models, which makes it possible to predict risks and select the optimal system configuration. The obtained results are important in the context of separating different types of destructive effects based on attack classes. Our study showed that it is sometimes difficult to divide spoofing attacks into classes by assessing only one parameter, since an attacker can use a complex attack scenario, mixing the stages of the scenarios. We then showed how adding an attack intensity factor can make classification more flexible. The connections between subsystems and parameters, as well as the attack impact patterns, were determined. Finally, a set of unique rules was developed to classify destructive effects with uniform response thresholds for each parameter. In this scheme, both the number of parameters and the types of parameter values can be extended. Full article
(This article belongs to the Special Issue UAV Systems and Swarm Robotics)
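The unified-threshold idea above can be illustrated by normalising every telemetry parameter onto a common scale so a single trigger threshold applies to all of them, with a mean-deviation intensity factor to separate mixed scenarios. The parameter names, baselines, thresholds, and class labels below are hypothetical, not the paper's rule set.

```python
def normalise(value, baseline, scale):
    """Unified response: deviation of a telemetry parameter from its baseline,
    mapped to a common 0..1 scale so one threshold fits any parameter."""
    return min(abs(value - baseline) / scale, 1.0)

def classify(deviations, threshold=0.5):
    """Classify an event from per-parameter normalised deviations, plus an
    intensity factor (mean deviation) to disambiguate mixed attack scenarios."""
    triggered = {k: d for k, d in deviations.items() if d >= threshold}
    intensity = sum(deviations.values()) / len(deviations)
    if not triggered:
        return "nominal", intensity
    if intensity > 0.8:
        return "high-intensity attack", intensity
    return f"suspected attack via {sorted(triggered)}", intensity

readings = {
    "gps_drift": normalise(12.0, 0.0, 10.0),   # metres of position drift
    "rssi": normalise(-80.0, -60.0, 40.0),     # signal-strength anomaly
    "cpu_load": normalise(0.35, 0.30, 0.50),   # onboard processing load
}
label, intensity = classify(readings)
```

Because every parameter shares the same 0..1 scale, adding a new sensor to the rule set only requires a baseline and a scale, which is the simplification the unified thresholds buy.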
33 pages, 3278 KiB  
Review
A Practical Roadmap to Learning from Demonstration for Robotic Manipulators in Manufacturing
by Alireza Barekatain, Hamed Habibi and Holger Voos
Robotics 2024, 13(7), 100; https://doi.org/10.3390/robotics13070100 - 10 Jul 2024
Viewed by 404
Abstract
This paper provides a structured and practical roadmap for practitioners to integrate learning from demonstration (LfD) into manufacturing tasks, with a specific focus on industrial manipulators. Motivated by the paradigm shift from mass production to mass customization, it is crucial to have an easy-to-follow roadmap that practitioners with moderate expertise can use to transform existing robotic processes into customizable LfD-based solutions. To realize this transformation, we devise the key questions of “What to Demonstrate”, “How to Demonstrate”, “How to Learn”, and “How to Refine”. To work through these questions, our comprehensive guide offers a questionnaire-style approach, highlighting key steps from problem definition to solution refinement. This paper equips both researchers and industry professionals with actionable insights to deploy LfD-based solutions effectively. By tailoring the refinement criteria to manufacturing settings, this paper addresses related challenges and strategies for enhancing LfD performance in manufacturing contexts. Full article
(This article belongs to the Special Issue Integrating Robotics into High-Accuracy Industrial Operations)
20 pages, 5377 KiB  
Article
Guaranteed Trajectory Tracking under Learned Dynamics with Contraction Metrics and Disturbance Estimation
by Pan Zhao, Ziyao Guo, Yikun Cheng, Aditya Gahlawat, Hyungsoo Kang and Naira Hovakimyan
Robotics 2024, 13(7), 99; https://doi.org/10.3390/robotics13070099 - 30 Jun 2024
Viewed by 589
Abstract
This paper presents a contraction-based learning control architecture that allows for using model learning tools to learn matched model uncertainties while guaranteeing trajectory tracking performance during the learning transients. The architecture relies on a disturbance estimator to estimate the pointwise value of the uncertainty, i.e., the discrepancy between a nominal model and the true dynamics, with pre-computable estimation error bounds, and a robust Riemannian energy condition for computing the control signal. Under certain conditions, the controller guarantees exponential trajectory convergence during the learning transients, while learning can improve robustness and facilitate better trajectory planning. Simulation results validate the efficacy of the proposed control architecture. Full article
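The disturbance estimator's core idea, the pointwise discrepancy between the nominal model and the measured dynamics, fits in a few lines. The scalar system and numbers below are invented for illustration; the paper's estimator additionally provides pre-computable error bounds, which this sketch does not.

```python
def estimate_disturbance(x_dot_meas, f_nom, g, x, u):
    """Pointwise uncertainty estimate: the gap between the measured state
    derivative and the nominal model's prediction. In the matched-uncertainty
    setting, this residual enters through the same channel as the control."""
    return x_dot_meas - (f_nom(x) + g(x) * u)

# Hypothetical scalar system: nominal dynamics x_dot = -x + u, while the true
# dynamics contain an unmodelled drag term -0.5 * x**2.
f_nom = lambda x: -x
g = lambda x: 1.0
x, u = 2.0, 1.0
x_dot_true = -x + u - 0.5 * x**2            # what a sensor would report
d_hat = estimate_disturbance(x_dot_true, f_nom, g, x, u)
# d_hat == -2.0: exactly the unmodelled -0.5 * x**2 term at x = 2.0
```

In the architecture described above, this estimate feeds the robust Riemannian energy condition that computes the control signal, while a learned model gradually shrinks the residual the estimator has to cover.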
19 pages, 9110 KiB  
Article
Imitation Learning from a Single Demonstration Leveraging Vector Quantization for Robotic Harvesting
by Antonios Porichis, Myrto Inglezou, Nikolaos Kegkeroglou, Vishwanathan Mohan and Panagiotis Chatzakos
Robotics 2024, 13(7), 98; https://doi.org/10.3390/robotics13070098 - 30 Jun 2024
Viewed by 416
Abstract
The ability of robots to tackle complex non-repetitive tasks will be key in bringing a new level of automation in agricultural applications still involving labor-intensive, menial, and physically demanding activities due to high cognitive requirements. Harvesting is one such example as it requires a combination of motions which can generally be broken down into a visual servoing and a manipulation phase, with the latter often being straightforward to pre-program. In this work, we focus on the task of fresh mushroom harvesting which is still conducted manually by human pickers due to its high complexity. A key challenge is to enable harvesting with low-cost hardware and mechanical systems, such as soft grippers which present additional challenges compared to their rigid counterparts. We devise an Imitation Learning model pipeline utilizing Vector Quantization to learn quantized embeddings directly from visual inputs. We test this approach in a realistic environment designed based on recordings of human experts harvesting real mushrooms. Our models can control a Cartesian robot with a soft, pneumatically actuated gripper to successfully replicate the mushroom outrooting sequence. We achieve 100% success in picking mushrooms among distractors with less than 20 min of data collection comprising a single expert demonstration and auxiliary, non-expert trajectories. The entire model pipeline requires less than 40 min of training on a single A4000 GPU and approx. 20 ms for inference on a standard laptop GPU. Full article
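Vector quantization, at the heart of the pipeline above, maps each continuous feature vector to the index of its nearest entry in a learned codebook, giving the policy a discrete embedding to work from. The sketch below shows only that lookup; the codebook and feature values are invented, and the real codebook would be learned jointly with the visual encoder.

```python
import numpy as np

def vector_quantize(features, codebook):
    """Assign each feature vector to its nearest codebook entry (L2 distance):
    the quantization step that turns continuous visual embeddings into
    discrete codes."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Hypothetical 4-entry codebook of 3-D embeddings and a batch of 3 features.
codebook = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
features = np.array([[0.9, 0.1, 0.0], [0.1, 0.1, 0.1], [0.0, 0.8, 0.2]])
codes = vector_quantize(features, codebook)   # nearest-code indices: [1, 0, 2]
```

Quantizing to a small discrete vocabulary is one reason such pipelines can learn from a single expert demonstration: many visually distinct frames collapse onto the same code, so the policy generalizes across them.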
20 pages, 8159 KiB  
Article
Multichannel Sensorimotor Integration with a Dexterous Artificial Hand
by Moaed A. Abd and Erik D. Engeberg
Robotics 2024, 13(7), 97; https://doi.org/10.3390/robotics13070097 - 30 Jun 2024
Viewed by 389
Abstract
People use their hands for intricate tasks like playing musical instruments, employing myriad touch sensations to inform motor control. In contrast, current prosthetic hands lack comprehensive haptic feedback and exhibit rudimentary multitasking functionality. Limited research has explored the potential of upper limb amputees to feel, perceive, and respond to multiple channels of simultaneously activated haptic feedback to concurrently control the individual fingers of dexterous prosthetic hands. This study introduces a novel control architecture for three amputees and nine additional subjects to concurrently control individual fingers of an artificial hand using two channels of context-specific haptic feedback. Artificial neural networks (ANNs) recognize subjects’ electromyogram (EMG) patterns governing the artificial hand controller. ANNs also classify the directions objects slip across tactile sensors on the robotic fingertips, which are encoded via the vibration frequency of wearable vibrotactile actuators. Subjects implement control strategies with each finger simultaneously to prevent or permit slip as desired, achieving a 94.49% ± 8.79% overall success rate. Although no statistically significant difference exists between amputees’ and non-amputees’ success rates, amputees require more time to respond to simultaneous haptic feedback signals, suggesting a higher cognitive load. Nevertheless, amputees can accurately interpret multiple channels of nuanced haptic feedback to concurrently control individual robotic fingers, addressing the challenge of multitasking with dexterous prosthetic hands. Full article
(This article belongs to the Section Neurorobotics)
21 pages, 2978 KiB  
Article
A Digital Twin Infrastructure for NGC of ROV during Inspection
by David Scaradozzi, Flavia Gioiello, Nicolò Ciuccoli and Pierre Drap
Robotics 2024, 13(7), 96; https://doi.org/10.3390/robotics13070096 - 26 Jun 2024
Viewed by 1362
Abstract
Remotely operated vehicles (ROVs) provide practical solutions for a wide range of activities in a particularly challenging domain, despite their dependence on support ships and operators. Recent advancements in AI, machine learning, predictive analytics, control theories, and sensor technologies offer opportunities to make ROVs (semi-)autonomous in their operations and to remotely test and monitor their dynamics. This study moves towards that goal by formulating a complete navigation, guidance, and control (NGC) system for a six-DoF BlueROV2, offering a solution to current challenges in the field of marine robotics, particularly in the areas of power supply, communication, stability, operational autonomy, localization, and trajectory planning. The vehicle can operate (semi-)autonomously, relying on an acoustic USBL localization system, tethered communication with the surface vessel for power, and a line-of-sight (LOS) guidance system. This strategy transforms the path control problem into a heading control problem, aligning the vehicle’s movement with a dynamically calculated reference point along the desired path. The control system uses PID controllers implemented in the Navigator flight controller board. Additionally, an infrastructure has been developed that synchronizes and communicates between the real ROV and its digital twin within the Unity environment. The digital twin acts as a visual representation of the ROV’s movements and accounts for hydrodynamic behaviors. This approach combines the physical properties of the ROV with the advanced simulation and analysis capabilities of its digital counterpart. All findings were validated at the Point Rouge port in Marseille and at the port of Ancona. The implemented NGC demonstrated vehicle stability and trajectory tracking over time despite external disturbances. Additionally, the digital twin has proven to be a reliable infrastructure for a future bidirectional communication system.
Full article
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
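The LOS guidance strategy described above turns path following into heading control: place a reference point a fixed lookahead distance further along the desired path and steer the vehicle at it. The sketch below implements the standard planar version of that scheme; the waypoints, lookahead distance, and function signature are illustrative, not the paper's implementation.

```python
import math

def los_heading(pos, wp_prev, wp_next, lookahead):
    """Line-of-sight guidance: desired heading (rad) for path following.

    Projects the vehicle position onto the segment wp_prev -> wp_next, places
    a reference point `lookahead` metres further along the path, and returns
    the heading pointing at it, which a heading PID can then track.
    """
    px, py = wp_prev
    qx, qy = wp_next
    path_angle = math.atan2(qy - py, qx - px)
    # Along-track position of the vehicle, measured from wp_prev.
    dx, dy = pos[0] - px, pos[1] - py
    along = dx * math.cos(path_angle) + dy * math.sin(path_angle)
    target = (px + (along + lookahead) * math.cos(path_angle),
              py + (along + lookahead) * math.sin(path_angle))
    return math.atan2(target[1] - pos[1], target[0] - pos[0])

# Vehicle 2 m to the side of a due-east path: the commanded heading points
# mostly east but is biased back toward the path (about -21.8 degrees).
hdg = los_heading((0.0, 2.0), (0.0, 0.0), (10.0, 0.0), lookahead=5.0)
```

The lookahead distance trades convergence speed against oscillation: a short lookahead snaps the vehicle back to the path aggressively, a long one converges gently, which is why it is typically tuned to the vehicle's turning dynamics.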
34 pages, 4132 KiB  
Article
Temporal Progression of Four Older Adults through Technology Acceptance Phases for a Mobile Telepresence Robot in Domestic Environments
by Rune Baggett, Martin Simecek, Katherine M. Tsui and Marlena R. Fraune
Robotics 2024, 13(7), 95; https://doi.org/10.3390/robotics13070095 - 22 Jun 2024
Viewed by 501
Abstract
Loneliness is increasingly common, especially among older adults. Technology like mobile telepresence robots can help people feel less lonely. However, such technology has challenges, and even if people use it in the short term, they may not accept it in the long term. Prior work shows that it can take up to six months for people to fully accept technology. This study focuses on exploring the nuances and fluidity of acceptance phases. This paper reports a case study of four older adult participants living with a mobile telepresence robot for seven months. In monthly interviews, we explore their progress through the acceptance phases. Results reveal the complexity and fluidity of the acceptance phases. We discuss what this means for technology acceptance. In this paper, we also make coding guidelines for interviews on acceptance phases more concrete. We take early steps in moving toward a more standard interview and coding method to improve our understanding of acceptance phases and how to help potential users progress through them. Full article
(This article belongs to the Special Issue Social Robots for the Human Well-Being)