Robotics, Volume 6, Issue 3 (September 2017) – 8 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Building a ROS-Based Testbed for Realistic Multi-Robot Simulation: Taking the Exploration as an Example
by Zhi Yan, Luc Fabresse, Jannik Laval and Noury Bouraqadi
Robotics 2017, 6(3), 21; https://doi.org/10.3390/robotics6030021 - 12 Sep 2017
Cited by 25 | Viewed by 11438
Abstract
While the robotics community agrees that benchmarking is of high importance for objectively comparing different solutions, there are only a few, limited tools to support it. To address this issue in the context of multi-robot systems, we have defined a benchmarking process based on experimental designs, which aims at improving the reproducibility of experiments by making explicit all elements of a benchmark, such as parameters, measurements, and metrics. We have also developed a ROS (Robot Operating System)-based testbed with the goal of making it easy for users to validate, benchmark, and compare different algorithms, including coordination strategies. Our testbed uses the MORSE (Modular OpenRobots Simulation Engine) simulator for realistic simulation and a computer cluster for decentralized computation. In this paper, we present our testbed in detail, covering its architecture and infrastructure, the issues encountered in implementing that infrastructure, and the automation of deployment. We also report a series of experiments on multi-robot exploration to demonstrate the capabilities of our testbed.
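A minimal sketch of what one benchmark measurement in such a testbed might look like: a ROS 1 (rospy) node that logs the fraction of occupancy-grid cells explored over time. The /map topic, node name, and coverage metric are illustrative assumptions, not the authors' actual testbed code.

```python
# Hedged sketch: log map coverage over time as one exploration metric.
import rospy
from nav_msgs.msg import OccupancyGrid

class CoverageLogger(object):
    def __init__(self):
        self.start = rospy.get_time()
        rospy.Subscriber("/map", OccupancyGrid, self.on_map)

    def on_map(self, grid):
        # Occupancy grids mark unknown cells with -1.
        known = sum(1 for c in grid.data if c != -1)
        coverage = known / float(len(grid.data))
        rospy.loginfo("t=%.1fs coverage=%.3f",
                      rospy.get_time() - self.start, coverage)

if __name__ == "__main__":
    rospy.init_node("coverage_logger")
    CoverageLogger()
    rospy.spin()
```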
Article
Proposal of Novel Model for a 2 DOF Exoskeleton for Lower-Limb Rehabilitation
by Cristian C. Velandia, Diego A. Tibaduiza and Maribel Anaya Vejar
Robotics 2017, 6(3), 20; https://doi.org/10.3390/robotics6030020 - 07 Sep 2017
Cited by 11 | Viewed by 7211
Abstract
Nowadays, engineering works side by side with the medical sciences to design and create devices that help improve medical processes. Physiotherapy is one of the areas of medicine in which engineering is active: several devices aimed at enhancing and assisting therapies are being studied and developed. Mechanical and electronics engineering, together with physiotherapy, are developing exoskeletons: electromechanical devices attached to limbs that can help the user move, or correct the movement of, those limbs, providing automated therapies with flexible and configurable programs that improve autonomy and fit the needs of each patient. Exoskeletons can enhance the effectiveness of physiotherapy and reduce patient rehabilitation time. As a contribution, this paper proposes a dynamic model for a two-degrees-of-freedom (2 DOF) leg exoskeleton acting on the knee and ankle to treat people with partial disability in the lower limbs. This model has the advantage that it can be adapted to any person using mass and height as variables, making it a flexible alternative for calculating the exoskeleton dynamics very quickly and adapting them easily to a child's or young adult's body. In addition, this paper includes the linearization of the model and an analysis of its observability and controllability, as a preliminary study for control strategy applications.
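As a concrete illustration of such a preliminary analysis, the sketch below runs rank tests for controllability and observability on a generic linearized 2 DOF model with state x = [q1, q2, dq1, dq2]; the toy stiffness and damping matrices are placeholders, not the paper's identified dynamics.

```python
# Hedged sketch: Kalman rank tests on a toy linearized 2 DOF model.
import numpy as np

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag([9.0, 4.0]), -np.diag([0.5, 0.3])]])  # toy stiffness/damping
B = np.vstack([np.zeros((2, 2)), np.eye(2)])   # torque inputs at knee and ankle
C = np.hstack([np.eye(2), np.zeros((2, 2))])   # joint angles measured

print(np.linalg.matrix_rank(ctrb(A, B)) == 4)  # controllable?
print(np.linalg.matrix_rank(obsv(A, C)) == 4)  # observable?
```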
Article
Automated Remote Insect Surveillance at a Global Scale and the Internet of Things
by Ilyas Potamitis, Panagiotis Eliopoulos and Iraklis Rigakis
Robotics 2017, 6(3), 19; https://doi.org/10.3390/robotics6030019 - 22 Aug 2017
Cited by 56 | Viewed by 13236
Abstract
The concept of remote insect surveillance at large spatial scales for many serious insect pests of agricultural and medical importance has been introduced in a series of our papers. We augment typical, low-cost plastic traps for many insect pests with the optoelectronic sensors necessary to guard the entrance of the trap and to detect, time-stamp, GPS-tag, and, in relevant cases, identify the species of the incoming insect from its wingbeat. For every important crop pest, there are monitoring protocols to be followed to decide when to initiate a treatment procedure before a serious infestation occurs. Monitoring protocols are mainly based on specifically designed insect traps. Traditional insect monitoring is limited in scope: it is curtailed by its cost, requires intensive labor, is time consuming, and often needs an expert for sufficient accuracy, which can sometimes raise safety issues for humans. These disadvantages reduce the extent to which manual insect monitoring is applied, and therefore its accuracy, which finally results in significant crop loss due to damage caused by pests. With the term ‘surveillance’ we intend to push the monitoring idea to unprecedented levels of information extraction regarding the presence, time-stamped detection events, species identification, and population density of targeted insect pests. Insect counts, as well as environmental parameters that correlate with the insects’ population development, are wirelessly transmitted to a central monitoring agency in real time, where they are visualized and streamed to statistical methods that assist the enforcement of control measures against insect pests. In this work, we emphasize how the traps can be self-organized in networks that collectively report data at local, regional, country, continental, and global scales using the emerging technology of the Internet of Things (IoT). This research is necessarily interdisciplinary, falling at the intersection of entomology, optoelectronic engineering, data science, and crop science, and encompasses the design and implementation of low-cost, low-power technology to help reduce the extent of quantitative and qualitative crop losses caused by many of the most significant agricultural pests. We argue that smart traps communicating through the IoT to report the pest population level in real time, from the field straight to a human-controlled agency, can in the very near future have a profound impact on the decision-making process in crop protection and will disrupt existing manual practices. In the present study, three cases are investigated: monitoring Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae) using (a) a Picusan trap and (b) a Lindgren trap; and (c) monitoring various stored-grain beetle pests using a stored-grain pitfall trap. Our approach is very accurate, reaching 98–99% agreement between automatic counts and the real numbers of insects detected in each type of trap.
(This article belongs to the Special Issue Agriculture Robotics)
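A hedged sketch of how a trap node might report a detection event over MQTT, a transport commonly used in IoT deployments; the broker address, topic layout, and payload schema are illustrative assumptions rather than the authors' protocol.

```python
# Hedged sketch: a smart trap publishing one detection event via MQTT.
import json, time
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"    # hypothetical monitoring-agency broker
TOPIC = "traps/crete/trap-0042"  # assumed topic layout: region/site/trap

client = mqtt.Client()           # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883)
event = {
    "ts": int(time.time()),      # time-stamped detection event
    "lat": 35.34, "lon": 25.13,  # GPS tag of the trap
    "species": "R. ferrugineus", # inferred from the wingbeat signal
    "count": 1,
    "temp_c": 29.5,              # environmental covariate
}
client.publish(TOPIC, json.dumps(event), qos=1)
client.disconnect()
```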
Review
Application of Augmented Reality and Robotic Technology in Broadcasting: A Survey
by Dingtian Yan and Huosheng Hu
Robotics 2017, 6(3), 18; https://doi.org/10.3390/robotics6030018 - 17 Aug 2017
Cited by 18 | Viewed by 11317
Abstract
As an innovative technique, Augmented Reality (AR) has gradually been deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on the real environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic cameramen so that the performance of AR broadcasting can be further improved, which this paper highlights.
(This article belongs to the Special Issue Robotics and 3D Vision)
Article
Trajectory Planning and Tracking Control of a Differential-Drive Mobile Robot in a Picture Drawing Application
by Ching-Long Shih and Li-Chen Lin
Robotics 2017, 6(3), 17; https://doi.org/10.3390/robotics6030017 - 10 Aug 2017
Cited by 31 | Viewed by 14347
Abstract
This paper proposes a method for trajectory planning and control of a mobile robot for application in drawing pictures from images. The robot is an accurate differential-drive mobile robot platform controlled by a field-programmable gate array (FPGA) controller. By locating the tip of the pen away from the midpoint between the two wheels, we are able to treat the platform as an omnidirectional one, thus implementing a simple and effective trajectory control method. The reference trajectories are generated by line simplification and B-spline approximation of digitized input curves obtained from Canny's edge-detection algorithm applied to a gray-scale image. Experimental results for picture drawing demonstrate the advantages of the proposed method.
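A hedged sketch of the reference-trajectory pipeline named above (Canny edges, then line simplification, then B-spline approximation), using OpenCV and SciPy; the thresholds, tolerances, and file name are illustrative assumptions.

```python
# Hedged sketch: image -> edges -> simplified polylines -> B-spline trajectories.
import cv2
import numpy as np
from scipy.interpolate import splprep, splev

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)                       # edge detection

# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

trajectories = []
for c in contours:
    # Line simplification (Douglas-Peucker).
    simple = cv2.approxPolyDP(c, epsilon=2.0, closed=False)
    pts = simple.reshape(-1, 2).astype(float)
    if len(pts) < 4:                                   # cubic spline needs > 3 points
        continue
    tck, _ = splprep(pts.T, s=1.0)                     # smoothing B-spline fit
    u = np.linspace(0.0, 1.0, 200)
    x, y = splev(u, tck)                               # sampled reference trajectory
    trajectories.append(np.column_stack([x, y]))
```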
Article
Perception-Link Behavior Model: Supporting a Novel Operator Interface for a Customizable Anthropomorphic Telepresence Robot
by William Gu, Gerald Seet and Nadia Magnenat-Thalmann
Robotics 2017, 6(3), 16; https://doi.org/10.3390/robotics6030016 - 20 Jul 2017
Viewed by 7019
Abstract
A customizable anthropomorphic telepresence robot (CATR) is an emerging medium that may offer the highest degree of social presence among existing mediated communication media. Unfortunately, teleoperating a CATR raises problems that can degrade its gesture motion: disruption during decoupling, discontinuity due to unstable transmission, and jerkiness due to reactive collision avoidance. Our review shows that none of the existing interfaces can fix all of these problems simultaneously. Hence, a novel framework with a perception-link behavior model (PLBM) is proposed. The PLBM adopts a distributed spatiotemporal representation for all of its input signals. Equipped with other components, the PLBM can solve the above problems, with some limitations. For instance, the PLBM can retrieve missing modalities from its experience during decoupling. Next, the PLBM can tolerate a high drop rate in the network connection because it deals with gesture style rather than pose. For collision prevention, the PLBM can tune the incoming gesture style so that the CATR deliberately and smoothly avoids a collision. In summary, the PLBM-based framework can increase the user's presence on a CATR by synthesizing expressive user gestures.
Article
Compressed Voxel-Based Mapping Using Unsupervised Learning
by Daniel Ricao Canelhas, Erik Schaffernicht, Todor Stoyanov, Achim J. Lilienthal and Andrew J. Davison
Robotics 2017, 6(3), 15; https://doi.org/10.3390/robotics6030015 - 29 Jun 2017
Cited by 10 | Viewed by 9547
Abstract
In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. We demonstrate that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
(This article belongs to the Special Issue Robotics and 3D Vision)
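A hedged sketch of the PCA variant of such block-wise compression: small voxel blocks of a truncated signed distance field are projected onto a low-dimensional basis and lossily reconstructed. The block size, number of components, and random stand-in data are illustrative assumptions.

```python
# Hedged sketch: PCA compression and lossy decompression of TSDF voxel blocks.
import numpy as np
from sklearn.decomposition import PCA

BLOCK = 16                                   # 16^3-voxel blocks (assumed size)
blocks = np.random.randn(5000, BLOCK**3)     # stand-in for real TSDF blocks

pca = PCA(n_components=64)                   # 4096 dims -> 64 (64x compression)
codes = pca.fit_transform(blocks)            # compressed block descriptors
recon = pca.inverse_transform(codes)         # lossy decompression

print("reconstruction MSE:", np.mean((blocks - recon) ** 2))
```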
Article
Automated Assembly Using 3D and 2D Cameras
by Adam Leon Kleppe, Asgeir Bjørkedal, Kristoffer Larsen and Olav Egeland
Robotics 2017, 6(3), 14; https://doi.org/10.3390/robotics6030014 - 27 Jun 2017
Cited by 5 | Viewed by 8013
Abstract
2D and 3D computer vision systems are frequently used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting, or viewpoint angle can reduce the accuracy of a method, sometimes even to the point of making it erroneous, while for 3D vision systems, the accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to < 1 mm and < 1°. This was demonstrated in a real industrial assembly task where high accuracy is required.
(This article belongs to the Special Issue Robotics and 3D Vision)
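A hedged sketch of the coarse-to-fine idea: the 3D camera supplies a rough pose used to position the arm, after which features seen by the eye-in-hand 2D camera refine the pose. PnP stands in here for whatever refinement the authors actually use, and the model points, image points, and intrinsics are placeholders.

```python
# Hedged sketch: refine an object pose from eye-in-hand 2D observations via PnP.
import numpy as np
import cv2

# Four known coplanar model points on the object (meters) and their detected
# pixel positions (the 3D camera's rough pose has already put them in view).
object_pts = np.array([[0.00, 0.00, 0], [0.05, 0.00, 0],
                       [0.05, 0.05, 0], [0.00, 0.05, 0]], dtype=np.float64)
image_pts = np.array([[300, 220], [380, 222],
                      [378, 300], [298, 298]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    T_fine = np.eye(4)
    T_fine[:3, :3], T_fine[:3, 3] = R, tvec.ravel()
    print(T_fine)   # refined object pose in the 2D camera frame
```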