
Table of Contents

Robotics, Volume 6, Issue 2 (June 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-8

Research

Open Access Article A New Combined Vision Technique for Micro Aerial Vehicle Pose Estimation
Robotics 2017, 6(2), 6; doi:10.3390/robotics6020006
Received: 5 December 2016 / Revised: 20 February 2017 / Accepted: 23 March 2017 / Published: 28 March 2017
PDF Full-text (4718 KB) | HTML Full-text | XML Full-text
Abstract
In this work, a new combined vision technique (CVT) is proposed, comprehensively developed, and experimentally tested for stable, precise unmanned micro aerial vehicle (MAV) pose estimation. The CVT combines two measurement methods (multi- and mono-view) based on different constraint conditions. These constraints are considered simultaneously by the particle filter framework to improve the accuracy of visual positioning. The framework, which is driven by an onboard inertial module, takes the positioning results from the visual system as measurements and updates the vehicle state. Moreover, experimental testing and data analysis have been carried out to verify the proposed algorithm, including multi-camera configuration, design and assembly of MAV systems, and the marker detection and matching between different views. Our results indicated that the combined vision technique is very attractive for high-performance MAV pose estimation. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)
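The fusion scheme summarized above, visual position fixes from multiple views used as measurements inside a particle filter, can be sketched generically. The function below is a hypothetical minimal measurement update assuming isotropic Gaussian noise per camera view; it is not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def pf_update(particles, weights, measurements, cam_sigmas):
    """One particle-filter measurement update fusing several camera-based
    position estimates (hypothetical Gaussian noise model per view).

    particles    : (N, 3) array of candidate MAV positions
    weights      : (N,) importance weights
    measurements : list of 3-vector position measurements, one per view
    cam_sigmas   : list of per-view measurement standard deviations
    """
    for meas, sigma in zip(measurements, cam_sigmas):
        # Reweight each particle by its likelihood under this view.
        d2 = np.sum((particles - meas) ** 2, axis=1)
        weights *= np.exp(-0.5 * d2 / sigma ** 2)
    weights /= np.sum(weights)       # renormalize
    estimate = weights @ particles   # weighted-mean state estimate
    return weights, estimate
```

In the paper's framework, the prediction step between such updates would be driven by the onboard inertial module.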

Open Access Article An Optimal and Energy Efficient Multi-Sensor Collision-Free Path Planning Algorithm for a Mobile Robot in Dynamic Environments
Robotics 2017, 6(2), 7; doi:10.3390/robotics6020007
Received: 4 December 2016 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 31 March 2017
PDF Full-text (6381 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
There has been remarkable growth in many different real-time systems in the area of autonomous mobile robots. This paper focuses on the collaboration of efficient multi-sensor systems to create new optimal motion planning for mobile robots. The proposed algorithm is based on a new model that produces the shortest and most energy-efficient path from a given initial point to a goal point. The distance and time traveled, in addition to the consumed energy, have an asymptotic complexity of O(n log n), where n is the number of obstacles. Real-time experiments are performed to demonstrate the accuracy and energy efficiency of the proposed motion planning algorithm. Full article
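The abstract does not give the planner itself; as a generic point of reference for the class of shortest-path search involved, the sketch below runs Dijkstra's algorithm on a 4-connected occupancy grid with a priority queue. The grid model and unit move cost are illustrative assumptions, not the paper's model.

```python
import heapq

def shortest_path(grid, start, goal):
    """Dijkstra shortest-path length on a 4-connected occupancy grid
    (0 = free, 1 = obstacle); returns None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None
```

An energy-aware variant would replace the unit move cost with a per-move energy term, leaving the search structure unchanged.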

Open Access Article Robot-Assisted Crowd Evacuation under Emergency Situations: A Survey
Robotics 2017, 6(2), 8; doi:10.3390/robotics6020008
Received: 27 December 2016 / Revised: 27 March 2017 / Accepted: 31 March 2017 / Published: 7 April 2017
Cited by 1 | PDF Full-text (1943 KB) | HTML Full-text | XML Full-text
Abstract
In the case of emergency situations, robotic systems can play a key role and save human lives in recovery and evacuation operations. To realize such a potential, we have to address many scientific and technical challenges encountered during robotic search and rescue missions. This paper reviews current state-of-the-art robotic technologies that have been deployed in the simulation of crowd evacuation, including both macroscopic and microscopic models used in simulating a crowd. Existing work on crowd simulation is analyzed and the robots used in crowd evacuation are introduced. Finally, the paper demonstrates how autonomous robots could be effectively deployed in disaster evacuation, as well as search and rescue missions. Full article

Open Access Article Visual Place Recognition for Autonomous Mobile Robots
Robotics 2017, 6(2), 9; doi:10.3390/robotics6020009
Received: 14 March 2017 / Revised: 10 April 2017 / Accepted: 12 April 2017 / Published: 17 April 2017
PDF Full-text (12706 KB) | HTML Full-text | XML Full-text
Abstract
Place recognition is an essential component of autonomous mobile robot navigation. It is used for loop-closure detection to maintain consistent maps, to localize the robot along a route, or in kidnapped-robot situations. Camera sensors provide rich visual information for this task. We compare different approaches for visual place recognition: holistic methods (visual compass and warping), signature-based methods (using Fourier coefficients or feature descriptors (Able for Binary-appearance Loop-closure Evaluation, ABLE)), and feature-based methods (fast appearance-based mapping, FabMap). As new contributions, we investigate whether warping, a successful visual homing method, is suitable for place recognition. In addition, we extend the well-known visual compass to use multiple scale planes, a concept also employed by warping. To achieve tolerance against changing illumination conditions, we examine the NSAD distance measure (normalized sum of absolute differences) on edge-filtered images. To reduce the impact of illumination changes on the distance values, we suggest computing ratios of image distances to normalize these values to a common range. We test all methods on multiple indoor databases, as well as a small outdoor database, using images with constant or changing illumination conditions. ROC analysis (receiver operating characteristic) and the metric distance between best-matching image pairs are used as evaluation measures. Most methods perform well under constant illumination conditions, but fail under changing illumination. The visual compass using the NSAD measure on edge-filtered images with multiple scale planes, while being slower than the signature methods, performs best in the latter case. Full article
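One plausible form of the NSAD measure named above is the sum of absolute differences normalized by the summed pixel magnitudes of both images, which bounds the result to [0, 1]; the paper's exact normalization may differ, so treat this as a sketch of the idea.

```python
import numpy as np

def nsad(a, b, eps=1e-9):
    """Normalized sum of absolute differences between two images.

    One plausible normalization: SAD divided by the summed absolute pixel
    values of both images, giving 0 for identical images and at most 1.
    In the paper this measure is applied to edge-filtered images.
    """
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    return np.sum(np.abs(a - b)) / (np.sum(np.abs(a)) + np.sum(np.abs(b)) + eps)
```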

Open Access Article Binaural Range Finding from Synthetic Aperture Computation as the Head is Turned
Robotics 2017, 6(2), 10; doi:10.3390/robotics6020010
Received: 9 January 2017 / Revised: 22 March 2017 / Accepted: 14 April 2017 / Published: 19 April 2017
PDF Full-text (1514 KB) | HTML Full-text | XML Full-text
Abstract
A solution to binaural direction finding described in Tamsett (Robotics 2017, 6(1), 3) is a synthetic aperture computation (SAC) performed as the head is turned while listening to a sound. A far-range approximation in that paper is relaxed in this one and the method extended for SAC as a function of range for estimating range to an acoustic source. An instantaneous angle λ (lambda) between the auditory axis and direction to an acoustic source locates the source on a small circle of colatitude (lambda circle) of a sphere symmetric about the auditory axis. As the head is turned, data over successive instantaneous lambda circles are integrated in a virtual field of audition from which the direction to an acoustic source can be inferred. Multiple sets of lambda circles generated as a function of range yield an optimal range at which the circles intersect to best focus at a point in a virtual three-dimensional field of audition, providing an estimate of range. A proof of concept is demonstrated using simulated experimental data. The method enables a binaural robot to estimate not only direction but also range to an acoustic source from sufficiently accurate measurements of arrival time/level differences at the antennae. Full article
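In the far-range approximation that this paper relaxes, the instantaneous angle λ between the auditory axis and the source direction follows from the interaural time difference as cos λ = c·ITD/d, with d the antenna baseline and c the speed of sound. A minimal sketch of that standard relation (not the paper's range-dependent extension):

```python
import math

def lambda_angle(itd_s, baseline_m, c=343.0):
    """Far-field estimate of the angle lambda (degrees) between the
    auditory axis and the source direction, from the interaural time
    difference (ITD, seconds): cos(lambda) = c * ITD / d.
    The argument is clamped to [-1, 1] to absorb measurement noise."""
    x = max(-1.0, min(1.0, c * itd_s / baseline_m))
    return math.degrees(math.acos(x))
```

Each such λ places the source on one "lambda circle"; the paper's contribution is integrating these circles over head turns and candidate ranges to recover both direction and range.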

Open Access Article Feasibility of Using the Optical Sensing Techniques for Early Detection of Huanglongbing in Citrus Seedlings
Robotics 2017, 6(2), 11; doi:10.3390/robotics6020011
Received: 6 January 2017 / Revised: 10 April 2017 / Accepted: 19 April 2017 / Published: 23 April 2017
PDF Full-text (1593 KB) | HTML Full-text | XML Full-text
Abstract
A vision sensor was introduced and tested for early detection of citrus Huanglongbing (HLB). This disease is caused by the bacterium Candidatus Liberibacter asiaticus (CLas) and is transmitted by the Asian citrus psyllid. HLB is a devastating disease that has exerted a significant impact on citrus yield and quality in Florida. Unfortunately, no cure has been reported for HLB. Starch accumulates in HLB-infected leaf chloroplasts, which causes the mottled blotchy green pattern. Starch rotates the polarization plane of light. A polarized imaging technique was used to detect the polarization rotation caused by the hyper-accumulation of starch as a pre-symptomatic indication of HLB in young seedlings. Citrus seedlings were grown in a room with controlled conditions and exposed to intensive feeding by CLas-positive psyllids for eight weeks. A quantitative polymerase chain reaction was employed to confirm the HLB status of samples. Two datasets were acquired: the first one month after the exposure to psyllids and the second two months later. The results showed that, with relatively unsophisticated imaging equipment, four levels of HLB infection could be detected with accuracies of 72%–81%. As expected, increasing the time interval between psyllid exposure and imaging increased the development of symptoms and, accordingly, improved the detection accuracy. Full article
(This article belongs to the Special Issue Agriculture Robotics)
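The physical principle behind the polarized imaging described above is that transmitted intensity through an analyzer follows Malus's law, so a starch-induced rotation of the polarization plane shifts the effective analyzer angle and hence pixel brightness. The sketch below is background for that principle only, not the authors' detection model:

```python
import math

def malus_intensity(i0, theta_deg):
    """Malus's law: intensity transmitted through an analyzer at angle
    theta (degrees) relative to the light's polarization plane.
    A rotation of the plane by starch in the leaf shifts the effective
    theta, changing the measured intensity."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2
```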

Open Access Article Bin-Dog: A Robotic Platform for Bin Management in Orchards
Robotics 2017, 6(2), 12; doi:10.3390/robotics6020012
Received: 1 April 2017 / Revised: 10 May 2017 / Accepted: 18 May 2017 / Published: 22 May 2017
PDF Full-text (7364 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Bin management during apple harvest season is an important activity for orchards. Typically, empty and full bins are handled by tractor-mounted forklifts or bin trailers in two separate trips. In order to simplify this work process and improve the work efficiency of bin management, the concept of a robotic bin-dog system is proposed in this study. This system is designed with a “go-over-the-bin” feature, which allows it to drive over bins between tree rows and complete the above process in one trip. To validate this system concept, a prototype and its control and navigation system were designed and built. Field tests were conducted in a commercial orchard to validate its key functionalities in three tasks: headland turning, straight-line tracking between tree rows, and “go-over-the-bin.” Tests of headland turning showed that the bin-dog followed a predefined path to align with an alleyway with lateral and orientation errors of 0.02 m and 1.5°. Tests of straight-line tracking showed that the bin-dog could successfully track the alleyway centerline at speeds up to 1.00 m·s−1 with an RMSE offset of 0.07 m. The navigation system also successfully guided the bin-dog to complete the go-over-the-bin task at a speed of 0.60 m·s−1. The successful validation tests proved that the prototype can achieve all of the desired functionality. Full article
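The quoted 0.07 m RMSE offset is the standard root-mean-square of the lateral deviations from the alleyway centerline, computable directly from logged offsets:

```python
import math

def rmse(offsets):
    """Root-mean-square of lateral offsets (m) from the centerline,
    the tracking-accuracy metric quoted in the abstract."""
    return math.sqrt(sum(e * e for e in offsets) / len(offsets))
```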
(This article belongs to the Special Issue Agriculture Robotics)

Open Access Article Augmented Reality Guidance with Multimodality Imaging Data and Depth-Perceived Interaction for Robot-Assisted Surgery
Robotics 2017, 6(2), 13; doi:10.3390/robotics6020013
Received: 30 March 2017 / Revised: 18 May 2017 / Accepted: 22 May 2017 / Published: 24 May 2017
PDF Full-text (6090 KB) | HTML Full-text | XML Full-text
Abstract
Image-guided surgical procedures are challenged by single image modality, two-dimensional anatomical guidance, and non-intuitive human-machine interaction. The introduction of tablet-based augmented reality (AR) into surgical robots may assist surgeons with overcoming these problems. In this paper, we proposed and developed a robot-assisted surgical system with interactive surgical guidance using tablet-based AR with a Kinect sensor for three-dimensional (3D) localization of patient anatomical structures and intraoperative 3D surgical tool navigation. Depth data acquired from the Kinect sensor were visualized in cone-shaped layers for 3D AR-assisted navigation. Virtual visual cues generated by the tablet were overlaid on the images of the surgical field for spatial reference. We evaluated the proposed system, and the experimental results showed that the tablet-based visual guidance system could assist surgeons in locating internal organs, with errors between 1.74 and 2.96 mm. We also demonstrated that the system was able to provide mobile augmented guidance and interaction for surgical tool navigation. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)

Journal Contact

MDPI AG
Robotics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18