Article

3D Environment Mapping Using the Kinect V2 and Path Planning Based on RRT Algorithms

1 Centro de Investigación Científica y Tecnológica del Ejército CICTE, Universidad de las Fuerzas Armadas ESPE, Sangolquí 171103, Ecuador
2 Departamento de Eléctrica y Electrónica DEEE, Universidad de las Fuerzas Armadas ESPE, Sangolquí 171103, Ecuador
3 Research Group on Knowledge Engineering GREC, Universitat Politècnica de Catalunya UPC-BarcelonaTech, Barcelona 08040, Spain
* Authors to whom correspondence should be addressed.
Electronics 2016, 5(4), 70; https://doi.org/10.3390/electronics5040070
Submission received: 19 August 2016 / Revised: 21 September 2016 / Accepted: 26 September 2016 / Published: 18 October 2016

Abstract

This paper describes a 3D path planning system that provides a solution trajectory for the automatic control of a robot. The proposed system uses a point cloud of the robot workspace, obtained with a Kinect V2 sensor, to identify the interest regions and the obstacles of the environment. Our proposal includes a collision-free path planner based on the Rapidly-exploring Random Trees variant (RRT*) for safe and optimal navigation of robots in 3D spaces. Results on RGB-D segmentation and recognition, point cloud processing, and comparisons between different RRT* algorithms are presented.

1. Introduction

Robotic systems play an important role in the modern world. They are applied in several areas: manufacturing, entertainment, the toy industry, the medical field, exploration, military and multimedia applications [1,2,3]. Most commercial robots need manual programming or teleoperation to execute movements. This requires human intervention, which results in long set-up times and demands expert operators to control the robots. Autonomous navigation is an alternative for solving this issue. Flexibility and energy efficiency are also important achievements of modern robotics [4,5].
Navigation includes algorithms for perception and motion estimation, as well as for path planning and optimization, in order to connect the start point with the goal point. Several techniques, such as proximity sensors, have been used for perception. However, 3D point cloud processing is one of the most important areas of computer vision because it provides a richer description of the world. Different range imaging cameras are used to capture depth maps, such as stereo cameras, structured light scanners and time-of-flight cameras.
In this system, we use a low-cost 3D sensor to generate a 3D binary occupancy grid map that describes the workspace. Our approach consists of a path planning system that obtains the environment description from a Kinect V2 point cloud, recognizes the start and goal points using computer vision techniques, and solves the path planning problem with a variant of the RRT algorithm. The system includes calibration between the Kinect coordinate frame and the workspace coordinate frame using a registration technique.
Path planning determines a collision-free path between start and goal positions in a workspace cluttered with obstacles [6]. Different path planning algorithms are used to find an optimal solution trajectory. Sampling-based methods are widely used because of their effectiveness and low computational cost in high-dimensional spaces. These methods use a representative configuration space and build a collision-free roadmap connecting points sampled from the obstacle-free space.
The Probabilistic Roadmaps (PRM) and the RRT are sampling-based methods. The RRT algorithm has commonly been used for fast trajectory search because of its incremental nature. Several versions of the RRT have been developed to improve the cost of the solution path. In our work, we compare the RRT with the RRT* algorithm and two further variations, using the path cost as the evaluation metric.
This paper is organized as follows: the next section gives a brief description of related work. In Section 3, Section 4 and Section 5 we introduce the methods proposed to solve each part of the system: registration, RGB segmentation and RRT-based path planning. Finally, experimental results and conclusions are presented in the last two sections.

2. Related Work

Research groups on robotics and computer vision have developed different techniques for robot navigation, such as 3D perception and mapping based on RGB-D cameras [7], laser range scanners [8] and stereo cameras [9]. The Kinect was released in November 2010 as a commercial RGB-D camera, and after that, several applications in robotics appeared [10,11].
Segmentation and object recognition are fundamental algorithms for vision applications. Recently, the growing interest in 3D environments for robot navigation has increased the research on 3D segmentation. In [12], a color and depth segmentation is proposed to detect objects of an indoor environment by finding boundaries between regions based on the norm of the pixel intensity. A novel technique for depth segmentation uses the divergence of a 2D vector field to extract 3D object boundaries [13].
There are also several works in the literature on path planning to connect start and goal locations. The Ant Colony Algorithm, based on the behavior of ants searching for food, is used in many research projects due to its global optimization performance in multi-goal applications [14]. Others use neural networks to provide an effective path planning and obstacle avoidance solution that leads the robot to the neuron corresponding to the target position [15,16]. Other strategies use behavior-based navigation and fuzzy logic approaches; in this solution, the image provides the data for the fuzzy logic rules that guide the robot to the safest and most traversable terrain region [17].
In this paper, we use a probabilistic path planning algorithm. Several optimizations have been developed in this area such as the RRT-Connect approach [18] that uses a simple greedy heuristic. Other variants have been presented in [19,20]. Our contribution is focused on the overall system of an optimal robot path planner, comparing the resulting paths of two different variants of the RRT* algorithm.

3. Workspace Data

A path planning algorithm requires a workspace, the initial and goal positions, and a robot. Our proposed system has three stages: environment perception, detection of target points and path planning.
In 2013, the Kinect V2 was introduced with a different technology from the previous Kinect V1. The Kinect V2 is based on the time-of-flight measurement principle: the distance to be measured is proportional to the time needed by the active illumination source to travel from the emitter to the target [21]. A matrix of time-of-flight sensors therefore allows the distance to an object to be estimated for each pixel. The Kinect V2 also includes an RGB camera and an infrared camera.
In Table 1, Kinect V2 characteristics and technical specifications are presented.
Depending on the distance between the object point and sensor, the pixel size has an estimated value between 1.4 mm at 0.5 m of depth and 12 mm at 4.5 m of depth, as shown in Table 1, giving the approximate resolution of our point cloud.
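To make the geometry concrete, the following sketch (not part of the original system, which uses the open-source software mentioned below) back-projects a Kinect V2 depth frame into a 3D point cloud with a pinhole camera model; the intrinsic parameters FX, FY, CX, CY are illustrative placeholders, not the sensor's calibrated values.

```python
import numpy as np

# Illustrative back-projection of a Kinect V2 depth frame (512 x 424 pixels)
# into a 3D point cloud using the pinhole camera model.
FX, FY = 365.0, 365.0   # assumed focal lengths (pixels)
CX, CY = 256.0, 212.0   # assumed principal point (pixels)

def depth_to_point_cloud(depth_mm):
    """depth_mm: (424, 512) array of depth readings in millimeters."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # discard pixels without a depth reading

cloud = depth_to_point_cloud(np.random.uniform(500.0, 4500.0, (424, 512)))
print(cloud.shape)   # (N, 3) points in millimeters
```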
We have used the open-source software developed by Hatledal, L.I. [23] to obtain the colored point cloud in millimeters from the delivered depth and RGB data. This point cloud is then processed with the proposed algorithm. Prior to image segmentation, we align the point cloud expressed in the Kinect coordinate frame with the point cloud expressed in the workspace coordinate frame (the real physical set of points). A technique to solve this issue is point cloud registration. The objective of registration is to match two point sets by estimating the transformation matrix that maps one point set onto the other [24,25]. A widely used registration method is the Iterative Closest Point (ICP), due to its simplicity and low computational complexity. ICP assigns correspondences based on a nearest-distance criterion and computes the least-squares rigid transformation that best aligns the two point sets [26].
Assume that the point sets {m_i} and {d_i}, i = 1…N, are matched and related by:
$$d_i = R m_i + T + V_i$$
where R is a 3 × 3 rotation matrix, T is a 3D translation vector and V_i is a noise vector [27].
The goal is to obtain the optimal transformation mapping the set {m_i} onto {d_i} by minimizing the least-squares error:
$$\Sigma^2 = \sum_{i=1}^{N} \left\| d_i - \hat{R} m_i - \hat{T} \right\|^2$$
The least-squares solution is not optimal if there are many outliers or incorrect correspondences in the dataset, and therefore other techniques should be added. Several algorithms are able to compute the closed-form solution for R and T. In this work, we use the ICP algorithm to register two point clouds, as presented in Figure 1.
This algorithm uses an outlier filter before the transformation matrix calculation, in order to solve the incorrect correspondence issue.
We find the rigid transformation matrix T_4×4 that aligns point set P to point set Q so that:
$$Q = T P$$
where P = [x, y, z, 1]^T is a vector with the centroid coordinates of the boundary circles extracted from the point cloud in the work plane, and Q = [x, y, z, 1]^T contains the corresponding data points from the workspace coordinate frame, as shown in Figure 2.
With this technique, we obtain the point cloud in the robot coordinate frame, suitable for the feature extraction and path planning algorithms.
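As an illustration of the closed-form least-squares step used inside ICP, the following Python sketch estimates R and T with the SVD-based (Kabsch) solution and builds the homogeneous matrix T_4×4. The example points are the Kinect-frame and workspace-frame coordinates of the first four landmarks in Table 2; this is a minimal sketch, not the MATLAB registration pipeline [28] used in the actual system.

```python
import numpy as np

def rigid_transform(P, Q):
    """Closed-form least-squares R, T (Kabsch/SVD) such that Q_i ≈ R @ P_i + T
    for matched N x 3 point sets P (source) and Q (target)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cq - R @ cp
    M = np.eye(4)                                  # homogeneous 4 x 4 matrix
    M[:3, :3], M[:3, 3] = R, T
    return M

# Kinect-frame and workspace-frame coordinates of the first four landmarks (Table 2).
P = np.array([[0.7, 79.9, -125.7], [9.1, 473.2, -158.7],
              [351.7, 71.6, -104.6], [604.8, 467.8, -134.4]])
Q = np.array([[-1.5, 0.1, 2.3], [3.85, 400.1, -5.24],
              [347.46, -2.82, 15.78], [602.41, 394.37, 16.58]])
M = rigid_transform(P, Q)
P_h = np.hstack([P, np.ones((len(P), 1))])         # homogeneous coordinates
print((M @ P_h.T).T[:, :3])                        # calibrated coordinates
```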

4. Target Points

Image segmentation is the process of separating an image into regions. Each region is defined as a homogeneous group of connected pixels with respect to a selected property of interest, such as color intensity or texture [29]. For this purpose, one of the most effective approaches is color thresholding [30]. This method is used in object recognition, assuming that pixels whose color values lie inside a defined range in the color channels belong to the same class or object. A thresholded image can be defined by:
$$g(u,v) = \begin{cases} 1, & f(u,v) > t \\ 0, & f(u,v) \leq t \end{cases}$$
where f(u,v) is a set of pixels from the input image, g(u,v) is the output binary image (binarization) and t is the threshold value [31] (a threshold value is needed for each color channel).
In our work, the input image is generated from the point cloud, so each pixel has a corresponding xyz coordinate. The thresholds are obtained locally by selecting a pixel of the target region to obtain its color components, together with a sensitivity value chosen experimentally. The threshold of our image was defined by:
$$g(u,v) = \begin{cases} 1, & p_R - s < f_R(u,v) < p_R + s \ \wedge\ p_G - s < f_G(u,v) < p_G + s \ \wedge\ p_B - s < f_B(u,v) < p_B + s \\ 0, & \text{otherwise} \end{cases}$$
where p_R, p_G, p_B are the color components of the manually selected pixel, f_R, f_G, f_B are the corresponding channels of the input image, and s is the sensitivity.
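A minimal sketch of this per-channel band thresholding, assuming an 8-bit RGB image and a manually picked reference pixel, could look as follows (the image, the picked pixel and the sensitivity value are illustrative):

```python
import numpy as np

def color_threshold(rgb, picked_pixel, s):
    """Binary mask: 1 where all three channels lie within +/- s of the color
    of the manually selected pixel (p_R, p_G, p_B), 0 otherwise."""
    p = rgb[picked_pixel].astype(int)              # reference color (p_R, p_G, p_B)
    mask = np.all((rgb.astype(int) > p - s) & (rgb.astype(int) < p + s), axis=-1)
    return mask.astype(np.uint8)

# Example on a random 8-bit image; the sensitivity s is chosen experimentally.
rgb = np.random.randint(0, 256, (424, 512, 3), dtype=np.uint8)
mask = color_threshold(rgb, picked_pixel=(200, 250), s=20)
print(mask.sum(), "pixels belong to the target class")
```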
After the binary mask is obtained, we use morphological operations to remove noise and unwanted holes. The main morphological operations that can be used are dilation, erosion, opening and closing.
Dilation is applied to each element of the mask image A using a structuring element B, resulting in a new mask image with a different size. The dilation A ⊕ B removes holes and indentations smaller than or equal to the structuring element [32].
Erosion A ⊖ B removes noise smaller than the structuring element.
The opening A ∘ B smooths edges and removes some of the bright foreground pixels.
The closing A • B keeps the background regions with a size similar to that of the structuring element, and removes all other background pixels.
We use the opening and closing operations to enhance our binary image and obtain a better approximation of the interest regions.
The next step is representation, where adjacent pixels of the same class are connected to form spatial sets s_1, …, s_m. For the representation of our binary image, we use the blob detection method of Corke P. [33]. A blob is a spatially contiguous region of pixels. Properties or features such as area, perimeter, circularity, moments or centroid coordinates are widely used to identify these regions.
We identify our interest regions by filtering the blobs based on their position and area in order to simplify the problem. Knowing the boundaries of the workspace and the minimum and maximum size of the target circles, we obtained the interest blobs and their centroid coordinates.
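A possible implementation of this cleaning and blob-filtering step, sketched here with SciPy's morphology and labeling routines rather than the blob detector of [33] used in our system, is shown below; the structuring-element size and the area limits are assumed values.

```python
import numpy as np
from scipy import ndimage

def interest_centroids(mask, min_area, max_area):
    """Clean a binary mask with opening and closing, label the connected
    blobs, and return the centroids of blobs whose area lies in
    [min_area, max_area]."""
    se = np.ones((5, 5), dtype=bool)                        # assumed structuring element
    cleaned = ndimage.binary_opening(mask, structure=se)    # remove small bright noise
    cleaned = ndimage.binary_closing(cleaned, structure=se) # fill small holes
    labels, n = ndimage.label(cleaned)                      # connected components (blobs)
    centroids = []
    for i in range(1, n + 1):
        area = np.count_nonzero(labels == i)
        if min_area <= area <= max_area:                    # keep only target-sized blobs
            centroids.append(ndimage.center_of_mass(cleaned, labels, i))
    return centroids
```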
In Figure 3, we can see the segmentation process implemented in our system.

5. Path Planning

5.1. Travelling Salesman Problem

This path planning problem, known as the travelling salesman problem (TSP), is focused on ordering a set of points and searching for an optimal path through all of them. We have a set {c_1, c_2, …, c_N} of cities and a Euclidean distance d(c_i, c_j) for each pair (c_i, c_j) of cities. The goal of the algorithm is to order the cities so as to minimize the total length of the tour [34].
Our solution for the TSP is based on the Nearest Neighbor (NN) algorithm shown in Algorithm 1. The performance of this approach is not the best in the literature, but NN gives suitable solutions for our workspace and the possible locations of the target points. An example of the workspace and target points is presented in Figure 4.
The algorithm proceeds as follows:
Algorithm 1: The nearest neighbor algorithm
1: P = [p_1 p_2 p_3 … p_n]
2: p_1 ← starting point
3: V ← nearest of the unvisited points
4: mark V as visited
5: if all P are visited, end
6: return to step 3
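A minimal Python sketch of Algorithm 1, using Euclidean distances between target points, is given below; the example coordinates are four of the calculated centroids from Table 3.

```python
import numpy as np

def nearest_neighbor_tour(points, start=0):
    """Greedy nearest-neighbor ordering of the target points (Algorithm 1)."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:                                   # step 5: stop when all visited
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        tour.append(nxt)                               # step 3: nearest unvisited point
        unvisited.remove(nxt)                          # step 4: mark as visited
    return tour

# Four of the calculated target centroids (x, y) in mm from Table 3.
targets = [(249.6, 299.0), (985.3, 205.7), (497.0, 152.8), (1187.8, 299.2)]
print(nearest_neighbor_tour(targets))   # visiting order, e.g. [0, 2, 1, 3]
```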
Once the points are ordered, we develop the path planner based on the RRT algorithm, as mentioned above.

5.2. RRT

Using the notation from [35], the path planning problem can be stated as follows:
  • X is a bounded connected open subset of ℝ^d, where d ∈ ℕ, d ≥ 2.
  • G = (V, E) is a graph composed of a set of vertices V and a set of edges E.
  • X_obs and X_goal, subsets of X, are the obstacle region and the goal region, respectively.
  • The obstacle-free space X \ X_obs is denoted as X_free, and the initial state x_init is an element of X_free.
  • A path in X_free should be a collision-free path.
In path planning, two problems can be distinguished: the feasibility problem, which consists of finding a feasible path and reporting failure otherwise, and the optimization problem, which looks for a feasible path with minimal cost. A cost function c: X_free → ℝ≥0 assigns a non-negative cost to each collision-free path. A feasible path is a collision-free path that starts at x_init and ends in X_goal.
In this part, we introduce the standard RRT algorithm and the RRT* variations tested in our system. These algorithms rely on the following functions (a minimal sketch of these primitives is given after the list):
  • Function Sample: returns x_rand, an independent, identically distributed sample from X_free.
  • Function Nearest Neighbor: returns the vertex x_near closest to the point x ∈ X_free, in terms of the Euclidean distance.
  • Function Steer: returns a point x_new at a distance ε from x_near, in the direction of x_rand.
  • Function Obstacle Free: given two points x, x′ ∈ X_free, returns true if the line segment between x and x′ lies in X_free, and false otherwise.
  • Function Near Vertices: returns the set of vertices V′ that lie within the closed ball of radius r centered at the point x ∈ X_free.
  • Function Parent: returns the parent vertex x_parent ∈ V of x ∈ X_free.
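The sketch below gives one possible Python implementation of these primitives; the step size ε, the near radius r, the axis-aligned sampling bounds and the spherical obstacle model are all assumptions of the sketch, not values from the paper.

```python
import numpy as np

EPS = 30.0      # steering step (mm), assumed value
R_NEAR = 120.0  # radius for NearVertices (mm), assumed value

def sample(bounds):
    """Uniform i.i.d. sample inside the axis-aligned bounds of X_free."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    return lo + np.random.rand(3) * (hi - lo)

def nearest(vertices, x):
    """Closest tree vertex to x in Euclidean distance."""
    d = np.linalg.norm(np.asarray(vertices, float) - np.asarray(x, float), axis=1)
    return vertices[int(np.argmin(d))]

def steer(x_near, x_rand, eps=EPS):
    """Point at distance eps from x_near in the direction of x_rand."""
    x_near, x_rand = np.asarray(x_near, float), np.asarray(x_rand, float)
    d = x_rand - x_near
    n = np.linalg.norm(d)
    return x_rand if n <= eps else x_near + eps * d / n

def obstacle_free(x_a, x_b, spheres):
    """True if the segment x_a-x_b avoids every (center, radius) sphere.
    Spherical obstacles are an assumption of this sketch."""
    x_a, x_b = np.asarray(x_a, float), np.asarray(x_b, float)
    for c, r in spheres:
        for t in np.linspace(0.0, 1.0, 20):            # coarse sampling of the segment
            if np.linalg.norm((1 - t) * x_a + t * x_b - np.asarray(c, float)) <= r:
                return False
    return True

def near_vertices(vertices, x, r=R_NEAR):
    """Tree vertices inside the closed ball of radius r centered at x."""
    x = np.asarray(x, float)
    return [v for v in vertices if np.linalg.norm(np.asarray(v, float) - x) <= r]
```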
As shown in Figure 5, the RRT algorithm starts from the initial state. At each iteration, the graph grows incrementally by sampling a random state x_rand from X_free and connecting x_new, returned by the Steer function, with the nearest vertex in the tree x_nearest, whenever the connection is obstacle-free. x_new is added to the vertex set and (x_nearest, x_new) to the edge set. This procedure is detailed in Algorithms 2 and 3.
The connections between two vertices, considered in this paper as straight-line connections, are made by increasing the distance from the start position using the local planner.
Algorithm 2: RRT main algorithm
1: V ← {x_init}; E ← ∅; i ← 0;
2: while i < N do
3:   G ← (V, E);
4:   x_rand ← Sample(i);
5:   i ← i + 1;
6:   (V, E) ← Extend(G, x_rand)
Algorithm 3: Extend function of the RRT algorithm
1: V′ ← V; E′ ← E;
2: x_nearest ← Nearest(G, x);
3: x_new ← Steer(x_nearest, x);
4: if ObstacleFree(x_nearest, x_new) then
5:   V′ ← V′ ∪ {x_new};
6:   E′ ← E′ ∪ {(x_nearest, x_new)};
7: return G′ = (V′, E′)
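Building on the primitive functions sketched above, a minimal Python version of Algorithms 2 and 3 could look as follows; testing the goal with a goal radius is an assumption of this sketch.

```python
import numpy as np

def rrt(x_init, x_goal, goal_radius, bounds, obstacles, n_iter=600):
    """Plain RRT loop (Algorithms 2 and 3) built on the primitives above.
    Returns the tree as a parent map and the vertex that reached the goal
    region, or None if no feasible path was found."""
    vertices = [tuple(x_init)]
    parent = {tuple(x_init): None}                     # edge set stored as a parent map
    for i in range(n_iter):
        x_rand = sample(bounds)                        # Sample(i)
        x_nearest = nearest(vertices, x_rand)          # Nearest(G, x_rand)
        x_new = tuple(steer(x_nearest, x_rand))        # Steer(x_nearest, x_rand)
        if obstacle_free(x_nearest, x_new, obstacles):
            vertices.append(x_new)
            parent[x_new] = x_nearest
            if np.linalg.norm(np.asarray(x_new) - np.asarray(x_goal, float)) <= goal_radius:
                return parent, x_new                   # feasible path found
    return parent, None

def extract_path(parent, v):
    """Walk the parent map back to x_init to recover the solution path."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]
```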

5.2.1. RRT* Algorithm

The RRT* algorithm adds vertices in the same way as the RRT (Algorithm 2), but includes two optimization procedures in the Extend function, shown in Algorithm 4. These optimizations connect vertices so as to obtain a path with minimum cost. The first one is performed after x_new is added to the vertex set of the tree. The function Near Vertices returns the set of vertices X_near. If the obstacle-free connection between x_new and one of the vertices of X_near has a lower cost than the path through x_nearest, then (x_near, x_new) is added to the edge set and the other connections are ignored. An example of this optimization is shown in Figure 6.
Algorithm 4: Extend function of the RRT* algorithm
1: V′ ← V; E′ ← E;
2: x_nearest ← Nearest(G, x);
3: x_new ← Steer(x_nearest, x);
4: if ObstacleFree(x_nearest, x_new) then
5:   V′ ← V′ ∪ {x_new};
6:   x_min ← x_nearest;
7:   X_near ← NearVertices(G, x_new);
8:   for all x_near ∈ X_near do
9:     if ObstacleFree(x_near, x_new) then
10:       c ← Cost(x_near) + Cost(Line(x_near, x_new));
11:       if c < Cost(x_nearest) + Cost(Line(x_nearest, x_new)) then
12:         x_min ← x_near;  // choose new parent for x_new
13:   E′ ← E′ ∪ {(x_min, x_new)};
14:   for all x_near ∈ X_near \ {x_min} do
15:     if ObstacleFree(x_near, x_new) then
16:       c ← Cost(x_new) + Cost(Line(x_near, x_new));
17:       if c < Cost(x_near) then
18:         x_parent ← Parent(x_near);  // rewire
19:         E′ ← E′ \ {(x_parent, x_near)};
20:         E′ ← E′ ∪ {(x_new, x_near)};
21: return G′ = (V′, E′)
The second optimization procedure checks if the cost of one of the remaining vertices of X near is lower when it is connected through the new vertex x new . If it is lower, then the edge ( P a r e n t ( x near ) , x near ) is deleted and the edge ( x new , x near ) is added to the edge set E , as shown in Figure 7.
The result is a tree in which each vertex is connected with the lowest cost found, yielding an optimized path. The result of the example is presented in Figure 8.
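For illustration, the following sketch implements the two optimization procedures of Algorithm 4 (choosing the lowest-cost parent for x_new, then rewiring the near vertices) on top of the primitives sketched in Section 5.2; the cost-to-come values are cached in a dictionary, and costs of the rewired vertices' descendants are not propagated, which is a simplification of this sketch.

```python
import numpy as np

def rrt_star_extend(vertices, parent, costs, x_rand, obstacles):
    """One RRT* extension (Algorithm 4): choose the lowest-cost parent for
    x_new, then rewire near vertices through x_new when that lowers their
    cost-to-come."""
    x_nearest = nearest(vertices, x_rand)
    x_new = tuple(steer(x_nearest, x_rand))
    if not obstacle_free(x_nearest, x_new, obstacles):
        return
    dist = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    X_near = near_vertices(vertices, x_new)
    # 1) choose parent: cheapest obstacle-free connection to x_new
    x_min, c_min = x_nearest, costs[x_nearest] + dist(x_nearest, x_new)
    for x_near in X_near:
        c = costs[x_near] + dist(x_near, x_new)
        if c < c_min and obstacle_free(x_near, x_new, obstacles):
            x_min, c_min = x_near, c
    vertices.append(x_new)
    parent[x_new], costs[x_new] = x_min, c_min
    # 2) rewire: route near vertices through x_new when it is cheaper
    for x_near in X_near:
        c = costs[x_new] + dist(x_new, x_near)
        if c < costs[x_near] and obstacle_free(x_new, x_near, obstacles):
            parent[x_near], costs[x_near] = x_new, c
```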
In order to decrease the time needed to find a feasible solution, and to accelerate the rate of convergence and optimization, we tested two variations of the RRT*. The modifications were made in the Sample function by changing the probability distribution of the generated random node. We call these versions RRT* Goal, RRT* Limits, and the combination of both, RRT* Goal Limits (RRT* GL).

5.2.2. RRT* Goal Algorithm

In Algorithm 5, we present the sample function of this variant. This function guides the exploration of the tree towards the goal region: we give a probability of 50% to sampling the goal region, while the other 50% of the samples are identically distributed samples from X_free. Once a feasible path has been found, all samples are drawn from X_free.
Algorithm 5: Sample(i, feasiblePath, X_free) function of the RRT* Goal algorithm
1: if (i MOD 2 = 0) OR (feasiblePath is TRUE)
2:   x_rand ← random(X_free);
3: else
4:   x_rand ← X_goal;
5: return x_rand

5.2.3. RRT* Limits Algorithm

Once a feasible path has been found, we increase the probability of drawing the random sample from a specific region of the space. This accelerates the rate of convergence towards an optimal, low-cost path.
As shown in Figure 9, we obtain the uniformly distributed random sample by delimiting the sampling range to the minimum and maximum coordinates (x, y, z) of the path found. This increases the density of random samples in the surroundings of the path, avoiding unnecessary optimizations in other sections of the tree. This is shown in Algorithm 6.
Algorithm 6: Sample(i, feasiblePath, V, X_free) function of the RRT* Limits algorithm
1: if feasiblePath is TRUE
2:   minL ← min(V)
3:   maxL ← max(V)
4:   x_rand ← random(X_free, [minL, maxL]);
5: else
6:   x_rand ← random(X_free);
7: return x_rand
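The two biased sampling strategies can be sketched as follows, reusing the sample primitive from Section 5.2; treating the goal region as a single point and taking the bounding box of the current path vertices are assumptions of this sketch.

```python
import numpy as np

def sample_goal_biased(i, feasible_path, bounds, x_goal):
    """RRT* Goal sampling (Algorithm 5): until a feasible path exists, every
    other iteration samples the goal region (here reduced to a single point)."""
    if i % 2 == 0 or feasible_path:
        return sample(bounds)                    # uniform sample from X_free
    return np.asarray(x_goal, dtype=float)       # sample from X_goal

def sample_limits(feasible_path, path_vertices, bounds):
    """RRT* Limits sampling (Algorithm 6): once a path is found, restrict the
    sampling range to the bounding box of the current path vertices."""
    if feasible_path:
        V = np.asarray(path_vertices, dtype=float)
        return sample((V.min(axis=0), V.max(axis=0)))   # [minL, maxL] box
    return sample(bounds)
```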

6. Experiment

The system was implemented on a laptop with the following characteristics: an Intel Core i7-4500 processor @ 1.90 GHz and 8.00 GB of RAM. The graphics card used was a GeForce GT 735 (NVIDIA, Santa Clara, CA, USA).
A real 3D image captured by a Kinect V2 was processed off-line for three different scenes in the workspace. The scenarios had one, two and three obstacles, respectively, between the start and goal points.
The experiment was divided into three parts: (a) the alignment of the Kinect workspace point cloud to the workspace coordinate frame; (b) the target point recognition; and (c) the path planning algorithm and its variations. During testing, we used a Cartesian robot to reach the target points and measure the obtained coordinates, which do not depend on the robot dynamics [36,37,38].

6.1. Point Cloud Alignment

As mentioned above, the method used for point cloud alignment was the registration of two data sets. We rotated the Kinect to test the effect of the transformation matrix on the original point cloud, as shown in Figure 10.
In Table 2, we present information on eight workspace boundary circles (landmarks), their centroid coordinates estimated from the Kinect coordinate frame, the real coordinates on the workspace, the calibrated coordinates after registration and the error percentage of each circle centroid coordinate.
Because all of the circles are located on the same plane, we can compute the average error percentage as the error of the workspace plane, in the robot coordinate frame, relative to the Kinect pose:
$$E_x\% = \frac{\mathrm{calibrated}_x - \mathrm{real}_x}{\mathrm{delta}_x} \times 100$$
where delta_x is the difference between the maximum and minimum values, along the x coordinate, of the workspace seen by the Kinect.
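A small numeric example of this error measure, using landmark 1 from Table 2, is given below; the spans delta_x, delta_y, delta_z are assumed values for illustration, not the measured workspace extents.

```python
import numpy as np

def error_percentage(calibrated, real, workspace_span):
    """E% = (calibrated - real) / delta * 100, evaluated per coordinate."""
    return (np.asarray(calibrated, float) - np.asarray(real, float)) \
        / np.asarray(workspace_span, float) * 100.0

# Landmark 1 from Table 2; the spans (mm) below are assumed for illustration.
print(error_percentage([-0.91, -0.52, -4.14], [-1.5, 0.1, 2.3], [1600.0, 420.0, 100.0]))
```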
We can see in Table 2 that our system obtains an aligned point cloud with high accuracy.

6.2. Target Points Recognition

In this section, Table 3 presents the locations, in millimeters, of the five circle centroids recognized by the algorithm on the xy plane, together with the corresponding error percentages.
The error was also calculated with respect to the robot workspace seen by the Kinect. The point locations were obtained with high accuracy.

6.3. Path Planning

Each algorithm was tested ten times to obtain the mean cost, the number of segments in the resulting path, the time to find the target point and the execution time of the algorithm over all iterations.
We tested each algorithm with 300, 600 and 1000 iterations in order to compare their results with respect to path optimization.
These results are shown in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12:
Figure 11, Figure 12 and Figure 13 show bar charts of the three experiments with 600 iterations, to visually compare the resulting values of the tested algorithms.
As can be seen in the results tables, the RRT* Goal algorithm significantly decreases the time needed to find a feasible solution (goal time), leaving more time and computational resources for path optimization. The RRT* Limits algorithm increases the density of the tree branches around the path, as seen in Figure 14d, which results in lower path costs with respect to the standard RRT*. The combination of both methods shows a significant improvement in the performance of the path planning algorithm, producing a lower-cost path without increasing the number of iterations. However, this method requires more time and therefore more computational resources to run. This disadvantage can be balanced by decreasing the number of iterations, obtaining similar path costs in less time.
The path produced by the standard RRT* has the fewest segments in most of the cases, which means that, in order to achieve a lower-cost path, the other algorithms need more segments to reduce the path cost.
Video results are provided in [39].

7. Conclusions

In this paper, we experimentally tested the performance and robustness of our system, including point cloud alignment, target point detection and the path planning algorithms.
Flexibility and energy efficiency are important features for the autonomous navigation of robots in a workspace. In our approach, flexibility is provided by the automatic alignment of the Kinect V2 point cloud to the robot coordinate frame using the ICP algorithm. Target point detection also adds flexibility to the system: the color-based threshold technique was sufficient for obtaining the centroid of the goal region of each target point, and showed high performance with the RGB source of the Kinect.
Several variations of the RRT algorithm were successfully tested in order to compare their results with respect to the autonomy and energy efficiency of robot navigation. We selected RRT* GL, which achieves a fast and optimal path between the start and goal points. The RRT* Goal algorithm reduces the time required to find a feasible solution, and the RRT* Limits algorithm improves the optimization by increasing the density of the tree branches in the path region.

Acknowledgments

This work is part of the project “Perception and Localization system for autonomous navigation of rotor micro aerial vehicle in GPS-denied environments (VisualNav Drone)” of the Research Center CICTE, directed by Wilbert G. Aguilar. The research assistant Stephanie G. Morales thanks Energypetrol S.A. for the encouragement and technical support during the testing of this work.

Author Contributions

Wilbert G. Aguilar directed the research; Wilbert G. Aguilar and Stephanie G. Morales conceived and designed the experiments; Stephanie G. Morales implemented and performed the experiments; Wilbert G. Aguilar and Stephanie G. Morales analyzed and discussed the results. Both authors wrote and revised the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Koren, Y. Robotics for Engineers; McGraw-Hill: New York, NY, USA, 1998. [Google Scholar]
  2. Cabras, P.; Rosell, J.; Pérez, A.; Aguilar, W.G.; Rosell, A. Haptic-based navigation for the virtual bronchoscopy. IFAC Proc. Vol. 2011, 44, 9638–9643. [Google Scholar] [CrossRef]
  3. Aguilar, W.G.; Angulo, C. Estabilización de vídeo en micro vehículos aéreos y su aplicación en la detección de caras. In Proceedings of the IX Congreso de Ciencia y Tecnología ESPE, Sangolquí, Ecuador, 28–30 May 2014.
  4. Gonzalez, R.; Safabakhsh, R. Computer vision techniques for industrial applications and robot control. Computer 1982, 15, 17–32. [Google Scholar] [CrossRef]
  5. Aguilar, W.G.; Angulo, C. Robust video stabilization based on motion intention for low-cost micro aerial vehicles. In Proceedings of the 11th International Multi-Conference on Systems, Signals & Devices (SSD), Barcelona, Spain, 11–14 February 2014.
  6. Vasishth, O.; Gigras, Y. Path planning problem. Int. J. Comput. Appl. 2014, 104, 17–19. [Google Scholar] [CrossRef]
  7. Henry, P.; Krainin, P.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments. In Proceedings of the 12th International Symposium on Experimental Robotics (ISER), Delhi, India, 18–21 December 2010.
  8. Thrun, S.; Burgard, W.; Fox, D. A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000.
  9. Gutmann, J.-S.; Fukuchi, M.; Fujita, M. 3D perception and environment map generation for humanoid robot navigation. Int. J. Robot. Res. 2008, 27, 1117–1134. [Google Scholar] [CrossRef]
  10. Oliver, A.; Kang, S.; Wunsche, B.; MacDonald, B. Using the Kinect as a navigation sensor for mobile robotics. In Proceedings of the Conference on Image and Vision Computing New Zealand, Dunedin, New Zealand, 26–28 November 2012.
  11. Benavidez, P.; Jamshidi, M. Mobile robot navigation and target tracking system. In Proceedings of the 6th International Conference on System of Systems Engineering, Albuquerque, NM, USA, 27–30 June 2011.
  12. Rao, D.; Le, Q.; Phoka, T.; Quigley, M.; Sudsang, A.; Ng, A.Y. Grasping novel objects with depth segmentation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 18–22 October 2010.
  13. Ali Shah, S.A.; Bennamoun, M.; Boussaid, F. A novel algorithm for efficient depth segmentation using low resolution (Kinect) images. In Proceedings of the IEEE 10th Conference on Industrial Electronics and Applications (ICIEA), Auckland, New Zealand, 15–17 June 2015.
  14. Liu, J.; Yang, J.; Liu, H.; Tian, X.; Gao, M. An improved ant colony algorithm for robot path planning. Soft Comput. 2016, 1, 1–11. [Google Scholar] [CrossRef]
  15. Glasius, R.; Komoda, A.; Gielen, S.C.A.M. Neural network dynamics for path planning and obstacle avoidance. Neural Netw. 2000, 8, 125–133. [Google Scholar] [CrossRef]
  16. Xin, D.; Hua-hua, C.; Wei-kang, G. Neural network and genetic algorithm based global path planning in a static environment. J. Zhejiang Univ. Sci. A 2005, 6, 549–554. [Google Scholar]
  17. Seraji, H.; Howard, A. Behavior-based robot navigation on challenging terrain: A fuzzy logic approach. IEEE Trans. Robot. Autom. 2002, 18, 308–321. [Google Scholar] [CrossRef]
  18. Kuffner, J.J.; LaValle, S.M. RRT-connect: An efficient approach to single-query path planning. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000.
  19. Devaurs, D.; Thierry, S.; Cortés, J. Efficient sampling-based approaches to optimal path planning in complex cost spaces. In Algorithmic Foundations of Robotics XI; Springer: Cham, Switzerland, 2015; pp. 143–159. [Google Scholar]
  20. Gammell, J.D.; Srinivasa, S.; Barfoot, T. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, USA, 14–18 September 2014.
  21. Lachat, E.; Hélene, M.; Tania, L.; Pierre, G. Assessment and calibration of a RGB-D camera (Kinect V2 Sensor) towards a potential use for close-range 3D modeling. Remote Sens. 2015, 7, 13070–13097. [Google Scholar] [CrossRef]
  22. Fankhauser, P.; Bloesch, M.; Rodriguez, D.; Kaestner, R.; Hutter, M.; Siegwart, R. Kinect v2 for mobile robot navigation: Evaluation and modeling. In Proceedings of the 2015 International Conference on Advanced Robotics (ICAR), Istanbul, Turkey, 27–31 July 2015.
  23. Hatledal, L.I. Kinect V2 SDK 2.0—Colored Point Clouds. Available online: http://laht.info/kinect-v2-colored-point-clouds/ (accessed on 20 November 2015).
  24. Aguilar, W.G.; Angulo, C. Real-time video stabilization without phantom movements for micro aerial vehicles. EURASIP J. Image Video Proc. 2014, 1, 1–13. [Google Scholar] [CrossRef] [Green Version]
  25. Aguilar, W.G.; Angulo, C. Estabilización robusta de vídeo basada en diferencia de nivel de gris. In Proceedings of the VIII Congreso de Ciencia y Tecnología ESPE, Sangolquí, Ecuador, 6–8 June 2013.
  26. Myronenko, A.; Song, X. Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2262–2275. [Google Scholar] [CrossRef] [PubMed]
  27. Eggert, D.W.; Lorusso, A.; Fisher, R.B. Estimating 3-D rigid body transformations: A comparison of four major algorithms. Mach. Vis. Appl. 1997, 9, 272–290. [Google Scholar] [CrossRef]
  28. The MathWorks Inc. Pcregrigid Documentation. 2015. Available online: http://www.mathworks.com/help/vision/ref/pcregrigid.html (accessed on 24 February 2016).
  29. Navon, E.; Miller, O.; Averbuch, A. Color image segmentation based on adaptive local thresholds. Image Vis. Comput. 2005, 23, 69–85. [Google Scholar] [CrossRef]
  30. Bruce, J.; Balch, T.; Veloso, M. Fast and inexpensive color image segmentation for interactive robots. In Proceedings of the IEEE/RSJ International Conference on intelligent Robots and Systems, Takamatsu, Japan, 31 October–5 November 2000.
  31. Sahoo, P.K.; Soltani, S.; Wong, A.K.C. A survey of thresholding techniques. Comput. Vis. Graph. Image Proc. 1988, 41, 233–260. [Google Scholar] [CrossRef]
  32. Heinz, K.; Hanson, W. Interactive 3D segmentation of MRI and CT volumes using morphological operations. J. Comput. Assist. Tomogr. 1992, 16, 285–294. [Google Scholar]
  33. Corke, P. Robotics, Vision & Control: Fundamental Algorithms in Matlab; Springer: Berlin/Heidelberg, Germany, 2011; Volume 73. [Google Scholar]
  34. Aarts, E.H.L.; Lenstra, J.K.; Wiley, J.; Johnson, D.; McGeoch, L.A. The traveling salesman problem: A case study in local optimization. Local Search Comb. Optim. 1997, 1, 215–310. [Google Scholar]
  35. Karaman, S.; Frazzoli, E. Incremental Sampling-based Algorithms for Optimal Motion Planning. In Robotics Science and Systems VI 104; The MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
  36. Aguilar, W.G.; Angulo, C. Real-time model-based video stabilization for microaerial vehicles. Neural Proc. Lett. 2016, 43, 459–477. [Google Scholar] [CrossRef]
  37. Aguilar, W.G.; Angulo, C.; Costa, R.; Molina, L. Control autónomo de cuadricópteros para seguimiento de trayectorias. In Proceedings of the IX Congreso de Ciencia y Tecnología ESPE, Sangolquí, Ecuador, 28–30 May 2014.
  38. Aguilar, W.G.; Angulo, C. Compensación y aprendizaje de efectos generados en la imagen durante el desplazamiento de un robot. In Proceedings of the X Simposio CEA de Ingeniería de Control, Barcelona, Spain, 1–2 March 2012.
  39. Aguilar, W.G.; Morales, S.G. RRT Optimal for UAVs. Available online: https://www.youtube.com/watch?v=Fe3AACHkcdQ (accessed on 12 July 2016).
Figure 1. Point cloud rigid registration algorithm [28].
Figure 2. Registration point sets. (a) In orange the feature points extracted from the image; (b) In green the corresponding workspace coordinate frame points.
Figure 3. Segmentation process.
Figure 4. Target points in the workspace are shown in red, while the workspace boundary circles are shown in green.
Figure 5. Growth of the tree. In gray the current tree vertices, in yellow the new vertex. (a) x_rand returned by the Sample function; (b) the tree is extended from the nearest vertex by a distance ε, within X_free, towards x_rand.
Figure 6. First optimization procedure of the Rapidly-exploring Random Trees variant (RRT*) algorithm. In gray the current tree vertices, in yellow the new vertex. (a) Near vertices to x_new; (b–e) in green, the path from x_init to x_new through each x_near vertex; (f) the new parent is chosen.
Figure 7. Second optimization procedure of the RRT* algorithm. In gray the current tree vertices, in yellow the new vertex. (a–c) In blue, the path from x_init to each vertex of X_near through x_new; (d) if the cost function is lower than the cost of x_near through its current parent, the rewire procedure is executed.
Figure 8. In gray the current tree vertices, in yellow the new vertex. (a) RRT standard resulting path; (b) RRT* optimal resulting path.
Figure 9. The edges of the tree are presented in blue. The feasible path is shown in red. The cube in black shows the limited space, for the random sample, generated on the RRT* Limits.
Figure 10. The black arrows represent the point cloud axes at the origin. (a) Point cloud in the Kinect coordinate frame; (b) point cloud in the robot coordinate frame.
Figure 11. Results of the algorithms for the one obstacle scenario and 600 iterations.
Figure 12. Results of the algorithms for the two obstacles scenario and 600 iterations.
Figure 13. Results of the algorithms for the three obstacles scenario and 600 iterations.
Figure 14. Comparison between RRT* algorithms in a simulation example (600 iterations) in a workspace with three obstacles between the start and goal point. The edges of the tree are presented in blue. The best path that reaches the goal is highlighted in red. (a) The RRT standard; (b) the RRT*; (c) the RRT* Goal; (d) the RRT* Limits and (e) the RRT* GL.
Table 1. Kinect V2 sensor characteristics and specifications. Data from [22].
Characteristics | Specifications
RGB camera resolution | 1920 × 1080 pixels
Depth camera resolution | 512 × 424 pixels
Field of view (h × v) | 70 × 60 degrees
Depth operating range | 0.5–4.5 m
Object pixel size | between 1.4 mm (@ 0.5 m range) and 12 mm (@ 4.5 m range)
Table 2. Results table of the point cloud alignment.
Landmark | Coord. | Kinect Coord. (mm) | Real Coord. (mm) | Calibrated Coord. (mm) | Error %
1 | x | 0.7 | −1.5 | −0.91 | 0.04
1 | y | 79.9 | 0.1 | −0.52 | −0.16
1 | z | −125.7 | 2.3 | −4.14 | −6.44
2 | x | 9.1 | 3.85 | −0.55 | −0.27
2 | y | 473.2 | 400.1 | 390.9 | −2.19
2 | z | −158.7 | −5.24 | −2.43 | 2.81
3 | x | 351.7 | 347.46 | 345.7 | −0.11
3 | y | 71.6 | −2.82 | −7.73 | −1.17
3 | z | −104.6 | 15.78 | 16 | 0.22
4 | x | 604.8 | 602.41 | 600.2 | −0.14
4 | y | 467.8 | 394.37 | 390.1 | −1.02
4 | z | −134.4 | 16.58 | 20.11 | 3.53
5 | x | 942.5 | 942.56 | 940.9 | −0.10
5 | y | 65.1 | −9.72 | −14.4 | −1.11
5 | z | −88.8 | 30.25 | 31.97 | 1.72
6 | x | 1308.3 | 1298.8 | 1297 | −0.12
6 | y | 453.6 | 386.97 | 377.5 | −2.25
6 | z | −136.1 | 20.9 | 17.5 | −3.40
7 | x | 1595.5 | 1591.8 | 1598 | 0.39
7 | y | 50.9 | −16.5 | −25.26 | −2.09
7 | z | −85.1 | 34.54 | 37.01 | 2.47
8 | x | 1603.4 | 1598.5 | 1596 | −0.15
8 | y | 452.5 | 383.1 | 374.5 | −2.05
8 | z | −133.5 | 21.54 | 21.08 | −0.46
Average | | | | | −0.2710
Table 3. Results table of the target points recognition.
Values | p1 x | p1 y | p2 x | p2 y | p3 x | p3 y | p4 x | p4 y | p5 x | p5 y
real | 252.2 | 300.1 | 497.9 | 151.2 | 988.52 | 208.37 | 1194.3 | 300.1 | 1194.3 | 300.1
calculated | 249.6 | 299 | 497 | 152.8 | 985.3 | 205.7 | 1187.8 | 299.2 | 1187.8 | 299.2
Error % | 0.16 | 0.26 | 0.06 | 0.38 | 0.20 | 0.64 | 0.41 | 0.21 | 0.07 | 0.05
Table 4. Results. One obstacle, 300 iterations a.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 225 | 1501.85 | 31 | 4.94 | 4.94
RRT* | 300 | 1469.5 | 26 | 4.43 | 15.81
RRT* Goal | 300 | 1333.7 | 26 | 0.91 | 13.61
RRT* Limits | 300 | 1474.4 | 28 | 5.98 | 17.09
RRT* GL | 300 | 1302.4 | 27 | 0.82 | 16.76
a RRT, Rapidly-exploring Random Trees; RRT*, RRT variant; RRT* GL, RRT* Goal Limits.
Table 5. Results. One obstacle, 600 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 225 | 1501.85 | 31 | 4.94 | 4.94
RRT* | 600 | 1431.8 | 25 | 5.55 | 61.57
RRT* Goal | 600 | 1289.9 | 26 | 0.86 | 54.73
RRT* Limits | 600 | 1455.2 | 27 | 17.7 | 60.73
RRT* GL | 600 | 1308.7 | 27 | 0.86 | 66.99
Table 6. Results. One obstacle, 1000 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 225 | 1501.8 | 31 | 4.94 | 4.94
RRT* | 1000 | 1403.4 | 23 | 8.31 | 80.25
RRT* Goal | 1000 | 1312.3 | 26 | 0.81 | 161.24
RRT* Limits | 1000 | 1366.3 | 26 | 28.54 | 191.2
RRT* GL | 1000 | 1299.1 | 26 | 0.86 | 198.01
Table 7. Results. Two obstacles, 300 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 190.5 | 1853.65 | 38 | 4.015 | 4.015
RRT* | 300 | 1576 | 29 | 5 | 14.68
RRT* Goal | 300 | 1477.2 | 30 | 1.15 | 11.93
RRT* Limits | 300 | 1603.6 | 32 | 9.56 | 13.88
RRT* GL | 300 | 1402.4 | 29 | 2.05 | 8.535
Table 8. Results. Two obstacles, 600 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 190.5 | 1853.65 | 38 | 4.015 | 4.015
RRT* | 600 | 1433.7 | 25 | 2.49 | 66.4
RRT* Goal | 600 | 1366 | 28 | 1.47 | 46.27
RRT* Limits | 600 | 1512 | 31 | 7.86 | 56.19
RRT* GL | 600 | 1359.6 | 28 | 1.19 | 65.4
Table 9. Results. Two obstacles, 1000 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 190.5 | 1853.65 | 38 | 4.015 | 4.015
RRT* | 1000 | 1398.4 | 24 | 2.41 | 88.61
RRT* Goal | 1000 | 1366.6 | 26.5 | 1.285 | 158.26
RRT* Limits | 1000 | 1425.4 | 30 | 7.22 | 185.17
RRT* GL | 1000 | 1332.2 | 27 | 1.5 | 187.46
Table 10. Results. Three obstacles, 300 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 264 | 1324.5 | 27.5 | 5.53 | 5.53
RRT* | 300 | 1435.6 | 24 | 4.69 | 13.36
RRT* Goal | 300 | 1056.6 | 20 | 1.25 | 14.47
RRT* Limits | 300 | 1205.2 | 25 | 4.4 | 17.57
RRT* GL | 300 | 1097.2 | 23 | 2.25 | 26.73
Table 11. Results. Three obstacles, 600 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 264 | 1324.5 | 27.5 | 5.53 | 5.53
RRT* | 600 | 1331.1 | 23 | 4.59 | 44.42
RRT* Goal | 600 | 1065.5 | 20 | 1.21 | 60.28
RRT* Limits | 600 | 1156.9 | 25 | 6.06 | 65.4
RRT* GL | 600 | 1040.2 | 21 | 1.43 | 91.58
Table 12. Results. Three obstacles, 1000 iterations.
Algorithm | # Iterations | Total Cost | # Segments | Goal Time | Total Time
RRT | 264 | 1324.5 | 27.5 | 5.53 | 5.53
RRT* | 1000 | 1137.2 | 20 | 2.95 | 165.3
RRT* Goal | 1000 | 1029.3 | 21 | 1.35 | 189.73
RRT* Limits | 1000 | 1073.8 | 25 | 10.84 | 220.16
RRT* GL | 1000 | 1001.7 | 21 | 1.61 | 270.71
