Article

An Aerial-Ground Robotic System for Navigation and Obstacle Mapping in Large Outdoor Areas

Centro de Automática y Robótica, UPM-CSIC, Calle José Gutiérrez Abascal 2, 28006 Madrid, Spain
*
Author to whom correspondence should be addressed.
Sensors 2013, 13(1), 1247-1267; https://doi.org/10.3390/s130101247
Submission received: 16 October 2012 / Revised: 24 December 2012 / Accepted: 14 January 2013 / Published: 21 January 2013
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Abstract

There are many outdoor robotic applications in which a robot must reach a goal position or explore an area without previous knowledge of the environment around it. Additionally, other applications (such as path planning) require the use of known maps or previous information about the environment. This work presents a system composed of a terrestrial and an aerial robot that cooperate and share sensor information in order to address those requirements. The ground robot is able to navigate in an unknown large environment aided by visual feedback from a camera on board the aerial robot. At the same time, the obstacles are mapped in real-time by combining the information from the camera with the positioning system of the ground robot. A set of experiments was carried out with the purpose of verifying the system's applicability. The experiments were performed in a simulation environment and outdoors with a medium-sized ground robot and a mini quad-rotor. The proposed robotic system shows outstanding results in simultaneous navigation and mapping applications in large outdoor environments.


1. Introduction

The navigation of a mobile robot can be described as the problem of finding a suitable and collision-free motion between two poses (position and orientation) of the robot, while obstacle mapping consists of using the robot's sensing capabilities to obtain a representation of unknown obstacles that is useful for navigation [1]. This work presents an alternative approach to these two problems, using a system that combines an aerial and a ground robot and relies only on the localization system of the terrestrial robot (UGV) and the image from a single camera on board the aerial robot (UAV).
The localization and obstacle detection techniques needed for ground robot navigation systems can use relative or absolute sensors. Relative sensors are those that are usually internal to the robot, whose measurements are given with respect to another position or state. Absolute sensors provide a measurement with respect to an external or global reference frame [2]. The idea presented here is to use a vision system on board the aerial robot as a relative sensor, and to fuse this information with the absolute position of the ground robot in order to obtain an absolute measurement of the position of the obstacles. By doing this, it is possible to obtain a mechanism for collaborative navigation and obstacle avoidance that takes advantage of the heterogeneity of a mixed robotic system.
There are two main objectives for this system. The first is to obtain the relative distance from the UGV to any unknown obstacle surrounding it, and in that way guarantee safe navigation towards the goal or way-point. The second is to obtain the absolute position of all obstacles detected during navigation and to build a map with their localization, i.e., a list of the geo-referenced positions of the obstacles. Some previous works have used a combination of aerial and ground robots to approach similar objectives [3–5], but, to our knowledge, no previous work has used a single on-board aerial camera for obstacle detection and avoidance, nor has obstacle mapping been performed by fusing an aerial image with the localization of the UGV.
One of the advantages of using an aerial visual navigation system is that the UGV's field of view (FOV) becomes dynamic. This means that, by controlling the UAV height and relative position, the UGV can perform a pseudo-zoom on an obstacle or any interesting object, as well as reconnaissance of areas outside its own FOV. Moreover, it is also possible to identify rugged terrain, floor openings or negative obstacles, and other unexpected navigation obstructions in the UGV's surroundings. This collaboration ensures the UGV's safety while it performs other inspection tasks. Finally, the system does not require previous knowledge of the environment and can cover large navigation perimeters.
The collision-free navigation system has been developed in several phases. In the first step, the UGV and the obstacles in the robot pathway are identified by processing the aerial image; then, techniques based on potential fields are applied to enable simple navigation and obstacle avoidance. After that, the proposed architecture allows a local map to be built and a geo-referenced position to be obtained for each obstacle seen by the UAV. Each obstacle found can also be memorized, so that a global map with all the obstacles can be built. This enables the system to merge reactive and deliberative methods.
The outline of this article is as follows: After this brief introduction, Section 2 reviews some related works. Section 3 introduces the addressed challenges. The techniques used to estimate the UGV's pose and the obstacle position are described in Section 4. Then, in Section 5, the UGV navigation and control systems used are explained. After that, the experiments performed and the results obtained both in simulations and with real robots are presented in Section 6. Finally, the conclusions are presented.

2. Related Work

Mobile robotic systems, both aerial and terrestrial, have been studied and developed over the years for several civil and military purposes. Some of those applications are focused on using mobile robots to help or substitute humans in tedious or dangerous tasks, as well as to survey and patrol large unstructured environments. This task is one of the objectives of the ROTOS project in which this work is framed.
In order to perform perimeter surveillance, a robot must be able to generate a trajectory to explore, or to navigate from an initial point to a final point, without colliding with other vehicles or obstacles. This autonomous navigation is one of the most ambitious issues in robotics research.
Concerning visual navigation, many reactive and deliberative approaches have been presented so far, e.g., in structured environments using white line recognition [6], in corridor navigation using View-Sequenced Route Representation [7], or with more complex techniques combining visual localization with the extraction of valid planar regions [8], or combining visual and navigation techniques to perform visual navigation and obstacle avoidance [9]. Some works integrate and fuse vision data from the UAV and UGV to improve target tracking [3]; another work [10] presents uncertainty modeling and observation-fusion approaches that produce considerable improvements in geo-location accuracy. A further work [11] presents a comparative study of several target estimation and motion-planning techniques and remarks on the importance (and the difficulty) of maintaining a consistent view of moving targets with a single UAV.
By merging those capabilities and characteristics, it is possible to develop unique sensing and perception for a collaborative system. The authors of [4] focus on a multi-robot system based on a vision-guided autonomous quad-rotor; they describe how the quad-rotor takes off from, lands on and tracks the UGV, which is equipped with two LEDs and a flat pattern on its surface. However, the quad-rotor does not provide information to the UGV about the environment. On the other hand, [12] presents a motion-planning and control system based on visual servoing (i.e., the use of feedback from a camera or a network of cameras) for a UGV without on-board cameras, although not specifically with a UAV. Work [5] even proposes a hierarchy in which several UAVs with aerial cameras are used not only to monitor, but also to command a swarm of UGVs.
Some of those studies have been tested in several different contexts, such as environment monitoring [13], pursuit-evasion games [14], fire detection and fighting [15], multi-robot localization in highly cluttered urban environments [16], de-mining [17], and other multi-purpose collaborative tasks [18].

3. Problem Formulation and Solving

In this section, we introduce the problem from a theoretical perspective. In the first place, a geometric description of the problem is given, together with the relationships between the different reference frames. Then, a solution is proposed based on the aerial–ground system kinematics and some assumptions. As mentioned before, there are two main objectives of the aerial–ground system:
  • Obstacle avoidance: provide collision-free navigation for the UGV during the execution of its mission through visual feedback from a low-cost mini UAV.
  • Obstacle mapping: obtain the global position of the obstacles and build a geo-referenced map from them.

3.1. Coordinate Frames and Definitions

In the first place, the coordinate frames of all physical bodies within the workspace are defined using the right-hand rule. This step is important, since these definitions are used in the development of the guidance, navigation and control (GNC) sub-system. The reference frames adopted by the aerial–ground robotic system are shown in Figure 1.
The nomenclature used for the coordinate systems is as follows: W (world frame), G (ground frame), O (obstacle frame), I (image plane), P (pixel plane), C (camera frame), and A (aerial frame). All the coordinate frames are defined as three-dimensional (3D), with the exception of the image plane and the pixel plane, which are two-dimensional (2D).
In this work, some assumptions have been made. The goal is that the UAV follows the UGV and hovers at a minimum height above it. This minimum height must ensure that the UGV, as well as its surroundings, is always within the camera field of view (FOV). In this way, any nearby obstacle can easily be identified. Therefore, it is assumed that the UAV has a GNC sub-system that enables it to follow and hover above the UGV. Moreover, only one obstacle is identified at a time in the camera FOV.

3.2. Sensor Geometric Model

The sensor can be described by a physical model (more complex, involving highly non-linear equations) or by an analytical model (approximate polynomial functions). Additionally, the sensor model, i.e., the digital camera model, can be expressed either forward (image to ground) or inversely (ground to image). In this work, the forward physical model is considered. Let us start from the pinhole camera model in Equation (1), where (u, v, 1)^T is a 2D point in the pixel frame and (X, Y, Z, 1)^T a 3D point in the workspace of the robot. The perspective projection is scaled by a factor s.
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K} \underbrace{\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}}_{[R|T]} \underbrace{\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}}_{P} \qquad (1)
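As a side illustration of Equation (1), the short Python sketch below projects a 3D world point into pixel coordinates; the intrinsic and extrinsic values are placeholders chosen for the example, not the calibration of the camera used in this work.

import numpy as np

# Illustrative intrinsic matrix K (focal lengths and principal point, in pixels).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative extrinsics [R|T]: a camera looking straight down from 5 m height.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
T = np.array([[0.0], [0.0], [5.0]])
RT = np.hstack((R, T))                      # 3 x 4 extrinsic matrix

P = np.array([1.0, -1.0, 0.0, 1.0])         # homogeneous 3D point on the ground

p = K @ RT @ P                              # s * (u, v, 1)^T
u, v = p[0] / p[2], p[1] / p[2]             # divide by the scale factor s
print(u, v)                                 # approx. (460.0, 380.0)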

3.2.1. Intrinsic Parameters

The intrinsic parameters of the camera are contained in the first matrix of Equation (1), denoted as K. With these parameters, a relationship can be established between the pixel coordinates in the image and the corresponding position in the camera reference frame. The intrinsic parameters of interest (known a priori from the camera specifications) are: f, the focal length of the camera; cx and cy, the coordinates of the principal point of the image (i.e., the pixel coordinates of the image center); and the lens distortion coefficients (explained in Section 3.2.3).

3.2.2. Extrinsic Parameters

The extrinsic parameters, on the other hand, define the camera position and orientation with respect to the world reference system. The obstacle position within the world coordinate frame can be obtained in two ways:
  • Obtain the obstacle position in the world coordinate frame by applying all geometric transformations (i.e., the forward camera model) from the pixel plane to the world reference frame.
  • Obtain the distance from the UGV to the obstacle in meters in the image plane, and compute the obstacle position in the world reference frame from the UGV position and orientation in the world reference frame (given by a GPS). In this case, there is no need to translate the camera coordinates to the world frame.
The proposed approach employs the second methodology. Thus, the only extrinsic parameter that is needed is the camera orientation, which allows geometric corrections to be applied to the image acquired by the camera on board the UAV.
From Figure 1, it can be noticed that the camera reference frame is displaced from the aerial (UAV) reference frame along the z-axis. This displacement represents the distance from the camera to the UAV center of gravity (CoG). The camera position with respect to the UAV can be expressed as
\begin{bmatrix} x_C^A \\ y_C^A \\ z_C^A \end{bmatrix} = \begin{bmatrix} x_A \\ y_A \\ z_A \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ a \end{bmatrix} \qquad (2)
Furthermore, the UAV position is already given in world coordinates (i.e., provided by the GPS). Thus, the camera position in the world reference frame is given again by Equation (2).
The UGV position and heading, as well as the obstacle position, are extracted in the pixel frame by applying computer vision algorithms and techniques, and projected into the image reference frame. The obstacle position or the UGV position can be obtained by applying the simple perspective projection in Equation (3); their position relative to each other, obtained by a simple subtraction, can be used when the navigation is purely reactive.
\begin{bmatrix} x_O^I \\ y_O^I \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \end{bmatrix} \begin{bmatrix} \cos\theta & \sin\phi\sin\theta & \cos\phi\sin\theta \\ 0 & \cos\phi & -\sin\phi \\ -\sin\theta & \sin\phi\cos\theta & \cos\phi\cos\theta \end{bmatrix} \begin{bmatrix} x_p z_A \\ y_p z_A \\ 1 \end{bmatrix} \qquad (3)
The angle between the obstacle and the UGV in the UGV's reference frame, denoted as γ, can be obtained by subtracting the UGV heading in the camera plane denoted as α from the angle of the obstacle, also on the camera plane and denoted as θ. Both of those angles can be obtained by using Equation (3). This is better illustrated in Figure 2.
Finally, the last objective of the system is to build a global map with all the obstacles found in the path of the UGV. Each obstacle identified by the UAV camera is geo-referenced and marked on this global map. Thus, the obstacle position in the world coordinate frame is given by
\begin{bmatrix} x_O^W \\ y_O^W \\ z_O^W \end{bmatrix} = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_O^I - x_G^I \\ y_O^I - y_G^I \\ z_O^I - z_G^I \end{bmatrix} + \begin{bmatrix} x_G^W \\ y_G^W \\ z_G^W \end{bmatrix} \qquad (4)
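To make the chain from image measurements to map entries concrete, the following sketch applies the rotation and translation of Equation (4) in Python; the function and variable names are ours, and the numeric values are purely illustrative.

import numpy as np

def obstacle_world_position(obst_I, ugv_I, ugv_W, gamma):
    # Rotate the image-frame offset between obstacle and UGV by gamma
    # (theta - alpha, see Figure 2) and add the UGV world (UTM) position,
    # following Equation (4). Positions are 3-element arrays in meters.
    Rz = np.array([[np.cos(gamma), -np.sin(gamma), 0.0],
                   [np.sin(gamma),  np.cos(gamma), 0.0],
                   [0.0,            0.0,           1.0]])
    return Rz @ (obst_I - ugv_I) + ugv_W

# Illustrative values: obstacle 2 m ahead and 1 m to the side of the UGV in
# the image frame, UGV at an arbitrary UTM position, gamma of 15 degrees.
obst_I = np.array([2.0, 1.0, 0.0])
ugv_I = np.array([0.0, 0.0, 0.0])
ugv_W = np.array([441620.0, 4476750.0, 0.0])
print(obstacle_world_position(obst_I, ugv_I, ugv_W, np.deg2rad(15.0)))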

3.2.3. Lens Distortions

Additionally, the Brown-Conrady model is applied to rectify the lens distortion. The lens distortion model is given by two sets of distortion coefficients: radial (k1, …, kn) and tangential (p1, …, pn). Thus, considering (x′p, y′p)^T an undistorted image point and (xp, yp)^T a distorted image point, the rectified coordinates are given by
\begin{bmatrix} x'_p \\ y'_p \end{bmatrix} = \begin{bmatrix} x_p (1 + k_1 r^2 + k_2 r^4) + 2 p_1 x_p y_p + p_2 (r^2 + 2 x_p^2) \\ y_p (1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y_p^2) + 2 p_2 x_p y_p \end{bmatrix} \qquad (5)
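A direct transcription of Equation (5) into Python is given below; the coefficient values are placeholders and would normally come from an offline camera calibration.

def rectify_point(xp, yp, k1, k2, p1, p2):
    # Brown-Conrady radial plus tangential correction of Equation (5),
    # applied to a distorted normalized image point (xp, yp).
    r2 = xp * xp + yp * yp
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xu = xp * radial + 2.0 * p1 * xp * yp + p2 * (r2 + 2.0 * xp * xp)
    yu = yp * radial + p1 * (r2 + 2.0 * yp * yp) + 2.0 * p2 * xp * yp
    return xu, yu

# Placeholder coefficients, not the calibration of the camera used here.
print(rectify_point(0.31, -0.12, k1=-0.21, k2=0.05, p1=0.001, p2=-0.0005))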

3.3. Features Extraction

As already mentioned, the features from the camera on board the UAV are extracted by applying computer vision techniques. The UGV position and orientation in the pixel plane are obtained through the pseudo-code shown in Algorithm 1. In this case, a polygon is defined as a geometric shape with more than three vertices. The procedure works by finding, in the processed image, a set of contours matching the ground robot shape, or any good polygonal feature describing the robot. The image processing sequence is exemplified in Figure 3: a threshold is applied to a gray-scale image, followed by a Canny edge detector. Once the polygon is identified, its centroid can easily be obtained. Moreover, the UGV orientation relative to the image is obtained through the contour's principal moments of inertia.

Algorithm 1 UGV pose extraction.

 1: Contours ← FindContours(Image<gray>)
 2: if 3 < length(Contours) then
 3:   P ← GetPolygon(Contours)
 4:   Point⟨px, py⟩ ← Centroid(P)
 5: end if
 6: M ← GetMoments(P)
 7: α ← Angle(M)
 8: (x_G^P, y_G^P) ← GetCentroid(P)
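The following Python/OpenCV sketch shows one possible implementation of Algorithm 1; the threshold and polygon-approximation parameters are illustrative (they would need the manual tuning discussed below), and the helper name is ours.

import cv2
import numpy as np

def ugv_pose_from_image(gray):
    # Sketch of Algorithm 1: extract the UGV centroid and heading (alpha)
    # in the pixel plane from a grayscale aerial image.
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(binary, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        poly = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(poly) > 3:                    # polygon with more than 3 vertices
            m = cv2.moments(poly)
            if m["m00"] == 0:
                continue
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # Heading from the second-order central moments of the contour.
            alpha = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
            return (cx, cy), alpha
    return None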

Concerning obstacles, those with round and square shapes (e.g., silos, water or gas tanks, containers) have been considered for identification in this work. Square features are extracted similarly to the aforementioned procedure, whereas round top shapes are detected using the Hough transform. It should also be highlighted that each obstacle is identified from its top view, since the camera is pointing downwards (see the pseudo-code in Algorithm 2).

Algorithm 2 Obstacle position extraction.

 1: C ← HoughCircles(Image<gray>)
 2: (x_O^P, y_O^P) ← Centroid(C)
 3: r ← Radius(C)
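A possible OpenCV counterpart of Algorithm 2 is sketched below; all the Hough parameters are illustrative defaults rather than the values tuned for the experiments in this paper.

import cv2

def round_obstacles_from_image(gray):
    # Sketch of Algorithm 2: detect round-topped obstacles with the Hough
    # circle transform and return their pixel centers and radii.
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=40, param1=100, param2=40,
                               minRadius=10, maxRadius=120)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in circles[0]]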

Before the experiments, some parameters, such as the threshold, must be tuned manually according to the weather conditions, in particular the light exposure.

4. Position Estimation

This section describes the techniques used to estimate the UGV's pose and the obstacle positions. First, the UGV's pose in a global coordinate frame is estimated by fusing the odometry, IMU and GPS readings using an Extended Kalman Filter (EKF). After that, another Kalman Filter is used to estimate the position of the obstacles using the results of the image processing algorithm.

4.1. Ground Robot Pose Estimation

The vehicle pose estimation problem can be defined as the calculations necessary to estimate the vehicle state based on the readings from several sensors. This problem is solved by using an EKF, which produces at time k a minimum mean squared error estimate ŝ(k|k) of a state vector s(k). This estimate is the result of fusing a prediction of the state estimate ŝ(k|k − 1) with an observation z(k) of the state vector s(k).
  • Vehicle Model
    The vehicle model represents its three-dimensional pose (position and orientation). This pose can be parametrized as s = [t, Ψ]^T = [x, y, z, φ, θ, ψ]^T, where t = [x, y, z]^T contains the Universal Transverse Mercator (UTM) coordinates and the relative height of the vehicle, and Ψ = [φ, θ, ψ]^T contains the Euler angles about the X, Y and Z axes, also known as roll, pitch and yaw.
    In order to use an EKF to estimate the pose of the robot, it is necessary to express it as a multivariate Gaussian distribution s ∼ N(μ, Σ). This distribution is defined by a six-element column vector μ, representing the mean values, and a six-by-six symmetric matrix Σ, representing the covariance.
    \mu = \big[ E[x], E[y], E[z], E[\varphi], E[\theta], E[\psi] \big]^T, \qquad \Sigma = \begin{bmatrix} \mathrm{Cov}[x,x] & \mathrm{Cov}[x,y] & \cdots & \mathrm{Cov}[x,\psi] \\ \mathrm{Cov}[y,x] & \mathrm{Cov}[y,y] & \cdots & \mathrm{Cov}[y,\psi] \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{Cov}[\psi,x] & \mathrm{Cov}[\psi,y] & \cdots & \mathrm{Cov}[\psi,\psi] \end{bmatrix}
    Now it is necessary to define a non-linear conditional probability density function f(s(k), u(k + 1)), which represents the probability of the predicted position given the current state vector s(k) and the control vector u(k + 1).
    s ( k + 1 ) = f ( s ( k ) , u ( k + 1 ) )
    It is also very useful to define a global UGV transformation TUGV consisting of a rotation matrix R(Ψ) obtained from the Euler angles, and a translation t in the global reference frame.
    T UGV = [ R ( Ψ ) t ]
  • Measurement Models
    The vehicle pose is updated according to the readings from three different sensors: the internal odometry of the UGV, an IMU sensor and a GPS. Thus, it is necessary to create a model that links the measurements of each sensor with the global position of the mobile robot. As was done with the pose of the vehicle, the observations of the sensors need to be expressed as Gaussian distributions; this means that they should provide a vector of mean values and a symmetric covariance matrix.
    As it was done for the estimated UGV pose, a transformation Ti is defined for each measurement. These transformations are:
    Odometry
    The odometry provides a relative position and orientation constraint. To be able to use these measurements in a global reference frame, the transformation between two successive readings of the odometry (Todomread(k) → Todomread(k+1)) is calculated, and the resulting transformation is applied to the previous estimated position of the UGV.
    T_{odom}(k+1) = T_{UGV}(k) \, T_{odom_{read}}(k)^{-1} \, T_{odom_{read}}(k+1)
    GPS.
    The readings from the GPS are converted to UTM using the equations from USGS Bulletin 1532 [19], and they are used to provide a global position constraint. However, the GPS does not provide information about the orientation of the robot, so only the position is taken into account.
    T_{GPS}(k+1) = \big[ R(\Psi_0) \mid t_{GPS_{read}}(k+1) \big]
    IMU.
    The IMU readings are pre-processed to fuse the gyroscopes, accelerometers and magnetometers in order to provide a global constraint on the orientation of the UGV; no position or translation constraints are obtained from this sensor.
    T_{IMU}(k+1) = \big[ R(\Psi_{IMU_{read}}) \mid t_0 \big]
    With the transformations defined, it is possible to define a probability density function for each one of them. In all cases, the H matrix (necessary to compute the EKF estimation) is an identity matrix of dimension six. For the GPS and the IMU, the covariances associated with the rotation or translation components that are not provided are replaced by very high values (a minimal sketch of this update step is given below).
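The following Python sketch illustrates this idea with a plain EKF correction step; it is not the Bayesian Filter Library implementation used in this work, and the state layout, variances and measurement values are assumptions made for the example.

import numpy as np

def ekf_update(mu, sigma, z, r_diag):
    # EKF correction for the 6-DOF pose [x, y, z, roll, pitch, yaw] with H = I6.
    # Components not observed by a sensor are given a very large variance so
    # that they barely influence the corrected estimate.
    H = np.eye(6)
    R = np.diag(r_diag)
    S = H @ sigma @ H.T + R                  # innovation covariance
    K = sigma @ H.T @ np.linalg.inv(S)       # Kalman gain
    mu_new = mu + K @ (z - H @ mu)
    sigma_new = (np.eye(6) - K @ H) @ sigma
    return mu_new, sigma_new

# GPS-like update: position variances in m^2, orientation treated as unknown.
mu, sigma = np.zeros(6), np.eye(6)
z_gps = np.array([441620.3, 4476750.8, 0.1, 0.0, 0.0, 0.0])
mu, sigma = ekf_update(mu, sigma, z_gps, [4.0, 4.0, 9.0, 1e6, 1e6, 1e6])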
These models and the Kalman Filter itself are implemented using the Bayesian Filter Library [20], which is fully integrated in the ROS environment. The estimated pose is updated at a pre-defined frequency with the data available at that time; this allows an estimation of the pose to be made even if one sensor stops sending information. Also, if the information is received after a time-out, it is disregarded.
Figure 4 shows a test trajectory performed with the robot in a simulated environment. The positions were translated to a common reference frame in order to be able to compare them. It can be observed that the EKF performs a correction of the position from the odometry, thereby reducing the mean square error in comparison with the real position obtained directly from the simulator.

4.2. Transformations and Obstacle Pose Estimation

Once the pose of the UGV is estimated, it is possible to apply the processing and transformations described in Section 3 to obtain the global position of the obstacle. For each image processed, a transformation like the one described in Equation (4) is obtained and applied to the estimated position of the robot. This produces several measurements of the position of each obstacle detected. Since the global position of the UGV is obtained in UTM coordinates, the global position of the obstacles is given in the same reference frame. All of the positions are stored, and then the mean value and the standard deviation are calculated. Finally, a table with the mean values and standard deviations is produced.
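The bookkeeping just described is simple enough to show in a few lines; the observation values below are made up for illustration, not taken from the experiments.

import numpy as np

# Each processed frame yields one UTM observation (easting, northing) of an
# obstacle; the map stores the mean and standard deviation of all of them.
observations = np.array([[441637.5, 4476756.1],
                         [441638.0, 4476756.7],
                         [441637.9, 4476756.0]])

mean_pos = observations.mean(axis=0)
std_pos = observations.std(axis=0, ddof=1)
print("obstacle at", mean_pos, "+/-", std_pos, "(UTM, m)")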

6. Experiments

This section presents the results obtained with the proposed aerial–ground system. The approaches described in the previous sections were implemented first in a simulation environment and then with real robots. The software architecture used allows the same implementation to run in both environments, but it was necessary to use a realistic simulation of the entire system. First, a quad-rotor aerial robot model was used, as well as a skid-steering ground robot. Sensor simulations including typical error sources were added to both robot models, as well as the model of the camera on board the UAV. By doing this, we ensure that the simulated and the real results are consistent, and that the algorithms can be transferred to the real robots. The simulated and real environments are shown in Figure 6.

6.1. Simulations

Two main tests have been performed in the simulated environment. The first one consists of a single obstacle detection and avoidance maneuver; it is done to check each part of the proposed solution and thus validate it.
Figure 7 shows the trajectory performed for the first test. The position of the UGV in UTMs is obtained from the Extended Kalman Filter. The position of the obstacle is calculated from the aerial image and the corresponding transformations, as was described in Section 3. The total distance covered was 16.9 meters and was executed in 25 seconds.
Figure 8 shows how each position was obtained. A transformation from the UGV position to the obstacle position is computed according to Equation (4) and is represented by blue arrows. The resulting obstacle position in the world coordinate frame is represented by red circles. Once the obstacle is left behind, its position is no longer of interest, nor is it taken into account. It can be observed that there is some dispersion in the observations of the obstacle position, so it is of interest to have a characterization of the resulting obstacle positions.
In Figure 9, the real obstacle position and the positions obtained with the system are translated to a common reference frame and compared. The mean square error and the standard deviation were calculated and are shown in Table 1. Both values are within acceptable error ranges: the mean square error is less than 20% of the obstacle diameter, which is 1 m, on the X axis and about 30% on the Y axis.
Once the obstacle position is obtained, the navigation algorithm uses this information to perform the avoidance maneuver according to the seek and avoid behaviors described in Section 5.2. The outcome of this navigation algorithm is the velocity command in each time step. The command is translated to the UGV reference frame, converted to linear and angular velocity, and sent to the robot's control system. Figure 10 shows the velocity commands generated for this test, together with the UGV's trajectory and the obstacle position.
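As a rough illustration of this last step, the sketch below converts a desired world-frame velocity vector, such as the one produced by the seek and avoid behaviors, into linear and angular velocity commands for the UGV; the gain and the 0.3 m/s saturation are placeholders rather than the values used in the experiments.

import numpy as np

def velocity_command(v_world, ugv_yaw, v_max=0.3, k_ang=1.0):
    # Convert a desired world-frame velocity vector into a (linear, angular)
    # command for a skid-steering UGV by aligning the heading first.
    desired_heading = np.arctan2(v_world[1], v_world[0])
    heading_error = np.arctan2(np.sin(desired_heading - ugv_yaw),
                               np.cos(desired_heading - ugv_yaw))
    linear = min(np.hypot(v_world[0], v_world[1]), v_max)
    angular = k_ang * heading_error
    return linear, angular

# Example: desired velocity pointing north-east while the UGV heads east.
print(velocity_command(np.array([0.2, 0.2]), ugv_yaw=0.0))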
For the second test, three way-points were defined, in an environment with six cylindrical obstacles. While visiting the way-points, some obstacles were detected, and the corresponding avoidance was executed if necessary. Figure 11 shows the full trajectory, as well as the obstacles detected. Five of the six obstacles were detected, and all three way-points were reached successfully.
The position is given in UTM coordinates, as explained in Section 4.2. Since the values obtained for the standard deviation are in all cases less than one meter, it can be concluded that these data provide a set of obstacle positions that can be used by the navigation system. Table 2 shows the mean value and standard deviation of the five obstacles detected during the test.

6.2. Real Environment

A set of tests was designed and implemented in order to check the feasibility of the system outside the simulated environment. The first test was oriented to check the inter-process communications. Therefore, a set of hovering flights was performed, and data from both the UGV and the UAV were acquired in real-time using the middle-ware and software architecture developed for this system [21].
The second test was carried out in an outdoor environment. Its objective was to obtain real images from the UAV and test the feature extraction algorithms. To facilitate the tests, a platform with a geometric shape pattern was mounted on the UGV. The platform has two main capabilities: it can be used for take-off and emergency landings, and it can transport the UAV in case it runs out of battery. An additional advantage is that the platform design makes it easier to track the UGV and distinguish it from other features on the ground. Finally, the navigation algorithm for the UGV was tested using a fixed target and a virtual obstacle position. A set of images from those previous tests is shown in Figure 12.
During the tests described previously, the image acquisition rate and processing times, as well as the telemetry data from both the UGV and the UAV, were measured. Finally, the data processing and the commands sent to the navigation system were also measured in terms of their time periods. The results of all those measurements are shown in Table 3.
It can be observed that the image streaming runs at an average frequency of 10 Hz, and that all the other processes run at that frequency or faster. In accordance with the software architecture (defined in [21]), the UGV's control and navigation systems run on the UGV on-board computer, and the image acquisition and processing are done on the base-station PC. Also, the inter-process communication core is handled by the ROS middle-ware framework, which enables the system to work at the same frequency as the image streaming. The results from those previous tests show that the communications and image processing algorithms are fast enough to perform obstacle-free navigation in real time.
The last experiment was carried out with the full aerial–ground system. A set of obstacles was placed at different positions in the robot workspace. Then, a set of pre-defined, fixed targets was given to the UGV, chosen so that reaching them required avoiding the previously placed obstacles.
The obstacle-avoidance maneuver was executed using the steering behaviors algorithm. The UAV was manually controlled in hovering mode over the UGV, and images from the aerial vehicle were acquired and processed using the feature extraction algorithm. The positions of the UGV and the obstacle were extracted from the image, and the information on the obstacle positions was sent to the UGV navigation system. Figure 13 shows a sequence of aerial images obtained during a trajectory, with the objects identified in each frame; it also shows the trajectory obtained from the odometry of the UGV. The variables measured during the experiment are shown in Table 4.
The obstacles were successfully detected and avoided using the proposed system; the mean position and standard deviation of the observations are shown in Table 5. It should be emphasized that the error in the estimated position of the obstacles was less than 0.15 m in the last experiment. Moreover, the UGV maintained a mean safe distance of 0.3 m from the obstacles. Therefore, despite the error in the obstacles' position, the UGV is unlikely to collide, thanks to the safe distance kept from the obstacles.

7. Concluding Remarks

A hybrid robotic system has been designed and implemented in order to provide a safe navigation system for a UGV, using the aerial image from a camera on board the UAV as the only source of information about the environment. The system is able to perform local real-time navigation and exploration in large semi-structured environments; it can also build near-accurate maps with the absolute positions of the obstacles found in its path. These maps can be used for local path planning or be fed back to other robots or mission planners. The geographic positions of the obstacles are obtained through an original fusion method employing both the real-time aerial image and the UGV absolute position provided by the GPS.
The robustness of the system was checked according to the mission requirements. Therefore, a set of experiments were done both in a simulation environment and an outdoor scenario with real robots. The ground robot moved around unknown and cluttered environments without colliding.
The results obtained are encouraging to continue researching and extend the potential of this collaborative robotic system.

Acknowledgments

This work was supported by the Robotics and Cybernetics Group at Technical University of Madrid (Spain), and funded under the projects ROTOS: Multi-robot system for outdoor infrastructures protection, sponsored by Spanish Ministry of Education and Science (DPI2010-17998), and ROBOCITY 2030, sponsored by the Community of Madrid (S-0505/DPI/000235).

References

  1. Choset, H.; Lynch, K.; Hutchinson, S.; Kantor, G.; Burgard, W.; Kavraki, L.; Thrun, S. Principles of Robot Motion: Theory, Algorithms, and Implementations; MIT Press: Boston, MA, USA, 2005. [Google Scholar]
  2. Krotkov, E. Position Estimation and Autonomous Travel by Mobile Robots in Natural Terrain. Kent Forum Book. 1997. Available online: http://www.ri.cmu.edu/pub_files/pub3/krofkov_eric_1997_2/krotkov_eric_1997_2.pdf (accessed on 3 January 2013).
  3. Moseley, M.B.; Grocholsky, B.P.; Cheung, C.; Singh, S. Integrated long-range UAV/UGV collaborative target tracking. Proc. SPIE 2009, 7332. [Google Scholar] [CrossRef]
  4. Li, W.; Zhang, T.; Kühnlenz, K. A vision-guided autonomous quadrotor in an air-ground multi-robot system. Proceedings of 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 2980–2985.
  5. Chaimowicz, L.; Kumar, V. Aerial shepherds: Coordination among uavs and swarms of robots. In Distributed Autonomous Robotic Systems 6; Springer: Tokyo, Japan, 2007; pp. 243–252. [Google Scholar]
  6. Ishikawa, S.; Kuwamoto, H.; Ozawa, S. Visual navigation of an autonomous vehicle using white line recognition. IEEE Trans. Patt. Anal. Mach. Intell. 1988, 10, 743–749. [Google Scholar]
  7. Matsumoto, Y.; Inaba, M.; Inoue, H. Visual navigation using view-sequenced route representation. Proceedings of 1996 IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; pp. 83–88.
  8. Dao, N.X.; You, B.J.; Oh, S.R. Visual navigation for indoor mobile robots using a single camera. Proceedings of 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), Edmonton, AB, Canada, 2–6 August 2005; pp. 1992–1997.
  9. Cherubini, A.; Chaumette, F. Visual navigation with obstacle avoidance. Proceedings of 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 1593–1598.
  10. Grocholsky, B.; Dille, M.; Nuske, S. Efficient target geolocation by highly uncertain small air vehicles. Proceedings of 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 4947–4952.
  11. Dille, M.; Grocholsky, B.; Nuske, S. Persistent Visual Tracking and Accurate Geo-Location of Moving Ground Targets by Small Air Vehicles. Proceedings of AIAA Infotech@Aerospace Conference, St. Louis, MO, USA, 29–31 March 2011.
  12. Rao, R.; Kumar, V.; Taylor, C. Visual servoing of a UGV from a UAV using differential flatness. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27 October–1 November 2003; pp. 743–748.
  13. Elfes, A.; Bergerman, M.; Carvalho, J.R.H.; de Paiva, E.C.; Ramos, J.J.G.; Bueno, S.S. Air-ground robotic ensembles for cooperative applications: Concepts and preliminary results. Proceedings of 2nd International Conference on Field and Service Robotics, Pittsburgh, PA, USA, 29–31 August 1999; pp. 75–80.
  14. Vidal, R.; Rashid, S.; Sharp, C.; Shakernia, O.; Kim, J.; Sastry, S. Pursuit-evasion games with unmanned ground and aerial vehicles. Proceedings of IEEE International Conference on Robotics and Automation, Seoul, Korea, 21–26 May 2001; pp. 2948–2955.
  15. Phan, C.; Liu, H. A cooperative UAV/UGV platform for wildfire detection and fighting. Proceedings of Asia Simulation Conference: 7th International Conference on System Simulation and Scientific Computing, Beijing, China, 10–12 October 2008; pp. 494–498.
  16. Chaimowicz, L.; Grocholsky, B.; Keller, J.F.; Kumar, V.; Taylor, C.J. Experiments in Multirobot Air-Ground Coordination. Proceedings of the 2004 International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2004; pp. 4053–4058.
  17. MacArthur, E.Z.; MacArthur, D.; Crane, C. Use of cooperative unmanned air and ground vehicles for detection and disposal of mines. Proc. SPIE 2005, 5999, 94–101. [Google Scholar]
  18. Valente, J.; Barrientos, A.; Martinez, A.; Fiederling, C. Field tests with an aerial–ground convoy system for collaborative tasks. Proceedings of 8th Workshop de RoboCity2030-II: Robots Exteriores, Madrid, Spain, 2 December 2010; pp. 233–248.
  19. Snyder, J.P. Map Projections: A Working Manual; Supersedes USGS Bulletin 1532; U.S. Geological Survey, U.S. Government Printing Office: Washington, DC, USA, 1987. [Google Scholar]
  20. Gadeyne, K.; BFL: Bayesian Filtering Library. 2001. Available online: http://www.orocos.org/bfl (accessed on 3 January 2012).
  21. Garzón, M.; Valente, J.; Zapata, D.; Chil, R.; Barrientos, A. Towards a ground navigation system based in visual feedback provided by a mini UAV. Proceedings of the IEEE Intelligent Vehicles Symposium Workshops, Alcalá de Henares, Spain, 3–7 June 2012.
  22. Reynolds, C.W. Steering Behaviors For Autonomous Characters. Proceedings of Game Developers Conference, San Jose, CA, USA, 15–19 March 1999; pp. 763–782.
Figure 1. Coordinate frames.
Figure 2. UGV heading geometry. The angle of interest is given by γ = θ − α.
Figure 3. Features extraction procedure example for a ground robot.
Figure 4. Test Trajectory for EKF. A trajectory was performed in order to test the performance of the Extended Kalman Filter.
Figure 5. Steering Behaviors. Two Behaviors (Seek and Avoid) and their corresponding vectors are shown.
Figure 6. UAV and UGV in the proposed environments (a) Simulated environment. (b) Real environment. The blue squares in each figure represent the image acquired from the camera on board the aerial robot.
Figure 7. Single obstacle Avoidance. The trajectory of the UGV while performing the avoidance maneuver, and the detected obstacle positions are shown.
Figure 8. Obstacle Position Estimation. The green line denotes the UGV position. The blue arrows determine the transformation from each robot position to the center of the obstacle marked with red circles.
Figure 9. Obstacle Position (Real vs. Estimated). The red circles denote the estimated obstacle position. The green one shows the real position of the obstacle and the blue is the mean value of the calculated positions. All positions were translated to a common reference frame.
Figure 10. Velocity Commands. The red line describes the UGV trajectory, and the blue arrows represent the velocity commands at each position.
Figure 11. Second Test Trajectory. The blue line describes the UGV trajectory. The red circles represent the obstacles found in the pathway and the green circles are the way-points.
Figure 12. Initial Tests with real robots: (a,b) Communication and hovering. (c), (d) and (e) Aerial Images captured at different heights.
Figure 13. Field tests: (a) Sequence of images obtained from the UAV while performing an obstacle-avoidance maneuver. The orientation is represented with a blue and red arrow for the X and Y axes. (b) The UGV trajectory read from the EKF is shown in blue, additional marks for the target and the obstacle have been added.
Table 1. Mean Square Error and Standard Deviation of the calculated obstacle position.
          Mean Square Error (m)   Standard Deviation
X Axis    0.1954                  0.4236
Y Axis    0.3333                  0.3053
Table 2. Mean Value and Standard Deviation for all the obstacles found in the Test Trajectory.
             Mean Pos X (m)   Std. Dev. X   Mean Pos Y (m)   Std. Dev. Y
Obstacle 1   441637.8         0.4743        4476756.3        0.6360
Obstacle 2   441623.3         0.7489        4476768.4        0.5152
Obstacle 3   441629.4         0.4939        4476761.9        0.2612
Obstacle 4   441613.1         0.5246        4476755.1        0.4076
Obstacle 5   441619.3         0.4924        4476749.7        0.3820
Table 3. Frequencies and Time Consumption for image and data streaming and processing.
                             Average Frequency (Hz)   Max. Period (s)   Min. Period (s)
Image Streaming              10                       0.446             0.003
UAV Telemetry                13                       0.404             0.001
UGV Telemetry                20                       0.051             0.049
UGV Navigation System        10                       0.103             0.096
Image and Data Processing    37                       0.102             0.010
Table 4. Trajectory Parameters and Additional Information.
Trajectory Time              39 s
UGV Max Speed                0.3 m/s
UGV Mean Speed               0.2048 m/s
UAV Max Altitude             4.812 m
UAV Mean Altitude            4.51 m
Total Trajectory Distance    10.1999 m
Table 5. Mean Value and Standard Deviation for the obstacles found in the trajectory.
             Mean Pos X (m)   Std. Dev. X   Mean Pos Y (m)   Std. Dev. Y
Obstacle 1   0.9896           0.0194        4.1373           0.0468
Obstacle 2   5.0930           0.1143        1.1158           0.0492
