
Cooperative Environment Scans Based on a Multi-Robot System

Ji-Wook Kwon
Yonsei Institute of Convergence Technology, Yonsei University, Songdogwahak-ro, Yeonsu-gu, Incheon 406-840, Korea
Sensors 2015, 15(3), 6483-6496; https://doi.org/10.3390/s150306483
Submission received: 19 January 2015 / Revised: 21 February 2015 / Accepted: 6 March 2015 / Published: 17 March 2015
(This article belongs to the Section Physical Sensors)

Abstract

This paper proposes a cooperative environment scan system (CESS) using multiple robots, where each robot has low-cost range finders and low processing power. To organize and maintain the CESS, a base robot monitors the positions of the child robots, controls them, and builds a map of the unknown environment, while the child robots, equipped with low-performance range finders, provide obstacle information. Even though each child robot provides only approximate and limited obstacle information, CESS can replace a single, high-cost laser range finder (LRF), because a large amount of information is acquired and accumulated by the group of child robots. Moreover, the proposed CESS extends the measurement boundaries and detects obstacles hidden behind others. To show the performance of the proposed system and compare it with numerical models of commercial 2D and 3D laser scanners, simulation results are included.

1. Introduction

The description of an unknown environment has received attention because it can be utilized for path planning [1,2,3] and localization [4,5,6,7] in autonomous robot systems, which are required to move safely and efficiently. To acquire obstacle information for the description of an environment (hereafter called a map), various range finders such as infrared, ultrasonic, vision, and laser sensors have been implemented on autonomous robots.
To detect and describe the obstacles in an unknown environment, infrared and ultrasonic sensors have long been implemented on robots, because their low cost and simple output data provide good solutions in simple applications [8,9]. However, since they provide only approximate information, a detailed map cannot be achieved. Vision systems such as time-of-flight (ToF) cameras have also been employed [10,11]. Since ToF cameras provide depth images that include distance information with large amounts of noise, autonomous robots can acquire obstacle information using such a vision system. However, its detectable area is small, it can be difficult to extract obstacle information from the images, and it cannot detect obstacles hidden behind other obstacles. To acquire precise obstacle information over a large area, laser range finders (LRFs) have been utilized: one-channel laser scanners for a 2D plane and multi-channel laser scanners for a 3D space have been implemented on autonomous robots [12,13,14,15]. They provide detailed obstacle information as a point cloud with which to build the map. However, an LRF is expensive, and it still cannot provide information about occluded obstacles. To overcome the detectable-area limitation of LRFs, simultaneous localization and mapping (SLAM) algorithms [4,5,6,7] and multi-robot exploration and mapping algorithms [16,17,18,19] have been proposed. With these algorithms, the map of a large environment can be built by estimating the positions of the autonomous robots moving in the unknown environment. However, if loop-closing and map-merging algorithms that can correct the accumulated estimation errors are not guaranteed, the map-building procedure does not provide accurate environmental information. Since loop-closing requires additional robot movement and map-merging procedures impose a high processing burden, SLAM and multi-robot exploration and mapping algorithms increase the cost of the robotic system. In addition, in the case of multi-robot exploration and mapping, because every robot must carry an LRF, the cost of the entire system increases as the number of robots increases.
Thus, this paper proposes a cooperative environment scanning sensor system (CESS) based on multiple robots that can replace a single LRF on an autonomous robot. In the proposed CESS, the multi-robot system is used as a single sensor. To the author's knowledge, a sensor system based on multiple robots instead of a single sensor device has not been previously proposed in the literature. In CESS, multiple child robots provide obstacle information to the autonomous robot (which will be termed the base robot). The child robots provide obstacle information measured with low-cost range finders to the base robot, and the base robot controls the child robots. To build an accurate map of the environment around the base robot, it is important in CESS to know the positions of the child robots. Thus, in this paper, two relative positioning systems are employed: (a) vision-based systems [19,20,21,22] and (b) ultra-wide band (UWB)-based systems [23,24,25]. First, the vision-based positioning system [19,20,21,22] measures the position of a child robot using artificial markers, image target trackers, and projective geometry. When this positioning system is used, the child robots should move within the visible area, since the vision system of the base robot must see all child robots. Second, with the UWB-based positioning system, the child robots can move anywhere within the boundary of the UWB system, even in invisible areas, because the UWB-based positioning system can detect objects hidden behind others. In addition, the proposed CESS describes either the 2D plane or the 3D space, according to the vertical angles of the range finders implemented on the child robots. When all the vertical angles of the range finders on the child robots are zero, CESS describes the obstacle information on the plane (i.e., CESS replaces a 1-channel LRF); if the vertical angles are not all zero, CESS can build a 3D obstacle map (i.e., CESS replaces a multi-channel LRF).
When CESS is employed on an autonomous robot (i.e., the base robot) moving in an unknown environment instead of a single LRF, the following contributions are achieved. First, the cost of the sensors scanning the environment is reduced; CESS can reduce the cost of the entire system by over 80% compared to an LRF. Second, if the base robot implements a UWB-based positioning system that detects objects hidden behind others, CESS can acquire information about occluded obstacles without a SLAM algorithm. Although the performance of the range finders on the child robots is limited, these contributions are achieved through the advantages of a multi-robot system. In other words, even though the measurement of each child robot is not sufficient to build a precise map by itself, the accuracy of the map increases and more hidden information is described, since the base robot is provided with a large amount of information by the child robots.
This paper is organized as follows: we describe the multiple robot system considered in the paper in Section 2. In Section 3, we propose control mechanisms for the child robots with respect to the positioning systems. To demonstrate the usefulness and performance of the proposed CESS, the simulation results are presented in Section 4. Finally, the conclusions of this study are given in Section 5.

2. Multiple Robot System for Detecting Obstacles

To maintain CESS, the base robot controls all child robots via a wireless communication device and the child robots provide obstacle information around the base robot. To detail the cooperation between the base and child robots, Figure 1 shows the CESS architecture.
Figure 1. The CESS architecture.
Figure 1 shows that CESS is a centralized system, because the base robot monitors the positions of all the child robots, controls them, acquires all obstacle information, and builds the map. First, the positioning system on the base robot provides the positions and orientations of the child robots to the control algorithm and the map builder. If artificial markers are used, the ID, relative position, and orientation can be measured directly [19,20,21]. If artificial markers are not utilized, a child robot can be identified using its initial position, specified templates, the kinematic model in Equation (1), and the control inputs [22,26]. Second, since a child robot only provides obstacle information and has neither the controller nor its own positioning system, it does not require a high-level processor (e.g., a high-performance MCU) or additional positioning sensors beyond the range finders and the communication device. Finally, to estimate the position of and control the child robot, it is assumed that the child robot is an underactuated system with a non-holonomic constraint in Cartesian coordinates [27,28]. The kinematic model of the child robot is thus described as:
$$\begin{bmatrix} \dot{x}_i \\ \dot{y}_i \\ \dot{\theta}_i \end{bmatrix} = \begin{bmatrix} \cos\theta_i & 0 \\ \sin\theta_i & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_i \\ \omega_i \end{bmatrix} \quad (1)$$
where (xi, yi) is the position of the ith child robot, θi is the orientation, and vi and ωi are the linear and angular velocities, respectively, which will be designed as control inputs.
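As a concrete illustration, the following minimal Python sketch shows how the base robot might propagate a child robot's pose using Equation (1) with a simple Euler step; the function name and the time step value are illustrative assumptions, not part of the paper.

```python
import math

def propagate_pose(x, y, theta, v, omega, dt=0.05):
    """Euler integration of the unicycle kinematic model in Equation (1).

    (x, y, theta): current pose of the i-th child robot
    v, omega: linear and angular velocity control inputs
    dt: integration step (illustrative value)
    """
    x_next = x + v * math.cos(theta) * dt
    y_next = y + v * math.sin(theta) * dt
    theta_next = theta + omega * dt
    return x_next, y_next, theta_next
```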
After the estimation of the position and control of the child robot, consider the measurements from the range finders on the child robot. As mentioned before, CESS presents both 2D and 3D information from the vertical angles of the range finders on the child robot, as depicted in Figure 2.
Figure 2. The vertical angles of the range finders on the child robots.
In Figure 2, δi is the vertical angle of the range finder. In the case that all the vertical angles are 0 (i.e., δ1 = δ2 = ··· = δn = 0), a 2D plane is described. On the other hand, if the vertical angles are not all zero, CESS describes 3D spatial information. Since the positions of the child robots are monitored by the base robot, the position of a detected obstacle is acquired as:
$$x_p = d\cos(\delta_i)\cos(\theta_i) + x_i, \quad y_p = d\cos(\delta_i)\sin(\theta_i) + y_i, \quad z_p = d\sin(\delta_i) \quad (2)$$
where d is the measurement of the range finders. Note that the robots can be equipped with more than one range finder.
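To make the projection concrete, the sketch below maps one range reading from a child robot into the base robot's global frame following Equation (2), where the horizontal components use the cosine of the vertical angle and the height uses its sine; the function name is illustrative.

```python
import math

def range_to_point(d, delta_i, x_i, y_i, theta_i):
    """Project a range measurement d from the i-th child robot into the
    global frame maintained by the base robot, per Equation (2).

    delta_i: vertical (elevation) angle of the range finder
    (x_i, y_i, theta_i): pose of the child robot monitored by the base robot
    """
    horizontal = d * math.cos(delta_i)          # projection onto the 2D plane
    x_p = horizontal * math.cos(theta_i) + x_i
    y_p = horizontal * math.sin(theta_i) + y_i
    z_p = d * math.sin(delta_i)                 # zero when delta_i = 0 (2D scan)
    return x_p, y_p, z_p
```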
To connect the base and child robots and maintain CESS, the gateway communication model in [29] is employed. In this gateway model, the base robot serves as the gateway, and each child robot sends its measurements to the base robot and receives motion control inputs. For the gateway communication model, this paper assumes that the communication delay and noise are negligible. Since only a small amount of information is transferred between the base and child robots, and the performance of commercial communication devices such as ZigBee, Bluetooth, and Wi-Fi has improved, this assumption is reasonable. Of course, a communication time delay can occur as the number of child robots increases; thus, the number of child robots should be chosen with respect to the performance of the communication devices.
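To make the gateway exchange concrete, the following sketch shows one possible shape of the two messages passed between a child robot and the base robot; the field names and types are illustrative assumptions, not a protocol defined in the paper or in [29].

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Measurement:
    """Sent from a child robot to the base robot (the gateway)."""
    robot_id: int
    ranges: List[float]   # one reading per range finder on the robot
    timestamp: float

@dataclass
class MotionCommand:
    """Sent from the base robot back to a child robot."""
    robot_id: int
    v: float              # linear velocity input of Equation (1)
    omega: float          # angular velocity input of Equation (1)
```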

3. Control Mechanisms According to the Positioning Systems

The CESS coverage area is determined by the positioning system on the base robot. If a vision-based positioning system is utilized, the child robots should move within the visible area of the base robot; on the other hand, if a UWB-based positioning system is implemented, the child robots move within the boundary of the positioning system regardless of visibility. Accordingly, the control strategy is chosen according to the positioning system, as depicted in Figure 3.
Figure 3. Two CESS control strategies with respect to the positioning systems. (a) CESS using the vector field based control law with vision based positioning system; (b) CESS using the behavior based control algorithm with the UWB-based positioning system.
As can be seen in Figure 3a, in the case of the vision-based positioning system, the child robots move on a desired circle designed within the visible area of the base robot, since the base robot can only monitor the visible child robots. To guarantee visibility, the vector field based multiple robot control algorithm in [27] is employed. However, if the UWB-based positioning system in Figure 3b is implemented, the child robots can move anywhere within the boundary of the positioning system, even where they are invisible. Thus, in this case, the behavior based control scheme in [30,31] is applicable.
Remark 1.
To choose the positioning system, the vision- and UWB-based positioning systems are compared as follows. First, in the vision-based positioning system, all the child robots carry artificial markers from which the relative position and orientation between the base and child robots are obtained; however, the positioning system is limited to the visible area. Second, the UWB-based positioning system can monitor invisible child robots behind obstacles; however, it is difficult to identify the individual child robots. Therefore, the positioning system should be chosen with respect to the desired specification of a given system.

3.1. Vector Field Based Multiple Robot Control Algorithm for a Vision-Based Positioning System

To control the child robots using the vision-based positioning system, the vector field based multiple robot control laws in [27] are employed; here, we design simplified control laws based on the algorithm in [27]. Since all the child robots should stay in the visible area, let them move on a circle whose center is the base robot, as depicted in Figure 4.
Figure 4. The motions of the child robots following the desired circular path.
In Figure 4, (xb, yb) is the position of the base robot, r is the radius of the desired circular path around the base robot, ri is the distance between the ith robot and the base robot, ψi is the angular position, and $\theta_i^d$ is the desired orientation guiding the ith robot towards the desired path. To avoid collisions with obstacles, the radius r of the desired circle in Figure 4 is determined with respect to the obstacles around the base robot as follows:
$$r = k_b \min(B_{ob}) \quad (3)$$
where 0 < kb < 1 is a constant and Bob is the set of distances between the base robot and the obstacles with respect to the angular position (the output of a commercial LRF is similar to Bob). The time derivatives of ri and ψi are as follows:
$$\dot{r}_i = v_i \cos(\theta_i - \psi_i) \quad (4)$$

$$\dot{\psi}_i = \frac{v_i}{r_i} \sin(\theta_i - \psi_i) \quad (5)$$
Since all child robots should move on the circle, the child robots have the desired orientation:
$$\theta_i^d(r_i) = \psi_i + \frac{\pi}{2} + \tan^{-1}(k_d e_r) \quad (6)$$
where $e_r = r_i - r$, $\psi_i = \mathrm{atan2}(y_i - y_b,\, x_i - x_b)$, and kd > 0 is a constant. Here, atan2(·) is the four-quadrant inverse tangent with values in the interval (−π, π], and kd is a parameter controlling the attraction angle towards the circle. The time derivative of the desired orientation in Equation (6) is as follows:
$$\dot{\theta}_i^d = \frac{v_i}{r_i}\sin(\theta_i - \psi_i) + \frac{k_d v_i \cos(\theta_i - \psi_i)}{1 + (k_d e_r)^2} \quad (7)$$
To distribute the child robots evenly, the angular separation between adjacent child robots should be maintained as ψd = 2π/n. The control objective is that the errors, chosen as:
$$e_i^\theta = \theta_i^d - \theta_i, \quad e_i^\psi = \psi_{i-1} - \psi_d - \psi_i \quad (8)$$
converge to zero. The time derivatives of $e_i^\theta$ and $e_i^\psi$ are:
$$\dot{e}_i^\theta = \frac{v_i}{r_i}\sin(\theta_i - \psi_i) + \frac{k_d v_i \cos(\theta_i - \psi_i)}{1 + (k_d e_r)^2} - \omega_i \quad (9a)$$

$$\dot{e}_i^\psi = \frac{v_{i-1}}{r_{i-1}}\sin(\theta_{i-1} - \psi_{i-1}) - \frac{v_i}{r_i}\sin(\theta_i - \psi_i) \quad (9b)$$
Then, we determined the control law of the child robots moving on the desired circle as:
$$v_i = \mu(\theta_i - \psi_i)\,\frac{r_i}{\sin(\theta_i - \psi_i)}\left(\dot{\psi}_{i-1} + k_\psi e_i^\psi\right) \quad (10a)$$

$$\omega_i = \dot{\theta}_i^d + k_\theta e_i^\theta \quad (10b)$$
where kψ and kθ are positive constants, and:
$$\mu(\theta_i - \psi_i) = 1 - \exp\left\{-\frac{|\theta_i - \psi_i|^2}{\sigma^2}\right\} \quad (11)$$
where σ > 0 is a constant. As mentioned in [27], the radial function μ(θi − ψi) prevents the control input in Equation (10a) from diverging to an infinite value when sin(θi − ψi) approaches zero (i.e., when θi approaches ψi or ψi ± π). When the child robots use the designed control laws in Equation (10), the stability of the entire system is presented in the following theorem.
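The following Python sketch shows one way the base robot could evaluate Equations (6)–(11) for a single child robot; the gain values, the numerical guard on the sine term, and all function and variable names are illustrative assumptions rather than values from the paper.

```python
import math

def vector_field_control(xi, yi, thetai, xb, yb, r,
                         psi_prev, dpsi_prev, n,
                         k_d=1.0, k_psi=1.0, k_theta=2.0, sigma=0.5):
    """Compute (v_i, omega_i) for one child robot from Equations (6)-(11).

    (xi, yi, thetai): pose of the i-th child robot
    (xb, yb): base robot position; r: desired circle radius from Equation (3)
    psi_prev, dpsi_prev: angular position and its rate for robot i-1
    n: number of child robots; gains are illustrative values
    """
    ri = math.hypot(xi - xb, yi - yb)
    psi_i = math.atan2(yi - yb, xi - xb)
    e_r = ri - r

    # Desired orientation, Equation (6)
    theta_d = psi_i + math.pi / 2 + math.atan(k_d * e_r)

    # Tracking errors, Equation (8); psi_d = 2*pi/n spaces the robots evenly
    psi_d = 2 * math.pi / n
    e_theta = theta_d - thetai
    e_psi = psi_prev - psi_d - psi_i

    # Radial function, Equation (11)
    mu = 1.0 - math.exp(-abs(thetai - psi_i) ** 2 / sigma ** 2)

    # Control inputs, Equation (10); guard against division by a tiny sine
    s = math.sin(thetai - psi_i)
    v_i = mu * ri / s * (dpsi_prev + k_psi * e_psi) if abs(s) > 1e-6 else 0.0
    dtheta_d = (v_i / ri) * s + (k_d * v_i * math.cos(thetai - psi_i)) / (1 + (k_d * e_r) ** 2)
    omega_i = dtheta_d + k_theta * e_theta
    return v_i, omega_i
```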
Theorem 1.
When the control law in Equation (10) is employed by the child robots in Figure 4, their stability can be guaranteed in the sense that the error variables $e_i^\psi$ and $e_i^\theta$ are uniformly bounded, and the ultimate bounds can be made smaller by choosing a smaller value of σ and a larger value of kψ.
Proof of Theorem 1. To show the ultimate boundedness of $e_i^\psi$ and $e_i^\theta$, we choose the Lyapunov function candidate as follows:
$$V = \frac{1}{2}\left\{(e_i^\psi)^2 + (e_i^\theta)^2\right\} \quad (12)$$
whose time derivative is:
$$\dot{V} = e_i^\psi \dot{e}_i^\psi + e_i^\theta \dot{e}_i^\theta = e_i^\psi\left(\dot{\psi}_{i-1} - \frac{v_i}{r_i}\sin(\theta_i - \psi_i)\right) + e_i^\theta\left(\dot{\theta}_i^d - \omega_i\right) \quad (13)$$
Substituting the control inputs of Equation (10) into Equation (13), Equation (13) becomes:
$$\dot{V} = e_i^\psi\{1 - \mu(\theta_i - \psi_i)\}\dot{\psi}_{i-1} - k_\psi \mu(\theta_i - \psi_i)(e_i^\psi)^2 - k_\theta(e_i^\theta)^2 \quad (14)$$
Due to Equation (5), Equation (14) is revised to:
$$\begin{aligned}
\dot{V} &= -k_\theta(e_i^\theta)^2 - k_\psi \mu(\theta_i - \psi_i)(e_i^\psi)^2 + e_i^\psi\{1 - \mu(\theta_i - \psi_i)\}\frac{v_{i-1}}{r_{i-1}}\sin(\theta_{i-1} - \psi_{i-1}) \\
&\leq -k_\theta(e_i^\theta)^2 - k_\psi \mu(\theta_i - \psi_i)(e_i^\psi)^2 + \frac{1}{r_{i-1}}|e_i^\psi|\,|v_{i-1}|
\end{aligned} \quad (15)$$
From Equations (12)–(15), we ensure that $\dot{V} < 0$ outside the set $\left\{|e_i^\psi| \geq |v_{i-1}|/\left(r_{i-1} k_\psi \mu(\theta_i - \psi_i)\right)\right\}$. In addition, if σ is small and kψ is large, the ultimate bound of $e_i^\psi$ becomes much smaller. (Q.E.D.)
Here, it should be noted that, in the case that a child robot is not moving on the circle, ψi points to the opposite side of the desired orientation, whereas in the case of a child robot following the desired circle, ψi is orthogonal to the desired orientation. If the child robot faces the direction of ψi, the linear velocity becomes zero and the robot only turns towards the circle. Furthermore, because the radial function is close to 1 and sin(θi − ψi) is not zero around $\theta_i = \theta_i^d$, the control input in Equation (10a) reduces to:
$$v_i = \frac{r_i}{\sin(\theta_i - \psi_i)}\left(\dot{\psi}_{i-1} + k_\psi e_i^\psi\right) \quad (16)$$
Thus, the errors $e_i^\psi$ and $e_i^\theta$ can converge to zero in the case that θi is close to $\theta_i^d$.

3.2. Behavior Based Multiple Robot Control Algorithm for UWB-Based Positioning System

When the base robot uses the UWB-based positioning system, the child robots can move anywhere within the boundary of the positioning system, even when they are hidden by obstacles. Therefore, the behavior based control algorithm in [30,31] can be chosen as the control algorithm for the child robots. In this control algorithm, basic behaviors are designed, and more complex behaviors are obtained by combining them. Three basic behaviors are used in the proposed CESS, as follows:
(a) Safe-wandering: The child robots wander around the base robot without collisions with adjacent child robots or obstacles.
(b) Dispersion: The adjacent child robots disperse.
(c) Aggregation: All the child robots move within the boundary of the positioning system.
In this paper, the outputs of the basic behaviors are desired orientations [31], as listed in Table 1.
Table 1. The basic behaviors.
Behavior 1. Safe-wandering
$$\theta_d^v = \begin{cases} \theta - \pi/4, & \text{left obstacle} \\ \theta + \pi/4, & \text{right obstacle} \end{cases}$$
Behavior 2. Dispersion
$$\theta_d^d = \mathrm{atan2}(e_y^d, e_x^d)$$
where $e_x^d = x^d - x$, $e_y^d = y^d - y$, and $(x^d, y^d)$ is the mean position of the two nearest robots.
Behavior 3. Aggregation
$$\theta_d^a = \mathrm{atan2}(e_y^a, e_x^a)$$
where $e_x^a = x^a - x$, $e_y^a = y^a - y$, and $(x^a, y^a)$ is the position of the base robot.
If the output of each basic behavior is described as a normalized vector, the combined behavior of the child robots is acquired by a linear weighted vector summation, as in Figure 5.
Figure 5. The combination of the basic behaviors.
Figure 5 shows the combination of the three basic behaviors as in (17):
$$\Phi_i = w_a \Phi_a + w_d \Phi_d + w_w \Phi_w \quad (17)$$
where Φ is a normalized behavior vector, w is a weighting factor, and the subscripts i, a, d, and w denote the ith robot, aggregation, dispersion, and safe-wandering, respectively. Accordingly, the desired orientation of the ith robot can be chosen as the direction of Φi, as depicted in Figure 6.
Figure 6. The combination of the basic behaviors.
As can be seen in Figure 6, the basic behaviors are combined by the weighted vector summation, and collision avoidance is provided by the safe-wandering behavior. To control the child robots, the linear velocity of all the robots is chosen as a constant value, vi = vd, and the angular velocity is designed to achieve the desired orientation, which is the angle of Φi.
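As an illustration, the sketch below combines the three basic behaviors by weighted vector summation as in Equation (17) and turns the combined direction into velocity commands. The weights, the heading gain, and the helper names are illustrative assumptions; additionally, the dispersion vector is taken here to point away from the two nearest neighbors, which is an interpretation rather than the literal table entry.

```python
import math

def unit(angle):
    """Unit vector for a heading angle."""
    return (math.cos(angle), math.sin(angle))

def behavior_based_control(x, y, theta, xb, yb, neighbors, obstacle_side,
                           w_a=0.5, w_d=0.3, w_w=0.2, v_d=0.2, k_w=1.5):
    """Combine aggregation, dispersion, and safe-wandering (Equation (17)).

    neighbors: list of (x, y) positions of the two nearest child robots
    obstacle_side: 'left', 'right', or None from the on-board range finders
    weights w_a, w_d, w_w and gains v_d, k_w are illustrative values
    """
    # Aggregation: head towards the base robot
    theta_a = math.atan2(yb - y, xb - x)
    # Dispersion: head away from the mean position of the two nearest robots
    mx = sum(p[0] for p in neighbors) / len(neighbors)
    my = sum(p[1] for p in neighbors) / len(neighbors)
    theta_disp = math.atan2(y - my, x - mx)
    # Safe-wandering: turn away from a detected obstacle (Table 1)
    if obstacle_side == 'left':
        theta_w = theta - math.pi / 4
    elif obstacle_side == 'right':
        theta_w = theta + math.pi / 4
    else:
        theta_w = theta

    # Weighted vector summation of the normalized behavior vectors
    vx = w_a * unit(theta_a)[0] + w_d * unit(theta_disp)[0] + w_w * unit(theta_w)[0]
    vy = w_a * unit(theta_a)[1] + w_d * unit(theta_disp)[1] + w_w * unit(theta_w)[1]
    theta_des = math.atan2(vy, vx)

    # Constant linear velocity; simple proportional heading controller
    heading_error = math.atan2(math.sin(theta_des - theta), math.cos(theta_des - theta))
    return v_d, k_w * heading_error
```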

4. Simulation Results

To show the performance of CESS, numerical simulations with four scenarios are included. In these scenarios, the base robot has twelve child robots, which have two one-beam range finders with 4 m range and 0.01 m resolution. Because the objective of the proposed CESS is the replacement of LRF, all scenarios are compared with the numerical models of URG-04LX by Hokuyo [32], which is a 1-ch LRF, and the HDL-32E by Velodyne [33], which is a 32-ch LRF.
The first and second scenarios show the performance of CESS replacing the 1-channel LRF using the vision- and UWB-based positioning systems, respectively. The work space is 10 m × 10 m, and the initial conditions of the base and child robots are determined as follows: (xb(0), yb(0), θb(0)) = (4, 4, −π/2) and (xi(0), yi(0), θi(0)) = (xb(0) + 2.5cos(2πi/12), yb(0) + 2.5sin(2πi/12), 2πi/12 + π/2). In addition, the obstacle information is marked on the grid map, where each grid cell is 0.1 m × 0.1 m. To compare with a commercial LRF, a virtual URG-04LX is used, which has the following specifications: a distance range of 4 m, an angular range of −120° to 120°, and 0.001 m resolution. Figure 7 shows the results of the first and second scenarios.
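Before turning to the results, the short sketch below instantiates the initial conditions above for the twelve child robots; the variable names are illustrative.

```python
import math

# Initial pose of the base robot in the first and second scenarios
xb0, yb0, thetab0 = 4.0, 4.0, -math.pi / 2

# Initial poses of the twelve child robots, placed on a 2.5 m circle
# around the base robot with headings tangent to that circle
n = 12
child_poses = [
    (xb0 + 2.5 * math.cos(2 * math.pi * i / n),
     yb0 + 2.5 * math.sin(2 * math.pi * i / n),
     2 * math.pi * i / n + math.pi / 2)
    for i in range(1, n + 1)
]
```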
Figure 7. The performance of CESS in the 2D plane. (a) The obstacle map; (b) The detected obstacles by LRF; (c) The detected obstacles by CESS, based on the vision-based positioning system; (d) The detected obstacles by CESS based on the UWB-based positioning system.
Figure 7a shows the given unknown environment. Figure 7b shows the performance of the LRF; the large circle in Figure 7b shows the boundary of the LRF. Figure 7c,d show the performance of the CESS using the vision- and UWB-based positioning systems, respectively. In Figure 7c, the large circles show the changes of the desired circle of the child robots resulting from Equation (3), which makes the child robots move in the visible area of the base robot without colliding with obstacles. In Figure 7d, the large circle shows the boundary of the UWB-based positioning system on the base robot. From Figure 7b–d, the LRF provides exact information about the obstacles within a limited area, while the proposed CESS extends the detectable area. In particular, as shown in Figure 7d, in the case of the UWB-based positioning system, hidden obstacles can be detected. In addition, in Figure 7c,d, there are position errors for the obstacles because of the quantization errors introduced by marking the obstacle information on the grid map. However, as can be seen in Figure 7c,d, the marked grids are similar to the real map.
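As a rough illustration of how detected points could be marked on the grid map, and of where the quantization error mentioned above comes from, consider the sketch below; the 0.1 m cell size matches the scenarios, while the map dimensions and function name are illustrative assumptions.

```python
def mark_on_grid(points, cell=0.1, size=100):
    """Mark detected obstacle points on an occupancy grid.

    points: iterable of (x, y) obstacle positions in metres
    cell: grid resolution (0.1 m in the scenarios)
    size: number of cells per side (10 m / 0.1 m = 100)
    Quantizing each point to a cell index introduces up to half a cell
    of position error, which explains the small offsets in Figure 7c,d.
    """
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        ix, iy = int(x / cell), int(y / cell)
        if 0 <= ix < size and 0 <= iy < size:
            grid[iy][ix] = 1
    return grid
```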
The third and fourth scenarios show the performance of obstacle detection in 3D space, as in Figure 8, where the following cases are included: (a) a virtual HDL-32E with 100 m range and 0.001 m resolution; (b) the child robots with two one-beam range finders, whose vertical angles are δi = 2π(i − 1)/180 for i = 1, …, n and whose range is 4 m, using the vision-based positioning system; and (c) the child robots using the UWB-based positioning system, whose range is 6 m. In these scenarios, the initial conditions of the base and child robots are determined as follows: (xb(0), yb(0), θb(0)) = (4, 4, π/4) and (xi(0), yi(0), θi(0)) = (xb(0) + 1.5cos(2πi/12), yb(0) + 1.5sin(2πi/12), 2πi/12 + π/2). In addition, the obstacle information acquired via the child robots is marked on the grid map, where each grid cell is 0.1 m × 0.1 m × 0.1 m.
Figure 8. The performance of CESS in a 3D space. (a) The obstacle map; (b) The detected obstacles by LRF; (c) The detected obstacles by CESS, based on the vision-based positioning system; (d) The detected obstacles by CESS based on the UWB-based positioning system.
The given environment is depicted in Figure 8a. Figure 8b shows the results of the 32-channel LRF: the obstacles are described exactly over a large area, since the virtual LRF has a distance range of 100 m. Meanwhile, Figure 8c,d show the results of CESS using the vision- and UWB-based positioning systems, respectively. Although the child robots have range finders with a short range, they can cover a large area, because the detectable area is extended by the positioning systems on the base robot. In particular, the proposed system using the UWB-based positioning system, presented in Figure 8d, can acquire information about obstacles hidden behind others.
Remark 2.
In the scenarios, the resolution of the range finders on the child robots is 0.01 m. Compared with the LRFs, whose resolution is 0.001 m, the resolution of the range finders on the child robots is only one tenth as fine. Despite this large performance difference between the LRF and the range finders on the child robots, the proposed CESS shows performance similar to the LRF and extends the detectable area, because CESS acquires and accumulates information from a number of robots instead of a single device.
From the results of the four scenarios, it can be confirmed that the proposed CESS provides performance similar to that of a high-cost LRF, extends the detectable area, and acquires information in invisible areas that cannot be measured by an LRF. In addition, the scenarios show that the sensitivity to position errors caused by measurement noise, communication noise, and communication delay is reduced because of the quantization inherent in the grid map.

5. Conclusions

We proposed CESS with multiple child robots that can be utilized instead of an LRF. Unlike a robot system with a high-performance, high-cost LRF, the proposed CESS acquires obstacle information using multiple child robots with low-performance, low-cost range finders. To control the multiple child robots, the vector field based multiple robot control algorithm using the vision-based positioning system and the behavior based control algorithm using the UWB-based positioning system are employed. With these positioning systems, certain advantages that surpass the performance of an LRF, such as the extension of the scanning range and the capability to detect hidden obstacles, are achieved. Simulation results are included to show the contributions and performance of the proposed cooperative environment scan system by contrasting it with an LRF. In future research, robustness against the communication delay resulting from an increased number of child robots (i.e., scalability) will be pursued, and the proposed system will be implemented on actual robots.

Acknowledgments

This research was supported by the Ministry of Science, ICT and Future Planning, Korea, under the “IT Consilience Creative Program” (NIPA-2014-H0201-14-1002) supervised by the National IT Industry Promotion Agency (NIPA).

Conflicts of Interest

The author declares no conflict of interest.

References

1. Latombe, J.-C. Robot Motion Planning; Kluwer Academic: Dordrecht, The Netherlands, 1991.
2. LaValle, S. Planning Algorithms; Cambridge University Press: Cambridge, UK, 2006.
3. Yang, D.-H. A Collision Avoidance Algorithm for Multiple Mobile Robots Using Roadmaps. Ph.D. Thesis, Ajou University, Suwon, Korea, 2006.
4. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005.
5. Rekleitis, I.M. A Particle Filter Tutorial for Mobile Robot Localization; Technical Report TR-CIM-04-02; Centre for Intelligent Machines, McGill University: Montreal, QC, Canada, 2004.
6. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping (SLAM): Part I The essential algorithms. IEEE Robot. Autom. Mag. 2006, 13, 99–110.
7. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping (SLAM): Part II State of the art. IEEE Robot. Autom. Mag. 2006, 13, 108–117.
8. Do, Y.; Kim, J. Infrared range sensor array for 3D sensing in robotic applications. Int. J. Adv. Robot. Syst. 2013, 10, 193.
9. Jiménez, F.; Naranjo, J.E.; Gómez, O.; Anaya, J.J. Vehicle tracking for an evasive manoeuvres assistant using low-cost ultrasonic sensors. Sensors 2014, 14, 22689–22705.
10. Schwarz, B. LIDAR: Mapping the world in 3D. Nat. Photonics 2010, 4, 429–430.
11. Pandey, G.; McBride, J.R.; Eustice, R.M. Ford campus vision and lidar data set. Int. J. Robot. Res. 2011, 30, 1543–1552.
12. Lacaze, A.; Murphy, M.; Giorno, M.D.; Corley, K. Reconnaissance and autonomy for small robots (RASR) team: MAGIC 2010 Challenge. J. Field Robot. 2012, 29, 729–744.
13. Butzke, J.; Daniilidis, K.; Kushleyev, A.; Lee, D.D.; Likhachev, M.; Phillips, C.; Phillips, M. The University of Pennsylvania MAGIC 2010 multi-robot unmanned vehicle system. J. Field Robot. 2012, 29, 745–761.
14. Foix, S.; Alenyà, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926.
15. May, S.; Fuchs, S.; Droeschel, D.; Holz, D.; Nüchter, A. Robust 3D-mapping with time-of-flight cameras. In Proceedings of the International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009.
16. Howard, A. Multi-robot simultaneous localization and mapping using particle filters. Int. J. Robot. Res. 2006, 25, 1243–1256.
17. Mourikis, A.I.; Roumeliotis, S.I. Predicting the performance of cooperative simultaneous localization and mapping (C-SLAM). Int. J. Robot. Res. 2006, 25, 1273–1286.
18. Lee, H.-C.; Lee, S.-H.; Lee, T.-S.; Kim, D.-J.; Lee, B.-H. A survey of map merging techniques for cooperative-SLAM. In Proceedings of the International Conference on Ubiquitous Robots and Ambient Intelligence, Daejeon, Korea, 26–28 November 2012.
19. Howard, A.; Parker, L.E.; Sukhatme, G. Experiments with a large heterogeneous mobile robot team: Exploration, mapping, deployment, and detection. Int. J. Robot. Res. 2006, 25, 431–447.
20. Cruz, D.; McClintock, J.; Perteet, B.; Orqueda, O.A.A.; Cao, Y.; Fierro, R. Decentralized cooperative control: A multivehicle platform for research in networked embedded systems. IEEE Control Syst. Mag. 2007, 27, 58–78.
21. Kim, J.H.; Kwon, J.-W.; Seo, J. Multi-UAV-based stereo vision system without GPS for ground obstacle mapping to assist path planning of UGV. Electron. Lett. 2014, 50, 1431–1432.
22. Das, A.K.; Fierro, R.; Kumar, V.; Ostrowski, J.P.; Spletzer, J.; Taylor, C.J. A vision-based formation control framework. IEEE Trans. Robot. Autom. 2002, 18, 813–825.
23. Fontana, R. Advances in ultra wideband indoor geolocation systems. In Proceedings of the 3rd IEEE Workshop on WLAN, Newton, MA, USA, 27–28 September 2001.
24. Fontana, R. Recent system applications of short-pulse ultra-wideband (UWB) technology. IEEE Trans. Microw. Theory Tech. 2004, 52, 2087–2104.
25. Pahlavan, K.; Li, X.; Makela, J.-P. Indoor geolocation science and technology. IEEE Commun. Mag. 2002, 40, 112–118.
26. Kwon, J.-W.; Park, M.-S.; Chwa, D. Localization of the mobile agent using indirect Kalman filter in distributed sensor networks. In Proceedings of the International Conference on Ubiquitous Information Management and Communication, Suwon, Korea, 15–16 January 2009.
27. Chwa, D. Sliding-mode tracking control of nonholonomic wheeled mobile robots in polar coordinates. IEEE Trans. Control Syst. Technol. 2004, 12, 637–644.
28. Kwon, J.-W.; Chwa, D. Hierarchical formation control based on a vector field method for wheeled mobile robots. IEEE Trans. Robot. 2012, 28, 1335–1345.
29. Ghabcheloo, R.; Pascoal, A.; Silvestre, C.; Kaminer, I. Coordinated path following control of multiple wheeled robots using linearization techniques. Int. J. Syst. Sci. 2005, 37, 399–414.
30. Mataric, M.J. Interaction and Intelligent Behavior. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1994.
31. Kwon, J.-W.; Kim, J.H.; Seo, J. Consensus-based obstacle avoidance for robotic swarm system with behavior-based control scheme. In Proceedings of the International Conference on Control, Automation and Systems, Seoul, Korea, 22–25 October 2014.
32. Scanning Range Finder (SOKUIKI Sensor) URG-04LX, Hokuyo. Available online: http://www.hokuyo-aut.jp/02sensor/07scanner/urg_04lx.html (accessed on 11 February 2015).
33. HDL-32E, Velodyne. Available online: http://velodynelidar.com/lidar/hdlproducts/hdl32e.aspx (accessed on 11 February 2015).
