Article

A Multi-Robot Formation Platform based on an Indoor Global Positioning System

1 School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an 710072, China
2 Beijing Electro-mechanical Engineering Institute, Beijing 100074, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(6), 1165; https://doi.org/10.3390/app9061165
Submission received: 19 January 2019 / Revised: 15 March 2019 / Accepted: 17 March 2019 / Published: 19 March 2019
(This article belongs to the Special Issue Swarm Robotics 2020)

Abstract

Aiming at the difficulty of carrying out experimental verification due to the lack of effective experimental platforms in multi-robot formation research, we design a simple multi-robot formation platform. This general and low-cost platform includes an indoor global-positioning system, a multi-robot communication system, and the wheeled mobile robot hardware. For each wheeled mobile robot in our platform, real-time position information with centimeter-level precision is obtained by the Marvelmind Indoor Navigation System, and orientation information is obtained by a six-degree-of-freedom gyroscope. A Transmission Control Protocol/Internet Protocol (TCP/IP) wireless communication infrastructure supports communication among robots and data collection during the experiments. Finally, a set of leader–follower formation experiments is performed on our platform, including three trajectory-tracking experiments with different formation types and robot numbers under a deterministic environment, and a formation-maintaining experiment with external disturbances. The results illustrate that our multi-robot formation platform can be used effectively as a general testbed to evaluate and verify the feasibility and correctness of theoretical methods in multi-robot formation. Moreover, the proposed simple and general formation platform benefits the development of platforms in the fields of multi-robot coordination, formation control, and search and rescue missions.

1. Introduction

Multi-robot formation control is one of the most important research areas in Multi-Robot Systems [1,2], owing to practical applications such as joint handling [3], cooperative rescue [4], group stalking, and exploration [5]. The goal of multi-robot formation control is to maintain a specified geometrical shape of a group of robots by adjusting the robots' poses (positions and orientations) [6], generally following the process of forming, maintaining, and switching a formation. Current research on multi-robot formation mainly focuses on formation control theory [7,8,9], while some of this research lacks physical experimental validation of its theories and algorithms because no experimental testbed is available, especially a general and low-cost indoor multi-robot formation experimental platform.
The positioning system, which determines the position and orientation of the robots in real time, is the basis of a multi-robot formation experimental platform. Currently, the indoor positioning systems used in multi-robot formation can be roughly classified into two main categories [10]—relative and absolute positioning systems [11]. In relative positioning systems, a robot's pose is obtained from onboard sensors or from other robots. For example, pose information has been obtained from encoders mounted on the robots [12], or a laser scanner mounted on a robot has been used to estimate the relative positions of other robots [13]. However, considering formation scalability, the overall cost of such a system increases as robots are added, since new cameras or laser range finders are required. In addition, relative positioning based on onboard sensors accumulates error, which distorts the formation.
In absolute positioning systems, the pose of a robot is determined by measuring and recognizing landmarks in indoor environments. In some research [6,14,15], cameras were mounted on the ceiling or overhead to estimate the position and orientation of robots by measuring the positions of markers on the robots. For example, Guinaldo [16] designed a positioning system with a single camera on the ceiling, in which each robot was distinguished by three high-brightness LEDs. However, since cameras are susceptible to lighting and dynamic environments, the required image processing, including landmark recognition and feature extraction, is not robust enough for positioning. To improve positioning precision, more cameras must be installed. In view of this, Zhang [17] proposed a vision system with 24 OptiTrack cameras to obtain each robot's position, but this significantly increases the overall cost. To reduce cost, economical positioning technologies that can replace vision systems are urgently required. Technologies such as Radio-Frequency Identification (RFID) [18], Ultra-Wideband (UWB) [19], and Bluetooth [10], however, are not suitable for multi-robot formation due to their low precision. Recently, ultrasonic systems have been shown to achieve a better trade-off between cost and precision, which has attracted the attention of many researchers and practitioners [20,21].
Some formation experiments have been implemented on existing robot platforms, such as the Koala from K-team [22] or the TurtleBot3 from TurtleBot [23], which are often relatively complex and expensive. On the contrary, some commercially available off-the-shelf solutions, like the Create2 robot from iRobot [24] and the LEGO robot used in Reference [17], often lack onboard processing and networking [25]. In more recent work, Kilobots were designed for testing collective algorithms on large groups [26], mainly in the service of swarm robotics. Few existing platforms are designed specifically for multi-robot formation control. For this reason, it is reasonable to design and manufacture our own custom robot. Moreover, effective formation and coverage control of mobile robots also requires a reliable and powerful wireless communication infrastructure for exchanging information among the robots [27]. Since high-performance wireless local area network (WLAN) technology is relatively low cost, its use for wireless control of multi-robot systems has become a practical proposition [28].
In this paper, the formation of wheeled mobile robots is selected as the research object, and we propose a general and low-cost multi-robot formation experimental platform to facilitate the experimental validation of theories and methods in multi-robot formation research. Our multi-robot formation platform contains three key parts—the indoor global-positioning system, the multi-robot communication system, and the wheeled mobile robot hardware. The real-time and precise pose of every robot is obtained from the indoor global-positioning system, where the position comes from the ultrasound-based Marvelmind Indoor Navigation System and the orientation comes from the MPU-6050. We design and build the mobile robots ourselves, based on an embedded STM32 microcontroller and stepper motors. In addition, a wireless communication network is established for exchanging information among robots based on the ESP8266 Wi-Fi communication module. The control is distributed in the sense that each robot decides by itself when to transmit its state, and the control law is computed locally. Finally, we validate the platform using a leader–follower formation control strategy and complete a series of experiments of formation forming, switching, and maintaining with external disturbances.
The rest of the paper is organized as follows. Section 2 introduces the indoor multi-robot formation platform. Section 3 explains the leader–follower formation control and setup of the experiment platform. Experimental results will be presented and discussed in Section 4. In Section 5, the main contributions of the paper are summarized, and future research directions are highlighted.

2. Indoor Multi-Robot Formation Platform

In this section, we introduce an indoor multi-robot formation platform that consists of common and low-cost components. It can be adopted and replicated by researchers interested in multi-robot formation or mobile robots.

2.1. Platform Architecture and Components

Our multi-robot formation platform consists of three components—an indoor global-positioning system, the robot hardware, and the multi-robot communication technology. Figure 1 shows the architecture and the components of the multi-robot formation platform, which are the following:
  • A personal computer to monitor the indoor global-positioning system, which is employed to collect and record the robots’ pose.
  • The modem of the Marvelmind Indoor Navigation System, connected to the PC through a Universal Serial Bus (USB) port.
  • A USB server to link the Marvelmind dashboard with the PC.
  • The Marvelmind dashboard, the configuration and monitoring software of the Marvelmind Indoor Navigation System.
  • A mobile beacon of the Marvelmind Indoor Navigation System mounted on each robot, which receives position information and forwards it to the robot's controller.
  • A Wi-Fi communication module ESP8266 installed on each robot, to transmit and receive data as a Transmission Control Protocol (TCP) client.
  • The mobile robot equipped with a microcontroller and other sensors.
  • A Wi-Fi communication module ESP8266 connected to the PC, to transmit and receive data as the TCP server.
  • USB-TTL (Transistor-Transistor Logic) is used to build a connection between communication module ESP8266 and the PC.
  • The data collection application is connected to the PC through I/O API and developed in LabVIEW software, for collecting the robots’ poses and recording them in data files.
  • The MPU-6050 device, which combines a 3-axis gyroscope and a 3-axis accelerometer on the same silicon die together with an onboard Digital Motion Processor, and serves as the source of each robot's orientation.

2.2. Indoor Global Positioning System

The indoor global-positioning system for the multi-robot platform contains the Marvelmind Indoor Navigation System and the MPU-6050. The former is an off-the-shelf indoor navigation system based on ultrasound, which provides high-precision (±2 cm) indoor coordinates for mobile robots [29] at an update rate of up to 16 Hz. It mainly contains three core components, i.e., the modem (router), mobile beacons, and stationary beacons, as shown in Figure 2.
The MPU-6050 combines a 3-axis gyroscope and a 3-axis accelerometer on the same silicon die, together with an onboard Digital Motion Processor that runs complex 6-axis MotionFusion algorithms. The device can also access external magnetometers or other sensors through an auxiliary master I²C bus, allowing it to gather a full set of sensor data without intervention from the system processor.
The modem is the central controller of the indoor global-positioning system: it not only communicates with the stationary beacons but also calculates the positions of the mobile beacons and sends them back to the mobile beacons. Mobile beacons are installed on the mobile robots; they receive position information from the modem and, at the same time, interact with the microcontroller of the mobile robot. Stationary beacons are mounted on the walls and measure the distance to the mobile beacons using ultrasonic pulses (time of flight). The position of a mobile robot is calculated from the propagation delays of an ultrasonic signal to a set of stationary beacons using the trilateration method.
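The Marvelmind system performs this trilateration internally; purely to illustrate the principle, the Python sketch below estimates a position from assumed time-of-flight measurements to three stationary beacons at known coordinates, restricted to 2-D for brevity. The beacon positions, delays, and speed of sound are illustrative assumptions, not values taken from the platform.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature (assumption)

def trilaterate_2d(beacons, delays):
    """Estimate (x, y) from ultrasonic time-of-flight to stationary beacons.

    beacons: list of (x_i, y_i) positions of the stationary beacons.
    delays:  measured one-way propagation delays in seconds.
    The range equations are linearized against the first beacon and the
    resulting over-determined system is solved with least squares.
    """
    ranges = [SPEED_OF_SOUND * t for t in delays]
    (x0, y0), r0 = beacons[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (x, y) estimate

# Example with three beacons on the walls of a 5 m x 6 m room (illustrative values).
beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 6.0)]
delays = [0.0105, 0.0093, 0.0121]
print(trilaterate_2d(beacons, delays))
```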
Each mobile robot is equipped with a mobile beacon, which communicates with the STM32 microcontroller unit (MCU) through a serial port. By decoding the received frames, the mobile robot obtains its real-time position.
The other part of the indoor global-positioning system is the MPU-6050 installed on each mobile robot, which provides the robot's orientation in the forward direction. It is an integrated 6-axis motion-tracking device containing a 3-axis gyroscope, a 3-axis accelerometer, and a Digital Motion Processor [30]. Each mobile robot reads its orientation from the MPU-6050 in real time over the I²C bus, so the pose of a robot is composed of the position from the Marvelmind Indoor Navigation System and the orientation from the MPU-6050.
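On the platform itself the STM32 performs this I²C access (and the on-chip Digital Motion Processor can handle the sensor fusion). Purely as an illustration of the register-level access, the sketch below uses the Python smbus2 package on a Linux host to wake the device and integrate the Z-axis gyroscope rate into a naive heading estimate; the register addresses and sensitivity follow the MPU-6050 datasheet, while the host, bus number, and sampling rate are assumptions.

```python
import time
from smbus2 import SMBus   # host-side analogue; on the robot the STM32 drives the bus

MPU_ADDR    = 0x68         # default I2C address of the MPU-6050 (AD0 pulled low)
PWR_MGMT_1  = 0x6B         # power-management register; writing 0 wakes the device
GYRO_ZOUT_H = 0x47         # high byte of the Z-axis gyroscope reading
GYRO_SENS   = 131.0        # LSB per deg/s at the default +/-250 deg/s full-scale range

def read_yaw_rate(bus):
    """Read the signed 16-bit Z-axis angular rate and convert it to deg/s."""
    hi, lo = bus.read_i2c_block_data(MPU_ADDR, GYRO_ZOUT_H, 2)
    raw = (hi << 8) | lo
    if raw >= 0x8000:                     # two's-complement sign correction
        raw -= 0x10000
    return raw / GYRO_SENS

with SMBus(1) as bus:
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0x00)   # wake the sensor from sleep
    yaw_deg, last = 0.0, time.time()
    for _ in range(500):                  # integrate the rate into a heading estimate
        now = time.time()
        yaw_deg += read_yaw_rate(bus) * (now - last)
        last = now
        time.sleep(0.02)                  # ~50 Hz sampling
    print(f"estimated heading: {yaw_deg:.1f} deg")
```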
The coverage area of the indoor global-positioning system is up to 1000 m². Taking a 5 m × 6 m indoor experimental site as an example, the positioning system can track and compute the robots' position information at up to 25 Hz, with a precision of ±2 cm in robot position and ±1° in robot orientation.

2.3. Multi-Robot Communication and Monitoring System

For reliable execution of coordination tasks by multiple robots, communication between robots is a key issue; in addition, in-process data should be recorded so that the experimental process can be observed. In this paper, a TCP/IP wireless communication infrastructure is selected to support communication among robots and data collection during the experiments.
As a low-power, highly integrated Wi-Fi module, the ESP8266 ships with firmware that provides a simple means of wireless communication. The module has complete, self-contained Wi-Fi networking capability and can operate either standalone or as a slave to a host MCU [31]. It is therefore well suited to support communication in our multi-robot formation platform.
Figure 3 illustrates the network architecture, showing the data-collection terminal and a number of mobile robots. The Wi-Fi module ESP8266 mounted on each robot communicates with the microcontroller via interrupts over the UART at a speed of 38,400 bps. In addition, an ESP8266 module is connected to the PC through USB-TTL as a wireless data-collection terminal.
In order to communicate with each other, all ESP8266 modules need to be connected to the same Wi-Fi network. Under the same network, communication among robots is established when one of the robots acts as a TCP server and the others join it as clients. The point-to-point connection between the server robot and each client robot avoids information loss, and the number of server robots can be extended according to task requirements. Further details about this application in leader–follower formation will be discussed in the next section; a minimal PC-side sketch of the one-server, many-clients pattern follows.
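The sketch below is a Python socket analogue of this topology (on the robots, the equivalent logic is handled through the ESP8266): one node acts as the TCP server and pushes its latest pose to any number of connected clients. The port number and the comma-separated frame format are illustrative assumptions, not values specified in the paper.

```python
import socket
import threading
import time

HOST, PORT = "0.0.0.0", 8080        # illustrative port; the paper does not specify one

def serve_pose(get_pose):
    """One robot (or the PC) acts as the TCP server; every other robot joins as a client."""
    clients = []

    def broadcaster():
        while True:
            x, y, theta = get_pose()                            # latest pose of the server node
            frame = f"L,{x:.3f},{y:.3f},{theta:.3f}\n".encode() # assumed line-based frame
            for c in clients[:]:
                try:
                    c.sendall(frame)
                except OSError:
                    clients.remove(c)                           # drop a disconnected client
            time.sleep(0.05)                                    # ~20 Hz, within the pose update rate

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    threading.Thread(target=broadcaster, daemon=True).start()
    while True:
        conn, _ = srv.accept()                                  # point-to-point link per client
        clients.append(conn)

# Example: broadcast a fixed pose (a real robot would pass its live pose here).
serve_pose(lambda: (3.26, 1.25, 1.571))
```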
The ESP8266 supports the TCP/IP protocol and fully complies with the 802.11 b/g/n WLAN MAC protocol. As a master, it can achieve full-duplex data transmission with up to four slaves; the clock frequency is up to 80 MHz in master mode and up to 20 MHz in slave mode.
As for the computer screen shown in Figure 4, a real-time monitoring system has been developed in LabVIEW to record the pose information and monitor each robot's trajectory. The ESP8266 module connects to the PC through USB-TTL, so the PC can receive the information collected by the module.
The real-time monitoring system reads the information produced during the multi-robot task using LabVIEW's serial-port read function, and the robot trajectories are plotted through an image-drawing VI. To facilitate data analysis and processing, all strings in the buffer are sorted according to the header of each data frame and recorded in a table.
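On the platform this sorting and recording is done inside LabVIEW; purely as an analogue of the same idea, the Python sketch below groups buffered frames by their header into per-robot tables and writes them to CSV files. The frame format matches the assumed "header,x,y,theta" lines used in the sketch above and is not taken from the paper.

```python
import csv
from collections import defaultdict

def sort_frames(buffer_lines):
    """Group buffered frames by their header (e.g. 'L', 'F1', 'F2') into per-robot records."""
    tables = defaultdict(list)
    for line in buffer_lines:
        header, *fields = line.strip().split(",")
        tables[header].append([float(f) for f in fields])
    return tables

def write_tables(tables, prefix="pose_log"):
    """Record each robot's poses in its own CSV file for later analysis."""
    for header, rows in tables.items():
        with open(f"{prefix}_{header}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["x", "y", "theta"])
            writer.writerows(rows)

# Example buffer with interleaved leader and follower frames (illustrative values).
buffer_lines = ["L,3.260,1.250,1.571", "F1,2.840,0.610,1.571", "L,3.260,1.300,1.571"]
write_tables(sort_frames(buffer_lines))
```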
In the multi-robot formation platform, communication is distributed, and the distributed architecture is designed for scalability, although we only use a few robots to form a team in the experiments. Because of this distributed architecture and its communication methods, there is no hard limit on the number of follower robots.

2.4. Wheeled Mobile Robot Hardware

In our multi-robot formation platform, for more practicality, lower costs, and better compatibility, we chose to design and manufacture our custom mobile robots. Mobile robot hardware including eight components is shown in Figure 4, and each module installed on the mobile robot is listed in Table 1.
The mobile robot, with a differential drive, serves as the general platform for multi-robot formation control experiments. It is equipped with an STM32 microcontroller unit, a mobile beacon, a six-degree-of-freedom gyroscope (MPU-6050), two stepper motors, and power equipment.
The robot is controlled by a low-cost, high-performance STM32 microcontroller unit. The CPU core is a 32-bit ARM Cortex-M4 with FPU running at 168 MHz, with 1 Mbyte of Flash memory and 192 KB of SRAM [32]. The STM32 is selected as the core control cell of the mobile robot and executes the formation control algorithm. In our multi-robot formation platform, the control is distributed so that each robot has its own microcontroller. With this distributed method, a fault in a single robot will not paralyze or confuse the operation of the whole multi-robot system, as a central-processor fault would. In addition, it is easy to add or remove robots during operation.
The STM32 provides rich communication interfaces. Through its serial ports it interacts with the indoor global-positioning system, and it also drives and controls the other peripherals, such as the communication module and the motion-tracking sensor.
The platform adopts a distributed framework, and each robot is equipped with a microprocessor. Thus, the control is distributed in the sense that each robot makes, by itself, the decision of when to transmit its state, and the control law is computed locally.
The stepper motors mount directly to the bottom circuit board and are controlled with PWM generated by the microcontroller. The 41 mm wheels give the robots a maximum speed of 0.3 m/s, while the minimum controllable speed is around 0.03 m/s.
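For concreteness, the sketch below shows how a commanded pair (ν, ω) maps to the two stepper pulse rates on such a differential drive. The 41 mm wheel size comes from the paragraph above (assumed here to be the diameter), while the track width and steps-per-revolution are illustrative assumptions; on the platform the actual pulse generation is done in the STM32 firmware via PWM.

```python
import math

WHEEL_DIAMETER = 0.041   # m, the 41 mm wheels mentioned above (assumed to be the diameter)
TRACK_WIDTH    = 0.15    # m, distance between the two wheels (illustrative assumption)
STEPS_PER_REV  = 200     # full steps per motor revolution (typical value, assumption)

def wheel_speeds(v, w):
    """Differential-drive inverse kinematics: body (v, w) -> (left, right) wheel speeds in m/s."""
    v_right = v + w * TRACK_WIDTH / 2.0
    v_left  = v - w * TRACK_WIDTH / 2.0
    return v_left, v_right

def step_rate(wheel_speed):
    """Convert a wheel surface speed into a stepper pulse rate (steps/s) for the PWM timer."""
    revs_per_s = wheel_speed / (math.pi * WHEEL_DIAMETER)
    return revs_per_s * STEPS_PER_REV

# Example: drive straight at the leader's nominal speed of 0.05 m/s.
left, right = wheel_speeds(0.05, 0.0)
print(step_rate(left), step_rate(right))
```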

3. Configure the Platform for the Leader–Follower Formation

To verify the positioning accuracy, the communication stability, and the microcontroller's performance in executing the control algorithm in a real experimental environment, the multi-robot formation platform is applied to leader–follower formation control.

3.1. Leader–Follower Formation Control

As the typical representative of multi-robot formation, leader–follower formation control aims to maintain a desired separation $l$ and a desired relative bearing $\varphi$ between the leader and the follower, which is known as the $l$–$\varphi$ control strategy described in Reference [33].
Consider a group of $n$ non-holonomic wheeled mobile robots. We denote by $l_{iL} \in \mathbb{R}$ the actual distance between the follower robot $F_i$ and the leader robot $L$, and by $l_{iL}^{d} \in \mathbb{R}$ the desired distance; $\varphi_{iL} \in [-\pi, \pi]$ is the actual bearing, i.e., the angle between the leader's X-axis and the line connecting the follower to the leader, and $\varphi_{iL}^{d} \in [-\pi, \pi]$ is the desired bearing. The follower robot's desired pose $p_i^d = (x_i^d, y_i^d, \theta_i^d)^T$ with respect to the leader can be obtained using
$$
\begin{cases}
x_i^d = x_L - l_{iL}^d \cos(\theta_L + \varphi_{iL}^d) \\
y_i^d = y_L - l_{iL}^d \sin(\theta_L + \varphi_{iL}^d) \\
\theta_i^d = \theta_L
\end{cases}
\tag{1}
$$
where $(x_L, y_L, \theta_L)^T$ is the pose of the leader robot.
Comparing the follower robot's desired pose $p_i^d = (x_i^d, y_i^d, \theta_i^d)^T$ with its current pose $p_i = (x_i, y_i, \theta_i)^T$, the tracking error can be described as
$$
(x_e, y_e, \theta_e)^T = (x_i - x_i^d,\; y_i - y_i^d,\; \theta_i - \theta_i^d)^T,
\tag{2}
$$
and the controller for the follower robot is selected from Reference [34]:
$$
\begin{bmatrix} \nu_i \\ \omega_i \end{bmatrix} =
\begin{bmatrix} \nu_i^d \cos\theta_e + k_1 x_e \\ \omega_i^d + k_2 \nu_i^d y_e + k_3 \nu_i^d \sin\theta_e \end{bmatrix},
\tag{3}
$$
where $(\nu_i, \omega_i)^T$ are the control inputs, and $(\nu_i^d, \omega_i^d)^T$ is the desired velocity of follower $F_i$.
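As a reference, the following Python sketch is a direct transcription of Equations (1)–(3), using the controller gains reported in Section 4 ($k_1 = 1$, $k_2 = 0.6$, $k_3 = 0.5$) and the velocity limits used in the experiments. The leader and follower poses are taken from the triangle-switching experiment, while the desired bearing of $2\pi/3$ is an illustrative value, not one stated in the paper; on the platform itself the same computation runs on the STM32 (presumably in C).

```python
import math

def desired_pose(leader_pose, l_d, phi_d):
    """Equation (1): follower's desired pose with respect to the leader."""
    x_L, y_L, theta_L = leader_pose
    x_d = x_L - l_d * math.cos(theta_L + phi_d)
    y_d = y_L - l_d * math.sin(theta_L + phi_d)
    return x_d, y_d, theta_L

def tracking_error(follower_pose, desired):
    """Equation (2): error between the current and desired follower pose."""
    return tuple(c - d for c, d in zip(follower_pose, desired))

def follower_control(error, v_d, w_d, k1=1.0, k2=0.6, k3=0.5):
    """Equation (3): follower linear and angular velocity commands."""
    x_e, y_e, theta_e = error
    v = v_d * math.cos(theta_e) + k1 * x_e
    w = w_d + k2 * v_d * y_e + k3 * v_d * math.sin(theta_e)
    return v, w

# Example: follower F1 keeping l = 0.30 m at an assumed bearing of 2*pi/3 from the leader.
leader = (3.26, 1.25, math.pi / 2)          # (x_L, y_L, theta_L) from the experiment setup
follower = (2.84, 0.61, math.pi / 2)        # current follower pose from the experiment setup
p_d = desired_pose(leader, 0.30, 2 * math.pi / 3)
e = tracking_error(follower, p_d)
v_cmd, w_cmd = follower_control(e, v_d=0.05, w_d=0.0)

# Saturate to the velocity limits used in the experiments.
V_MAX, W_MAX = 0.3, math.pi / 6
v_cmd = max(-V_MAX, min(V_MAX, v_cmd))
w_cmd = max(-W_MAX, min(W_MAX, w_cmd))
print(v_cmd, w_cmd)
```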

3.2. Experiment Setup for Leader–Follower Formation Control

Each robot obtains its pose in real time via the indoor global-positioning system: the actual position comes from the Marvelmind Indoor Navigation System and the actual orientation from the MPU-6050.
To form, maintain, and switch a given geometric formation with the controller, each follower robot needs the leader's actual pose in addition to its own, so the leader's pose must be transmitted to each follower robot. The communication among robots is established as shown in Figure 5, and the detailed implementation process is as follows (a minimal socket-level sketch of the resulting topology is given after the steps):
Step 1. The leader robot and follower robots connect to the same Wi-Fi network.
Step 2. TCP-Server1 and TCP-Server2 are created by the leader robot and the host computer, respectively.
Step 3. Each follower robot, as a TCP client, connects to both TCP-Server1 and TCP-Server2. The pose of the leader robot is transmitted over the Wi-Fi network from TCP-Server1 to each TCP client. Meanwhile, the pose information, including the follower's pose and the leader's pose, is transmitted from each TCP client to TCP-Server2 on the host computer.
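The sketch below illustrates Step 3 from a single follower's point of view as a host-side Python analogue: the follower connects to both servers, parses leader-pose frames from TCP-Server1, and forwards them to TCP-Server2 for logging. The IP addresses, port numbers, and line-based frame format are assumptions for illustration; on the real robots this logic runs on the STM32 through the ESP8266 module.

```python
import socket

LEADER_ADDR = ("192.168.4.2", 8080)   # TCP-Server1 on the leader robot (illustrative address)
HOST_ADDR   = ("192.168.4.3", 9090)   # TCP-Server2 on the host computer (illustrative address)

def parse_pose(frame: bytes):
    """Decode an assumed line frame such as b'L,3.260,1.250,1.571' into (x, y, theta)."""
    _, x, y, theta = frame.decode().split(",")
    return float(x), float(y), float(theta)

# Step 1 (joining the shared Wi-Fi network) and Step 2 (creating the two servers)
# happen on the router, the leader, and the host; the follower only performs Step 3.
with socket.create_connection(LEADER_ADDR) as leader_link, \
     socket.create_connection(HOST_ADDR) as host_link:
    buf = b""
    while True:
        data = leader_link.recv(256)
        if not data:                                  # leader closed the connection
            break
        buf += data
        while b"\n" in buf:
            frame, buf = buf.split(b"\n", 1)
            x_L, y_L, theta_L = parse_pose(frame)
            # ...here the follower would evaluate Equations (1)-(3) and drive its motors...
            host_link.sendall(frame + b"\n")          # forward the leader pose for logging
```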
During the controller calculation, each follower robot works out its desired pose according to Equation (1), calculates the errors according to Equation (2), and then, through Equation (3), obtains the control inputs $\nu_i$ and $\omega_i$, i.e., the follower's linear and angular velocities; finally, the robots move into the desired formation. The controller calculation process of each follower robot is shown in Figure 6.

4. Experiments of Leader–Follower Formation Control

In order to validate the effectiveness and robustness of our multi-robot formation platform, we perform two types of leader–follower formation experiments—trajectory-tracking experiments under a deterministic environment and a formation-maintaining experiment with external disturbances. In our experiments, we select the follower robots' controller from Reference [34], described in detail as Equation (3), with parameters $k_1 = 1$, $k_2 = 0.6$, and $k_3 = 0.5$. Based on this controller and its parameters, Figure 7 shows the real scenario of the leader–follower formation experiments. The process of each of the following experiments is illustrated by snapshots of the video, in which the actual trajectory and time are displayed in the Marvelmind dashboard. The distance error of the follower robot, $\sqrt{x_e^2 + y_e^2}$, and its angular error, $\theta_e$, are also recorded by the real-time monitoring system.

4.1. The Experiment of Trajectory Tracking of Leader–Follower Formation under Deterministic Environment

In this section, a triangle formation-switching experiment is first addressed to show the effectiveness of the proposed platform. Then, the circle formation of two robots and the diamond formation with four robots are also performed, respectively, to verify the scalability of the proposed platform.

4.1.1. Experiment of Triangle Formation Switching

We use the platform to perform a formation switch from a triangle formation ($l = 0.60$ m) to another triangle formation ($l = 0.30$ m) while tracking a straight line. The initial pose of the leader robot L is $(3.26\ \mathrm{m}, 1.25\ \mathrm{m}, \pi/2)$, its linear velocity is $\nu = 0.05\ \mathrm{m/s}$, and its angular velocity is $\omega = 0\ \mathrm{rad/s}$. The initial poses of the follower robots F1 and F2 are $(2.84\ \mathrm{m}, 0.61\ \mathrm{m}, \pi/2)$ and $(3.88\ \mathrm{m}, 0.56\ \mathrm{m}, \pi/2)$, respectively, and the initial velocities of all three robots are 0. The velocity constraints of all the robots are set as $\nu_{\max} = 0.3\ \mathrm{m/s}$ and $\omega_{\max} = \pi/6\ \mathrm{rad/s}$.
The experimental process in Figure 8 shows that the leader robot led the two follower robots F1 and F2 in forming and maintaining the desired triangle formation. The formation began to switch into another triangle formation at T = 43 s, and after 25 s the new desired formation was formed autonomously. From Figure 9a,b, we can see that at T = 43 s the distance and angular errors suddenly increased because of the formation switching, but during T = 43–50 s, under the autonomous control of the controller, the follower robots calculated new control inputs to form the new desired formation, and the distance error decreased rapidly. At last, the three robots formed the new triangle formation autonomously. In summary, the leader–follower formation based on our platform demonstrates that the platform can be used to apply and verify theories and methods in multi-robot formation. The experiment video of the triangle formation switching can be found at Supplementary Materials https://youtu.be/7WtsZoNVp5A.

4.1.2. Experiment on Scalability Formation Control

To validate adaptability to complex formations and the scalability of the multi-robot formation platform, we also consider two robots forming a typical circle formation and four robots forming a diamond formation.
Figure 10a shows the leader robot performing a circle path and the follower robot tracking the leader to form a typical line formation. Tracking the circle trajectory is challenging for the follower robot because of the continuously changing orientation. The follower robot tracked the circle trajectory smoothly, which demonstrates the adaptability of our multi-robot platform to complex formations. Figure 10b shows the leader robot guiding the three follower robots F1, F2, and F3 in forming and maintaining the desired diamond formation. The experiments were performed with different numbers of robots, which validates the scalability of the multi-robot formation platform. The experiment video of the circle trajectory of the leader robot with one follower robot can be found at Supplementary Materials https://youtu.be/4caScl5PF_U. The experiment video of the line trajectory of the leader robot with three follower robots realizing a diamond formation can be found at Supplementary Materials https://youtu.be/NYSVTKw46vU.

4.2. The Experiment of Triangle Formation Maintaining with External Disturbance

Considering the complex external environment in practical applications, introducing external disturbances is an effective way to test the anti-interference capability of the platform and thereby verify its robustness. The external disturbances include a lateral disturbance and a longitudinal disturbance. In this experiment, the platform controlled three robots maintaining a triangle formation under external disturbances while tracking a straight line. The initial pose of the leader robot L was $(3.24\ \mathrm{m}, 2.29\ \mathrm{m}, \pi/2)$, its linear velocity was $\nu = 0.05\ \mathrm{m/s}$, and its angular velocity was $\omega = 0\ \mathrm{rad/s}$. The initial poses of the follower robots F1 and F2 were $(2.61\ \mathrm{m}, 1.96\ \mathrm{m}, \pi/2)$ and $(3.83\ \mathrm{m}, 2.10\ \mathrm{m}, \pi/2)$, respectively, and the initial velocities of all three robots were 0. The velocity constraints of all the robots were set as $\nu_{\max} = 0.3\ \mathrm{m/s}$ and $\omega_{\max} = \pi/6\ \mathrm{rad/s}$. The experiment video of the line trajectory of the triangle formation maintained with external disturbance can be found at Supplementary Materials https://youtu.be/HGQjjoYARJc.
Figure 11 illustrates the robustness of the leader–follower formation process, collected from the video at different times. The leader robot L led the two follower robots F1 and F2 and maintained the desired triangular formation with l = 0.30 m; at T = 32 s and T = 50 s, follower robot F1 received two external disturbances—the first was longitudinal and the second was lateral. As shown in Figure 12, when F1 moved under the external disturbance, its formation tracking errors changed rapidly compared with those of F2, but finally converged to nearly zero, and the desired triangle-like formation was achieved. These results demonstrate that our multi-robot platform is fault-tolerant: even though one robot faces external disturbances during the formation process, it recovers the desired formation quickly, and the other robots are not affected. Therefore, our platform shows good robustness under external disturbances.
In both types of leader–follower formation experiments, our platform effectively implemented the application of existing theories and methods and showed good scalability and robustness, since real-time and accurate pose information is provided by our indoor global-positioning system. Therefore, our platform is expected to serve as a readily available test platform to evaluate and verify the feasibility and correctness of theoretical methods in multi-robot formation.

5. Conclusions and Future Works

We propose a new multi-robot formation experimental platform based on an indoor global-positioning system. This general and low-cost multi-robot formation platform provides the precise, real-time pose of every robot based on the Marvelmind Indoor Navigation System and the six-degree-of-freedom gyroscope MPU-6050: the former provides centimeter-level position precision and the latter provides the orientation of each robot. In addition, the robots exchange information through the ESP8266 Wi-Fi communication module. We then performed and analyzed a set of leader–follower formation experiments on our platform, including formation forming and switching under a deterministic environment and a formation-maintaining experiment with external disturbances. The results illustrate that our experimental platform can be applied to formation control successfully and can verify the correctness and effectiveness of theoretical methods for robot motion control in multi-robot formation.
Our general and low-cost multi-robot formation platform, based on the indoor positioning system and its three foundation technologies—actual pose acquisition, indoor global positioning, and multi-robot communication—can be used in the fields of multi-robot coordination, formation control, and search and rescue missions.
In the future, based on the indoor global-positioning system, other sensors will be added to the robot to provide pose information jointly by means of multi-sensor information fusion, so as to improve the positioning accuracy of the mobile robots. At the same time, the communication system will be improved to cope with individual failures in complex and unknown environments. In order to reduce the impact of the failure of a single robot on the formation, the existing peer-to-peer communication mode will be upgraded to a broadcast communication mode.

Supplementary Materials

The experiment video of the triangle formation switching can be found in https://youtu.be/7WtsZoNVp5A. The experiment video of circle trajectory of the leader robot with one follower robot can be found in https://youtu.be/4caScl5PF_U. The experiment video of the line trajectory of leader robot with three follower robots to realize a diamond formation can be found in https://youtu.be/NYSVTKw46vU. The experiment video of line trajectory of a triangle formation maintaining with external disturbance can be found in https://youtu.be/HGQjjoYARJc.

Author Contributions

H.Y. designed the experimental framework and provided experimental and financial support; X.W. designed the leader–follower formation control; X.B. executed the experiment; S.Z. analyzed the data; all the authors wrote the paper.

Funding

This research was funded by the National Natural Science Foundation, China (No.51775435), and the Programme of Introducing Talents of Discipline to Universities (B13044).


Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, G.; Li, D.; Gan, W.; Jia, P. Study on formation control of multi-robot systems. In Proceedings of the Third International Conference on Intelligent System Design and Engineering Applications, Hong Kong, China, 16–18 January 2013; pp. 1335–1339. [Google Scholar]
  2. Chen, H.; Yang, H.A.; Wang, X.; Zhang, T. Formation control for car-like mobile robots using front-wheel driving and steering. Int. J. Adv. Robot. Syst. 2018, 15, 172988141877822. [Google Scholar] [CrossRef]
  3. Alonso-Mora, J.; Baker, S.; Rus, D. Multi-robot formation control and object transport in dynamic environments via constrained optimization. Int. J. Robot. Res. 2017, 36, 1000–1021. [Google Scholar] [CrossRef]
  4. Eoh, G.; Jeon, J.D.; Choi, J.S.; Lee, B.H. Multi-robot cooperative formation for overweight object transportation. In Proceedings of the IEEE/SICE International Symposium on System Integration, Kyoto, Japan, 20–22 December 2011; pp. 726–731. [Google Scholar]
  5. Yasuda, Y.; Kubota, N.; Toda, Y. Adaptive formation behaviors of multi-robot for cooperative exploration. In Proceedings of the IEEE International Conference on Fuzzy Systems, Brisbane, Australia, 10–15 June 2012; pp. 1–6. [Google Scholar]
  6. Mariottini, G.L.; Morbidi, F.; Prattichizzo, D.; Valk, N.V.; Michael, N.; Pappas, G.; Daniilidis, K. Vision-based localization for leader–follower formation control. IEEE Trans. Robot. 2009, 25, 1431–1438. [Google Scholar] [CrossRef]
  7. Yan, Z.; Xu, D.; Chen, T.; Zhang, W.; Liu, Y. Leader-follower formation control of uuvs with model uncertainties, current disturbances, and unstable communication. Sensors 2018, 18, 662. [Google Scholar] [CrossRef] [PubMed]
  8. Qian, D.; Tong, S.; Li, C. Leader-following formation control of multiple robots with uncertainties through sliding mode and nonlinear disturbance observer. ETRI J. 2016, 38, 1008–1018. [Google Scholar] [CrossRef]
  9. Poonawala, H.A.; Satici, A.C.; Spong, M.W. Leader-follower formation control of nonholonomic wheeled mobile robots using only position measurements. In Proceedings of the Control Conference, Istanbul, Turkey, 23–26 June 2013; pp. 1–6. [Google Scholar]
  10. Mainetti, L.; Patrono, L.; Sergi, I. A survey on indoor positioning systems. In Proceedings of the International Conference on Software, Telecommunications and Computer Networks, Split, Croatia, 17–19 September 2015; pp. 111–120. [Google Scholar]
  11. Consolini, L.; Morbidi, F.; Prattichizzo, D.; Tosques, M. Leader–follower formation control of nonholonomic mobile robots with input constraints. Automatica 2008, 44, 1343–1349. [Google Scholar] [CrossRef]
  12. Rosales, A.; Scaglia, G.; Mut, V.; Sciascio, F.D. Formation control and trajectory tracking of mobile robotic systems—A linear algebra approach. Robotica 2011, 29, 335–349. [Google Scholar] [CrossRef]
  13. Huang, J.; Farritor, S.M.; Qadi, A.; Goddard, S. Localization and follow-the-leader control of a heterogeneous group of mobile robots. IEEE/ASME Trans. Mechatron. 2006, 11, 205–215. [Google Scholar] [CrossRef]
  14. Xu, D.; Han, L.; Tan, M.; Li, Y.F. Ceiling-based visual positioning for an indoor mobile robot with monocular vision. IEEE Trans. Ind. Electron. 2009, 56, 1617–1628. [Google Scholar]
  15. Nascimento, R.C.A.; Silva, B.M.F. Real-time localization of mobile robots in indoor environments using a ceiling camera structure. In Proceedings of the Robotics Symposium and Competition, Arequipa, Peru, 21–27 October 2014; pp. 61–66. [Google Scholar]
  16. Guinaldo, M.; Fábregas, E.; Farias, G.; Dormido-Canto, S.; Chaos, D.; Sánchez, J.; Dormido, S. A mobile robots experimental environment with event-based wireless communication. Sensors 2013, 13, 9396–9413. [Google Scholar]
  17. Kamel, M.A.; Ghamry, K.A.; Zhang, Y. Real-time fault-tolerant cooperative control of multiple uavs-ugvs in the presence of actuator faults. In Proceedings of the International Conference on Unmanned Aircraft Systems, Arlington, VA, USA, 7–10 June 2016; pp. 1267–1272. [Google Scholar]
  18. Saab, S.S.; Nakad, Z.S. A standalone RFID indoor positioning system using passive tags. IEEE Trans. Ind. Electron. 2011, 58, 1961–1970. [Google Scholar] [CrossRef]
  19. Alarifi, A.; Alsalman, A.M.; Alsaleh, M.; Alnafessah, A.; Alhadhrami, S.; Mai, A.A.; Alkhalifa, H.S. Ultra wideband indoor positioning technologies: Analysis and recent advances. Sensors 2016, 16, 707. [Google Scholar] [CrossRef] [PubMed]
  20. Yazici, A.; Yayan, U.; Yücel, H. An ultrasonic based indoor positioning system. In Proceedings of the International Symposium on Innovations in Intelligent Systems and Applications, Istanbul, Turkey, 15–18 June 2011; pp. 585–589. [Google Scholar]
  21. Díaz, E.; Pérez, M.C.; Gualda, D.; Villadangos, J.M.; Ureña, J.; García, J.J. Ultrasonic indoor positioning for smart environments: A mobile application. In Proceedings of the Experiment@international Conference, Faro, Portugal, 6–8 June 2017; pp. 280–285. [Google Scholar]
  22. Koala Robot. Available online: https://www.k-team.com/koala-2-5-new (accessed on 23 June 2018).
  23. Turtlebot3 e-manual. Available online: https://www.turtlebot.com (accessed on 23 June 2018).
  24. Irobot Create2 Programmable Robot. Available online: https://www.irobot.com (accessed on 23 June 2018).
  25. Michael, N.; Fink, J.; Kumar, V. Experimental testbed for large multirobot teams. Robot. Autom. Mag. IEEE 2008, 15, 53–61. [Google Scholar] [CrossRef]
  26. Rubenstein, M.; Cornejo, A.; Nagpal, R. Programmable self-assembly in a thousand-robot swarm. Science 2014, 345, 795–799. [Google Scholar] [CrossRef] [PubMed]
  27. Bhuiya, A.; Mukherjee, A.; Barai, R.K. Development of wi-fi communication module for atmega microcontroller based mobile robot for cooperative autonomous navigation. In Proceedings of the IEEE Calcutta Conference, Kolkata, India, 2–3 December 2017; pp. 168–172. [Google Scholar]
  28. Winfield, A.F.T.; Holland, O.E. The application of wireless local area network technology to the control of mobile robots. Microprocess. Microsyst. 2000, 23, 597–607. [Google Scholar] [CrossRef]
  29. Marvelmind Navigation System Manual(v2018_01_11). Available online: https://marvelmind.com/pics/marvelmind_navigation_system_manual.pdf (accessed on 23 June 2018).
  30. Mpu-6050 Datasheet. Available online: https://www.invensense.com/products/motion-tracking/6-axis/mpu-6050/MPU-6000-Datasheet1.pdf (accessed on 23 June 2018).
  31. Esp8266 Ex Datasheet. Available online: http://espressif.com/sites/default/files/documentation/0a-esp8266ex_datasheet_en.pdf (accessed on 23 June 2018).
  32. Stm32f407xx Datasheet. Available online: https://www.st.com/resource/en/datasheet/stm32f405rg.pdf (accessed on 23 June 2018).
  33. Desai, J.P.; Ostrowski, J.; Kumar, V. Controlling formations of multiple mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, Leuven, Belgium, 16–20 May 1998; pp. 2864–2869. [Google Scholar]
  34. Kanayama, Y.; Kimura, Y.; Miyazaki, F.; Noguchi, T. A stable tracking control method for an autonomous mobile robot. In Proceedings of the IEEE International Conference on Robotics and Automation, Cincinnati, OH, USA, 13–18 May 1990; Volume 381, pp. 384–389. [Google Scholar]
Figure 1. The architecture and the components of the multi-robot formation platform.
Figure 2. The multi-robot indoor global-positioning system based on the trilateral measurement.
Figure 3. Multi-robot wireless network architecture.
Figure 4. Wheeled mobile robot hardware.
Figure 5. Wireless network architecture for leader–follower formation.
Figure 6. The process of controller calculation of each follower robot.
Figure 7. The real scenario of leader–follower formation experiments.
Figure 8. Snapshots of the video about line trajectory of a triangle formation switching.
Figure 9. The formation tracking errors of the follower robots F1 and F2 in formation-switching experiment; (a) is the distance error of follower robots; (b) is the angular error of follower robots.
Figure 10. (a) Snapshots of the video about circle trajectory of the leader robot with one follower robot to realize a line formation. (b) Snapshots of the video about line trajectory of leader robot with three follower robots to realize a diamond formation.
Figure 11. Snapshots of the video about line trajectory of a triangle formation maintaining with external disturbance.
Figure 12. The formation tracking errors of the follower robots F1 and F2 in the experiment of triangle formation maintaining with external disturbance; (a) is the distance error of follower robots; (b) is the angular error of follower robots.
Table 1. Mobile robot specification.
ID | Module          | Item                | Description
-  | Dimension       | -                   | L × W × H: 21 cm × 18 cm × 8 cm
1  | Microcontroller | STM32F407ZEG6       | Microcontroller unit
2  | Stepper motor   | MG42S1              | Differential drive motors
3  | Communication   | ESP8266             | Wireless communication unit
4  | Mobile beacon   | Marvelmind HW v4.9  | Provides the position of the robot
5  | Orientation     | MPU-6050            | Provides the orientation of the robot
6  | Buck module     | DC-DC buck module   | Voltage converter, 12 V to 5 V
7  | OLED display    | 0.96-inch OLED      | Display
8  | Switch          | -                   | Power switch
