Abstract
In this paper, a navigation method is proposed for cooperative load-carrying mobile robots. A behavior mode manager is used in the navigation controller to switch between two behavior modes, wall-following mode (WFM) and goal-oriented mode (GOM), according to environmental conditions. Additionally, an interval type-2 neural fuzzy controller based on the dynamic group artificial bee colony (DGABC) algorithm is proposed. Reinforcement learning is used to develop the WFM adaptively: first, a single robot is trained to learn the WFM; then, this control method is implemented on cooperative load-carrying mobile robots. In WFM learning, the proposed DGABC performs better than the original artificial bee colony algorithm and other improved algorithms. Furthermore, the results of cooperative load-carrying navigation tests demonstrate that the proposed cooperative load-carrying and navigation methods enable the robots to carry the task item to the goal and complete the navigation mission efficiently.
1. Introduction
Robotics has progressed rapidly in recent years, and many researchers [1,2,3,4] have applied robots to various fields. Paul et al. [1] proposed a biomimetic robotic fish for informal science learning. Christopher et al. [2] presented a new robotic harvester that can autonomously harvest sweet pepper in protected cropping environments. Michail et al. [3] designed an autonomous robotic vehicle for monitoring fields that are difficult to access or dangerous for humans. Maurizio et al. [4] developed effective emotion-based assistive behaviors for a socially assistive robot intended for natural human–robot interaction scenarios with explicit social and assistive task functionalities.
Several real-life tasks are easy for humans but difficult for robots. Robots do not possess brains to think as humans. Therefore, if people can make robots think like humans and judge things more flexibly, then robots can be used in various ways. Recent studies have evaluated robotic applications such as obstacle avoidance, path navigation, load carrying, and path planning.
In a real environment, noise and interference cause sensor uncertainty. Classical methods cannot successfully solve complicated engineering problems because of the various uncertainties typical of those problems. Several theories can be considered as mathematical tools for dealing with uncertainties. In probability theory, a stochastically stable phenomenon should possess a limit of the sample mean over a long series of trials; to test the existence of this limit, a large number of trials must be performed. Interval mathematics arose as a method of accounting for calculation errors by constructing an interval estimate for the exact solution of a problem. This is useful in many cases, but the methods of interval mathematics are not sufficiently adaptable for problems with different uncertainties. A most appropriate theory for dealing with uncertainties is the fuzzy set theory developed by Zadeh [5]. A fuzzy set is defined by employing a fuzzy membership function; the fuzzy set in [5] is also called a type-1 fuzzy set, and fuzzy set operations are based on arithmetic operations with membership functions. Rough set theory [6] is similar to fuzzy set theory; however, uncertainty and imprecision in that approach are expressed by a boundary region of a set, not by partial membership as in fuzzy set theory. A rough set is defined by topological operations called approximations, so this definition also requires advanced mathematical concepts. Molodtsov [7] did not impose a single way to set the membership function and instead used an adequate parametrization in soft set theory. In this study, because fuzzy set theory is progressing rapidly, we focus on fuzzy sets for solving control problems.
Some researchers have applied the type-1 fuzzy system to robot control. Algabri et al. [8] proposed a fuzzy logic control system for controlling mobile robots and used an evolutionary algorithm to adjust the parameters of the fuzzy controller. Lee et al. [9] adopted a fuzzy neural network based on reinforcement learning to implement robot navigation control. Fathinezhad et al. [10] used supervised fuzzy Sarsa learning (SFSL) to guide robot navigation in environments containing obstacles; SFSL combines supervised learning and reinforcement learning to reduce learning and failure times. Infrared sensors were used in the study conducted by Anish [11]; these sensors transmitted the distance between the robot and the obstacle to an adaptive network-based fuzzy inference system (ANFIS) controller. According to the aforementioned results, although the type-1 fuzzy system can solve navigation control and collision avoidance problems, the control performance of the robots is not good enough.
Recently, some researchers have extended the type-1 fuzzy set to the type-2 fuzzy set in fuzzy systems for robotic control [12,13,14,15], data classification [16], function approximation [17,18], and automatic control [19,20]. The type-2 fuzzy set incorporates uncertainty about the membership function into fuzzy set theory. Thus, type-2 fuzzy sets generalize type-1 fuzzy sets, and more uncertainty can be handled. In [16,17,18,19,20], experimental results have shown that the type-2 fuzzy set performs better than the type-1 fuzzy set in dealing with uncertainties. The output set corresponding to each rule of a type-2 fuzzy system is a type-2 fuzzy set. Type-reduction combines all these output sets and then performs a centroid calculation on the resulting type-2 fuzzy set. The center-of-sets defuzzification method [21] replaces each rule consequent set by a singleton situated at its centroid and then finds the centroid of the type-1 set comprised of these singletons. However, general type-2 fuzzy systems are computationally expensive because type-reduction is very intensive. The computation simplifies considerably when the secondary membership functions are set to interval sets, in which case the secondary memberships are either zero or one. Therefore, the interval type-2 fuzzy set proposed in [22,23] was adopted to reduce the computational complexity in this study.
To resolve control problems, the backpropagation (BP) algorithm is commonly used to adjust the parameters of neural fuzzy controllers. However, the fast convergence of BP might cause the system to fall into local optimal solutions instead of global optimal solutions. Hence, evolutionary algorithms, such as particle swarm optimization (PSO) [24], ant colony optimization [25], differential evolution (DE) [26], and the artificial bee colony (ABC) [27], have been proposed to find global optimal solutions. To speed up convergence without falling into local optimal solutions and to improve accuracy, a group strategy is introduced into the ABC algorithm. The proposed dynamic group artificial bee colony (DGABC) balances the employed bees' searching ability and changes the movement equations of the employed bees and onlooker bees.
This paper proposes a cooperative load-carrying method for mobile robots to carry a task item and navigate in an unknown environment. A behavior mode manager is used in the navigation control to switch between two behavior modes, wall-following mode (WFM) and goal-oriented mode (GOM), according to environmental conditions. Additionally, an interval type-2 neural fuzzy controller (IT2NFC) based on DGABC is proposed. Reinforcement learning is used to develop the WFM adaptively. The proposed DGABC-based IT2NFC has several advantages: (1) only a small number of parameters are required; (2) interval type-2 fuzzy sets are used to reduce sensor-sensing noise and disturbance; and (3) the effectiveness and robustness of the proposed controller are improved. Based on these advantages, the behavior mode manager makes the moving path of the mobile robot smoother and automatically switches between the two modes to complete the task.
2. Structure of IT2NFC
In this section, the IT2NFC is introduced. The five layers in this structure are as follows: input layer, fuzzification layer, firing layer, output processing layer, and output layer. In Figure 1, $X_1$–$X_n$ represent the inputs of the IT2NFC, which are the distances between nearby objects and the robot's infrared sensors; $V_R$ and $V_L$ indicate the outputs of the IT2NFC, which are the speeds of the right and left wheels, respectively.
Figure 1.
Structure of IT2NFC.
To improve the performance of the fuzzy system, the order reduction method established by Castillo [19] was used in this paper. Reducing the order of the system decreases the computational complexity. The rules in the IT2NFC take the following form:
where $x_i$ is the $i$th input variable, $\tilde{A}_i^j$ is an IT2FS, $V_R$ and $V_L$ are the output variables, $w^j$ is the output of the Takagi–Sugeno–Kang linear function, $i = 1, \ldots, n$ indexes the inputs, and $j$ indexes the rules.
- 1. Input Layer
In this layer, the input data is sent to the next layer directly without any computation.
- 2. Fuzzification Layer
In this layer, also called the membership function layer, the input data is fuzzified. Each node is defined as an IT2FS. As Figure 2 displays, each membership function is a Gaussian membership function with an uncertain mean and a fixed standard deviation σ.
where $m_{ij}$ and $\sigma_{ij}$ are the mean and standard deviation of the $j$th Gaussian membership function for the $i$th input. Horizontal shifting of the mean produces the upper bound and lower bound of the membership function; the uncertain mean can be represented as the interval $[m_{ij}^1, m_{ij}^2]$.
Figure 2.
Interval type-2 fuzzy sets.
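As an illustration of Figure 2, the upper and lower membership bounds of an interval type-2 Gaussian set with uncertain mean $[m_1, m_2]$ and fixed deviation σ can be sketched as follows (the function names are illustrative, not from the paper):

```python
import math

def gaussian(x, m, sigma):
    """Standard Gaussian membership value."""
    return math.exp(-((x - m) ** 2) / (2 * sigma ** 2))

def it2_gaussian(x, m1, m2, sigma):
    """Upper and lower membership of an interval type-2 Gaussian
    fuzzy set with uncertain mean in [m1, m2] and fixed sigma."""
    # Upper bound: flat at 1 over the uncertainty interval of the mean.
    if x < m1:
        upper = gaussian(x, m1, sigma)
    elif x > m2:
        upper = gaussian(x, m2, sigma)
    else:
        upper = 1.0
    # Lower bound: the farther of the two shifted Gaussians.
    if x <= (m1 + m2) / 2:
        lower = gaussian(x, m2, sigma)
    else:
        lower = gaussian(x, m1, sigma)
    return upper, lower
```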
- 3. Firing Layer
In this layer, each node denotes one rule. The product of the membership values over all the fuzzy sets in a rule is taken to calculate the firing strength of that rule,
where $\overline{f}^j$ and $\underline{f}^j$ denote the upper and lower bounds of the firing strength.
- 4. Output Processing Layer
In this layer, the center-of-sets order reduction operation is used to produce an IT1FS, and then the center-of-gravity method is used to defuzzify the set and obtain the output value.
The output of the fuzzy set is defined as the following:
where $\overline{y}$ and $\underline{y}$ indicate the upper and lower bound values of the output, $w^j$ represents the linked weight of each node, $M$ represents the number of rules, and $n$ is the number of inputs.
- 5. Output Layer
In this layer, the previous output bounds are averaged to obtain the crisp output $y$ of the neural network.
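As a sketch of Layers 4 and 5 under the simplified center-of-sets reduction described above, the consequent weights can be averaged with the upper and lower firing strengths separately, and the two bounds then averaged into the crisp output. The function name and example numbers below are illustrative, not from the paper:

```python
def it2nfc_defuzzify(upper_f, lower_f, weights):
    """Layers 4-5: weighted averages of the rule consequents using the
    upper and lower firing strengths, then the mean of the two bounds."""
    y_upper = sum(f * w for f, w in zip(upper_f, weights)) / sum(upper_f)
    y_lower = sum(f * w for f, w in zip(lower_f, weights)) / sum(lower_f)
    return 0.5 * (y_upper + y_lower)

# Two rules with firing-strength bounds and consequent values.
y = it2nfc_defuzzify([0.8, 0.4], [0.6, 0.2], [10.0, 20.0])  # ≈ 12.92
```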
3. Structure of DGABC
3.1. Structure of ABC
The ABC, proposed by Karaboga in 2005, is a global optimization algorithm based on swarm intelligence. Its advantages include few control parameters, parallel computing, global searching, simple computation, easy implementation, and robustness. The ABC consists of employed bees, onlooker bees, and scout bees. In the traditional algorithm, the employed bees search for new food sources based on previous searching experience and share the resource messages with the onlooker bees. The onlooker bees wait for messages from the employed bees and then search for new food sources according to the employed bees' information. The scout bees monitor all the food sources; if a food source's gains do not improve, a scout bee randomly searches for a new food source in the search area. The traditional ABC proceeds in the following steps [23].
Step 1: Initialization
Initialize the total number of food sources (FN), the maximum number of epochs (MEN), and the limit of non-improving trials for each food source (limit). Place each employed bee randomly in the solution space and set its location as a food source. The fitness of a food source is called the food source gains.
where $x_{ij}$ is the initial value of the $i$th employed bee in the $j$th dimension, $x_j^{\min}$ and $x_j^{\max}$ are the minimum and maximum values of the $j$th dimension of the search space, and $rand$ is a random number from 0 to 1.
Step 2
Calculate the profits of the food source after each employed bee moves to a new food source.
where $v_{ij}$ is the new location of the $i$th employed bee in the $j$th dimension, $x_{ij}$ is its previous location in the $j$th dimension, $x_{kj}$ is the location of a randomly chosen employed bee $k$ in the $j$th dimension, and $\phi_{ij}$ is a random number from −1 to 1.
Step 3
Use roulette wheel selection (Equation (14)) to determine which food source should be searched by each onlooker bee. The onlooker bee searches for the nearby food source and then calculates the profits of that food source.
where $p_i$ is the probability of the $i$th food source being selected, $fit_i$ is the profit of the $i$th food source, and $FN$ is the number of food sources.
Step 4
If the gains of a food source do not improve after limit trials, then that food source is abandoned. According to Equation (12), the scout bee then finds a new food source to replace the abandoned one.
Step 5
Record the highest food source gains and set it as the best solution in the algorithm.
Step 6
Run the algorithm recursively until the current epoch equals MEN. If YES, stop the algorithm and output the best solution. If NO, return to Step 2.
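Steps 1–6 can be condensed into a short, self-contained sketch. The loop below is an illustrative minimization on a generic objective (smaller is better), not the authors' implementation; the gains term $1/(1+f)$ is one common choice for converting cost to fitness:

```python
import random

def abc_minimize(f, dim, lo, hi, fn=10, men=200, limit=20):
    """Minimal artificial bee colony following Steps 1-6: employed-bee
    search, roulette-wheel onlooker selection, and scout replacement."""
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(fn)]
    trials = [0] * fn
    best = min(foods, key=f)[:]

    def try_move(i):
        nonlocal best
        j = random.randrange(dim)
        k = random.choice([s for s in range(fn) if s != i])
        cand = foods[i][:]
        cand[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        if f(cand) < f(foods[i]):          # greedy selection
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if f(foods[i]) < f(best):
            best = foods[i][:]

    for _ in range(men):
        for i in range(fn):                # employed-bee phase
            try_move(i)
        gains = [1.0 / (1.0 + f(s)) for s in foods]   # food-source gains
        total = sum(gains)
        for _ in range(fn):                # onlooker phase (roulette wheel)
            r, acc = random.uniform(0, total), 0.0
            for i, g in enumerate(gains):
                acc += g
                if acc >= r:
                    try_move(i)
                    break
        for i in range(fn):                # scout phase
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return best
```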
3.2. Structure of DGABC
The proposed DGABC is used to adjust the parameters of the IT2NFC. DGABC balances the search ability of the traditional ABC and avoids premature convergence. It uses a dynamic grouping strategy and reforms the movement equations of the employed and onlooker bees to improve the convergence speed and overall performance. The evolution flowchart is displayed in Figure 3.
Figure 3.
Flowchart of DGABC.
In DGABC, each bee represents an IT2NFC and its set of parameters. Figure 4 depicts the coding of all the parameters, including the uncertain means $m$, standard deviations $\sigma$, and link weights $w$.
Figure 4.
Coding of a bee.
In the traditional ABC, the numbers of employed and onlooker bees are equal. However, an improper setting of these numbers can unbalance the global and local searching abilities and lower the performance.
The group strategy is used to bring parity to the searching competence. Bees are separated into groups dynamically according to their performance, and the best bee is selected as the leader. To maintain consistency with the traditional ABC, in which employed bees lead onlooker bees, the leader of each group serves as an employed bee and the remaining bees in the group serve as onlooker bees. During training, the proportion of the two types of bees is adjusted automatically.
Similarity evaluation is used to categorize individuals, distinguishing the following three cases (depicted in Figure 5).
Figure 5.
Similarity evaluation of individuals.
Case 1:
Two individuals are in similar locations; however, their fitness values are very different.
Case 2:
Two individuals have similar fitness values; however, the distance between them is large.
Case 3:
Two individuals have similar fitness values and are located close to each other; based on the similarity evaluation, they are similar individuals.
Some researchers [20] have used thresholds to group the individuals. However, a fixed threshold results in an uneven number of groups between early and late epochs. Therefore, dynamic adjustment of the threshold is proposed in this paper.
The DGABC steps are as follows (depicted in Figure 6):
Figure 6.
Dynamic grouping method.
- 1. Dynamic Grouping
Step 1: Sorting
Sort the bees from best to worst and initialize the group label of every bee to 0 (ungrouped).
Step 2: Calculate the threshold value of similarity.
Find the fittest bee among the ungrouped bees, select it as the new group leader, name the group g, and calculate the distance and fitness thresholds.
where $T_d^g$ and $T_f^g$ are the distance and fitness thresholds, respectively, in group $g$; $x_g$ is the leader location in group $g$; $fit_g$ is the fitness value of the leader in group $g$; $N$ is the total number of bees; $D$ is the dimension; and $N_0$ is the total number of bees in group 0.
Step 3: Evaluate the similarity
Calculate the distance and the fitness difference between each ungrouped bee and the leader.
If both the distance and the fitness difference are below their respective thresholds, then the bee is similar to the group leader; place it into the same group and change its group number to g.
Step 4: Checking
Check whether any ungrouped bees remain. If YES, select the fittest of the ungrouped bees as the new group leader and repeat Steps 2 to 3. If NO, all the bees are grouped and grouping is complete.
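The four grouping steps can be sketched as follows. Since the paper's threshold equations are not reproduced here, fixed thresholds are passed in as arguments; in DGABC they are recomputed dynamically for each group:

```python
def dynamic_grouping(bees, fitnesses, dist_thresh, fit_thresh):
    """Group bees by similarity to successive leaders (Steps 1-4).
    Each group's leader becomes an employed bee; the other members
    become onlookers. Larger fitness is assumed to be better."""
    # Step 1: sort indices from best to worst; label 0 means ungrouped.
    order = sorted(range(len(bees)), key=lambda i: fitnesses[i], reverse=True)
    group_of = {}                    # bee index -> group id
    g = 0
    for leader in order:
        if leader in group_of:
            continue
        g += 1
        group_of[leader] = g         # fittest ungrouped bee leads group g
        # Step 3: attach ungrouped bees similar to this leader.
        for i in order:
            if i in group_of:
                continue
            dist = sum((a - b) ** 2
                       for a, b in zip(bees[i], bees[leader])) ** 0.5
            if (dist <= dist_thresh and
                    abs(fitnesses[i] - fitnesses[leader]) <= fit_thresh):
                group_of[i] = g
    return group_of
```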
- 2. Employed Bees Phase
In the traditional ABC, employed bees select their food source searching direction randomly. In the proposed method, the searching direction is improved: the global best solution is used in the movement equation so that the employed bees move toward the better solution, while the random search mechanism is maintained.
where $x_{best,j}$ is the best solution in the group and $\phi_{ij}$ and $\psi_{ij}$ are random numbers between [−1, 1].
- 3. Onlooker Bees Phase
The grouping strategy changes the onlooker bees' search target to their group leader (an employed bee); the assigned leader must guide its teammates (onlooker bees) in searching for a food source.
where $\phi_{ij}$ is a random number between [−1, 1] and $x_{leader,j}$ is the location of the leader in the group.
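The two modified movement rules can be sketched as below. The exact equations appear as images in the original, so this assumes a common gbest-guided form for the employed bees and a leader-guided form for the onlookers (names are illustrative):

```python
import random

def employed_move(x, x_rand, x_best):
    """Employed-bee update: the traditional random ABC term plus a
    pull toward the global best solution (an assumed gbest-guided form)."""
    phi = random.uniform(-1, 1)
    psi = random.uniform(-1, 1)
    return [xi + phi * (xi - xr) + psi * (xb - xi)
            for xi, xr, xb in zip(x, x_rand, x_best)]

def onlooker_move(x, x_leader):
    """Onlooker update: search around the group's leader (employed bee)."""
    phi = random.uniform(-1, 1)
    return [xi + phi * (xl - xi) for xi, xl in zip(x, x_leader)]
```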
4. Control of the Mobile Robot
In this section, the mobile robot is introduced first, and then the wall-following control method is explained.
4.1. Description of the Mobile Robot
E-puck mobile robots, small-sized mobile robots developed at EPFL, were used in this research. As displayed in Figure 7a, the robot is controlled by a dsPIC processor. Several standard components, such as proximity infrared sensors, a sound sensor, an accelerometer, and a camera, are included in the robot. The mobile robots connect to each other through Bluetooth and also use Bluetooth to communicate with computers. E-puck mobile robots are now widely used; applications include mobile mechanical engineering, real-time programming, interpolation systems, signal transmission, image transmission, combination of sounds and images, human–computer interaction, and inter-robot communication.
Figure 7.
(a) E-puck mobile robot; (b) E-puck 360° infrared sensors.
Figure 7b shows that the e-puck is a two-wheeled mobile robot with eight surrounding infrared sensors, S0 to S7, covering 360°. The diameter of the e-puck is 7 cm, its height is 5 cm, and its maximum speed is 15 cm/s. The maximum sensed distance between the mobile robot and nearby objects is 6 cm, and the minimum is 1 cm.
4.2. Wall-Following Control of the Mobile Robot
In this study, reinforcement learning (RL) was used to implement the wall-following behavior. By defining suitable fitness functions for the training environment, the robot can evaluate its performance even when predefined rules and pre-specified training data are not provided. The wall-following control learning flowchart is displayed in Figure 8.
Figure 8.
Flowchart of wall-following control learning.
The IT2NFC controller has four input signals (S0, S1, S2, and S3) and two output signals (VL and VR). The input signal Si denotes the distance measured by the ith infrared sensor, and the output signals represent the speeds of the left and right wheels.
Figure 9 exhibits the training environment. The goal is to enable the mobile robot to adapt to any type of unknown environment, including straight lines, sharp corners, obtuse corners, smooth corners, and U-shaped corners.
Figure 9.
Training environment.
To prevent the robot from moving far away from the walls or colliding with obstacles, three termination actions were designed in this study.
Action 1:
The total moving distance exceeds the preset maximum distance (the distance of one circuit of the training environment), which means that the robot has successfully detoured the training environment once.
Action 2:
As Figure 10a indicates, the distance detected by one of the sensors is less than 1 cm, which means that the robot has hit the wall.
Figure 10.
(a) Robot hits the wall; (b) Robot deviates from the wall.
Action 3:
As Figure 10b illustrates, the distance detected by one of the side sensors is more than 6 cm, which means that the robot is deviating from the wall.
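The three termination actions can be checked in one small routine; this is a sketch, and the sensor indexing is illustrative:

```python
def check_termination(total_dist, max_dist, sensor_dists, side_idx):
    """Return the termination reason, or None to keep running.
    Distances are in cm; side_idx lists the wall-side sensor indices."""
    if total_dist > max_dist:
        return "success"      # Action 1: full detour of the environment
    if min(sensor_dists) < 1.0:
        return "collision"    # Action 2: robot hit the wall
    if any(sensor_dists[i] > 6.0 for i in side_idx):
        return "lost_wall"    # Action 3: robot deviated from the wall
    return None
```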
To evaluate performance during learning, a fitness function is proposed in this paper. The fitness function has three sub-fitness terms (SF1, SF2, and SF3): the total distance traveled by the robot, the average distance between the robot and the wall, and the parallel degree between the robot and the wall.
- 1. SF1
If the robot's moving distance is close to the preset distance, then the robot is close to completing a detour of the training environment.
If the robot's moving distance exceeds the preset distance, then the robot has traversed the training environment successfully and $SF_1 = 1$.
- 2. SF2
To ensure that the robot and the wall stay at a constant distance, the distance between the robot's side sensor and the wall is calculated at every time step, as indicated in Figure 11a.
where $d_{ref}$ is the expected distance between the robot and the wall and $t$ is the time step. In this study, $d_{ref}$ was set to 4 cm.
Figure 11.
(a) Average distance; (b) Parallel degree.
If the robot stays at a constant distance from the wall, then $SF_2 = 1$.
- 3. SF3
To ensure that the robot remains parallel to the wall, the law of cosines is applied to the distances from the robot's front-right sensor and side sensor to calculate the length $L$, as shown in Figure 11b.
where $r$ is the radius of the robot and the two distances are the readings of the front-right sensor and the side sensor. If the side sensor is perpendicular to the wall, then $L$ is equal to the side-sensor reading and $SF_3 = 1$.
Therefore, the fitness function can be defined as the following:
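The three sub-fitness terms can be sketched as follows. The paper's combining equation is not reproduced here, so a simple product of the three terms is used purely as an illustration:

```python
def fitness(move_dist, preset_dist, side_dists, parallel_errs):
    """Sketch of the three sub-fitness terms (SF1-SF3); the exact
    weighting in the paper's equation is an image, so a product is
    used here only as an illustration."""
    # SF1: fraction of the preset detour distance covered, capped at 1.
    sf1 = min(move_dist / preset_dist, 1.0)
    # SF2: closeness of the side-sensor distance to the 4 cm target.
    avg_err = sum(abs(d - 4.0) for d in side_dists) / len(side_dists)
    sf2 = 1.0 / (1.0 + avg_err)
    # SF3: parallel degree, from the per-step law-of-cosines errors
    # |L - side reading| (smaller error means more parallel).
    avg_par = sum(parallel_errs) / len(parallel_errs)
    sf3 = 1.0 / (1.0 + avg_par)
    return sf1 * sf2 * sf3
```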
6. Experimental Results
Simulation is very important in the robot design process because a developed algorithm can be verified quickly; for robot learners, simulation tools also greatly reduce the cost of learning. A robot simulation platform integrates a physics engine that calculates motion, rotation, and collision according to the physical properties of objects. The simulation tools commonly used in robotics are Gazebo, V-REP, and Webots. Table 1 shows the features of various robot simulator software packages. As shown in Table 1, V-REP and Webots support Linux, macOS, and Windows development environments, whereas Gazebo only supports Linux. The V-REP simulation software has less functional support than Gazebo and Webots. Therefore, in this study, Webots is used as the robot simulation platform to verify the proposed IT2NFC controller based on the DGABC learning algorithm and to perform the WFM control and cooperative load-carrying of mobile robots.
Table 1.
Features of various robot simulator software.
6.1. Results of WFM Control
The performance and stability of DGABC were compared with those of other algorithms. The best, worst, and average results, the standard deviation (STD), the number of successful runs out of ten, and the average execution times are listed in Table 2. A successful run is one in which the mobile robot traverses the training environment from the start point to the goal during the simulation. The learning curves of the algorithms are shown in Figure 22; Table 2 and Figure 23 are obtained under the same conditions. As Table 2 shows, the proposed method exhibited a higher learning effect than improved ABC, ABC, and DE. In addition, the average execution time of the proposed method is shorter than those of the other methods.
Table 2.
Performance comparison of various algorithms.
Figure 22.
Learning curves of wall-following control in various algorithms.
Figure 23.
Moving path of various algorithms in the training environment.
In addition, we compared the proposed IT2NFC (type-2 fuzzy sets) based on DGABC with an IT1NFC (type-1 fuzzy sets) based on DGABC. Table 3 shows the performance comparison of the IT1NFC and IT2NFC. The experimental results show that the performance of the IT2NFC is better than that of the IT1NFC.
Table 3.
Performance comparison of IT1NFC and IT2NFC.
6.2. Results of Cooperative Load-Carrying WFM Control
The trained control method was then implemented for cooperative load-carrying mobile robots. Table 4 compares the performance of the proposed WFM with that of other algorithms in the simulation. The better performance and higher stability of DGABC for cooperative load-carrying WFM control are demonstrated in Figure 24, and Figure 25 shows the moving path of the two mobile robots in the training environment. PSO did not display any advantage in cooperative load-carrying WFM control; the proposed DGABC method therefore exhibited the best performance in this control task.
Table 4.
Performance of cooperative load-carrying WFM control.
Figure 24.
Learning curves of various algorithms.
Figure 25.
Moving path of the proposed method.
6.3. Results of Cooperative Load-Carrying Navigation Control
Implementation of the proposed method on cooperative load-carrying mobile robots in unknown environments is demonstrated in Figure 26. In this experiment, the average distance between the two mobile robots (RAD) and the average distance between the wall and the follower robot (FAD) were the two critical criteria for evaluating the performance of the proposed method. As shown in Table 5, if the RAD value is too large, the two mobile robots do not stay within the safe distance, which might cause the object to fall. Conversely, if the FAD value is too large or too small, the WFM does not perform satisfactorily when the mobile robots are turning around a corner; this might cause the object to shift or fall during the mission and, thus, the mission to fail.
Figure 26.
Moving path of cooperative load-carrying navigation control in (a) testing environment 1 and (b) testing environment 2.
Table 5.
Performance of cooperative load-carrying navigation control.
6.4. Discussion
In this study, we successfully implemented the cooperative load-carrying task with two mobile robots by using the proposed IT2NFC based on DGABC. Next, the proposed method will be extended to three or more mobile robots. For example, Figure 27 shows the cooperative load-carrying task of three mobile robots. First, the wall-following control method of the cooperative load-carrying task is applied to the leader, follower-1, and follower-2. Second, the auxiliary controller is attached to the followers as described in Section 5.1. Finally, the behavior mode manager in the navigation control is used to switch between the GOM and the WFM as in Section 5.2. Following these steps, the proposed method can be extended to the cooperative load-carrying task of three or more mobile robots.
Figure 27.
Cooperative load-carrying of three robots.
7. Conclusions
The proposed IT2NFC based on DGABC improves the search capacity and shortens the convergence time while avoiding local optimal solutions. Using the proposed method, the mobile robots can develop the controller adaptively because DGABC requires neither a predefined rule set nor pre-specified training data. According to the environmental situation, the mobile robots use the behavior mode manager to switch between WFM and GOM. Additionally, the pre-rotate mechanism ensures that the follower robot follows the wall properly and allows the leader robot to complete the cooperative load-carrying task in unknown environments.
The cooperative load-carrying task is complex for the robots: several factors, such as the robot speed, the object payload, and the working environment, must be taken into consideration. Therefore, we focused on implementing the cooperative load-carrying task with two mobile robots in this study. In future work, we will consider implementing the cooperative load-carrying task with two or more real mobile robots in unknown environments.
Author Contributions
Formal analysis, C.-H.L.; Methodology, C.-H.L., S.-H.W. and C.-J.L.; Software, C.-J.L.; Supervision, S.-H.W. and C.-J.L.; Writing—original draft, C.-H.L. and C.-J.L.
Funding
This research was funded by the Ministry of Science and Technology of the Republic of China, Taiwan grant number MOST 106-2221-E-167-016.
Acknowledgments
The authors would like to thank the Ministry of Science and Technology of the Republic of China, Taiwan for financially supporting this research under Contract No. MOST 106-2221-E-167-016.
Conflicts of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
- Paul, P.; Raymond, L.; Maurizio, P. Robotic Fish: Design and Characterization of an Interactive iDevice-Controlled Robotic Fish for Informal Science Education. IEEE Robot. Autom. Mag. 2015, 22, 86–96. [Google Scholar]
- Christopher, L.; Andrew, E.; Christopher, M.; Adam, W.T.; Tristan, P. Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robot. Autom. Lett. 2017, 2, 872–879. [Google Scholar]
- Michail, P.; Konstantinos, K.; Christos, D.; Georgios, S. Design of an Autonomous Robotic Vehicle for Area Mapping and Remote Monitoring. Int. J. Comput. Appl. 2017, 12, 36–41. [Google Scholar]
- Maurizio, F.; Junichi, T.; Goldie, N. Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior. IEEE Trans. Cybern. 2015, 46, 2911–2923. [Google Scholar]
- Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
- Pawlak, Z. Rough set theory and its Applications. J. Telecommun. Inf. Technol. 2002, 3, 7–10. [Google Scholar]
- Molodtsov, D. Soft set theory-First results. Comput. Math. Appl. 1999, 37, 19–31. [Google Scholar] [CrossRef]
- Algabri, M.; Mathkour, H.; Ramdane, H.; Alsulaiman, M. Comparative study of soft computing techniques for mobile robot navigation in an unknown environment. Comput. Hum. Behav. 2015, 50, 42–56. [Google Scholar] [CrossRef]
- Lee, C.L.; Lin, C.J.; Lin, H.Y. Smart Robot Wall-Following Control Using a Sonar Behavior-based Fuzzy Controller in Unknown Environments. Smart Sci. 2017, 5, 160–166. [Google Scholar] [CrossRef]
- Fathinezhad, F.; Derhami, V.; Rezaeian, M. Supervised fuzzy reinforcement learning for robot navigation. Appl. Soft Comput. 2016, 40, 33–41. [Google Scholar] [CrossRef]
- Anish, P.; Parhi, D.R. Multiple mobile robots navigation and obstacle avoidance using minimum rule based ANFIS network controller in the cluttered environment. Int. J. Adv. Robot. Autom. 2016, 1, 1–11. [Google Scholar]
- Mendel, J.M. Advances in type-2 fuzzy sets and systems. Inf. Sci. 2007, 177, 84–110. [Google Scholar]
- Mendel, J.M. Type-2 fuzzy sets and systems: An overview. IEEE Comput. Intell. Mag. 2007, 2, 20–29. [Google Scholar] [CrossRef]
- Zaheer, S.A.; Choi, S.H.; Jung, C.Y.; Kim, J.H. A modular implementation scheme for nonsingleton type-2 fuzzy logic systems with input uncertainties. IEEE/ASME Trans. Mech. 2015, 20, 3182–3193. [Google Scholar] [CrossRef]
- Kim, C.J.; Chwa, D. Obstacle avoidance method for wheeled mobile robots using interval type-2 fuzzy neural network. IEEE Trans. Fuzzy Syst. 2015, 23, 677–687. [Google Scholar] [CrossRef]
- Nguyen, T.; Khosravi, A.; Creighton, D.; Nahavandi, S. Medical data classification using interval type-2 fuzzy logic system and wavelets. Appl. Soft Comput. 2015, 30, 812–822. [Google Scholar] [CrossRef]
- Zarandi, M.H.F.; Rezaee, B.; Turksen, I.B.; Neshat, E. A type-2 fuzzy rule based expert system model for stock price analysis. Expert Syst. Appl. 2009, 36, 139–154. [Google Scholar] [CrossRef]
- Melin, P.; Castillo, O. A review on type-2 fuzzy logic applications in clustering, classification and pattern recognition. App. Soft Comput. 2014, 21, 568–577. [Google Scholar] [CrossRef]
- Tai, K.; El-Sayed, A.-R.; Biglarbegian, M.; Gonzalez, C.I.; Castillo, O.; Mahmud, S. Review of recent type-2 fuzzy controller applications. Algorithms 2016, 9, 39. [Google Scholar] [CrossRef]
- Bay, O.F.; Yatak, M.O. Type-2 fuzzy logic control of a photovoltaic sourced two stages converter. J. Intell. Fuzzy Syst. 2018, 35, 1103–1117. [Google Scholar] [CrossRef]
- Karnik, N.N.; Mendel, J.M. Type-2 fuzzy logic systems: Type-reduction. In Proceedings of the 1998 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, USA, 14 October 1998; pp. 2046–2051. [Google Scholar]
- Liang, Q.; Mendel, J.M. Interval type-2 fuzzy logic systems: Theory and design. IEEE Trans. Fuzzy Syst. 2000, 8, 535–550. [Google Scholar] [CrossRef]
- Castillo, O.; Melin, P. A review on the design and optimization of interval type-2 fuzzy controllers. Appl. Soft Comput. 2012, 12, 1267–1278. [Google Scholar] [CrossRef]
- Kaveh, A. Advances in Metaheuristic Algorithms for Optimal Design of Structures; Springer: Cham, Switzerland, 2017; pp. 11–43. [Google Scholar]
- Dorigo, M.; Caro, G.D. Ant Colony Optimization: A New Meta-Heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999; pp. 1470–1477. [Google Scholar]
- Zheng, L.M.; Zhang, S.X.; Tang, K.S.; Zheng, S.Y. Differential evolution powered by collective information. Inf. Sci. 2017, 399, 13–29. [Google Scholar] [CrossRef]
- Barbosa, H. Ant Colony Optimization Techniques and Applications; Intech Open: Rijeka, Croatia, 2013; ISBN 978-953-51-1001-9. [Google Scholar]
- Tsai, P.W.; Pan, J.S.; Liao, B.Y.; Chu, S.C. Enhanced artificial bee colony optimization. Int. J. Innov. Comput. Inf. Control 2009, 5, 5081–5092. [Google Scholar]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).